| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string date) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
79,221,822
| 3,296,786
|
Pytest - Test passes when run individually but not all together using pytest -s
|
<p>This method</p>
<pre><code>def test_clear(self):
    url = self._api("progress?_=123&__=456")
    import time
    time.sleep(60)
    print(url)
    response_out = requests.get(url)
    response_out.raise_for_status()
</code></pre>
<p>fails when run with <strong>pytest -s</strong> (the whole suite) with the error HTTPError: 500 Server Error: Internal Server Error for url: <a href="http://127.0.0.1:54583/gather/progress?_=123&__=456" rel="nofollow noreferrer">http://127.0.0.1:54583/gather/progress?_=123&__=456</a>, but passes when run on its own using <code>pytest -s filename.py</code></p>
|
<python><pytest>
|
2024-11-25 06:09:09
| 0
| 1,156
|
aΨVaN
|
79,221,807
| 874,380
|
How to share code between Jupyter notebooks in different subfolders?
|
<p>I need to create a GitHub repository where I can organize Jupyter notebooks by topic as tutorials. Some notebooks will require loading large(r) data files, which I don't want to be part of the repository itself.</p>
<p>My idea is to provide all data files in a different online resource, and download the required files in the notebooks using some custom auxiliary method in some <code>utils.py</code> script.</p>
<p>Since I want to use different subfolders for organizing the notebooks, <code>utils.py</code> would need to reside in a parent folder. However, loading <code>.py</code> files from a parent folder within a notebook seems to require manually tweaking the module search path in the notebook.</p>
<p>I guess an alternative would be to put <code>utils.py</code> (and other shared code) into its own package that needs to be installed before using the notebook. That kind of feels like overkill?</p>
<p>Is there some other, better alternative to handle this?</p>
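<p>For reference, the path tweak mentioned above can be kept to two lines per notebook. A minimal sketch, assuming a hypothetical layout like <code>repo/utils.py</code> with the notebook running in <code>repo/topic_a/</code>:</p>

```python
import os
import sys

# Assumed layout: repo/utils.py, with this notebook in repo/topic_a/.
# Prepend the repo root so `import utils` resolves from any subfolder.
parent = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
if parent not in sys.path:
    sys.path.insert(0, parent)

print(parent in sys.path)  # → True
```

<p>Alternatively, an editable install (<code>pip install -e .</code>) of a small local package makes <code>import utils</code> work everywhere without any path tweaks.</p>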
|
<python><jupyter-notebook><jupyter>
|
2024-11-25 06:05:51
| 2
| 3,423
|
Christian
|
79,221,718
| 16,525,263
|
How to check if a specified file path exists using PySpark
|
<p>I have 2 dictionaries, as below:</p>
<pre class="lang-py prettyprint-override"><code>data_path = {
    "person": "/data/raw/person/*",
    "location": "/data/raw/location/*",
    "person_int": "/data/test/person_int/",
    "location_int": "/data/test/location_int/*"
}

interim_tables = {
    "person_int": ['ID', 'NAME', 'DATE'],
    "location_int": ['ID', 'LOCATION', 'DATE'],
    "person": ['ID', 'NAME', 'DATE']
}
</code></pre>
<p>I need to check if the interim table exists in the path.
If it exists, I need to load the incremental(delta) data.
If it does not exist, I need to load the historical data from the 'person' and 'location' tables.</p>
<p>This is my code below:</p>
<pre class="lang-py prettyprint-override"><code>import os

check_results = {}
for table, cols in interim_tables.items():
    if table in data_path:
        path = data_path[table]
        if os.path.exists(path):
            check_results[table] = "path exists"
            # << logic to load incremental(delta) data >>
        else:
            check_results[table] = "path not exists"
            # << logic to load historical data >>

for table, status in check_results.items():
    print(f"{table}---{status}")
</code></pre>
<p>At present, only the person and location paths are available; person_int and location_int do not exist. But I'm getting the output as:</p>
<pre class="lang-none prettyprint-override"><code>person_int---path not exists
location_int---path not exists
person---path not exists
</code></pre>
<p>What is the correct approach for this?</p>
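<p>A likely culprit, as an assumption based on the paths shown: <code>os.path.exists</code> treats the trailing <code>*</code> literally, so any path containing a wildcard reports as missing even when files match it, which would explain why even the existing <code>person</code> path prints "not exists". For local paths, <code>glob.glob</code> expands the pattern instead; for HDFS or cloud storage paths you would need the Hadoop FileSystem API via Spark's JVM gateway rather than either of these. A small sketch of the difference:</p>

```python
import glob
import os
import tempfile

# Create one file, then test a wildcard pattern both ways.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "part-000.csv"), "w").close()
    pattern = os.path.join(d, "*")
    literal_exists = os.path.exists(pattern)    # "*" is taken literally
    glob_matches = len(glob.glob(pattern)) > 0  # pattern is expanded

print(literal_exists, glob_matches)  # → False True
```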
|
<python>
|
2024-11-25 05:19:56
| 1
| 434
|
user175025
|
79,221,652
| 24,758,287
|
How to find the last non-null value before the current row in Polars?
|
<p>I'd like to perform the following:</p>
<p>Input:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

df = pl.DataFrame({
    "a": [1, 15, None, 20, None]
})
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>expected = pl.from_repr("""
┌──────┬──────┐
│ a    │ b    │
│ ---  │ ---  │
│ i64  │ i64  │
╞══════╪══════╡
│ 1    │ 0    │
│ 15   │ 14   │  # b = 15 - 1
│ null │ null │
│ 20   │ 5    │  # b = 20 - 15
│ null │ null │
└──────┴──────┘
""")
</code></pre>
<p>So, what it does:</p>
<ol>
<li>If the value of "A" is null, then the value of B (the output column) is also null</li>
<li>If "A" has some value, retrieve the last non-null value in "A", and then subtract that previous non-null value from the current value in "A"</li>
</ol>
<p>I've looked at the following question:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/75093945/how-to-select-the-last-non-null-value-from-one-column-and-also-the-value-from-an">How to select the last non-null value from one column and also the value from another column on the same row in Polars?</a></li>
</ul>
<p>But unfortunately, this does not answer the original problem, since the question performs an aggregation of an entire column, and then takes the last value of that column.</p>
<p>What I'd like to do is not to aggregate an entire column, but simply to subtract the previous non-null value from the current value.</p>
<p>I have also tried to use rolling:</p>
<pre><code>df = df.with_row_index().rolling(
    index_column='index',
    period='???i'
).agg(pl.col("A").last())
<p>But, of course, that does not work because the positions of the null values cannot be determined in advance (i.e. they are not periodic, so I don't know how many rows before the current entry contain a non-null value in "A").</p>
<p>Does anyone know how to do this?</p>
<p>Thanks!</p>
|
<python><dataframe><null><python-polars><rolling-computation>
|
2024-11-25 04:28:19
| 1
| 301
|
user24758287
|
79,221,555
| 10,054,520
|
For each row in a dataframe, how to extract elements from an array?
|
<p>I'm working with a third party dataset that includes location data. I'm trying to extract the Longitude and Latitude coordinates from the location column. As stated in their doc:</p>
<blockquote>
<p>The <code>location</code> column is of the <code>point</code> datatype.</p>
</blockquote>
<p>When I ingest the data and view the location column it appears as below</p>
<pre><code>>>> results_df["location"].iloc[1]
{'coordinates': array([-97.707172829666, 30.385328900508]), 'type': 'Point'}
</code></pre>
<p>Not all rows have a value, some are None.</p>
<p>I know I can do this to get the values for a specific row:</p>
<pre><code>>>> results_df["location"].iloc[1]['coordinates'][0]
-97.707172829666
>>> results_df["location"].iloc[1]['coordinates'][1]
30.385328900508
</code></pre>
<p>But I'd like to create two new columns, longitude and latitude, for the entire dataframe. For each row, how can I extract the coordinates to populate the new column?</p>
<p>Example:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>latitude</th>
<th>longitude</th>
</tr>
</thead>
<tbody>
<tr>
<td>30.385328900508</td>
<td>-97.707172829666</td>
</tr>
<tr>
<td>39.395338984595</td>
<td>-74.587966507573</td>
</tr>
<tr>
<td>None</td>
<td>None</td>
</tr>
<tr>
<td>42.396358150943</td>
<td>-104.664962196304</td>
</tr>
<tr>
<td>None</td>
<td>None</td>
</tr>
</tbody>
</table></div>
<p>EDIT: I am working through the Pyspark Pandas API.</p>
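<p>In plain pandas the per-row extraction can be done with <code>apply</code>; a sketch with a toy frame mirroring the column shown above (the pyspark.pandas API mirrors much of pandas, though <code>apply</code> behavior can differ there, so treat this as a starting point rather than a verified pyspark.pandas solution):</p>

```python
import numpy as np
import pandas as pd

# Toy frame mirroring the described "location" column (some rows are None).
df = pd.DataFrame({"location": [
    {"coordinates": np.array([-97.707172829666, 30.385328900508]), "type": "Point"},
    None,
]})

def coord(loc, i):
    # Return the i-th coordinate of the point dict, or None for missing rows.
    return loc["coordinates"][i] if isinstance(loc, dict) else None

df["longitude"] = df["location"].apply(coord, i=0)
df["latitude"] = df["location"].apply(coord, i=1)
print(df["longitude"].iloc[0])  # → -97.707172829666
```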
|
<python><python-3.x><pandas><pyspark><pyspark-pandas>
|
2024-11-25 03:26:18
| 2
| 337
|
MyNameHere
|
79,221,525
| 7,887,965
|
Apache Nifi (ExecuteStreamCommand): Executable command python3 ended in an error:
|
<p>I am trying to run the following script in the <code>ExecuteStreamCommand</code> processor, which reads data coming from the <code>ListFile</code> and <code>FetchFile</code> processors. The script merges the content of Excel files into Parquet format, but the processor fails with <code>python3 ended in an error</code>:</p>
<pre><code>import os
import sys
import io

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq


def merge_and_split_parquet(data, output_dir, max_size=1 * 1024**3):
    """
    Merges and splits data into Parquet files based on size.

    Args:
        data (pd.DataFrame): Data to process.
        output_dir (str): Directory to save Parquet files.
        max_size (int): Maximum size in bytes for each Parquet file.
    """
    merged_data = pd.DataFrame()
    parquet_index = 1

    # Process the data row by row
    for _, row in data.iterrows():
        merged_data = pd.concat([merged_data, pd.DataFrame([row])])

        # Check the size of the DataFrame
        table = pa.Table.from_pandas(merged_data)
        buffer = pa.BufferOutputStream()
        pq.write_table(table, buffer)
        if buffer.size() >= max_size:
            # Write to a new Parquet file
            output_file = os.path.join(output_dir, f"merged_{parquet_index}.parquet")
            pq.write_table(table, output_file)
            print(f"Generated {output_file}")
            merged_data = pd.DataFrame()  # Reset the DataFrame
            parquet_index += 1

    # Write any remaining data
    if not merged_data.empty:
        output_file = os.path.join(output_dir, f"merged_{parquet_index}.parquet")
        pq.write_table(pa.Table.from_pandas(merged_data), output_file)
        print(f"Generated {output_file}")


if __name__ == "__main__":
    # Ensure proper arguments are passed
    if len(sys.argv) < 2:
        print("Usage: python script.py <output_dir>")
        sys.exit(1)

    output_dir = sys.argv[1]
    # Ensure the output directory exists
    os.makedirs(output_dir, exist_ok=True)

    # Read data from STDIN
    try:
        input_stream = sys.stdin.read()
        excel_data = pd.read_excel(io.BytesIO(input_stream.encode()))
    except Exception as e:
        print(f"Error reading input data: {e}")
        sys.exit(1)

    # Process and write Parquet files
    merge_and_split_parquet(excel_data, output_dir)
</code></pre>
<p>When I run the processor, it gives the following error:</p>
<pre><code>2024-11-25 07:46:07,540 ERROR [Timer-Driven Process Thread-5] o.a.n.p.standard.ExecuteStreamCommand ExecuteStreamCommand[id=4e7bd5aa-0193-1000-6154-675d13b6c0e5] Transferring StandardFlowFileRecord[uuid=c534f453-59d0-4958-819e-01295652d375,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1732502757533-52, container=default, section=52], offset=1680, length=112],offset=0,name=3GNANKANA.xlsx,size=112] to nonzero status. Executable command python3 ended in an error:
</code></pre>
<p>Below is my workflow:</p>
<p><a href="https://i.sstatic.net/EDi5bixZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDi5bixZ.png" alt="main workflow of apachenifi processor" /></a></p>
<p>My <code>executestreamcommand</code> configuration is as follows:</p>
<p><a href="https://i.sstatic.net/MB6OfHop.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MB6OfHop.png" alt="configuration" /></a></p>
<p>How can I resolve this?</p>
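<p>One plausible cause, offered as an assumption since the NiFi log truncates the Python traceback: <code>sys.stdin.read()</code> decodes the flow file as text, and <code>.encode()</code> cannot reconstruct the original bytes of a binary <code>.xlsx</code> file, so <code>pd.read_excel</code> receives corrupted input. Reading <code>sys.stdin.buffer.read()</code> keeps the bytes intact. The corruption is easy to demonstrate:</p>

```python
# .xlsx files are ZIP archives starting with b"PK"; bytes like 0xFF are not
# valid UTF-8, so a text-mode decode/encode round trip mangles them.
binary = bytes([0x50, 0x4B, 0x03, 0x04, 0xFF, 0xFE])
round_tripped = binary.decode("utf-8", errors="replace").encode("utf-8")
print(round_tripped == binary)  # → False
```

<p>In the script, replacing <code>input_stream = sys.stdin.read()</code> plus the <code>.encode()</code> call with <code>raw = sys.stdin.buffer.read()</code> and <code>pd.read_excel(io.BytesIO(raw))</code> would avoid the lossy decode step entirely.</p>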
|
<python><apache-nifi>
|
2024-11-25 03:12:01
| 0
| 407
|
Filbadeha
|
79,221,167
| 7,917,771
|
blip2 type mismatch exception
|
<p>I'm trying to create an image captioning model using hugging face blip2 model on colab. My code was working fine till last week (Nov 8) but it gives me an exception now.</p>
<p>To install packages I use the following command:</p>
<pre><code>!pip install -q git+https://github.com/huggingface/peft.git transformers bitsandbytes datasets
</code></pre>
<p>To load blip2 processor and model I use the following code:</p>
<pre><code>model_name = "Salesforce/blip2-opt-2.7b"
processor = AutoProcessor.from_pretrained(model_name)
model = Blip2ForConditionalGeneration.from_pretrained(model_name,device_map="auto",load_in_8bit=False)
</code></pre>
<p>I use the following code to generate captions:</p>
<pre><code>def generate_caption(processor, model, image_path):
    image = PILImage.open(image_path).convert("RGB")
    print("image shape:", image.size)
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Preprocess the image
    inputs = processor(images=image, return_tensors="pt").to(device)
    print("Input shape:", inputs['pixel_values'].shape)
    print("Device:", device)  # Additional debugging
    for key, value in inputs.items():
        print(f"Key: {key}, Shape: {value.shape}")

    # Generate caption
    with torch.no_grad():
        generated_ids = model.generate(**inputs)
    caption = processor.decode(generated_ids[0], skip_special_tokens=True)
    return caption
</code></pre>
<p>here is the code that uses this method to generate captions:</p>
<pre><code>image_path = "my_image_path.jpg"
caption = generate_caption(processor, model, image_path)
print(f"{image_path}: {caption}")
</code></pre>
<p>finally, this is the outputs and errors of running the code above:</p>
<pre><code>image shape: (320, 240)
Input shape: torch.Size([1, 3, 224, 224])
Device: cuda
Key: pixel_values, Shape: torch.Size([1, 3, 224, 224])
---------------------------------------------------------------------------
.
.
.
/usr/local/lib/python3.10/dist-packages/transformers/models/blip_2/modeling_blip_2.py in generate(self, pixel_values, input_ids, attention_mask, interpolate_pos_encoding, **generate_kwargs)
2314 if getattr(self.config, "image_token_index", None) is not None:
2315 special_image_mask = (input_ids == self.config.image_token_index).unsqueeze(-1).expand_as(inputs_embeds)
-> 2316 inputs_embeds[special_image_mask] = language_model_inputs.flatten()
2317 else:
2318 logger.warning_once(
RuntimeError: shape mismatch: value tensor of shape [81920] cannot be broadcast to indexing result of shape [0]
</code></pre>
<p>I have searched the internet and used various AI models for help, but to no avail. My guess is that this is a package-update problem, since my code had no problem last week. (I tried to restore my code to the Nov 8 version, but it still throws an exception.) Moreover, I don't understand how 81920 is calculated in the error message.</p>
|
<python><artificial-intelligence><huggingface-transformers><large-language-model>
|
2024-11-24 22:05:23
| 2
| 571
|
Soroush Hosseinpour
|
79,220,947
| 4,451,315
|
Write DuckDB csv to Python string
|
<p>I have a DuckDBPyRelation and I'd like to get it as a CSV string.</p>
<p>For example, if I have</p>
<pre><code>import polars as pl
import duckdb
data = pl.DataFrame({"a": [1, 2, 3]})
rel = duckdb.sql('select * from data')
</code></pre>
<p>then I'd like to do something like</p>
<pre><code>out = rel.to_csv()
</code></pre>
<p>However, DuckDB doesn't support this; you can only pass a file name to <code>to_csv</code>.</p>
<p>I'd like to end up with</p>
<pre><code>>>> out
'a\n1\n2\n3\n'
</code></pre>
<p>What would be the most efficient way to do this, without converting to other Python libraries like pandas / Polars / PyArrow?</p>
<p>All I can think of is to write to a temporary file and then read that back with Python.</p>
|
<python><duckdb>
|
2024-11-24 19:45:24
| 1
| 11,062
|
ignoring_gravity
|
79,220,838
| 21,395,742
|
Create a scaled molecule rdkit
|
<p>I am using RDKit in python to draw a molecule, and I want to get a high-definition image</p>
<p>This is my current code</p>
<pre><code>from rdkit import Chem
from rdkit.Chem import Draw

mol = Chem.MolFromSmiles("CCO")
mol = Chem.AddHs(mol)
img = Draw.MolToImage(mol)
</code></pre>
<p>I want it to be ~ 2000x1000 pixels</p>
<p>I tried <code>img = Draw.MolToImage(mol, size = (2000,1000))</code>, but although the canvas increases in size, the line width and font size remain constant.</p>
<p>Scaling with <code>img.resize()</code> is not ok because I want to get a non-pixelated output.</p>
<p>The closest to an answer is <a href="https://www.rdkit.org/docs/source/rdkit.Chem.Draw.rdMolDraw2D.html#rdkit.Chem.Draw.rdMolDraw2D.MolDraw2D.SetScale" rel="nofollow noreferrer">this</a>. However, when I try creating its parent class I get an error:</p>
<pre><code>>>> a = Draw.rdMolDraw2D.MolDraw2D()
RuntimeError: This class cannot be instantiated from Python
</code></pre>
<p>tl;dr: I am trying to find a way to scale the image while it is rendering.</p>
<p>I am also ok using an alternative to RDKit, all I need is a way to display chemical structures from SMILES in a high-def (2000x1000) image.</p>
<p>Side question: Is there a way to show carbon atoms too? I can't find any docs for either of these questions.</p>
|
<python><chemistry><rdkit>
|
2024-11-24 18:51:25
| 2
| 845
|
hehe
|
79,220,668
| 561,243
|
Is it possible to teach TOML Kit how to dump an object?
|
<p>I am generating TOML files with several tables using TOML Kit without any problem in general.</p>
<p>So far all the values were either strings or numbers, but today I ran into a problem for the first time. I was trying to dump a <code>pathlib.Path</code> object and it fails with a ConvertError: <code>Unable to convert an object of <class 'pathlib.WindowsPath'> to a TOML item</code>. I fixed it right away by wrapping the value in <code>str()</code>, but I'd like to do something that is valid in general.</p>
<p>Is there a way to teach TOML Kit how to convert a custom object to a valid TOML value? In the case of Path, it would be extremely easy.</p>
|
<python><toml><tomlkit>
|
2024-11-24 17:11:00
| 1
| 367
|
toto
|
79,220,660
| 4,752,738
|
Making possible field values depend on other field values
|
<p>I have these 3 fields: name, category, subcategory. Depending on the name I allow different categories, and depending on the category I allow different subcategories. Example:</p>
<ul>
<li>If name is "foo" then category can be "foo_1" or "foo_2".</li>
<li>If category is foo_1 then subcategory can be foo_11 or foo_111.</li>
<li>If category is foo_2 then subcategory can be foo_22 or foo_222.</li>
<li>If name is "goo" then category and subcategory can only be none.</li>
</ul>
<p>How can I do that? I need it for FastAPI validation, and if there is a more efficient way of doing it, that would be great.</p>
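<p>One way to express such cross-field rules in Pydantic v2, which FastAPI uses for validation, is a <code>model_validator</code> checking against a lookup table. A sketch with hypothetical field names and the values from the example; the nested dict is just one assumed encoding of the rules:</p>

```python
from typing import Optional

from pydantic import BaseModel, model_validator

# Hypothetical rule table: name -> category -> allowed subcategories.
ALLOWED = {
    "foo": {"foo_1": {"foo_11", "foo_111"}, "foo_2": {"foo_22", "foo_222"}},
    "goo": {None: {None}},  # "goo" allows only null category/subcategory
}

class Event(BaseModel):
    name: str
    category: Optional[str] = None
    subcategory: Optional[str] = None

    @model_validator(mode="after")
    def check_combination(self):
        cats = ALLOWED.get(self.name)
        if cats is None:
            raise ValueError(f"unknown name {self.name!r}")
        if self.category not in cats:
            raise ValueError(f"category {self.category!r} not allowed for {self.name!r}")
        if self.subcategory not in cats[self.category]:
            raise ValueError(f"subcategory {self.subcategory!r} not allowed")
        return self

print(Event(name="foo", category="foo_1", subcategory="foo_11").subcategory)  # → foo_11
```

<p>FastAPI will surface the raised <code>ValueError</code>s as 422 validation errors automatically when the model is used as a request body.</p>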
|
<python><fastapi><pydantic>
|
2024-11-24 17:05:49
| 1
| 943
|
idan ahal
|
79,220,412
| 12,520,740
|
Fresh install of pythonpy gives SyntaxWarnings
|
<p>I just freshly installed <code>pythonpy</code>. However, the package gives <code>SyntaxWarning</code>s during installation:</p>
<pre class="lang-bash prettyprint-override"><code>$ sudo apt install pythonpy
[sudo] password for melvio:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
pythonpy
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 10.2 kB of archives.
After this operation, 49.2 kB of additional disk space will be used.
Get:1 http://nl.archive.ubuntu.com/ubuntu noble/universe amd64 pythonpy all 0.4.11b-3.1 [10.2 kB]
Fetched 10.2 kB in 0s (223 kB/s)
Selecting previously unselected package pythonpy.
(Reading database ... 187227 files and directories currently installed.)
Preparing to unpack .../pythonpy_0.4.11b-3.1_all.deb ...
Unpacking pythonpy (0.4.11b-3.1) ...
Setting up pythonpy (0.4.11b-3.1) ...
/usr/share/pythonpy/pythonpy/__main__.py:29: SyntaxWarning: invalid escape sequence '\.'
if re.match('np(\..*)?$', raw_module_name):
/usr/share/pythonpy/pythonpy/__main__.py:31: SyntaxWarning: invalid escape sequence '\.'
elif re.match('pd(\..*)?$', raw_module_name):
/usr/share/pythonpy/pythonpy/pycompleter.py:31: SyntaxWarning: invalid escape sequence '\.'
regex = re.compile("([a-zA-Z_][a-zA-Z0-9_]*)\.?")
/usr/share/pythonpy/pythonpy/pycompleter.py:34: SyntaxWarning: invalid escape sequence '\.'
if re.match('np(\..*)?$', raw_module_name):
/usr/share/pythonpy/pythonpy/pycompleter.py:36: SyntaxWarning: invalid escape sequence '\.'
elif re.match('pd(\..*)?$', raw_module_name):
Processing triggers for man-db (2.12.0-4build2) ...
</code></pre>
<p>And every command I run also gives syntax warnings:</p>
<pre class="lang-bash prettyprint-override"><code>$ py --version
/usr/bin/py:29: SyntaxWarning: invalid escape sequence '\.'
if re.match('np(\..*)?$', raw_module_name):
/usr/bin/py:31: SyntaxWarning: invalid escape sequence '\.'
elif re.match('pd(\..*)?$', raw_module_name):
Pythonpy ???
Python 3.12.3
</code></pre>
<p>Could this be something with my system's configuration, or is this a <code>pythonpy</code> issue?</p>
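<p>This points at the package rather than your configuration: the <code>pythonpy</code> sources use <code>'\.'</code> in plain (non-raw) string literals, and Python 3.12 promoted invalid escape sequences from a <code>DeprecationWarning</code> to a <code>SyntaxWarning</code>, so code that byte-compiled quietly on older interpreters now warns. The effect can be reproduced directly:</p>

```python
import warnings

# Compile a source string whose literal contains the invalid escape \.
source = r"x = 'np(\..*)?$'"
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    compile(source, "<demo>", "exec")

# SyntaxWarning on Python 3.12+, DeprecationWarning on older interpreters.
print(any(issubclass(w.category, (SyntaxWarning, DeprecationWarning)) for w in caught))
```

<p>The upstream fix is to use raw strings, e.g. <code>re.match(r'np(\..*)?$', raw_module_name)</code>, which is why this reads as a <code>pythonpy</code> packaging issue on recent Python versions.</p>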
|
<python><apt><python-py>
|
2024-11-24 15:20:39
| 1
| 1,156
|
melvio
|
79,220,232
| 8,185,618
|
Mediapipe gives different results for two input types: image file path and numpy array
|
<p>As you may know, <strong>Mediapipe</strong> provides landmark locations based on the <strong>aligned output image</strong> rather than the <strong>input image</strong>.</p>
<p><strong>Objective</strong>:
I intend to perform <strong>landmark detection</strong> on multiple images. Below, I've included code that uses <code>PoseLandmarkerOptions</code> to identify <strong>33 body landmarks</strong>. After locating these landmarks, I plan to classify the face angle as either <strong>0 degrees</strong>, <strong>90 degrees</strong>, <strong>180 degrees</strong>, or <strong>270 degrees</strong>.</p>
<p><strong>Data</strong>:
I have included sample images from the MARS dataset, as I was unable to use my original images due to issues: they have higher resolution and larger dimensions than the MARS dataset.</p>
<p><a href="https://i.sstatic.net/QsMETm8n.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsMETm8n.jpg" alt="1" /></a>
<a href="https://i.sstatic.net/UD4aiLUE.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UD4aiLUE.jpg" alt="2" /></a>
<a href="https://i.sstatic.net/f5OxKTv6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5OxKTv6.jpg" alt="3" /></a>
<a href="https://i.sstatic.net/XBuQtHcg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XBuQtHcg.jpg" alt="4" /></a>
<a href="https://i.sstatic.net/QD1M6onZ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QD1M6onZ.jpg" alt="5" /></a>
<a href="https://i.sstatic.net/JLwyOJ2C.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JLwyOJ2C.jpg" alt="6" /></a>
<a href="https://i.sstatic.net/A2My11Y8.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2My11Y8.jpg" alt="7" /></a>
<a href="https://i.sstatic.net/LhAHwOCd.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LhAHwOCd.jpg" alt="8" /></a>
<a href="https://i.sstatic.net/2cUBzFM6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2cUBzFM6.jpg" alt="9" /></a></p>
<p>all images as a compressed file:</p>
<p>Code:
I have provided the main code to detect landmarks in the images.</p>
<pre class="lang-py prettyprint-override"><code>import sys
import cv2
import numpy as np
import glob
import os
import base64
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
from typing import Dict

base_options = python.BaseOptions(
    model_asset_path="./models/pose_landmarker.task",
    delegate=python.BaseOptions.Delegate.GPU,
)
options = vision.PoseLandmarkerOptions(
    base_options=base_options,
    output_segmentation_masks=True,
    min_pose_detection_confidence=0.5,
    min_pose_presence_confidence=0.5,
    min_tracking_confidence=0.5,
)
detector = vision.PoseLandmarker.create_from_options(options)


def check_landmarks(detection_result, img, address):
    file_name = address.split("/")[-1]
    h, w, _ = img.shape  # numpy image shape is (height, width, channels)
    for each_person_pose in detection_result.pose_landmarks:
        for each_key_point in each_person_pose:
            if each_key_point.presence > 0.5 and each_key_point.visibility > 0.5:
                x_px = int(each_key_point.x * w)
                y_px = int(each_key_point.y * h)
                cv2.circle(img, (x_px, y_px), 3, (255, 0, 0), 2)
    cv2.imwrite("./landmarks/" + file_name, img)


def rectifier(detector, image, address):
    try:
        srgb_image = mp.Image.create_from_file(address)
        detection_result = detector.detect(srgb_image)
        check_landmarks(detection_result, srgb_image.numpy_view(), address)
    except Exception as e:
        print(f"error {e}")


def rectify_image(rectify_image_request):
    image = cv2.imdecode(
        np.frombuffer(base64.b64decode(rectify_image_request["image"]), np.uint8),
        cv2.IMREAD_COLOR,
    )
    rectifier(detector, image, rectify_image_request["address"])


def read_image_for_rectify(address: str) -> Dict:
    face_object = dict()
    img = cv2.imread(address)
    _, buffer = cv2.imencode(".jpg", img)
    img = base64.b64encode(buffer).decode()
    face_object["image"] = img
    face_object["address"] = address
    return face_object


folder_path = "./png2jpg"
file_paths = glob.glob(os.path.join(folder_path, "*.jpg"), recursive=True)
for id_file, file in enumerate(file_paths):
    print(id_file, file)
    rectify_image(read_image_for_rectify(file))
</code></pre>
<p><strong>Problem</strong>:
Initially, I used image addresses to <strong>feed images directly</strong> to Mediapipe, and the results indicated acceptable performance.</p>
<p><a href="https://i.sstatic.net/6FHiTEBM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6FHiTEBM.jpg" alt="1" /></a>
<a href="https://i.sstatic.net/IYRHHjVW.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYRHHjVW.jpg" alt="2" /></a>
<a href="https://i.sstatic.net/AJcqFpX8.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJcqFpX8.jpg" alt="3" /></a>
<a href="https://i.sstatic.net/Du6F6Q4E.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Du6F6Q4E.jpg" alt="4" /></a>
<a href="https://i.sstatic.net/H3R3Ez9O.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3R3Ez9O.jpg" alt="5" /></a>
<a href="https://i.sstatic.net/ObQJKu18.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ObQJKu18.jpg" alt="6" /></a>
<a href="https://i.sstatic.net/1KqrHFA3.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1KqrHFA3.jpg" alt="7" /></a>
<a href="https://i.sstatic.net/lQVwwLP9.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lQVwwLP9.jpg" alt="8" /></a>
<a href="https://i.sstatic.net/WxLHUNLw.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WxLHUNLw.jpg" alt="9" /></a></p>
<p>However, I now need to receive images as dictionaries with the images encoded in <strong>base64</strong>. I have modified the data input accordingly, but upon reviewing the output in this scenario, Mediapipe fails to detect landmarks in many of the images. So I feed images as a <strong>numpy array</strong> into Mediapipe by changing this line from</p>
<pre class="lang-py prettyprint-override"><code>srgb_image = mp.Image.create_from_file(address)
</code></pre>
<p>into</p>
<pre class="lang-py prettyprint-override"><code>srgb_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=image)
</code></pre>
<p>output in the second scenario:</p>
<p><a href="https://i.sstatic.net/UA87ocED.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UA87ocED.jpg" alt="1" /></a>
<a href="https://i.sstatic.net/vDZG2Mo7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vDZG2Mo7.jpg" alt="2" /></a>
<a href="https://i.sstatic.net/cWnp390g.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWnp390g.jpg" alt="3" /></a>
<a href="https://i.sstatic.net/9QA4fR7K.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QA4fR7K.jpg" alt="4" /></a>
<a href="https://i.sstatic.net/BHZ1GfYz.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHZ1GfYz.jpg" alt="5" /></a>
<a href="https://i.sstatic.net/LhgZCxAd.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LhgZCxAd.jpg" alt="6" /></a>
<a href="https://i.sstatic.net/BlJQZYzu.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BlJQZYzu.jpg" alt="7" /></a>
<a href="https://i.sstatic.net/FyHKEueV.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FyHKEueV.jpg" alt="8" /></a>
<a href="https://i.sstatic.net/M60G4agp.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M60G4agp.jpg" alt="9" /></a></p>
<p>How can I achieve consistent output in both scenarios?</p>
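<p>One concrete difference between the two input paths, offered as an assumption worth ruling out: <code>mp.Image.create_from_file</code> yields RGB data, while <code>cv2.imdecode</code> returns BGR, so wrapping the OpenCV array with <code>mp.ImageFormat.SRGB</code> hands the model channel-swapped images. Converting first, e.g. <code>cv2.cvtColor(image, cv2.COLOR_BGR2RGB)</code>, would reconcile the two paths. The channel swap itself:</p>

```python
import numpy as np

# A 1x1 "blue" pixel in OpenCV's BGR channel order.
bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)

# Reversing the last axis is equivalent to cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).
rgb = bgr[..., ::-1]
print(rgb[0, 0].tolist())  # → [0, 0, 255]
```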
|
<python><numpy><opencv><mediapipe>
|
2024-11-24 13:59:43
| 1
| 978
|
BarzanHayati
|
79,219,875
| 1,613,983
|
How to perform forward-fill along the 0th dimension of an N-D tensor in TensorFlow
|
<p>For example, take the following tensor:</p>
<pre><code>tf.constant([
[0, np.nan, 2, 1],
[np.nan, 3, 3, 4],
])
</code></pre>
<p>I'd like to implement a forward-fill operation like <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.ffill.html" rel="nofollow noreferrer"><code>pd.DataFrame.ffill</code></a> (with <code>axis=0</code>). The output of the <code>ffill</code> operation should therefore be:</p>
<pre><code>[
[0, np.nan, 2, 1]
[0, 3, 3, 4]
]
</code></pre>
<p>Note that in practice this tensor might have a 3rd <code>batch</code> dimension or further dimensions. It's not clear to me how I can achieve this tensor transformation. I did find the following question that addresses this for the 1D case:</p>
<p><a href="https://stackoverflow.com/a/52818310/1613983">https://stackoverflow.com/a/52818310/1613983</a></p>
<p>However, the <code>values</code> variable in that implementation is a sparse representation of the input data, which works fine in one dimension, but I wasn't able to translate that to the 2D (or higher) case.</p>
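<p>One sketch of a dense (non-sparse) approach, assuming TensorFlow 2.x: <code>tf.scan</code> iterates along axis 0 and carries an accumulator of the remaining shape, so the same code covers 2-D, batched, and higher-rank tensors unchanged:</p>

```python
import numpy as np
import tensorflow as tf

def ffill0(x):
    # Carry the previous row's value wherever the current row is NaN.
    def step(prev, cur):
        return tf.where(tf.math.is_nan(cur), prev, cur)
    # With no initializer, tf.scan emits x[0] unchanged, then the recurrence.
    return tf.scan(step, x)

x = tf.constant([[0.0, np.nan, 2.0, 1.0],
                 [np.nan, 3.0, 3.0, 4.0]])
print(ffill0(x).numpy())
```

<p>Leading NaNs stay NaN (there is nothing to fill from), matching the expected output above, where <code>[0][1]</code> remains NaN.</p>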
|
<python><tensorflow>
|
2024-11-24 10:44:48
| 0
| 23,470
|
quant
|
79,219,823
| 4,710,828
|
'n' vs 's' command for Python debugger (pdb)
|
<p>I'm new to <code>pdb</code> and trying to learn. There is one thing that's bugging me a lot. A lot of examples that I read on blogs mention code like this:</p>
<pre class="lang-py prettyprint-override"><code>import pdb

def buggy_function(x):
    result = 0
    for i in range(x):
        result += i / (i - 4)
    return result

# Use pdb.run() to debug
pdb.run("buggy_function(10)")
</code></pre>
<p>They mention that:</p>
<ul>
<li>When you run the code, the debugger starts before executing <code>buggy_function(10)</code></li>
<li>At the (Pdb) prompt, you need to use <code>n</code> to step into function <code>buggy_function</code>.</li>
</ul>
<p>An example session output given the blog was as below:</p>
<pre class="lang-none prettyprint-override"><code>> <string>(1)<module>()
(Pdb) n
> <string>(1)<module>()
(Pdb) s
> <string>(1)buggy_function()
(Pdb) p x
10
(Pdb) n
> <string>(3)buggy_function()
(Pdb) p i
4
(Pdb) p result
6.0
(Pdb) c
ZeroDivisionError: division by zero
</code></pre>
<p>However, when I run the program (using <code>> python ./main.py</code>), I get the following output</p>
<pre><code>(codeexec) PS C:\codeexec> python .\main.py
> <string>(1)<module>()
(Pdb) n
ZeroDivisionError: division by zero
> <string>(1)<module>()
(Pdb)
</code></pre>
<p>It seems like pressing <code>n</code> at this point causes the program to run the whole function, which raises the exception.
If, however, I start with <code>s</code>, then control steps into the method properly:</p>
<pre><code>(codeexec) PS C:\codeexec> python .\main.py
> <string>(1)<module>()
(Pdb) s
--Call--
> c:\codeexec\main.py(3)buggy_function()
-> def buggy_function(x):
(Pdb) n
> c:\codeexec\main.py(4)buggy_function()
-> result = 0
(Pdb) n
> c:\codeexec\main.py(5)buggy_function()
-> for i in range(x):
(Pdb) n
> c:\codeexec\main.py(6)buggy_function()
-> result += i / (i - 4)
(Pdb) p i
0
(Pdb)
</code></pre>
<p>What I don't understand is why so many blogs insist on typing <code>n</code> to debug the function, when we actually need to press <code>s</code>. Am I missing something here?</p>
|
<python><pdb>
|
2024-11-24 10:11:18
| 0
| 373
|
sdawar
|
79,219,726
| 4,054,314
|
Problem in passing dictionaries from one notebook to another in Pyspark
|
<p>I am new to PySpark. My current project requires doing ETL in Databricks. I have a CSV file with almost 300 million rows, and this is only one such source; there will be 2 more data sources. Below is my approach to solving it:</p>
<p>Step 1: Create an abstract class and method to read data from the various sources</p>
<p>Step 2: Read the data from Step 1 and create dictionaries for each source</p>
<p>Step 3: Pass the dictionaries from Step 2 into this step and do all the transformations needed</p>
<p>Step 4: Load the data into Parquet files and then into tables</p>
<p>My problem is in Step 3, where I'll be using the dictionaries passed from Step 2. Will this be feasible, given that the data volume is so huge, or will performance suffer?</p>
<p>Please let me know what approach I should follow, as I am stuck at Step 3.</p>
<p>Thank you in advance.</p>
|
<python><apache-spark><pyspark><apache-spark-sql><databricks>
|
2024-11-24 09:11:37
| 1
| 1,304
|
sam
|
79,219,651
| 7,972,317
|
reserve space for a legend in pyplot while fixing plot size and x-axis position
|
<p>Here are two of my plotting functions and an example of their use:</p>
<pre><code>import matplotlib.pyplot as plt


def set_legend(ax, item_count, title=None):
    legend = ax.legend(
        title=title,
        loc='upper center',
        bbox_to_anchor=(0.5, -0.1),
        ncol=item_count,
        frameon=False,
        prop={'size': 6}
    )
    legend.get_title().set_fontsize('8')
    return ax


def generate_plot_base():
    fig, ax = plt.subplots(1, 1, figsize=(6, 2.71), tight_layout={'pad': 0})
    return ax


ax = generate_plot_base()
ax.plot([], [], label='Label 1', color='blue')  # Toy labels
ax.plot([], [], label='Label 2', color='orange')
ax.plot([], [], label='Label 3', color='green')
ax = set_legend(ax, 3, title="Example Legend")  # if you comment this out, the x-axis goes down
plt.show()
</code></pre>
<p>The thing is: when I don't have a legend, the size of the plot is the figsize.
But when I add a legend without a title, the bottom of the plot expands to accommodate the legend. If I add a legend title, it expands even more (or, if I use tight_layout, the figure stays the same size but the x-axis moves upwards within the plot).</p>
<p>Instead, I want to add extra space between the bottom and the x-axis, so that if I add a legend (with or without a legend title), both the plot area (pixel dimensions) and the location of the axis stay the same, since there is already enough free area.</p>
<p>I tried experimenting with <code>tight_layout</code> and <code>subplots_adjust</code>, but I can't get the behavior I describe here.</p>
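<p>For illustration, here is the closest sketch I have (the margin values are guesses for a 6 x 2.71 in figure): dropping <code>tight_layout</code> and reserving a fixed bottom margin once with <code>subplots_adjust</code>, so the axes box never moves when a legend is added:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt

# Reserve a fixed bottom margin up front instead of using tight_layout;
# the axes box then stays identical with or without a legend (or title).
fig, ax = plt.subplots(figsize=(6, 2.71))
fig.subplots_adjust(left=0.08, right=0.97, top=0.95, bottom=0.35)

ax.plot([], [], label='Label 1', color='blue')
ax.plot([], [], label='Label 2', color='orange')
pos_without_legend = ax.get_position().bounds

ax.legend(title="Example Legend", loc='upper center',
          bbox_to_anchor=(0.5, -0.12), ncol=2, frameon=False,
          prop={'size': 6})
fig.canvas.draw()  # force a full layout/draw pass
pos_with_legend = ax.get_position().bounds

print(pos_without_legend == pos_with_legend)  # True: the axis never moves
```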
|
<python><matplotlib><plot><visualization>
|
2024-11-24 08:27:27
| 1
| 1,391
|
Moran Reznik
|
79,219,592
| 4,755,229
|
How to properly convert between types in Cython?
|
<p>I have a function which looks like</p>
<pre class="lang-py prettyprint-override"><code>cdef double __test_func(double x, double y, double z):
    return (x-y)/((2*x-y)*y)**(0.5*z)


def test_func(x, y, z):
    return __test_func(<double>x, <double>y, <double>z)
</code></pre>
<p>What I want to do is use this <code>test_func</code> in Python without caring whether I typed a decimal point or not, just like a real Python function. The <code><double></code> part is what I wrote after reading <a href="https://cython.readthedocs.io/en/stable/src/userguide/language_basics.html" rel="nofollow noreferrer">the Cython guide</a>, but honestly I am not sure I did it right, especially since they call it "type casting". Because, if I recall correctly, type casting isn't exactly the same thing as type conversion.</p>
<p>And, as I suspected, this is not a great idea. The function works as written, somehow. But if I type the arguments of the Python wrapper,</p>
<pre class="lang-py prettyprint-override"><code>ctypedef fused Numeric:
    char
    short
    int
    long
    long long
    float
    double
    long double


cdef double __test_func(double x, double y, double z):
    return (x-y)/((2*x-y)*y)**(0.5*z)


def test_func(Numeric x, Numeric y, Numeric z):
    return __test_func(<double>x, <double>y, <double>z)
</code></pre>
<p>and after compiling and loading it in a Python shell, if I give the argument <code>x</code> as an integer, for instance <code>x=100</code>, the result goes horribly wrong and is completely different from when I give <code>x=100.</code>, as if I had done a C-style type cast.</p>
<p>So, how do I ensure the type conversion is done correctly in Cython files? Specifically, how do I convert to double-precision (64-bit) floating point? I know that in Python it is synonymous with <code>float</code>, but given that this is Cython, I am not entirely sure what precision <code>float(x)</code> would end up with.</p>
|
<python><types><double><cython><floating>
|
2024-11-24 07:44:45
| 0
| 498
|
Hojin Cho
|
79,219,382
| 6,042,172
|
How to keep only some fields in json of list of json?
|
<p>I have this data structure:</p>
<pre><code>[
    {
        'field_a': 8,
        'field_b': 9,
        'field_c': 'word_a',
        'field_d': True,
        'children': [
            {
                'field_a': 9,
                'field_b': 9,
                'field_c': 'word_b',
                'field_d': False,
                'chilren': [
                    {
                        'field_a': 9,
                        'field_b': 9,
                        'field_c': 'wod_c',
                        'field_d': False,
                        'chilren': []
                    }
                ]
            }
        ]
    }
]
</code></pre>
<p>and I want to keep (for printing purposes) something like this:</p>
<pre><code>[
    {
        'field_c': 'word_a',
        'children': [
            {
                'field_c': 'word_b',
                'chilren': [
                    {
                        'field_c': 'wod_c',
                        'chilren': []
                    }
                ]
            }
        ]
    }
]
</code></pre>
<p>What is the most pythonic way to achieve it?</p>
<p>I cannot modify the original data, but I can make a copy of it</p>
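<p>For context, a recursive copy along these lines is the kind of thing I had in mind (key names taken from the sample above, including the <code>chilren</code> typo; this builds new lists/dicts, so the original stays untouched):</p>

```python
# Sketch: recursively build a pruned copy, keeping only whitelisted keys.
KEEP = {'field_c', 'children', 'chilren'}

def prune(node):
    if isinstance(node, list):
        return [prune(item) for item in node]
    if isinstance(node, dict):
        return {k: prune(v) for k, v in node.items() if k in KEEP}
    return node

data = [
    {'field_a': 8, 'field_b': 9, 'field_c': 'word_a', 'field_d': True,
     'children': [
         {'field_a': 9, 'field_b': 9, 'field_c': 'word_b', 'field_d': False,
          'chilren': []},
     ]},
]

pruned = prune(data)
print(pruned)
print('field_a' in data[0])  # True: the original data is not modified
```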
|
<python><json><hierarchy>
|
2024-11-24 05:34:53
| 2
| 908
|
glezo
|
79,219,193
| 229,075
|
Is duckdb table persisted
|
<p>I have a huge Parquet file and only want to explore a specific subset of it: hundreds of rows out of a hundred million.</p>
<p>So I do this to create a temporary table <code>vloc1</code></p>
<pre><code>CREATE OR REPLACE TABLE vloc1 AS FROM '.data/muni_vloc_202101.parquet' WHERE vid=5773
</code></pre>
<p>Like many things with duckdb, it works great. Querying the <code>vloc1</code> table is instantaneous.</p>
<p>My question is, where does <code>vloc1</code> exist? Is it in memory? On disk? Should I drop the table when I'm done? I am doing all this in a Python session in a Jupyter notebook.</p>
|
<python><duckdb>
|
2024-11-24 01:57:47
| 0
| 18,924
|
Wai Yip Tung
|
79,219,190
| 7,034,613
|
Twilio inbound call recording - keeps calling "incoming-call" and "recording-callback"
|
<p>I'm trying to use the official <a href="https://github.com/twilio-samples/speech-assistant-openai-realtime-api-python" rel="nofollow noreferrer">Twilio <> OpenAI realtime API tutorial</a> for realtime LLM-based agents.</p>
<p>Now, when trying to introduce call recording (so I can analyze the call afterward), I keep getting an infinite loop of HTTP requests for the same call, from <code>/incoming-call</code> to <code>/recording-callback</code>, as shown below (all of these requests were made during a single phone call):</p>
<pre><code>% uvicorn main:app --host 0.0.0.0 --port 8082
INFO: Started server process [11841]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8082 (Press CTRL+C to quit)
event
Host URL is XXXXX.ngrok-free.app
INFO: 34.227.88.132:0 - "POST /incoming-call HTTP/1.1" 200 OK
event
Host URL is XXXXX.ngrok-free.app
INFO: 34.239.115.233:0 - "POST /incoming-call HTTP/1.1" 200 OK
INFO: 54.163.201.30:0 - "POST /recording-callback HTTP/1.1" 200 OK
event
Host URL is XXXXX.ngrok-free.app
INFO: 54.144.122.199:0 - "POST /incoming-call HTTP/1.1" 200 OK
INFO: 3.89.62.101:0 - "POST /recording-callback HTTP/1.1" 200 OK
</code></pre>
<p>Before the <code>response.record</code> call within <code>/incoming-call</code>, the code would go to <code>/media-stream</code> and function properly. However, now it's stuck between <code>/incoming-call</code> and <code>/recording-callback</code> for no apparent reason.<br />
Any ideas or help? Thanks.</p>
<pre><code>import os
import json
import base64
import asyncio
import websockets
import requests
from fastapi import FastAPI, WebSocket, Request, Form
from fastapi.responses import HTMLResponse, JSONResponse
from fastapi.websockets import WebSocketDisconnect
from twilio.twiml.voice_response import VoiceResponse, Connect, Say, Stream
from dotenv import load_dotenv

load_dotenv()

# Configuration
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
# PORT = int(os.getenv('PORT', 5050))
PORT = 8082
SYSTEM_MESSAGE = """Example Prompt."""
VOICE = 'ash'
LOG_EVENT_TYPES = [
    'error', 'response.content.done', 'rate_limits.updated',
    'response.done', 'input_audio_buffer.committed',
    'input_audio_buffer.speech_stopped', 'input_audio_buffer.speech_started',
    'session.created', 'conversation.item.input_audio_transcription.completed'
]
SHOW_TIMING_MATH = False

app = FastAPI()

if not OPENAI_API_KEY:
    raise ValueError('Missing the OpenAI API key. Please set it in the .env file.')


@app.get("/", response_class=JSONResponse)
async def index_page():
    return {"message": "Twilio Media Stream Server is running!"}


@app.api_route("/incoming-call", methods=["GET", "POST"])
async def handle_incoming_call(request: Request):
    """Handle incoming call and return TwiML response to connect to Media Stream."""
    print("event")
    response = VoiceResponse()
    # <Say> punctuation to improve text-to-speech flow
    # response.say("Please wait while we connect your call")
    response.pause(length=1)
    # response.say("O.K. you can start talking!")
    host = request.url.hostname
    print(f"Host URL is {host}")
    response.record(
        recording_status_callback=f'https://{host}/recording-callback',
        recording_status_callback_method="POST",
        play_beep=True
    )
    connect = Connect()
    connect.stream(url=f'wss://{host}/media-stream')
    response.append(connect)
    return HTMLResponse(content=str(response), media_type="application/xml")


@app.websocket("/media-stream")
async def handle_media_stream(websocket: WebSocket):
    """Handle WebSocket connections between Twilio and OpenAI."""
    print("Client connected")
    await websocket.accept()

    async with websockets.connect(
        'wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01',
        extra_headers={
            "Authorization": f"Bearer {OPENAI_API_KEY}",
            "OpenAI-Beta": "realtime=v1"
        }
    ) as openai_ws:
        await initialize_session(openai_ws)

        # Connection specific state
        stream_sid = None
        latest_media_timestamp = 0
        last_assistant_item = None
        mark_queue = []
        response_start_timestamp_twilio = None

        async def receive_from_twilio():
            """Receive audio data from Twilio and send it to the OpenAI Realtime API."""
            nonlocal stream_sid, latest_media_timestamp
            try:
                async for message in websocket.iter_text():
                    data = json.loads(message)
                    if data['event'] == 'media' and openai_ws.open:
                        latest_media_timestamp = int(data['media']['timestamp'])
                        audio_append = {
                            "type": "input_audio_buffer.append",
                            "audio": data['media']['payload']
                        }
                        await openai_ws.send(json.dumps(audio_append))
                    elif data['event'] == 'start':
                        stream_sid = data['start']['streamSid']
                        print(f"Incoming stream has started {stream_sid}")
                        response_start_timestamp_twilio = None
                        latest_media_timestamp = 0
                        last_assistant_item = None
                    elif data['event'] == 'mark':
                        if mark_queue:
                            mark_queue.pop(0)
            except WebSocketDisconnect:
                print("Client disconnected.")
                if openai_ws.open:
                    await openai_ws.close()

        async def send_to_twilio():
            """Receive events from the OpenAI Realtime API, send audio back to Twilio."""
            nonlocal stream_sid, last_assistant_item, response_start_timestamp_twilio
            try:
                async for openai_message in openai_ws:
                    response = json.loads(openai_message)
                    if response['type'] in LOG_EVENT_TYPES:
                        print(f"Received event: {response['type']}", response)

                    if response.get('type') == 'response.audio.delta' and 'delta' in response:
                        audio_payload = base64.b64encode(base64.b64decode(response['delta'])).decode('utf-8')
                        audio_delta = {
                            "event": "media",
                            "streamSid": stream_sid,
                            "media": {
                                "payload": audio_payload
                            }
                        }
                        await websocket.send_json(audio_delta)

                        if response_start_timestamp_twilio is None:
                            response_start_timestamp_twilio = latest_media_timestamp
                            if SHOW_TIMING_MATH:
                                print(f"Setting start timestamp for new response: {response_start_timestamp_twilio}ms")

                        # Update last_assistant_item safely
                        if response.get('item_id'):
                            last_assistant_item = response['item_id']

                        await send_mark(websocket, stream_sid)

                    # Trigger an interruption. Your use case might work better using `input_audio_buffer.speech_stopped`, or combining the two.
                    if response.get('type') == 'input_audio_buffer.speech_started':
                        print("Speech started detected.")
                        if last_assistant_item:
                            print(f"Interrupting response with id: {last_assistant_item}")
                            await handle_speech_started_event()

                    if response.get('type') == "conversation.item.input_audio_transcription.completed":
                        print("Input Audio Transcription Completed Message")
                        print(f" Id: {response.get('item_id')}")
                        print(f" Content Index: {response.get('content_index')}")
                        print(f" Transcript: {response.get('transcript')}")
            except Exception as e:
                print(f"Error in send_to_twilio: {e}")

        async def handle_speech_started_event():
            """Handle interruption when the caller's speech starts."""
            nonlocal response_start_timestamp_twilio, last_assistant_item
            print("Handling speech started event.")
            if mark_queue and response_start_timestamp_twilio is not None:
                elapsed_time = latest_media_timestamp - response_start_timestamp_twilio
                if SHOW_TIMING_MATH:
                    print(f"Calculating elapsed time for truncation: {latest_media_timestamp} - {response_start_timestamp_twilio} = {elapsed_time}ms")

                if last_assistant_item:
                    if SHOW_TIMING_MATH:
                        print(f"Truncating item with ID: {last_assistant_item}, Truncated at: {elapsed_time}ms")

                    truncate_event = {
                        "type": "conversation.item.truncate",
                        "item_id": last_assistant_item,
                        "content_index": 0,
                        "audio_end_ms": elapsed_time
                    }
                    await openai_ws.send(json.dumps(truncate_event))

                await websocket.send_json({
                    "event": "clear",
                    "streamSid": stream_sid
                })

                mark_queue.clear()
                last_assistant_item = None
                response_start_timestamp_twilio = None

        async def send_mark(connection, stream_sid):
            if stream_sid:
                mark_event = {
                    "event": "mark",
                    "streamSid": stream_sid,
                    "mark": {"name": "responsePart"}
                }
                await connection.send_json(mark_event)
                mark_queue.append('responsePart')

        await asyncio.gather(receive_from_twilio(), send_to_twilio())


async def send_initial_conversation_item(openai_ws):
    """Send initial conversation item if AI talks first."""
    initial_conversation_item = {
        "type": "conversation.item.create",
        "item": {
            "type": "message",
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": SYSTEM_MESSAGE
                }
            ]
        }
    }
    await openai_ws.send(json.dumps(initial_conversation_item))
    await openai_ws.send(json.dumps({"type": "response.create"}))


async def initialize_session(openai_ws):
    """Control initial session with OpenAI."""
    session_update = {
        "type": "session.update",
        "session": {
            "turn_detection": {"type": "server_vad"},
            "input_audio_format": "g711_ulaw",
            "output_audio_format": "g711_ulaw",
            "voice": VOICE,
            "instructions": SYSTEM_MESSAGE,
            "modalities": ["text", "audio"],
            "temperature": 0.6,
            "input_audio_transcription": {
                "model": "whisper-1"
            },
        }
    }
    print('Sending session update:', json.dumps(session_update))
    await openai_ws.send(json.dumps(session_update))
    await send_initial_conversation_item(openai_ws)


@app.api_route("/recording-callback", methods=["GET", "POST"])
async def recording_callback(
    RecordingSid: str = Form(...),
    RecordingUrl: str = Form(...),
    RecordingStatus: str = Form(...)
):
    """Handle Twilio's recording status callback."""
    print("IN RECORDING")
    if RecordingStatus == "completed":
        # Download the recording and save it locally
        file_path = f"./recordings/{RecordingSid}.mp3"
        os.makedirs(os.path.dirname(file_path), exist_ok=True)
        try:
            response = requests.get(f"{RecordingUrl}.mp3")
            with open(file_path, "wb") as file:
                file.write(response.content)
            print(f"Recording saved: {file_path}")
        except Exception as e:
            print(f"Failed to download recording: {e}")
    return JSONResponse({"status": "success"})


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=PORT)  # port must be an int, not a string
</code></pre>
|
<python><twilio><openai-api>
|
2024-11-24 01:57:19
| 0
| 2,519
|
DsCpp
|
79,219,163
| 6,312,979
|
Polars Read Excel file from Form (ie. request.FILES.get('file'))
|
<p>All the Polars examples show reading an Excel file from a path string.</p>
<pre><code>df = pl.read_excel("docs/assets/data/path.xlsx")
</code></pre>
<p>But I am passing in the Excel file from a Django Form Post.</p>
<pre><code>file_name = request.FILES.get('file')
df = pl.read_excel(file_name)
</code></pre>
<p>I am getting an "InvalidParametersError".</p>
<p>How do I input the file to Polars from an HTML form?</p>
<p>Or do I somehow have to get the path from <code>request.FILES</code> and use that?</p>
<p>I am new to the whole Polars thing but very excited to try it.</p>
<p>thank you.</p>
|
<python><django><excel><dataframe><python-polars>
|
2024-11-24 01:27:07
| 1
| 2,181
|
diogenes
|
79,219,137
| 6,843,153
|
VSC debugger fails to import my own project modules in a Dev Container even when the script can be run from VSC terminal
|
<p>I have a Python project on Ubuntu 24.04.1 LTS and a Dev Container in VS Code with Debian GNU/Linux 11. If I run the application from the terminal with <code>streamlit run myfile.py</code>, it runs perfectly, but launching the debugger raises this exception:</p>
<pre><code>/usr/bin/python3: No module named streamlit
</code></pre>
<p>This is my <code>devcontainer.json</code>:</p>
<pre><code>{
    "build": {"dockerfile": "Dockerfile"},
    "customizations": {
        "vscode": {
            "settings": {},
            "extensions": [
                "ms-python.python",
                "ms-python.vscode-pylance"
            ]
        },
        "forwardPorts": [8501],
        "runArgs": ["--env-file", ".devcontainer/devcontainer.env"]
    }
}
</code></pre>
<p>This is my <code>Dockerfile</code>:</p>
<pre><code>FROM python:3.10-bullseye
COPY requirements.txt ./requirements.txt
RUN pip install oscrypto@git+https://github.com/wbond/oscrypto.git@d5f3437ed24257895ae1edd9e503cfb352e635a8
# COPY src ./src
# WORKDIR /src
RUN pip install --no-cache-dir -r requirements.txt
ENV PYTHONPATH=/workspaces/my_project/src
EXPOSE 8001
CMD ["streamlit", "run", "view/frontend/main.py"]
</code></pre>
<p>And this is my <code>launch.json</code>:</p>
<pre><code>{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "debugpy",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": true,
            "env": {"PYTHONPATH": "${workspaceFolder}/src"}
        },
        {
            "name": "Python:Streamlit",
            "type": "debugpy",
            "request": "launch",
            "module": "streamlit",
            "args": [
                "run",
                "${file}",
                "--server.port",
                "8501",
                "--server.fileWatcherType",
                "poll",
                "--server.address",
                "0.0.0.0"
            ],
            "cwd": "${workspaceFolder}/src",
            "env": {
                "PYTHONPATH": "${workspaceFolder}/src",
                "PYTHONHOME": "/usr/local/bin"
            }
        }
    ]
}
</code></pre>
<p>What else do I have to set up for VSC Debugger to work properly?</p>
|
<python><visual-studio-code><streamlit><vscode-debugger>
|
2024-11-24 00:51:19
| 0
| 5,505
|
HuLu ViCa
|
79,219,125
| 13,279,557
|
How can I let a shared library, called by Python, access the same Python instance's globals?
|
<p>So I've created this <code>main.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/python3
import ctypes

foo = 1


# Should print 2, but prints 1
def print_foo():
    global foo
    print(foo)


def main():
    global foo
    foo = 2
    dll = ctypes.PyDLL("./foo.so")
    foo = dll.foo
    foo.restype = None
    foo()


if __name__ == "__main__":
    main()
</code></pre>
<p>and this <code>foo.c</code>:</p>
<pre class="lang-c prettyprint-override"><code>#define PY_SSIZE_T_CLEAN
#include <Python.h>

#include <assert.h>

#define CHECK_PYTHON_ERROR() {\
    if (PyErr_Occurred()) {\
        PyErr_Print();\
        fprintf(stderr, "Error detected in foo.c:%d\n", __LINE__);\
        exit(EXIT_FAILURE);\
    }\
}

void foo(void) {
    PyObject *module = PyImport_ImportModule("main");
    CHECK_PYTHON_ERROR();
    assert(module);

    PyObject *print_foo = PyObject_GetAttrString(module, "print_foo");
    CHECK_PYTHON_ERROR();
    assert(print_foo);

    PyObject *result = PyObject_CallObject(print_foo, NULL);
    CHECK_PYTHON_ERROR();
    assert(result);
}
</code></pre>
<p>I've compiled <code>foo.c</code> to <code>foo.so</code> with <code>gcc foo.c -o foo.so -shared -fPIC -Wall -Wextra -Wpedantic -Wfatal-errors -g -lpython3.13 -I/home/trez/.pyenv/versions/3.13.0/include/python3.13 -L/home/trez/.pyenv/versions/3.13.0/lib -ldl -Wl,-rpath,/home/trez/.pyenv/versions/3.13.0/lib -lm</code>, where those last flags came from running <code>python3.13-config --includes --ldflags</code>.</p>
<p>Running <code>python3.13 main.py</code> prints 1. It implies that the <code>foo = 2</code> statement did not happen in the instance which printed <code>foo</code>, which is bad.</p>
<p>The reason this program currently prints 1 is that the <code>PyImport_ImportModule()</code> (<a href="https://docs.python.org/3/c-api/import.html#c.PyImport_ImportModule" rel="nofollow noreferrer">docs</a>) call essentially creates a new environment rather than reusing the original one.</p>
<p>Changing it to <code>PyImport_AddModule()</code> (<a href="https://docs.python.org/3/c-api/import.html#c.PyImport_AddModule" rel="nofollow noreferrer">docs</a>) or <code>PyImport_AddModuleRef()</code> (<a href="https://docs.python.org/3/c-api/import.html#c.PyImport_AddModuleRef" rel="nofollow noreferrer">docs</a>) prints this:</p>
<pre><code>AttributeError: module 'main' has no attribute 'print_foo'
Error detected in foo.c:20
</code></pre>
<p>If we look at <code>PyImport_AddModule</code> its docs, we just see:</p>
<blockquote>
<p>Similar to PyImport_AddModuleRef(), but return a borrowed reference.</p>
</blockquote>
<p>And if we then look at <code>PyImport_AddModuleRef</code> its docs, there's this snippet:</p>
<blockquote>
<p>This function does not load or import the module; if the module wasnβt already loaded, you will get an empty module object. Use PyImport_ImportModule() or one of its variants to import a module.</p>
</blockquote>
<p>So this seems to imply that the <code>AttributeError: module 'main' has no attribute 'print_foo'</code> error was caused by the module not being loaded. This doesn't make sense to me though, since using <code>PyImport_AddModule</code>/<code>PyImport_AddModuleRef</code> and adding this after the <code>assert(module);</code> line:</p>
<pre class="lang-c prettyprint-override"><code>PyObject *g = PyEval_GetGlobals();
CHECK_PYTHON_ERROR();
PyObject *g_repr = PyObject_Repr(g);
CHECK_PYTHON_ERROR();
const char *s = PyUnicode_AsUTF8(g_repr);
CHECK_PYTHON_ERROR();
printf("s: '%s'\n", s);
</code></pre>
<p>prints this:</p>
<pre><code>s: '{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <_frozen_importlib_external.SourceFileLoader object at 0x6ffc309a5e00>, '__spec__': None, '__annotations__': {}, '__builtins__': <module 'builtins' (built-in)>, '__file__': '/home/trez/Programming/ctypes-run-bug/main.py', '__cached__': None, 'ctypes': <module 'ctypes' from '/home/trez/.pyenv/versions/3.13.0/lib/python3.13/ctypes/__init__.py'>, 'foo': <_FuncPtr object at 0x6ffc30946450>, 'print_foo': <function print_foo at 0x6ffc308231a0>, 'main': <function main at 0x6ffc3088f4c0>}'
AttributeError: module 'main' has no attribute 'print_foo'
Error detected in foo.c:28
</code></pre>
<p>Note the <code>'print_foo': <function print_foo at 0x6ffc308231a0></code> near the end of the first line, implying that the function <em>is</em> found.</p>
<p>My guess as to what's happening here is that these globals <em>are</em> from the original Python instance, but that the <code>PyImport_AddModule("main")</code> call somehow fails to find that instance.</p>
<p>How do I keep <code>main.py</code> untouched, while modifying <code>foo.c</code> in a way that 2 gets printed, instead of 1? Cheers.</p>
|
<python><shared-libraries><ctypes>
|
2024-11-24 00:39:12
| 1
| 672
|
MyNameIsTrez
|
79,219,057
| 1,613,983
|
How to efficiently represent a matrix product with repeated elements
|
<p>I have a tensor <code>a</code> that is of shape <code>(n/f, c, c)</code> that I want to multiply by another tensor <code>b</code> of shape <code>(n, c, 1)</code>. Each row of <code>a</code> represents <code>f</code> rows of <code>b</code>, such that the naiive way of implementing this would be to simply repeat each row of <code>a</code> <code>f</code> times before performing the multiplication:</p>
<pre><code>n = 100
c = 5
f = 10
a = tf.constant(np.random.rand(n//f, c, c))
b = tf.constant(np.random.rand(n, c, 1))
a_prime = tf.repeat(a, f, 0)
result = a_prime @ b
</code></pre>
<p>This works, but for large <code>n</code> and <code>f</code> I'm worried about the memory footprint of the <code>repeat</code>. I could of course loop through each row and perform dot-products manually, but that would have implications on performance. Is there a better way?</p>
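<p>For comparison, here is the broadcasting variant I was considering, sketched in NumPy (I have only checked the NumPy version; TensorFlow's batched <code>@</code> also broadcasts over batch dimensions, so the same shapes should carry over):</p>

```python
import numpy as np

n, c, f = 100, 5, 10
a = np.random.rand(n // f, c, c)
b = np.random.rand(n, c, 1)

# Group b into (n/f, f, c, 1) and add a broadcast axis to a; matmul then
# broadcasts over the f axis without materializing repeat(a, f, 0).
result = (a[:, None] @ b.reshape(n // f, f, c, 1)).reshape(n, c, 1)

# Same values as the naive repeat-based product:
naive = np.repeat(a, f, 0) @ b
print(np.allclose(result, naive))  # True
```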
|
<python><tensorflow>
|
2024-11-23 23:37:22
| 1
| 23,470
|
quant
|
79,219,035
| 913,098
|
How to hide webapi url and show a "pretty" url?
|
<p>I need to generate a pretty URL for my own URL shortening service.<br />
My web server address looks something like <code>https://my-backend-api-server.us-central1.run.app/redirectapp/redirect/wAzclnp3</code></p>
<p>and I wouldn't want to expose that, nor is it short.</p>
<p>Assuming I have a domain <code>www.mydomain123.com</code>, I want to "prettify" my URLs so that <code>www.mydomain123.com/wAzclnp3</code> will serve <code>https://my-backend-api-server.us-central1.run.app/redirectapp/redirect/wAzclnp3</code> and <code>www.mydomain123.com</code> or <code>www.mydomain123.com/otherapp</code> will serve from my webserver (not the api server)</p>
<p>How can I do this in Django?</p>
|
<javascript><python><django><dns><friendly-url>
|
2024-11-23 23:25:06
| 1
| 28,697
|
Gulzar
|
79,218,827
| 65,659
|
What does a trailing slash mean when reading Python function documentation?
|
<p>I'm reading the Python documentation in order to become familiar with the enormous library of functions and modules available. A few times I've seen a trailing slash in the parameter list, such as <a href="https://docs.python.org/3/library/stdtypes.html#str.removeprefix" rel="nofollow noreferrer">this</a>:</p>
<pre><code>str.removeprefix(prefix, /)
</code></pre>
<p>I don't see any mention of a second parameter in the documentation nor any mention of what this means. When I see this in the documentation, what is it documenting?</p>
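<p>Experimenting in the REPL with a toy signature of my own (a stand-in, not the real <code>str.removeprefix</code>), I can at least reproduce the behavior the slash produces:</p>

```python
# Toy stand-in with the same signature shape as str.removeprefix(prefix, /).
# The slash is not a parameter: it marks everything before it as
# positional-only (PEP 570), so those arguments cannot be passed by keyword.
def remove_prefix(s, prefix, /):
    return s[len(prefix):] if s.startswith(prefix) else s

print(remove_prefix("unhappy", "un"))  # happy

try:
    remove_prefix("unhappy", prefix="un")
except TypeError as exc:
    print(type(exc).__name__)  # TypeError: the keyword form is rejected
```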
|
<python>
|
2024-11-23 21:02:42
| 0
| 4,968
|
Chuck
|
79,218,768
| 1,833,028
|
Why do I need to execute a getch() before ncurses will display anything?
|
<p>I have a curses application with a <code>getch()</code> loop.</p>
<p>I found that the application would only display after accepting user input. In other words, I could not draw anything to the terminal before accepting user input; I had to do it after. This code fixes it:</p>
<pre><code>stdscr.nodelay(True)
while stdscr.getch() != -1:
    pass
stdscr.nodelay(False)
</code></pre>
<p>But it doesn't even take this much: just running a single <code>getch()</code> and discarding the return value allows ncurses to do its job and render the screen as intended. Without that, nothing is drawn to the console (other than perhaps a blank screen)!</p>
<p>What happened?</p>
<p><strong>Edit</strong></p>
<p>I have been advised to include a minimally reproducible example. This code here:</p>
<pre><code>import curses


def main(stdscr):
    # Initialize ncurses
    curses.curs_set(1)  # Show the cursor
    stdscr.clear()

    # Get screen dimensions
    height, width = stdscr.getmaxyx()
    input_win_height = 4
    output_win_height = height - input_win_height - 1

    # Create windows
    input_win = curses.newwin(input_win_height, width, 0, 0)
    currentCommand = ""
    max_height, max_width = input_win.getmaxyx()
    outputRow = 0
    inData = {"row": 2, "col": 1, "window": input_win}

    # stdscr.nodelay(True)
    # stdscr.getch()  # Drain the input buffer
    # stdscr.nodelay(False)

    def setCommandWindow(inData):
        inData["row"] = 2
        inData["col"] = 1 + 1
        inData["window"].clear()
        inData["window"].box()
        inData["window"].addch(2, 1, ">")
        inData["window"].refresh()

    setCommandWindow(inData)
    while True:
        key = stdscr.getch()
        setCommandWindow(inData)

    print("Quitting the application")


if __name__ == "__main__":
    curses.wrapper(main)
</code></pre>
<p>If you uncomment those lines, it starts up and draws a box. If you do not uncomment those lines, it starts up to a blank screen, and draws a box only after getting a character.</p>
|
<python><curses><python-curses>
|
2024-11-23 20:26:48
| 0
| 963
|
user1833028
|
79,218,764
| 11,550,339
|
CrewAI SeleniumScrapingTool can't initialize the Chrome Driver inside a Docker container
|
<p>I have a docker image as follows:</p>
<pre><code>FROM python:3.12-slim
WORKDIR /usr/src/app
COPY ./requirements.txt ./
RUN pip install --no-cache-dir --upgrade -r ./requirements.txt
COPY ./app ./
ENV ENVIRONMENT prod
ENV PORT 3000
EXPOSE 3000
CMD ["fastapi", "run", "main.py", "--port", "3000"]
</code></pre>
<p>And my requirements.txt:</p>
<pre><code>fastapi==0.111.1
langchain==0.3.7
langchain-community==0.3.7
langchain-core==0.3.19
crewai==0.80.0
crewai-tools==0.14.0
</code></pre>
<p>And my python code that starts an CrewAI Agent and pass the SeleniumScrapingTool as a tool to him:</p>
<pre><code>from crewai import Agent, Task, Crew, Process
from crewai_tools import SeleniumScrapingTool

chrome_options = {
    'args': [
        '--no-sandbox',
        '--headless',
        '--disable-dev-shm-usage',
        '--disable-gpu',
        '--disable-setuid-sandbox',
        '--disable-software-rasterizer',
        '--disable-dbus',
        '--disable-notifications',
        '--disable-extensions',
        '--disable-infobars'
    ],
    'service_args': ['--verbose'],  # For debugging
    'experimental_options': {
        'excludeSwitches': ['enable-automation'],
        'prefs': {
            'profile.default_content_setting_values': {
                'cookies': 1,
                'images': 2,  # Don't load images for better performance
                'plugins': 2,
                'popups': 2,
                'geolocation': 2,
                'notifications': 2
            }
        }
    }
}

tool = SeleniumScrapingTool(website_url='URL', chrome_options=chrome_options)

Agent(
    role=ROLE,
    goal=GOAL,
    backstory=BACKSTORY,
    tools=[tool]  # tools expects a list
</code></pre>
<p>But somehow when i start my docker container it gives the following error trying to use the SeleniumScrapingTool:</p>
<pre><code>2024-11-23 17:23:10 I encountered an error while trying to use the tool. This was the error: Message: Service /root/.cache/selenium/chromedriver/linux64/131.0.6778.85/chromedriver unexpectedly exited. Status code was: 127
</code></pre>
<p>But when I run it in my local environment (Windows 11), it works perfectly. The error only occurs inside the Docker image.</p>
|
<python><docker><selenium-chromedriver><crewai>
|
2024-11-23 20:26:17
| 0
| 535
|
Matheus Carvalho
|
79,218,748
| 561,243
|
peewee: cannot find reference db_url
|
<p>I am playing with <a href="https://docs.peewee-orm.com/en/latest/index.html" rel="nofollow noreferrer">peewee</a> for my next project involving a database.</p>
<p>My first impression is rather good because it offers a good ORM and at the same time is not as big as SQLAlchemy.</p>
<p>Nevertheless I have a question for you.</p>
<p>Here is my piece of code:</p>
<pre class="lang-py prettyprint-override"><code>from playhouse.db_url import connect
from peewee import Model, AutoField, TextField

database = connect('sqlite:///:memory:')
database.connect()


class Person(Model):
    id = AutoField()
    name = TextField()

    class Meta:
        database = database


database.create_tables((Person,))
Person.create(name='Jim')
Person.create(name='John')
assert Person.select().count() == 2
database.close()
</code></pre>
<p>This code works perfectly, but PyCharm keeps marking the first import statement as an error. It says it cannot find the reference <code>db_url</code>, yet when I run it, either from the IDE or from the console, it works.</p>
<p>I have installed peewee 3.17.8 with pip on a venv with python 3.12.7</p>
<p>Since it works, I do not care much, but these red underlines are disturbing, especially because they will drive code linters crazy.</p>
<p>Do you have an idea or a workaround for this issue?</p>
<p>For the moment, I have put <code># noinspection PyUnresolvedReferences</code> on the line before, but I honestly do not like it.</p>
<p>Thanks!</p>
|
<python><pycharm><peewee>
|
2024-11-23 20:13:22
| 0
| 367
|
toto
|
79,218,720
| 12,415,855
|
Clicking on expand button using Selenium not possible?
|
<p>I am trying to click the "Expand All" button
<a href="https://i.sstatic.net/fzTSZbz6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzTSZbz6.png" alt="button" /></a></p>
<p>using the following code:</p>
<pre><code>import time
import os, sys
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
WAIT = 1
path = os.path.abspath(os.path.dirname(sys.argv[0]))
print(f"Checking Browser driver...")
options = Options()
# options.add_argument('--headless=new')
options.add_argument("start-maximized")
options.add_argument('--log-level=3')
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service()
driver = webdriver.Chrome (service=srv, options=options)
# driver.minimize_window()
waitWD = WebDriverWait (driver, 10)
baseLink = "https://tmsearch.uspto.gov/search/search-information"
print(f"Working for {baseLink}")
driver.get (baseLink)
waitWD.until(EC.presence_of_element_located((By.XPATH,'//input[@id="searchbar"]'))).send_keys("SpaceX")
waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[@class="btn btn-primary md-icon ng-star-inserted"]'))).click()
waitWD.until(EC.presence_of_element_located((By.XPATH,'//input[@id="goodsAndServices"]'))).send_keys("shirt")
waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[@class="btn btn-primary md-icon ng-star-inserted"]'))).click()
time.sleep(WAIT)
soup = BeautifulSoup (driver.page_source, 'lxml')
driver.execute_script("arguments[0].click();", waitWD.until(EC.element_to_be_clickable((By.XPATH, "(//span[text()=' wordmark '])[1]"))))
time.sleep(WAIT)
driver.execute_script("arguments[0].click();", waitWD.until(EC.presence_of_element_located((By.XPATH, '//div[@class="expand_all expanded"]'))))
</code></pre>
<p>but i only get this error</p>
<pre><code>Working for https://tmsearch.uspto.gov/search/search-information
Traceback (most recent call last):
File "F:\DEV\Fiverr2024\TRY\cliff_ckshorts\temp.py", line 41, in <module>
driver.execute_script("arguments[0].click();", waitWD.until(EC.presence_of_element_located((By.XPATH, '//div[@class="expand_all expanded"]'))))
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\DEV\venv\selenium\Lib\site-packages\selenium\webdriver\support\wait.py", line 105, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
</code></pre>
<p>How can I press the button on this website using Selenium?</p>
|
<python><selenium-webdriver>
|
2024-11-23 20:01:05
| 2
| 1,515
|
Rapid1898
|
79,218,567
| 7,886,968
|
keyboard_trigger = Event() returns "NameError: name 'Event' is not defined"
|
<p>I am trying to write a simple "Object avoidance" program for my robot.</p>
<p>The initial version works, but when I try to exit with a Ctrl-C, the <em>program</em> halts but the <em>robot</em> keeps moving until I physically turn it off.</p>
<p>I'm trying to use a trick that was used in a different program that listens for keyboard events and then stops the robot when one is detected.</p>
<p>The code that I am having difficulty with:</p>
<pre><code># Object Avoidance Robot
# Drives in a straight line until it reaches an obstacle within "X" cm.
# It then stops, backs up a few cm, turns a random amount, and then proceeds.
import easygopigo3 as easy
import time
import random
import signal
import os
import sys
gpg = easy.EasyGoPiGo3()
gpg = easy.EasyGoPiGo3()
servo_1 = gpg.init_servo('SERVO1')
servo_2 = gpg.init_servo('SERVO2')
sleep_time = 0.50 # pause time in seconds
my_distance_portI2C = gpg.init_distance_sensor('I2C')
time.sleep(0.1)
# for triggering the shutdown procedure when a signal is detected
keyboard_trigger = Event()
def signal_handler(signal, frame):
print("Signal detected. Stopping threads.")
gpg.stop()
keyboard_trigger.set()
print("Running: Test Servos.\n")
# Read the connected robot's serial number
serial_number = gpg.get_id() # read and display the serial number
# For testing only - "invalid" serial number
# serial_number = "invalid"
print("This robot's serial number is\n"+serial_number+"\n")
# Servo constants for each robot
# Each robot is identified by its unique serial number
# Charlene:
if serial_number == "A0F6E45F4E514B4B41202020FF152B11":
# Charlene's servo constants
print("Robot is \"Charlene\"")
center_1 = 86 # Horizontal centering - smaller is further right.
center_2 = 76 # Vertical centering - smaller is further up.
# Charlie:
elif serial_number == "64B61037514E343732202020FF111A05":
# Charlie's servo constants
print("Robot is \"Charlie\"")
center_1 = 86 # Horizontal centering - smaller is further right.
center_2 = 90 # Vertical centering - smaller is further up.
else:
# Unknown serial number
print("I don't know who robot", serial_number, "is,")
print("If we got this far, it's obviously a GoPiGo robot")
print("But I don't know what robot it is, so I'm using")
print("the default centering constants of 90/90.\n")
# Default servo constants
print("Robot is \"Unknown\"")
print("Please record the robot's serial number, name,")
print("and derived centering constants.")
center_1 = 90
center_2 = 90
# Start Test Servos
# Define excursions
right = center_1 - 45
left = center_1 + 45
up = center_2 - 45
down = center_2 + 45
def test_servos():
# Test servos
print("\nStarting test:")
print("Using centering constants "+str(center_1)+"/"+str(center_2), "for this robot")
print("\nCenter Both Servos")
servo_1.rotate_servo(center_1)
servo_2.rotate_servo(center_2)
time.sleep(sleep_time)
print("Test Servo 1 (horizontal motion)")
servo_1.rotate_servo(right)
time.sleep(sleep_time)
servo_1.rotate_servo(left)
time.sleep(sleep_time)
servo_1.rotate_servo(center_1)
time.sleep(sleep_time)
print("Test Servo 2 (vertical motion)")
servo_2.rotate_servo(up)
time.sleep(sleep_time)
servo_2.rotate_servo(down)
time.sleep(sleep_time)
print("Re-Center Both Servos")
servo_1.rotate_servo(center_1)
servo_2.rotate_servo(center_2)
time.sleep(sleep_time)
servo_1.disable_servo()
servo_2.disable_servo()
print("Complete: Test Servos - Exiting.")
def avoid():
while True:
while my_distance_portI2C.read_inches() > 20:
gpg.forward()
gpg.stop()
time.sleep(1)
gpg.backward()
time.sleep(1)
gpg.stop()
test_servos()
gpg.backward()
time.sleep(4)
gpg.stop()
time.sleep(1)
gpg.turn_degrees((random.randint(90, 270)), blocking=True)
time.sleep(1) # slowdown
#------------------------------------
# Main routine entry point is here
#------------------------------------
if __name__ == "__main__":
# registering both types of termination signals
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
test_servos() # make sure servos are centered
avoid()
logging.info("Finished!")
sys.exit(0)
while not keyboard_trigger.is_set():
sleep(0.25)
# until some keyboard event is detected
print("\n ==========================\n\n")
print("A \"shutdown\" command event was received!\n")
gpg.stop()
exit()
</code></pre>
<p>When I run the <code>object_avoidance.py</code> code above within VS Code, it fails with:</p>
<pre><code>pi@Charlene:~/Project_Files/object_avoidance $ /usr/bin/python /home/pi/Project_Files/object_avoidance/Object_Avoidence.py
Traceback (most recent call last):
File "/home/pi/Project_Files/object_avoidance/Object_Avoidence.py", line 22, in <module>
keyboard_trigger = Event()
NameError: name 'Event' is not defined
pi@Charlene:~/Project_Files/object_avoidance $
</code></pre>
<p>The example of the code that works is quite large, (it's a web-based FPV remote-controlled robot), I've posted it <a href="https://www.mediafire.com/file/mzjr5s34sxy70yo/New_Remote_Camera_Robot.py/file" rel="nofollow noreferrer">on MediaFire</a>.</p>
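<p>For reference, a minimal sketch of where <code>Event</code> normally comes from (the large working program presumably imports it the same way); the names below mirror the script above:</p>

```python
from threading import Event  # Event lives in the threading module

# Without this import, `keyboard_trigger = Event()` raises
# "NameError: name 'Event' is not defined" at module load time.
keyboard_trigger = Event()

def signal_handler(signum, frame):
    # The handler only needs to set the flag; the main loop polls it.
    keyboard_trigger.set()

signal_handler(None, None)  # simulate a Ctrl-C for this sketch
assert keyboard_trigger.is_set()
```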
|
<python><event-handling>
|
2024-11-23 18:39:22
| 1
| 643
|
Jim JR Harris
|
79,218,490
| 12,466,687
|
Problem with recognizing single-cell tables in pdfplumber
|
<p>I have a sample medical report, and at the <strong>top of each page</strong> of the pdf there is a <strong>table</strong> that contains personal information.</p>
<p>I have been trying to <strong>remove/crop</strong> the personal information <strong>table</strong> from every page of that <strong><a href="https://github.com/johnsnow09/covid19-df_stack-code/blob/main/Sample_Report.pdf" rel="nofollow noreferrer">sample_pdf</a></strong> by finding the <strong>layout</strong> values of the <strong>table</strong>. I am new to <code>pdfplumber</code> and not sure if that's the right approach, but below is the code I have tried; I am not able to get the <code>layout values</code> of the table even though I am able to get a red box around the table using pdfplumber.</p>
<p>Code that I have tried:</p>
<pre><code>sample_data = []
sample_path = r"local_path_file"
with pdfplumber.open(sample_path) as pdf:
pages = pdf.pages
for p in pages:
sample_data.append(p.extract_tables())
print(sample_data)
</code></pre>
<pre><code>pages[0].to_image()
</code></pre>
<p><a href="https://i.sstatic.net/XCp1J4cg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XCp1J4cg.png" alt="sample_image" /></a></p>
<p>I am able to identify the first table from it by using below code</p>
<pre><code>pages[0].to_image().debug_tablefinder()
</code></pre>
<p><a href="https://i.sstatic.net/wjvM92yY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wjvM92yY.png" alt="table_identified" /></a></p>
<p>Now when I try below code to extract tables then I am not getting anything</p>
<pre><code>with pdfplumber.open(sample_path) as pdf:
pages = pdf.pages[0]
print(pages.extract_tables())
</code></pre>
<p><strong>output:</strong> <code>[]</code></p>
<hr />
<h3>Update</h3>
<p>There is an issue when working on <a href="https://github.com/johnsnow09/covid19-df_stack-code/blob/main/Sample_Report.pdf" rel="nofollow noreferrer">this particular sample pdf</a> but when I used a similar pdf report I was able to <strong>crop</strong> it based on boundaries like this:</p>
<pre><code>pages[0].find_tables()[0].bbox
</code></pre>
<p><strong>output:</strong></p>
<pre><code>(25.19059366666667, 125.0, 569.773065, 269.64727650000003)
</code></pre>
<p>This shows the part that I want to get rid of:</p>
<pre><code>p0.crop((25.19059366666667, 125.0, 569.773065, 269.64727650000003)).to_image().debug_tablefinder()
</code></pre>
<p>Below takes <code>y0 = 269.64</code>, where the top table ends, to almost the bottom of the page <code>y1 = 840</code>, and from the leftmost part <code>x0 = 0</code> of the page to nearly the right edge <code>x1 = 590</code>:</p>
<pre><code>p0.crop((0, 269.0, 590, 840)).to_image()
</code></pre>
|
<python><pdfplumber>
|
2024-11-23 17:51:38
| 1
| 2,357
|
ViSa
|
79,218,262
| 202,807
|
Filter pandas DataFrame by multiple thresholds defined in a dictionary
|
<p>I want to filter a DataFrame against multiple thresholds, based on the ID's prefix.</p>
<p>Ideally I'd configure these thresholds with a dictionary e.g.</p>
<pre class="lang-py prettyprint-override"><code>minimum_thresholds = {
'alpha': 3,
'beta' : 5,
'gamma': 7,
'default': 4
}
</code></pre>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>data = {
'id': [
'alpha-164232e7-75c9-4e2e-9bb2-b6ba2449beba', 'alpha-205acbf0-64ba-40ad-a026-cc1c6fc06a6f',
'beta-76ece555-e336-42d8-9f8d-ee92dd90ef19', 'beta-6c91c1cc-1025-4714-a2b2-c30b2717e3c4',
'gamma-f650fd43-03d3-440c-8e14-da18cdeb78d4', 'gamma-a8cb84b5-e94c-46f7-b2c5-135b59dcd1e3',
'pi-8189aff9-ea1c-4e22-bcf4-584821c9dfd6'
],
'freq': [4, 2, 1, 4, 7, 9, 8]
}
</code></pre>
<pre class="lang-none prettyprint-override"><code> id freq
0 alpha-164232e7-75c9-4e2e-9bb2-b6ba2449beba 4
1 alpha-205acbf0-64ba-40ad-a026-cc1c6fc06a6f 2
2 beta-76ece555-e336-42d8-9f8d-ee92dd90ef19 1
3 beta-6c91c1cc-1025-4714-a2b2-c30b2717e3c4 4
4 gamma-f650fd43-03d3-440c-8e14-da18cdeb78d4 7
5 gamma-a8cb84b5-e94c-46f7-b2c5-135b59dcd1e3 9
6 pi-8189aff9-ea1c-4e22-bcf4-584821c9dfd6 8
</code></pre>
<p>I would then get an output like:</p>
<pre class="lang-none prettyprint-override"><code> id freq
0 alpha-164232e7-75c9-4e2e-9bb2-b6ba2449beba 4
1 gamma-f650fd43-03d3-440c-8e14-da18cdeb78d4 7
2 gamma-a8cb84b5-e94c-46f7-b2c5-135b59dcd1e3 9
3 pi-8189aff9-ea1c-4e22-bcf4-584821c9dfd6 8
</code></pre>
<p>I could do this bluntly by looping through each threshold, but it feels like there must be a more Pythonic way?</p>
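<p>A vectorized sketch, assuming the prefix is everything before the first <code>-</code> in <code>id</code>: map each prefix to its threshold, fall back to the default for unknown prefixes, then filter in one pass.</p>

```python
import pandas as pd

minimum_thresholds = {'alpha': 3, 'beta': 5, 'gamma': 7, 'default': 4}

df = pd.DataFrame({
    'id': [
        'alpha-164232e7-75c9-4e2e-9bb2-b6ba2449beba', 'alpha-205acbf0-64ba-40ad-a026-cc1c6fc06a6f',
        'beta-76ece555-e336-42d8-9f8d-ee92dd90ef19', 'beta-6c91c1cc-1025-4714-a2b2-c30b2717e3c4',
        'gamma-f650fd43-03d3-440c-8e14-da18cdeb78d4', 'gamma-a8cb84b5-e94c-46f7-b2c5-135b59dcd1e3',
        'pi-8189aff9-ea1c-4e22-bcf4-584821c9dfd6',
    ],
    'freq': [4, 2, 1, 4, 7, 9, 8],
})

# Prefix before the first '-', mapped to its threshold; unknown prefixes get the default
prefix = df['id'].str.split('-').str[0]
thresholds = prefix.map(minimum_thresholds).fillna(minimum_thresholds['default'])

filtered = df[df['freq'] >= thresholds].reset_index(drop=True)
```

This keeps the thresholds fully dictionary-driven, so adding a new prefix needs no code change.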
|
<python><pandas>
|
2024-11-23 15:52:47
| 2
| 409
|
Chris
|
79,218,166
| 13,259,162
|
Determine in which cluster goes a new element with scipy linkage
|
<p>I have the following program where <code>data</code> is a <code>pandas.DataFrame</code>:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.cluster.hierarchy import linkage
Z = linkage(data, method='ward', metric='euclidean')
clusters = fcluster(Z, 2, criterion='maxclust')
</code></pre>
<p>Now, considering a set of new elements that have the same structure as <code>data</code>, how do I know which cluster they belong to?</p>
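<p>One common workaround, sketched below with synthetic stand-in data: since <code>linkage</code> has no predict step, assign each new row to the cluster whose centroid is nearest in the same euclidean metric. This is an approximation; it is not guaranteed to match what re-running the clustering on the combined data would produce.</p>

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(20, 3)))  # stand-in for the real `data`

Z = linkage(data, method='ward', metric='euclidean')
clusters = fcluster(Z, 2, criterion='maxclust')

# Centroid of each training cluster
labels = np.unique(clusters)
centroids = np.array([data[clusters == c].mean(axis=0) for c in labels])

# New rows with the same structure as `data`: nearest-centroid assignment
new_rows = pd.DataFrame(rng.normal(size=(5, 3)))
assigned = labels[cdist(new_rows, centroids, metric='euclidean').argmin(axis=1)]
```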
|
<python><pandas><scipy>
|
2024-11-23 15:17:08
| 0
| 309
|
Noé Mastrorillo
|
79,218,073
| 2,414,934
|
Sympy - return Real solution
|
<p>I'm using the GeoSolver package to solve 3D constraints.<a href="https://pypi.org/project/GeoSolver/" rel="nofollow noreferrer">link to GeoSolver - PyPI</a>.
When I solve a parallel constraint I get a complex solution while a real solution exists.
Is it possible to get only the real solution?</p>
<p>code:</p>
<pre><code>result = sp.solve(AllEquestions, AllVariables)
</code></pre>
<p>where AllEquestions =</p>
<pre><code>['x1- 0', 'y1- 0', 'z1- 0', ((-x1 + x2)*(-x3 + x4) + (-y1 + y2)*(-y3 + y4) + (-z1 + z2)*(-z3 + z4))**2/(((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)**1.0*((x3 - x4)**2 + (y3 - y4)**2 + (z3 - z4)**2)**1.0) - 1, 'x2- 100', 'y2- 25', 'z2- 0', ((-x1 + x2)*(-x3 + x4) + (-y1 + y2)*(-y3 + y4) + (-z1 + z2)*(-z3 + z4))**2/(((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)**1.0*((x3 - x4)**2 + (y3 - y4)**2 + (z3 - z4)**2)**1.0) - 1, 'x3- 10', 'y3- 10', 'z3- 0', ((-x1 + x2)*(-x3 + x4) + (-y1 + y2)*(-y3 + y4) + (-z1 + z2)*(-z3 + z4))**2/(((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)**1.0*((x3 - x4)**2 + (y3 - y4)**2 + (z3 - z4)**2)**1.0) - 1, 'x4- 35', ((-x1 + x2)*(-x3 + x4) + (-y1 + y2)*(-y3 + y4) + (-z1 + z2)*(-z3 + z4))**2/(((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)**1.0*((x3 - x4)**2 + (y3 - y4)**2 + (z3 - z4)**2)**1.0) - 1]
</code></pre>
<p>and AllVariables =</p>
<pre><code>[x1, y1, z1, x2, y2, z2, x3, y3, z3, x4, y4, z4]
</code></pre>
<p>returns:</p>
<pre><code>[(0.0, 0.0, 0.0, 100.000000000000, 25.0000000000000, 0.0, 10.0000000000000, 10.0000000000000, 0.0, 35.0000000000000, -1.03077640640442*I*z4 + 16.25, z4), (0.0, 0.0, 0.0, 100.000000000000, 25.0000000000000, 0.0, 10.0000000000000, 10.0000000000000, 0.0, 35.0000000000000, 1.03077640640442*I*z4 + 16.25, z4)]
</code></pre>
<p>should return:</p>
<pre><code>[(0.0, 0.0, 0.0, 100.000000000000, 25.0000000000000, 0.0, 10.0000000000000, 10.0000000000000, 0.0, 35.0000000000000, 16.2500000000000, 0.0)]
</code></pre>
<p>My points are defined like:</p>
<pre><code>self.x = sp.Symbol('x' + str(self.local_var))
</code></pre>
<p>where local_var is just a number</p>
<p>when I set the flag real=True on my points</p>
<pre><code>self.x = sp.Symbol('x' + str(self.local_var), real=True)
</code></pre>
<p>I get the following error:</p>
<pre><code>File "C:\Users\Achaibou Karim\AppData\Roaming\FreeCAD\Macro\fpo\tube\GeoSolver\solver.py", line 25, in solve
    result = sp.solve(AllEquestions, AllVariables)
  File "c:\Program Files\FreeCAD 0.21\bin\lib\site-packages\sympy\solvers\solvers.py", line 1172, in solve
    linear, solution = _solve_system(f, symbols, **flags)
  File "c:\Program Files\FreeCAD 0.21\bin\lib\site-packages\sympy\solvers\solvers.py", line 1896, in _solve_system
    raise NotImplementedError('no valid subset found')
NotImplementedError: no valid subset found
</code></pre>
|
<python><sympy>
|
2024-11-23 14:43:40
| 2
| 673
|
Achaibou Karim
|
79,217,960
| 17,487,457
|
Plotting cumulative distribution from data
|
<p>I have a large dataset to plot the <code>ECDF</code> for, but got confused, so I decided to use a small data subset, which still didn't make sense to me (compared to what I read from the source).</p>
<p>For that, I produced a synthetic <code>MWE</code> to replicate the problem. Say I have the following <code>df</code>:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid")
# DataFrame
df = pd.DataFrame(
{'id': [54, 54, 54, 54, 54, 16, 16, 16, 50, 50, 28, 28, 28, 19, 19, 32, 32, 32, 81, 81, 81, 81, 81],
'user_id': [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 84, 84, 84, 84, 84, 179, 179, 179, 179, 179],
'trip_id': [101, 101, 101, 101, 101, 101, 101, 101, 102, 102, 102, 102, 102, 841, 841, 841, 841, 841, 1796, 1796,
1796, 1796, 1796],
'travel_mode': ['train', 'train', 'train', 'train', 'train', 'walk', 'walk', 'walk', 'train', 'train', 'train',
'train', 'train', 'taxi', 'taxi', 'bus', 'bus', 'bus', 'train', 'train', 'train', 'train', 'train']}
)
</code></pre>
<p>In this example, 50% of the trips (2/4) were travelled by 1 user. I want to plot the number of trips per user. So Proceeded like so:</p>
<pre class="lang-py prettyprint-override"><code># number of trips per user
trips_per_user = df.groupby('user_id')['trip_id'].nunique()
trips_per_user
trip_id
user_id
10 2
84 1
179 1
# Create a DataFrame for plotting
plot_data = trips_per_user.reset_index(name='num_trips')
plot_data
user_id num_trips
0 10 2
1 84 1
2 179 1
</code></pre>
<p>Now, plotting the <code>ECDF</code>.</p>
<pre class="lang-py prettyprint-override"><code># ECDF
plt.figure(figsize=(5, 4))
sns.ecdfplot(data=plot_data, x='num_trips', stat='proportion', complementary=False)
plt.xlabel('Number of Trips')
plt.ylabel('Cumulative Proportion')
</code></pre>
<p>Output:
<a href="https://i.sstatic.net/GP6j9ooQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GP6j9ooQ.png" alt="enter image description here" /></a></p>
<p>Obviously, I am not doing this correctly.</p>
<ol>
<li>1 trip was travelled in 50% of the data (not about 70% as in the plot obtained).</li>
<li>The ecdf curve isn't starting from 0.</li>
</ol>
<p>Required answer:</p>
<p>I wanted to plot something like below (from the source):
<a href="https://i.sstatic.net/oJqC15tA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJqC15tA.png" alt="enter image description here" /></a></p>
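<p>A small numeric sketch of what the ECDF is for this subset may help pin down the discrepancy: with trips-per-user values <code>[2, 1, 1]</code>, two of the three <em>users</em> have exactly 1 trip, so the cumulative proportion at 1 is 2/3, about 0.67, which is what <code>sns.ecdfplot</code> draws. The 50% figure counts <em>trips</em> (2 of 4), a different denominator. To make the curve start from 0, one can prepend a point at height 0:</p>

```python
import numpy as np

num_trips = np.array([2, 1, 1])  # trips per user, from the groupby above

x = np.sort(num_trips)
y = np.arange(1, len(x) + 1) / len(x)  # cumulative proportion of users

# Prepend (x[0], 0) so a step plot of (x_plot, y_plot) starts at 0:
x_plot = np.concatenate(([x[0]], x))
y_plot = np.concatenate(([0.0], y))
```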
|
<python><pandas><matplotlib><cdf><ecdf>
|
2024-11-23 13:52:54
| 1
| 305
|
Amina Umar
|
79,217,872
| 15,946,347
|
SQL {ADD PRIMARY KEY} Error via python/sqlite3 with {c.execute("ALTER TABLE
|
<p>Could anyone please help me resolve the below error from Python code on Google Colab while executing an SQL query:</p>
<pre><code>import sqlite3
conn = sqlite3.connect(':memory:')
c=conn.cursor()
c.execute("ALTER TABLE Persons ADD PRIMARY KEY (ID)")
</code></pre>
<h2>ERROR</h2>
<pre><code>OperationalError                Traceback (most recent call last)
in <cell line: 1>()
----> 1 c.execute("ALTER TABLE Persons ADD PRIMARY KEY (ID)")

OperationalError: near "PRIMARY": syntax error
</code></pre>
<p>Thanks,</p>
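<p>SQLite's <code>ALTER TABLE</code> supports only a small set of operations (renaming and adding/dropping columns); it cannot add a <code>PRIMARY KEY</code> to an existing table, which is why the statement fails with a syntax error. The key has to be declared when the table is created (or the table recreated and repopulated). A minimal sketch; note also that the stdlib function is <code>sqlite3.connect</code>, not <code>sqlite3.connection</code>:</p>

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # connect, not connection
c = conn.cursor()

# Declare the primary key at CREATE time instead of via ALTER TABLE:
c.execute("CREATE TABLE Persons (ID INTEGER PRIMARY KEY, Name TEXT)")
c.execute("INSERT INTO Persons (Name) VALUES ('Ada')")
conn.commit()

row = c.execute("SELECT ID, Name FROM Persons").fetchone()
```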
|
<python><sql><sqlite><memory>
|
2024-11-23 12:51:32
| 1
| 341
|
Jay eMineM
|
79,217,778
| 893,254
|
How to list all files and folders of a file or directory target in Python?
|
<h1>Function Description</h1>
<p>I am trying to write a function which takes an input <code>target</code>, which is a path on the filesystem. It could be a directory, or it could be a file.</p>
<p>This function should return a list of all files and folders which are sub-targets below <code>target</code>.</p>
<p>To explain in more detail:</p>
<ul>
<li>If <code>target</code> is a file, the return value is a list of length 1 with the fully qualified path of <code>target</code>.</li>
<li>If <code>target</code> is a directory, the return value should be a list containing the recursive sub-contents of <code>target</code>, including <code>target</code> itself.</li>
<li>Items should be returned in sorted order</li>
<li>Items should be returned in such a way that it is possible to distinguish which returned items are paths to files and which are paths to directories.</li>
</ul>
<h1>Existing Questions on Stack Overflow</h1>
<p>There are similar questions on Stack Overflow, but none cover this case.</p>
<p>Here is a short summary:</p>
<ul>
<li><p><a href="https://stackoverflow.com/questions/3207219/how-do-i-list-all-files-of-a-directory">How do I list all files of a directory?</a></p>
</li>
<li><p><a href="https://stackoverflow.com/questions/973473/getting-a-list-of-all-subdirectories-in-the-current-directory">Getting a list of all subdirectories in the current directory</a></p>
</li>
</ul>
<p>The answers under both question assume the target is a directory. They do not work if the target is a file. Additionally, in the first question, only files are returned. Not directories. In the second question, only directories are returned. Not files.</p>
<h1>Strategy for Implementation</h1>
<p>This function should be broken down into three steps.</p>
<ol>
<li>The first step should obtain the relative paths.</li>
<li>The second step should be a map operation which converts the relative paths to absolute, or fully-qualified, paths.</li>
<li>The third steps should sort the returned items.</li>
</ol>
<p>This design will offer a high-degree of flexibility, such that each function does one job and functions are composable. This is good software design, some would call it a <em>clean</em> design.</p>
<p>An arguably better design might split the collection of files and directories into two independent functions, however this would result in lower performance as the target will have to be walked twice.</p>
<h1>Choice of base function</h1>
<p>There are several choices of base function of which I am aware. We have:</p>
<ul>
<li><code>os.listdir</code></li>
<li><code>os.scandir</code></li>
<li><code>os.walk</code></li>
<li><code>os.fwalk</code></li>
<li>Maybe others?</li>
</ul>
<p>I am not sure exactly which might be the best choice.</p>
<h1>Attempted Solution</h1>
<p>I started off trying to implement this. It's quite complicated because the input needs to be a target path, and whether this is a file or a directory is not initially known.</p>
<p>This means the logic of the interface level needs to be different to the recursive calls.</p>
<p>I can't figure out a simple or elegant solution to this. That makes me think there is probably a better way to do it, I just can't see what that is.</p>
<p>Building on this example:</p>
<pre><code>import os
def fast_scandir(target_dir: str) -> list[str]:
items = []
for f in os.scandir(target_dir):
if f.is_dir():
items.extend(fast_scandir(f.path))
if f.is_file():
items.append(f.path)
items.sort()
return items
</code></pre>
<p>Which is a simple solution, but does not implement all the requirements, I came up with this:</p>
<pre><code># Interface level, takes a `target`, don't know if it is a file or dir
def fast_scandir_3(target:str) -> list[tuple[str, str]]:
items:list[tuple[str, str]] = []
if os.path.isfile(target):
item = ('f', target)
items.extend(_fast_scandir_3_impl(item))
elif os.path.isdir(target):
item = ('d', target)
items.extend(_fast_scandir_3_impl(item))
return items
</code></pre>
<p>This (below) is just nasty. It is excessively complicated, hard to read, and hard to understand. There must be a more straightforward approach to this.</p>
<pre><code># Implementation layer
# TODO: will add sorting with `sorted` and fully-qualified paths later
def _fast_scandir_3_impl(target:tuple[str, str]) -> list[tuple[str, str]]:
items:list[tuple[str, str]] = []
items.append(target)
if target[0] == 'f':
pass
elif target[0] == 'd':
target_path = target[1]
for subtarget in os.listdir(target_path):
if os.path.isfile(subtarget):
target = ('f', subtarget)
items.extend(_fast_scandir_3_impl(target))
elif os.path.isdir(subtarget):
            target = ('d', subtarget)
            items.extend(_fast_scandir_3_impl(target))
return items
</code></pre>
<p>Idea for implementing the fully-qualified paths:</p>
<pre><code>def _fast_scandir_3_fully_qualified_impl(target:tuple[str, str]) -> list[tuple[str, str]]:
return (
list(
map(
lambda item: (item[0], os.path.abspath(item[1])),
_fast_scandir_3_impl(target),
)
)
)
</code></pre>
<p>Idea for implementing sorting:</p>
<pre><code>def _fast_scandir_3_fully_qualified_sorted_impl(target:tuple[str, str]) -> list[tuple[str, str]]:
return sorted(_fast_scandir_3_fully_qualified_impl(target), key=lambda pair: pair[1])
</code></pre>
<p>The most significant issue being this doesn't actually do what it is supposed to do.</p>
<p>In addition, some further questions remain:</p>
<ul>
<li>Is this implementation with <code>listdir</code> the right choice?</li>
<li>Is <code>sorted</code> the most efficient way to sort the returned items? (Not shown in above implementation.)</li>
</ul>
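<p>For comparison, a compact sketch of the three-step strategy above (collect, absolutize, sort) using <code>pathlib</code>, which handles the file-vs-directory branching at the interface level in two lines; the tuple format matches the attempt above:</p>

```python
from pathlib import Path

def list_tree(target: str) -> list[tuple[str, str]]:
    """Return sorted ('f' or 'd', absolute-path) pairs for target and everything below it."""
    p = Path(target).resolve()          # step 2: fully-qualified path
    if p.is_file():
        return [('f', str(p))]
    items = [('d', str(p))]
    for sub in p.rglob('*'):            # step 1: recursive collection in one call
        items.append(('d' if sub.is_dir() else 'f', str(sub)))
    return sorted(items, key=lambda pair: pair[1])   # step 3: sort by path
```

<p>Because <code>rglob</code> yields full <code>Path</code> objects, the "relative names from <code>listdir</code>" pitfall in the attempt above (where <code>os.path.isfile(subtarget)</code> tests a bare name) does not arise.</p>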
|
<python>
|
2024-11-23 11:59:33
| 4
| 18,579
|
user2138149
|
79,217,608
| 4,499,832
|
Python 3.12.7 module ssl has no attribute wrap_socket
|
<p>Here is a sample of a python script:</p>
<pre><code>import mysql.connector
...
mydb = mysql.connector.connect(
host="localhost",
user="root",
passwd="********",
database="mydatabase",
auth_plugin='mysql_native_password'
)
mycursor = mydb.cursor()
</code></pre>
<p>When I'm running this script with Python 3.12.7, I have the following error:</p>
<pre><code>$ python3 my_script.py
Traceback (most recent call last):
File "/home/user/my_script.py", line 580, in <module>
function()
File "/home/user/my_script.py", line 345, in function
mydb = mysql.connector.connect(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/mysql/connector/__init__.py", line 173, in connect
return MySQLConnection(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/mysql/connector/connection.py", line 102, in __init__
self.connect(**kwargs)
File "/usr/lib/python3/dist-packages/mysql/connector/abstracts.py", line 735, in connect
self._open_connection()
File "/usr/lib/python3/dist-packages/mysql/connector/connection.py", line 250, in _open_connection
self._do_auth(self._user, self._password,
File "/usr/lib/python3/dist-packages/mysql/connector/connection.py", line 155, in _do_auth
self._socket.switch_to_ssl(ssl_options.get('ca'),
File "/usr/lib/python3/dist-packages/mysql/connector/network.py", line 427, in switch_to_ssl
self.sock = ssl.wrap_socket(
^^^^^^^^^^^^^^^
AttributeError: module 'ssl' has no attribute 'wrap_socket'
</code></pre>
<p>It looks like it is related to Python but I'm not sure.</p>
<p>I'm running this script under Ubuntu 24.10.</p>
<p>Do you have any ideas to fix this bug?</p>
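<p>The module-level <code>ssl.wrap_socket()</code> helper was deprecated in Python 3.7 and removed in 3.12, so any library still calling it breaks; here the culprit is the old <code>mysql.connector</code> shipped as an Ubuntu dist-package. Upgrading the connector (e.g. a recent <code>mysql-connector-python</code> from pip) is the usual fix, since newer releases use the replacement API, an explicit <code>SSLContext</code>. A sketch of that replacement pattern (the hostname is hypothetical and the handshake line is left commented out, as this makes no network connection):</p>

```python
import socket
import ssl

# The replacement for the removed ssl.wrap_socket(): build a context
# explicitly and use its wrap_socket() method on a plain socket.
ctx = ssl.create_default_context()
raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tls = ctx.wrap_socket(raw, server_hostname='db.example.com')  # hypothetical host
raw.close()
```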
|
<python>
|
2024-11-23 10:18:55
| 1
| 811
|
klaus
|
79,217,488
| 7,797,146
|
XLA PJRT plugin on Mac reveals only 4 CPUs
|
<p>On my MacOS M3, I have compiled the pjrt_c_api_cpu_plugin.so and I'm using it with JAX.</p>
<p>My Macbook has 12 CPUs but with a simple python script "jax.devices()" the pjrt plugin reveals just 4 cpus.</p>
<p>Can you tell me why?</p>
|
<python><jax><xla>
|
2024-11-23 09:06:40
| 0
| 304
|
lordav
|
79,217,350
| 7,643,771
|
How to compute disk I/O usage percentage in Windows, exactly like Task Manager, for the current process?
|
<p>I'm trying to compute the disk I/O usage percentage for the current process in Windows, similar to what the Task Manager displays under the "Disk" column. However, I haven't been able to get an accurate match.</p>
<p>I used psutil.Process.io_counters() to retrieve the read/write byte counts for the current process, but it doesn't seem to align with what the Task Manager shows. Specifically, when using libraries like yara for file scanning, the io_counters read/write values don't seem to account for all the I/O operations correctly. It might be due to how YARA handles file reads in memory or how Windows tracks I/O for such operations.</p>
<p><strong>What I need:</strong></p>
<ul>
<li>A method or tool to calculate the disk I/O usage percentage for the current process, in a way that matches the Windows Task Manager.</li>
<li>Preferably, a Python-based approach, but I'm open to any reliable solutions or libraries.</li>
<li>Insights into why psutil might not reflect accurate I/O stats for processes using specific libraries like YARA.</li>
</ul>
<p><strong>What Iβve tried:</strong></p>
<ul>
<li>Using psutil.Process.io_counters() to get read/write bytes, but the reported values don't match the Task Manager's disk usage.</li>
<li>Exploring system-level counters (e.g., Performance Counters in Windows), but I couldn't figure out how to map these to the Task Manager's percentage values.</li>
</ul>
<p>Is there a way to reliably compute this disk I/O percentage for the current process? Or could someone clarify how Task Manager calculates this metric?</p>
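<p>As a starting point, a sketch assuming <code>psutil</code> is installed: the per-process counters can at least give a throughput (bytes/sec). Task Manager's "Disk" percentage is a different quantity, since it reflects the disk's busy time taken from system-wide performance counters, which per-process <code>io_counters()</code> alone cannot reproduce:</p>

```python
import time
import psutil  # third-party: pip install psutil

proc = psutil.Process()

def io_bytes_per_sec(interval: float = 0.2) -> float:
    """Rough read+write throughput of this process over `interval` seconds."""
    before = proc.io_counters()
    time.sleep(interval)
    after = proc.io_counters()
    delta = (after.read_bytes - before.read_bytes) + \
            (after.write_bytes - before.write_bytes)
    return delta / interval

rate = io_bytes_per_sec()
```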
|
<python><io><taskmanager><yara>
|
2024-11-23 07:25:22
| 1
| 678
|
Pedram
|
79,217,331
| 3,875,610
|
Function to convert a pandas dataframe into JSON
|
<p>I have a pandas dataframe that I want to convert to JSON in the required format. The JSON is basically a tree structure of the dataframe.</p>
<p><strong>Input:</strong></p>
<pre><code>Total Resolution Category Escalated Count
Total Tickets False IT False 4
Total Tickets False IT True 3
Total Tickets True IT False 1
Total Tickets True IT True 15
Total Tickets True Unknown True 1
</code></pre>
<p><strong>Current Output:</strong></p>
<pre><code>{
"chart":
{
"data":
{
"path":
[],
"displayColumnLabel": "Total",
"displayValueLabel": "Total Tickets",
"values":
[
{
"type": "ticket",
"value": 24,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[
{
"path":
[],
"displayColumnLabel": "Resolution",
"displayValueLabel": "True",
"values":
[
{
"type": "ticket",
"value": 17,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[
{
"path":
[],
"displayColumnLabel": "Category",
"displayValueLabel": "IT",
"values":
[
{
"type": "ticket",
"value": 16,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[
{
"path":
[],
"displayColumnLabel": "Escalated",
"displayValueLabel": "True",
"values":
[
{
"type": "ticket",
"value": 15,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[
{
"path":
[],
"displayColumnLabel": "tickets",
"displayValueLabel": "15",
"values":
[
{
"type": "ticket",
"value": 15,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[]
}
]
},
{
"path":
[],
"displayColumnLabel": "Escalated",
"displayValueLabel": "False",
"values":
[
{
"type": "ticket",
"value": 1,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[
{
"path":
[],
"displayColumnLabel": "tickets",
"displayValueLabel": "1",
"values":
[
{
"type": "ticket",
"value": 1,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[]
}
]
}
]
},
{
"path":
[],
"displayColumnLabel": "Category",
"displayValueLabel": "Unknown",
"values":
[
{
"type": "ticket",
"value": 1,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[
{
"path":
[],
"displayColumnLabel": "Escalated",
"displayValueLabel": "True",
"values":
[
{
"type": "ticket",
"value": 1,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[
{
"path":
[],
"displayColumnLabel": "tickets",
"displayValueLabel": "1",
"values":
[
{
"type": "ticket",
"value": 1,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[]
}
]
}
]
}
]
},
{
"path":
[],
"displayColumnLabel": "Resolution",
"displayValueLabel": "False",
"values":
[
{
"type": "ticket",
"value": 7,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[
{
"path":
[],
"displayColumnLabel": "Category",
"displayValueLabel": "IT",
"values":
[
{
"type": "ticket",
"value": 7,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[
{
"path":
[],
"displayColumnLabel": "Escalated",
"displayValueLabel": "False",
"values":
[
{
"type": "ticket",
"value": 4,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[
{
"path":
[],
"displayColumnLabel": "tickets",
"displayValueLabel": "4",
"values":
[
{
"type": "ticket",
"value": 4,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[]
}
]
},
{
"path":
[],
"displayColumnLabel": "Escalated",
"displayValueLabel": "True",
"values":
[
{
"type": "ticket",
"value": 3,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[
{
"path":
[],
"displayColumnLabel": "tickets",
"displayValueLabel": "3",
"values":
[
{
"type": "ticket",
"value": 3,
"change": 0,
"changeType": "neutral"
}
],
"isCritial": false,
"children":
[]
}
]
}
]
}
]
}
]
}
}
}
</code></pre>
<p><strong>Problem:</strong></p>
<p>My current function works, but there are two things I can't figure out.</p>
<ol>
<li>'path' is always []. The intended logic is that the root node has path=[], and each child node's path follows from its parent node. E.g. for the 2nd node, path: [Total Tickets]</li>
<li>The 'isCritial' flag (typo is intentional): this flag should be true for all nodes on a path specified by the user. E.g. the row with count 15 is deemed the target, so every node on its path (Total:Total Tickets, Resolution:True, Category:IT, Escalated:True, Count:15) should have isCritial set to true</li>
</ol>
<p><strong>Current function:</strong></p>
<pre><code>def create_hierarchical_json_new(df, value_column='Ticket Id'):
# Extract column names
columns = df.columns.tolist()
# Build data and children sections
def build_children(df, group_cols):
if not group_cols:
return [] # Base case: no more grouping columns, return empty list
current_col = group_cols[0]
grouped = df.groupby(current_col)
children = []
for value, group in grouped:
is_terminal_node = len(group_cols) == 1
child = {
"path": [],
"displayColumnLabel": current_col,
"displayValueLabel": str(value),
"values": [
{
"type": "ticket",
"value": group[value_column].sum(),
"change": 0,
"changeType": "neutral",
}
],
"isCritial": False,
"children": build_children(group, group_cols[1:]),
}
# Add one more node if this is the terminal node
if is_terminal_node:
child["children"].append({
"path": [],
"displayColumnLabel": "tickets",
"displayValueLabel": str(group[value_column].sum()),
"values": [
{
"type": "ticket",
"value": group[value_column].sum(),
"change": 0,
"changeType": "neutral",
}
],
"isCritial": False,
"children": []
})
children.append(child)
return children
data = {
"path": [],
"displayColumnLabel": columns[0],
"displayValueLabel": str(df[columns[0]].iloc[0]),
"values": [
{
"type": "ticket",
"value": df[value_column].sum(),
"change": 0,
"changeType": "neutral",
}
],
"isCritial": False,
"children": build_children(df, columns[1:-1]), # Group by all columns except first and last
}
# Combine meta and data sections
result = {
"chart": {
"data": data
}
}
return result
</code></pre>
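A minimal sketch of the `path` bookkeeping, hypothetical and simplified (the column names `A`/`B`, the root label, and the tiny DataFrame are made up for illustration; node fields other than `path` are omitted). The idea is to thread the parent's path through the recursion and append the current node's own label when descending:

```python
import pandas as pd

def build_children(df, group_cols, parent_path):
    """Return child nodes; parent_path is the list of ancestor labels."""
    if not group_cols:
        return []
    current_col = group_cols[0]
    children = []
    for value, group in df.groupby(current_col):
        label = f"{current_col}:{value}"
        child = {
            "path": parent_path,  # path down to (and including) the parent
            "displayColumnLabel": current_col,
            "displayValueLabel": str(value),
            # descend with this node's own label appended to the path
            "children": build_children(group, group_cols[1:], parent_path + [label]),
        }
        children.append(child)
    return children

df = pd.DataFrame({"A": ["x", "x"], "B": ["y", "z"], "Ticket Id": [1, 1]})
tree = build_children(df, ["A", "B"], ["Total:Total Tickets"])
print(tree[0]["path"])                 # ['Total:Total Tickets']
print(tree[0]["children"][0]["path"])  # ['Total:Total Tickets', 'A:x']
```

The isCritial flag could then be handled in a second pass: locate the user-specified target leaf and mark every node whose label appears on that leaf's path.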
|
<python><json>
|
2024-11-23 07:09:54
| 0
| 1,829
|
Anubhav Dikshit
|
79,216,981
| 317,563
|
expect: make two python scripts communicate
|
<p>I want to have two scripts communicating by exchanging messages. I have to use pexpect because of other restrictions. I am trying to make a minimal working example before I build it out for my application.</p>
<p>I have tried to make a minimal working example by following the tutorials I could find on the internet, but I've failed. Here is my attempt. The first script is the one that initiates communication:</p>
<pre><code>#script_1.py
import pexpect
p = pexpect.spawn("python /path/to/script/script_2.py")
p.expect(">")
p.sendline(input("Message to the other script: "))
print( p.before )
</code></pre>
<p>Here is the second script, which should receive the message and send an answer:</p>
<pre><code>#script_2.py
indata = input(">")
print(indata)
</code></pre>
<p>How can I make two python scripts communicate by using pexpect?</p>
<hr />
<p>EDIT: I was asked why I say that the scripts fail given that there are no error messages. The reason it is a failure is that script 2 should echo the message sent by script 1, and script 1 should print out (or otherwise capture) the response, but that doesn't happen.</p>
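For reference, a sketch of one way the pair can work (the child script is written to a temp file here only to keep the example self-contained). The key point is that after `sendline`, script 1 must `expect` something, the next prompt or EOF, before reading `p.before`; otherwise the child has not yet had a chance to reply:

```python
import os
import sys
import tempfile
import textwrap

import pexpect

# script_2's code, inlined so the example is self-contained
child_code = textwrap.dedent("""\
    indata = input(">")          # the prompt script_1 waits for
    print("echo: " + indata)     # the reply script_1 reads back
""")
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(child_code)
    child_path = f.name

p = pexpect.spawn(f"{sys.executable} {child_path}", encoding="utf-8")
p.expect(">")                    # wait until script_2 is ready for input
p.sendline("hello")              # send the message
p.expect(pexpect.EOF)            # wait for script_2 to reply and exit
print(p.before)                  # contains the echoed "echo: hello"
os.unlink(child_path)            # clean up the temp file
```

Note that `p.before` will also contain the pty-echoed input line ("hello") in addition to script 2's reply, which is normal for pexpect.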
|
<python><automation><expect><pexpect>
|
2024-11-23 01:47:38
| 2
| 911
|
Mikkel Rev
|
79,216,975
| 1,680,980
|
how to uninstall specific opencv
|
<p>I am getting an error on running <code>cv2.imshow()</code></p>
<pre><code>cv2.imshow("Image", image)
</code></pre>
<pre><code>cv2.error: OpenCV(4.9.0) /io/opencv/modules/highgui/src/window.cpp:1272: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage'
</code></pre>
<p>and following suggestions in this forum I did</p>
<pre><code>sudo apt install python3-opencv
</code></pre>
<p>which installed OpenCV 4.5. The same code still threw an error, and this time I saw the error was coming from OpenCV 4.9. I do not remember installing it, but I found these:</p>
<p>/usr/local/lib/python3.10/dist-packages/opencv_python_headless-4.9.0.80.dist-info<br />
/usr/local/lib/python3.10/dist-packages/opencv_python_headless.libs</p>
<p>How do I remove 4.9, or make Python import 4.5 instead of 4.9?<br />
Thanks, everybody.</p>
|
<python><opencv><pip><apt>
|
2024-11-23 01:45:12
| 2
| 624
|
Stephen
|
79,216,878
| 2,603,579
|
Is Tkinter's askopenfilename safe?
|
<p>I'm developing an app with Tkinter as the UI. Currently, I'm using <code>tkinter.filedialog</code>'s <code>askopenfilename</code> function to let a user choose a file.</p>
<p>I then create a hidden file for [redacted] purposes with the same filename. As a result, I need to use <code>subprocess</code>, which I've read can suffer from command injection vulnerabilities if the input is not sanitized.</p>
<p>So far, I've done pretty minor sanitization, and before I get down in the weeds, I wanted to find out: is <code>askopenfilename</code>'s return value already sanitized/safe? Or should I treat it as unsanitized, like an <code>input</code> statement? The <a href="https://docs.python.org/3/library/dialog.html#tkinter.filedialog.askopenfilename" rel="nofollow noreferrer">docs</a> only say this:</p>
<blockquote>
<p>tkinter.filedialog.askopenfilename(**options)</p>
</blockquote>
<blockquote>
<p>tkinter.filedialog.askopenfilenames(**options)</p>
</blockquote>
<blockquote>
<p>The above two functions
create an Open dialog and return the selected filename(s) that
correspond to existing file(s).</p>
</blockquote>
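Worth noting: the dialog returns whatever path exists on the user's filesystem, so it is best treated as untrusted input either way. The usual way to sidestep shell injection entirely is to pass <code>subprocess</code> an argument list rather than a string with <code>shell=True</code>, sketched here with a deliberately hostile hypothetical filename:

```python
import subprocess
import sys

# Hypothetical stand-in for askopenfilename()'s return value, containing
# characters that would be dangerous if interpolated into a shell command.
filename = "report; rm -rf important.txt"

# List form: no shell is involved, so the filename reaches the child
# process verbatim as a single argument, regardless of its contents.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", filename],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # report; rm -rf important.txt
```

With the list form, no sanitization of the filename is needed for the subprocess call itself; the risk only appears if the string is later concatenated into a shell command.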
|
<python><tkinter>
|
2024-11-22 23:57:12
| 0
| 402
|
Ryan Farber
|
79,216,489
| 2,039,866
|
easyocr readtext ends with illegal instruction exception
|
<p>I'm trying to process a very clear image with python's easyocr package.</p>
<p>This is my image: <a href="https://photos.app.goo.gl/eiPTPqnZtJ3h9rXu9" rel="nofollow noreferrer">phonenum.png</a></p>
<p>Here is my code:</p>
<pre><code>import easyocr
reader = easyocr.Reader(['en'])
result = reader.readtext('images/phonenum.png')
for (bbox, text, prob) in result:
print(text)
</code></pre>
<p>I'm using python 3.12.7, easyocr 1.7.2, torch 2.5.1, torchvision 0.20.1, torchaudio 2.5.1, and opencv-python 4.10.0.84. My CPU supports AVX.</p>
<p>I'm expecting some text output.</p>
<p>Here is the terminal output using pycharm:</p>
<pre><code>C:\Users\charl\PYTHON\stackoverflow_questions_2\.venv\Scripts\python.exe C:\Users\charl\PYTHON\stackoverflow_questions_2\read_text_ocr.py
Neither CUDA nor MPS are available - defaulting to CPU. Note: This module is much faster with a GPU.
Process finished with exit code -1073741795 (0xC000001D)
</code></pre>
<p>Using debug, the code never reaches the for loop. Exit code 0xC000001D is an illegal instruction exception.</p>
<p>How can I get some text output?</p>
|
<python><easyocr>
|
2024-11-22 20:31:32
| 0
| 770
|
Charles Knell
|
79,216,349
| 6,574,178
|
How can getattr() respect python class properties?
|
<p>I have a class that performs some useful job in my project</p>
<pre><code>class Test:
def __init__(self):
self.__value = None
def set_value(self, value):
print(f"set_value(): value={value}")
self.__value = value
def get_value(self):
print(f"get_value()")
return self.__value
value = property(get_value, set_value)
</code></pre>
<p>Among regular functions it has some getters/setters that are handy to group into properties for simpler client code. E.g.</p>
<pre><code>t.value = 123
</code></pre>
<p>Now, in some parts of the application I need to do some extra actions (a print() call in my case) on every access to the Test object, on each method call. So I did this:</p>
<pre><code>class Wrapper:
def __init__(self, wrapped):
self.wrapped = wrapped
def __getattr__(self, attr):
print(f"Wrapper::__getattr__(): {attr}")
return getattr(self.wrapped, attr)
</code></pre>
<p>This is very similar to what decorators do, but this approach has 2 benefits:</p>
<ul>
<li>Original class is untouched, and I can use it separately from the Wrapper</li>
<li>I can add as many new methods to the Test class, without having to add new decorators in Wrapper</li>
</ul>
<p>Now the usage code:</p>
<pre><code>t = Test()
w = Wrapper(t)
w.set_value(123)
print(f"Value: {w.get_value()}")
w.value = 234
print(f"Value: {w.value}")
</code></pre>
<p>Direct set/get calls work as expected. Unfortunately, the w.value syntax no longer works: it silently assigns 234 to a plain <code>value</code> attribute on the wrapper, instead of calling the getters and setters.</p>
<p>How can this issue be fixed, so that Wrapper respects the Test class properties?</p>
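One way to fix it, sketched below. <code>__getattr__</code> only runs for reads that fail normal lookup, while writes always go through <code>__setattr__</code>, which the Wrapper above never intercepts, so <code>w.value = 234</code> just creates a plain attribute on the wrapper. Forwarding <code>__setattr__</code> too (and storing <code>wrapped</code> via <code>object.__setattr__</code> to avoid infinite recursion) makes the property fire:

```python
class Test:
    def __init__(self):
        self.__value = None
    def set_value(self, value):
        self.__value = value
    def get_value(self):
        return self.__value
    value = property(get_value, set_value)

class Wrapper:
    def __init__(self, wrapped):
        # bypass our own __setattr__ so "wrapped" lands on the wrapper itself
        object.__setattr__(self, "wrapped", wrapped)

    def __getattr__(self, attr):
        # only called when normal lookup on the wrapper fails
        print(f"Wrapper::__getattr__(): {attr}")
        return getattr(self.wrapped, attr)

    def __setattr__(self, attr, value):
        # called for every assignment; forward it to the wrapped object
        print(f"Wrapper::__setattr__(): {attr}")
        setattr(self.wrapped, attr, value)

t = Test()
w = Wrapper(t)
w.value = 234          # goes through Test's property setter
print(w.value)         # 234, via the property getter
```

Because Test's property lives on the Test class, `getattr`/`setattr` on the wrapped instance invoke it normally; the wrapper only needs to make sure neither reads nor writes ever land on the wrapper itself.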
|
<python><decorator><python-decorators>
|
2024-11-22 19:29:00
| 2
| 905
|
Oleksandr Masliuchenko
|
79,216,296
| 11,870,534
|
How to pickle a class instance with persistent methods in Python?
|
<p>I want to serialize a class instance in python and keep methods persistent. I have tried with joblib and pickle and am really close with dill, but can't quite get it.</p>
<p>Here is the problem. Say I want to pickle a class instance like so:</p>
<pre class="lang-py prettyprint-override"><code>import dill
class Test():
def __init__(self, value=10):
self.value = value
def foo(self):
print(f"Bar! Value is: {self.value}")
t = Test(value=20)
with open('Test.pkl', 'wb+') as fp:
dill.dump(t, fp)
# Test it
print('Original: ')
t.foo() # Prints "Bar! Value is: 20"
</code></pre>
<p>Later, the definition of <code>Test</code> changes and when I reload my pickled object the method is different:</p>
<pre class="lang-py prettyprint-override"><code>class Test():
def __init__(self, value=10):
self.value = value
def foo(self):
print("...not bar?")
with open('Test.pkl', 'rb') as fp:
t2 = dill.load(fp)
# Test it
print('Reloaded: ')
t2.foo() # Prints "...not bar?"
</code></pre>
<p>Now in the reloaded case, the attribute value is preserved (<code>t2.value</code> is 20). I can get really close to what I want by serializing the class with dill and not the instance, like so:</p>
<pre class="lang-py prettyprint-override"><code>class Test():
def __init__(self, value=10):
self.value = value
def foo(self):
print(f"Bar! Value is: {self.value}")
t = Test(value=20)
with open('Test.pkl', 'wb+') as fp:
dill.dump(Test, fp)
# Test it
print('Original: ')
t.foo() # Prints "Bar! Value is: 20"
</code></pre>
<p>But then when I rebuild it, I get the old method (what I want) but I lose the attributes of the instance <code>t</code> (in this case I get the default value of 10 instead of the instance value of 20):</p>
<pre class="lang-py prettyprint-override"><code>class Test():
def __init__(self, value=10):
self.value = value
def foo(self):
print("...not bar?")
with open('Test.pkl', 'rb') as fp:
test_class = dill.load(fp)
t2 = test_class()
# Test it
print('Reloaded: ')
t2.foo() # Prints "Bar! Value is: 10"
</code></pre>
<p>In my actual use case, I have a lot of attributes in the class instance. I want to be able to pickle the attributes as well as the methods so that later source code changes don't make that particular object unrecoverable.</p>
<p>Currently to recover these objects I am copying source code files but the imports get very messy--a lot of <code>sys.path</code> manipulations that get confusing to make sure I load the correct old source code. I could also do something where I pickle the class definition with dill and then save all the attributes to json or something and rebuild that way, but I'm wondering if there is an easy way to do this with dill or some other package that I have not yet discovered. Seems like a straightforward use case to me.</p>
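Building on the "pickle the class with dill plus the attributes" idea, a sketch that dumps the class and the instance's <code>__dict__</code> together, then rebuilds the instance with <code>__new__</code> so <code>__init__</code> (and its defaults) never runs. This assumes the instance keeps all state in <code>__dict__</code> (no <code>__slots__</code>); also note dill only pickles a class by value when it is defined in <code>__main__</code> or interactively, and stores a reference for classes imported from a module:

```python
import dill

class Test:
    def __init__(self, value=10):
        self.value = value
    def foo(self):
        return f"Bar! Value is: {self.value}"

t = Test(value=20)

# Dump the class object and the instance state together.
payload = dill.dumps((Test, t.__dict__))

# ...later, regardless of what the local Test definition looks like now...
cls, state = dill.loads(payload)
t2 = cls.__new__(cls)        # skip __init__, so defaults can't clobber state
t2.__dict__.update(state)
print(t2.foo())              # Bar! Value is: 20
```

This keeps both halves of what the question asks for: the method comes from the pickled class, and the attributes come from the pickled instance state.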
|
<python><python-3.x><pickle><dill>
|
2024-11-22 19:06:40
| 2
| 620
|
thehumaneraser
|
79,216,188
| 5,080,858
|
pytest-recording / VCR for S3: IncompleteReadError (but only sometimes?)
|
<p>Looking to use <code>pytest-recording</code> in my tests that involve connecting to and downloading data from S3.</p>
<p>I import all the functions from the script I'm testing. This is using prod env vars, but only to test downloading and reading data from S3 (not uploading). In a REPL, the exact same code works fine -- I can connect to my S3 instance, read and download the data that is on it.</p>
<p>Now, in my test suite, I keep getting an <code>IncompleteReadError</code> when running <code>pytest --record-mode=once</code>. This happens regardless of whether I delete existing cassettes or not.</p>
<p>Here's the original functions I'm testing:</p>
<pre class="lang-py prettyprint-override"><code>def s3_connect_get_files(
validated_target_params: dict,
) -> Tuple[s3fs.S3FileSystem, List[str]]:
"""
connects to our s3 instance, returning
our bucket s3fs object and a list of the files
in the target infile directory specified in
our data model for the target.
returns:
- Tuple(the_bucket (s3fs obj), files (list) )
raises:
- FileNotFoundError if we can't find the dir
on our s3
"""
try:
the_bucket = s3fs.S3FileSystem(
key=AWS_ACCESS_KEY_ID,
secret=AWS_SECRET_ACCESS_KEY,
client_kwargs={"endpoint_url": validated_target_params["endpoint_url"]},
)
files = the_bucket.glob(
f"{validated_target_params['in_path']}/"
f"{validated_target_params['glob_pattern']}"
)
return the_bucket, files
except FileNotFoundError as e:
logger.error("could not connect to s3. check credentials!")
logger.error(f"original error type: {type(e).__name__}")
logger.error(f"original error message: {e}")
raise
def read_files(
the_bucket: s3fs.S3FileSystem, files: List[str], validated_target_params: dict
) -> dict:
"""reads all files into memory
raises:
- NotImplementedError; if we encounter a reader
type we haven't defined yet.
"""
records = {}
logger.info("Now reading data to be validated and de-duped.")
for file in files:
if (
validated_target_params["reader"].value == "pandas"
): # we need to call value as we're using an Enum
try:
df = pd.read_csv(the_bucket.open(file))
file_last_modified = the_bucket.info(file).get("LastModified")
df["file_last_modified"] = file_last_modified
records[file] = {
"data": df,
"last_modified_at": file_last_modified,
}
logger.info(f"Loaded {file} with {len(df)} rows using pandas")
except Exception as e:
logger.error(f"original error type: {type(e).__name__}")
logger.error(f"original error message: {e}")
raise
else:
raise NotImplementedError(
f"{validated_target_params['reader']} is not yet implemented as a reader."
)
return records
</code></pre>
<p>and here are the corresponding tests:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.mark.vcr()
def test_s3_connect_get_files(validated_target_params) -> None:
"""test s3 connection and file retrieval"""
the_bucket, files = s3_connect_get_files(validated_target_params)
assert isinstance(the_bucket, S3FileSystem)
assert isinstance(files, list)
@pytest.mark.vcr()
def test_read_files_pandas(validated_target_params) -> None:
"""test reading files using pandas"""
the_bucket, files = s3_connect_get_files(validated_target_params)
records = read_files(the_bucket, files, validated_target_params)
assert isinstance(records, dict)
assert len(records) == len(files)
assert all(isinstance(df["data"], pd.DataFrame) for df in records.values())
</code></pre>
<p>If I comment out <code>test_read_files_pandas</code>, it runs fine and all tests pass. If I keep it in, it inevitably fails, like this:</p>
<pre><code>E botocore.exceptions.IncompleteReadError: 0 read, but total bytes expected is 6163243.
.venv/lib/python3.11/site-packages/aiobotocore/response.py:125: IncompleteReadError
</code></pre>
<p>I am new to <code>pytest-recording</code>, and to be honest, not the best test writer ever. So, I do apologise if I've made a stupid mistake, and would greatly appreciate any pointers as to how to either get these tests to pass, or modify my functions.</p>
|
<python><amazon-s3><testing><pytest><vcr>
|
2024-11-22 18:28:22
| 0
| 679
|
nikUoM
|
79,216,008
| 30,997
|
Python FlickrAPI throwing exception on construction: "sqlite3.OperationalError: no such table: oauthtokens"
|
<p>There's nothing in the installation documentation about installing sqlite3, but if it's a dependency, it will have been installed as part of the pip install process. I've manually installed sqlite3 so I have a command-line client I can poke at, but had no expectation that adding it would change anything.</p>
<p>I have version 2.1.2 of the api. I'm on python3.12.2 on an Ubuntu machine, but I get the same issue from Python 3.9 on OSX. I've double-checked the key and secret and they are correct. When I try to set up the interface object, just copying the code from the <a href="https://stuvel.eu/flickrapi-doc/2-calling.html" rel="nofollow noreferrer">first example in the documentation</a>:</p>
<pre><code>#!/usr/bin/env python3
import flickrapi
api_key = u'aoeuaoeuaoeu'
api_secret = u'natohueth'
flickr = flickrapi.FlickrAPI(api_key, api_secret)
</code></pre>
<p>I just get an exception:</p>
<pre><code>Traceback (most recent call last):
File "/home/ronb/flickr/./mvtags.py", line 10, in <module>
flickr = flickrapi.FlickrAPI(api_key, api_secret, db_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/flickrapi/core.py", line 201, in __init__
self.flickr_oauth = auth.OAuthFlickrInterface(api_key, secret, self.token_cache)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/flickrapi/auth.py", line 159, in __init__
if oauth_token.token:
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/flickrapi/tokencache.py", line 175, in token
curs.execute('''SELECT oauth_token, oauth_token_secret, access_level, fullname, username, user_nsid
sqlite3.OperationalError: no such table: oauth_tokens
</code></pre>
|
<python><python-3.x><flickr>
|
2024-11-22 17:28:32
| 0
| 11,862
|
Sniggerfardimungus
|
79,215,877
| 392,687
|
Trying to automate using pyautogui and docker firefox, but can't stick to the constraints of the display
|
<p>I'm using an M1 MacBook Pro with docker desktop. So this is a Retina display.</p>
<p>I've deployed the jlesage/firefox image in the following way:</p>
<pre><code>docker run -d \
--name firefox \
-p 5800:5800 \
-p 5900:5900 \
-e DISPLAY_WIDTH=1024 \
-e DISPLAY_HEIGHT=768 \
-e DISPLAY=:0 \
-v /Users/vw/DEV/docker/scripts:/scripts \
-e VNC_PASSWORD=password \
jlesage/firefox
</code></pre>
<p>It works fine, I can VNC into it and browse and all.</p>
<p>Next I created a python script with pyautogui to find and click the Facebook button:</p>
<pre><code>import os
import pyautogui
import time
# Ensure the script uses the Docker container's display
os.environ["DISPLAY"] = ":0"
# Delay to allow setup
time.sleep(3)
try:
# Locate the image within the Docker display
location = pyautogui.locateCenterOnScreen('Facebook.png', confidence=0.8)
if location:
x, y = location
# Constrain search to 1024x768
if 0 <= x < 1024 and 0 <= y < 768:
print(f"Image found at: {location}")
pyautogui.click(location)
else:
print(f"Image found outside bounds (1024x768): {location}")
else:
print("Image not found on the screen.")
except Exception as e:
print(f"An error occurred: {e}")
</code></pre>
<p>The error message I get is:
Image found outside bounds (1024x768): Point(x=np.int64(1743), y=np.int64(1145))</p>
<p>so it's not using the display I'm specifying.</p>
<p>I've also exported DISPLAY manually from my terminal.</p>
|
<python><docker><pyautogui>
|
2024-11-22 16:48:32
| 0
| 1,005
|
vwdewaal
|
79,215,872
| 6,170,340
|
Jupyter Notebook & Lab won't start after Anaconda's successful installation
|
<p>For about a year now, some of my students have had issues with Anaconda's JupyterLab/Notebook; they all get similar errors, as below:</p>
<pre><code>[I 2024-11-22 06:01:59.231 ServerApp] Extension package aext_assistant took 0.3674s to import
[I 2024-11-22 06:01:59.263 ServerApp] **** ENVIRONMENT Environment.PRODUCTION ****
[I 2024-11-22 06:01:59.268 ServerApp] **** ENVIRONMENT Environment.PRODUCTION ****
[W 2024-11-22 06:01:59.322 ServerApp] A _jupyter_server_extension_points function was not found in jupyter_lsp. Instead, a _jupyter_server_extension_paths function was found and will be used for now. This function name will be deprecated in future releases of Jupyter Server.
[W 2024-11-22 06:01:59.429 ServerApp] A _jupyter_server_extension_points function was not found in notebook_shim. Instead, a _jupyter_server_extension_paths function was found and will be used for now. This function name will be deprecated in future releases of Jupyter Server.
[I 2024-11-22 06:02:00.930 ServerApp] Extension package panel.io.jupyter_server_extension took 1.5002s to import
[I 2024-11-22 06:02:00.931 ServerApp] aext_assistant | extension was successfully linked.
[I 2024-11-22 06:02:00.931 ServerApp] aext_core | extension was successfully linked.
[I 2024-11-22 06:02:00.931 ServerApp] aext_panels | extension was successfully linked.
[I 2024-11-22 06:02:00.931 ServerApp] aext_share_notebook | extension was successfully linked.
[I 2024-11-22 06:02:00.931 ServerApp] jupyter_lsp | extension was successfully linked.
[I 2024-11-22 06:02:00.935 ServerApp] jupyter_server_terminals | extension was successfully linked.
[I 2024-11-22 06:02:00.940 ServerApp] jupyterlab | extension was successfully linked.
[I 2024-11-22 06:02:00.946 ServerApp] notebook | extension was successfully linked.
[I 2024-11-22 06:02:01.702 ServerApp] notebook_shim | extension was successfully linked.
[I 2024-11-22 06:02:01.702 ServerApp] panel.io.jupyter_server_extension | extension was successfully linked.
[I 2024-11-22 06:02:01.749 ServerApp] notebook_shim | extension was successfully loaded.
[I 2024-11-22 06:02:01.749 ServerApp] Registered aext_assistant server extension
[I 2024-11-22 06:02:01.749 ServerApp] aext_assistant | extension was successfully loaded.
[I 2024-11-22 06:02:01.749 ServerApp] Registered aext_core server extension
[I 2024-11-22 06:02:01.749 ServerApp] aext_core | extension was successfully loaded.
[I 2024-11-22 06:02:01.749 ServerApp] Registered aext_panels server extension
[I 2024-11-22 06:02:01.749 ServerApp] aext_panels | extension was successfully loaded.
[I 2024-11-22 06:02:01.749 ServerApp] Registered aext_share_notebook_server server extension
[I 2024-11-22 06:02:01.749 ServerApp] aext_share_notebook | extension was successfully loaded.
[I 2024-11-22 06:02:01.749 ServerApp] jupyter_lsp | extension was successfully loaded.
[I 2024-11-22 06:02:01.749 ServerApp] jupyter_server_terminals | extension was successfully loaded.
[I 2024-11-22 06:02:01.749 LabApp] JupyterLab extension loaded from C:\ProgramData\anaconda3\Lib\site-packages\jupyterlab
[I 2024-11-22 06:02:01.749 LabApp] JupyterLab application directory is C:\ProgramData\anaconda3\share\jupyter\lab
[I 2024-11-22 06:02:01.749 LabApp] Extension Manager is 'pypi'.
[I 2024-11-22 06:02:01.905 ServerApp] jupyterlab | extension was successfully loaded.
[I 2024-11-22 06:02:01.905 ServerApp] notebook | extension was successfully loaded.
[I 2024-11-22 06:02:01.905 ServerApp] panel.io.jupyter_server_extension | extension was successfully loaded.
[I 2024-11-22 06:02:01.905 ServerApp] Serving notebooks from local directory: C:\Users\RaSa
[I 2024-11-22 06:02:01.905 ServerApp] Jupyter Server 2.14.1 is running at:
[I 2024-11-22 06:02:01.905 ServerApp] http://localhost:8888/tree?token=aee506bb54b16beb90787b9f72d3154f449b2f2589f46435
[I 2024-11-22 06:02:01.905 ServerApp] http://127.0.0.1:8888/tree?token=aee506bb54b16beb90787b9f72d3154f449b2f2589f46435
[I 2024-11-22 06:02:01.905 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[E 2024-11-22 06:02:01.905 ServerApp] Failed to write server-info to C:\Users\RaSa\AppData\Roaming\jupyter\runtime\jpserver-17000.json: PermissionError(13, 'Permission denied')
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\Scripts\jupyter-notebook-script.py", line 10, in
sys.exit(main())
^^^^^^
File "C:\ProgramData\anaconda3\Lib\site-packages\jupyter_server\extension\application.py", line 623, in launch_instance
serverapp.start()
File "C:\ProgramData\anaconda3\Lib\site-packages\jupyter_server\serverapp.py", line 3119, in start
self.start_app()
File "C:\ProgramData\anaconda3\Lib\site-packages\jupyter_server\serverapp.py", line 3023, in start_app
self.write_browser_open_files()
File "C:\ProgramData\anaconda3\Lib\site-packages\jupyter_server\serverapp.py", line 2890, in write_browser_open_files
self.write_browser_open_file()
File "C:\ProgramData\anaconda3\Lib\site-packages\jupyter_server\serverapp.py", line 2913, in write_browser_open_file
with open(self.browser_open_file, "w", encoding="utf-8") as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\RaSa\\AppData\\Roaming\\jupyter\\runtime\\jpserver-17000-open.html'
</code></pre>
<p><a href="https://i.sstatic.net/0kf30iDC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0kf30iDC.png" alt="error after running jupyter notebook" /></a></p>
<p>I couldn't recreate this error in order to solve it. Since these students are new to programming, I would like a simple solution if possible.
Best regards</p>
|
<python><jupyter-notebook><anaconda><jupyter-lab><anaconda3>
|
2024-11-22 16:47:40
| 0
| 419
|
B nM
|
79,215,796
| 536,262
|
Chrome Selenium shows file downloaded but I can't find it or click the file in the download panel
|
<p>The browser acts as if the file is downloaded: I can see the file in the panel, but I can't open the folder or click the link in the panel, nor find the file in its designated downloadpath or anywhere else. It worked fine with Chrome v121.</p>
<p>test outputs: <code>20241122162434|ERROR|downloadpath:C:\dist\work\remote-eseal-fullstack-test</code></p>
<p>So I know it is executing the cdp command. But nothing in downloadpath, or anywhere else.</p>
<p>The same Chrome works fine when used the usual way, but not via Selenium, and not even manually in the Selenium-driven window.</p>
<p>All active chrome settings below extracted:</p>
<pre class="lang-py prettyprint-override"><code>elif os.environ['WEBDRIVER'].lower()=='chrome':
options = webdriver.ChromeOptions()
options.add_argument('--no-sandbox')
options.add_argument('--disable-gpu')
options.add_argument('--log-level=3')
options.add_argument('--default-shm-size=32m')
options.add_argument('--disable-translate')
options.add_argument('--disable-extensions')
options.add_argument("--disable-search-engine-choice-screen") #from v127
    options.add_argument("--proxy-server='direct://'")
options.add_argument("--proxy-bypass-list=*")
options.set_capability('acceptInsecureCerts', True)
s = ChromeService(log_output="./chromedriver.log", service_args=['--append-log', '--readable-timestamp', '--log-level=DEBUG'])
driver = webdriver.Chrome(service=s, options=options)
if downloadpath:
params = {'behavior': 'allow', 'downloadPath': downloadpath}
driver.execute_cdp_cmd('Page.setDownloadBehavior', params)
log.error(f"downloadpath:{downloadpath}")
</code></pre>
<p><code>Chromedriver: 131.0.6778.69, Chrome: 131.0.6778.86, Selenium: 4.26.1, win11</code></p>
<p>In chromedriver.log I see this:</p>
<pre><code>[11-22-2024 15:50:50.149][INFO]: [8910dee67519a9b5866d52afebfbf3c1] COMMAND ExecuteCDP {
"cmd": "Page.setDownloadBehavior",
"params": {
"behavior": "allow",
"downloadPath": "C:\\dist\\work\\remote-eseal-fullstack-test"
}
}
[11-22-2024 15:50:50.149][INFO]: Waiting for pending navigations...
[11-22-2024 15:50:50.149][DEBUG]: DevTools WebSocket Command: Runtime.evaluate (id=11) (session_id=61E316801F90E64FADC42C44338093B5) A4E93A55626FB44C23A9F9249BA7239F {
"expression": "1"
}
[11-22-2024 15:50:50.150][DEBUG]: DevTools WebSocket Response: Runtime.evaluate (id=11) (session_id=61E316801F90E64FADC42C44338093B5) A4E93A55626FB44C23A9F9249BA7239F {
"result": {
"description": "1",
"type": "number",
"value": 1
}
}
[11-22-2024 15:50:50.150][INFO]: Done waiting for pending navigations. Status: ok
[11-22-2024 15:50:50.150][DEBUG]: DevTools WebSocket Command: Page.setDownloadBehavior (id=12) (session_id=61E316801F90E64FADC42C44338093B5) A4E93A55626FB44C23A9F9249BA7239F {
"behavior": "allow",
"downloadPath": "C:\\dist\\work\\remote-eseal-fullstack-test"
}
:
<down to last entry mentioning the pdf>
:
[11-22-2024 15:50:55.321][DEBUG]: DevTools WebSocket Event: Page.windowOpen (session_id=D072486C5680E0BC8FE19F80C8467224) 9A5B198EA74FBCFC94BF48195F3EB3B8 {
"url": "https://**********/remote-eseal-demo-web/signed/buypass-finse-2022_signed.pdf",
"userGesture": true,
"windowFeatures": [ "menubar", "toolbar", "status", "scrollbars", "resizable", "noopener" ],
"windowName": "_blank"
}
</code></pre>
<p><a href="https://i.sstatic.net/KPVwm4nG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KPVwm4nG.png" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver><selenium-chromedriver><google-chrome-devtools>
|
2024-11-22 16:23:49
| 0
| 3,731
|
MortenB
|
79,215,742
| 2,155,362
|
How can I declare a string NumPy array?
|
<p>Below is my code:</p>
<pre><code>import numpy as np
row_length = 5
col_length = 3
x = np.empty([row_length,col_length],dtype=str)
x[1,2]='ddd'
print(x)
</code></pre>
<p>and the result is:</p>
<pre><code>[['' '' '']
['' '' 'd']
['' '' '']
['' '' '']
['' '' '']]
</code></pre>
<p>Why? The result I expected is:</p>
<pre><code>[['' '' '']
['' '' 'ddd']
['' '' '']
['' '' '']
['' '' '']]
</code></pre>
<p>How can I fix it?</p>
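For context: <code>dtype=str</code> in NumPy means <code>'&lt;U1'</code>, a fixed-width Unicode type one character wide, so longer assignments are silently truncated. Two common fixes, sketched (the width 10 below is an arbitrary choice for illustration):

```python
import numpy as np

row_length, col_length = 5, 3

# Fix 1: give the fixed-width string dtype enough room up front.
x = np.full((row_length, col_length), "", dtype="<U10")  # up to 10 characters
x[1, 2] = "ddd"
print(x[1, 2])   # ddd

# Fix 2: an object array stores real Python strings of any length.
y = np.full((row_length, col_length), "", dtype=object)
y[1, 2] = "a string of any length"
print(y[1, 2])   # a string of any length
```

The fixed-width form keeps NumPy's compact storage but still truncates past the chosen width; the object form never truncates, at the cost of losing vectorized string operations on the array.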
|
<python><numpy>
|
2024-11-22 16:05:57
| 1
| 1,713
|
user2155362
|
79,215,657
| 8,792,159
|
Why is joblib's Parallel/delayed faster than dask's map_blocks and compute()?
|
<p>This question is possibly related to <a href="https://stackoverflow.com/questions/79206947/how-to-apply-a-function-to-each-2d-slice-of-a-4d-numpy-array-in-parallel-with-da?noredirect=1#comment139675265_79206947">this one</a>. I have a 4D numpy array and would like to apply a function to each 2D slice across the first two dimensions. I have implemented the analysis for both dask and joblib. joblib takes only 0.10 minutes, while the dask variant takes 8 minutes. Can somebody explain to me why?</p>
<p>Here's an MRE:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import time
import dask.array as da
import statsmodels.formula.api as smf
from joblib import Parallel,delayed
from statsmodels.tools.sm_exceptions import ConvergenceWarning
import warnings
warnings.simplefilter('ignore', ConvergenceWarning)
###############################################################################
## Preparation (only to create data)
###############################################################################
rng = np.random.RandomState(42)
def generate_4d_array(rng,time_steps=10,brain_regions=15,transitions=50, subjects=200,
time_autocorr=0.8, subject_autocorr=0.6, dtype=np.float32):
"""
Generates a 4D array with temporal and subject-based autocorrelation,
and performs Z-score normalization over axis 1 (brain regions).
Parameters:
- rng (np.random.RandomState): RandomState instance for reproducibility (mandatory).
- time_steps (int): Number of time points (default is 100).
- brain_regions (int): Number of brain regions (default is 300).
- transitions (int): Number of transitions per subject (default is 50).
- subjects (int): Number of subjects (default is 100).
- time_autocorr (float): Temporal autocorrelation coefficient (0 < time_autocorr < 1, default is 0.8).
- subject_autocorr (float): Subject autocorrelation coefficient (0 < subject_autocorr < 1, default is 0.6).
- dtype (data type): Data type of the array (default is np.float32).
Returns:
- numpy.ndarray: 4D array with shape (time_steps, brain_regions, transitions, subjects),
Z-score normalized over brain regions (axis 1).
"""
# Generate base signals for subjects
subject_base_signal = rng.normal(size=(subjects, brain_regions)).astype(dtype) * 2
# Precompute random noise for efficiency
noise = rng.normal(scale=0.5, size=(time_steps, brain_regions, transitions, subjects)).astype(dtype)
transition_noise = rng.normal(scale=0.5, size=(transitions, subjects, brain_regions)).astype(dtype)
# Initialize the 4D array
data = np.zeros((time_steps, brain_regions, transitions, subjects), dtype=dtype)
# Populate the 4D array with time series data
for subject in range(subjects):
base_signal = subject_base_signal[subject]
for transition in range(transitions):
transition_signal = base_signal + transition_noise[transition, subject]
time_series = np.zeros((time_steps, brain_regions), dtype=dtype)
time_series[0] = transition_signal + noise[0, :, transition, subject]
# Temporal autocorrelation generation
for t in range(1, time_steps):
time_series[t] = (time_autocorr * time_series[t - 1] +
(1 - time_autocorr) * transition_signal +
noise[t, :, transition, subject])
# Store in the data array
data[:, :, transition, subject] = time_series
# Perform Z-score normalization over axis 1 (brain regions)
mean = np.mean(data, axis=1, keepdims=True) # Mean of brain regions
std = np.std(data, axis=1, keepdims=True) # Standard deviation of brain regions
data = (data - mean) / std # Z-score normalization
return data
data = generate_4d_array(rng)
# how big is whole array and how big is each chunk?
array_gib = data.nbytes / (1024 ** 3)
chunk_gib = data[0, 0, :, :].nbytes / (1024 ** 3)
print(f"Memory occupied by array: {array_gib} GiB, Memory occupied by chunk: {chunk_gib} GiB")
###############################################################################
## With joblib
###############################################################################
def mixed_model(a):
'''Runs one mixed model for one region and one point in time over
all subjects and state transitions'''
# make a dataframe out of input chunk
df = pd.DataFrame(a)
df.index.name = 'transition'
df.columns.name = 'subject'
df = df.melt(ignore_index=False)
    # run mixed model
result = smf.mixedlm("value ~ 1", df, groups=df["subject"]).fit(method=["cg"])
t_stat = result.tvalues['Intercept']
return t_stat
def mixed_models(data):
'''Compute mixed-model for every region and every point in time'''
time_points, regions = data.shape[:2]
result = np.zeros((time_points,regions)) # Result matrix (1001x376)
result = Parallel(n_jobs=-1)(delayed(mixed_model)(data[i,j])
for i in range(time_points)
for j in range(regions))
result = np.array(result).reshape((time_points, regions))
return result
start = time.time()
result_matrix_1 = mixed_models(data)
print(f"Took {(time.time() - start) / 60} minutes")
###############################################################################
## With Dask
###############################################################################
def mixed_model(chunk):
'''Runs one mixed model for one region and one point in time over
all subjects and state transitions'''
# make an array out of chunk
X = chunk[0,0,:,:]
# make a dataframe out of input chunk
X = pd.DataFrame(X)
X.index.name = 'transition'
X.columns.name = 'subject'
X = X.melt(ignore_index=False)
    # run mixed model
result = smf.mixedlm("value ~ 1", X, groups=X["subject"]).fit(method=["cg"])
t_stat = result.tvalues['Intercept']
# return single value array
t_stat = np.array(t_stat)[None,None]
return t_stat
def mixed_models(data):
# map function to each chunk and compute
result_matrix = data.map_blocks(mixed_model,drop_axis=[2,3]).compute()
return result_matrix
start = time.time()
# convert to dask array (overwrite to not occupy RAM twice)
data = da.from_array(data,chunks=(1,1,data.shape[2],data.shape[3]))
# map function to each chunk and compute
result_matrix_2 = mixed_models(data)
print(f"Took {(time.time() - start) / 60} minutes")
###############################################################################
## Compare outputs
###############################################################################
print(f"Outputs are equal: {np.array_equal(result_matrix_1,result_matrix_2)}")
</code></pre>
|
<python><numpy><parallel-processing><dask><joblib>
|
2024-11-22 15:39:31
| 0
| 1,317
|
Johannes Wiesner
|
79,215,556
| 7,009,666
|
Pydantic throwing pydantic.errors.PydanticSchemaGenerationError all of a sudden
|
<p>I have a package (<code>llama_index</code>) in my project which uses a bunch of pydantic classes. I have been using this dependency without any issues for a couple days. Now today, all of a sudden, I try to run one of my scripts, and it throws an error when trying to import</p>
<pre><code>from llama_index.core.schema import Document, TextNode
</code></pre>
<p>Here is the main part of the error message</p>
<pre><code>schema
raise PydanticSchemaGenerationError(
pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for typing.AsyncGenerator[str, NoneType]. Set `arbitrary_types_allowed=True` in the model_config to ignore this error or implement `__get_pydantic_core_schema__` on your type to fully support it.
If you got this error by calling handler(<some type>) within `__get_pydantic_core_schema__` then you likely need to call `handler.generate_schema(<some type>)` since we do not call `__get_pydantic_core_schema__` on `<some type>` otherwise to avoid infinite recursion.
For further information visit https://errors.pydantic.dev/2.10/u/schema-for-unknown-type
</code></pre>
<p>I would understand what to do if this was one of my classes throwing this error (just add in the class attribute of <code>arbitrary_types_allowed=True</code> ).</p>
<p>But this is springing up from a class in the llama_index package, which up until now was working without any issues</p>
<pre><code> File "/home/stevea/repos/interne/document-processing/.venv/lib/python3.12/site-packages/llama_index/core/instrumentation/events/query.py", line 21, in <module>
class QueryEndEvent(BaseEvent):
File "/home/stevea/repos/interne/document-processing/.venv/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py", line 226, in __new__
complete_model_class(
File "/home/stevea/repos/interne/document-processing/.venv/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py", line 658, in complete_model_class
schema = cls.__get_pydantic_core_schema__(cls, handler)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
</code></pre>
<p>I suspect that it may have to do with the pydantic version I am running being incompatible with the llama_index package, but I checked my poetry lock file and the package is in line with the one I have installed</p>
<pre><code>[[package]]
name = "llama-index-core"
version = "0.11.23"
description = "Interface between LLMs and your data"
optional = false
python-versions = "<4.0,>=3.8.1"
files = [
{file = "llama_index_core-0.11.23-py3-none-any.whl", hash = "sha256:25a0cb4a055bfb348655ca4acd1b475529bd8537a7b81874ef14ed13f56e06c1"},
{file = "llama_index_core-0.11.23.tar.gz", hash = "sha256:e150859696a0eae169fe19323f46e9a31af2c12c3182012e4d0353ea8eb06d24"},
]
[package.dependencies]
...
pydantic = ">=2.7.0,<3.0.0"
</code></pre>
<p>My installed version:</p>
<pre><code>[[package]]
name = "pydantic"
version = "2.10.0"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.8"
</code></pre>
<p>And I've also checked the history of my poetry file and none of the pydantic stuff has changed since I initially created the venv. At this point I don't understand what could be causing this or how I can fix it. I've tried re-installing the venv but no luck...</p>
|
<python><python-3.x><pydantic><llama-index>
|
2024-11-22 15:13:30
| 0
| 653
|
Steve Ahlswede
|
79,215,377
| 1,745,291
|
How to connect to same in-memory sqlite database instance, with both sync and async sessions with sqlalchemy?
|
<p>Is it possible (and if so, how?) with sqlalchemy to have an instance of <code>AsyncEngine</code> and one of <code>Engine</code> pointing to the same <strong>in-memory</strong> sqlite database?</p>
<p>I know it is possible with file databases, and I also know the default behavior when creating such engines when working in-memory is to create two distinct in-memory db. Is it possible to share the db, so that sessions and async sessions share the same DB and reflect each other changes ?</p>
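SQLite itself supports this through a named shared-cache in-memory database addressed via a URI: every connection that opens the same URI attaches to the same database, and the database lives as long as at least one connection stays open. Below is a runnable sketch with the stdlib <code>sqlite3</code> driver; the SQLAlchemy URLs in the final comments are an untested assumption along the same lines.

```python
import sqlite3

# A named in-memory DB with shared cache: every connection using this URI
# attaches to the same database (it exists while >= 1 connection is open).
URI = "file:shared_mem_db?mode=memory&cache=shared"

conn_a = sqlite3.connect(URI, uri=True)
conn_b = sqlite3.connect(URI, uri=True)

conn_a.execute("CREATE TABLE t (x INTEGER)")
conn_a.execute("INSERT INTO t VALUES (42)")
conn_a.commit()

# conn_b sees the row written through conn_a.
rows = conn_b.execute("SELECT x FROM t").fetchall()
print(rows)  # [(42,)]

# With SQLAlchemy the same idea would presumably look like (untested sketch):
#   create_engine("sqlite:///file:shared_mem_db?mode=memory&cache=shared&uri=true")
#   create_async_engine("sqlite+aiosqlite:///file:shared_mem_db?mode=memory&cache=shared&uri=true")
```

Keeping one long-lived anchor connection (or engine) open guarantees the database is not dropped between sessions.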
|
<python><sqlite><sqlalchemy><in-memory>
|
2024-11-22 14:18:33
| 0
| 3,937
|
hl037_
|
79,214,996
| 1,581,090
|
How can I get the link/reference of a "Requirement" and "Testrail: Cases" from a Jira entry via the Python API?
|
<p>When using JIRA with TestRail and Confluence, you can link a Requirement to a ticket and you can link a TestRail Test Case to a Jira ticket. For example:</p>
<p><a href="https://i.sstatic.net/XWgvJKsc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XWgvJKsc.png" alt="Enter image description here" /></a></p>
<p>Now using the Jira Python module, it is possible to get the ticket details using a code like</p>
<pre><code>from jira import JIRA
myjira = JIRA(server="https://my.jiraserver.com/", basic_auth=(username, password))
issue = myjira.issue("PROJ-1")
print(issue.raw)
</code></pre>
<p>and print all the details of the ticket. Unfortunately, there isn't any mention of the linked requirement "DALEX-REQ-001" or the TestRail TestCase "C155535"!</p>
<p>How can I use the Jira API to get these two fields for the Requirement and the TestRail TestCase?</p>
|
<python><jira>
|
2024-11-22 12:17:17
| 4
| 45,023
|
Alex
|
79,214,906
| 774,575
|
How to solve 'cannot install both pin-1-1 and pin-1-1'?
|
<p>While installing <a href="https://anaconda.org/conda-forge/vispy" rel="nofollow noreferrer">VisPy</a> in a miniconda environment:</p>
<pre><code>> conda install vispy
Channels:
- conda-forge
- defaults
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: \ warning libmamba Problem type not implemented SOLVER_RULE_STRICT_REPO_PRIORITY
failed
LibMambaUnsatisfiableError: Encountered problems while solving:
- cannot install both pin-1-1 and pin-1-1
Could not solve for environment specs
The following packages are incompatible
ββ pin-1 is installable with the potential options
ββ pin-1 1, which can be installed;
ββ pin-1 1 conflicts with any installable versions previously reported.
Pins seem to be involved in the conflict. Currently pinned specs:
- python 3.11.* (labeled as 'pin-1')
</code></pre>
<p>This doesn't make sense to me, but it seems related to <a href="https://github.com/conda/conda-libmamba-solver/issues/354" rel="nofollow noreferrer">Unable to downgrade (python) if (unrelated) packages are pinned and some package needs to be removed</a></p>
<ul>
<li>what does that mean?</li>
<li>how to install VisPy safely? (without permanently altering the environment)</li>
</ul>
|
<python><conda><miniconda><vispy>
|
2024-11-22 11:55:00
| 1
| 7,768
|
mins
|
79,214,876
| 11,751,799
|
Locking `matplotlib` x-axis range and then plotting on top of it
|
<p>I can do the following in base <code>R</code> plotting.</p>
<pre class="lang-r prettyprint-override"><code>x1 <- c(3, 4)
y1 <- c(5, 8)
x2 <- c(2, 5)
y2 <- c(5, 7)
plot(x1, y1, type = 'l')
lines(x2, y2, col = 'red')
</code></pre>
<p><a href="https://i.sstatic.net/53OTqUiH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/53OTqUiH.png" alt="in R" /></a></p>
<p>The key point is that, when I call that final <code>lines</code> command, the plotting function has already fixed the x-axis.</p>
<p>How can I do this in <code>matplotlib</code> to superimpose a new set of data on top of the current axis graph while keeping the x-axis range the same as before but allowing the y-axis range to accommodate the vertical range of the new points?</p>
<p>The code below fails because it expands the x-axis.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
x1 = [3, 4]
y1 = [5, 8]
x2 = [2, 5]
y2 = [4, 9]
fig, ax = plt.subplots()
ax.plot(x1, y1)
ax.plot(x2, y2)
plt.show()
plt.close()
</code></pre>
<p><a href="https://i.sstatic.net/CbIPdlFr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CbIPdlFr.png" alt="python expands x-axis" /></a></p>
<p>The code below fails by cropping a bit too tight compared to how just a plot of <code>x1</code> and <code>y1</code> would be.</p>
<pre class="lang-py prettyprint-override"><code>fig, ax = plt.subplots()
ax.plot(x1, y1)
ax.plot(x2, y2)
ax.set_xlim([min(x1), max(x1)])
plt.show()
plt.close()
</code></pre>
<p><a href="https://i.sstatic.net/GsRo5wQE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GsRo5wQE.png" alt="python too tight" /></a></p>
<p>The code below has a little bit of padding to the right and to the left of the maximum and minmum values of <code>x1</code> but does not superimpose the orange line for <code>x2</code> and <code>y2</code>.</p>
<pre class="lang-py prettyprint-override"><code>fig, ax = plt.subplots()
ax.plot(x1, y1)
# ax.plot(x2, y2)
# ax.set_xlim([min(x1), max(x1)])
plt.show()
plt.close()
</code></pre>
<p><a href="https://i.sstatic.net/tr7t9zmy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tr7t9zmy.png" alt="python misses x2, y2" /></a></p>
<p>So how can I get <code>matplotlib</code> to superimpose the second set of data on top of the original graph while allowing the vertical axis to expand to include new y-values, yet keep the x-axis at the original range that contains a bit of padding to the right and the left?</p>
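One way to reproduce R's behaviour is to let the first <code>plot</code> call fix the x-limits (including matplotlib's default padding), record them, draw the second line, then restore the recorded limits and rescale only the y-axis. A minimal sketch (the Agg backend is selected only so the example runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, only so this sketch runs anywhere
import matplotlib.pyplot as plt

x1, y1 = [3, 4], [5, 8]
x2, y2 = [2, 5], [4, 9]

fig, ax = plt.subplots()
ax.plot(x1, y1)
xlim = ax.get_xlim()      # remember the limits, including default padding
ax.plot(x2, y2)           # this would normally expand the x-axis
ax.set_xlim(xlim)         # lock x back to the first plot's range
ax.autoscale(axis="y")    # let y grow to cover the new data
```

Parts of the second line outside the locked x-range are simply clipped, as in R.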
|
<python><matplotlib><plot>
|
2024-11-22 11:47:17
| 2
| 500
|
Dave
|
79,214,751
| 6,356,565
|
unnecessary axis over all subplots numbered from 0 to 1
|
<p><a href="https://i.sstatic.net/yr2xv8p0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yr2xv8p0.png" alt="enter image description here" /></a>I came up with this strange axis around all subplots I've plotted, which is numbered from 0 to 1.
I tried simplifying the code (e.g. removing the plotting inside the for loop), but the extra axis still appears.
Any suggestions or ideas? Thanks.</p>
<pre><code>fig, ax = plt.subplots(figsize=(ncols*4,nrows*4))
#plt.subplots_adjust(wspace=0.0001,hspace=0.2)
plt.suptitle('Profiles \n run=%s'%(run), fontsize=10)
ibeg=1 ; iend=len(dtimes)+1
for i,idt in zip(range(ibeg,iend),range(0,len(dtimes))):
ax=plt.subplot(nrows, ncols,i)
dtime=dtimes[idt]
df=pd.read_excel('plots/%s/model_temp_profiles.xlsx'%run, sheet_name=dtime)
line1=ax.plot(df['TEMPERATURE'],df['depth'], color='black', linewidth=2, label='Temp. model')
lines=line1
ax.set_xlim(min_temp,max_temp)
ax.set_ylim(-0.1,max_depth)
#-adjust y and x labels and ticks
if i==ibeg or i in [5,9,13,17,21]: #- first column subplots
ax.set_ylabel('depth (m)', fontsize=9)
else: #-other subplots
ax.tick_params(left=False, labelleft=False)
ax.set_xlabel('degC', fontsize=9)
#ax.set_yticks(np.arange(min(depths), max(depths), 5.0))
ax.set_xticks(np.arange(0, max_temp, 5.0))
#- prepare legend
labs=[l.get_label() for l in lines]
ax.legend(lines, labs, loc=4, frameon=True, fontsize=10)
ax.grid(color='grey',linestyle=':', linewidth=0.2)
ax.set_title('Temp.\n%s' %dtime, x=x_title, y=y_title, fontsize=10,\
bbox=dict(facecolor='white', alpha=0.5, edgecolor='none'))
ax.invert_yaxis()
plt.savefig('plots/%s/profiles.png' %(run), dpi=600)
</code></pre>
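The stray 0-to-1 axis is most likely the Axes that <code>plt.subplots()</code> itself creates: the later <code>plt.subplot(nrows, ncols, i)</code> calls add new Axes on top, while the original full-figure Axes stays behind with its default ticks. Creating only a bare figure avoids it; a minimal sketch:

```python
import matplotlib
matplotlib.use("Agg")  # headless, just so the sketch runs anywhere
import matplotlib.pyplot as plt

nrows, ncols = 2, 2

# plt.subplots() would add a full-figure Axes that then lingers behind the
# real subplots; create only the figure instead.
fig = plt.figure(figsize=(ncols * 4, nrows * 4))
for i in range(1, nrows * ncols + 1):
    ax = plt.subplot(nrows, ncols, i)

print(len(fig.axes))  # 4 -- no leftover full-figure Axes
```

Alternatively, keep <code>plt.subplots()</code> but call <code>ax.remove()</code> on the unwanted first Axes before drawing the subplots.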
|
<python><matplotlib>
|
2024-11-22 11:08:24
| 0
| 511
|
Behnam
|
79,214,362
| 8,176,731
|
Azure Functions: Microsoft.Azure.WebJobs.Script: WorkerConfig for runtime: python not found
|
<p>I have Azure function app which is deployed via pipeline in Azure DevOps. Deployment completes without any issues, but when I navigate to my app in Azure portal, on Overview page I get error message <code>Microsoft.Azure.WebJobs.Script: WorkerConfig for runtime: python not found.</code> Runtime version displays Error, instead of Python version.</p>
<p>What I've done:</p>
<ol>
<li>In deployment pipeline I'm setting the Python version as follows:</li>
</ol>
<pre><code>az functionapp config set --name ${{ parameters.pyFuncAppName }} \
--resource-group ${{ parameters.resourceGroupName }} \
--linux-fx-version 'Python|3.10'
</code></pre>
<ol start="2">
<li>Check App configuration in Configuration -> General settings.</li>
</ol>
<ul>
<li>On Stack settings I have Stack = Python</li>
<li>Python Version is 3.10.</li>
</ul>
<ol start="3">
<li>SSH into App host</li>
</ol>
<ul>
<li><code>python --version</code> gives <code>Python 3.10.15</code></li>
</ul>
<ol start="4">
<li>SSH into Kudu SSH</li>
</ol>
<ul>
<li><code>python</code> command not found</li>
<li><code>python3 --version</code> gives <code>Python 3.9.2</code> <-- is this the root of the issue?</li>
</ul>
<p>Unfortunately this is a client project and everything is extremely sensitive so I cannot publish pipeline code or function app code but I'm happy to provide clarification if needed.</p>
|
<python><azure><azure-devops><azure-functions>
|
2024-11-22 09:25:10
| 1
| 393
|
Kilipukki
|
79,214,347
| 9,400,502
|
I can not get an animation in jupyter notebook
|
<p>When I run the code below in VS Code, or run it as a script from the command line on Linux, I get the sought animation. Since I have done most of the work in a Jupyter notebook, which I am running locally (not Google Colab), I want to get the animation in the notebook, but unfortunately it does not appear: I only see a static plot. Can you help make the animation run in the Jupyter notebook? Thank you. Here is the code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from numpy.fft import fft, ifft
# Define the kinetic operator (Fourier space)
def kinetic_operator(psi, N, mu, epsilon, tau):
k_vals = np.fft.fftfreq(N, epsilon) * 2 * np.pi # Discrete k-values for FFT
K_vals = (2 * (np.sin(np.pi * k_vals / N)**2) / (mu * epsilon**2)) # Eigenvalues of K
psi_k = fft(psi) # Transform to Fourier space
psi_k = psi_k * np.exp(-1j * tau * K_vals) # Apply kinetic operator in k-space
return ifft(psi_k) # Transform back to real space
# Define the potential operator (diagonal in real space)
def potential_operator(psi, V, tau, half=False):
factor = np.exp(-1j * V * tau / 2) if half else np.exp(-1j * V * tau)
return factor * psi
# Strang splitting integrator
def strang_splitting_integrator(psi, V, N, tau, mu, epsilon):
# Apply half-step potential, full-step kinetic, and another half-step potential
psi_half_potential = potential_operator(psi, V, tau, half=True)
psi_kinetic = kinetic_operator(psi_half_potential, N, mu, epsilon, tau)
psi_new = potential_operator(psi_kinetic, V, tau, half=True)
return psi_new
# Animation function with optimizations
def animate_wavefunction(N, V, psi0, mu, epsilon, tau, steps=200, interval=50):
x = np.linspace(0, N * epsilon, N)
psi = np.copy(psi0) # Starting point for the wavefunction
fig, ax = plt.subplots()
line_prob, = ax.plot(x, np.abs(psi)**2, color='g', linestyle='--', label="Probability density")
ax.set_ylim(0, 1) # Adjust based on the scale of |psi|^2
ax.set_xlim(0, N * epsilon)
ax.set_title("Time Evolution of Wavefunction")
ax.set_xlabel("Position")
ax.set_ylabel("Probability Density")
#plt.show()
# The update function for the animation
def update(frame):
nonlocal psi # Ensure we update the original psi
# Update the wavefunction using Strang-Splitting integrator
psi = strang_splitting_integrator(psi, V, N, tau, mu, epsilon)
line_prob.set_ydata(np.abs(psi)**2) # Update probability density
return line_prob, # Return the updated line object for blitting
# Create the animation using FuncAnimation
ani = FuncAnimation(fig, update, frames=range(steps), interval=interval)#, blit=True)
plt.show()
return ani
# Set up parameters for the simulation
N = 200 # Decrease the number of lattice points to improve performance
mu = 1.0 # Mass of particle
epsilon = 0.1 # Small spatial grid spacing for a better approximation of the continuum
tau = 0.1 # Time step for accurate evolution
steps = 200 # Number of time steps for animation
# Initial wavefunction (Gaussian wave packet)
x = np.linspace(0, N * epsilon, N)
psi0 = np.exp(-(x - N * epsilon / 2)**2 / (2.0 * (N * epsilon / 20)**2))
psi0 /= np.sqrt(epsilon) * np.linalg.norm(psi0) # Normalize wavefunction
# Define potential V (free particle case)
V = np.zeros(N) # No potential energy (free particle)
# Run the animation
ani = animate_wavefunction(N, V, psi0, mu, epsilon, tau, steps=steps, interval=50)
</code></pre>
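In a notebook, a <code>FuncAnimation</code> usually has to be rendered explicitly rather than relying on <code>plt.show()</code>: either switch to a live backend with <code>%matplotlib widget</code> (requires the <code>ipympl</code> package), or convert the finished animation to an HTML/JS player and display that. A sketch of the second approach; the <code>IPython.display</code> part in the comments only works inside Jupyter/IPython:

```python
import matplotlib
matplotlib.use("Agg")  # rendering to an HTML/JS player needs no GUI backend
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

t = np.linspace(0, 2 * np.pi, 100)
fig, ax = plt.subplots()
line, = ax.plot(t, np.sin(t))

def update(frame):
    line.set_ydata(np.sin(t + 0.1 * frame))
    return line,

ani = FuncAnimation(fig, update, frames=10, interval=50)

html = ani.to_jshtml()  # self-contained JS player as an HTML string

# In a notebook cell you would then display it with:
#   from IPython.display import HTML
#   HTML(html)              # or ani.to_html5_video() if ffmpeg is installed
```

Keeping a reference to <code>ani</code> matters in notebooks too: if the animation object is garbage-collected, the plot freezes.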
|
<python><jupyter-notebook>
|
2024-11-22 09:20:56
| 0
| 569
|
user249018
|
79,214,094
| 11,460,896
|
How to Optimize Preprocessing and Post-Processing in DETR-Based Object Detection?
|
<h3>My Question:</h3>
<p>How can I reduce the time spent on preprocessing and post-processing?</p>
<h3>Background Information</h3>
<p>I'm implementing object detection on video frames using <a href="https://github.com/facebookresearch/detr" rel="nofollow noreferrer">DETR</a>. My system processes frames from a 30 FPS, 30-second video (900 frames total), which are read from a Redis queue in batches of 30 and processed on an NVIDIA RTX A4000 GPU. Despite the high-end hardware, I noticed that the GPU is not fully utilized. Here's a breakdown of my pipeline and timing results:</p>
<h4>Pipeline Overview</h4>
<ol>
<li>Frames are read from Redis and decoded.</li>
<li>The <code>DetrImageProcessor</code> processes the frames.</li>
<li>The DETR model performs inference.</li>
<li>Post-processing is applied to generate detection results.</li>
</ol>
<h3>Timing Results (Total: 74.38 seconds for 900 frames):</h3>
<h4>Key Observations:</h4>
<ul>
<li>Post-processing (<code>post_process_object_detection</code>) is the bottleneck, taking <strong>34.13 seconds</strong> for all frames.</li>
<li>Input preparation (<code>processor(images=frames, ...)</code>) also takes significant time: <strong>29.30 seconds</strong>.</li>
<li>Total detection time (<code>outputs = model(**inputs)</code>) takes 3.06 seconds.</li>
</ul>
<p>Hereβs my batch processing code:</p>
<p><strong><code>frame_consumer.py</code>:</strong></p>
<pre class="lang-py prettyprint-override"><code># Batch processing logic
def process_batch():
for _ in range(30):
image_bytes = redis_client.rpop(FRAME_QUEUE, 30)
imgs = decode_img_list(image_bytes)
results = model2.process_batch(imgs)
</code></pre>
<p><strong><code>model.py</code>:</strong></p>
<pre class="lang-py prettyprint-override"><code>import torch
from transformers import DetrImageProcessor, DetrForObjectDetection
from processing_timer import ProcessingTimer
import time
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"device: {device}")
processor = DetrImageProcessor.from_pretrained(
"facebook/detr-resnet-50", revision="no_timm"
)
model = DetrForObjectDetection.from_pretrained(
"facebook/detr-resnet-50", revision="no_timm"
).to(device)
model.eval()
def process_batch(frames):
inputs = processor(images=frames, return_tensors="pt").to(device)
with torch.no_grad():
outputs = model(**inputs)
target_sizes = torch.tensor([frame.shape[:2] for frame in frames]).to(device)
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes)
del outputs
del inputs
torch.cuda.empty_cache()
return results
</code></pre>
<h3>Additional Information:</h3>
<ul>
<li><strong>GPU:</strong> NVIDIA RTX A4000 16GB</li>
<li><strong>Processor:</strong> Intel Core i9-13900K</li>
<li><strong>CUDA version:</strong> 12.4</li>
<li><strong>Python version:</strong> 3.12</li>
<li><strong>PyTorch version:</strong> 2.5.1+cu124</li>
</ul>
<h3>What I've Tried:</h3>
<ul>
<li>Profiling individual pipeline steps to identify bottlenecks.</li>
<li>Clearing GPU cache after each batch.</li>
</ul>
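Besides vectorizing the post-processing itself, a common fix for a pipeline like this is to overlap the CPU-bound pre/post-processing with GPU inference, so batch N+1 is being prepared while batch N runs on the GPU. A runnable sketch of that pattern with a thread pool; the three functions are dummy stand-ins for the real <code>processor</code>/<code>model</code>/<code>post_process_object_detection</code> calls, not the DETR API:

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(batch):       # stand-in for processor(images=..., ...)
    return [x * 2 for x in batch]

def infer(inputs):           # stand-in for model(**inputs) on the GPU
    return [x + 1 for x in inputs]

def postprocess(outputs):    # stand-in for post_process_object_detection
    return [x - 1 for x in outputs]

batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
results = []

with ThreadPoolExecutor(max_workers=2) as pool:
    # Kick off preprocessing for every batch up front; while "the GPU"
    # works on batch N, batch N+1 is already being prepared on the CPU.
    prepared = [pool.submit(preprocess, b) for b in batches]
    for fut in prepared:
        outputs = infer(fut.result())
        results.append(pool.submit(postprocess, outputs))
    results = [f.result() for f in results]

print(results)  # [[2, 4, 6], [8, 10, 12], [14, 16, 18]]
```

Threads help when the pre/post-processing releases the GIL (as NumPy/PIL operations largely do); for pure-Python-heavy post-processing, a <code>ProcessPoolExecutor</code> with the same structure is the usual alternative.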
|
<python><deep-learning><pytorch><computer-vision><huggingface-transformers>
|
2024-11-22 07:53:34
| 0
| 307
|
birdalugur
|
79,213,940
| 991,234
|
Reading data from a CSV file does not give me the correct output
|
<p>I am trying to read some records from a CSV file. Some characters are prepended to the column data of the first record in the file.</p>
<p>Python code:</p>
<pre><code>import csv
with open('test.csv','r') as csvfile:
spamreader = csv.reader(csvfile)
for row in spamreader:
print(row)
</code></pre>
<p>CSV file content:</p>
<pre><code>SO 5/4-3
DO 7015/2-1
</code></pre>
<p>There is only one column in the CSV file. Just copying the file contents above and changing the file extension to .csv should be enough to reproduce the issue.</p>
<p>Output:</p>
<pre><code>['SO 5/4-3']
['DO 7015/2-1']
</code></pre>
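Characters glued to the first field are very often a UTF-8 byte-order mark (BOM, bytes <code>\xef\xbb\xbf</code>) written by the editor that saved the file; opening with <code>encoding='utf-8-sig'</code> strips it transparently. A sketch that reproduces and fixes this (written to a temp file so it is self-contained):

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), 'test.csv')

# Simulate a CSV saved with a UTF-8 BOM (e.g. by Excel or Windows Notepad).
with open(path, 'w', encoding='utf-8-sig', newline='') as f:
    f.write('SO 5/4-3\nDO 7015/2-1\n')

# Plain utf-8 leaves the BOM glued to the first field...
with open(path, 'r', encoding='utf-8') as f:
    first = next(csv.reader(f))

# ...while utf-8-sig strips it.
with open(path, 'r', encoding='utf-8-sig') as f:
    rows = list(csv.reader(f))

print(first)  # ['\ufeffSO 5/4-3']
print(rows)   # [['SO 5/4-3'], ['DO 7015/2-1']]
```

If the prepended characters are something other than a BOM, printing <code>repr(row[0])</code> for the first row will reveal exactly which bytes are involved.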
|
<python>
|
2024-11-22 06:55:05
| 0
| 2,295
|
Joshua
|
79,213,910
| 988,279
|
How to compare two dicts with deepdiff and modify dict in place?
|
<p>I have two lists of dicts and compare them with deepdiff.
How can I modify/overwrite the values in dict_1 with the modified values of dict_2 in place?</p>
<pre><code>import deepdiff
dict_1 = [{"id": "first", "name": "first"}, {"id": "second", "name": "second"}, {"id": "third", "name": "third"}]
dict_2 = [{"id": "first", "name": "first"}, {"id": "second modified", "name": "second modified"}, {"id": "third", "name": "third"}]
diff = deepdiff.DeepDiff(dict_1, dict_2).get('values_changed',{})
print(diff)
</code></pre>
<p>This results to:</p>
<pre><code>{"root[1]['id']": {'new_value': 'second modified', 'old_value': 'second'}, "root[1]['name']": {'new_value': 'second modified', 'old_value': 'second'}}
</code></pre>
<p>How can I process the results of Deepdiff? The result should be:</p>
<pre><code>dict_1 = [{"id": "first", "name": "first"}, {"id": "second modified", "name": "second modified"}, {"id": "third", "name": "third"}]
</code></pre>
<p>Addendum:
If an in-place replacement is not possible, a newly created dict_3 would also be OK.</p>
|
<python>
|
2024-11-22 06:47:23
| 1
| 522
|
saromba
|
79,213,043
| 876,201
|
Installing Intel TBB on Mac OS Big Sur
|
<p>Due to certain dependencies, we are restricted to macOS Big Sur for the moment. We are trying to install <code>Intel tbb</code> via Homebrew with <code>brew install tbb</code>, but it fails with the following error. Based on the error message, it appears Homebrew tries to install <code>Python 3.13.0</code>, looks for Tkinter while doing so, and is unable to find it. So we tried <code>brew install python-tk</code>. Homebrew decided that Python 3.13.0 is a dependency and tried to install the latter first, so we are stuck in a cycle. Could you see where we are going wrong, and how we could fix this issue so as to have tbb up and running? Thank you very much!</p>
<pre><code>==> Installing dependencies for tbb: python-setuptools, python@3.13, pcre2, swig and hwloc
==> Installing tbb dependency: python-setuptools
==> Downloading https://ghcr.io/v2/homebrew/core/python-setuptools/manifests/75.6.0
Already downloaded: /Users/senthil/Library/Caches/Homebrew/downloads/78d060d30c6c92fd7d0f6232745566828cf6b8bc6c56732c79acd6daf82d7fda--python-setuptools-75.6.0.bottle_manifest.json
==> Pouring python-setuptools--75.6.0.all.bottle.tar.gz
πΊ /usr/local/Cellar/python-setuptools/75.6.0: 984 files, 8.2MB
==> Installing tbb dependency: python@3.13
Warning: Your Xcode (12.5.1) is outdated.
Please update to Xcode 13.2.1 (or delete it).
Xcode can be updated from the App Store.
==> Patching
==> Applying 3.13-sysconfig.diff
==> ./configure --enable-ipv6 --datarootdir=/usr/local/Cellar/python@3.13/3.13.0_1/share --datadir=/usr/local/Cellar/python@3.13/3.13.0_1/s
==> make
==> make install PYTHONAPPSDIR=/usr/local/Cellar/python@3.13/3.13.0_1
Last 15 lines from /Users/senthil/Library/Logs/Homebrew/python@3.13/03.make:
/usr/bin/install -c -m 755 Modules/_socket.cpython-313-darwin.so /usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload/_socket.cpython-313-darwin.so
/usr/bin/install -c -m 755 Modules/syslog.cpython-313-darwin.so /usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload/syslog.cpython-313-darwin.so
/usr/bin/install -c -m 755 Modules/termios.cpython-313-darwin.so /usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload/termios.cpython-313-darwin.so
/usr/bin/install -c -m 755 Modules/_posixshmem.cpython-313-darwin.so /usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload/_posixshmem.cpython-313-darwin.so
/usr/bin/install -c -m 755 Modules/_multiprocessing.cpython-313-darwin.so /usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload/_multiprocessing.cpython-313-darwin.so
/usr/bin/install -c -m 755 Modules/_ctypes.cpython-313-darwin.so /usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload/_ctypes.cpython-313-darwin.so
/usr/bin/install -c -m 755 Modules/_curses.cpython-313-darwin.so /usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload/_curses.cpython-313-darwin.so
/usr/bin/install -c -m 755 Modules/_curses_panel.cpython-313-darwin.so /usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload/_curses_panel.cpython-313-darwin.so
/usr/bin/install -c -m 755 Modules/_sqlite3.cpython-313-darwin.so /usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload/_sqlite3.cpython-313-darwin.so
/usr/bin/install -c -m 755 Modules/_ssl.cpython-313-darwin.so /usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload/_ssl.cpython-313-darwin.so
/usr/bin/install -c -m 755 Modules/_hashlib.cpython-313-darwin.so /usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload/_hashlib.cpython-313-darwin.so
/usr/bin/install -c -m 755 Modules/_uuid.cpython-313-darwin.so /usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload/_uuid.cpython-313-darwin.so
/usr/bin/install -c -m 755 Modules/_tkinter.cpython-313-darwin.so /usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload/_tkinter.cpython-313-darwin.so
install: Modules/_tkinter.cpython-313-darwin.so: No such file or directory
make: *** [sharedinstall] Error 71
</code></pre>
|
<python><tkinter><homebrew><tbb>
|
2024-11-21 21:58:31
| 0
| 350
|
mskb
|
79,212,904
| 850,781
|
Why is tz-naive Timestamp converted to integer while tz-aware is kept as Timestamp?
|
<p><strong>Understandable and expected</strong> (tz-aware):</p>
<pre><code>import datetime
import numpy as np
import pandas as pd
aware = pd.DatetimeIndex(["2024-11-21", "2024-11-21 12:00"], tz="UTC")
eod = datetime.datetime.combine(aware[-1].date(), datetime.time.max, aware.tz)
aware, eod, np.concat([aware, [eod]])
</code></pre>
<p>returns</p>
<pre><code>(DatetimeIndex(['2024-11-21 00:00:00+00:00', '2024-11-21 12:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq=None),
datetime.datetime(2024, 11, 21, 23, 59, 59, 999999,
tzinfo=datetime.timezone.utc),
array([Timestamp('2024-11-21 00:00:00+0000', tz='UTC'),
Timestamp('2024-11-21 12:00:00+0000', tz='UTC'),
datetime.datetime(2024, 11, 21, 23, 59, 59, 999999,
tzinfo=datetime.timezone.utc)],
dtype=object))
</code></pre>
<p>note <strong>Timestamps</strong> (and a <code>datetime</code>) in the return value of <code>np.concat</code>.</p>
<p><strong>Unexpected</strong> (tz-naive):</p>
<pre><code>naive = pd.DatetimeIndex(["2024-11-21", "2024-11-21 12:00"])
eod = datetime.datetime.combine(naive[-1].date(), datetime.time.max)
naive, eod, np.concat([naive, [eod]])
</code></pre>
<p>returns</p>
<pre><code>(DatetimeIndex(['2024-11-21 00:00:00', '2024-11-21 12:00:00'],
dtype='datetime64[ns]', freq=None),
datetime.datetime(2024, 11, 21, 23, 59, 59, 999999),
array([1732147200000000000, 1732190400000000000,
datetime.datetime(2024, 11, 21, 23, 59, 59, 999999)], dtype=object))
</code></pre>
<p>Note <strong>integers</strong> (and a <code>datetime</code>) in the return value of <code>np.concat</code>.</p>
<ol>
<li>why do I get integers in the concatenated array for a tz-naive index?</li>
<li>how do I avoid it? I.e., how do I append EOD to a tz-naive <code>DatetimeIndex</code>?</li>
</ol>
<p><strong>PS</strong>: interestingly enough, at the <code>numpy</code> level the indexes are identical:</p>
<pre><code>np.testing.assert_array_equal(aware.values, naive.values)
</code></pre>
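The integers appear because a tz-naive <code>DatetimeIndex</code> exposes itself to NumPy as a <code>datetime64[ns]</code> array, and casting <code>datetime64[ns]</code> values to <code>object</code> yields raw nanosecond integers (nanoseconds are too fine for <code>datetime.datetime</code>), whereas a tz-aware index already converts to an object array of <code>Timestamp</code>s. A sketch of one workaround, not from the original question: build the extended index inside pandas so the datetime is coerced to a <code>Timestamp</code>:

```python
import datetime
import pandas as pd

naive = pd.DatetimeIndex(["2024-11-21", "2024-11-21 12:00"])
eod = datetime.datetime.combine(naive[-1].date(), datetime.time.max)

# let pandas coerce the datetime instead of concatenating via numpy,
# which would cast the datetime64[ns] values to integers
extended = pd.DatetimeIndex(list(naive) + [eod])
```

<code>extended</code> stays a proper <code>DatetimeIndex</code> with <code>eod</code> as its last entry, instead of an object array of integers.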
|
<python><pandas><numpy><datetime><timezone>
|
2024-11-21 20:51:27
| 1
| 60,468
|
sds
|
79,212,853
| 1,033,217
|
SWIG Hello World; ImportError: dynamic module does not define module export function
|
<p>This is supposed to be the absolute minimum Hello World using SWIG, C, and setuptools. But the following exception is raised when the module is imported:</p>
<pre class="lang-none prettyprint-override"><code>>>> import hello
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
import hello
ImportError: dynamic module does not define module export function (PyInit_hello)
</code></pre>
<p>This is the directory structure:</p>
<pre class="lang-none prettyprint-override"><code>README.md
pyproject.toml
src
src/hc
src/hc/hello.c
src/hc/hello.i
</code></pre>
<p>Here is the <code>pyproject.toml</code></p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools>=75.6"]
build-backend = "setuptools.build_meta"
[project]
name = "helloc"
version = "0.0.1"
authors = [
{ name = "Foo" }
]
description = "Hello world SWIG C"
readme = "README.md"
requires-python = ">=3.13"
classifiers = [
"Development Status :: 1 - Planning",
"License :: Public Domain",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.13"
]
[tool.setuptools]
ext-modules = [
{ name = "hello", sources = ["src/hc/hello.c", "src/hc/hello.i"] }
]
</code></pre>
<p>Here is <code>hello.c</code></p>
<pre class="lang-c prettyprint-override"><code>#include <stdio.h>
void say_hello() {
printf("Hello, World!\n");
}
</code></pre>
<p>And here is the interface file:</p>
<pre class="lang-none prettyprint-override"><code>%module hello
%{
void say_hello();
%}
void say_hello();
</code></pre>
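One thing worth checking (my assumption, not stated in the question): for Python, SWIG's generated wrapper for <code>%module hello</code> exports <code>PyInit__hello</code> and emits a <code>hello.py</code> proxy, so the compiled extension must be named <code>_hello</code>, while this <code>pyproject.toml</code> builds an extension named <code>hello</code> — which would produce exactly the reported <code>PyInit_hello</code> error. A sketch of the convention-matching config:

```toml
[tool.setuptools]
ext-modules = [
    # SWIG's generated wrapper exports PyInit__hello, so the
    # extension must be named _hello; SWIG also emits hello.py
    { name = "_hello", sources = ["src/hc/hello.c", "src/hc/hello.i"] }
]
```

If this diagnosis is right, the generated <code>hello.py</code> proxy must also be shipped with the package (or you can <code>import _hello</code> directly).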
|
<python><c><setuptools><swig>
|
2024-11-21 20:29:34
| 1
| 795
|
Utkonos
|
79,212,852
| 5,056,387
|
Constraint to forbid NaN in postgres numeric columns using Django ORM
|
<p>Postgresql allows <code>NaN</code> values in numeric columns <a href="https://www.postgresql.org/docs/current/datatype-numeric.html" rel="nofollow noreferrer">according to its documentation here.</a></p>
<p>When defining Postgres tables using <code>Django ORM</code>, a <code>DecimalField</code> is translated to a <code>numeric</code> column in Postgres, even if you define the column as below:</p>
<pre class="lang-none prettyprint-override"><code>from django.db import models
# You can insert NaN to this column without any issue
numeric_field = models.DecimalField(max_digits=32, decimal_places=8, blank=False, null=False)
</code></pre>
<p>Is there a way to use Python/Django syntax to forbid <code>NaN</code> values in this scenario? The Postgres native solution is to probably use some kind of constraint. But is that possible using Django syntax?</p>
<p><strong>Edit: As willeM_ Van Onsem pointed out, Django does not allow <code>NaN</code> to be inserted to <code>DecimalField</code> natively. However, the DB is manipulated from other sources as well, hence, the need to have an extra constraint at the DB level (as opposed to Django's built-in application level constraint).</strong></p>
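A sketch of a database-level guard, not taken from the question (the model and constraint names are hypothetical, so verify the generated migration SQL before relying on it): Postgres compares <code>NaN</code> equal to <code>NaN</code> for <code>numeric</code> columns, so a negated equality inside a <code>CheckConstraint</code> can exclude it. Note that Django 5.1+ uses the keyword <code>condition</code> instead of <code>check</code>.

```python
from decimal import Decimal
from django.db import models


class Measurement(models.Model):  # hypothetical model name
    numeric_field = models.DecimalField(max_digits=32, decimal_places=8)

    class Meta:
        constraints = [
            # Postgres treats NaN = NaN as true for numeric columns,
            # so this CHECK rejects NaN inserts from any client
            models.CheckConstraint(
                check=~models.Q(numeric_field=Decimal("NaN")),
                name="numeric_field_not_nan",
            ),
        ]
```

This should emit roughly <code>CHECK (NOT (numeric_field = 'NaN'))</code> at the database level, which also protects against writes from non-Django clients.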
|
<python><django><postgresql><orm>
|
2024-11-21 20:28:55
| 2
| 722
|
szamani20
|
79,212,797
| 4,588,188
|
DRF serializer missing fields after validation
|
<p>I have a simple DRF serializer:</p>
<pre><code>class CustomerFeatureSerializer(serializers.Serializer):
"""
Serializer for a customer feature.
"""
feature_id = serializers.CharField()
feature_value = serializers.SerializerMethodField()
feature_type = serializers.CharField()
name = serializers.CharField()
def get_feature_value(self, obj):
"""
Return the feature_value of the feature.
"""
return obj.get("feature_value", None)
</code></pre>
<p>the <code>feature_value</code> field is a method field since it can be either a boolean, an integer, or a string value (so for now I just kept it as a method field to fetch the value).</p>
<p>I am using this to serialize some objects that I am hand-generating (they do not correspond 1:1 with a model) but for some reason after validation the <code>name</code> and <code>feature_value</code> fields are just completely disappearing.</p>
<pre><code>for feature in toggleable_features:
display = {
"feature_id": feature.internal_name,
"name": feature.name,
"feature_value": profile.feature_value(feature)
if profile.has_feature(feature.internal_name, use_cache=False)
else False,
"feature_type": feature.type,
}
features.append(display)
print(display)
print(features)
serializer = CustomerFeatureSerializer(data=features, many=True)
print(serializer.initial_data)
serializer.is_valid(raise_exception=True)
print(serializer.validated_data)
return Response(serializer.validated_data)
</code></pre>
<p>and the print statements output:</p>
<pre><code>{'feature_id': 'feat-id', 'name': 'feat name', 'feature_value': True, 'feature_type': 'boolean'}
[{'feature_id': 'feat-id', 'name': 'feat name', 'feature_value': True, 'feature_type': 'boolean'}]
[{'feature_id': 'feat-id', 'name': 'feat name', 'feature_value': True, 'feature_type': 'boolean'}]
[OrderedDict([('feature_id', 'feat-id'), ('feature_type', 'boolean'), ('name', 'feat name')])]
</code></pre>
<p>the value is just gone. Any idea why this could be happening? Is the object I am serializing not valid somehow?</p>
|
<python><django><serialization><django-rest-framework>
|
2024-11-21 20:08:17
| 0
| 618
|
AJwr
|
79,212,751
| 19,318,120
|
Celery rabbitmq, multiple consumers consuming the same task
|
<p>As the title says, I have RabbitMQ on a server and 6 machines connected to it with Celery on them. The same task is received by two different machines:</p>
<p>machine 1:</p>
<pre><code>[2024-11-21 19:15:12,181: INFO/MainProcess] Task tasks.task_name[ef00cc1f-1be5-44ba-8911-90c0746196ba] received
</code></pre>
<p>machine 2:</p>
<pre><code>[2024-11-21 19:04:29,949: INFO/MainProcess] Task tasks.task_name[ef00cc1f-1be5-44ba-8911-90c0746196ba] received
</code></pre>
<p>celery config:</p>
<pre><code>import os
from celery import Celery
import settings
import sys
sys.path.insert(0, f"{settings.BASE_DIR.parent}/")
broker_url = "amqp://username:password@machine_ip:rabbitmq_port/"
app = Celery(broker=broker_url)
app.conf.update(
task_acks_late=True,
broker_transport_options={'visibility_timeout': 3600}, # Adjust timeout as needed
broker_connection_retry_on_startup = True,
imports = ['tasks']
)
app.autodiscover_tasks()
# celery -A celery_app worker --pool=solo -Q queue_name -l info --logfile logs/celery.logs
</code></pre>
<p>I'm also using docker compose for rabbitmq</p>
<p>config:</p>
<pre><code>rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5677:5672"
- "15677:15672"
environment:
RABBITMQ_DEFAULT_USER: "username"
RABBITMQ_DEFAULT_PASS: "password"
RABBITMQ_DEFAULT_VHOST: "/"
volumes:
- ./rabbitmq_data:/var/lib/rabbitmq
</code></pre>
<p>celery task:</p>
<pre><code>@app.task(queue="queue_name")
def task_name(batch_s3_path, task_id, api_data, index):
pass
</code></pre>
<p>what am I missing? why is this happening?</p>
|
<python><rabbitmq><celery>
|
2024-11-21 19:55:00
| 0
| 484
|
mohamed naser
|
79,212,641
| 1,306,892
|
"ValueError: unsupported format character 'I' (0x49) at index 34" in Python LaTeX generation
|
<p>Iβm trying to generate a LaTeX document using Python. Hereβs my code:</p>
<pre class="lang-py prettyprint-override"><code>import sympy as sp
import numpy as np
import random
import matplotlib.pyplot as plt
import string
latex_document = ""
for counter in range(1, 301):
# Random parameters for the function
k = random.randint(2, 9)
x0 = random.choice([i for i in range(1, 9) if i != 0])
y0 = random.choice([i for i in range(1, 9) if i != 0])
# Initialize the LaTeX document
latex_document += r"""
% Positioning of the string of zeros in the top-right corner
\begin{textblock*}{2cm}(17cm, 1cm) % Modify position and size as needed
\textbf{000""" + str(counter) + r"""}
\end{textblock*}
\begin{center}
\large{{\bf Name of the class}}
\vspace{0.2cm}
{\bf November 29, 2026 }\\[.2cm]
Name of the teacher \\[1ex]
\vspace{0.3cm}
Surname, Name, Student ID .........................................................................
\end{center}
\vspace{0.5cm}
\normalsize
\begin{enumerate}
\item Write the function
\[
f(x, y) = \sqrt{{%d - (x - %d)^2 - (y - %d)^2}} + \frac{1}{x - %d}
\]
""" % (k, x0, y0)
# Close the document
latex_document += r"""
\end{enumerate}"""
# Save the LaTeX document to a file
with open('quiz.tex', 'w') as f:
f.write(latex_document)
print("LaTeX document generated and saved as 'quiz.tex'.")
</code></pre>
<p>When I run it, I get this error:</p>
<pre class="lang-none prettyprint-override"><code>ValueError: unsupported format character 'I' (0x49) at index 34
</code></pre>
<p>I suspect it has something to do with how Iβm formatting the string, but I canβt figure out why. How can I fix this issue?</p>
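With the <code>%</code> operator, every <code>%</code> in the template — including LaTeX comment markers such as <code>% Positioning</code> — is parsed as the start of a format spec, which is what raises the "unsupported format character" error (the snippet also has four <code>%d</code> placeholders but only three values in the tuple). One possible fix, sketched here rather than taken from the original code, is <code>string.Template</code>, whose <code>$</code>-placeholders leave <code>%</code> untouched:

```python
from string import Template

# LaTeX '%' comments are safe because Template only looks for '$'
tmpl = Template(r"""
% a literal LaTeX comment, left untouched
\[
f(x, y) = \sqrt{$k - (x - $x0)^2 - (y - $y0)^2} + \frac{1}{x - $x0}
\]
""")

out = tmpl.substitute(k=5, x0=2, y0=3)
print(out)
```

The alternative is to keep <code>%</code>-formatting but double every literal percent (<code>%%</code>, e.g. <code>%% Positioning</code>) and make the value tuple match the number of <code>%d</code> placeholders.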
|
<python>
|
2024-11-21 19:17:56
| 1
| 1,801
|
Mark
|
79,212,561
| 16,611,809
|
Equivalent to pandas.to_csv() without filename for polars?
|
<p>Is there an equivalent of casting a Polars DataFrame to a string like for a Pandas DataFrame using <code>pandasdf.to_csv()</code> without a file path ("If None, the result is returned as a string.")?</p>
<p>Similar to this question (<a href="https://stackoverflow.com/questions/77238309/how-can-i-cast-a-polars-dataframe-to-string">How can I cast a Polars DataFrame to string</a>), as a workaround I first cast the Polars DataFrame to a Pandas DataFrame and then use <code>to_csv()</code> without a filename, which returns the desired tab-delimited string of the whole Pandas DataFrame:</p>
<pre><code>polarsdf.to_pandas().to_csv(sep='\t', index=False)
</code></pre>
<p>What I ultimately want to achieve with that is to download the Polars DataFrame in a Shiny for Python app (similar to this question: <a href="https://stackoverflow.com/questions/76017296/how-to-add-download-button-in-shiny-for-pandas-dataframe">How to add "download button" in shiny for pandas dataframe</a>)</p>
<p>This I currently achieve using this code:</p>
<pre><code>def download_something():
yield polarsdf.to_pandas().to_csv(sep='\t', index=False)
</code></pre>
<p>If there is a way of doing this without casting the Polars DataFrame to a string, that would also be a sufficient answer to me.</p>
|
<python><python-polars><py-shiny>
|
2024-11-21 18:46:12
| 1
| 627
|
gernophil
|
79,212,507
| 16,611,809
|
Is there an ui.update_input_file and ui.update_download_button in Shiny for Python?
|
<p>I have a Shiny for Python app that uses <code>ui.input_file()</code> and <code>ui.download_button()</code>. For most of the UI inputs there exists a corresponding <code>ui.update_...()</code> with which you can activate/deactivate a button (<code>ui.update_action_button(id="some_id", disabled=False)</code>) or simply reset an input (<code>ui.update_text("some_other_id", value='')</code>).</p>
<p>Is there a way to clear the selected file in an <code>ui.input_file()</code> input or to enable disable an <code>ui.download_button</code>? I know these are two questions, but they are so closely related that I didn't want to make this two questions.</p>
|
<python><py-shiny>
|
2024-11-21 18:26:53
| 1
| 627
|
gernophil
|
79,212,385
| 670,338
|
Basemap nightshade() on Robinson Projection and lon_0=-180
|
<p>I'm attempting to plot day/night shading on a Robinson projection centered at -180 degrees with Basemap, and as you can see, the shading doesn't look right. I'm also getting a warning about a non-monotonically increasing x coordinate. Maybe there's some way of using shiftgrid to fix the nightshade? Any suggestions would be greatly appreciated!</p>
<blockquote>
<p>WARNING: x coordinate not montonically increasing - contour plot
may not be what you expect. If it looks odd, your can either
adjust the map projection region to be consistent with your data, or
(if your data is on a global lat/lon grid) use the shiftgrid
function to adjust the data to be consistent with the map projection
region (see examples/contour_demo.py).</p>
</blockquote>
<p><a href="https://i.sstatic.net/KPjnAR2G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KPjnAR2G.png" alt="enter image description here" /></a></p>
<p>Here is some toy code to reproduce this plot:</p>
<pre><code>from mpl_toolkits.basemap import Basemap
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
# lon_0 is central longitude of projection.
# resolution = 'c' means use crude resolution coastlines.
m = Basemap(projection='robin',lon_0=-180,resolution='c')
m.drawcoastlines()
m.fillcontinents(color='coral',lake_color='aqua')
# draw parallels and meridians.
m.drawparallels(np.arange(-90.,120.,30.))
m.drawmeridians(np.arange(0.,360.,60.))
m.drawmapboundary(fill_color='aqua')
date = datetime(2024,10,10,6)
m.nightshade(date,alpha=0.2)
plt.title("Robinson Projection")
plt.savefig('test.png')
</code></pre>
|
<python><matplotlib><matplotlib-basemap><map-projections>
|
2024-11-21 17:44:04
| 0
| 914
|
GPSmaster
|
79,212,376
| 3,486,684
|
How can I change the selections presented by an Altair chart based on filtering performed by another selection?
|
<p>Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import polars as pl
import altair as alt
import numpy as np
# ============================================
# Creating the dummy data dataframe
channels = pl.DataFrame(pl.Series("channel", ["a", "b"]))
flavors = pl.DataFrame(pl.Series("flavor", ["spicy", "mild"]))
dates = pl.DataFrame(pl.Series("date", [0, 1]))
flavors_dates = flavors.join(dates, how="cross")
channels_flavors_dates = channels.join(flavors_dates, how="cross").filter(
~(pl.col.channel.eq("a") & pl.col.flavor.ne("spicy"))
)
rng = np.random.default_rng(seed=0)
df = channels_flavors_dates.with_columns(
pl.Series(
"stock",
rng.integers(0, 11, size=len(channels_flavors_dates)),
)
)
sys.displayhook(df)
# ============================================
channel_selection = alt.selection_point(
"channel_selection",
fields=["channel"],
bind=alt.binding_select(
options=list(df["channel"].unique()), name="channel_selector"
),
)
# the problem is that this is initialized with too many flavor options, as it is
# insensitive to the channel selection
flavor_selection = alt.selection_point(
"flavor_selection",
fields=["flavor"],
# options is a required property...
bind=alt.binding_select(
options=list(df["flavor"].unique()), name="flavor_selector"
),
)
alt.Chart(df).mark_bar().encode(alt.X("date:O"), alt.Y("stock:Q")).add_params(
channel_selection, flavor_selection
).transform_filter(channel_selection).transform_filter(flavor_selection)
</code></pre>
<p>The dummy dataframe created is:</p>
<pre><code>shape: (6, 4)
βββββββββββ¬βββββββββ¬βββββββ¬ββββββββ
β channel β flavor β date β stock β
β --- β --- β --- β --- β
β str β str β i64 β i64 β
βββββββββββͺβββββββββͺβββββββͺββββββββ‘
β a β spicy β 0 β 9 β
β a β spicy β 1 β 7 β
β b β spicy β 0 β 5 β
β b β spicy β 1 β 2 β
β b β mild β 0 β 3 β
β b β mild β 1 β 0 β
βββββββββββ΄βββββββββ΄βββββββ΄ββββββββ
</code></pre>
<p>Note that channel <code>a</code> only has <code>spicy</code> flavor available, while <code>b</code> has both <code>spicy</code> and <code>mild</code>.</p>
<p>However, the chart that is created does not update the flavors available for selection by the user based on the channel they select. This is because the <code>flavor</code> selector takes a fixed list of options (a required argument) that is insensitive to the <code>transform_filter</code> performed on the chart after <code>channel</code> selection.</p>
<p>Is there a way for me to update the options presented by <code>flavor</code> based on the selection the user has made for <code>channel</code>? There might be a way using <a href="https://altair-viz.github.io/user_guide/jupyter_chart.html#offline-usage" rel="nofollow noreferrer">JupyterChart</a>, but I am not sure whether that produces an HTML chart that is similarly interactive.</p>
|
<python><altair>
|
2024-11-21 17:41:28
| 0
| 4,654
|
bzm3r
|
79,212,366
| 1,435,803
|
Read same file from src and test in PyCharm
|
<p>Where do I put a resource file so that it can be accessed from both <code>src</code> and <code>test</code> in PyCharm?</p>
<p>I've got a PyCharm project structured like this:</p>
<pre><code>src
scrabble
board.py
words.txt
test
scrabble
test_board.py
</code></pre>
<p><code>board.py</code> contains this line:</p>
<pre class="lang-py prettyprint-override"><code>with open('words.txt') as file:
DICTIONARY = set(word.strip() for word in file)
</code></pre>
<p>Within <code>test_board.py</code>, which is a pytest file, I have this:</p>
<pre class="lang-py prettyprint-override"><code>from scrabble.board import *
</code></pre>
<p>This works fine if I run <code>board.py</code>, but if I run <code>test_board.py</code> I get:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'words.txt'
</code></pre>
<p>Where can I put <code>words.txt</code> so that it can be accessed from either place? What path should I use in <code>board.py</code>?</p>
<p>Edit: Someone pointed to <a href="https://stackoverflow.com/questions/50499/how-do-i-get-the-path-and-name-of-the-python-file-that-is-currently-executing">How do I get the path and name of the python file that is currently executing?</a>, which answers a different but related question. Indeed, if I put</p>
<pre class="lang-py prettyprint-override"><code>import os
path = os.path.realpath('words.txt')
print(path)
</code></pre>
<p>inside <code>board.py</code>, I see different results when I run <code>board.py</code></p>
<pre><code>/Users/drake/PycharmProjects/algo_f24_solutions/src/scrabble/words.txt
</code></pre>
<p>vs when I run the tests in <code>test_board.py</code>:</p>
<pre><code>/Users/drake/PycharmProjects/algo_f24_solutions/test/words.txt
</code></pre>
<p>The same code is looking for the same file in a different place depending on where it is run. Yes, I could hard-code the full path, but that wouldn't work if someone cloned this repo on another machine.</p>
<p>My question, clarified: <em>within PyCharm</em>, where can I put the file so that the same code can access it no matter where the code is run?</p>
<p>This is an assignment I'm giving to students, so I'd rather not ask them to reconfigure path settings or anything like that. I want it to work "out of the box".</p>
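A common pattern, sketched here under the assumption that <code>words.txt</code> lives next to <code>board.py</code> (the helper name is made up for illustration), is to resolve the path relative to the module file instead of the current working directory, so it works no matter where PyCharm or pytest launches the process:

```python
from pathlib import Path

def load_dictionary(module_file: str) -> set:
    # resolve words.txt next to the given module file, not the CWD
    base = Path(module_file).resolve().parent
    with open(base / "words.txt") as f:
        return {word.strip() for word in f}

# inside board.py you would call: DICTIONARY = load_dictionary(__file__)
```

Since the path is anchored to <code>__file__</code>, it also survives cloning the repo to another machine with no configuration.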
|
<python><pycharm><pytest>
|
2024-11-21 17:39:41
| 2
| 462
|
Peter Drake
|
79,212,165
| 4,118,462
|
How does Pandas.Series.nbytes work for strings? Results don't seem to match expectations
|
<p>The help doc for <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.nbytes.html" rel="nofollow noreferrer">pandas.Series.nbytes</a> shows the following example:</p>
<pre><code>s = pd.Series(['Ant', 'Bear', 'Cow'])
s
</code></pre>
<pre><code>0     Ant
1    Bear
2     Cow
dtype: object
</code></pre>
<pre><code>s.nbytes
</code></pre>
<pre><code>24
</code></pre>
<p>(end of the documentation example)</p>
<p>How is that 24 bytes?<br />
I tried looking at three different encodings, none of which seems to yield that total.</p>
<pre><code>print(s.str.encode('utf-8').str.len().sum())
print(s.str.encode('utf-16').str.len().sum())
print(s.str.encode('ascii').str.len().sum())
</code></pre>
<pre><code>10
26
10
</code></pre>
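For an object-dtype Series, <code>nbytes</code> counts only the 8-byte pointers held in the underlying NumPy array (3 × 8 = 24 on a 64-bit build), not the string payloads those pointers reference — which is why no encoding's byte count matches. A sketch:

```python
import sys
import pandas as pd

s = pd.Series(["Ant", "Bear", "Cow"], dtype=object)

# nbytes = element count * pointer size of the object array
print(s.nbytes, s.size * s.to_numpy().itemsize)

# the string objects themselves live elsewhere and are larger
payload = sum(sys.getsizeof(v) for v in s)
print(payload)
```

So <code>nbytes</code> is a lower bound on memory for object columns; <code>s.memory_usage(deep=True)</code> is the pandas way to include the string payloads.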
|
<python><pandas>
|
2024-11-21 16:43:40
| 1
| 395
|
MCornejo
|
79,212,123
| 3,241,653
|
Unauthorized to Delete Tweet with OAuth2.0 path on free-tier developer account
|
<h2>Context</h2>
<ul>
<li><p>I'm using a free-tier Twitter / X developer account.</p>
</li>
<li><p>Admittedly, the OAuth stuff confuses me. I have the following key-values in my .env (values removed), not knowing which are actually necessary and which aren't.</p>
<pre><code>API_KEY=
API_SECRET=
ACCESS_TOKEN=
ACCESS_TOKEN_SECRET=
CLIENT_ID=
CLIENT_SECRET=
BEARER_TOKEN=
</code></pre>
</li>
</ul>
<h2>The problem</h2>
<p>The following is a minimal and (semi-)complete example Twitter / X Flask client. It uses Tweepy and OAuth2.0 to authorize with Twitter / X via an auth url and callback flow, gets the authorized user's info, and gets the auth'd users last 10 tweets. These pieces work as expected.</p>
<p>The issue is when I try to use the delete flow (clicking the "Delete" button on the example), I get a 401 Unauthorized error.</p>
<p>How do I correct this so that I can delete my tweets?</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, jsonify, redirect, request, session, render_template_string
import tweepy
import os
from dotenv import load_dotenv
load_dotenv()
app = Flask(__name__)
app.secret_key = os.getenv("FLASK_SECRET_KEY", "supersecretkey")
CLIENT_ID = os.getenv("CLIENT_ID")
CLIENT_SECRET = os.getenv("CLIENT_SECRET")
CALLBACK_URL = "http://127.0.0.1:5000/callback"
oauth2_handler = tweepy.OAuth2UserHandler(
client_id=CLIENT_ID,
client_secret=CLIENT_SECRET,
redirect_uri=CALLBACK_URL,
scope=["tweet.read", "tweet.write", "users.read", "offline.access"],
)
@app.route("/", methods=["GET"])
def auth_redirect():
try:
auth_url = oauth2_handler.get_authorization_url()
return redirect(auth_url)
except Exception as e:
return jsonify({"error": str(e)}), 500
@app.route("/callback", methods=["GET"])
def callback():
try:
code = request.args.get("code")
if not code:
return jsonify({"error": "Missing authorization code"}), 400
access_token = oauth2_handler.fetch_token(request.url)
session["access_token"] = access_token
client = tweepy.Client(access_token["access_token"], wait_on_rate_limit=True)
user = client.get_me(user_auth=False)
tweets = client.get_users_tweets(id=user.data.id, max_results=10)
tweet_list = (
"".join(
[
f"""
<li>{tweet.text}
<form action="/delete_tweet/{tweet.id}" method="POST">
<button type="submit">Delete</button>
</form>
</li>
"""
for tweet in (tweets.data or [])
]
)
or "<p>No tweets found.</p>"
)
return render_template_string(
f"""
<h1>Authorization Successful</h1>
<h2>User Information</h2>
<ul>
<li><strong>ID:</strong> {user.data.id}</li>
<li><strong>Username:</strong> {user.data.username}</li>
<li><strong>Name:</strong> {user.data.name}</li>
</ul>
<h2>Last 10 Tweets</h2>
<ul>{tweet_list}</ul>
"""
)
except Exception as e:
return jsonify({"error": str(e)}), 500
@app.route("/delete_tweet/<string:tweet_id>", methods=["POST"])
def delete_tweet(tweet_id):
try:
access_token = session.get("access_token")
if not access_token:
return jsonify({"error": "Access token not found"}), 400
client = tweepy.Client(
consumer_key=CLIENT_ID,
consumer_secret=CLIENT_SECRET,
access_token=access_token["access_token"],
wait_on_rate_limit=True,
)
response = client.delete_tweet(tweet_id)
if response.data:
return jsonify({"message": f"Tweet {tweet_id} deleted successfully"})
return jsonify({"error": "Failed to delete tweet"}), 500
except Exception as e:
return jsonify({"error": str(e)}), 500
if __name__ == "__main__":
app.run(debug=True)
</code></pre>
|
<python><twitter><oauth-2.0><tweepy><twitter-oauth>
|
2024-11-21 16:30:29
| 1
| 467
|
geofflittle
|
79,212,072
| 2,108,771
|
Conflict when using multiprocessing's shared memory
|
<p>I am using <code>multiprocessing</code>'s shared memory to share a numpy array between tasks. While each task should originally just read the array, I was curious if writing was also possible. I wrote the following example to test it in a similar situation as I actually use it. In this toy example, each process "opens" the array, adds 1 to the first element (which is initialized as <code>0.0</code>) and returns it. The returned values of the first index should therefore be <code>[1,2,3,...]</code>, which is mostly the case, but if I run it a few times, every now and then, I get an issue where two values are the same.</p>
<p>Is there a way to avoid these conflicts? I know that in this example it would make no sense (or would not cause any speedup if other processes have to wait), but I found no way to control the access, so any pointers on how to do so would be appreciated, even though my actual problem is slightly different.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from multiprocessing import shared_memory, Pool
from itertools import repeat
import time
def test_shm(N=500, n_proc=8, name='example'):
# create a shared memory array
a = np.random.rand(N, N, N).astype(np.float64)
a[0, 0, 0] = 0.0
shm = shared_memory.SharedMemory(
name=name, create=True, size=a.nbytes)
b = np.ndarray(a.shape, dtype=a.dtype, buffer=shm.buf)
b[:] = a[:]
shm.close()
with Pool(n_proc) as p:
res = p.starmap(
work, zip(range(n_proc), repeat(name), repeat(a.dtype), repeat(N)))
for r in res:
print(f'{r[0]}\t{r[1]}')
res = np.array([r[0] for r in res])
print('not ' * int(~np.all(np.sort(res) == 1 + np.arange(n_proc))) + 'all good')
shm.unlink()
def work(i, name, dtype, N=500):
shm = shared_memory.SharedMemory(name=name)
arr = np.ndarray((N, N, N), dtype=dtype, buffer=shm.buf)
# now do some work
time.sleep(2)
val = arr[0, 0, 0:2].copy()
val[0] += 1.0
arr[0, 0, 0] = val[0]
shm.close()
return val
if __name__ == '__main__':
test_shm()
</code></pre>
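The duplicated values come from an unsynchronized read-modify-write: two workers can read the same value before either writes back. One way to serialize access, sketched here with made-up helper names, is to hand a <code>multiprocessing.Lock</code> to each worker via the pool initializer and hold it around the update:

```python
import numpy as np
from multiprocessing import Pool, Lock, shared_memory

_lock = None

def _init(lock):
    # runs once in each worker process; keeps the shared lock global
    global _lock
    _lock = lock

def work(name):
    shm = shared_memory.SharedMemory(name=name)
    arr = np.ndarray((1,), dtype=np.float64, buffer=shm.buf)
    with _lock:  # serialize the read-modify-write
        arr[0] += 1.0
        val = float(arr[0])
    del arr  # release the buffer view before closing
    shm.close()
    return val

def run(n_proc=4):
    shm = shared_memory.SharedMemory(create=True, size=8)
    arr = np.ndarray((1,), dtype=np.float64, buffer=shm.buf)
    arr[0] = 0.0
    del arr
    with Pool(n_proc, initializer=_init, initargs=(Lock(),)) as p:
        res = p.map(work, [shm.name] * n_proc)
    shm.close()
    shm.unlink()
    return sorted(res)
```

With the lock held for the whole update, every returned value is unique; the cost is that the guarded section runs serially across processes, so keep it as small as possible.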
|
<python><multiprocessing><python-multiprocessing><shared-memory>
|
2024-11-21 16:18:53
| 2
| 1,253
|
John Smith
|
79,212,063
| 7,713,770
|
How to run django-ckeditor-5 with docker container?
|
<p>I have a Django app and I installed the module django-ckeditor. I can run the app without any problems with the command:</p>
<pre><code>python manage.py runserver
</code></pre>
<p>But after I build the docker container with the command:</p>
<pre><code>docker-compose -f docker-compose-deploy.yml up
</code></pre>
<p>I get this errors:</p>
<pre><code> import_module("%s.%s" % (app_config.name, module_to_search))
web-1 | File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
web-1 | return _bootstrap._gcd_import(name[level:], package, level)
web-1 | File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
web-1 | File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
web-1 | File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
web-1 | File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
web-1 | File "<frozen importlib._bootstrap_external>", line 850, in exec_module
web-1 | File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
web-1 | File "/usr/src/app/DierenWelzijnAdmin/admin.py", line 17, in <module>
web-1 | from ckeditor.widgets import CKEditorWidget
web-1 | ModuleNotFoundError: No module named 'ckeditor'
web-1 | [2024-11-21 15:50:43 +0000] [7] [INFO] Worker exiting (pid: 7)
web-1 | [2024-11-21 15:50:43 +0000] [1] [ERROR] Worker (pid:7) exited with code 3
web-1 | [2024-11-21 15:50:43 +0000] [1] [ERROR] Shutting down: Master
web-1 | [2024-11-21 15:50:43 +0000] [1] [ERROR] Reason: Worker failed to boot.
</code></pre>
<p>I added ckeditor in settings.py file:</p>
<pre><code>INSTALLED_APPS = [
'django_ckeditor_5'
</code></pre>
<p>I added ckeditor in requirements.txt file:</p>
<pre><code>
django-ckeditor-5==0.2.15
</code></pre>
<p>I added ckeditor in pip file:</p>
<pre><code>[packages]
django-ckeditor-5==0.2.15
[dev-packages]
[requires]
python_version = "3.11"
python_full_version = "3.11.*"
</code></pre>
<p>urls.py file:</p>
<pre><code> path('ckeditor5/', include('django_ckeditor_5.urls')),
</code></pre>
<p>docker-compose-deploy.yml file:</p>
<pre><code>services:
web:
image: crdierenwelzijn.azurecr.io/web1
build:
context: ./
dockerfile: Dockerfile.prod
restart: always
command: gunicorn DierenWelzijn.wsgi:application --bind 0.0.0.0:8000
volumes:
- static_volume:/usr/src/app/staticfiles
- media_volume:/usr/src/app/media
expose:
- 8000
env_file:
- ./.env
proxy:
image: crdierenwelzijn.azurecr.io/proxy1
build:
context: ./proxy
restart: always
depends_on:
- web
ports:
- 80:80
volumes:
- static_volume:/home/app/staticfiles
- media_volume:/home/app/media
volumes:
static_volume:
media_volume:
</code></pre>
<p>dockerfile.prod:</p>
<pre><code># pull official base image
FROM python:3.9-alpine3.13
# ENV PATH="/scripts:${PATH}"
# set work directory
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /tmp/requirements.txt
COPY ./scripts /scripts
WORKDIR /usr/src/app
EXPOSE 8000
# install psycopg2 dependencies
ARG DEV=false
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
apk add --update --no-cache postgresql-client jpeg-dev && \
apk add --update --no-cache --virtual .tmp-build-deps \
build-base postgresql-dev musl-dev zlib zlib-dev linux-headers && \
/py/bin/pip install -r /tmp/requirements.txt && \
if [ $DEV = "true" ]; \
then /py/bin/pip install -r /tmp/requirements.dev.txt ; \
fi && \
rm -rf /tmp && \
apk del .tmp-build-deps && \
adduser \
--disabled-password \
--no-create-home \
django-user && \
mkdir -p /vol/web/media && \
mkdir -p /vol/web/static && \
chown -R django-user:django-user /vol && \
chmod -R 755 /vol && \
chmod -R +x /scripts
ENV PATH="/scripts:/py/bin:$PATH"
COPY . .
# run entrypoint.sh
CMD ["runprod.sh"]
</code></pre>
<p>And I checked the installed django-ckeditor version with</p>
<pre><code>pip show django-ckeditor-5
Name: django-ckeditor-5
Version: 0.2.15
Summary: CKEditor 5 for Django.
</code></pre>
<p>So I don't know how to resolve this error.</p>
<p>Question: How to build the docker container with the ckeditor module?</p>
|
<python><django><docker>
|
2024-11-21 16:14:04
| 0
| 3,991
|
mightycode Newton
|
79,212,023
| 9,272,737
|
Dynamic many2one field domain in Odoo 17 based on callback function
|
<p>I need to create a new Many2one field in Odoo 17, extending the standard res.partner model, so that the domain of the newly created <code>res_partner.settore_principale</code> field and its possible values are selected dynamically from the value of the <code>res_partner.x_studio_macrocategoria</code> attribute of the same record, both when the latter is modified and when it is not.</p>
<p>I tried to create a new field with a callback defining its domain, as shown below:</p>
<pre class="lang-py prettyprint-override"><code> settore_principale = fields.Many2one(
'res.partner.industry',
string="Settore",
domain=lambda self: self._compute_settore_domain() # I tried both with and without str()
)
def _compute_settore_domain(self):
domain = [('x_studio_settore_padre', '=', False)]
if self.x_studio_macrocategoria:
domain.append(('x_studio_macrocategoria', '=', self.x_studio_macrocategoria))
return domain # Return domain as a list of tuples, not string
@api.onchange('x_studio_macrocategoria')
def _onchange_macrocategoria(self):
return {'domain': {'settore_principale': self._compute_settore_domain()}}
</code></pre>
<p>fact is that on loading the addon, even on a first try, I receive the following error:</p>
<pre><code>odoo.addons.base.models.ir_qweb.QWebException: Error while render the template
UndefinedColumn: column res_partner.settore_principale does not exist
LINE 1: ...uid", "res_partner"."write_date" AS "write_date", "res_partn...
^
</code></pre>
<p>It appears that while the code is parsed and at least partially executed, it does not let Odoo proceed with the actual field creation in the database.</p>
<p>Could anyone help me understand what I'm doing wrong?</p>
<p>Thanks in advance</p>
|
<python><orm><odoo><odoo-16><odoo-17>
|
2024-11-21 16:06:34
| 0
| 303
|
Fed C
|
79,212,020
| 1,609,514
|
How to write a generic Python function that works with Python, Numpy or Pandas arguments and returns the same type
|
<p>What's the best way to write a Python function that can be used with either float, NumPy, or Pandas data types and always returns the same data type as the arguments it was given? The catch is that the calculation includes one or more float values.</p>
<p>E.g. toy example:</p>
<pre class="lang-py prettyprint-override"><code>def mycalc(x, a=1.0, b=1.0):
return a * x + b
</code></pre>
<p>(I've simplified the problem a lot here as I would ideally want to have more than one input argument like <code>x</code>, but you can assume that the function is vectorized in the sense that it works with Numpy array arguments and Pandas series).</p>
<p>For Numpy arrays and Pandas Series this works fine because the dtype is dictated by the input arguments.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
x = np.array([1, 2, 3], dtype="float32")
print(mycalc(x).dtype) # float32
</code></pre>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
x = pd.Series([1.0, 2.0, 3.0], dtype="float32")
print(mycalc(x).dtype) # float32
</code></pre>
<p>But when using numpy floats of lower precision, the dtype is 'lifted' to float64, presumably due to the float arguments in the formula:</p>
<pre class="lang-py prettyprint-override"><code>x = np.float32(1.0)
print(mycalc(x).dtype) # float64
</code></pre>
<p>Ideally, I would like the function to work with Python floats, numpy scalars, numpy arrays, Pandas series, Jax arrays, and even Sympy symbolic variables if possible.</p>
<p>But I don't want to clutter up the function with too many additional statements to handle each case.</p>
<p>I tried this, which works with Numpy scalars but breaks when you provide arrays or series:</p>
<pre><code>def mycalc(x, a=1.0, b=1.0):
a = type(x)(a)
b = type(x)(b)
return a * x + b
assert isinstance(mycalc(1.0), float)
assert isinstance(mycalc(np.float32(1.0)), np.float32)
mycalc(np.array([1, 2, 3], dtype="float32")) # raises TypeError: expected a sequence of integers or a single integer, got '1.0'
</code></pre>
<p>Also, there is an <a href="https://stackoverflow.com/a/78051488/1609514">answer here</a> to <a href="https://stackoverflow.com/questions/78050752/how-to-write-a-function-that-works-on-both-numpy-arrays-and-pandas-series-retur">a similar question</a> which uses a decorator function to make copies of the input argument, which is a nice idea, but this was only for extending the function from Numpy arrays to Pandas series and doesn't work with Python floats or Numpy scalars.</p>
<pre class="lang-py prettyprint-override"><code>import functools
def apply_to_pandas(func):
@functools.wraps(func)
def wrapper_func(x, *args, **kwargs):
if isinstance(x, (np.ndarray, list)):
out = func(x, *args, **kwargs)
else:
out = x.copy(deep=False)
out[:] = np.apply_along_axis(func, 0, x, *args, **kwargs)
return out
return wrapper_func
@apply_to_pandas
def mycalc(x, a=1.0, b=1.0):
return a * x + b
mycalc(1.0) # TypeError: copy() got an unexpected keyword argument 'deep'
</code></pre>
<p><em><strong>Update</strong></em></p>
<p>As pointed out by @Dunes in the comments below, this is no longer a problem in Numpy versions 2.x as explained <a href="https://numpy.org/devdocs/numpy_2_0_migration_guide.html#changes-to-numpy-data-type-promotion" rel="nofollow noreferrer">here</a> in the Numpy 2.0 Migration Guide.</p>
<p>In the new version, <code>(np.float32(1.0) + 1).dtype == "float32"</code>. Therefore the original function above returns a result of the same dtype as the input <code>x</code>.</p>
|
<python><pandas><numpy><types>
|
2024-11-21 16:06:17
| 3
| 11,755
|
Bill
|
79,211,955
| 4,377,521
|
Spyne define a function that accepts array of complex types
|
<p>I am trying to make a SOAP function on my spyne server that follows this structure.</p>
<pre><code><soap-env:Envelope xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/">
<soap-env:Body>
<test_function xmlns:ns0="Test">
<resource>
<item>
<material>gold</material>
</item>
<item>
<material>silver</material>
</item>
</resource>
</test_function>
</soap-env:Body>
</soap-env:Envelope>
</code></pre>
<p>Code that is closest to work is this</p>
<pre><code>from spyne import Application, rpc, ServiceBase, ComplexModel, Array, Unicode
from spyne.protocol.soap import Soap11
from spyne.server.wsgi import WsgiApplication

class Material(ComplexModel):
    material = Unicode

class Resource(Array):  # Changing to ComplexModel doesn't affect it
    item = Material

class Soap(ServiceBase):
    @rpc(Resource)
    def test_function(ctx, resource):
        print(resource)
        return "Yes"

app = Application(
    [Soap],
    tns='Test',
    in_protocol=Soap11(),
    out_protocol=Soap11()
)

application = WsgiApplication(app)
</code></pre>
<p>I make requests with zeep and code of client looks like this</p>
<pre><code>params = {
"resource": [
{"item": {"material": "gold"}},
{"item": {"material": "silver"}}
]
}
...
with zeep_client as client:
msg = client.create_message(
client.service, 'test_function', **params)
</code></pre>
<p>But my request has only one <code><item></code>; the second one is missing.
I tried placing <code>Array</code> in different spots, but it either breaks the code (unexpected-argument errors) or changes nothing.</p>
<p>Decorating function with <code>@rpc(Array(Resource))</code> leads to</p>
<pre><code>TypeError: {Test}ResourceArray() got an unexpected keyword argument 'item'. Signature: Resource: {Test}Resource[]
</code></pre>
|
<python><soap><zeep><spyne>
|
2024-11-21 15:51:44
| 0
| 2,938
|
sashaaero
|
79,211,778
| 16,389,095
|
How to convert a pypdf reader object into a base64 string
|
<p>I'm trying to develop a simple app in Python Flet for displaying each page of a PDF file. The code imports the <em>pypdf</em> library for PDF management. The UI consists of a button for loading the first page of the PDF and skipping to the next page, and of a Flet <a href="https://flet.dev/docs/controls/container/" rel="nofollow noreferrer">container</a> whose content is a Flet <a href="https://flet.dev/docs/controls/image" rel="nofollow noreferrer">image</a>. A Flet image takes a <a href="https://flet.dev/docs/controls/image#src_base64" rel="nofollow noreferrer">Base64-encoded string</a> which should correspond, in turn, to each single page of the PDF.</p>
<pre><code>import flet as ft
import pypdf

def main(page: ft.Page):
    def btn_Click(e):
        cont.content = ft.Image(src_base64=reader.pages[0],
                                fit=ft.ImageFit.FILL,
                                )
        page.update()
        if btn.data < len(reader.pages):
            btn.data += 1

    reader = pypdf.PdfReader('Your Pdf filename.pdf')
    cont = ft.Container(height=0.8 * page.height,
                        width=0.4 * page.width,
                        border=ft.border.all(3, ft.colors.RED),)
    btn = ft.IconButton(
        icon=ft.icons.UPLOAD_FILE,
        on_click=btn_Click,
        icon_size=35,
        data=0,)
    page.add(ft.Column([cont, btn], horizontal_alignment="center"))
    page.horizontal_alignment = "center"
    page.scroll = ft.ScrollMode.AUTO
    page.update()

ft.app(target=main, assets_dir="assets")
</code></pre>
<p>Once the button is clicked, I get this error:</p>
<pre><code>Error decoding base64: FormatException: Invalid character (at character 1)
{'/Contents': [IndirectObject(2286, 0, 1969514531216), IndirectObject(2287,...
^
</code></pre>
<p>Searching in the web, I found that this exception already happened with Flutter, from which the framework Flet is derived. See this <a href="https://stackoverflow.com/questions/73096538/invalid-character-at-character-77-when-decoding-a-base-64-image-to-show-using">link</a>, <a href="https://flutterfixes.com/decoding-base64-string-to-image-in-flutter-invalid-character-exception/" rel="nofollow noreferrer">this</a> and <a href="https://stackoverflow.com/questions/59015053/base64-string-to-image-in-flutter/59015116#59015116">this</a>. It was suggested to apply this conversion:</p>
<pre><code>base64.decode(sourceContent.replaceAll(RegExp(r'\s+'), ''))
</code></pre>
<p>I don't know how to apply it to my variable <em>reader</em>, or, alternatively, whether <em>pypdf</em> contains a method for making this conversion.</p>
|
<python><flutter><dart><pypdf><flet>
|
2024-11-21 15:07:46
| 1
| 421
|
eljamba
|
79,211,733
| 3,557,405
|
Airflow UI parameters not being passing on to DAG
|
<p>I am trying to run a DAG with user specified value for a parameter - using this <a href="https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/params.html" rel="nofollow noreferrer">documentation</a> as a guide.</p>
<p>In the Airflow UI when I click the play button next to the DAG I see a page where my parameter <code>schema_prefix</code> is shown with its default value.</p>
<p><strong>Problem:</strong> The DAG always runs with the default value for the parameter. I tried changing it and it still uses the default. I also tried setting this env variable <code>AIRFLOW__CORE__DAG_RUN_CONF_OVERRIDES_PARAMS</code> to <code>true</code> as mentioned <a href="https://stackoverflow.com/questions/72634021/my-dag-config-params-arent-being-passed-to-my-task">here</a> but it has no effect (DAG still being run with the default value).</p>
<p>Can anyone please help with this? Or guide me to what is the best practice for running a DAG with user provided parameters from the UI?</p>
<p>It is suggested <a href="https://stackoverflow.com/a/47398817/3557405">here</a> to use Airflow variables via the admin tab but this does not work in my use case as then all DAGs will run with the same parameter values. Our use case requires that different users using Airflow be able to trigger their own pipelines with their specific parameters for testing and other purposes.</p>
<p>Thank you</p>
<pre><code>my_params = {
    "schema_prefix": Param(
        "ABC",
        description="Prefix to schema",
        type="string",
        minLength=2)
}

with DAG(
    dag_id="test_dag_1",
    start_date=datetime(2024, 1, 1),
    params=my_params,
) as dag:
    python_task = PythonOperator(
        python_callable=my_task_python_file,
        task_id="python_task_id",
        op_kwargs={k: v for k, v in my_params.items()},
    )
</code></pre>
|
<python><airflow><orchestration>
|
2024-11-21 14:56:36
| 1
| 636
|
user3557405
|
79,211,709
| 2,807,964
|
How to mix pytest fixtures that uses request and regular values for parametrize?
|
<p>This is very hard for me to find in pytest documentation. That's why I'm asking here.</p>
<p>I have a fixture that is loading data.</p>
<pre class="lang-py prettyprint-override"><code>import json

import pytest

@pytest.fixture(scope="function")
def param_data(request):
    with open(f'tests/fixtures/{request.param}.json') as f:
        return json.load(f)
</code></pre>
<p>And with that, I want to test the execution of a function of 3 JSON files:</p>
<ul>
<li>tests/fixtures/data1.json</li>
<li>tests/fixtures/data2.json</li>
<li>tests/fixtures/data3.json</li>
</ul>
<p>How can I do it using <code>@pytest.mark.parametrize</code>? I mean...</p>
<pre class="lang-py prettyprint-override"><code>@pytest.mark.parametrize(???)
def test_my_function(dict, expected):
# Test my_function() with the dict loaded by fixture and the expected result.
assert my_function(dict) == expected
</code></pre>
<p>I see examples of both usages, but not of the two combined.
Also, all the fixtures I see return a fixed value rather than using <code>request.param</code>.</p>
|
<python><pytest><fixtures>
|
2024-11-21 14:49:54
| 1
| 880
|
jcfaracco
|
79,211,394
| 301,723
|
Alternative to "with" template command for loops
|
<p>I have a template file <code>letter.txt</code> in the format</p>
<pre><code>Hello {{ first_name }} {{ last_name }}!
</code></pre>
<p>(In reality there are many more variables than these two, but for simplicity I'll stick with only <code>first_name</code> and <code>last_name</code>.)</p>
<p>This works nicely with a call like</p>
<pre><code>from jinja2 import Environment, FileSystemLoader
environment = Environment(loader=FileSystemLoader("./"))
template = environment.get_template("letter.txt")
template.render(first_name='Jane', last_name='Doe')
</code></pre>
<p>Now if I have a list of names that should appear, like in a form letter, I need to do the following VERY verbose <code>with</code> statement in the <code>form_letter.txt</code> template file which includes the original untouched <code>letter.txt</code> template.</p>
<pre><code>{% for person in people %}
{% with first_name=person.first_name, last_name=person.last_name %}
{% include 'letter.txt' %}
{% endwith %}
------8<---cut here------------
{% endfor %}
</code></pre>
<p>and rendering the template via:</p>
<pre><code>template = environment.get_template("form_letter.txt")
template.render(people=[{'first_name': 'Jane', 'last_name': 'Doe'},
{'first_name': 'Bob', 'last_name': 'Builder'},
{'first_name': 'Sara', 'last_name': 'Sample'}])
</code></pre>
<p>Creating and possibly having to expand the <code>with</code> statement by hand is cumbersome and error prone. Is there a better way to "create a namespace inside a template" automatically from key/value of a dict like object? Like <code>{% with **person %}</code> or something similar?</p>
|
<python><jinja2>
|
2024-11-21 13:33:27
| 1
| 4,403
|
mawimawi
|
79,211,272
| 6,930,340
|
dtype changes during collect process in polars dataframe
|
<p>I have a <code>pl.LazyFrame</code> with a number of columns. One of the columns is called <code>signal</code> and is supposed to have <code>dtype=pl.Int8</code>. It only contains <code>0</code> and <code>1</code>.</p>
<p>This will be confirmed if I do <code>collect_schema</code>.<br />
However, when I actually <code>collect</code> the dataframe, the <code>dtype</code> switches to <code>pl.Int32</code>.</p>
<p>I wasn't able to come up with a toy example, so I show the behaviour with my existing <code>pl.LazyFrame</code>. Hopefully somebody can still point me in the right direction.</p>
<pre><code>In [1]: lf.select(pl.col("signal")).collect_schema()
Out[1]: Schema([('signal', Int8)])
In [2]: lf.select(pl.col("signal")).collect()
Out[2]:
shape: (7_556, 1)
ββββββββββ
β signal β
β --- β
β i32 β
ββββββββββ‘
β 0 β
β 0 β
β 0 β
β 0 β
β 1 β
β β¦ β
β 1 β
β 1 β
β 1 β
β 0 β
β 0 β
ββββββββββ
In [3]: lf.select(pl.col("signal")).collect().collect_schema()
Out[3]: Schema([('signal', Int32)])
In [4]: lf.select(pl.col("signal")).collect().describe()
Out[4]:
shape: (9, 2)
ββββββββββββββ¬βββββββββββ
β statistic β signal β
β --- β --- β
β str β f64 β
ββββββββββββββͺβββββββββββ‘
β count β 7556.0 β
β null_count β 0.0 β
β mean β 0.55585 β
β std β 0.496904 β
β min β 0.0 β
β 25% β 0.0 β
β 50% β 1.0 β
β 75% β 1.0 β
β max β 1.0 β
ββββββββββββββ΄βββββββββββ
</code></pre>
<p>In my view, this looks like a bug, doesn't it?</p>
|
<python><python-polars>
|
2024-11-21 12:58:38
| 0
| 5,167
|
Andi
|
79,211,174
| 3,802,122
|
Environment Variables Not Loading in FastAPI App on Vercel
|
<p>I am trying to deploy a FastAPI application on Vercel, but I am having trouble getting environment variables to load. My setup works perfectly fine locally, where I use <code>python-dotenv</code> with <code>load_dotenv()</code> to load the <code>.env</code> file, but on Vercel, the environment variables are not being found at runtime.</p>
<p>Here are the steps I've taken so far:</p>
<ol>
<li>Added environment variables in the Vercel dashboard under Settings >
Environment Variables.</li>
<li>Verified that the app works locally with the
same .env file using load_dotenv().</li>
<li>Deployed the app to Vercel, but
it fails to find the environment variables.</li>
<li>Tried changing the Vercel Node.js version to 18.x (based on suggestions from other
threads), but this caused my application to stop working entirely.</li>
</ol>
<p>Additional Details:</p>
<ol>
<li>My application uses Python 3.11 and FastAPI.</li>
<li>My environment variables include SECRET_KEY and DATABASE_URL.</li>
</ol>
<p>Questions:</p>
<ol>
<li>Is there a specific configuration required to make Python-based apps load environment variables correctly on Vercel?</li>
<li>Does Vercel require a particular runtime configuration to support Python 3.11 and environment variables together?</li>
<li>Has anyone successfully run a FastAPI app on Vercel with environment variables recently?</li>
</ol>
<p>Any guidance or examples would be greatly appreciated!</p>
<p>Thank you in advance for your help.</p>
|
<python><environment-variables><fastapi><vercel><python-dotenv>
|
2024-11-21 12:34:44
| 2
| 5,525
|
Anderson K
|
79,211,072
| 10,061,193
|
Docker Actions Ignore the Installed Dependencies That Are Installed During the Build Step
|
<p>I've created a Python GitHub action. It runs as a Docker container. One of its layers is the installation of some packages using <code>uv</code>. When I use that action from another repository, although it shows the dependencies are installed in the building step, it seems none of its dependencies are installed when it tries to use that action.</p>
<h2>Action</h2>
<p>Dockerfile:</p>
<pre class="lang-none prettyprint-override"><code>FROM python:3.12-slim
# Installing uv
COPY --from=ghcr.io/astral-sh/uv:0.5.1 /uv /uvx /bin/
WORKDIR /action
COPY . .
# Installing the dependencies (it works)
RUN uv sync --no-install-project --no-cache
CMD [ "uv", "run", "/action/main.py" ]
</code></pre>
<p>The Python file (<code>/action/main.py</code>):</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
subprocess.call(["pip", "list"])
import requests
...
</code></pre>
<h3>Action Usage</h3>
<p>Here is the pipeline that I'm using from a different repository.</p>
<pre class="lang-yaml prettyprint-override"><code>jobs:
build:
runs-on: ubuntu-latest
name: Running the action
steps:
- name: Using the action
id: my_action
uses: me/my-action@main
with:
url: "https://google.com"
- name: Echo result
run: echo ${{steps.my_action.outputs.status_code}}
</code></pre>
<p>Here is the error that I'm getting by running this pipeline:</p>
<p><a href="https://i.sstatic.net/peZYlIfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/peZYlIfg.png" alt="enter image description here" /></a></p>
<p>I don't know what the "Build me/my-action@main" step is, but in that step the <code>RUN uv sync --no-install-project --no-cache</code> instruction executes successfully and all the requirements are installed; yet in the "Using the action" step, nothing is installed except <code>pip</code>.</p>
|
<python><docker><github><github-actions><cicd>
|
2024-11-21 12:07:26
| 1
| 394
|
Sadra
|
79,210,963
| 3,782,128
|
Problems testing with pytest with mock in a subclass
|
<p>I'm having problems trying to mock a class that is used inside another class.
I have three modules:</p>
<p>tree.py</p>
<pre><code>from branch import Branch

class Tree:
    type_of_tree = None
    branches = None

    def __init__(self, type_of_tree, branches=0, default_leaves=0):
        self.type_of_tree = type_of_tree
        self.branches = []
        for branch in range(branches):
            self.create_branch(default_leaves)

    def create_branch(self, leaves):
        self.branches.append(Branch(leaves))

    def get_leaves(self):
        return_value = []
        for branch in self.branches:
            return_value.extend(branch.get_leaves())
        return return_value
</code></pre>
<p>branch.py</p>
<pre><code>import uuid

from leave import Leave

class Branch:
    leaves = None
    identifier = None

    def __init__(self, leaves=0):
        self.identifier = str(uuid.uuid4())
        self.leaves = []
        for leave in range(leaves):
            self.create_leave()

    def create_leave(self):
        self.leaves.append(Leave())

    def get_leaves(self):
        return_value = []
        for leave in self.leaves:
            return_value.append(leave.get_identifier())
        return return_value
</code></pre>
<p>leave.py</p>
<pre><code>import uuid

class Leave:
    identifier = None

    def __init__(self):
        self.identifier = str(uuid.uuid4())

    def get_identifier(self):
        return self.identifier
</code></pre>
<p>I'm trying to mock Leave class with this code:</p>
<pre><code>from unittest.mock import patch, MagicMock
from uuid import uuid4

from tree import Tree

FAKE_UUID = str(uuid4())

class TestLeave:
    def test_leave(self):
        with patch('leave.Leave') as mock_leave:
            mock_leave.return_value = MagicMock()
            mock_leave.get_identifier.return_value = FAKE_UUID
            tree = Tree("Pine", 1, 1)
            leaves = tree.get_leaves()
            assert leaves == [FAKE_UUID]
</code></pre>
<p>The test is failing. I have been reading the <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.patch" rel="nofollow noreferrer">patch documentation</a> but I can't find related info.</p>
|
<python><mocking><pytest>
|
2024-11-21 11:36:46
| 1
| 1,093
|
RubΓ©n Pozo
|
79,210,940
| 8,726,488
|
How to calculate Dataframe size?
|
<p>In PySpark, how can I find the size of a DataFrame (approx. row count: 300 million records) through any available method? My production system is running a Spark version < 3.0. The idea is that, based on the DataFrame size, I need to calculate the shuffle partition number before joining.</p>
|
<python><apache-spark><pyspark>
|
2024-11-21 11:30:14
| 3
| 3,058
|
Learn Hadoop
|
79,210,615
| 3,494,790
|
Python NLTK recognizing last name as ORGANISATION if name comes first in sentence
|
<p>I am using Python's <strong>nltk</strong> library to extract names from a sentence. I am expecting the output <code>['Barack Obama', 'Michelle Obama']</code> but I am getting <code>['Barack', 'Michelle Obama']</code>. My example code is as follows.
When I print <code>ner_tree</code>, I can see that it recognises the last name Obama as an organisation name.</p>
<pre><code>import nltk
from nltk import word_tokenize, pos_tag, ne_chunk
nltk.download('punkt')
nltk.download('maxent_ne_chunker')
nltk.download('words')
sentence = "Barack Obama and Michelle Obama visited New York last week"
words = word_tokenize(sentence)
tagged_words = pos_tag(words)
ner_tree = ne_chunk(tagged_words)
names = []
for subname in ner_tree:
if isinstance(subname, nltk.Tree) and subname.label() == 'PERSON':
name = " ".join(word for word, tag in subname)
names.append(name)
print(names)
# Expected output: ['Barack Obama', 'Michelle Obama']
# Actual output: ['Barack', 'Michelle Obama']
</code></pre>
<p>The result of variable <code>ner_tree</code> is as follows:</p>
<pre><code>(S
(PERSON Barack/NNP)
(ORGANIZATION Obama/NNP)
and/CC
(PERSON Michelle/NNP Obama/NNP)
visited/VBD
(GPE New/NNP York/NNP)
last/JJ
week/NN)
</code></pre>
<p>In above code, if the sentence is changed as follows then it will produce the expected output.</p>
<p><code>sentence = "As per our sources, Barack Obama and Michelle Obama visited New York last week"</code></p>
|
<python><nltk>
|
2024-11-21 09:56:49
| 1
| 553
|
kishor10d
|
79,210,603
| 4,706,711
|
How can I update display of chat history upon page refresh?
|
<p>My chat UI using Gradio:</p>
<pre><code>import sqlite3
import gradio as gr
import time
formatted_history = []
sqlite = None
def loadHistoryFromDB():
global sqlite,formatted_history
sql="SELECT role,message from chat_history order by created_at_unix ASC Limit 10";
cur = sqlite.cursor()
cur.execute(sql)
rows = cur.fetchall()
for row in rows:
formatted_history.append({
'role':row[0],
'content':row[1]
})
def chatHandler(message,history,generate_image):
sql = "INSERT INTO chat_history (role, message, created_at_unix) VALUES (?, ?, ?)"
current_unix_time = int(time.time()) # Get the current Unix timestamp
response = f"Hello {message}"
sqlite.execute(sql, ('user', message, int(time.time())))
time.sleep(3)
sqlite.execute(sql, ('assistant', response, int(time.time())))
sqlite.commit()
yield response
if __name__ == "__main__":
sqlite = sqlite3.connect("chat_history.db",check_same_thread=False)
loadHistoryFromDB()
with gr.Blocks() as demo:
chatbot = gr.Chatbot(type="messages", value=formatted_history)
chat = gr.ChatInterface(chatHandler,type="messages",chatbot=chatbot)
demo.launch()
</code></pre>
<p>I use sqlite3 to store the chat history. But a page refresh fails to retrieve recently sent messages; I only get the ones already fetched from the database.</p>
<p>The bug occurs when:</p>
<ol>
<li>I launch my python script.</li>
<li>I visit page indicated by Gradio.</li>
<li>I place a message.</li>
<li>I then refresh the page.</li>
</ol>
<p>How can I show recently sent messages on page refresh, and not only the history already loaded? I tried:</p>
<pre><code>import sqlite3
import gradio as gr
import time
formatted_history = []
sqlite = None
def loadHistoryFromDB():
global sqlite,formatted_history
sql="SELECT role,message from chat_history order by created_at_unix ASC Limit 10";
cur = sqlite.cursor()
cur.execute(sql)
rows = cur.fetchall()
for row in rows:
formatted_history.append({
'role':row[0],
'content':row[1]
})
def chatHandler(message,history,generate_image):
sql = "INSERT INTO chat_history (role, message, created_at_unix) VALUES (?, ?, ?)"
current_unix_time = int(time.time()) # Get the current Unix timestamp
response = f"Hello {message}"
history.append({"role":'user','content':message})
sqlite.execute(sql, ('user', message, int(time.time())))
time.sleep(3)
history.append({"role":'assistant','content':response})
sqlite.execute(sql, ('assistant', response, int(time.time())))
sqlite.commit()
yield response
if __name__ == "__main__":
sqlite = sqlite3.connect("chat_history.db",check_same_thread=False)
loadHistoryFromDB()
with gr.Blocks() as demo:
chatbot = gr.Chatbot(type="messages", value=formatted_history)
chat = gr.ChatInterface(chatHandler,type="messages",chatbot=chatbot)
demo.launch()
</code></pre>
<p>But I still fail to load the most recent chat history upon refresh.</p>
|
<python><sqlite><chat><gradio><sqlite3-python>
|
2024-11-21 09:52:30
| 1
| 10,444
|
Dimitrios Desyllas
|
79,210,369
| 3,049,419
|
How do I unzip a password protected zip file using Python inside ADLS
|
<p>Below is the code.</p>
<pre><code>from zipfile import ZipFile
file_name = "XYZ.zip"
with ZipFile(file_name, 'r') as zip:
zip.printdir()
print('Extracting all the files now...')
zip.extractall(pwd=b'123123$SADMK6%002#')
print('Done!')
</code></pre>
<p>It gives a <code>FileNotFoundError</code>. Also, I can read other CSV files inside ADLS using the abfss path without providing the access key of the storage account.</p>
|
<python><azure><zip>
|
2024-11-21 08:51:52
| 1
| 1,715
|
Govind Gupta
|
79,210,355
| 7,895,542
|
How to get model field types in pydantic v2?
|
<p>I am trying to write a generic class that takes a pydantic model type; however, the model may only have string fields, so I am trying to verify this at runtime.</p>
<p>I looked and found <a href="https://stackoverflow.com/a/75827637/7895542">this</a> answer, but it does not seem to work in v2, as the <code>FieldInfo</code> type returned as the values of the dict from <code>model_fields</code> does not have a <code>type_</code> property.</p>
|
<python><pydantic><pydantic-v2>
|
2024-11-21 08:48:47
| 1
| 360
|
J.N.
|
79,210,306
| 17,148,835
|
start a program via Jenkins service
|
<p>I have installed Jenkins as a service on my Windows 10 computer. Jenkins connects to the machine via SSH.</p>
<p>The pipeline script looks like this:</p>
<pre><code>timestamps {
node('abc') {
stage("exec") {
bat """
echo 'script start'
D:\\tools\\python\\python.exe D:\\test_dir\\test_script.py
echo 'script end'
"""
}
}
}
</code></pre>
<p>The Python script gets triggered.
In the Python script I am calling a program via subprocess.Popen(),
for example MS Edge:</p>
<pre><code>proc = subprocess.Popen(['C:/Program Files (x86)/Microsoft/Edge/Application/msedge.exe'])
print(f"{proc.pid=}")
time.sleep(60) # sleep for a minute
</code></pre>
<p>The subprocess gets created and the PID is printed to the console, but Edge does not open; the process does not show up in Task Manager either.</p>
<p>I tried launching the Python script locally and it worked.
Is there a better way to do this than with subprocess.Popen()?</p>
<p>I think it has something to do with Windows sessions and processes, but I have no idea how exactly they work.</p>
|
<python><windows><jenkins><subprocess>
|
2024-11-21 08:38:23
| 0
| 1,045
|
BeanBoy
|