| QuestionId (int64, 74.8M-79.8M) | UserId (int64, 56-29.4M) | QuestionTitle (string, 15-150 chars) | QuestionBody (string, 40-40.3k chars) | Tags (string, 8-101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0-44) | UserExpertiseLevel (int64, 301-888k) | UserDisplayName (string, 3-30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
78,361,457
| 4,575,173
|
Python IMAP get all the messages from one conversation via latest message id
|
<p>I am using Python and imaplib. I have the message id of a specific message. If that message is part of a conversation (I believe this is called a thread; I am really unfamiliar with the standard), I need to get a list of all previous messages in that conversation, preferably sorted by time. I can get the raw email, thread topic and index, but so far everything that I've tried to get all the previous messages has failed, due to my limited knowledge of the standard. I have been avoiding diving deep into it, since I'm quite overwhelmed with work, but maybe I'll have to. Please help. Any library is allowed.</p>
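One hedged sketch of the usual client-side approach: thread membership lives in the `References`/`In-Reply-To` headers (RFC 5322), so you can fetch just the headers of the target message, collect the Message-IDs it references, and then search for each of them. (Servers that support the RFC 5256 `THREAD` extension can also group server-side.) Only the parsing half is shown; the IMAP connection, mailbox and message number would come from your existing code.

```python
# Sketch: recover the ancestors of a message from its References header.
# Fetching `raw_headers` would use your existing imaplib connection, e.g.:
#   typ, data = conn.fetch(msg_num, '(BODY.PEEK[HEADER])')
#   raw_headers = data[0][1]
import email
import re

def referenced_ids(raw_headers: bytes) -> list[str]:
    """Return the Message-IDs this message refers to, conventionally oldest first."""
    msg = email.message_from_bytes(raw_headers)
    refs = msg.get("References", "") + " " + msg.get("In-Reply-To", "")
    seen, ids = set(), []
    for mid in re.findall(r"<[^<>]+>", refs):  # Message-IDs look like <token@host>
        if mid not in seen:
            seen.add(mid)
            ids.append(mid)
    return ids
```

Each returned id can then be located with `conn.search(None, 'HEADER', 'Message-ID', mid)`; since `References` conventionally lists ancestors oldest first, this also gives a time ordering.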
|
<python><email><imap>
|
2024-04-21 11:57:19
| 1
| 374
|
ajaleksa
|
78,361,389
| 1,082,349
|
Pystata: run stata instances in parallel from python
|
<p>I'm using the pystata package that allows me to run stata code from python, and send data from python to stata and back.</p>
<p>The way I understand it, there is a single stata instance running in the background. I want to bootstrap some code that wraps around the stata code, and I would like to run this in parallel.</p>
<p>Essentially, I would like to have something like</p>
<pre><code>from joblib import Parallel, delayed
import numpy as np
import pandas as pd

def single_instance(seed):
    # initialize stata
    from pystata import config, stata
    config.init('be')
    # run some stata code (load a data set and collapse, for example)
    stata.run('some code')
    # load stata data to python
    df = stata.pdataframe_from_data()
    out = do_something_with_data(df, seed)
    return out

if __name__ == '__main__':
    seeds = np.arange(1, 100)
    Parallel(backend='loky', n_jobs=-1)(
        delayed(single_instance)(seed) for seed in seeds)
</code></pre>
<p>where there is some code that is run in parallel, and each thread is initializing its own stata instance in parallel. However, I'm worried that all these parallelized threads are accessing the same stata instance -- can this work as I expect? How should I set this up?</p>
<pre><code>joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/home/x/miniconda3/envs/stata/lib/python3.12/site-packages/joblib/externals/loky/process_executor.py", line 391, in _process_worker
    call_item = call_queue.get(block=True, timeout=timeout)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/x/miniconda3/envs/stata/lib/python3.12/multiprocessing/queues.py", line 122, in get
    return _ForkingPickler.loads(res)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/x/miniconda3/envs/stata/lib/python3.12/site-packages/joblib/externals/cloudpickle/cloudpickle.py", line 649, in subimport
    __import__(name)
  File "/usr/local/stata/utilities/pystata/stata.py", line 8, in <module>
    config.check_initialized()
  File "/usr/local/stata/utilities/pystata/config.py", line 281, in check_initialized
    _RaiseSystemException('''
  File "/usr/local/stata/utilities/pystata/config.py", line 86, in _RaiseSystemException
    raise SystemError(msg)
SystemError:
Note: Stata environment has not been initialized yet.
To proceed, you must call init() function in the config module as follows:
    from pystata import config
    config.init()
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "test.py", line 299, in <module>
    bootstrap(aggregation='occ')
  File "test.py", line 277, in bootstrap
    z = Parallel(backend='loky', n_jobs=-1)(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/x/miniconda3/envs/stata/lib/python3.12/site-packages/joblib/parallel.py", line 1098, in __call__
    self.retrieve()
  File "/home/x/miniconda3/envs/stata/lib/python3.12/site-packages/joblib/parallel.py", line 975, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/x/miniconda3/envs/stata/lib/python3.12/site-packages/joblib/_parallel_backends.py", line 567, in wrap_future_result
    return future.result(timeout=timeout)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/x/miniconda3/envs/stata/lib/python3.12/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/home/x/miniconda3/envs/stata/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
joblib.externals.loky.process_executor.BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.
</code></pre>
|
<python><stata>
|
2024-04-21 11:33:58
| 1
| 16,698
|
FooBar
|
78,361,331
| 5,201,005
|
BLE: Measure Respiratory rate or breathing rate from bluetooth low energy Device
|
<p>I want a Bluetooth Low Energy device that measures respiratory rate (breathing rate) in breaths per minute. I don't see any relevant Service or Characteristic in the Bluetooth SIG assigned numbers, and I can't find any resources.</p>
<p>Some articles suggest calculating it from a heart rate monitor, but I don't see anything that explains how to do that.</p>
<p>Any help is appreciated.</p>
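For what it's worth, the derivation from a heart rate monitor usually relies on respiratory sinus arrhythmia: breathing modulates the beat-to-beat (RR) intervals, so the respiratory rate shows up as a 0.1-0.5 Hz peak in the RR series' spectrum. The sketch below is a signal-processing illustration only; reading RR intervals over BLE (the standard Heart Rate Measurement characteristic, 0x2A37, optionally carries them) is assumed and not shown, and there is indeed no SIG-defined respiration service.

```python
# Sketch: respiratory rate estimated from RR-interval variability.
import numpy as np

def breathing_rate_bpm(rr_intervals_s, fs=4.0):
    """Estimate breaths/minute from successive RR intervals in seconds."""
    t_beats = np.cumsum(rr_intervals_s)          # time of each beat
    # Resample the irregular RR series onto a uniform grid for the FFT.
    t_uniform = np.arange(t_beats[0], t_beats[-1], 1.0 / fs)
    rr_uniform = np.interp(t_uniform, t_beats, rr_intervals_s)
    rr_uniform = rr_uniform - rr_uniform.mean()
    freqs = np.fft.rfftfreq(len(rr_uniform), d=1.0 / fs)
    power = np.abs(np.fft.rfft(rr_uniform)) ** 2
    # Respiration typically modulates heart rate at 0.1-0.5 Hz (6-30 bpm).
    band = (freqs >= 0.1) & (freqs <= 0.5)
    return float(freqs[band][np.argmax(power[band])] * 60.0)

# Synthetic check: RR intervals modulated at 0.25 Hz should read ~15 bpm.
t, rr = 0.0, []
while t < 60.0:
    ival = 0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * t)
    rr.append(ival)
    t += ival
estimated = breathing_rate_bpm(np.array(rr))
```

Real data is much noisier than this synthetic series, so treat the result as an estimate that needs validation against a reference device.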
|
<python><android><ios><bluetooth><bluetooth-lowenergy>
|
2024-04-21 11:14:34
| 1
| 305
|
Sudhanshu
|
78,361,309
| 614,944
|
Load python data similar to json.loads
|
<p>I'm wondering if there's a way to import Python data, e.g.</p>
<pre><code>[{'test1': True, 'test2': [0, 1, 2], 'test3': None}]
</code></pre>
<p>I need to produce a large Python data dump, e.g. 300 MB. My first thought was to import the data as a module, so I tried to add a heading and a trailing section:</p>
<pre><code># heading, new newline here, it assigns the list to variable "a"
import json
a =
# trailing
with open('json-dump', 'w') as f:
    for row in a:
        f.write(json.dumps(row) + '\n')
</code></pre>
<p>But it takes tons of memory to do that: I created an 8 GB swap file and it still results in OOM, and Python is killed.</p>
<p>Is there any better way to do that?</p>
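One hedged alternative to importing the data as a module: Python literals like the sample above can be parsed with `ast.literal_eval`, which avoids compiling a 300 MB `.py` file, and if the dump is written one literal per line it can be converted to JSON line by line in bounded memory. A minimal sketch:

```python
# Sketch: parse Python-literal text without importing it as a module.
import ast
import json

def literal_to_json_line(line: str) -> str:
    """Parse one Python literal and re-serialize it as JSON."""
    obj = ast.literal_eval(line)  # safe: only literals, no arbitrary code
    return json.dumps(obj)

# Converting a dump file row by row (paths are placeholders):
# with open('python-dump') as src, open('json-dump', 'w') as dst:
#     for line in src:
#         dst.write(literal_to_json_line(line) + '\n')
```

Note that a single giant literal still has to fit in memory once parsed; the memory win comes from processing one record per line.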
|
<python>
|
2024-04-21 11:09:26
| 0
| 23,701
|
daisy
|
78,361,283
| 6,430,403
|
run an async websocket loop in background of flask app
|
<p>I am trying to build an app which will connect to a websocket (for streaming data) and provide this data with rest API. I am using <code>Flask</code> for creating my API and <code>websockets</code> package for connecting to websocket server.</p>
<p>This is my code so far:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import websockets
from flask import Flask, jsonify
app = Flask(__name__)
websocket = None
async def websocket_setup():
global websocket
uri = 'wss://ws.example.com'
websocket = await websockets.connect(uri)
print("WebSocket connected")
async def websocket_loop():
global websocket
await websocket_setup()
while True:
data = await websocket.recv()
print("Received:", data) # will save to a global variable
@app.get('/')
def home():
return jsonify({'msg':'hello'})
# Will have APIs for get data, send msg to websocket, etc
if __name__ == '__main__':
asyncio.create_task(websocket_loop())
app.run(debug=False)
</code></pre>
<p>error:</p>
<pre><code>RuntimeError: no running event loop
sys:1: RuntimeWarning: coroutine 'websocket_loop' was never awaited
</code></pre>
<p>I am keeping <code>websocket</code> as a global variable because I will need to use it in my API methods, e.g. for sending messages. I'm okay with using any other framework as well; I have tried <code>FastAPI</code> and <code>Sanic</code>, but without success.</p>
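One pattern that avoids the error (a sketch, not the only option): `asyncio.create_task` only works inside an already-running event loop, which is why the code above raises `RuntimeError`. You can instead run the websocket coroutine on its own event loop in a daemon thread, started before the blocking `app.run()`. The question's `websocket_loop()` would be passed in place of the demo coroutine.

```python
# Sketch: give an asyncio coroutine its own event loop in a background
# thread, so Flask's blocking app.run() can keep the main thread.
import asyncio
import threading

def start_background_loop(coro):
    """Run `coro` on a fresh event loop in a daemon thread; return the loop."""
    loop = asyncio.new_event_loop()

    def runner():
        asyncio.set_event_loop(loop)
        loop.run_until_complete(coro)

    threading.Thread(target=runner, daemon=True).start()
    return loop

# In the question's code this would replace asyncio.create_task(...):
#   loop = start_background_loop(websocket_loop())
#   app.run(debug=False)
```

Flask handlers can then hand work to the loop thread-safely, e.g. `asyncio.run_coroutine_threadsafe(websocket.send(msg), loop)`.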
|
<python><flask><asynchronous><websocket><python-asyncio>
|
2024-04-21 10:54:24
| 1
| 401
|
Rishabh Gupta
|
78,361,034
| 4,417,586
|
Django queryset filter on a ManyToMany with custom fields
|
<p>I have the following Django models <code>Block</code> and <code>CustomCondition</code> that are related to each other through a custom ManyToMany relationship which has customs fields and a custom DB table:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
class CustomCondition(models.Model):
name = models.CharField("custom condition name")
class Meta:
db_table = "custom_conditions"
def __str__(self) -> str:
return self.name
class BlockCondition(models.Model):
block = models.ForeignKey(
"Block",
related_name="block_condition_as_block",
on_delete=models.CASCADE,
)
custom_condition = models.ForeignKey(
CustomCondition,
related_name="block_condition_as_custom_condition",
on_delete=models.CASCADE,
)
choice = models.CharField(
"choice",
choices=[
("NO_CONDITION", "No condition"),
("ACTIVATED", "Activated"),
("NOT_ACTIVATED", "Not activated"),
],
default="NO_CONDITION",
)
class Meta:
db_table = "block_conditions"
def __str__(self) -> str:
return self.custom_condition.name
class Block(models.Model):
name = models.CharField("name", null=False)
block_conditions = models.ManyToManyField(
CustomCondition,
through="BlockCondition",
blank=True,
default=None,
related_name="blocks_as_block_condition",
)
class Meta:
db_table = "blocks"
</code></pre>
<p>Now I want to filter blocks for each condition name when that condition name is activated:</p>
<pre class="lang-py prettyprint-override"><code>for c in CustomCondition.objects.all():
if c.name:
filtered_blocks = Block.objects.filter(
block_conditions__custom_condition__name=c.name
)
filtered_blocks = filtered_blocks.filter(
block_conditions__choice__in=["ACTIVATED", "NO_CONDITION"]
)
print(filtered_blocks)
</code></pre>
<p>But I get the following Django error:</p>
<p><code>django.core.exceptions.FieldError: Unsupported lookup 'custom_condition' for ForeignKey or join on the field not permitted.</code></p>
<p>What am I doing wrong here?</p>
|
<python><django><django-models>
|
2024-04-21 09:33:33
| 1
| 1,152
|
bolino
|
78,360,988
| 13,942,929
|
Cython : how can I import multiple class or object from a single .pyx file
|
<p>For simplicity, I'll show you my .pyx file.
It's called Geometry.pyx, and in that file I have Segment, Angle and Circle classes.</p>
<pre><code>cimport Geometry
# Define Python classes
cdef class Segment:
    def __cinit__(self, float length):
        self.c_segment = make_shared[_Segment](length)

    def get_length(self) -> float:
        return self.c_segment.get().get_length()


cdef class Angle:
    def __cinit__(self, float degrees):
        self.c_angle = make_shared[_Angle](degrees)

    def get_degrees(self) -> float:
        return self.c_angle.get().get_degrees()

    def get_radians(self) -> float:
        return self.c_angle.get().get_radians()


cdef class Circle:
    def __cinit__(self, float radius):
        self.c_circle = make_shared[_Circle](radius)

    def get_radius(self) -> float:
        return self.c_circle.get().get_radius()

    def get_diameter(self) -> float:
        return self.c_circle.get().get_diameter()

    def get_area(self) -> float:
        return self.c_circle.get().get_area()

    def get_perimeter(self) -> float:
        return self.c_circle.get().get_perimeter()
</code></pre>
<p>How should I write my setup.py file?
I wrote as below but I got an error</p>
<pre><code>from setuptools import setup, Extension, find_packages
from Cython.Build import cythonize
# extension = Extension(
#     "Geometry.Geometry",
#     ["src/Geometry/Geometry.pyx", "../cpp/lib/src/Geometry.cpp"],
#     include_dirs=["../cpp/lib/include"],
#     extra_compile_args=['-std=c++17', '-O3'],
#     language='c++'
# )

extension_segment = Extension(
    "Geometry.Segment",
    ["src/Geometry/Geometry.pyx", "../cpp/lib/src/Geometry.cpp"],
    include_dirs=["../cpp/lib/include"],
    extra_compile_args=['-std=c++17', '-O3'],
    language='c++'
)
extension_angle = Extension(
    "Geometry.Angle",
    ["src/Geometry/Geometry.pyx", "../cpp/lib/src/Geometry.cpp"],
    include_dirs=["../cpp/lib/include"],
    extra_compile_args=['-std=c++17', '-O3'],
    language='c++'
)
extension_circle = Extension(
    "Geometry.Circle",
    ["src/Geometry/Geometry.pyx", "../cpp/lib/src/Geometry.cpp"],
    include_dirs=["../cpp/lib/include"],
    extra_compile_args=['-std=c++17', '-O3'],
    language='c++'
)

setup(
    name='Geometry',
    version='0.1',
    packages=find_packages(where='src'),
    package_dir={"": "src"},
    package_data={"Geometry": ["*.pyx"]},
    ext_modules=cythonize(extension_segment,
                          compiler_directives={'language_level': 3},
                          include_path=["src/Geometry"],
                          annotate=True
                          ) +
                cythonize(extension_angle,
                          compiler_directives={'language_level': 3},
                          include_path=["src/Geometry"],
                          annotate=True
                          ) +
                cythonize(extension_circle,
                          compiler_directives={'language_level': 3},
                          include_path=["src/Geometry"],
                          annotate=True
                          )
)
</code></pre>
<p>Test.py</p>
<pre><code>from Geometry import Segment, Angle, Circle
class TestGeometry:
    def test_segment_length(self):
        seg = Segment(10.0)
        # assert seg.get_length() == 5.0
        print("HELO")

    #
    # def test_angle_degrees(self):
    #     ang = Angle(90.0)
    #     assert ang.get_degrees() == 90.0
    #
    # def test_angle_radians(self):
    #     ang = Angle(90.0)
    #     assert ang.get_radians() == 1.5707963267948966  # approximately pi/2
    #
    # def test_circle_radius(self):
    #     circ = Circle(3.0)
    #     assert circ.get_radius() == 3.0
    #
    # def test_circle_diameter(self):
    #     circ = Circle(3.0)
    #     assert circ.get_diameter() == 6.0
    #
    # def test_circle_area(self):
    #     circ = Circle(3.0)
    #     assert circ.get_area() == 28.274333882308138  # approximately pi * r^2
    #
    # def test_circle_perimeter(self):
    #     circ = Circle(3.0)
    #     assert circ.get_perimeter() == 18.84955592153876  # approximately 2 * pi * r
</code></pre>
<p>Error Message</p>
<pre><code>/home/punreach/Desktop/A_Folder/project-core/venv/bin/python /opt/pycharm-2024.1/plugins/python/helpers/pycharm/_jb_pytest_runner.py --target test_geometry.py::TestGeometry.test_segment_length
Testing started at 오후 6:11 ...
Launching pytest with arguments test_geometry.py::TestGeometry::test_segment_length --no-header --no-summary -q in /home/punreach/Desktop/A_Folder/project-core/lib/Geometry/cython/test
============================= test session starts ==============================
collecting ...
test/test_geometry.py:None (test/test_geometry.py)
test_geometry.py:1: in <module>
from Geometry import Segment, Angle, Circle
../../../../venv/lib/python3.10/site-packages/Geometry-0.1-py3.10-linux-x86_64.egg/Geometry/__init__.py:1: in <module>
from .Geometry import *
src/Geometry/Geometry.pyx:1: in init Geometry.Geometry
???
E AttributeError: partially initialized module 'Geometry' has no attribute 'Segment' (most likely due to a circular import)
collected 0 items / 1 error
=============================== 1 error in 0.02s ===============================
ERROR: found no collectors for /home/punreach/Desktop/A_Folder/project-core/lib/Geometry/cython/test/test_geometry.py::TestGeometry::test_segment_length
Process finished with exit code 4
</code></pre>
|
<python><c++><cython>
|
2024-04-21 09:18:10
| 1
| 3,779
|
Punreach Rany
|
78,360,900
| 1,413,856
|
Extending a line
|
<p>As far as I understand, you can use the <code>create_line()</code> method to create a path with multiple points. For example, you can write the following:</p>
<pre class="lang-py prettyprint-override"><code>points = [(10,20), (30, 40), (50,60)]
line = canvas.create_line(points)
</code></pre>
<p>The <code>create_line()</code> method flattens the parameters, so it’s equivalent to:</p>
<pre class="lang-py prettyprint-override"><code>line = canvas.create_line(10,20, 30,40, 50,60)
</code></pre>
<p>(I’ve left spaces between the number pairs for clarity).</p>
<p>The <code>line</code> variable contains the resource id of the line, but what, exactly, is the line? Is it really a multi-segment object?</p>
<p>If so, is it possible to add another point to this object without re-creating it? Something like:</p>
<pre class="lang-py prettyprint-override"><code>canvas.extend(line, (70,80)) # I know this doesn’t work
</code></pre>
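A hedged sketch of what seems to be the closest real equivalent: a line item is indeed a multi-segment polyline defined entirely by its coordinate list, and `canvas.coords()` both reads that list (when called with just the item id) and rewrites it (when called with new coordinates), so the line never has to be destroyed and recreated:

```python
# Sketch: "extend" a canvas line by rewriting its coordinate list in place.
def extend_line(canvas, line_id, x, y):
    # coords(item) returns the flat list [x1, y1, x2, y2, ...];
    # coords(item, *new_coords) replaces it.
    current = canvas.coords(line_id)
    canvas.coords(line_id, *current, x, y)
```

With a real canvas, `extend_line(canvas, line, 70, 80)` appends the point from the question's example.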
|
<python><tkinter><tkinter-canvas>
|
2024-04-21 08:35:47
| 1
| 16,921
|
Manngo
|
78,360,777
| 3,405,291
|
Compile TensorFlow while installing by Conda
|
<h1>Error due to AVX</h1>
<p>I have run into this error when importing TensorFlow by a simple Python statement:</p>
<pre><code>>>> import tensorflow as tf
Illegal instruction (core dumped)
</code></pre>
<p>The <code>lscpu</code> command confirms that the CPU lacks the AVX instruction set. It looks like the error is due to my CPU not supporting AVX instructions:</p>
<p><a href="https://github.com/tensorflow/tensorflow/issues/62342#issuecomment-1801151769" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/62342#issuecomment-1801151769</a></p>
<h1>Option 1</h1>
<h2>Compile TensorFlow</h2>
<p>I'm using <code>conda</code>. How can I force <code>conda</code> to compile TensorFlow while installing it? Only TensorFlow needs to be compiled, not others.</p>
<p>This is my <code>environment.yml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>name: deep3d_pytorch
channels:
- pytorch
- conda-forge
- defaults
dependencies:
- python
- pytorch
- torchvision
- numpy
- scikit-image
- scipy
- pillow
- pip
- ipython
- yaml
- pip:
- matplotlib
- opencv-python
- tensorboard
- tensorflow
- kornia
- dominate
- trimesh
</code></pre>
<h1>Option 2</h1>
<h2>Install older TensorFlow</h2>
<p>The other option might be forcing <code>conda</code> to install the latest TensorFlow version that doesn't require AVX. I wonder how I can force <code>conda</code> to do that.</p>
<h1>Option 3</h1>
<h2>Get a virtual machine with proper CPU</h2>
<p>I have contacted the virtual server provider to ask for a CPU with AVX capability. Since they create and sell virtual machines, maybe it's easy for them to just create another virtual CPU with AVX support. I'm not sure. Let's see.</p>
<h1>Option 4</h1>
<h2>Wheel</h2>
<p>The other option is to find a precompiled <code>wheel</code> of TensorFlow that was built without the AVX requirement.</p>
<p>Are there any other options?</p>
|
<python><tensorflow><anaconda><conda>
|
2024-04-21 07:45:05
| 1
| 8,185
|
Megidd
|
78,360,693
| 3,882,290
|
Pip ERROR: No matching distribution found for python
|
<p>I'm tidying up a python project for public distribution.</p>
<p>It uses a virtual environment and has a requirements.txt file containing all dependencies.</p>
<p>I would like this environment to include a dependency on the specific version of python I built it with. I seem to remember that this is good practice.</p>
<p>So I attempted to install Python using pip, which seems the obvious thing to do, although pip itself was installed using the standard Python download (for version 3.12.0).</p>
<p>So I tried two possible commands:</p>
<pre><code>pip install python
</code></pre>
<p>OR</p>
<pre><code>pip install python==3.12.0
</code></pre>
<p>However, pip gives the following error messages for the first:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement python (from versions: none)
ERROR: No matching distribution found for python
</code></pre>
<p>and a similar message for the second.</p>
<p>I also found that the same occurs if I try to use pip outside the virtual environment (though that isn't something I intend to do - I did this to pin down the exact source of the problem). Thus, this seems to be an issue with pip, not an issue with virtual environments.</p>
<p>I looked around but couldn't see any queries from others who experienced the same problem.</p>
<p>This suggests it's a dumb thing to do, so maybe nobody else has tried it. Or alternatively (and this is a bit like the question 'why haven't aliens contacted us?'), it's because every time they do it, it works fine, so there's nothing worth remarking on. Maybe the aliens contact us all the time and we just don't notice, a theory I'm not averse to.</p>
<p>I found a 'standard' method for installing a specific version of python in a virtual environment <a href="https://stackoverflow.com/questions/5506110/is-it-possible-to-install-another-version-of-python-to-virtualenv">here</a></p>
<p>But it seems odd that pip can't be used to install python.</p>
<p>Is the problem an error or a design feature? I'm flagging it as an error, because if there is something wrong with my setup, I want to know.</p>
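On the underlying goal (recording the interpreter version a project needs): pip installs packages from an index, and the CPython interpreter itself is not distributed that way, which is why the resolver reports no candidates at all ("from versions: none"). The conventional place to pin the interpreter is project metadata rather than requirements.txt; a fragment, assuming a pyproject.toml-based layout:

```toml
# pyproject.toml fragment: installers read this to refuse incompatible
# interpreters; it does not install Python itself.
[project]
requires-python = ">=3.12"
```

(In a legacy setup.py the equivalent is `python_requires=">=3.12"`. Tools like pyenv or conda, not pip, are what actually install interpreters.)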
|
<python><pip><virtualenv>
|
2024-04-21 07:09:01
| 1
| 365
|
ancient geek
|
78,360,369
| 195,562
|
The results of math.fsum is different from a simple loop sum in python
|
<p>I'd like to understand why the two methods of summing a list of floating point numbers differs by exactly 32.0?</p>
<pre class="lang-py prettyprint-override"><code>import math
nums = [26015151255025000.,26015151255025000.,26015151255025000.,26015151255025000.,26015151255025000.,26015151255025000.,26015151255025000.,26015151255025000.,26015151255025000.]
math_fsum_result = math.fsum(nums)
loop_result = 0.0
for n in nums:
    loop_result += n
print(format(math_fsum_result, '.1f'))
print(format(loop_result, '.1f'))
</code></pre>
<p>Which produces:<br />
234136361295224992.0<br />
234136361295224960.0</p>
<p>It is understood that both are accumulating rounding errors but the inconsistency of exactly 32.0 seems weird.</p>
<p>For the record the correct value calculated with decimals is:<br />
234136361295225000.0</p>
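The exact 32.0 appears to be no coincidence: at this magnitude the spacing between adjacent doubles (one ULP) is 32. `math.fsum` tracks exact partial sums and rounds only once at the end, landing on the nearest representable double to the true total, while the loop rounds after every addition and drifts one ULP below it. This can be checked directly:

```python
# Check: the gap between the two methods equals one ULP near 2.34e17.
import math

nums = [26015151255025000.0] * 9
gap = math.fsum(nums) - sum(nums)   # sum() rounds at every step, like the loop
ulp = math.ulp(math.fsum(nums))     # spacing of doubles at this magnitude
print(gap, ulp)  # 32.0 32.0
```

So neither result is "wrong" by more than rounding allows; fsum's answer is simply the closest double to the exact 234136361295225000.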
|
<python><arrays><math><sum>
|
2024-04-21 04:11:42
| 3
| 1,313
|
Firestrand
|
78,360,190
| 11,951,910
|
How to add a title to a plot in ploty using a pandas histogram
|
<p>I am trying to add a title to my plot but it's not working.</p>
<pre><code>data.hist(by=['ascites', 'sex', 'drug'])
plt.title('Ascites Distribution')
plt.xlabel('Age')
plt.ylabel('Number of Patients')
plt.show()
</code></pre>
<p>The title and xlabel are not showing.</p>
<p><a href="https://i.sstatic.net/GI3An.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GI3An.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><plot>
|
2024-04-21 01:49:21
| 1
| 718
|
newdeveloper
|
78,360,154
| 826,112
|
Why is there a large difference between measured time and actual time?
|
<p>I am playing around with big lists to test an RPi cluster. I am building a list of character strings whose hashes I will eventually calculate, to simulate cryptocurrency mining. The code is running on a Raspberry Pi 3B Rev 1.2.</p>
<p>The code to build the list is:</p>
<pre><code>from pympler import asizeof
import itertools
import time
start = time.process_time()
combos = list(itertools.combinations_with_replacement('abcdefghijlkmnopqrstuvwxyz', 6))
stop = time.process_time()
print(len(combos), asizeof.asizeof(combos)/1e6, (stop-start))
</code></pre>
<p>The results are:</p>
<pre><code>andrew@master:~ $ python mem_test_1.py
736281 70.727904 0.68859771
</code></pre>
<p>The problem is that the actual time (using a stopwatch) is closer to 45 sec. This time difference is reproducible when running the code on my M2 macbook (different numbers but still a big discrepancy). I have also tried time.time() and time.perf_counter() with the same results.</p>
<p>Where is all the missing time going and what have I missed?</p>
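One hedged explanation worth testing: `time.process_time()` only covers the code between `start` and `stop`, i.e. the list construction. The script then calls pympler's `asizeof()` on roughly 736k tuples inside the `print`, and that recursive traversal runs entirely outside the timed window. Timing each stage separately would confirm where the stopwatch time goes; a sketch without the pympler dependency:

```python
# Sketch: time each stage separately; the build itself is sub-second on most
# machines, so the suspicion is the untimed asizeof() traversal.
import itertools
import time

t0 = time.perf_counter()
combos = list(itertools.combinations_with_replacement('abcdefghijlkmnopqrstuvwxyz', 6))
build_s = time.perf_counter() - t0
print(f"built {len(combos)} combos in {build_s:.3f}s")

# t0 = time.perf_counter()
# size_mb = asizeof.asizeof(combos) / 1e6   # time this stage too
# print(f"measured {size_mb} MB in {time.perf_counter() - t0:.1f}s")
```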
|
<python>
|
2024-04-21 01:19:38
| 1
| 536
|
Andrew H
|
78,359,858
| 301,644
|
Why would a parquet file get larger when FIXED_LEN_BYTE_ARRAY data type is used for fixed length byte array column?
|
<p>When trying to store a dataset in a parquet file for uploading it to HuggingFace I've encountered a strange phenomenon: when storing 50-byte array as a column, the file size gets larger when <a href="https://parquet.apache.org/docs/file-format/types/" rel="nofollow noreferrer">type</a> <code>FIXED_LEN_BYTE_ARRAY</code> is used instead of <code>BYTE_ARRAY</code>.</p>
<p>Here is python code demonstrating that observation:</p>
<pre><code>import os
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq
tmp_dir = os.path.expanduser("~/tmp/pq") # directory for example files to be stored
n_bytes = 50
n_records = 4096 * 126
np_rng = np.random.default_rng(1)
array = np.frombuffer(
    np_rng.bytes(n_bytes * n_records), dtype=np.dtype((np.bytes_, n_bytes)))
table_byte_array = pa.Table.from_pydict({'byte_data': array})
pq.write_table(table_byte_array, f'{tmp_dir}/byte_array_example.parquet')
table_fixed_len_byte_array = pa.Table.from_pydict(
    {'byte_data': array},
    schema=pa.schema([
        pa.field('byte_data', pa.binary(array.itemsize), nullable=False)]))
pq.write_table(
    table_fixed_len_byte_array,
    f'{tmp_dir}/fixed_len_byte_array_example.parquet')
file_sizes = []
for name in ['byte_array', 'fixed_len_byte_array']:
    filename = f'{tmp_dir}/{name}_example.parquet'
    table = pq.read_table(filename)
    file_size = os.path.getsize(filename)
    file_sizes.append(file_size)
    print(f'{name}: {table.schema.field("byte_data").type} {file_size}')
print(
    f'size ratio: {file_sizes[1] / file_sizes[0]:.3f}; '
    f'size diff: {file_sizes[1] - file_sizes[0]}')
</code></pre>
<p>It prints:</p>
<pre><code>byte_array: binary 25493170
fixed_len_byte_array: fixed_size_binary[50] 25850348
size ratio: 1.014; size diff: 357178
</code></pre>
<p>This seems to be very counter-intuitive: not having a per-element length (i.e. storing less data) results in larger file size. Why is that the case?</p>
<hr />
<p>System info for the above test: Ubuntu 22.04.4 LTS; Python 3.10.11; pyarrow version 12.0.1 (using parquet-cpp-arrow version 12.0.1). When looking at the output of <code>pqrs schema <name>_example.parquet --detailed</code>, I see the same <code>encoding</code> line for both files: <code>encodings: RLE_DICTIONARY PLAIN RLE PLAIN</code>.</p>
|
<python><serialization><encoding><format><parquet>
|
2024-04-20 22:14:57
| 1
| 1,493
|
fiktor
|
78,359,764
| 2,058,333
|
FastAPI fails spinning up worker
|
<p>I am running FastAPI inside a docker container on my deployment VM. The image tests well on my develop OSX + Docker. Now, running it on the target machine it fails with the issue below.</p>
<p>It says max workers is -1?</p>
<pre><code>[2024-04-20 23:26:12 +0200] [165] [INFO] Starting gunicorn 22.0.0
[2024-04-20 23:26:12 +0200] [165] [INFO] Listening at: http://0.0.0.0:8000 (165)
[2024-04-20 23:26:12 +0200] [165] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2024-04-20 23:26:12 +0200] [166] [INFO] Booting worker with pid: 166
OpenBLAS blas_thread_init: pthread_create failed for thread 1 of 2: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
</code></pre>
<p>Here is my <code>gunicorn_conf.py</code> file</p>
<pre><code>"""
GUNICORN CONFIG
"""
import multiprocessing
import os
workers_per_core_str = os.getenv("WORKERS_PER_CORE", "1")
max_workers_str = os.getenv("MAX_WORKERS", 2)
use_max_workers = None
if max_workers_str:
    use_max_workers = int(max_workers_str)
use_max_workers = 2
workers_per_core_str = "1"
web_concurrency_str = os.getenv("WEB_CONCURRENCY", None)
host = "0.0.0.0"
port = "8000"
bind_env = os.getenv("BIND", None)
use_loglevel = os.getenv("LOG_LEVEL", "info")
bind = "{}:{}".format(host, port)
cores = multiprocessing.cpu_count()
workers_per_core = float(workers_per_core_str)
default_web_concurrency = workers_per_core * cores
if web_concurrency_str:
    web_concurrency = int(web_concurrency_str)
    assert web_concurrency > 0
else:
    web_concurrency = max(int(default_web_concurrency), 2)
if use_max_workers:
    web_concurrency = min(web_concurrency, use_max_workers)
accesslog_var = "/gunicorn.log"
use_accesslog = accesslog_var or None
errorlog_var = "/gunicorn.err"
use_errorlog = errorlog_var or None
graceful_timeout_str = os.getenv("GRACEFUL_TIMEOUT", "120")
timeout_str = os.getenv("TIMEOUT", "120")
keepalive_str = os.getenv("KEEP_ALIVE", "5")
</code></pre>
<p>I assume some settings on the target machine may be off?
Here is my <code>ulimit</code> output from the host</p>
<pre><code>ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7841
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7841
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
</code></pre>
<p>and inside the running container</p>
<pre><code>ulimit -a
real-time non-blocking time (microseconds, -R) unlimited
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7841
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
</code></pre>
<p>Where should I look? Thanks!</p>
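A hedged observation: the failure is not gunicorn's worker count but OpenBLAS being denied `pthread_create` inside the container, which often points at a seccomp profile or process-limit mismatch between host and container (note the host allows 7841 user processes while the container reports unlimited). A common mitigation is to stop OpenBLAS from spawning a thread pool at all via its documented environment variable; a Dockerfile fragment, assuming that is where your environment is defined:

```dockerfile
# Cap BLAS/OpenMP thread pools so worker startup does not attempt
# pthread_create under a restricted limit.
ENV OPENBLAS_NUM_THREADS=1 \
    OMP_NUM_THREADS=1
```

If that clears the OpenBLAS error, the remaining suspects are the host-side limits themselves, e.g. `docker run --ulimit nproc=...` or the runtime's default seccomp profile.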
|
<python><linux><docker><fastapi><gunicorn>
|
2024-04-20 21:31:23
| 0
| 5,698
|
El Dude
|
78,359,535
| 5,424,117
|
FastAPI + Async DB calls + Async response to client
|
<p>I have code already for using FastAPI and making an async call to the database.</p>
<p>Next I need to trigger an async call to the database (to create or update) and also trigger a response to the client calling my API in an async way, so that each process runs separately and does not wait for the other one to complete.</p>
<p>How can I do this, if it's possible?</p>
<p>Here is an example FastAPI endpoint - it is not a perfect example, but shows how I'm sending async updates to the database. How would I also send a response to the client in an async manner? (assume that the code calls an LLM and once the response from the LLM is received, the code wants to update the DB and also send some of the LLM's response back to the client.)</p>
<p>That code is not present, but the async call to the database is present, as well as a "return" to the client. Assume that what I want to return to the client is <em>not</em> dependent on the database call, but came back from the LLM.</p>
<p>Essentially, I'm asking for the async "pattern" to send a response while also updating the database.</p>
<pre><code>async def update_conversation_title(conversation_id: UUID, title: Annotated[str, Body(embed=True)], session: AsyncSession = Depends(db.get_session)):
logger.info(f"get_conversations_by_id() - Request Received, finding all Conversations for ID: {conversation_id}.")
res = await db.update_db(session, m.Conversation, update_fields={'title': title}, constraints=[m.Conversation.id == conversation_id])
print(res)
# Something here to send LLM response to calling client in a async way...
return res[0]
</code></pre>
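For the pattern itself, here is a framework-neutral sketch using plain asyncio stand-ins: schedule the DB write as a background task and return the LLM payload without awaiting it. (In FastAPI specifically, the built-in `BackgroundTasks` dependency achieves the same effect, with the write running after the response is sent.) `update_db` and the payload below are placeholders for the question's code.

```python
# Sketch: fire-and-forget DB write; the response does not wait for it.
import asyncio

DB = []          # stand-in for the database
PENDING = set()  # keep task references so they are not garbage-collected

async def update_db(record):
    await asyncio.sleep(0.1)  # stand-in for the real async DB round trip
    DB.append(record)

async def handle_request(llm_response):
    task = asyncio.create_task(update_db(llm_response))  # scheduled, not awaited
    PENDING.add(task)
    task.add_done_callback(PENDING.discard)
    return {"answer": llm_response}  # returns immediately

async def main():
    resp = await handle_request("hello")
    assert DB == []                   # write has not landed yet
    await asyncio.gather(*PENDING)    # let the background write finish
    return resp

result = asyncio.run(main())
```

The `PENDING` set matters: asyncio only keeps weak references to tasks, so an unreferenced fire-and-forget task can be collected before it runs.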
|
<python><asynchronous><fastapi>
|
2024-04-20 19:50:34
| 0
| 2,474
|
jb62
|
78,359,422
| 1,380,285
|
Trying to find pythonic way to partially fill numpy array
|
<p>I have a numpy array <code>psi</code> of shape (3,1000)</p>
<pre><code>psi.__class__
Out[115]: numpy.ndarray
psi.shape
Out[116]: (3, 1000)
</code></pre>
<p>I want to partially fill <code>psi</code> with another array <code>b</code></p>
<pre><code>b.__class__
Out[113]: numpy.ndarray
b.shape
Out[114]: (3, 500)
</code></pre>
<p>I can do this with a loop:</p>
<pre><code>for n in range(3):
    psi[n][:500] = b[n]
</code></pre>
<p>But it seems to me that there ought to be a way to do this more directly. But for instance</p>
<pre><code>psi[:][:500] = b
</code></pre>
<p>fails with error</p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-120-6b23082d9d6b>", line 1, in <module>
psi[:][:500] = b
ValueError: could not broadcast input array from shape (3,500) into shape (3,1000)
</code></pre>
<p>I've tried a few variations on the theme as well, with similar results. This seems pretty straightforward. Any idea how to do it?</p>
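For reference, the idiomatic form is a single 2-D slice assignment. `psi[:][:500]` fails because it indexes twice: `psi[:]` is a (3, 1000) view, and the second `[:500]` slices along the first axis again (rows, of which there are only 3), so the assignment target is still (3, 1000):

```python
# Sketch: slice both axes in one indexing operation.
import numpy as np

psi = np.zeros((3, 1000))
b = np.ones((3, 500))
psi[:, :500] = b   # rows, then column range, in a single subscript
```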
|
<python><arrays><numpy>
|
2024-04-20 19:09:04
| 1
| 6,713
|
bob.sacamento
|
78,359,397
| 279,097
|
Polars: zscore with grouping on multiple period
|
<p>What is the best way, with Polars, to calculate the z-score on multiple fields and periods with a grouping?<br />
I have the code below, but I'm checking whether I can do something better:</p>
<pre class="lang-py prettyprint-override"><code>window = "30d" # would like to do it on a list
df = (
df.sort(["date", "matu", "strike"])
.rolling(index_column="date", period=window, group_by=["matu", "strike"])
.agg(
[
pl.col(col).mean().alias(f"mean {col} {window}")
for col in ["value1", "value2", "value3"]
]
+ [
pl.col(col).std().alias(f"std {col} {window}")
for col in ["value1", "value2", "value3"]
]
+ [pl.col(col).first() for col in ["value1", "value2", "value3"]]
)
.with_columns(
(
(pl.col(f"{col}") - pl.col(f"mean {col} {window}"))
/ pl.col(f"std {col} {window}")
).alias(f"z-score {col} {window}")
        for col, window in itertools.product(
["value1", "value2", "value3"], [window]
)
)
)
</code></pre>
|
<python><python-polars>
|
2024-04-20 19:01:00
| 1
| 415
|
Mac Fly
|
78,359,302
| 1,560,241
|
module 'sys' has no attribute 'last_traceback'
|
<p>I'm a bit confused about whether this is a standard attribute. The docstring says it does exist.</p>
<pre><code>In [1]: import sys
In [2]: sys.last_traceback
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[2], line 1
----> 1 sys.last_traceback
AttributeError: module 'sys' has no attribute 'last_traceback'
</code></pre>
|
<python><traceback>
|
2024-04-20 18:28:14
| 1
| 812
|
episodeyang
|
78,359,292
| 1,769,327
|
Calculating Exponential Moving Average (EMA) in Polars with ewm_mean or another method
|
<p>Please note, this is related to <a href="https://stackoverflow.com/questions/77160103/exponential-moving-average-ema-calculations-in-polars-dataframe">Exponential Moving Average (EMA) calculations in Polars dataframe</a> that I raised 7 months ago.</p>
<p>Suppose I have the following values:</p>
<pre><code>values = [143.15,143.1,143.06,143.01,143.03,143.09,143.14,143.18,143.2,143.2,143.2,143.31,143.38,143.35,143.34,143.25,143.33,143.3,143.33,143.36]
</code></pre>
<p>Using Pandas and TA-Lib, I can perform the following:</p>
<pre><code>import pandas as pd
import talib as ta
df_pan = pd.DataFrame(
{
'value': values
}
)
df_pan['ema_9'] = ta.EMA(df_pan['value'], timeperiod=9)
</code></pre>
<p>Resulting in:</p>
<pre><code> value ema_9
0 143.15 NaN
1 143.10 NaN
2 143.06 NaN
3 143.01 NaN
4 143.03 NaN
5 143.09 NaN
6 143.14 NaN
7 143.18 NaN
8 143.20 143.106667
9 143.20 143.125333
10 143.20 143.140267
11 143.31 143.174213
12 143.38 143.215371
13 143.35 143.242297
14 143.34 143.261837
15 143.25 143.259470
16 143.33 143.273576
17 143.30 143.278861
18 143.33 143.289089
19 143.36 143.303271
</code></pre>
<p>In Polars, I had been doing the following:</p>
<pre><code>import polars as pl
df_pol = (
pl.DataFrame(
{
'value': values
}
)
.with_columns(
pl.when(pl.col('value').cum_count() < 9)
.then(pl.col('value').head(9).mean())
.otherwise(pl.col('value'))
.ewm_mean(span=9, min_periods=9, ignore_nulls=True, adjust=False)
.alias('ema_9')
)
)
</code></pre>
<p>and I had been getting identical results to TA-Lib, but for reasons I am unable to fathom, I now get the following:</p>
<pre><code>value ema_9
f64 f64
143.15 null
143.1 null
143.06 null
143.01 null
143.03 null
143.09 null
143.14 null
143.18 null
143.2 143.125333
143.2 143.140267
143.2 143.152213
143.31 143.183771
143.38 143.223017
143.35 143.248413
143.34 143.266731
143.25 143.263384
143.33 143.276708
143.3 143.281366
143.33 143.291093
143.36 143.304874
</code></pre>
<p>Unless I am being incredibly dumb (not impossible!), I can only think there has been a change to the mechanics of how ewm_mean calculates its results.</p>
<p>I have tried changing the ewm_mean parameter values, but I feel I am flailing around, as I simply don't have the knowledge of the mathematical principles behind how it works.</p>
<p>So, ideally, advice on how to adjust ewm_mean for my needs would be perfect.</p>
<p>In order to provide more background, I have rolled my own function that produces the desired results, but is woefully inefficient as it loops through a Python list:</p>
<pre><code>def add_ema_col(df: pl.DataFrame, column: str, periods: int, smoothing:int = 2) -> pl.DataFrame:
"""Add an *EMA* column to an OHLCV dataframe."""
values = df.select(column).to_series().to_list()
ema = [sum(values[:periods]) / periods]
nones = [None] * (periods - 1)
for price in values[periods:]:
ema.append((price * (smoothing / (1 + periods))) + ema[-1] * (1 - (smoothing / (1 + periods))))
final_values = pl.Series(nones + ema)
return (
df
.with_columns(
final_values
)
)
df = (
pl.DataFrame({'Close': values})
.pipe(lambda df: add_ema_col(df, 'Close', 9))
)
</code></pre>
<p>Producing the desired:</p>
<pre><code>Close
f64 f64
143.15 null
143.1 null
143.06 null
143.01 null
143.03 null
143.09 null
143.14 null
143.18 null
143.2 143.106667
143.2 143.125333
143.2 143.140267
143.31 143.174213
143.38 143.215371
143.35 143.242297
143.34 143.261837
143.25 143.25947
143.33 143.273576
143.3 143.278861
143.33 143.289089
143.36 143.303271
</code></pre>
<p>So, ideally I would like to achieve the desired results with ewm_mean, but if that is for some reason not practical, I would welcome feedback on how to convert the principles of my custom function into a Polars expression operating solely on the dataframe without the need for Python loop & list.</p>
<p>In reality, I would actually welcome answers to both, as I am trying to improve my Polars skills as much as possible, but I don't want to be greedy with other peoples' time!</p>
<p>Any assistance would be greatly appreciated!</p>
<p><strong>Edit</strong></p>
<p>ewm_mean worked as expected in Polars version: 0.19.14. I am currently using version: 0.20.21.</p>
<p>I have raised this as a defect: <a href="https://github.com/pola-rs/polars/issues/15807" rel="nofollow noreferrer">ewm_mean produces different results in 0.20.21 than 0.19.14 #15807</a></p>
<p>If it is a bug, I guess help on making my custom function more efficient would be the most beneficial to me right now.</p>
<p>Btw, I appreciate that I could convert the dataframe to a pyarrow-backed Pandas dataframe and run the TA-Lib function on it, but that is my least favored option right now.</p>
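<p>(Not part of the original question:) if a loop-free fallback outside Polars is acceptable, the same SMA-seeded EMA recurrence can be computed in one shot with <code>scipy.signal.lfilter</code> — scipy is an extra assumption here, and the function name is illustrative:</p>

```python
import numpy as np
from scipy.signal import lfilter

def ema_sma_seed(values, span):
    """EMA seeded with the SMA of the first `span` values (TA-Lib style)."""
    alpha = 2.0 / (span + 1)
    x = np.asarray(values, dtype=float)
    seed = x[:span].mean()
    # Recurrence y[n] = alpha*x[n] + (1-alpha)*y[n-1] with y[-1] = seed,
    # applied to the tail x[span:]; zi encodes the initial state.
    y, _ = lfilter([alpha], [1.0, -(1.0 - alpha)], x[span:],
                   zi=[(1.0 - alpha) * seed])
    out = np.full(len(x), np.nan)
    out[span - 1] = seed
    out[span:] = y
    return out
```

<p>This reproduces the seed-then-recurse behaviour of the custom function without building a Python list.</p>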
|
<python><dataframe><python-polars>
|
2024-04-20 18:26:30
| 1
| 631
|
HapiDaze
|
78,359,182
| 5,287,011
|
Time series decomposition
|
<p>I have a time series that I want to decompose.
Dataset (train - dataframe) example (stock price):</p>
<pre class="lang-none prettyprint-override"><code> Date Close
7389 2014-12-24 104.589996
7390 2014-12-26 105.059998
7391 2014-12-29 105.330002
7392 2014-12-30 105.360001
7393 2014-12-31 104.5700
</code></pre>
<p>Here is my code:</p>
<pre><code>train_dec = copy.deepcopy(train)
train_dec.index = pd.to_datetime(train_dec['Date'])
train_dec.index.freq = 'D'
# Transform DataFrame into a Series
train_series = train_dec['Close']
train_decomposition = seasonal_decompose(train_series, model='additive')
train_trend = train_decomposition.trend
train_seasonal = train_decomposition.seasonal
train_residual = train_decomposition.resid
</code></pre>
<p>I tried without converting into Series and with it. Tried set up frequency to 'D'.</p>
<p>I keep getting errors such as:</p>
<blockquote>
<p>ValueError: Inferred frequency None from passed values does not conform to passed frequency D</p>
</blockquote>
<p>or</p>
<blockquote>
<p>ValueError: You must specify a period or x must be a pandas object with a PeriodIndex or a DatetimeIndex with a freq not set to None</p>
</blockquote>
<p>when I do not set frequency.</p>
<p>Maybe it is because the data have gaps (weekends) when there is no data point (stock price). Should I convert it to a weekly format? But how can I do this if there are gaps (e.g. if I have removed outliers)?</p>
<p>It must be something trivial but I can not see the solution.</p>
<p>Your help is greatly appreciated!</p>
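<p>(Not part of the original question:) one common workaround is to reindex onto a business-day frequency and interpolate the gaps before decomposing, or simply pass an explicit <code>period=</code> so no index frequency is required. A pandas-only sketch of the reindexing step, using made-up values:</p>

```python
import pandas as pd

s = pd.Series(
    [104.59, 105.06, 105.33, 105.36, 104.57],
    index=pd.to_datetime(['2014-12-24', '2014-12-26', '2014-12-29',
                          '2014-12-30', '2014-12-31']),
)

# 'B' = business-day frequency; dates absent from the data become NaN,
# which interpolate() then fills (linear by default)
s = s.asfreq('B').interpolate()

# seasonal_decompose(s, model='additive', period=5)  # 5 trading days/week
```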
|
<python><pandas><time-series><decomposition>
|
2024-04-20 17:46:10
| 1
| 3,209
|
Toly
|
78,358,608
| 8,026,780
|
how to log request id in django middleware?
|
<p>I wrote some code as below, but it raises an exception.</p>
<p>here is request_id_middleware.py</p>
<pre class="lang-py prettyprint-override"><code>import uuid
from django.middleware.common import CommonMiddleware
from taiji.logger import logger
class RequestIDMiddleware(CommonMiddleware):
# pass
def process_request(self, request):
request.META['request_id'] = str(uuid.uuid4())
logger.info(f"start request id: {request.META['request_id']}")
return request
def process_response(self, request, response):
if request.META.get('request_id') is None:
response.headers['X-REQUEST-ID'] = request.META['request_id']
logger.info(f"finish request id: {response.headers['X-REQUEST-ID']}")
return response
</code></pre>
<p>logger.py</p>
<pre class="lang-py prettyprint-override"><code>import logging
def set_logger(name):
logger=logging.getLogger(name)
handler=logging.StreamHandler()
handler.setLevel(logging.DEBUG)
fmt=logging.Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(fmt)
logger.addHandler(handler)
return logger
logger=set_logger('gg')
</code></pre>
<p>views</p>
<pre class="lang-py prettyprint-override"><code>def index(request: HttpRequest):
logger.info("hello world")
return JsonResponse("hello world")
</code></pre>
<p>but it tell me</p>
<pre><code>Traceback (most recent call last):
File "E:\miniconda\envs\py312\Lib\wsgiref\handlers.py", line 137, in run
self.result = application(self.environ, self.start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\miniconda\envs\py312\Lib\site-packages\django\contrib\staticfiles\handlers.py", line 80, in __call__
return self.application(environ, start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\miniconda\envs\py312\Lib\site-packages\django\core\handlers\wsgi.py", line 124, in __call__
response = self.get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\miniconda\envs\py312\Lib\site-packages\django\core\handlers\base.py", line 141, in get_response
response._resource_closers.append(request.close)
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'WSGIRequest' object has no attribute '_resource_closers'
[20/Apr/2024 22:32:59] "GET /index/ HTTP/1.1" 500 59
</code></pre>
<p>Is this the right way to implement a middleware?
Why does it complain about a WSGIRequest? It should be operating on a response-like object there.
Any help will be appreciated.</p>
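<p>(Not the original code; a sketch of the new-style callable middleware that Django's docs recommend.) Note that <code>process_request</code> must return <code>None</code> or an <code>HttpResponse</code>, never the request itself — returning the request is what makes Django treat a <code>WSGIRequest</code> as the response and fail on <code>_resource_closers</code>:</p>

```python
import uuid

class RequestIDMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Tag the request before the view runs
        request.META['request_id'] = str(uuid.uuid4())
        response = self.get_response(request)
        # Echo the id back on the response headers
        response['X-REQUEST-ID'] = request.META['request_id']
        return response
```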
|
<python><django>
|
2024-04-20 14:36:57
| 1
| 453
|
Cherrymelon
|
78,358,561
| 736,662
|
Correlating response into another function with another function in Python
|
<p>I have a working function that obtains a value that is not defined elsewhere in my script:</p>
<pre><code>def get_requests(t, endpoint):
response = requests.get(f'{base_url}{endpoint}',
headers={"Authorization": f'Bearer {t}', "Content-Type": 'application/json'})
try:
corr_value = response.json()[0]["powerPlant"]["id"]
except KeyError:
print("Unable to get powerPlant id")
print("Powerplant ID: ", corr_value)
return response
</code></pre>
<p>Usage of the function is a pytest:</p>
<pre><code>def test_get_powerplants():
# Act:
response = get_requests(token, '/powerplant/all')
# Assertion:
assert response.status_code == 200 # Validation of status code
print("Response: ", response.text)
</code></pre>
<p>I want to use the variable "corr_value" from my function "get_requests" in another test:</p>
<pre><code>def test_bids_generate():
# Act:
response = post_requests(token, '/bids/generate', setpayload_bid_generate())
# Assertion:
assert response.status_code == 200 # Validation of status code
print("Response: ", response.text)
</code></pre>
<p>Where setpayload_bid_generate() should get the corr_value in as a parameter and put it into the request body like this:</p>
<pre><code>def setpayload_bid_generate():
    myjson = {"powerPlantIds": {corr_value}}
    return myjson
</code></pre>
<p>However, the function setpayload_bid_generate() does not recognize corr_value in the above function.</p>
<p>I guess this has to do with scope. Should corr_value be defined outside all functions as a global value to be able to refer to it in different functions?</p>
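<p>(A sketch, not the original code:) rather than a global, the usual pattern is to return the value from the parsing step and pass it along explicitly, so <code>corr_value</code> travels as a plain parameter. Also note that <code>{corr_value}</code> is a Python <em>set</em> literal, which is not JSON-serializable — a list is likely what's intended:</p>

```python
def get_corr_value(response_json):
    # Parsing split out so it can be reused and tested independently
    return response_json[0]["powerPlant"]["id"]

def setpayload_bid_generate(corr_value):
    # A list, not a set: sets cannot be serialized to JSON
    return {"powerPlantIds": [corr_value]}

payload = setpayload_bid_generate(get_corr_value([{"powerPlant": {"id": 42}}]))
```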
|
<python>
|
2024-04-20 14:24:37
| 1
| 1,003
|
Magnus Jensen
|
78,358,388
| 11,092,636
|
Worksheet.row_dimensions type hinting does not work properly?
|
<p>Worksheet's <code>row_dimensions</code> raises a Warning when I put an integer in it (for the index of the row to change the height of) although it works (height is changed and script runs correctly).</p>
<p>MRE:</p>
<pre class="lang-py prettyprint-override"><code>import openpyxl
from openpyxl.worksheet.worksheet import Worksheet
# Load an existing workbook or create a new one
wb = openpyxl.load_workbook('your_workbook.xlsx')
ws: Worksheet = wb.active
# Set the height of row 1 to 25
ws.row_dimensions[1].height = 25 <-- problem
# Save the changes back to the workbook
wb.save('your_workbook.xlsx')
</code></pre>
<p>I'm not allowed such warnings, nor am I allowed to add #noqa.</p>
<p>I'm using <code>Python 3.11.1</code> and the latest version of openpyxl (<code>3.1.2</code>).</p>
<p><a href="https://i.sstatic.net/nNj7i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nNj7i.png" alt="enter image description here" /></a></p>
|
<python><openpyxl><python-typing>
|
2024-04-20 13:29:53
| 1
| 720
|
FluidMechanics Potential Flows
|
78,358,268
| 774,575
|
How to remove "PerformanceWarning: indexing past lexsort depth may impact performance"?
|
<p>I've a dataframe with a non-unique MultiIndex:</p>
<pre><code> A B
L1 L2
7.0 7.0 -0.4 -0.1
8.0 5.0 -2.1 1.6
5.0 8.0 -1.8 -0.8
7.0 7.0 0.5 -1.2
NaN -1.1 -0.9
5.0 8.0 0.6 2.3
</code></pre>
<p>I want to select some rows using a tuple of values:</p>
<pre><code>data = df.loc[(7, 7), :]
</code></pre>
<p>With no surprise a warning is triggered:</p>
<pre><code>PerformanceWarning: indexing past lexsort depth may impact performance.
</code></pre>
<p>I'm trying to understand what in the current index causes this warning. I've read many answers here; some are related to old versions of pandas, others helped. From what I've read the warning is caused by two properties:</p>
<ul>
<li>The index entries are not unique and</li>
<li>The index entries are not sorted.</li>
</ul>
<p>So I'm processing the dataframe index with this function designed from the answers found on this stack:</p>
<pre><code>def adjust_index(df):
df = df.sort_index() # sort index
levels = list(range(len(df.index.levels)))
df_idx = df.groupby(level=levels).cumcount() # unique index
df_adj = df.set_index(df_idx, append=True) # change index
df_adj = df_adj.reset_index(level=-1, drop=True) # drop sorting level
return df_adj
</code></pre>
<p>This doesn't remove the warning. Can you explain what is wrong, useless or missing?</p>
<p>The rest of the code:</p>
<pre><code>import pandas as pd
from numpy import nan, random as npr
npr.seed(2)
# Dataframe with unsorted MultiIndex
def create_df():
n_rows = 6
data = npr.randn(n_rows, 2).round(1)
choices = [8, 7, 5, 7, 8, nan]
columns = ['A', 'B']
levels = ['L1', 'L2']
tuples = list(zip(npr.choice(choices, n_rows), npr.choice(choices, n_rows)))
index = pd.MultiIndex.from_tuples(tuples, names=levels)
df = pd.DataFrame(data, index=index, columns=columns)
return df
df = create_df()
df = adjust_index(df)
data = df.loc[(7, 7), :] # <-- triggers warning
</code></pre>
|
<python><pandas><multi-index>
|
2024-04-20 12:49:22
| 1
| 7,768
|
mins
|
78,358,188
| 11,114,048
|
Why do PIL and plt.imshow display different images when using the same tensor in Python?
|
<p>I am trying to convert a PyTorch tensor to a PIL image and display it using both matplotlib.pyplot and PIL. However, I am noticing that the images displayed by plt.imshow and PIL's display() function look different from each other.</p>
<p>imshow</p>
<p><a href="https://i.sstatic.net/X0qrB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X0qrB.png" alt="enter image description here" /></a></p>
<p>PIL</p>
<p><a href="https://i.sstatic.net/7JEZu.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7JEZu.jpg" alt="enter image description here" /></a></p>
<p>Below is the function I am using to perform the conversion and display the images:</p>
<pre><code>import matplotlib.pyplot as plt
from PIL import Image
def tensor_to_pil(image_tensor):
print(image_tensor[0].shape)
plt.figure()
plt.imshow(image_tensor[0].cpu().squeeze().numpy(), cmap='gray')
if image_tensor.shape[1] == 1:
pil_image = Image.fromarray(image_tensor[0].cpu().squeeze().numpy(), "L")
else:
# This seems to be an error in the code where 'error' is not defined
# error
pil_image = Image.fromarray(image_tensor[0].permute(1, 2, 0).cpu().numpy())
print("yes")
display(pil_image)
print("yes")
</code></pre>
<p>Tensor dimensions: the tensor I am using is a grayscale image (1 color channel); the image_tensor variable has dimension torch.Size([1, 1, 640, 640]).</p>
<p>What could be causing the difference in how plt.imshow and PIL display the same image tensor? Is there any additional processing I need to do to align their outputs?</p>
|
<python><pytorch><python-imaging-library>
|
2024-04-20 12:23:38
| 1
| 322
|
Hjin
|
78,358,169
| 11,274,362
|
Best Algorithm for arrange members of set with the least proximity between the members of different sets in Python
|
<p>We have different sets. How can we arrange their members in a sequence so that members of the same set are spread as far apart as possible, and, as far as possible, no two members of the same set are adjacent?
For example:</p>
<pre><code>s1 = [m1, m1, m1, m1, m1, m1, m1, m1, m1, m1]
s2 = [m2, m2, m2, m2, m2]
s3 = [m3, m3, m3]
</code></pre>
<p>And the result must be something like this:</p>
<pre><code>r1 = [m1,m2, m1,m2, m1, m2, m1,m2, m1,m2, m1,m3, m1,m3, m1,m3, m1, m1]
</code></pre>
<p><strong>Notice: In this case, it is possible to split s1 into n sets to avoid this problem, and then arrange the members. And n must be the minimum number.</strong></p>
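<p>(Not from the original question:) a common greedy sketch for this kind of problem pops the two currently largest sets from a max-heap each round, so identical members only end up adjacent when one set outnumbers all the others combined:</p>

```python
import heapq

def interleave(counts):
    """counts: mapping label -> multiplicity. Returns one arrangement."""
    heap = [(-n, label) for label, n in counts.items()]
    heapq.heapify(heap)
    out = []
    while heap:
        n1, l1 = heapq.heappop(heap)
        if not heap:
            # Only one label left: its remaining copies must sit together
            out.extend([l1] * -n1)
            break
        n2, l2 = heapq.heappop(heap)
        out.extend([l1, l2])
        # Counts are stored negated, so "+ 1" consumes one copy
        if n1 + 1 < 0:
            heapq.heappush(heap, (n1 + 1, l1))
        if n2 + 1 < 0:
            heapq.heappush(heap, (n2 + 1, l2))
    return out

result = interleave({'m1': 10, 'm2': 5, 'm3': 3})
```

<p>For counts (10, 5, 3) the minimum unavoidable number of same-set adjacencies is max(0, 10 − (5 + 3) − 1) = 1, which this greedy arrangement achieves.</p>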
|
<python><algorithm><sorting><math>
|
2024-04-20 12:18:17
| 1
| 977
|
rahnama7m
|
78,358,131
| 5,703,539
|
Cannot migrate Flask db : No such command 'db'
|
<p>I'm new to Flask.
I've been setting up a first version of my Flask app, <code>test-app</code>, with a MySQL database. Now I want to alter and add some columns to some of my database tables without erasing the existing data. I heard about Flask-Migrate with Alembic as a solution to that problem. I've tried setting everything up but got this error:</p>
<pre><code>Error: Failed to find Flask application or factory in module 'test-app.app'. Use 'test-app.app:name' to specify one.
Usage: flask [OPTIONS] COMMAND [ARGS]...
Try 'flask --help' for help.
Error: No such command 'db'.
</code></pre>
<p>when running this command within my .env : <code>flask db migrate -m "initial migration"</code></p>
<p>I don't understand, here is the skeleton of my Flask app</p>
<pre><code>test-app/
.venv/
app/
models/
__init__.py
domain.py
subdomain.py
app.py
</code></pre>
<p>Some files :</p>
<p>models/<code>__init__.py</code></p>
<pre><code>from os.path import dirname, basename, isfile, join
from sqlalchemy.pool import QueuePool
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy(engine_options={"pool_size": 10, 'pool_recycle': 280, "poolclass":QueuePool, "pool_pre_ping":True})
import glob
modules = glob.glob(join(dirname(__file__), "*.py"))
__all__ = [ basename(f)[:-3] for f in modules if isfile(f) and not f.endswith('__init__.py')]
</code></pre>
<p>models/domain.py (very similar to subdomain.py)</p>
<pre><code>from dataclasses import dataclass
from . import db
@dataclass
class Domain(db.Model):
__tablename__ = 'domain'
__table_args__ = {'extend_existing': True}
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(255), nullable=False)
</code></pre>
<p>app.py</p>
<pre><code>from flask import Flask, request, jsonify, json
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.event import listens_for
from flaskext.mysql import MySQL
from flask_cors import CORS
from dataclasses import dataclass
from sqlalchemy import text
from urllib.parse import quote
from flask_migrate import Migrate
app = Flask(__name__)
CORS(app, origins=["http://localhost:3000", "http://localhost:3000"])
from app.models import db
mysql =MySQL()
@dataclass
class User(db.Model):
__tablename__ = 'user'
id = db.Column(db.Integer, primary_key=True)
firstname = db.Column(db.String(46), nullable=False)#1
lastname = db.Column(db.String(46), nullable=False)#1
def __init__(self, firstname, lastname):
self.firstname = firstname
self.lastname = lastname
def as_dict(self):
excluded_fields = ['id']
return {field.name:getattr(self, field.name) for field in self.__table__.c if field.name not in excluded_fields}
@dataclass
class User(db.Model):
__tablename__ = 'user'
__table_args__ = {'extend_existing': True}
id = db.Column(db.Integer, primary_key=True)
firstname = db.Column(db.String(46), nullable=False)#1
lastname = db.Column(db.String(46), nullable=False)#1
def __init__(self, firstname, lastname):
self.firstname = firstname
self.lastname = lastname
def as_dict(self):
excluded_fields = ['id']
return {field.name:getattr(self, field.name) for field in self.__table__.c if field.name not in excluded_fields}
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://username:pwd@127.0.0.1/test'
db.init_app(app)
with app.app_context():
from app.models import *
migrate = Migrate(app, db)
db.create_all()
@app.route('/users', methods=['GET'])
def get_user():
users = User.query.all()
return jsonify(users)
@app.route('/user/<firstname>', methods=['GET'])
def user_byfirstname(firstname):
user = User.query.filter_by(firstname = firstname).first()
return jsonify(user.as_dict())
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>requirements.txt</p>
<pre><code>aiohttp==3.8.6
aiohttp-retry==2.8.3
aiosignal==1.3.1
alembic==1.13.1
async-timeout==4.0.3
attrs==23.1.0
blinker==1.6.3
certifi==2023.7.22
cffi==1.16.0
charset-normalizer==3.3.1
click==8.1.7
cryptography==41.0.7
distlib==0.3.7
filelock==3.12.4
Flask==2.3.0
Flask-Cors==4.0.0
flask-marshmallow==0.14.0
Flask-Migrate==4.0.7
Flask-MySQL==1.5.2
Flask-MySQLdb==2.0.0
Flask-Script==2.0.6
Flask-SQLAlchemy==3.1.1
flask-talisman==1.1.0
frozenlist==1.4.0
idna==3.4
itsdangerous==2.1.2
Jinja2==3.1.2
Mako==1.3.3
MarkupSafe==2.1.3
marshmallow-sqlalchemy==0.29.0
multidict==6.0.4
mysqlclient==2.2.0
packaging==23.2
platformdirs==3.11.0
psycopg2-binary==2.9.9
pycparser==2.21
PyJWT==2.8.0
PyMySQL==1.1.0
pyOpenSSL==23.3.0
python-dotenv==1.0.0
requests==2.31.0
six==1.16.0
SQLAlchemy==2.0.22
twilio==8.10.0
typing_extensions==4.8.0
urllib3==2.0.7
virtualenv==20.24.5
waitress==2.1.2
Werkzeug==3.0.0
WSGIserver==1.3
yarl==1.9.2
</code></pre>
<p>Please help!</p>
|
<python><flask><flask-sqlalchemy><flask-migrate>
|
2024-04-20 12:08:13
| 2
| 1,665
|
kabrice
|
78,358,124
| 3,623,537
|
get true `__dict__` if `__dict__` was overridden
|
<p>Is it possible to get object's true <code>__dict__</code> if <code>__dict__</code> was overridden? Are there simpler solutions than the ones below?</p>
<p>I came across this example where <code>__dict__</code> was overridden and got curious. I thought Python uses <code>__dict__</code> to recognize an object's attributes, but it turns out it can be overridden and attributes will still work. So, the original <code>__dict__</code> is still out there.</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
__dict__ = {}
obj = MyClass()
obj.hacky = 5
# 5, {}
print(obj.hacky, obj.__dict__)
</code></pre>
|
<python><ctypes><magic-methods>
|
2024-04-20 12:06:36
| 1
| 469
|
FamousSnake
|
78,358,049
| 1,580,469
|
VS Code: How to debug a Python method in debug console
|
<p>I have started a Python script in VS Code's debugger and have hit a breakpoint. I can examine the state in Debug Console.</p>
<p>Now I would like to call a method with parameters from within the Debug Console and follow (debug) it's execution.</p>
<p><a href="https://i.sstatic.net/q8xLm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q8xLm.png" alt="Screenshot showing Debug Console with a function call typed in, and in the main pane, a breakpoint is set inside that function." /></a></p>
<p>I've set a breakpoint in the method I call, but it is ignored: Debug Console immediately returns the result from the method, instead of interrupting at the breakpoint. Seems like debugging in Debug Console is not supported.</p>
<p>Any idea how to debug from Debug Console?</p>
|
<python><visual-studio-code><debugging>
|
2024-04-20 11:42:09
| 3
| 396
|
Christian
|
78,357,988
| 661,716
|
numba jitclass with record type of string
|
<p>The v3 variable is a string value. I could not run the code below, which gives an error.</p>
<pre><code>import numpy as np
import pandas as pd
from numba.experimental import jitclass
from numba import types
import os
os.environ['NUMBA_VERBOSE'] = '1'
# ----- BEGINNING OF THE MODIFIED PART ----- #
recordType = types.Record([
('v', {'type': types.int64, 'offset': 0, 'alignment': None, 'title': None}),
('v2', {'type': types.float64, 'offset': 8, 'alignment': None, 'title': None}),
('v3', {'type': types.bytes, 'offset': 16, 'alignment': None, 'title': None})
], 32, False)
spec = [
('data', types.Array(recordType, 1, 'C', False))
]
# ----- END OF THE MODIFIED PART ----- #
@jitclass(spec)
class Test:
def __init__(self, data):
self.data = data
def loop(self):
v = self.data['v']
v2 = self.data['v2']
v3 = self.data['v3']
print("Inside loop:")
print("v:", v)
print("v2:", v2)
print("v3:", v3)
# Create a dictionary with the data
data = {'v': [1, 2, 3], 'v2': [1.0, 2.0, 3.0], 'v3': ['a', 'b', 'c']}
# Create the DataFrame
df = pd.DataFrame(data)
# Define the structured array dtype
dtype = np.dtype([
('v', np.int64),
('v2', np.float64),
('v3', 'S10') # Byte string with maximum length of 10 characters
])
print(df.to_records(index=False))
# Create the structured array
data_array = np.array(list(df.to_records(index=False)), dtype=dtype)
print("Original data array:")
print(data_array)
# Create an instance of the Test class
test = Test(data_array)
test.loop()
</code></pre>
<p>Errors:</p>
<pre><code>/home/totaljj/miniconda3/bin/conda run -n bt --no-capture-output python /home/totaljj/bt_lite_strategies/test/test_units/test_numba_obj.py
Traceback (most recent call last):
File "/home/totaljj/bt_lite_strategies/test/test_units/test_numba_obj.py", line 13, in <module>
('v3', {'type': types.bytes, 'offset': 16, 'alignment': None, 'title': None})
AttributeError: module 'numba.core.types' has no attribute 'bytes'
ERROR conda.cli.main_run:execute(124): `conda run python /home/totaljj/bt_lite_strategies/test/test_units/test_numba_obj.py` failed. (See above for error)
Process finished with exit code 1,
</code></pre>
|
<python><numba><jit>
|
2024-04-20 11:16:53
| 1
| 1,226
|
tompal18
|
78,357,908
| 17,973,259
|
Django Object id's passed to the template are not the same inside the view function
|
<p>When I am submitting the form, the question id's that are inside the view function do not match with the question id's from the request.POST.</p>
<p>When submitting the form, the print statements from the view function below output:</p>
<blockquote>
<p><QueryDict: {'csrfmiddlewaretoken': ['token here'], 'question_15':
['option1'], 'question_17': ['option1']}></p>
<p>question_4 question_5</p>
</blockquote>
<p>Inside the request.POST the id's are 15 and 17, and inside the view 4, 5.
Every time I submit the form, the IDs do not match.</p>
<p>View:</p>
<pre><code>@login_required
def quiz_view(request):
questions = QuizQuestion.objects.order_by('?')[:2]
user = request.user
if request.method == 'POST':
print(request.POST)
for question in questions:
input_name = f"question_{question.id}"
print(input_name)
option_selected = request.POST.get(input_name)
if option_selected:
# Check if the user has already responded to this question
existing_response = QuizUserResponse.objects.filter(user=user, question=question).first()
if existing_response:
# Update existing response
existing_response.response_text = getattr(question, option_selected)
existing_response.save()
else:
# Create new response
QuizUserResponse.objects.create(user=user, question=question, response_text=getattr(question, option_selected))
return redirect('users:gaming_quiz')
else:
context = {
"questions": questions,
}
return render(request, 'general/quiz_template.html', context)
</code></pre>
<p>HTML:</p>
<pre><code><form method="post">
{% csrf_token %}
{% for question in questions %}
<div>
<label class="question">{{ question.question_text }}</label>
<ul class="errors">
{% for error in form.errors %}
<li>{{ error }}</li>
{% endfor %}
</ul>
<div>
<label class="form-check-inline">
<input type="radio" name="question_{{ question.id }}" value="option1"> {{ question.option1 }}
</label>
<label class="form-check-inline">
<input type="radio" name="question_{{ question.id }}" value="option2"> {{ question.option2 }}
</label>
<label class="form-check-inline">
<input type="radio" name="question_{{ question.id }}" value="option3"> {{ question.option3 }}
</label>
<label class="form-check-inline">
<input type="radio" name="question_{{ question.id }}" value="option4"> {{ question.option4 }}
</label>
</div>
</div>
{% endfor %}
<button type="submit">Submit</button>
</form>
</code></pre>
|
<python><html><django>
|
2024-04-20 10:45:15
| 1
| 878
|
Alex
|
78,357,791
| 4,196,578
|
How to apply `--break-system-packages` conditionally only when the system pip/python supports it?
|
<p>I am adapting my pip commands to newer versions of Ubuntu (which support PEP 668). Out of the options, the only one that has worked so far (in my specific use case) is to</p>
<blockquote>
<p>Use --break-system-packages at the end of pip</p>
</blockquote>
<p>as indicated in <a href="https://stackoverflow.com/a/76084249/4196578">this answer</a>. That is, change</p>
<pre><code>sudo pip install xyz
</code></pre>
<p>to</p>
<pre><code>sudo pip install xyz --break-system-packages
</code></pre>
<p>This worked for the newer versions of Ubuntu but causes an error in older versions of Ubuntu (22.04 LTS) that do not recognize the <code>--break-system-packages</code> option. The error message from pip is:</p>
<blockquote>
<p>no such option: --break-system-packages</p>
</blockquote>
<p><code>pip --version</code> shows:</p>
<blockquote>
<p>pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)</p>
</blockquote>
<p><em>How can one add conditions around pip commands so that it uses the <code>--break-system-packages</code> option only when the pip/python version is high enough to recognize it?</em></p>
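<p>(A sketch, not from the original post:) one portable check is to ask pip itself whether it recognizes the option by inspecting its <code>install --help</code> output, and only append the flag when it appears:</p>

```python
import subprocess
import sys

def pip_supports(flag):
    # `pip install --help` lists every option this pip version accepts
    out = subprocess.run([sys.executable, '-m', 'pip', 'install', '--help'],
                         capture_output=True, text=True).stdout
    return flag in out

cmd = [sys.executable, '-m', 'pip', 'install', 'xyz']
if pip_supports('--break-system-packages'):
    cmd.append('--break-system-packages')
# subprocess.run(cmd, check=True)  # run once the command list looks right
```

<p>In a shell script, the equivalent would be piping <code>pip install --help</code> through <code>grep -q break-system-packages</code> before building the command.</p>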
|
<python><pip>
|
2024-04-20 10:01:39
| 1
| 22,728
|
thor
|
78,357,735
| 5,473,482
|
How can I detect the two ellipses on a roulette wheel?
|
<p>I have the image below:</p>
<p><a href="https://i.sstatic.net/33mvLm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/33mvLm.png" alt="roulette" /></a></p>
<p>How can I detect the two ellipses below:</p>
<p><a href="https://i.sstatic.net/WqMoBm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WqMoBm.png" alt="enter image description here" /></a></p>
<p>I want to have a mask only of this enclosed area to later detect the ball.</p>
<p>This is what I have tried but it does not find it correctly:</p>
<pre><code>import cv2
import numpy as np
# Load the image
image = cv2.imread('roulette.png')
# Convert the image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Apply Gaussian blur to reduce noise
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
# Perform edge detection using Canny
edges = cv2.Canny(blurred, 30, 150)
# Find contours in the edge-detected image
contours, _ = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Loop over the contours
for contour in contours:
# Fit an ellipse to the contour
if len(contour) >= 700:
ellipse = cv2.fitEllipse(contour)
# Draw the ellipse on the original image
cv2.ellipse(image, ellipse, (0, 255, 0), 2)
# Show the image with detected ellipses
cv2.imshow("Ellipses", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>How can I solve this issue? Any suggestions or another method? Thanks!</p>
|
<python><opencv><computer-vision><object-detection><semantic-segmentation>
|
2024-04-20 09:41:49
| 1
| 1,047
|
Blind0ne
|
78,357,734
| 14,256,643
|
Django mysql icontains not working for small case
|
<p>When I type <code>Travel</code>, exactly the same as in my database, it works, but when I type <code>travel</code> it does not work, even though I tried <code>icontains</code> to ignore case sensitivity. My code:</p>
<pre><code>similar_titles_query = Q(title__icontains=search_query) & Q(parent_product__isnull=True, domain_name=domain_name,is_published=False)
similar_titles_results = Product.objects.filter(similar_titles_query).order_by('?')[:5]
</code></pre>
|
<python><mysql><django>
|
2024-04-20 09:41:42
| 1
| 1,647
|
boyenec
|
78,357,541
| 926,918
|
Disable pyvis from opening elinks by default
|
<p>I work remotely on a Ubuntu system using a terminal and the <code>vi</code> editor. I want to generate output using <code>pyvis</code>, which I am able to do. However, running the code also has a terrible side effect: it invokes a CLI browser (<code>elinks</code>) that keeps running in the background even if I exit it immediately by pressing <code>q</code>. This becomes noticeable when I work with <code>vi</code> in escape mode, and it keeps showing the menu, from which I cannot exit no matter what I try. Is there a way to stop this behavior? I tried the following:</p>
<pre><code>from pyvis.network import Network
import networkx as nx
import os
g = nx.MultiGraph()
g.add_node(0, label='node 0', title='Node 1', shape='ellipse', size=125)
g.add_node(1, label='node 1', title='Node 2', shape='ellipse', size=125)
g.add_node(2, label='node 2', title='Node 3', shape='ellipse', size=125)
g.add_edge(0, 1, label='e1_1', title='e1_1',arrows="false")
g.add_edge(0, 1, label='e1_2', title='E1_2',arrows="false")
g.add_edge(0, 2, label='e2', title='E2',arrows="false")
nt = Network(directed=True)
nt.from_nx(g)
nt.set_edge_smooth('dynamic')
nt.show('foo.html', notebook=False)
nt.set_options('''
var options = {
"nodes": {
"size": 100
},
"edges": {
"arrowStrikethrough": false,
"color": {
"inherit": false
},
"font": {
"size": 3,
"align": "top"
},
"smooth": dynamic
}
}
''')
nt.show('foo.html', notebook=False)
os.system("xdg-open /dev/null foo.html")
</code></pre>
<p>I run it as follows:</p>
<pre><code>$ python foo.py
</code></pre>
<p>Error on screen:</p>
<pre><code>foo.html
xdg-open: unexpected argument 'foo.html'
Try 'xdg-open --help' for more information.
(base) eweb@genome:~/test$ cannot create temporary directory for the root file system: Permission denied
Warning: program returned non-zero exit code #256
Opening "foo.html" with Chromium Web Browser (text/html)
cannot create temporary directory for the root file system: Permission denied
cannot create temporary directory for the root file system: Permission denied
/usr/bin/xdg-open: 882: firefox: not found
/usr/bin/xdg-open: 882: iceweasel: not found
/usr/bin/xdg-open: 882: seamonkey: not found
/usr/bin/xdg-open: 882: mozilla: not found
/usr/bin/xdg-open: 882: epiphany: not found
/usr/bin/xdg-open: 882: konqueror: not found
cannot create temporary directory for the root file system: Permission denied
cannot create temporary directory for the root file system: Permission denied
/usr/bin/xdg-open: 882: google-chrome: not found
</code></pre>
|
<python><pyvis>
|
2024-04-20 08:34:50
| 1
| 1,196
|
Quiescent
|
78,357,523
| 1,841,839
|
Path to pip broken - pip : The term 'pip' is not recognized as the name of a cmdlet,
|
<p>If I do this from the terminal in PyCharm, just trying to install the package with pip:</p>
<pre><code>(venv) PS C:\Development\Citizenship\video_auto_generator> pip install python-dotenv
</code></pre>
<p>I get</p>
<blockquote>
<p>pip : The term 'pip' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that
the path is correct and try again.
At line:1 char:1</p>
<ul>
<li>pip install python-dotenv</li>
<li>
<pre><code>+ CategoryInfo : ObjectNotFound: (pip:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
</code></pre>
</li>
</ul>
</blockquote>
<p>However if i do</p>
<pre><code>(venv) PS C:\Development\Citizenship\video_auto_generator> venv\Scripts\python.exe -m pip install python-dotenv
</code></pre>
<p>and supply the path to Python in the venv directory, it works.</p>
<blockquote>
<p>Collecting python-dotenv
Using cached python_dotenv-1.0.1-py3-none-any.whl.metadata (23 kB)
Using cached python_dotenv-1.0.1-py3-none-any.whl (19 kB)
Installing collected packages: python-dotenv
Successfully installed python-dotenv-1.0.1</p>
</blockquote>
<p>This works in another project I have, but it's broken in my new project.</p>
<p>Python 3.12. I'm having so many issues since upgrading to 3.12; I can't get a new project created that works out of the box with venv, and it takes forever to debug issues.</p>
<p>This also works</p>
<pre><code>py -m pip install python-dotenv
</code></pre>
<p>Why can't I just type <code>pip install</code> like I used to?</p>
<h1>update</h1>
<pre><code>"C:\Program Files\Python312\python.exe" -m venv venv
.\venv\Scripts\activate
</code></pre>
<p>which gives me an invalid interpreter in PyCharm, and when I try to feed it the path to Python I get:</p>
<blockquote>
<p>Error: Python packaging tool 'setuptools' not found</p>
</blockquote>
<p>If I then try to create a new venv via PyCharm, I get:</p>
<blockquote>
<p>The term 'pip' is not recognized as the name of a cmdlet</p>
</blockquote>
<p>I suspect this is something silly, but I can't get anything to work now and it's a big problem.</p>
<p>even</p>
<pre><code>py -m pip list
</code></pre>
<p>fails on some of my projects, saying it can't find pip.</p>
<p><a href="https://i.sstatic.net/LqVvz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LqVvz.png" alt="enter image description here" /></a></p>
<h1>update 2</h1>
<p>If i create a new project it works fine</p>
<pre><code>"C:\Program Files\Python312\python.exe" -m venv venv
.\venv\Scripts\activate
echo python-dotenv > requirements.txt
type nul > .env
type nul > app.py
type nul > README.MD
pip install -r requirements.txt
pip list
</code></pre>
<p>However, if I do this in an existing project:</p>
<pre><code>"C:\Program Files\Python312\python.exe" -m venv venv
.\venv\Scripts\activate
</code></pre>
<p>and try to use that in PyCharm, I get the following. How can it be invalid?</p>
<p><a href="https://i.sstatic.net/5PIUZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5PIUZ.png" alt="enter image description here" /></a></p>
|
<python><pip><pycharm>
|
2024-04-20 08:29:31
| 1
| 118,263
|
Linda Lawton - DaImTo
|
78,357,512
| 1,815,710
|
Setting AUTH_USER_MODEL for user app within a v1 folder
|
<p>My Django project structure looks like this:</p>
<pre><code>/api
manage.py
poetry.lock
/api
/v1
__init__.py
/users
__init__.py
models.py
...
/properties
__init__.py
...
</code></pre>
<p>I'm trying to set up a custom User model by setting <code>AUTH_USER_MODEL</code> within <code>settings.py</code>.</p>
<p>However, I'm getting this error</p>
<pre><code>django.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'user.User' that has not been installed
</code></pre>
<p>I've tried values</p>
<pre><code>AUTH_USER_MODEL = "api.v1.user.User"
AUTH_USER_MODEL = "v1.user.User"
AUTH_USER_MODEL = "user.User"
</code></pre>
<p>But none of them fixed the issue</p>
<p>I also have these <code>INSTALLED_APPS</code></p>
<pre><code>INSTALLED_APPS = [
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
"rest_framework",
"api.v1.users",
"api.v1.reservations",
"api.v1.properties",
]
</code></pre>
<p>and my User model currently looks like this</p>
<pre><code>class User(AbstractUser):
pass
</code></pre>
|
<python><django><django-rest-framework>
|
2024-04-20 08:26:53
| 1
| 16,539
|
Liondancer
|
78,357,478
| 14,923,149
|
How to create a combined heatmap in Python using matplotlib with normalized data and original t-test values?
|
<p>I have a DataFrame containing multiple features along with their associated t-test results and p-values. I aim to generate a combined heatmap in Python using Seaborn. In this heatmap, one section should display the features with normalized data using z-scores (to ensure visibility of both high and low values), while the other section should present the original t-test values and p-values.</p>
<p>I intend to create a single heatmap with distinct color schemes for each section to clearly differentiate between them. However, my attempts to plot two separate heatmaps and combine them have resulted in separate plots rather than a unified heatmap.</p>
<p>Could someone guide me on how to create a single combined heatmap where both sections appear attached?</p>
<p>Here's the code I've attempted so far:</p>
<pre><code>import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import gridspec

# Example DataFrame
data = {
'Feature1': np.random.randn(10),
'Feature2': np.random.randn(10),
'Feature3': np.random.randn(10),
't test': np.random.randn(10),
'p value': np.random.rand(10)
}
df = pd.DataFrame(data)
# Drop the last two columns
df_heatmap = df.iloc[:, :-2]
# Calculate z-scores for the DataFrame
df_heatmap_zscore = (df_heatmap - df_heatmap.mean()) / df_heatmap.std()
# Set up the layout
fig = plt.figure(figsize=(12, 8))
gs = gridspec.GridSpec(1, 4, width_ratios=[1, 1, 0.05, 0.05]) # 4 columns: 2 for heatmaps, 2 for colorbars
# Heatmap for the DataFrame excluding t-test and p-value columns
ax1 = plt.subplot(gs[0])
sns.heatmap(df_heatmap_zscore, cmap='coolwarm', annot=True, cbar=False)
plt.title('Heatmap without t-test and p-value')
# Heatmap for t-test p-values
ax2 = plt.subplot(gs[1])
sns.heatmap(df[['t test', 'p value']], cmap='viridis', annot=True, fmt=".4f", cbar=False, ax=ax2)
plt.title('Heatmap for t-test p-values')
# Create a single colorbar for the z-score
cbar_ax1 = plt.subplot(gs[2])
cbar1 = plt.colorbar(ax1.collections[0], cax=cbar_ax1, orientation='vertical')
cbar1.set_label('Z-score')
# Create a single colorbar for the t-test p-values
cbar_ax2 = plt.subplot(gs[3])
cbar2 = plt.colorbar(ax2.collections[0], cax=cbar_ax2, orientation='vertical')
cbar2.set_label('p-value')
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/bJYJP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bJYJP.png" alt="enter image description here" /></a></p>
<p>Is there a way to combine these heatmaps into a single plot, so they appear attached and have different color pattern and legend bar?</p>
<p><a href="https://i.sstatic.net/mC6mt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mC6mt.png" alt="enter image description here" /></a></p>
|
<python><matplotlib>
|
2024-04-20 08:17:45
| 1
| 504
|
Umar
|
78,357,461
| 3,405,291
|
Mount Google Drive on Google Colab with write access
|
<p>How can I mount my Google Drive on Google Colab with <strong>write</strong> access?</p>
<p>That's because I'm receiving an error when my AI model tries to update a file on Google Drive:</p>
<pre><code>Traceback (most recent call last):
File "/content/Deep3DFaceRecon_pytorch/test.py", line 72, in <module>
opt = TestOptions().parse() # get test options
^^^^^^^^^^^^^^^^^^^^^
File "/content/Deep3DFaceRecon_pytorch/options/base_options.py", line 167, in parse
self.print_options(opt)
File "/content/Deep3DFaceRecon_pytorch/options/base_options.py", line 115, in print_options
with open(file_name, 'wt') as opt_file:
^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 30] Read-only file system: './checkpoints/facerecon_20230425/test_opt.txt'
</code></pre>
<p>By the way, I have already copied my folders from Google Drive to Google Colab:</p>
<pre><code>!git clone https://github.com/sicxu/Deep3DFaceRecon_pytorch.git
%cd Deep3DFaceRecon_pytorch
...
!mkdir checkpoints
!cp -r ../drive/MyDrive/Deep3D/facerecon_20230425 checkpoints/
...
</code></pre>
<h1>UPDATE</h1>
<p>On Google Colab, just trying to create a file inside the mounted Google Drive throws an error:</p>
<pre class="lang-bash prettyprint-override"><code>!touch ./checkpoints/facerecon_20230425/test_opt.txt
</code></pre>
<p>Log:</p>
<pre><code>touch: cannot touch './checkpoints/facerecon_20230425/test_opt.txt': Read-only file system
</code></pre>
|
<python><google-drive-api><google-colaboratory>
|
2024-04-20 08:11:45
| 1
| 8,185
|
Megidd
|
78,357,424
| 661,716
|
numba jitclass, dictionary of list variables
|
<p>I need to have a dictionary of list variables for a jitclass. The following gives me an error.</p>
<p>I tried 10 different ways, but none worked. All dictionary values are lists of the same length.</p>
<pre><code>import numpy as np
import pandas as pd
from numba.experimental import jitclass
from numba import types
import os
os.environ['NUMBA_VERBOSE'] = '1'
spec = [
('data', types.Array(types.Record, 1, layout='C'))
]
@jitclass(spec)
class Test:
def __init__(self, data):
self.data = data
def loop(self):
v = self.data['v']
v2 = self.data['v2']
print("Inside loop:")
print("v:", v)
print("v2:", v2)
data = [[1, 2, 3], [1.0, 2.0, 3.0]]
# Define the structured array dtype
dtype = np.dtype([
('v', np.int64),
('v2', np.float64)
])
# Create the structured array
data_array = np.array(data, dtype=dtype)
print("Original data array:")
print(data_array)
# Create an instance of the Test class
test = Test(data_array)
test.loop()
</code></pre>
<p>Error:</p>
<pre><code>/home/totaljj/miniconda3/bin/conda run -n bt --no-capture-output python /home/totaljj/bt_lite_strategies/test/test_units/test_numba_obj_Ask.py
Original data array:
[[(1, 1.) (2, 2.) (3, 3.)]
[(1, 1.) (2, 2.) (3, 3.)]]
Traceback (most recent call last):
File "/home/totaljj/bt_lite_strategies/test/test_units/test_numba_obj_Ask.py", line 40, in <module>
test = Test(data_array)
File "/home/totaljj/miniconda3/envs/bt/lib/python3.9/site-packages/numba/experimental/jitclass/base.py", line 124, in __call__
return cls._ctor(*bind.args[1:], **bind.kwargs)
File "/home/totaljj/miniconda3/envs/bt/lib/python3.9/site-packages/numba/core/dispatcher.py", line 468, in _compile_for_args
error_rewrite(e, 'typing')
File "/home/totaljj/miniconda3/envs/bt/lib/python3.9/site-packages/numba/core/dispatcher.py", line 409, in error_rewrite
raise e.with_traceback(None)
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Internal error at <numba.core.typeinfer.CallConstraint object at 0x7fa912b30a60>.
"Failed in nopython mode pipeline (step: native lowering)\n<class 'numba.core.types.abstract._TypeMetaclass'>"
During: resolving callee type: jitclass.Test#7fa96486ba60<data:array(<class 'numba.core.types.npytypes.Record'>, 1d, C)>
During: typing of call at <string> (3)
Enable logging at debug level for details.
File "<string>", line 3:
<source missing, REPL/exec in use?>
ERROR conda.cli.main_run:execute(124): `conda run python /home/totaljj/bt_lite_strategies/test/test_units/test_numba_obj_Ask.py` failed. (See above for error)
Process finished with exit code 1
</code></pre>
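Separately from the jitclass typing error, the structured array itself is not built as intended: the print-out above shows a 2x3 array of duplicated <code>(v, v2)</code> pairs, because each scalar in the two row-lists was broadcast into every field of the record dtype. A sketch of building one record per <code>(v, v2)</code> pair instead (plain NumPy, independent of numba):

```python
import numpy as np

# One record per (v, v2) pair: zip the two lists into tuples, then
# construct a 1-D structured array of 3 records.
dtype = np.dtype([("v", np.int64), ("v2", np.float64)])
data_array = np.array(list(zip([1, 2, 3], [1.0, 2.0, 3.0])), dtype=dtype)

print(data_array["v"])   # [1 2 3]
print(data_array["v2"])  # [1. 2. 3.]
```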
|
<python><numba><jit>
|
2024-04-20 07:53:34
| 1
| 1,226
|
tompal18
|
78,357,326
| 7,743,427
|
Can any usage of the assignment operator technically be replaced with the walrus operator surrounded by parentheses?
|
<p>Say we have this assignment statement:</p>
<pre><code>a = 5
</code></pre>
<p>Even though it'd obviously be ugly to do this for no reason, we could technically accomplish the same thing with:</p>
<pre><code>(a := 5)
</code></pre>
<p>This still assigns 5 to <code>a</code>, and the value that the walrus operator returns just isn't used.</p>
<p>Are there any cases where this replacement wouldn't work?</p>
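For comparison, Python restricts the walrus target to a single plain name, so some assignment statements have no direct parenthesized-walrus counterpart. The snippet below probes a few forms with <code>ast.parse</code> (the examples are illustrative, not exhaustive):

```python
import ast

def parses(src):
    """Return True if src is syntactically valid Python."""
    try:
        ast.parse(src)
        return True
    except SyntaxError:
        return False

print(parses("(a := 5)"))        # simple name target: allowed
print(parses("(a.b := 5)"))      # attribute target: rejected
print(parses("(a[0] := 5)"))     # subscript target: rejected
print(parses("(a, b := 1, 2)"))  # parses, but binds b to 1 inside a tuple -- not unpacking
```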
|
<python><python-3.x><syntax><language-design><python-assignment-expression>
|
2024-04-20 07:14:48
| 1
| 561
|
Inertial Ignorance
|
78,357,130
| 736,662
|
Python getting value from JSON
|
<p>I have a request whose response returns this JSON:</p>
<pre><code>Response: [{"hpsId":10032,"powerPlant":{"name":"Svartisen","id":67302, ....
</code></pre>
<p>I get the hpsId by saying</p>
<pre><code>corr_value = response.json()[0]["hpsId"]
</code></pre>
<p>But I want the <code>id</code> instead, and this is not working and gives a <code>KeyError</code>:</p>
<pre><code> corr_value = response.json()[0]["id"]
</code></pre>
<p>I assume I must say something like this:</p>
<pre><code>corr_value = response.json()[0]["powerPlant.id"]
</code></pre>
<p>But I'm not sure how to state it in Python code.
Any ideas?</p>
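For nested keys, indexing is chained one level at a time. A minimal sketch using literal data mirroring the truncated response shown above:

```python
# Literal stand-in for response.json(); the real response is truncated above.
payload = [{"hpsId": 10032, "powerPlant": {"name": "Svartisen", "id": 67302}}]

# Chain the lookups: first the "powerPlant" dict, then its "id" key.
corr_value = payload[0]["powerPlant"]["id"]
print(corr_value)  # 67302
```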
|
<python>
|
2024-04-20 05:53:14
| 2
| 1,003
|
Magnus Jensen
|
78,357,085
| 3,920,548
|
Welford Variance Differs from Numpy Variance
|
<p>I want to use Welford's method to compute a running variance and mean. I came across <a href="https://pypi.org/project/welford/" rel="nofollow noreferrer">this</a> implementation of Welford's method in Python. However, when testing to double-check that it results in the same output as the standard Numpy implementation of calculating variance, I do find that there is a difference in output.</p>
<p>Running the following code (using the Python module unittest) shows that the two give different results (even after testing many times):</p>
<pre><code>random_sample = np.random.normal(0, 1, 100)
var = np.var(random_sample, dtype=np.longdouble)
mean = np.mean(random_sample, dtype=np.longdouble)
welford = Welford()
welford.add_all(random_sample)
self.assertAlmostEqual(mean, welford.mean)
self.assertAlmostEqual(var, welford.var_s)
>> AssertionError: 1.1782075496578717837 != 1.1901086360180526 within 7 places (0.011901086360180828804 difference)
</code></pre>
<p>Interestingly, there is only a difference in the variance, not the mean.</p>
<p>For my purposes, a 0.012 difference is significant enough that it could affect my results.</p>
<p>Why would there be such a difference? Could this be due to compounding floating point errors? If so, would my best bet be to rewrite the package to use the <code>Decimal</code> class?</p>
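One detail worth ruling out when comparing implementations: <code>np.var</code> defaults to the population variance (<code>ddof=0</code>), while a suffix like <code>var_s</code> often denotes the sample variance (divide by n&minus;1). With n&nbsp;=&nbsp;100 those differ by a factor of 100/99, about 1%, the same order as the gap shown. A pure-Python illustration of the two estimators (the specific numbers are made up):

```python
data = [1.0, 2.0, 4.0, 7.0]
n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)  # sum of squared deviations

var_population = ss / n    # what np.var(data) computes by default (ddof=0)
var_sample = ss / (n - 1)  # what np.var(data, ddof=1) computes

print(var_population)  # 5.25
print(var_sample)      # 7.0
```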
|
<python><numpy><statistics><numerical-methods>
|
2024-04-20 05:28:50
| 1
| 1,784
|
Robbie
|
78,356,981
| 12,714,507
|
Terminating threads in Python using threading.Event in a class
|
<p>I have a Python program that uses fairly long-lived synchronous calls so things are wrapped in threads. Here's a rough MVP:</p>
<pre class="lang-py prettyprint-override"><code>
class MyClass:
processor = None
def __init__(self):
self.processor_active = threading.Event()
self.processor_thread = threading.Thread(target=self.start_processing_thread, daemon=True)
self.processor_thread.start()
self.processor_active.wait()
def start_processing_thread(self):
self.processor = MyProcessor() # this line takes a while to run
self.processor_active.set() # allows the constructor to release
while self.processor_active.is_set():
            data = self.processor.process() # this line can take arbitrarily long to execute
print(f"Received data: {data}")
def shutdown(self):
self.processor.shutdown() # to clean up resources
self.processor_active.clear()
self.processor_thread.join()
</code></pre>
<p>My problem: calling shutdown on a MyClass instance hangs forever without shutting down, or it prints "Received data: " repeatedly. It seems my thread cannot get the current state of the threading Event. How do I fix it?</p>
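One pattern that avoids the hang (a sketch with a hypothetical <code>Processor</code> standing in for <code>MyProcessor</code>, whose API is not shown) is to clear the event before unblocking the pending <code>process()</code> call, and to give the blocking call a way to return on shutdown:

```python
import threading

class Processor:
    """Hypothetical stand-in for MyProcessor with a blocking process() call."""
    def __init__(self):
        self._stop = threading.Event()

    def process(self):
        # Simulate an arbitrarily long synchronous call that can be interrupted.
        self._stop.wait(timeout=0.05)
        return "tick"

    def shutdown(self):
        self._stop.set()

class Worker:
    def __init__(self):
        self.active = threading.Event()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()
        self.active.wait()  # block until the processor is constructed

    def _run(self):
        self.processor = Processor()  # the slow construction happens here
        self.active.set()
        while self.active.is_set():
            self.processor.process()

    def shutdown(self):
        self.active.clear()        # ask the loop to stop first...
        self.processor.shutdown()  # ...then unblock any pending process()
        self.thread.join(timeout=2)

w = Worker()
w.shutdown()
print(w.thread.is_alive())  # False once the loop has exited
```

The key ordering is in <code>shutdown()</code>: clearing the event before interrupting the blocking call means the loop's <code>is_set()</code> check fails as soon as <code>process()</code> returns.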
|
<python><python-multithreading>
|
2024-04-20 04:22:54
| 1
| 349
|
Nimrod Sadeh
|
78,356,841
| 4,618,639
|
Getting 404 on Openai Azure Endpoint
|
<p>I am using the AzureOpenAI client and getting a 404:</p>
<pre><code>AsyncAzureOpenAI(
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-01-25-preview",
azure_deployment="XXX-staging",
azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT", ""),
)
</code></pre>
<p>As you can see in the image below, I'm using the 0125 model version. Interestingly, I don't even see my model version listed directly here:
<a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/api-version-deprecation" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/ai-services/openai/api-version-deprecation</a></p>
<p>I've tried this on openai-1.23.2 and openai-1.14.3</p>
<p><a href="https://i.sstatic.net/NaME0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NaME0.png" alt="enter image description here" /></a></p>
|
<python><azure><openai-api>
|
2024-04-20 02:38:07
| 1
| 583
|
Randy Song
|
78,356,812
| 13,250,589
|
Error: User code failed to load. Cannot determine backend specification
|
<p>I am using the Python SDK for Firebase Functions.</p>
<p>When I try to deploy functions by running the command <code>firebase deploy</code>, I receive this error:</p>
<blockquote>
<p>Error: User code failed to load. Cannot determine backend specification</p>
</blockquote>
<p>I get the same error when running the Firebase functions emulator.
If I run the command <code>firebase init functions</code> and reinstall all the dependencies, the error stops occurring, but it reappears if I stop the emulators and run them again.</p>
<p>I have tried to uninstall and reinstall the dependencies and run <code>firebase deploy</code>, but the error persists.</p>
<p>I ran the command <code>firebase deploy --debug</code> to get logs; here they are:</p>
<pre class="lang-bash prettyprint-override"><code>[2024-04-20T02:02:14.722Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"]
[2024-04-20T02:02:14.724Z] > authorizing via signed-in user (hahmed.1015@gmail.com)
[2024-04-20T02:02:14.724Z] [iam] checking project prizebonds-16d6e for permissions ["cloudfunctions.functions.create","cloudfunctions.functions.delete","cloudfunctions.functions.get","cloudfunctions.functions.list","cloudfunctions.functions.update","cloudfunctions.operations.get","datastore.indexes.create","datastore.indexes.delete","datastore.indexes.list","datastore.indexes.update","firebase.projects.get","firebasedatabase.instances.update"]
[2024-04-20T02:02:14.727Z] >>> [apiv2][query] POST https://cloudresourcemanager.googleapis.com/v1/projects/prizebonds-16d6e:testIamPermissions [none]
[2024-04-20T02:02:14.727Z] >>> [apiv2][(partial)header] POST https://cloudresourcemanager.googleapis.com/v1/projects/prizebonds-16d6e:testIamPermissions x-goog-quota-user=projects/prizebonds-16d6e
[2024-04-20T02:02:14.728Z] >>> [apiv2][body] POST https://cloudresourcemanager.googleapis.com/v1/projects/prizebonds-16d6e:testIamPermissions {"permissions":["cloudfunctions.functions.create","cloudfunctions.functions.delete","cloudfunctions.functions.get","cloudfunctions.functions.list","cloudfunctions.functions.update","cloudfunctions.operations.get","datastore.indexes.create","datastore.indexes.delete","datastore.indexes.list","datastore.indexes.update","firebase.projects.get","firebasedatabase.instances.update"]}
[2024-04-20T02:02:16.117Z] <<< [apiv2][status] POST https://cloudresourcemanager.googleapis.com/v1/projects/prizebonds-16d6e:testIamPermissions 200
[2024-04-20T02:02:16.118Z] <<< [apiv2][body] POST https://cloudresourcemanager.googleapis.com/v1/projects/prizebonds-16d6e:testIamPermissions {"permissions":["cloudfunctions.functions.create","cloudfunctions.functions.delete","cloudfunctions.functions.get","cloudfunctions.functions.list","cloudfunctions.functions.update","cloudfunctions.operations.get","datastore.indexes.create","datastore.indexes.delete","datastore.indexes.list","datastore.indexes.update","firebase.projects.get","firebasedatabase.instances.update"]}
[2024-04-20T02:02:16.119Z] >>> [apiv2][query] POST https://iam.googleapis.com/v1/projects/prizebonds-16d6e/serviceAccounts/prizebonds-16d6e@appspot.gserviceaccount.com:testIamPermissions [none]
[2024-04-20T02:02:16.119Z] >>> [apiv2][body] POST https://iam.googleapis.com/v1/projects/prizebonds-16d6e/serviceAccounts/prizebonds-16d6e@appspot.gserviceaccount.com:testIamPermissions {"permissions":["iam.serviceAccounts.actAs"]}
[2024-04-20T02:02:17.553Z] <<< [apiv2][status] POST https://iam.googleapis.com/v1/projects/prizebonds-16d6e/serviceAccounts/prizebonds-16d6e@appspot.gserviceaccount.com:testIamPermissions 404
[2024-04-20T02:02:17.554Z] <<< [apiv2][body] POST https://iam.googleapis.com/v1/projects/prizebonds-16d6e/serviceAccounts/prizebonds-16d6e@appspot.gserviceaccount.com:testIamPermissions {"error":{"code":404,"message":"Unknown service account","status":"NOT_FOUND"}}
[2024-04-20T02:02:17.554Z] [functions] service account IAM check errored, deploy may fail: HTTP Error: 404, Unknown service account {"name":"FirebaseError","children":[],"context":{"body":{"error":{"code":404,"message":"Unknown service account","status":"NOT_FOUND"}},"response":{"statusCode":404}},"exit":1,"message":"HTTP Error: 404, Unknown service account","status":404}
[2024-04-20T02:02:17.556Z] >>> [apiv2][query] GET https://firebase.googleapis.com/v1beta1/projects/prizebonds-16d6e [none]
[2024-04-20T02:02:18.070Z] <<< [apiv2][status] GET https://firebase.googleapis.com/v1beta1/projects/prizebonds-16d6e 200
[2024-04-20T02:02:18.071Z] <<< [apiv2][body] GET https://firebase.googleapis.com/v1beta1/projects/prizebonds-16d6e {"projectId":"prizebonds-16d6e","projectNumber":"784905849810","displayName":"prizeBonds","name":"projects/prizebonds-16d6e","resources":{"hostingSite":"prizebonds-16d6e","realtimeDatabaseInstance":"prizebonds-16d6e-default-rtdb"},"state":"ACTIVE","etag":"1_3927acd3-cf07-48a5-a957-d6f4deccae90"}
=== Deploying to 'prizebonds-16d6e'...
i deploying database, firestore, functions
i database: checking rules syntax...
[2024-04-20T02:02:18.078Z] >>> [apiv2][query] GET https://firebasedatabase.googleapis.com/v1beta/projects/prizebonds-16d6e/locations/-/instances/prizebonds-16d6e-default-rtdb [none]
[2024-04-20T02:02:19.598Z] <<< [apiv2][status] GET https://firebasedatabase.googleapis.com/v1beta/projects/prizebonds-16d6e/locations/-/instances/prizebonds-16d6e-default-rtdb 200
[2024-04-20T02:02:19.598Z] <<< [apiv2][body] GET https://firebasedatabase.googleapis.com/v1beta/projects/prizebonds-16d6e/locations/-/instances/prizebonds-16d6e-default-rtdb {"name":"projects/784905849810/locations/us-central1/instances/prizebonds-16d6e-default-rtdb","project":"projects/784905849810","databaseUrl":"https://prizebonds-16d6e-default-rtdb.firebaseio.com","type":"DEFAULT_DATABASE","state":"ACTIVE"}
[2024-04-20T02:02:19.600Z] >>> [apiv2][query] PUT https://prizebonds-16d6e-default-rtdb.firebaseio.com/.settings/rules.json dryRun=true
[2024-04-20T02:02:19.600Z] >>> [apiv2][body] PUT https://prizebonds-16d6e-default-rtdb.firebaseio.com/.settings/rules.json "{\n \"rules\": {\n \".read\": false,\n \".write\": false\n }\n}"
[2024-04-20T02:02:20.289Z] <<< [apiv2][status] PUT https://prizebonds-16d6e-default-rtdb.firebaseio.com/.settings/rules.json 200
[2024-04-20T02:02:20.290Z] <<< [apiv2][body] PUT https://prizebonds-16d6e-default-rtdb.firebaseio.com/.settings/rules.json {"status":"ok"}
+ database: rules syntax for database prizebonds-16d6e-default-rtdb is valid
i firestore: reading indexes from firestore.indexes.json...
i cloud.firestore: checking firestore.rules for compilation errors...
[2024-04-20T02:02:20.300Z] >>> [apiv2][query] POST https://firebaserules.googleapis.com/v1/projects/prizebonds-16d6e:test [none]
[2024-04-20T02:02:20.300Z] >>> [apiv2][body] POST https://firebaserules.googleapis.com/v1/projects/prizebonds-16d6e:test [omitted]
[2024-04-20T02:02:21.343Z] <<< [apiv2][status] POST https://firebaserules.googleapis.com/v1/projects/prizebonds-16d6e:test 200
[2024-04-20T02:02:21.343Z] <<< [apiv2][body] POST https://firebaserules.googleapis.com/v1/projects/prizebonds-16d6e:test {}
+ cloud.firestore: rules file firestore.rules compiled successfully
[2024-04-20T02:02:21.344Z] >>> [apiv2][query] GET https://firebase.googleapis.com/v1beta1/projects/prizebonds-16d6e [none]
[2024-04-20T02:02:21.856Z] <<< [apiv2][status] GET https://firebase.googleapis.com/v1beta1/projects/prizebonds-16d6e 200
[2024-04-20T02:02:21.857Z] <<< [apiv2][body] GET https://firebase.googleapis.com/v1beta1/projects/prizebonds-16d6e {"projectId":"prizebonds-16d6e","projectNumber":"784905849810","displayName":"prizeBonds","name":"projects/prizebonds-16d6e","resources":{"hostingSite":"prizebonds-16d6e","realtimeDatabaseInstance":"prizebonds-16d6e-default-rtdb"},"state":"ACTIVE","etag":"1_3927acd3-cf07-48a5-a957-d6f4deccae90"}
i functions: preparing codebase default for deployment
i functions: ensuring required API cloudfunctions.googleapis.com is enabled...
i functions: ensuring required API cloudbuild.googleapis.com is enabled...
i artifactregistry: ensuring required API artifactregistry.googleapis.com is enabled...
[2024-04-20T02:02:21.859Z] >>> [apiv2][query] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/cloudfunctions.googleapis.com [none]
[2024-04-20T02:02:21.859Z] >>> [apiv2][(partial)header] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/cloudfunctions.googleapis.com x-goog-quota-user=projects/prizebonds-16d6e
[2024-04-20T02:02:21.860Z] >>> [apiv2][query] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/runtimeconfig.googleapis.com [none]
[2024-04-20T02:02:21.861Z] >>> [apiv2][(partial)header] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/runtimeconfig.googleapis.com x-goog-quota-user=projects/prizebonds-16d6e
[2024-04-20T02:02:21.862Z] >>> [apiv2][query] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/cloudbuild.googleapis.com [none]
[2024-04-20T02:02:21.862Z] >>> [apiv2][(partial)header] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/cloudbuild.googleapis.com x-goog-quota-user=projects/prizebonds-16d6e
[2024-04-20T02:02:21.863Z] >>> [apiv2][query] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/artifactregistry.googleapis.com [none]
[2024-04-20T02:02:21.864Z] >>> [apiv2][(partial)header] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/artifactregistry.googleapis.com x-goog-quota-user=projects/prizebonds-16d6e
[2024-04-20T02:02:23.505Z] <<< [apiv2][status] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/cloudfunctions.googleapis.com 200
[2024-04-20T02:02:23.505Z] <<< [apiv2][body] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/cloudfunctions.googleapis.com [omitted]
+ functions: required API cloudfunctions.googleapis.com is enabled
[2024-04-20T02:02:23.506Z] <<< [apiv2][status] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/runtimeconfig.googleapis.com 200
[2024-04-20T02:02:23.506Z] <<< [apiv2][body] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/runtimeconfig.googleapis.com [omitted]
[2024-04-20T02:02:23.528Z] <<< [apiv2][status] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/cloudbuild.googleapis.com 200
[2024-04-20T02:02:23.529Z] <<< [apiv2][body] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/cloudbuild.googleapis.com [omitted]
+ functions: required API cloudbuild.googleapis.com is enabled
[2024-04-20T02:02:23.532Z] <<< [apiv2][status] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/artifactregistry.googleapis.com 200
[2024-04-20T02:02:23.533Z] <<< [apiv2][body] GET https://serviceusage.googleapis.com/v1/projects/prizebonds-16d6e/services/artifactregistry.googleapis.com [omitted]
+ artifactregistry: required API artifactregistry.googleapis.com is enabled
[2024-04-20T02:02:23.533Z] >>> [apiv2][query] GET https://firebase.googleapis.com/v1beta1/projects/prizebonds-16d6e/adminSdkConfig [none]
[2024-04-20T02:02:24.038Z] <<< [apiv2][status] GET https://firebase.googleapis.com/v1beta1/projects/prizebonds-16d6e/adminSdkConfig 200
[2024-04-20T02:02:24.038Z] <<< [apiv2][body] GET https://firebase.googleapis.com/v1beta1/projects/prizebonds-16d6e/adminSdkConfig {"projectId":"prizebonds-16d6e","databaseURL":"https://prizebonds-16d6e-default-rtdb.firebaseio.com","storageBucket":"prizebonds-16d6e.appspot.com"}
[2024-04-20T02:02:24.039Z] >>> [apiv2][query] GET https://runtimeconfig.googleapis.com/v1beta1/projects/prizebonds-16d6e/configs [none]
[2024-04-20T02:02:24.676Z] <<< [apiv2][status] GET https://runtimeconfig.googleapis.com/v1beta1/projects/prizebonds-16d6e/configs 200
[2024-04-20T02:02:24.676Z] <<< [apiv2][body] GET https://runtimeconfig.googleapis.com/v1beta1/projects/prizebonds-16d6e/configs {}
[2024-04-20T02:02:24.678Z] Customer code is not Node
[2024-04-20T02:02:24.679Z] Validating python source
[2024-04-20T02:02:24.679Z] Building python source
i functions: Loading and analyzing source code for codebase default to determine what to deploy
[2024-04-20T02:02:24.681Z] Could not find functions.yaml. Must use http discovery
[2024-04-20T02:02:24.689Z] Running command with virtualenv: command="G:\My Drive\dev\firebase\new\functions\venv\Scripts\activate.bat", args=["","&&","python.exe","-c","\"import firebase_functions; import os; print(os.path.dirname(firebase_functions.__file__))\""]
[2024-04-20T02:02:24.915Z] stdout: G:\My Drive\dev\firebase\new\functions\venv\Lib\site-packages\firebase_functions
[2024-04-20T02:02:24.924Z] Running admin server with args: ["python.exe","\"G:\\My Drive\\dev\\firebase\\new\\functions\\venv\\Lib\\site-packages\\firebase_functions\\private\\serving.py\""] and env: {"FIREBASE_CONFIG":"{\"projectId\":\"prizebonds-16d6e\",\"databaseURL\":\"https://prizebonds-16d6e-default-rtdb.firebaseio.com\",\"storageBucket\":\"prizebonds-16d6e.appspot.com\"}","GCLOUD_PROJECT":"prizebonds-16d6e","GOOGLE_CLOUD_QUOTA_PROJECT":"prizebonds-16d6e","ADMIN_PORT":"8081"} in G:\My Drive\dev\firebase\new\functions
[2024-04-20T02:02:24.924Z] Running command with virtualenv: command="G:\My Drive\dev\firebase\new\functions\venv\Scripts\activate.bat", args=["","&&","python.exe","\"G:\\My Drive\\dev\\firebase\\new\\functions\\venv\\Lib\\site-packages\\firebase_functions\\private\\serving.py\""]
* Serving Flask app 'serving'
* Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:8081
Press CTRL+C to quit
127.0.0.1 - - [20/Apr/2024 07:02:34] "GET /__/quitquitquit HTTP/1.1" 200 -
Error: User code failed to load. Cannot determine backend specification
</code></pre>
<p>What is causing this error and how do I solve it?</p>
|
<python><firebase><google-cloud-functions><firebase-tools>
|
2024-04-20 02:17:31
| 0
| 885
|
Hammad Ahmed
|
78,356,723
| 6,443,336
|
How to set output print precision of Sympy on Jupyter Notebook
|
<p>Using <code>set_global_settings</code> I was able to print variables on jupyter-notebook using scientific notation:</p>
<pre><code>import sympy as sy
from sympy.printing.str import StrPrinter
StrPrinter.set_global_settings(min=1, max=1)
a=sy.Matrix([189.001234])
a
</code></pre>
<p>However, I still need to set precision:</p>
<p><a href="https://i.sstatic.net/9EOPm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9EOPm.png" alt="enter image description here" /></a></p>
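<p>One workaround worth noting (not from the question, just a common SymPy idiom): <code>evalf(n)</code> evaluates the floats in an expression to <code>n</code> significant digits, which can be combined with the global print settings above.</p>

```python
import sympy as sy

a = sy.Matrix([189.001234])

# evalf(n) evaluates every float in the expression to n significant
# digits, so the printed matrix shows 189.0 instead of 189.001234
print(a.evalf(4))
```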
|
<python><jupyter-notebook><sympy>
|
2024-04-20 01:26:14
| 1
| 1,403
|
Vitor Abella
|
78,356,713
| 3,821,009
|
Add row of totals for certain Polars columns (and null for others) without listing each column separately
|
<p>Say I have this dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(dict(
j=[2, 7, 1, 8],
k=[False, True, True, False],
l=['foo', 'bar', 'quux', 'bin'],
u=[5.0, 8.0, 13.0, 21.0],
))
print(df)
</code></pre>
<pre><code>shape: (4, 4)
┌─────┬───────┬──────┬──────┐
│ j ┆ k ┆ l ┆ u │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ bool ┆ str ┆ f64 │
╞═════╪═══════╪══════╪══════╡
│ 2 ┆ false ┆ foo ┆ 5.0 │
│ 7 ┆ true ┆ bar ┆ 8.0 │
│ 1 ┆ true ┆ quux ┆ 13.0 │
│ 8 ┆ false ┆ bin ┆ 21.0 │
└─────┴───────┴──────┴──────┘
</code></pre>
<p>I can make a sum of row certain columns only and set others to <code>None</code>:</p>
<pre class="lang-py prettyprint-override"><code>df_sum = (df
.select(
pl.col('j').sum(),
pl.lit(None).alias('k'),
pl.lit(None).alias('l'),
pl.col('u').sum(),
)
)
print(df_sum)
</code></pre>
<pre><code>shape: (1, 4)
┌─────┬──────┬──────┬──────┐
│ j ┆ k ┆ l ┆ u │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ null ┆ null ┆ f64 │
╞═════╪══════╪══════╪══════╡
│ 18 ┆ null ┆ null ┆ 47.0 │
└─────┴──────┴──────┴──────┘
</code></pre>
<p>I'd like to keep the column order so that I can then <code>polars.concat</code> the two frames to get the one dataframe with totals row.</p>
<pre><code>shape: (5, 4)
┌─────┬───────┬──────┬──────┐
│ j ┆ k ┆ l ┆ u │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ bool ┆ str ┆ f64 │
╞═════╪═══════╪══════╪══════╡
│ 2 ┆ false ┆ foo ┆ 5.0 │
│ 7 ┆ true ┆ bar ┆ 8.0 │
│ 1 ┆ true ┆ quux ┆ 13.0 │
│ 8 ┆ false ┆ bin ┆ 21.0 │
│ 18 ┆ null ┆ null ┆ 47.0 │
└─────┴───────┴──────┴──────┘
</code></pre>
<p>There are two potential use cases that dictate what "certain column" means:</p>
<ul>
<li>Columns with certain type(s)</li>
<li>Columns with certain name(s)</li>
</ul>
<p>Is there a way to do this without listing each column separately?</p>
|
<python><dataframe><python-polars>
|
2024-04-20 01:16:55
| 1
| 4,641
|
levant pied
|
78,356,180
| 339,144
|
Python: What happens to signal handers when another signal is received
|
<pre><code>import time
import signal

global_thing = []

class Foreman:
    def __init__(self):
        signal.signal(signal.SIGUSR1, self.handle_sigusr1)

    def handle_sigusr1(self, sig, frame):
        global_thing.append("x")
        i_am_nr = len(global_thing)
        for i in range(4):
            print("I am taking my sweet time", i_am_nr, i)
            time.sleep(1)

    def run_forever(self):
        # there is no actual code here, all the action happens in the signal handlers
        time.sleep(3600)

Foreman().run_forever()
</code></pre>
<p>To my slight surprise, the result of this, when sending a second signal while the first one is still being dealt with, is like so:</p>
<pre><code>$ python example.py
I am taking my sweet time 1 0
I am taking my sweet time 1 1
I am taking my sweet time 1 2
I am taking my sweet time 2 0
I am taking my sweet time 2 1
I am taking my sweet time 2 2
I am taking my sweet time 2 3
I am taking my sweet time 1 3
</code></pre>
<p>In other words, it appears that the running signal handler is itself suspended while the second handler runs to completion. However, I cannot find anything in the <a href="https://docs.python.org/3/library/signal.html#signal.signal" rel="nofollow noreferrer">Python docs</a> that describes this. To rely on this behavior I would like some documentation that spells out what the expected behavior is.</p>
<p>In addition, and similarly: what happens if 2 signals arrive at (almost) the exact same time: will the CPython interpreter be able to catch them both and deal with them as observed? Or is there some lower level where this can still go wrong (and in what way?)</p>
<p>(I'm particularly interested in Linux)</p>
|
<python><signals>
|
2024-04-19 21:10:50
| 1
| 2,577
|
Klaas van Schelven
|
78,355,920
| 6,386,155
|
How do you count repeating entries in a column?
|
<p>I have a data frame like this:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
</tr>
<tr>
<td>0</td>
</tr>
<tr>
<td>0</td>
</tr>
<tr>
<td>1</td>
</tr>
<tr>
<td>1</td>
</tr>
<tr>
<td>0</td>
</tr>
<tr>
<td>0</td>
</tr>
</tbody>
</table></div>
<p>I would like to create another column such that it counts the number of time a value repeats in a batch:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Value</th>
<th>Freq</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>3</td>
</tr>
<tr>
<td>0</td>
<td>3</td>
</tr>
<tr>
<td>0</td>
<td>3</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table></div>
<p>A plain group-by on the value will not work here: it would count all occurrences of each value, not the length of each consecutive run.
How would you do it?</p>
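<p>A common idiom for this (a sketch; the column name <code>Freq</code> follows the question): compare each value with its neighbour to label consecutive runs, then group by the run label and broadcast the group size with <code>transform</code>.</p>

```python
import pandas as pd

df = pd.DataFrame({"Value": [0, 0, 0, 1, 1, 0, 0]})

# a new run starts wherever the value differs from the previous row;
# the cumulative sum of those change points gives one id per run
run_id = (df["Value"] != df["Value"].shift()).cumsum()
df["Freq"] = df.groupby(run_id)["Value"].transform("size")
print(df)
```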
|
<python><pandas><dataframe>
|
2024-04-19 19:51:48
| 1
| 885
|
user6386155
|
78,355,889
| 10,625,777
|
Dealing with I/O exceptions beyond OSError
|
<p>I have an endless program that needs to write to daily log files. Below is basically my Message class that opens a file at startup and closes/opens a new file daily. Now I'm testing for error handling, as the program should not die even if it can't write logs. For example, the file system fills up. Most normal I/O errors are OSError, and this handles those.</p>
<p>If at startup the initial file can't be opened, I get this exception</p>
<blockquote>
<p>AttributeError: 'NoneType' object has no attribute 'write'</p>
</blockquote>
<p>because the log object can't be created. I then try to log.write to a None object with no write routine. A good log open creates a <class '_io.TextIOWrapper'>. <strong>Is there a way to initialize log to a dummy TextIOWrapper? And would that gain me anything?</strong></p>
<p>Because if you do write to a closed file handle you get this exception</p>
<blockquote>
<p>ValueError: I/O operation on closed file.</p>
</blockquote>
<p>I'm baffled as to why this is not an OSError. So I at least have to add ValueError to my exception list.</p>
<pre><code>class Message:
    log_date = f"{datetime.now():%Y%m%d}"
    log_file = log_date + '.log'
    try:
        log = open(log_file, 'a')
        log.write(f"{log_file} opened\n")
        log.flush()
    except OSError as err:
        errno, strerror = err.args
        print(f"I/O Error with {log_file}; {strerror}")

    def print(msg: str):
        today = f"{datetime.now():%Y%m%d}"
        try:
            if today != Message.log_date:
                Message.log.close()
                Message.log_date = today
                Message.log_file = today + '.log'
                Message.log = open(Message.log_file, 'a')
                Message.log.write(f"{Message.log_file} opened\n")
            Message.log.write(f"{msg}\n")
            Message.log.flush()
        except OSError as err:
            errno, strerror = err.args
            print(f"I/O Error with {Message.log_file}; {strerror}")
</code></pre>
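<p>On the TextIOWrapper question: rather than constructing a dummy wrapper by hand, one option (a sketch, not from the original code; the failing path is hypothetical) is to fall back to <code>os.devnull</code>, which gives a real writable file object whose output is simply discarded, so later <code>log.write</code> calls cannot raise <code>AttributeError</code>.</p>

```python
import os

try:
    log = open("/nonexistent-dir/today.log", "a")  # hypothetical failing path
except OSError:
    # a real file object that accepts write()/flush() but discards everything
    log = open(os.devnull, "a")

log.write("this line is either logged or silently dropped\n")
log.flush()
```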
|
<python><attributeerror><valueerror><oserror>
|
2024-04-19 19:44:11
| 0
| 495
|
Chris
|
78,355,856
| 16,717,009
|
Identify if there is a contiguous group of non-zeros in a list
|
<p>I have a list of integers. I want to know if the non-zero numbers form a <em>single</em> contiguous group. So for instance:</p>
<pre><code>[0, 0, -1, 2, -3, 0] True
[1, 2, 3, 0, 0, 0] True
[1, 2, 3, 4, 5, 0] True
[1, 1, 1, 1, 1, 1] True
[0, 1, 0, 3, 3, 3] False
[1, 2, 3, 0, 3, 0] False
[0, 0, 0, 0, 0, 0] False
</code></pre>
<p>My code that produced the above:</p>
<pre><code>def simple_contiguous_subset(data):
    if all(x == 0 for x in data):  # drop this line and next if all 0 should be True
        return False
    seen_zero_after_group = False
    in_non_zero_group = False
    for num in data:
        if num == 0:
            if not in_non_zero_group:
                seen_zero_after_group = False
            else:
                seen_zero_after_group = True
        else:
            if seen_zero_after_group:
                return False
            in_non_zero_group = True
    return True

data_list = [[0,0,-1,2,-3,0], [1,2,3,0,0,0], [1,2,3,4,5,0], [1,1,1,1,1,1], [0,1,0,3,3,3], [1,2,3,0,3,0], [0,0,0,0,0,0]]
for data in data_list:
    result = simple_contiguous_subset(data)
    print(f"{data} {result}")
</code></pre>
<p>I'm just wondering if there's a faster, more direct way. I've struggled a bit with <code>itertools.groupby</code> but am not making any headway.</p>
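<p>For what it's worth, <code>itertools.groupby</code> can express this quite directly (a sketch): group consecutive elements by their zero-ness and count how many non-zero groups appear.</p>

```python
from itertools import groupby

def single_nonzero_group(data):
    # one group per maximal run of equal keys; exactly one True-keyed
    # group means the non-zeros form a single contiguous block
    return sum(1 for key, _ in groupby(data, key=lambda x: x != 0) if key) == 1

for data in [[0, 0, -1, 2, -3, 0], [1, 2, 3, 0, 0, 0],
             [0, 1, 0, 3, 3, 3], [0, 0, 0, 0, 0, 0]]:
    print(data, single_nonzero_group(data))
```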
|
<python>
|
2024-04-19 19:34:26
| 3
| 343
|
MikeP
|
78,355,676
| 4,117,496
|
OSError: libtorch_cuda_cpp.so: cannot open shared object file: No such file or directory
|
<p>I needed to have Python <code>torchaudio</code> library installed for my application which is packaged into a Docker image.</p>
<p>I am able to do this easily on my EC2 instance easily:</p>
<pre><code>pip3 install torchaudio
python3
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torchaudio
>>> torchaudio.__version__
'2.2.1+cu121'
</code></pre>
<p>But not through my Dockerfile, here's what I have in my Dockerfile:</p>
<pre><code>RUN pip3 install --target=/opt/prod/lib/python3.8/site-packages torchaudio
</code></pre>
<p>but when I entered into the docker container started from this image:</p>
<pre><code>>>> import torchaudio
/opt/prod/lib/python3.8/site-packages/torchaudio/_internal/module_utils.py:99: UserWarning: Failed to import soundfile. 'soundfile' backend is not available.
warnings.warn("Failed to import soundfile. 'soundfile' backend is not available.")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/prod/lib/python3.8/site-packages/torchaudio/__init__.py", line 1, in <module>
from torchaudio import ( # noqa: F401
File "/opt/prod/lib/python3.8/site-packages/torchaudio/_extension.py", line 135, in <module>
_init_extension()
File "/opt/prod/lib/python3.8/site-packages/torchaudio/_extension.py", line 105, in _init_extension
_load_lib("libtorchaudio")
File "/opt/prod/lib/python3.8/site-packages/torchaudio/_extension.py", line 52, in _load_lib
torch.ops.load_library(path)
File "/opt/prod/lib/python3.8/site-packages/torch/_ops.py", line 852, in load_library
ctypes.CDLL(path)
File "/opt/prod/python3.8/lib/python3.8/ctypes/__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libtorch_cuda_cpp.so: cannot open shared object file: No such file or directory
</code></pre>
|
<python><python-3.x><docker><pytorch><torchaudio>
|
2024-04-19 18:47:00
| 1
| 3,648
|
Fisher Coder
|
78,355,442
| 719,812
|
Python json.dumps of a tuple with some UTF-8 characters, either fails or converts them. I want the encoded character retained as is
|
<p>On my server, a Python script gets data from a database as a tuple. Then the script converts the tuple to a string (using json.dumps()) to be passed to the JavaScript script in the user's browser.</p>
<p>The data include German names such as Weidmüller. When the Python script gets that data, it comes back as Weidm\xfcller, where \xfc is the Latin-1 (not UTF-8) encoding of ü. So far so good.</p>
<p>However,</p>
<ul>
<li><code>json.dumps(tableData,ensure_ascii=False)</code> converts the \xfc to �</li>
<li><code>json.dumps(tableData,ensure_ascii=True)</code> fails: "UnicodeDecodeError: 'utf8' codec can't decode byte 0xfc in position 5: invalid start byte"</li>
</ul>
<p>What I really want is for json.dumps to leave the UTF-8 encoded character alone; to just pass the \xfc as is. That way the JavaScript script in the user's browser can do the decoding.
Is that possible?</p>
<p>Or, am I approaching the problem incorrectly?</p>
<p>Here is the complete code:</p>
<pre><code>import MySQLdb
...

# Open the data base and return a handle to it and its cursor
dataBase, dbCursor = database.OpenDB()

# Get data from the URL
fieldStore = cgi.FieldStorage()
selFieldName = selFieldValue = ''
sqlQuery = 'SELECT * FROM %s' % (database.CompTableName)
if ('fldName' in fieldStore) and ('fldValue' in fieldStore):
    fldName = fieldStore['fldName'].value
    fldValue = fieldStore['fldValue'].value
    sqlQuery += ' WHERE %s = \'%s\'' % (fldName,fldValue)
if ('max' in fieldStore):
    maxRows = fieldStore['max'].value
    sqlQuery += ' LIMIT ' + maxRows

# Get the selected data in the table as a list of lists
rowsAffected = dbCursor.execute(sqlQuery)
tableData = dbCursor.fetchall()

# Close the database and return the results
dataBase.close()
jsonTableData = json.dumps(tableData,encoding='latin1',ensure_ascii=True)
print jsonTableData
</code></pre>
<p>And here is test code:</p>
<pre><code> tableData = (('item1', 'Jones',), ('item2', 'Weidm\xfcller'))
jsonTableData = json.dumps(tableData,encoding='latin1',ensure_ascii=True)
print jsonTableData
</code></pre>
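<p>For illustration, here is how the pieces behave in Python 3 semantics (a sketch; the question itself is Python 2, where <code>str</code> is a byte string): decoding the byte <code>\xfc</code> as Latin-1 first gives <code>json.dumps</code> a proper text string, which it can then escape safely for the browser to decode.</p>

```python
import json

raw = b"Weidm\xfcller"            # byte string as returned by the database
decoded = raw.decode("latin-1")   # 0xFC is Latin-1 for "ü"

encoded = json.dumps(decoded)     # default ensure_ascii escapes it as \u00fc
print(encoded)
```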
|
<python><json><python-2.7><utf-8>
|
2024-04-19 17:51:49
| 1
| 1,425
|
Davide Andrea
|
78,355,329
| 9,771,547
|
TypeError: ADAM.minimize() got an unexpected keyword argument 'initial_point'
|
<p>When I run the code below, it shows the error below. I tried with COBYLA as well and got the same error.</p>
<pre><code> optimizer = ADAM(maxiter=100)
new_params = optimizer.minimize(len(agent.params), objective, initial_point=agent.params)
</code></pre>
<p><a href="https://i.sstatic.net/BlyYt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BlyYt.png" alt="enter image description here" /></a></p>
<p>However, when I change the code as below,</p>
<pre><code>while True:
    action = int(agent.choose_action(state))
    new_state, reward, done = env.step(action)

    # Quantum reinforcement learning update
    def objective(params):
        q_circ = q_circuit(params, state + action)
        q_circ_transpiled = transpile(q_circ, agent.backend)
        q_job = assemble(q_circ_transpiled, shots=1000)
        job_result = agent.backend.run(q_job).result()
        counts = job_result.get_counts(q_circ_transpiled)
        return -counts.get(str(action), 0)

    print("agent.params: ", agent.params)
    new_params, _, _ = optimizer.minimize(objective, agent.params, jac=None, bounds=None)
</code></pre>
<p>Here, agent.params: [0.43454172 0.87378962]</p>
<p>Nonetheless, it shows the error below:</p>
<p><a href="https://i.sstatic.net/muInY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/muInY.png" alt="enter image description here" /></a></p>
<p>Any help in this regard will be highly appreciated. Thanks</p>
|
<python><optimization><qiskit><adam>
|
2024-04-19 17:23:07
| 0
| 351
|
monir zaman
|
78,355,159
| 895,029
|
How to monkey patch `__init__` in a third party module?
|
<p>I'm trying to monkey patch the <code>__init__</code> method of a third party module (<a href="https://github.com/jborean93/smbprotocol/tree/master" rel="nofollow noreferrer">smbprotocol</a>).</p>
<p>Specifically, the <code>SMBDirectoryIO#_init__</code> which inherits from <code>SMBRawIO#__init__</code> (<a href="https://github.com/jborean93/smbprotocol/blob/master/src/smbclient/_io.py#L359-L361" rel="nofollow noreferrer">here</a>)</p>
<p>I want to add <code>"a"</code> to the <code>mode</code> with which the file is opened, because currently <code>listdir</code> only calls <code>SMBDirectoryIO</code> with <code>mode="r"</code> which isn't enough for my needs (see <a href="https://github.com/jborean93/smbprotocol/blob/master/src/smbclient/_os.py#L237" rel="nofollow noreferrer">here</a>)</p>
<p>I've tried various takes on the following approach, but the code doesn't seem to ever be hit</p>
<pre class="lang-py prettyprint-override"><code>
import smbclient._io

class SMBDirectoryIOPatched(smbclient._io.SMBDirectoryIO):
    def __init__(self, path, mode, *args, **kwargs):
        super().__init__(path, mode + "a", *args, **kwargs)

smbclient._io.SMBDirectoryIO = SMBDirectoryIOPatched
</code></pre>
<p>Any ideas where I'm going wrong? Thanks</p>
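<p>One thing to check (a general Python point, not specific to smbprotocol): rebinding <code>smbclient._io.SMBDirectoryIO</code> only affects lookups that go through that module; any module that already did <code>from ... import SMBDirectoryIO</code> (such as <code>smbclient._os</code>) keeps its own reference and must be patched too. A minimal self-contained reproduction of the pitfall, using hypothetical module and class names:</p>

```python
import sys
import types

# fabricate module_a defining Widget, and module_b that from-imports it
module_a = types.ModuleType("module_a")
exec("class Widget:\n    def __init__(self, mode):\n        self.mode = mode",
     module_a.__dict__)
sys.modules["module_a"] = module_a

module_b = types.ModuleType("module_b")
exec("from module_a import Widget\n"
     "def make():\n"
     "    return Widget('r')",
     module_b.__dict__)
sys.modules["module_b"] = module_b

class PatchedWidget(module_a.Widget):
    def __init__(self, mode):
        super().__init__(mode + "a")

module_a.Widget = PatchedWidget   # not enough: module_b still holds the old class
print(module_b.make().mode)       # still "r"

module_b.Widget = PatchedWidget   # patch the namespace where the name is used
print(module_b.make().mode)       # now "ra"
```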
|
<python><python-3.x><monkeypatching><smb>
|
2024-04-19 16:41:59
| 0
| 4,506
|
rwb
|
78,355,151
| 5,786,649
|
Combining paths from hydra configurations (varible interpolation)
|
<p>I am using hydra for configuration management.</p>
<p>When my configuration files contain parts of paths (folder names, file names), what is the most convenient way to create a full path to a given file? As an example:</p>
<p>I have a <code>config.yaml</code> that contains some folder names:</p>
<pre class="lang-yaml prettyprint-override"><code>paths:
  data:
    base: data          # top-level data folder in my project
    external: external  # data as loaded from other sources
    raw: raw            # data as in external, but stored as binaries
dataset:
</code></pre>
<p>My configuration-files in <code>dataset</code> contain file names, for example</p>
<pre class="lang-yaml prettyprint-override"><code>files:
  train: train_data
  test: test_data
</code></pre>
<p>Finally, I have defined my own subclasses of <code>DictConfig</code> for better type hinting in my editor.</p>
<p>What I would like:</p>
<pre class="lang-py prettyprint-override"><code>>>> cfg.dataset.files.train
Path("base/raw/train_data")
</code></pre>
<p>But currently, I always have to write</p>
<pre class="lang-py prettyprint-override"><code>>>> Path(cfg.paths.data.base)/cfg.paths.data.raw/cfg.dataset.files.train
Path("base/raw/train_data")
</code></pre>
<p>Is there a hydra-based solution? Or should I rather create the full filepaths in my <code>DictConfig</code> subclasses? Or should I completely abandon storing filepaths in configs?</p>
|
<python><fb-hydra>
|
2024-04-19 16:40:42
| 1
| 543
|
Lukas
|
78,355,086
| 1,231,450
|
Pandas rolling closest value
|
<p>Suppose we have the following dataframe:</p>
<pre><code> timestamp open high low close delta atr last_index bearish bullish_turning_point
2 04-10-2024 01:54:44 18370.00 18377.75 18367.50 18376.00 32 0 1949 False True
5 04-10-2024 03:21:14 18376.50 18383.00 18375.25 18381.25 28 0 3899 False True
7 04-10-2024 04:38:54 18378.50 18386.25 18378.25 18385.50 133 0 5199 False True
9 04-10-2024 05:30:27 18384.00 18389.50 18378.75 18388.25 135 0 6499 False True
12 04-10-2024 06:06:12 18371.00 18378.00 18369.50 18378.00 130 0 8449 False True
14 04-10-2024 06:33:44 18372.25 18383.75 18372.00 18376.25 67 0 9749 False True
18 04-10-2024 07:21:14 18377.50 18387.75 18376.25 18380.00 8 0 12349 False True
22 04-10-2024 07:47:58 18388.00 18396.75 18385.25 18389.50 -30 0 14949 False True
25 04-10-2024 08:06:17 18390.75 18397.00 18387.50 18392.00 -25 0 16899 False True
28 04-10-2024 08:33:32 18384.75 18398.00 18383.25 18394.00 89 0 18849 False True
30 04-10-2024 08:54:35 18391.25 18403.00 18387.75 18399.25 84 0 20149 False True
34 04-10-2024 09:11:15 18388.75 18396.25 18385.75 18392.25 15 0 22749 False True
43 04-10-2024 10:02:22 18343.50 18350.50 18341.25 18350.50 113 0 28599 False True
46 04-10-2024 10:14:44 18352.00 18361.75 18352.00 18360.00 -42 0 30549 False True
49 04-10-2024 10:35:49 18354.00 18361.25 18347.75 18358.00 49 0 32499 False True
52 04-10-2024 10:54:18 18362.25 18372.00 18361.50 18372.00 180 0 34449 False True
56 04-10-2024 11:12:32 18369.25 18379.50 18367.00 18376.50 78 0 37049 False True
59 04-10-2024 11:27:27 18370.00 18376.50 18367.50 18373.25 54 0 38999 False True
65 04-10-2024 12:01:53 18377.75 18388.25 18377.50 18383.25 108 0 42899 False True
73 04-10-2024 12:25:04 18382.00 18386.25 18381.00 18384.75 65 0 48099 False True
</code></pre>
<p>For each row, how can I find the earlier row whose <code>open</code> is nearest to that row's <code>close</code>? E.g.:</p>
<p>For line 30 (<code>close</code>: 18399.25) this would be line 25 (<code>open</code>: 18390.75). For line 52 (<code>close</code>: 18372.00) this would be 14 (<code>open</code>: 18372.25) and so on.</p>
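<p>One way to sketch this (with a small made-up frame, since the search runs over <em>all</em> earlier rows rather than a fixed window): for each row, take the absolute difference between its <code>close</code> and every preceding <code>open</code>, and pick the position of the minimum.</p>

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "open":  [18390.75, 18391.25, 18372.25, 18362.25],
    "close": [18392.00, 18399.25, 18376.25, 18372.00],
})

def nearest_prev_open(i):
    prev = df["open"].iloc[:i]          # opens of all strictly earlier rows
    if prev.empty:
        return np.nan                   # no earlier row exists for the first one
    return prev.iloc[(prev - df["close"].iloc[i]).abs().argmin()]

df["nearest_open"] = [nearest_prev_open(i) for i in range(len(df))]
print(df)
```

This is O(n²); for large frames a sorted-search approach (e.g. via <code>numpy.searchsorted</code> on an expanding sorted view) would scale better.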
|
<python><pandas>
|
2024-04-19 16:30:37
| 2
| 43,253
|
Jan
|
78,355,057
| 6,714,667
|
How can i iterate through this list and return a text if threshold not met?
|
<pre><code>scores = [0.9, 0.8, 0.3, 0.4]
new = []
for sc in scores:
    if sc < 0.8:
        pass
    else:
        new.append(sc)

print(new)
</code></pre>
<p>However, I also want to include a response: if none of the scores are >0.8, then print "none of the scores in list had a score that met threshold".
How can I do this efficiently?</p>
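<p>A compact variant (a sketch; it keeps the <code>&gt;= 0.8</code> comparison implied by the original loop): filter with a list comprehension, then test the result for emptiness.</p>

```python
scores = [0.9, 0.8, 0.3, 0.4]

new = [sc for sc in scores if sc >= 0.8]
if new:
    print(new)
else:
    print("none of the scores in list had a score that met threshold")
```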
|
<python><python-3.x>
|
2024-04-19 16:22:40
| 3
| 999
|
Maths12
|
78,354,977
| 1,104,581
|
Why can't I get PySpark to drop the right duplicate columns after a 'leftouter' join with a Dataframe that is itself the result of a join?
|
<p>I have the following input dataframes:</p>
<pre class="lang-py prettyprint-override"><code>expected = spark.createDataFrame(
    # fmt: off
    data=[
        {"id": "1", "group": "1", "start": 1_000_000, "stop": 1_001_200, "info": "info1"},
        {"id": "1", "group": "6", "start": 6_001_000, "stop": 6_003_330, "info": "info2"},
        {"id": "1", "group": "9", "start": 3_080_100, "stop": 3_081_000, "info": "info3"},
        {"id": "2", "group": "1", "start": 1_000_000, "stop": 1_001_200, "info": "info4"},
        {"id": "2", "group": "6", "start": 6_001_000, "stop": 6_003_330, "info": "info5"},
        {"id": "2", "group": "9", "start": 3_080_100, "stop": 3_081_000, "info": "info6"},
    ],
    # fmt: on
    schema=StructType(
        [
            StructField("id", StringType(), False),
            StructField("group", StringType(), False),
            StructField("start", IntegerType(), False),
            StructField("stop", IntegerType(), False),
            StructField("info", StringType(), False),
        ]
    ),
)

found = spark.createDataFrame(
    # fmt: off
    data=[
        {"id": "1", "group": "1", "start": 1_000_000, "stop": 1_001_200},
        {"id": "1", "group": "9", "start": 3_080_103, "stop": 3_080_500},
        {"id": "1", "group": "9", "start": 3_080_511, "stop": 3_081_000},
        {"id": "2", "group": "1", "start": 1_000_005, "stop": 1_001_200},
        {"id": "2", "group": "6", "start": 6_000_000, "stop": 6_003_009},
        {"id": "2", "group": "6", "start": 6_003_015, "stop": 6_004_000},
        {"id": "2", "group": "9", "start": 3_080_100, "stop": 3_080_500},
        {"id": "2", "group": "9", "start": 3_080_496, "stop": 3_080_996},
    ],
    # fmt: on
    schema=StructType(
        [
            StructField("id", StringType(), False),
            StructField("group", StringType(), False),
            StructField("start", IntegerType(), False),
            StructField("stop", IntegerType(), False),
        ]
    ),
)
</code></pre>
<p>If I do a 'leftouter' join of them with 'id' and 'group' and drop the duplicate columns used in the join from the right, I get the columns with <code>null</code> values dropped, as expected:</p>
<pre class="lang-py prettyprint-override"><code>joined_left_outer_without_range = (
    expected.join(
        found,
        on=[
            expected.id == found.id,
            expected.group == found.group,
        ],
        how="leftouter",
    )
    .drop(found.id)
    .drop(found.group)
)
print("joined_left_outer_without_range:")
joined_left_outer_without_range.show()
</code></pre>
<pre><code>joined_left_outer_without_range:
+---+-----+-------+-------+-----+-------+-------+
| id|group| start| stop| info| start| stop|
+---+-----+-------+-------+-----+-------+-------+
| 1| 1|1000000|1001200|info1|1000000|1001200|
| 1| 6|6001000|6003330|info2| null| null|
| 1| 9|3080100|3081000|info3|3080511|3081000|
| 1| 9|3080100|3081000|info3|3080103|3080500|
| 2| 1|1000000|1001200|info4|1000005|1001200|
| 2| 6|6001000|6003330|info5|6003015|6004000|
| 2| 6|6001000|6003330|info5|6000000|6003009|
| 2| 9|3080100|3081000|info6|3080496|3080996|
| 2| 9|3080100|3081000|info6|3080100|3080500|
+---+-----+-------+-------+-----+-------+-------+
</code></pre>
<p>However, if I first join the columns with an 'inner' range join, and then join the result with the original dataframe using a 'leftouter' join, attempting to drop the duplicate columns coming from the right dataframe actually drops the columns from the left dataframe and leaves the <code>null</code> values from the right dataframe:</p>
<pre class="lang-py prettyprint-override"><code>joined_range_overlap = (
    expected.hint("range_join", 300).join(
        found,
        on=[
            expected.id == found.id,
            expected.group == found.group,
            expected.start < found.stop,
            expected.stop > found.start,
        ],
        how="inner",
    )
    .drop(found.id)
    .drop(found.group)
    .withColumn("found_start", found.start)
    .withColumn("found_stop", found.stop)
    .drop(found.start)
    .drop(found.stop)
)
print("joined_range_overlap:")
joined_range_overlap.show()

joined_with_missing_overlap = (
    expected.join(
        joined_range_overlap,
        on=[
            expected.id == joined_range_overlap.id,
            expected.group == joined_range_overlap.group,
            expected.start == joined_range_overlap.start,
            expected.stop == joined_range_overlap.stop,
        ],
        how="leftouter",
    )
    .drop(joined_range_overlap.id)
    .drop(joined_range_overlap.group)
    .drop(joined_range_overlap.start)
    .drop(joined_range_overlap.stop)
)
print("joined_with_missing_overlap:")
joined_with_missing_overlap.show()
</code></pre>
<pre><code>joined_range_overlap:
+---+-----+-------+-------+-----+-----------+----------+
| id|group| start| stop| info|found_start|found_stop|
+---+-----+-------+-------+-----+-----------+----------+
| 1| 1|1000000|1001200|info1| 1000000| 1001200|
| 1| 9|3080100|3081000|info3| 3080103| 3080500|
| 1| 9|3080100|3081000|info3| 3080511| 3081000|
| 2| 1|1000000|1001200|info4| 1000005| 1001200|
| 2| 6|6001000|6003330|info5| 6000000| 6003009|
| 2| 6|6001000|6003330|info5| 6003015| 6004000|
| 2| 9|3080100|3081000|info6| 3080100| 3080500|
| 2| 9|3080100|3081000|info6| 3080496| 3080996|
+---+-----+-------+-------+-----+-----------+----------+
joined_with_missing_overlap:
+-----+----+-----+-------+-------+-----+-----------+----------+
| info| id|group| start| stop| info|found_start|found_stop|
+-----+----+-----+-------+-------+-----+-----------+----------+
|info1| 1| 1|1000000|1001200|info1| 1000000| 1001200|
|info2|null| null| null| null| null| null| null|
|info3| 1| 9|3080100|3081000|info3| 3080511| 3081000|
|info3| 1| 9|3080100|3081000|info3| 3080103| 3080500|
|info4| 2| 1|1000000|1001200|info4| 1000005| 1001200|
|info5| 2| 6|6001000|6003330|info5| 6003015| 6004000|
|info5| 2| 6|6001000|6003330|info5| 6000000| 6003009|
|info6| 2| 9|3080100|3081000|info6| 3080496| 3080996|
|info6| 2| 9|3080100|3081000|info6| 3080100| 3080500|
+-----+----+-----+-------+-------+-----+-----------+----------+
</code></pre>
<p>Why is this not working as expected, and how do I get PySpark to drop the intended columns without explicitly renaming the columns?</p>
<p>Note: the reason I am doing these as two separate joins is that I am trying to use the Databricks range join optimization hint, <a href="https://docs.databricks.com/en/optimizations/range-join.html#range-join-optimization" rel="nofollow noreferrer">which is only performed for 'inner' range overlap join</a>.</p>
|
<python><apache-spark><join><pyspark>
|
2024-04-19 16:06:52
| 1
| 2,104
|
dr-igor
|
78,354,974
| 2,573,075
|
AWS DeepAR predict returns 400
|
<p>I'm trying to run DeepAR to estimate the next 15 records of a series, based on the current and previous months.</p>
<p>I followed the examples and believe that everything is OK, yet I get a very cryptic error.</p>
<p>I put here the code:</p>
<pre><code>predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type='ml.m5.large',
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer()
)

json_request = json.dumps({
    "instances": ts,
    "configuration": {
        "num_samples": 10,
        "output_types": ["quantiles", "samples"],
        "quantiles": ['0.2', '0.5', '0.8']
    }
})

prediction = predictor.predict(json_request)
</code></pre>
<p>My json looks like this:</p>
<pre><code>{"instances":
[{"start": "2024-03-01",
"target":[60,10,86,62,21,25,7,79,33,82,34,43,14,99,5,37,85,84,88,25,2,14,15,98,14,75,70,99,12]
},
{"start": "2024-04-01",
"target": [55,89,40,81,87,7,49,77,37,42,48,27,89,45,85]
}],
"configuration": {"num_samples": 15, "output_types": ["quantiles", "samples"], "quantiles": ["0.2", "0.5", "0.8"]}}
</code></pre>
<p>But I have the following error:</p>
<pre><code>---------------------------------------------------------------------------
ModelError Traceback (most recent call last)
Cell In[22], line 2
1 print(type('json_request'))
----> 2 prediction = predictor.predict(json_request)
3 print(prediction)
File c:\Users\civan\PycharmProjects\JupyterBooks\.venv\Lib\site-packages\sagemaker\base_predictor.py:212, in Predictor.predict(self, data, initial_args, target_model, target_variant, inference_id, custom_attributes, component_name)
209 if inference_component_name:
210 request_args["InferenceComponentName"] = inference_component_name
--> 212 response = self.sagemaker_session.sagemaker_runtime_client.invoke_endpoint(**request_args)
213 return self._handle_response(response)
File c:\Users\civan\PycharmProjects\JupyterBooks\.venv\Lib\site-packages\botocore\client.py:553, in ClientCreator._create_api_method.<locals>._api_call(self, *args, **kwargs)
549 raise TypeError(
550 f"{py_operation_name}() only accepts keyword arguments."
551 )
552 # The "self" in this scope is referring to the BaseClient.
--> 553 return self._make_api_call(operation_name, kwargs)
File c:\Users\civan\PycharmProjects\JupyterBooks\.venv\Lib\site-packages\botocore\client.py:1009, in BaseClient._make_api_call(self, operation_name, api_params)
1005 error_code = error_info.get("QueryErrorCode") or error_info.get(
1006 "Code"
1007 )
1008 error_class = self.exceptions.from_code(error_code)
-> 1009 raise error_class(parsed_response, operation_name)
1010 else:
1011 return parsed_response
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "Unable to evaluate payload provided".
</code></pre>
<p>I have tried without the serializer/deserializer; it also fails, but with different, format-related errors.</p>
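<p>One thing that may be worth ruling out (an assumption, not confirmed by the traceback): the predictor was deployed with <code>JSONSerializer()</code>, which serializes whatever object it is given; passing a string that was already produced by <code>json.dumps</code> can therefore double-encode the payload. A small stdlib illustration of the effect:</p>

```python
import json

payload = {"instances": [{"start": "2024-03-01", "target": [1, 2, 3]}]}

once = json.dumps(payload)   # what a JSON serializer would send for a dict
twice = json.dumps(once)     # pre-dumped string serialized again

print(json.loads(once) == payload)   # True: round-trips to the payload
print(type(json.loads(twice)))       # str: decodes to a string, not a dict
```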
|
<python><amazon-web-services><time-series><amazon-sagemaker><deepar>
|
2024-04-19 16:05:59
| 2
| 633
|
Claudiu
|
78,354,862
| 6,546,694
|
polars getting a StringCacheMismatchError even after using stringcache context manager
|
<p>Using polars throughout</p>
<p>I want to use <code>replace</code> to label-encode my categorical columns. The problem is that my dataframe is made by concatenating other dataframes.</p>
<p>I can do the following:</p>
<pre><code>df1 = pl.DataFrame(
{'a':[1,2],
'b':['a','b']},
schema = {'a':pl.Float32, 'b': pl.Categorical})
cats = df1['b'].cat.get_categories().to_list()
df1 = df1.with_columns(
pl.col('b').replace(cats, range(len(cats)), return_dtype = pl.Int32)
)
print(df1)
output:
shape: (2, 2)
┌─────┬─────┐
│ a ┆ b │
│ --- ┆ --- │
│ f32 ┆ i32 │
╞═════╪═════╡
│ 1.0 ┆ 0 │
│ 2.0 ┆ 1 │
</code></pre>
<p>But the following fails:</p>
<pre><code>with pl.StringCache():
df1 = pl.DataFrame(
{'a':[1,2],
'b':['a','b']},
schema = {'a':pl.Float32, 'b': pl.Categorical})
df2 = pl.DataFrame(
{'c':[3,4], 'b':['a','c']},
schema = {'c': pl.Int32, 'b': pl.Categorical})
df = pl.concat([df1, df2], how = 'diagonal')
print(df)
output:
shape: (4, 3)
┌──────┬─────┬──────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ f32 ┆ cat ┆ i32 │
╞══════╪═════╪══════╡
│ 1.0 ┆ a ┆ null │
│ 2.0 ┆ b ┆ null │
│ null ┆ a ┆ 3 │
│ null ┆ c ┆ 4 │
└──────┴─────┴──────┘
</code></pre>
<pre><code>cats = df['b'].cat.get_categories().to_list()
df = df.with_columns(
pl.col('b')
.replace(cats, range(len(cats)), return_dtype = pl.Int32))
print(df)
output:
StringCacheMismatchError: cannot compare categoricals coming from different sources,
consider setting a global StringCache.
</code></pre>
<p>How can I replace the categories with an integer using replace in case my dataframe is formed by concatenating several other dataframes?</p>
|
<python><dataframe><python-polars><categorical-data>
|
2024-04-19 15:44:15
| 2
| 5,871
|
figs_and_nuts
|
78,354,838
| 2,803,344
|
RuntimeError when using keras 3 with pytorch backend
|
<p>I am trying to reproduce the VAE model demonstrated in the Keras 3 documentation:
<a href="https://keras.io/examples/generative/vae/" rel="nofollow noreferrer">https://keras.io/examples/generative/vae/</a> and <a href="https://keras.io/guides/making_new_layers_and_models_via_subclassing/#putting-it-all-together-an-endtoend-example" rel="nofollow noreferrer">https://keras.io/guides/making_new_layers_and_models_via_subclassing/#putting-it-all-together-an-endtoend-example</a></p>
<p>The second example works well. Since I am trying to use a <em>torch</em> backend, I modified some places in the first example:</p>
<pre class="lang-py prettyprint-override"><code>import os
os.environ['KERAS_BACKEND'] = 'torch'
import keras
import numpy as np
from keras import layers, ops
class Sampling(layers.Layer):
"""Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""
def __init__(self, name='sampling', **kwargs):
super(Sampling, self).__init__(name=name, **kwargs)
self.seed_generator = keras.random.SeedGenerator(42)
def call(self, inputs):
z_mean, z_log_var = inputs
batch = ops.shape(z_mean)[0]
dim = ops.shape(z_mean)[1]
epsilon = keras.random.normal(shape=(batch, dim), seed=self.seed_generator)
return z_mean + ops.exp(0.5 * z_log_var) * epsilon
class Encoder(keras.Model):
"""Maps MNIST digits to a triplet (z_mean, z_log_var, z)."""
def __init__(self, latent_dim=32, intermediate_dim=64, name='encoder', **kwargs):
super().__init__(name=name, **kwargs)
self.conv_layer1 = layers.Conv2D(32, 3, activation='relu', strides=2, padding='same')
self.conv_layer2 = layers.Conv2D(64, 3, activation='relu', strides=2, padding='same')
self.flatten = layers.Flatten()
self.dense_proj = layers.Dense(intermediate_dim, activation='relu')
self.dense_mean = layers.Dense(latent_dim, name='z_mean')
self.dense_log_var = layers.Dense(latent_dim, name='z_log_var')
self.sampling = Sampling()
def call(self, inputs):
# encoder_inputs = keras.Input(shape=(28, 28, 1))(inputs)
x = self.conv_layer1(inputs)
x = self.conv_layer2(x)
x = self.flatten(x)
x = self.dense_proj(x)
z_mean = self.dense_mean(x)
z_log_var = self.dense_log_var(x)
z = self.sampling((z_mean, z_log_var))
return z_mean, z_log_var, z
class Decoder(keras.Model):
"""Converts z, the encoded digit vector, back into a readable digit."""
def __init__(self, original_dim, intermediate_dim=64, name='decoder', **kwargs):
super(Decoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(7 * 7 * 64, activation='relu')
self.reshape = layers.Reshape((7, 7, 64))
self.conv_transpose1 = layers.Conv2DTranspose(64, 3, activation='relu', strides=2, padding='same')
self.conv_transpose2 = layers.Conv2DTranspose(32, 3, activation='relu', strides=2, padding='same')
self.dense_output = layers.Conv2DTranspose(1, 3, activation='sigmoid', padding='same')
# self.dense_output = layers.Dense(original_dim, activation='sigmoid')
def call(self, inputs):
x = self.dense_proj(inputs)
x = self.reshape(x)
x = self.conv_transpose1(x)
x = self.conv_transpose2(x)
return self.dense_output(x)
class VAE(keras.Model):
"""Combines the encoder and decoder into an end-to-end model for training."""
def __init__(
self,
encoder,
decoder,
name='vae',
**kwargs
):
super().__init__(name=name, **kwargs)
self.encoder = encoder
self.decoder = decoder
def call(self, input):
z_mean, z_log_var, z = self.encoder(input)
reconstructed = self.decoder(z)
# Add KL divergence regularization loss.
kl_loss = -0.5 * ops.mean(1 + z_log_var - ops.square(z_mean) - ops.exp(z_log_var), axis=1)
self.add_loss(kl_loss)
return reconstructed
if __name__ == '__main__':
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
print(x_train.shape)
mnist_digits = np.concatenate([x_train, x_test], axis=0)
mnist_digits = np.expand_dims(mnist_digits, -1).astype("float32") / 255
# x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255
print(mnist_digits.shape)
encoder = Encoder(latent_dim=32)
# print(encoder.summary())
decoder = Decoder(original_dim=784)
# print(decoder.summary())
vae = VAE(encoder=encoder, decoder=decoder)
vae.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss=keras.losses.MeanSquaredError())
vae.fit(mnist_digits, mnist_digits, epochs=30, batch_size=128)
</code></pre>
<p>However, I keep getting the following error:</p>
<pre><code>/opt/miniconda3/envs/vae/bin/python /Users/belter/github/VAE/example.py
(60000, 28, 28)
(70000, 28, 28, 1)
/opt/miniconda3/envs/vae/lib/python3.11/site-packages/keras/src/backend/common/backend_utils.py:89: UserWarning: You might experience inconsistencies across backends when calling conv transpose with kernel_size=3, stride=2, dilation_rate=1, padding=same, output_padding=1.
warnings.warn(
Epoch 1/30
Traceback (most recent call last):
File "/Users/belter/github/VAE/example.py", line 106, in <module>
vae.fit(mnist_digits, mnist_digits, epochs=30, batch_size=128)
File "/opt/miniconda3/envs/vae/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/opt/miniconda3/envs/vae/lib/python3.11/site-packages/keras/src/backend/torch/numpy.py", line 1248, in stack
return torch.stack(x, dim=axis)
^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: stack expects each tensor to be equal size, but got [] at entry 0 and [128] at entry 1
Process finished with exit code 1
</code></pre>
<p>I am using python 3.11, pytorch 2.2.2, and keras 3.2.1.</p>
<p>I suspect it is a problem with the dimensionality of the input data, but I cannot figure it out.</p>
|
<python><machine-learning><keras><pytorch><neural-network>
|
2024-04-19 15:39:03
| 0
| 3,867
|
Belter
|
78,354,782
| 4,434,140
|
how to use returns.context.RequiresContext with async functions in python?
|
<p>I am very fond of the <a href="https://github.com/dry-python/returns" rel="nofollow noreferrer">returns</a> library in python, and I would like to use it more. I have a little issue right now. Currently, I have a function that uses a redis client and gets the value corresponding to a key, like so:</p>
<pre class="lang-py prettyprint-override"><code>from redis import Redis
from returns.context import RequiresContext
def get_value(key: str) -> RequiresContext[str, Redis]:
def inner(client: Redis) -> str:
value = client.get(key)
return value.decode("utf-8")
return RequiresContext(inner)
</code></pre>
<p>Obviously, that function works like a charm:</p>
<pre class="lang-py prettyprint-override"><code>with Redis(
host=redis_host,
port=redis_port,
password=redis_password,
) as redis_client:
value = get_value(key="my-key")(redis_client)
print("value = ", value)
</code></pre>
<p>Now, I would like to use the asyncio pendant of that code, i.e. use the <code>redis.asyncio.Redis</code>. Unfortunately, it looks like things become a bit more complicated in that case. I should probably switch from <code>RequiresContext</code> to <code>RequiresContextFutureResultE</code>, but I was not able to find a working solution. Here's the best code I was able to come up with:</p>
<pre class="lang-py prettyprint-override"><code>async def get_value(key: str) -> RequiresContextFutureResultE[str, Redis]:
async def inner(client: Redis) -> FutureResultE[str]:
value = await client.get(key)
return FutureResult.from_value(value.decode("utf-8"))
return RequiresContextFutureResultE(inner)
</code></pre>
<p>When I run it like this:</p>
<pre class="lang-py prettyprint-override"><code>async def main():
async with Redis(
host="localhost",
port=6379,
password="902nks291",
db=15,
) as redis_client:
rcfr = get_value(key="user-id")
value = await rcfr(redis_client)
print("value: ", value)
asyncio.run(main())
</code></pre>
<p>I get the error that <code>rcfr</code> is not a callable. Can someone help me figure out how I should fix my code to make it work the way I want?</p>
|
<python><functional-programming>
|
2024-04-19 15:28:13
| 2
| 1,331
|
Laurent Michel
|
78,354,689
| 8,618,987
|
Python script not returning results from MSSQL stored procedure, despite successful execution
|
<p>I'm encountering an issue with retrieving results from an MSSQL stored procedure using a Python script. Here's the problem:</p>
<p>I have a stored procedure <code>getlive_value</code> in my MSSQL database <code>XStudio_Historian</code>. When I execute this stored procedure in SQL Server Management Studio using the query <code>execute XStudio_Historian.[dbo].getlive_value '45'</code>, it returns a value as expected.</p>
<p>However, when I attempt to execute the same stored procedure from a Python script using the <code>pyodbc</code> library, it executes successfully but doesn't return any results.</p>
<p>Here's the Python script I'm using:</p>
<pre class="lang-py prettyprint-override"><code>import pyodbc
# Database connection details
driver = '{SQL Server Native Client 11.0}'
server = 'MIS-SRV'
database = 'XStudio_Historian'
username = 'username'
password = 'database_password'
# Stored procedure to execute
stored_proc_name = 'getlive_value'
parameter = '45'
# Connection string
conn_str = f'DRIVER={driver};SERVER={server};DATABASE={database};UID={username};PWD={password}'
try:
# Connect to the database
conn = pyodbc.connect(conn_str)
print("Connected to the database.")
# Create a cursor
cursor = conn.cursor()
# Execute the stored procedure
cursor.execute(f"EXEC {database}.[dbo].{stored_proc_name} '{parameter}'")
print("Stored procedure executed successfully.")
# Fetch the results if any
rows = cursor.fetchall()
for row in rows:
print(row)
# Close cursor and connection
cursor.close()
conn.close()
print("Connection closed.")
except pyodbc.Error as ex:
print(f"Error: {ex}")
</code></pre>
<p>I've checked the stored procedure, and it does return results when executed directly in MSSQL.</p>
<p><a href="https://i.sstatic.net/lHoq3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lHoq3.png" alt="enter image description here" /></a></p>
<p>What could be causing this discrepancy between executing the stored procedure in MSSQL and Python? And how can I modify my Python script to retrieve the expected results?</p>
<p>Python Error is as shown in the image :<a href="https://i.sstatic.net/cwd45.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwd45.png" alt="enter image description here" /></a></p>
<p>Any help or insights would be greatly appreciated. Thank you!</p>
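<p>Not from the question itself, but a common cause with pyodbc is the stored procedure emitting row-count messages or extra result sets before the real one, so <code>fetchall()</code> reports "Previous SQL was not a query". A hedged sketch of the usual mitigation: prepend <code>SET NOCOUNT ON</code> and use a parameter placeholder instead of string formatting:</p>

```python
# SET NOCOUNT ON suppresses "N rows affected" messages that can mask the
# actual result set; the ? placeholder lets the driver bind the parameter
# safely instead of interpolating it into the SQL string.
database = "XStudio_Historian"
stored_proc_name = "getlive_value"
sql = f"SET NOCOUNT ON; EXEC {database}.[dbo].{stored_proc_name} ?"

# with a live connection one would then run:
# cursor.execute(sql, ('45',))
# rows = cursor.fetchall()
print(sql)
```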
|
<python><sql-server><stored-procedures><pyodbc>
|
2024-04-19 15:11:56
| 0
| 549
|
Ranjit Singh Shekhawat
|
78,354,679
| 10,962,766
|
Problem decoding Google News URLS that contain consent information
|
<p>I have a particular problem decoding base64 Google News URLs in Python when they contain not only a URL but also consent information.</p>
<p>Based on the older issue <a href="https://stackoverflow.com/questions/51131834/decoding-encoded-google-news-urls">decoding Google news urls</a>, I wrote the following function within a larger script that correctly decodes 99% of my URLs:</p>
<pre><code>def decode_google_url(e):
global faulty_urls
faulty_urls=[]
# trim leading/trailing whitespace
e = e.strip()
# decode string to get target URL
try:
target_url = base64.b64decode(e)[4:].decode('utf-8', "backslashreplace").split('\\')[0]
target_urls.append(target_url)
except Exception as ex:
print(f"Error decoding URL: {ex}")
# all exceptions are triggered by links that contain consent information as well as URLs
faulty_urls.append(e)
return faulty_urls
return target_urls
</code></pre>
<p>As you can see in the comment, exceptions are triggered by encoded URLs that also seem to contain consent information. One example is the following string of 276 characters, which <code>base64</code> in my script fails to decode because its length is allegedly not a multiple of 4:</p>
<p><code>CBMiYWh0dHBzOi8vd3d3LnRpbWVzb2Zpc3JhZWwuY29tL2Zvci15ZWFycy1uZXRhbnlhaHUtcHJvcHBlZC11cC1oYW1hcy1ub3ctaXRzLWJsb3duLXVwLWluLW91ci1mYWNlcy_SAWVodHRwczovL3d3dy50aW1lc29maXNyYWVsLmNvbS9mb3IteWVhcnMtbmV0YW55YWh1LXByb3BwZWQtdXAtaGFtYXMtbm93LWl0cy1ibG93bi11cC1pbi1vdXItZmFjZXMvYW1wLw==</code></p>
<p>When I put this into an online decoder, I get the following information:</p>
<p>"I am at least 18 years old and I consent to the processing of my personal data in accordance with this website's privacy policy.
<a href="https://www.timesofisrael.com/for-years-netanyahu-propped-up-hamas-now-its-blown-up-in-our-faces-%F0%9F%94%93" rel="nofollow noreferrer">https://www.timesofisrael.com/for-years-netanyahu-propped-up-hamas-now-its-blown-up-in-our-faces-🔓</a> <a href="https://www.timesofisrael.com/for-years-netanyahu-propped-up-hamas-now-its-blown-up-in-our-faces/map/%22" rel="nofollow noreferrer">https://www.timesofisrael.com/for-years-netanyahu-propped-up-hamas-now-its-blown-up-in-our-faces/map/"</a></p>
<p>Stripping the consent information and icon from the string to keep only the URL would not be a problem, but I cannot even get to this decoded result in my script because the input string triggers an error.</p>
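<p>A minimal sketch of a lenient decoder (my addition, not part of the original script): <code>base64.b64decode</code> raises on incorrect padding, but padding can be restored by appending <code>=</code> until the length is a multiple of 4; surrounding whitespace should be stripped first:</p>

```python
import base64

def b64decode_lenient(s: str) -> bytes:
    s = s.strip()
    # append '=' until the length is a multiple of 4
    return base64.b64decode(s + "=" * (-len(s) % 4))

# "aGVsbG8" is the base64 of b"hello" with its padding removed on purpose
print(b64decode_lenient("aGVsbG8"))
```

<p>Any consent text decoded alongside the URL can then be trimmed after decoding, for example by keeping only the substring starting at <code>https://</code>.</p>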
|
<python><base64><google-news>
|
2024-04-19 15:11:01
| 2
| 498
|
OnceUponATime
|
78,354,610
| 12,846,524
|
Customtkinter: Close the dropdown menu when the dropdown arrow is clicked again
|
<p>I have a piece CustomTkinter GUI that has several combobox dropdowns. I am having an issue with closing the popup when clicking the dropdown arrow for a second time. I have tried looking online for anybody having the same issue but I've not found much, and digging through the source code on GitHub didn't bring me any success either!</p>
<p>I want these dropdowns to be able to close the popups when the dropdown arrow is clicked a second time. Currently when clicking it again the popup is recreated. The way to close the popup is to click anywhere else (i.e. selecting a dropdown option, clicking the accompanying entry box, or clicking anywhere outside of the combobox).</p>
<p>Below is a dummy code/pseudo-code (as the toggle_dropdown method does not work) that attempts to solve my problem:</p>
<pre><code>import customtkinter as ctk
class DummyApp(ctk.CTk):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.geometry('200x100')
self.combobox = ctk.CTkComboBox(self, values=['Option A', 'Option B'])
self.combobox.pack(side='left')
self.combobox.bind('<Button-1>', lambda event: self.toggle_dropdown(event, cbox=self.combobox))
@staticmethod
def toggle_dropdown(event, cbox: ctk.CTkComboBox):
if cbox.cget('state') == 'shown':
cbox.hide()
else:
cbox.show()
if __name__ == '__main__':
app = DummyApp()
app.mainloop()
</code></pre>
<p>I have tried to bind an event to my combobox such that when the dropdown arrow is clicked it will check if it is currently being displayed. If so: it closes the popup, and vice versa. This does not work because the event is being bound to the entry box rather than the dropdown arrow.</p>
<p>So I would like two things fixed with this code:</p>
<ol>
<li>Bind the event to the dropdown arrow rather than the entry box (the lambda function would allow me to make the toggle_dropdown method generalisable to other comboboxes within my GUI)</li>
<li>Fix toggle_dropdown to correctly check the 'shown/hidden' state, and then hide/show as appropriate.</li>
</ol>
<p>Thank you in advance for any assistance provided!</p>
<hr />
<h2>EDIT:</h2>
<p>I could be wrong, but it seems that this is a problem inherent to CustomTkinter. From testing this on other machines, with different versions of Windows and CustomTkinter, the problem still persists.</p>
<p>I have opened an issue on the GitHub, so I hope it will be looked into: <a href="https://github.com/TomSchimansky/CustomTkinter/issues/2386" rel="nofollow noreferrer">https://github.com/TomSchimansky/CustomTkinter/issues/2386</a></p>
<p>I have temporarily created an inelegant solution whereby the CTKComboBox has been replaced with a CTkOptionMenu, as the <code>self.optionbox.bind(...)</code> applies to the entire widget. This has allowed me to change between 'normal/disabled' states, which correctly closes the dropdown menu on a second click.</p>
<p>With this there was the problem after selecting an option of the widget having to be clicked twice in order to re-enable it, then select a new option. To solve this I have created a method <code>dropdown_checker</code> that re-enables the optionbox every 100 ms. See the below code:</p>
<pre><code>import customtkinter as ctk
class DummyApp(ctk.CTk):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.geometry('200x100')
self.optionbox = ctk.CTkOptionMenu(self, values=['Option A', 'Option B'])
self.optionbox.pack(side='left')
self.optionbox.bind('<Button-1>', lambda event: self.toggle_dropdown(event, obox=self.optionbox))
self.dropdown_checker()
@staticmethod
def toggle_dropdown(event, obox: ctk.CTkOptionMenu):
if obox.cget('state') == 'normal':
obox.configure(state='disabled')
elif obox.cget('state') == 'disabled':
obox.configure(state='normal')
def dropdown_checker(self):
self.optionbox.configure(state='normal')
self.after(100, self.dropdown_checker)
if __name__ == '__main__':
app = DummyApp()
app.mainloop()
</code></pre>
<p>Like I said this is not a very good solution, but it is the only one I could think of whilst waiting for a reply to the GitHub issue. If anybody has any improvements/suggestions, I would love to hear them!</p>
|
<python><oop><tkinter><dropdown><customtkinter>
|
2024-04-19 15:00:24
| 1
| 374
|
AlexP
|
78,354,489
| 2,604,247
|
Are Totally Different Python Scripts Running on the Same Host Influenced by the GIL?
|
<p>As the question says, if I run two different python scripts on two different terminals (Ubuntu 22.04) like</p>
<pre><code>$ python3 script1.py
$ python3 script2.py
</code></pre>
<p>does the global interpreter lock still prevent their concurrent execution? Or does the GIL only prevent thread pools or process pools (via the <code>concurrent.futures</code> module) within the same parent process?</p>
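<p>A quick sketch (my addition) of the relevant distinction: each <code>python3</code> invocation is a separate OS process with its own interpreter state, and the GIL only serializes threads <em>within</em> one interpreter:</p>

```python
import subprocess
import sys

# each invocation of the interpreter is an independent process
# (its own PID, its own interpreter state, its own GIL)
cmd = [sys.executable, "-c", "import os; print(os.getpid())"]
pid1 = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
pid2 = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
print(pid1 != pid2)
```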
|
<python><multithreading><concurrency><gil>
|
2024-04-19 14:40:52
| 0
| 1,720
|
Della
|
78,354,467
| 1,356,926
|
Python and C++ long double give different results
|
<p>I'm trying to debug an inconsistency between a piece of C++ code and another piece of Python code which are returning different results.</p>
<p>In general, I have always expected floating point operations between the two environments to yield equal results, assuming the initial floating point values are the same. For example, if I perform a simple division between double values, like:</p>
<pre><code>d1 = np.double(4e2)
d2 = np.double(4e4)
d3 = d1 / d2
------------
double x1 = 4e2;
double x2 = 4e4;
double x3 = x1 / x2;
</code></pre>
<p>the results are identical down to the individual bits:</p>
<pre><code>Python:
4.0000000000000000000000000e+02
00000000 00000000 00000000 00000000 00000000 00000000 01111001 01000000
4.0000000000000000000000000e+04
00000000 00000000 00000000 00000000 00000000 10001000 11100011 01000000
1.0000000000000000208166817e-02
01111011 00010100 10101110 01000111 11100001 01111010 10000100 00111111
-----------
Cpp:
4.00000000000000000000000e+02
00000000 00000000 00000000 00000000 00000000 00000000 01111001 01000000
4.00000000000000000000000e+04
00000000 00000000 00000000 00000000 00000000 10001000 11100011 01000000
1.00000000000000002081668e-02
01111011 00010100 10101110 01000111 11100001 01111010 10000100 00111111
</code></pre>
<p>However, this seems not to happen when using <code>np.longdouble</code> and <code>long double</code> respectively. Not only do the initial values contain some randomly initialized bits (for both implementations, re-running causes the later output bits to be set somewhat randomly... why?), but the division consistently yields a different result:</p>
<pre><code>d1 = np.longdouble(4e2)
d2 = np.longdouble(4e4)
d3 = d1 / d2
-----
4.0000000000000000000000000e+02
00000000 00000000 00000000 00000000 00000000 00000000 00000000 11001000
00000111 01000000 00110110 00100011 11111101 01111111 00000000 00000000
4.0000000000000000000000000e+04
00000000 00000000 00000000 00000000 00000000 00000000 01000000 10011100
00001110 01000000 00110110 00100011 11111101 01111111 00000000 00000000
1.0000000000000000208166817e-02
00001010 11010111 10100011 01110000 00111101 00001010 11010111 10100011
11111000 00111111 00110110 00100011 11111101 01111111 00000000 00000000
</code></pre>
<p>while for C++</p>
<pre><code>long double x1 = 4e2l;
long double x2 = 4e4l;
long double x3 = x1 / x2;
------
4.00000000000000000000000e+02
00000000 00000000 00000000 00000000 00000000 00000000 00000000 11001000
00000111 01000000 01101010 10111110 11100000 01111111 00000000 00000000
4.00000000000000000000000e+04
00000000 00000000 00000000 00000000 00000000 00000000 01000000 10011100
00001110 01000000 01100111 10111110 11100000 01111111 00000000 00000000
9.99999999999999999979671e-03
00001010 11010111 10100011 01110000 00111101 00001010 11010111 10100011
11111000 00111111 01111000 10111110 11100000 01111111 00000000 00000000
</code></pre>
<p>In my case, this has unfortunate repercussions due to calls to <code>round</code> which end up in different directions for the two implementations. So my question is, what exactly is going on? In what way is a long double special that the two results do not match?</p>
<p>In case it is relevant, here is how I output the corresponding binary representations:</p>
<pre class="lang-py prettyprint-override"><code>def binar(num):
num = np.array([num])
s = ""
ss = []
count = 0
for v in num.view(np.int8):
s += " " + str(np.binary_repr(v, width=8))
count += 1
if count == 8:
ss.append(s)
s = ""
if s != "":
ss.append(s)
return ss
</code></pre>
<p>and</p>
<pre class="lang-cpp prettyprint-override"><code>void printBinary(const char* prefix, auto num)
{
unsigned char* it = reinterpret_cast<unsigned char*>(&num);
for (std::size_t i = 0; i < sizeof(num); i+=8) {
std::cout << prefix;
for (std::size_t j = 0; j < 8; j++)
std::cout << std::bitset<8>(it[i+j]) << ' ';
std::cout << '\n';
}
}
</code></pre>
<p>EDIT: As requested, here are the values for both Python and C++ printed using <code>printf("%La")</code> (for Python I used Pybind11 to pass the long double along):</p>
<pre><code>Python:
d1 = 0xc.8p+5
d2 = 0x9.c4p+12
d3 = 0xa.3d70a3d70a3d8p-10
----------
C++
x1 = 0xc.8p+5
x2 = 0x9.c4p+12
x3 = 0xa.3d70a3d70a3d70ap-10
</code></pre>
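<p>A note that may be relevant (my assumption, specific to x86): the 80-bit x87 extended format occupies only 10 of the 12 or 16 bytes that numpy and the compiler allocate for it, so the trailing bytes are uninitialized padding, which would explain the "randomly set" bits in the dumps above. The type's metadata makes this visible:</p>

```python
import numpy as np

ld = np.dtype(np.longdouble)
info = np.finfo(np.longdouble)
# on x86 Linux: itemsize is 16 (or 12 on 32-bit x86) while the format itself
# has a 64-bit significand (nmant == 63) -> 10 significant bytes plus padding
print(ld.itemsize, info.nmant)
```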
|
<python><c++><numpy><floating-point><precision>
|
2024-04-19 14:37:19
| 1
| 5,637
|
Svalorzen
|
78,354,338
| 11,468,323
|
No way to terminate python process; using trio, running as root, reading /proc/bus/input/devices
|
<p>I wanted to achieve some of the functionality that <code>keyboard</code> python package offers, but without threads, instead using trio.</p>
<p>I created an async function that reads <code>/proc/bus/input/devices</code> to look for keyboards, then starts a task (in a nursery) for each keyboard found - basically it opens the keyboard <code>/dev/input/event/...</code> file (binary read mode) and reads chunks of bytes, parses using <code>struct</code> module and so on. Everything works nice, I get a stream of lines like I wanted: <code>"a" pressed</code>, <code>"a" released</code>, etc.</p>
<p>Now the weird part: I can't terminate that script. If I press Ctrl+C, or send <code>kill -INT</code>, it stops reporting which keys were pressed/released, but Python (the process) does not stop. I have to kill it with SIGTERM. I also wrote an <code>if</code> to run <code>sys.exit(0)</code> when the key being released is <code>q</code> - no difference. Same with cancelling trio's <code>nursery.cancel_scope</code>.</p>
<p>Can I force python to print a traceback when it's in that weird state? Or is there any other way to debug where exactly it is frozen?</p>
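<p>On the traceback part of the question: the standard library's <code>faulthandler</code> module can dump the stacks of all threads on demand. A sketch (my addition) that registers a signal so <code>kill -USR1 &lt;pid&gt;</code> prints where the process is stuck without terminating it:</p>

```python
import faulthandler
import signal
import tempfile

# after this, `kill -USR1 <pid>` makes the process dump all thread stacks
# to stderr without stopping it (POSIX only)
faulthandler.register(signal.SIGUSR1)

# a dump can also be triggered programmatically; here written to a temp file
with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f)
    f.seek(0)
    dump = f.read()
print("thread" in dump.lower())
```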
|
<python><python-trio>
|
2024-04-19 14:17:03
| 1
| 325
|
Paweł Lis
|
78,354,236
| 7,615,872
|
Call an async function in Pydantic custom Validator
|
<p>I have two Pydantic models, <code>Article</code> and <code>Author</code>, defined as follows:</p>
<pre><code>from pydantic import BaseModel
from typing import List
class Author(BaseModel):
id: int
first_name: str
class Article(BaseModel):
id: int
name: str
authors: List[Author]
</code></pre>
<p>These models are used to parse and validate python dict that looks like:</p>
<pre><code>article_data = {"id": 568, "name": "Smart People", "authors": [{"id": 123}, {"id": 456}]}
</code></pre>
<p>To get the author details, I have an async function that, given an id, returns the author details:</p>
<pre><code>async def get_author(id: int) -> Optional[dict]:
# Simulate fetching author details from a database or other source
authors = {
123: {"id": 123, "first_name": "George Bob"},
456: {"id": 456, "first_name": "Alice Smith"},
}
return authors.get(id)
</code></pre>
<p>To fill in the author details in the article object, I implemented a custom validator, so the definition of the Article class becomes:</p>
<pre><code>from pydantic import BaseModel, validator
class Article(BaseModel):
id: int
name: str
authors: List[Author]
@validator("authors", pre=True)
def populate_author(cls, value):
return [Author(**(get_author(item.get("id")))) for item in value]
</code></pre>
<p>to run the code, I am using this code snippet:</p>
<pre><code>async def main():
article = Article.parse_obj(article_data)
print(article)
asyncio.run(main())
</code></pre>
<p>This does not work, as I am not awaiting <code>get_author</code>.
When changing my code to use <code>await get_author</code>, it raises:</p>
<pre><code>SyntaxError: asynchronous comprehension outside of an asynchronous function
</code></pre>
<p>which is expected, as <code>populate_author</code> is a sync function trying to await the async function <code>get_author</code>.</p>
<p>Another alternative I tried is to make the validator async: <code>async def populate_author(cls, value):</code> this will raise this error:</p>
<pre><code>RuntimeWarning: coroutine 'populate_author' was never awaited
</code></pre>
<p>which is expected as well knowing how Pydantic is implemented.</p>
<p>In this case, what is the solution to run an async function inside a custom Pydantic validator?</p>
<p>Pydantic version 1.10.14 with Python 3.11</p>
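<p>One workable pattern (my suggestion, not an official Pydantic recipe) is to resolve the async lookups <em>before</em> validation, since Pydantic v1 validators are strictly synchronous: fetch all authors with <code>asyncio.gather</code>, then hand the completed dicts to <code>parse_obj</code>:</p>

```python
import asyncio
from typing import List, Optional

from pydantic import BaseModel

class Author(BaseModel):
    id: int
    first_name: str

class Article(BaseModel):
    id: int
    name: str
    authors: List[Author]

async def get_author(id: int) -> Optional[dict]:
    # same stub lookup as in the question
    authors = {
        123: {"id": 123, "first_name": "George Bob"},
        456: {"id": 456, "first_name": "Alice Smith"},
    }
    return authors.get(id)

async def build_article(data: dict) -> Article:
    # fetch all author records concurrently, then validate synchronously
    details = await asyncio.gather(*(get_author(a["id"]) for a in data["authors"]))
    return Article.parse_obj({**data, "authors": details})

article_data = {"id": 568, "name": "Smart People",
                "authors": [{"id": 123}, {"id": 456}]}
article = asyncio.run(build_article(article_data))
print(article.authors[0].first_name)
```

<p>The validator then becomes unnecessary, because the data is already complete by the time the model sees it.</p>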
|
<python><python-3.x><python-asyncio><pydantic>
|
2024-04-19 14:02:30
| 1
| 1,085
|
Mehdi Ben Hamida
|
78,354,145
| 4,791,408
|
Concatenate large matrices leads to memory error
|
<p>I have two large sparse matrices which I want to concatenate, I am using the following function.</p>
<pre><code>all_data = sparse.vstack([train_data_pt_a, train_data_pt_b])
</code></pre>
<p>I can load the two matrices without any issue, however, when the <code>vstack</code> happens, the memory usage spikes and the program terminates (my jupyter notebook states "kernel died")</p>
<p>A solution would be to iterate over <code>train_data_pt_b</code> in chunks, append them at the bottom of <code>train_data_pt_a</code>, and hope that garbage collection does its job. Is there a better way? Or a built-in function for this kind of operation?</p>
<p>The shape of <code>train_data_pt_a</code> is <code>(318000, 43089)</code> and the percentage of non-zero elements is around 7%.</p>
<p><code>train_data_pt_b</code> has twice as many rows and a similar non-zero percentage.</p>
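<p>One lower-overhead option (a sketch, under the assumption that both matrices are CSR with the same number of columns) is to build the stacked matrix from the raw CSR arrays, so the only large allocations are the three concatenated output arrays:</p>

```python
import numpy as np
from scipy import sparse

def vstack_csr(a: sparse.csr_matrix, b: sparse.csr_matrix) -> sparse.csr_matrix:
    """Stack two CSR matrices vertically by concatenating their raw
    data/indices/indptr arrays, avoiding sparse.vstack's intermediate copies."""
    assert a.shape[1] == b.shape[1]
    data = np.concatenate([a.data, b.data])
    indices = np.concatenate([a.indices, b.indices])
    # shift b's row pointers by the number of stored elements in a
    indptr = np.concatenate([a.indptr, b.indptr[1:] + a.indptr[-1]])
    return sparse.csr_matrix((data, indices, indptr),
                             shape=(a.shape[0] + b.shape[0], a.shape[1]))

a = sparse.random(5, 4, density=0.5, format="csr", random_state=0)
b = sparse.random(3, 4, density=0.5, format="csr", random_state=1)
result = vstack_csr(a, b)
print((result != sparse.vstack([a, b]).tocsr()).nnz)
```

<p>For the sizes in the question this is still roughly a doubling of resident memory while the copy happens; writing the result out in chunks, or memory-mapping the component arrays, would be the next step if that is too much.</p>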
|
<python><numpy><matrix><scipy><sparse-matrix>
|
2024-04-19 13:50:56
| 0
| 1,064
|
RUser4512
|
78,354,136
| 16,115,413
|
How to Send a Streaming Response via LlamaIndex to a FastAPI Endpoint?
|
<p>I need to send a streaming response using LlamaIndex to my FastAPI endpoint. Below is the code I've written so far:</p>
<pre class="lang-py prettyprint-override"><code>@bot_router.post("/bot/pdf_convo")
async def pdf_convo(query: QuestionInput):
chat_engine = cache["chat_engine"]
user_question = query.content
streaming_response = chat_engine.stream_chat(user_question)
for token in streaming_response.response_gen:
print(token, end="")
</code></pre>
<p>I'd appreciate any guidance on how to properly implement the streaming response with LlamaIndex. Thank you!</p>
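<p>A hedged sketch of the usual approach (assuming <code>response_gen</code> yields plain string tokens): wrap the generator and return it through FastAPI's <code>StreamingResponse</code> instead of consuming it with <code>print</code>:</p>

```python
from typing import Iterator

def token_stream(response_gen: Iterator[str]) -> Iterator[str]:
    # yield tokens as they arrive so the client receives them incrementally
    for token in response_gen:
        yield token

# inside the endpoint this would become (sketch):
#   from fastapi.responses import StreamingResponse
#   return StreamingResponse(token_stream(streaming_response.response_gen),
#                            media_type="text/plain")

# simulated here with a stand-in generator
joined = "".join(token_stream(iter(["Hello", ", ", "world"])))
print(joined)
```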
|
<python><nlp><openai-api><large-language-model><llama-index>
|
2024-04-19 13:49:16
| 1
| 549
|
Mubashir Ahmed Siddiqui
|
78,354,106
| 10,983,470
|
Surprising results comparing 2 pd.array
|
<p>Using <code>python</code> 3.10.13, <code>pandas</code> 2.2.0 with <code>numpy</code> 1.26.4, I expected to be able to use something like (as required by the ruff linter) :</p>
<pre class="lang-py prettyprint-override"><code># this is False
$ (pd.array([""]) == pd.array([""]))[0] is True
False
# this does work
$ (pd.array([""]) == pd.array([""]))[0] == True
True
# though
$ (pd.array([""]) == pd.array([""]))[0]
True
</code></pre>
<p>Note: wrapping <code>pd.array([""])</code> in a list solves the problem; I am asking why it seems here that <code>True</code> != <code>True</code>.</p>
<p>Additional information :</p>
<pre class="lang-py prettyprint-override"><code># you end up with a StringArray when using
$ pd.DataFrame(dict(a=[""])).unique()
# this behavior is also true for
$ (pd.DataFrame(dict(a=[""])) == pd.DataFrame(dict(a=[""]))).values[0][0] is True
False
# one needs to use pd.DataFrame.equals for it to work
$ pd.DataFrame(dict(a=[""])).equals(pd.DataFrame(dict(a=[""]))) is True
True
</code></pre>
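<p>For what it's worth, the behaviour above is consistent with the comparison returning a NumPy/pandas boolean <em>scalar</em> rather than Python's <code>True</code> singleton, so the identity check <code>is True</code> fails even though the value is truthy. A minimal NumPy illustration (my addition):</p>

```python
import numpy as np

x = np.bool_(True)
print(x == True)        # True: value equality
print(x is True)        # False: a NumPy scalar is not the Python singleton
print(bool(x) is True)  # True after converting to a built-in bool
```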
|
<python><python-3.x><pandas>
|
2024-04-19 13:45:26
| 1
| 1,783
|
cbo
|
78,353,981
| 827,927
|
Sequentially reading inputs of different types in Python
|
<p>My program gets as input an integer, then a string, then another integer, all in the same line of input. In C++, I can read them from the standard input in one line, as follows:</p>
<pre><code>int a; string b; int c;
cin >> a >> b >> c;
</code></pre>
<p>Moreover, I can easily read custom types like containers in the same way, if I define the >> operator correctly.</p>
<p>Is there a similar one-liner in Python? So far, the best I could come up with is the following ugly code:</p>
<pre><code>a,b,c = [f(x) for f,x in zip((int,str,int),input().split())]
</code></pre>
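<p>One way to tidy this up (my suggestion) is a tiny helper that pairs each token with its converter, which reads closer to the C++ <code>cin >> a >> b >> c</code> chain:</p>

```python
def scan(line: str, *types):
    """Convert whitespace-separated tokens with the given converters."""
    return [conv(tok) for conv, tok in zip(types, line.split())]

# equivalent of: cin >> a >> b >> c;  (input() would supply the line)
a, b, c = scan("3 foo 7", int, str, int)
print(a, b, c)
```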
|
<python><input>
|
2024-04-19 13:23:38
| 1
| 37,410
|
Erel Segal-Halevi
|
78,353,844
| 8,119,069
|
Python module to get PDF text coordinates
|
<p>Is there a Python module that can return the contents of a PDF file as a list of bounding boxes, with top and left coordinates and the text value? Something like Firefox has would be ideal; this is what I mean.
<a href="https://i.sstatic.net/b7FPb.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b7FPb.jpg" alt="enter image description here" /></a></p>
<p>So it would be something like this:</p>
<pre><code>[{'left': 72, 'top': 14.18, 'text': 'YesLogic Pty. Ltd'}, {'left': 72, 'top': 15.75, 'text': '7 / 39 Bouverie St'}]
</code></pre>
<p>So, to parse the complete file like this, not just some specific area (I know there are options for that).</p>
|
<python><pdf><pypdf>
|
2024-04-19 13:01:51
| 0
| 501
|
DoctorEvil
|
78,353,817
| 8,040,369
|
Changing the structure of a DataFrame in Python from row level to column level
|
<p>I have a DataFrame like below</p>
<pre><code>name Type value
==================================
AAA 1 Increase
AAA 2 Decrease
AAA 3 Neutral
</code></pre>
<p>I would like to convert the structure of the DF to something like below</p>
<pre><code>name Type1 Type2 Type3
==================================================
AAA Increase Decrease Neutral
</code></pre>
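For reference, the direction I've been trying (untested sketch using <code>pivot</code>; the column renaming is my own):

```python
import pandas as pd

# Rebuild the example input
df = pd.DataFrame({"name": ["AAA", "AAA", "AAA"],
                   "Type": [1, 2, 3],
                   "value": ["Increase", "Decrease", "Neutral"]})

# Pivot Type values into columns, then rename them Type1, Type2, ...
wide = df.pivot(index="name", columns="Type", values="value")
wide.columns = [f"Type{c}" for c in wide.columns]
wide = wide.reset_index()
print(wide)
```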
<p>Any help is much appreciated.</p>
<p>Thanks,</p>
|
<python><pandas><dataframe>
|
2024-04-19 12:57:20
| 0
| 787
|
SM079
|
78,353,737
| 9,212,050
|
Continue TensorBoard Logging from a Previous Timestep When Resuming Training with Stable Baselines3
|
<p>I am working with a reinforcement learning model using Stable Baselines3, specifically a PPO model. I train my model for a certain number of timesteps, let's say 500, using the following code:</p>
<pre><code>log_dir='./logging_directory/'
model = PPO_model("MlpPolicy", env, verbose=1, tensorboard_log=log_dir)
model.learn(total_timesteps=500, callback=[customMetricsLogger])
</code></pre>
<p>The model is saved to a zip folder using the customMetricsLogger callback. Also, the logged variables are saved in <code>./logging_directory/PPO_1</code>.</p>
<p>Now I want to load the model and continue training. However, I'm facing an issue with TensorBoard logging.
When I resume training and specify the same logging directory that I used in the initial training session, TensorBoard does not continue logging in the same file. Instead, it creates a new directory (e.g., 'PPO_2') and starts logging from timestep 0 again. Here is how I load the model and specify the logging directory:</p>
<pre><code>model = PPO_model.load(model_path, env=env, tensorboard_log=log_dir)
</code></pre>
<p>My goal is to continue logging in the same 'PPO_1' directory and have the logs display a continuous line in TensorBoard, starting from the last timestep of the previous training phase when I run <code>tensorboard --logdir .</code>. Or if having the same line is not possible, at least have the new line start from 500. Is it possible to setup the number of timesteps the model already trained for tensorboard?</p>
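From reading the Stable Baselines3 docs, my current plan (untested) is to reuse the run name and pass <code>reset_num_timesteps=False</code> so the logger keeps counting from the saved step; a sketch of the call I intend (<code>PPO_model</code> is my alias for <code>PPO</code>):

```python
# Untested sketch: resume logging into the existing TensorBoard run.
# `tb_log_name` and `reset_num_timesteps` are documented parameters of
# learn() in Stable Baselines3.
resume_kwargs = dict(
    total_timesteps=500,
    tb_log_name="PPO",          # base run name; SB3 appended "_1" on the first run
    reset_num_timesteps=False,  # keep the timestep counter going from 500
)
# model = PPO_model.load(model_path, env=env, tensorboard_log=log_dir)
# model.learn(**resume_kwargs, callback=[customMetricsLogger])
print(resume_kwargs["reset_num_timesteps"])  # False
```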
|
<python><reinforcement-learning><tensorboard><stable-baselines>
|
2024-04-19 12:41:27
| 0
| 1,404
|
Sayyor Y
|
78,353,691
| 5,743,955
|
Pyright with multiple venv (monorepo)
|
<p>I have an issue with pyright on a monorepo: it doesn't recognize classes/functions imported from nearby projects/libraries (I get a <code>reportMissingImports</code> error).</p>
<p>For instance, with this repo structure:</p>
<pre><code>.
├── library
| └── src
| └── __init__.py
├── project1
| └── src
| └── main.py
├── project2 ...
└── pyproject.toml
</code></pre>
<p>This line in <code>main.py</code> will raise the <code>reportMissingImports</code> error (even if the code is working):</p>
<pre class="lang-py prettyprint-override"><code>from library import ModuleClass
</code></pre>
<p>As a manual workaround, this command works from root level:</p>
<pre class="lang-bash prettyprint-override"><code>poetry run pyright --pythonpath project1/.venv/bin/python project1/main.py
</code></pre>
<p>But as I have multiple projects, each with its own .venv folder, I can't set <code>pythonpath</code> globally.</p>
<p>I tried multiple options in <code>pyproject.toml</code> to configure pyright properly for the whole repository, but nothing worked so far.
What is the proper way to configure multiple venv like that with pyright ?</p>
<p>FYI, as a context, my final goal is to setup pyright for sublime text 3 with LSP-pyright and python >=3.11. But at least command line should work before that (-‿-")</p>
|
<python><sublimetext><python-venv><pyright>
|
2024-04-19 12:34:50
| 1
| 334
|
Romain
|
78,353,074
| 8,840,275
|
PIL Image Crop and Resize Degrading Quality
|
<p>I am writing an image-processing script.</p>
<ol>
<li>Hits background removal API</li>
<li>Detect item bounds in the image</li>
<li>Crops image based on bounds co-ordinates and places it on a 1400 x 1000 canvas</li>
<li>Save to PNG</li>
</ol>
<p>The image I got back from the background removal API is the perfect colour and quality.</p>
<p><a href="https://i.sstatic.net/yjDJE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yjDJE.png" alt="background removal img" /></a></p>
<p>But once I crop, place on canvas and save, the colours are not as bright, and the quality is not as good.</p>
<p><a href="https://i.sstatic.net/xXQ10.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xXQ10.png" alt="crop and resized img" /></a></p>
<p>Below is my crop and save function.</p>
<pre><code>def crop_and_resize_sneaker(bg_removed_image_path, bounding_box, output_dir, image_name):
    """
    Crops and resizes an image to fit a 1400 x 1000 canvas while maintaining quality.
    Uses PIL to handle images with an emphasis on preserving the original quality,
    especially during resizing and saving operations.
    """
    try:
        # Load the image with background removed
        image = Image.open(bg_removed_image_path).convert('RGBA')

        # Extract bounding box coordinates
        x, y, w, h = bounding_box

        # Crop the image based on the bounding box
        cropped_image = image.crop((x, y, x + w, y + h))

        # Define output dimensions
        output_width, output_height = 1400, 1000

        # Calculate new dimensions to maintain aspect ratio
        aspect_ratio = w / h
        if aspect_ratio > (output_width / output_height):
            new_width = output_width
            new_height = int(output_width / aspect_ratio)
        else:
            new_width = int(output_height * aspect_ratio)
            new_height = output_height

        # Resize the image using LANCZOS (high-quality)
        resized_image = cropped_image.resize((new_width, new_height), Image.LANCZOS)

        # Create a new image for the final output with a transparent background
        final_image = Image.new('RGBA', (output_width, output_height), (0, 0, 0, 0))

        # Calculate center positioning
        start_x = (output_width - new_width) // 2
        start_y = (output_height - new_height) // 2

        # Paste the resized image onto the transparent canvas
        final_image.paste(resized_image, (start_x, start_y), resized_image)

        # Save the final image as PNG with maximum quality settings
        final_img_path = os.path.join(output_dir, 'resized', f'{image_name}_sneaker_canvas.png')
        final_image.save(final_img_path, 'PNG', quality=95)  # Although 'quality' has no effect on PNGs, provided for completeness

        return final_img_path
    except Exception as e:
        logging.error(f"Error in cropping and resizing sneaker: {e}")
        return None
</code></pre>
<p>How can I ensure the crop and resize image is of equal quality as the input image (background removed)?</p>
|
<python><python-imaging-library>
|
2024-04-19 10:39:48
| 1
| 327
|
Piers Thomas
|
78,352,986
| 12,890,458
|
Salabim 3D animation example program is not running
|
<p>I am running the <a href="https://www.salabim.org/manual/3dAnimation.html#example" rel="nofollow noreferrer">example program</a>, see below, from the 3D animation page of the salabim website (salabim is a discrete event simulation package in Python)</p>
<pre><code>import salabim as sim
env = sim.Environment()
env.background_color("90%gray")
env.width(900)
env.height(700)
env.position((1000, 0))
env.width3d(900)
env.height3d(700)
env.position3d((0, 100))
env.animate(True)
env.animate(True)
env.animate3d(True)
env.show_camera_position()
env.show_camera_position(over3d=True)
sim.Animate3dGrid(x_range=range(-2, 3), y_range=range(-2, 3))
traj0 = sim.TrajectoryCircle(radius=2, vmax=1)
d0 = traj0.duration()
sim.Animate3dBox(x_len=1, y_len=1, z_len=1, color="red", x=lambda t: traj0.x(t % d0), y=lambda t: traj0.y(t % d0),
z=0.5, z_angle=lambda t: traj0.angle(t % d0))
traj1 = sim.TrajectoryCircle(radius=1, vmax=1, angle0=360, angle1=0)
d1 = traj1.duration()
sim.Animate3dBox(x_len=0.5, y_len=0.5, z_len=0.5, color="green", x=lambda t: traj1.x(t % d1),
y=lambda t: traj1.y(t % d1), z=0.25, z_angle=lambda t: traj1.angle(t % d1))
env.run(sim.inf)
</code></pre>
<p>I get the following error:</p>
<pre><code>Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\rtap\.conda\envs\test_env\lib\tkinter\__init__.py", line 1892, in __call__
return self.func(*args)
File "C:\Users\rtap\.conda\envs\test_env\lib\tkinter\__init__.py", line 814, in callit
func(*args)
File "C:\Users\rtap\.conda\envs\test_env\lib\site-packages\salabim\salabim.py", line 12881,
in simulate_and_animate_loop
self.animation3d_init()
File "C:\Users\rtap\.conda\envs\test_env\lib\site-packages\salabim\salabim.py", line 10298, in animation3d_init
glut.glutInit()
File "C:\Users\rtap\.conda\envs\test_env\lib\site-packages\OpenGL\GLUT\special.py", line 333, in glutInit
_base_glutInit( ctypes.byref(count), holder )
File "C:\Users\rtap\.conda\envs\test_env\lib\site-packages\OpenGL\platform\baseplatform.py", line 423, in __call__
raise error.NullFunctionError(
OpenGL.error.NullFunctionError: Attempt to call an undefined function glutInit, check for bool(glutInit) before calling
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\rtap\.conda\envs\test_env\lib\site-packages\salabim\salabim.py", line 12849, in run
raise SimulationStopped
salabim.salabim.SimulationStopped
</code></pre>
<p>I created the python environment as follows:</p>
<pre><code>conda create -n test_env python=3.9
conda activate -n test_env
conda install pip
pip install pillow
pip install pyopengl
pip install pyopengl_accelerate
pip install pywavefront
pip install pyglet==1.5.27
pip install opencv-python
pip install salabim
pip install greenlet
</code></pre>
<p>The error occurs at <code>env.run(sim.inf)</code>. What am I doing wrong? Is my python environment not correct?</p>
|
<python><event-simulation>
|
2024-04-19 10:22:46
| 1
| 460
|
Frank Tap
|
78,352,972
| 4,873,946
|
extracting blocks of data using numpy
|
<p>Here is a problem I am having:</p>
<p>I want to plot some energy bands obtained with Quantum Espresso. The data comes in a file with two columns. The columns are separated by empty lines into blocks. Each block corresponds to a band.</p>
<p>Here is an example of the first two blocks:</p>
<pre><code> 0.0000 -44.2709
0.0250 -44.2709
0.0500 -44.2709
0.0750 -44.2708
0.1000 -44.2708
0.1250 -44.2707
0.1500 -44.2706
0.1750 -44.2705
0.2000 -44.2703
0.2250 -44.2702
0.2500 -44.2701
0.2750 -44.2700
0.3000 -44.2698
0.3250 -44.2697
0.3500 -44.2696
0.3750 -44.2695
0.4000 -44.2694
0.4250 -44.2694
0.4500 -44.2693
0.4750 -44.2693
0.5000 -44.2693
0.5250 -44.2693
0.5500 -44.2692
0.5750 -44.2692
0.6000 -44.2691
0.6250 -44.2690
0.6500 -44.2689
0.6750 -44.2688
0.7000 -44.2687
0.7250 -44.2686
0.7500 -44.2685
0.7750 -44.2683
0.8000 -44.2682
0.8250 -44.2681
0.8500 -44.2680
0.8750 -44.2679
0.9000 -44.2678
0.9250 -44.2678
0.9500 -44.2677
0.9750 -44.2677
1.0000 -44.2677
1.0354 -44.2677
1.0707 -44.2677
1.1061 -44.2678
1.1414 -44.2680
1.1768 -44.2681
1.2121 -44.2683
1.2475 -44.2686
1.2828 -44.2688
1.3182 -44.2690
1.3536 -44.2693
1.3889 -44.2695
1.4243 -44.2698
1.4596 -44.2700
1.4950 -44.2702
1.5303 -44.2704
1.5657 -44.2706
1.6010 -44.2707
1.6364 -44.2708
1.6718 -44.2709
1.7071 -44.2709
1.7504 -44.2709
1.7937 -44.2708
1.8370 -44.2706
1.8803 -44.2704
1.9236 -44.2702
1.9669 -44.2699
2.0102 -44.2696
2.0535 -44.2692
2.0968 -44.2689
2.1401 -44.2685
2.1834 -44.2681
2.2267 -44.2677
2.2700 -44.2674
2.3133 -44.2671
2.3566 -44.2668
2.3999 -44.2665
2.4432 -44.2663
2.4865 -44.2662
2.5298 -44.2661
2.5731 -44.2661
2.6085 -44.2661
2.6438 -44.2661
2.6792 -44.2662
2.7146 -44.2664
2.7499 -44.2665
2.7853 -44.2667
2.8206 -44.2669
2.8560 -44.2672
2.8913 -44.2674
2.9267 -44.2677
2.9620 -44.2679
2.9974 -44.2682
3.0328 -44.2684
3.0681 -44.2686
3.1035 -44.2688
3.1388 -44.2690
3.1742 -44.2691
3.2095 -44.2692
3.2449 -44.2693
3.2802 -44.2693
3.2802 -44.2677
3.3052 -44.2677
3.3302 -44.2676
3.3552 -44.2676
3.3802 -44.2675
3.4052 -44.2674
3.4302 -44.2673
3.4552 -44.2672
3.4802 -44.2671
3.5052 -44.2670
3.5302 -44.2669
3.5552 -44.2667
3.5802 -44.2666
3.6052 -44.2665
3.6302 -44.2664
3.6552 -44.2663
3.6802 -44.2662
3.7052 -44.2662
3.7302 -44.2661
3.7552 -44.2661
3.7802 -44.2661
0.0000 -20.8317
0.0250 -20.8322
0.0500 -20.8338
0.0750 -20.8364
0.1000 -20.8400
0.1250 -20.8445
0.1500 -20.8497
0.1750 -20.8555
0.2000 -20.8618
0.2250 -20.8684
0.2500 -20.8751
0.2750 -20.8819
0.3000 -20.8884
0.3250 -20.8947
0.3500 -20.9004
0.3750 -20.9055
0.4000 -20.9098
0.4250 -20.9133
0.4500 -20.9159
0.4750 -20.9174
0.5000 -20.9179
0.5250 -20.9179
0.5500 -20.9178
0.5750 -20.9175
0.6000 -20.9172
0.6250 -20.9169
0.6500 -20.9164
0.6750 -20.9159
0.7000 -20.9154
0.7250 -20.9149
0.7500 -20.9143
0.7750 -20.9137
0.8000 -20.9132
0.8250 -20.9126
0.8500 -20.9122
0.8750 -20.9117
0.9000 -20.9113
0.9250 -20.9110
0.9500 -20.9108
0.9750 -20.9107
1.0000 -20.9106
1.0354 -20.9102
1.0707 -20.9089
1.1061 -20.9068
1.1414 -20.9039
1.1768 -20.9003
1.2121 -20.8959
1.2475 -20.8910
1.2828 -20.8855
1.3182 -20.8797
1.3536 -20.8736
1.3889 -20.8673
1.4243 -20.8611
1.4596 -20.8551
1.4950 -20.8495
1.5303 -20.8444
1.5657 -20.8400
1.6010 -20.8365
1.6364 -20.8338
1.6718 -20.8322
1.7071 -20.8317
1.7504 -20.8322
1.7937 -20.8338
1.8370 -20.8365
1.8803 -20.8400
1.9236 -20.8443
1.9669 -20.8492
2.0102 -20.8545
2.0535 -20.8601
2.0968 -20.8659
2.1401 -20.8716
2.1834 -20.8772
2.2267 -20.8826
2.2700 -20.8876
2.3133 -20.8922
2.3566 -20.8962
2.3999 -20.8997
2.4432 -20.9025
2.4865 -20.9045
2.5298 -20.9058
2.5731 -20.9062
2.6085 -20.9063
2.6438 -20.9064
2.6792 -20.9067
2.7146 -20.9071
2.7499 -20.9076
2.7853 -20.9082
2.8206 -20.9089
2.8560 -20.9096
2.8913 -20.9105
2.9267 -20.9114
2.9620 -20.9123
2.9974 -20.9132
3.0328 -20.9142
3.0681 -20.9151
3.1035 -20.9159
3.1388 -20.9166
3.1742 -20.9171
3.2095 -20.9176
3.2449 -20.9178
3.2802 -20.9179
3.2802 -20.9106
3.3052 -20.9106
3.3302 -20.9105
3.3552 -20.9104
3.3802 -20.9102
3.4052 -20.9100
3.4302 -20.9097
3.4552 -20.9094
3.4802 -20.9091
3.5052 -20.9088
3.5302 -20.9084
3.5552 -20.9081
3.5802 -20.9078
3.6052 -20.9074
3.6302 -20.9071
3.6552 -20.9069
3.6802 -20.9066
3.7052 -20.9065
3.7302 -20.9063
3.7552 -20.9063
3.7802 -20.9062
</code></pre>
<p>You may notice that the first column contains the same data over and over, and only the second column differs. What I would like to do is keep the first column from the first block only, and turn each block's second column into a separate column, like this:</p>
<pre><code> 0.0000 -44.2709 -20.8317
0.0250 -44.2709 -20.8322
0.0500 -44.2709 -20.8338
0.0750 -44.2708 -20.8364
0.1000 -44.2708 -20.8400
0.1250 -44.2707 -20.8445
0.1500 -44.2706 -20.8497
0.1750 -44.2705 -20.8555
0.2000 -44.2703 -20.8618
0.2250 -44.2702 -20.8684
0.2500 -44.2701 -20.8751
0.2750 -44.2700 -20.8819
0.3000 -44.2698 -20.8884
0.3250 -44.2697 -20.8947
0.3500 -44.2696 -20.9004
0.3750 -44.2695 -20.9055
0.4000 -44.2694 -20.9098
0.4250 -44.2694 -20.9133
0.4500 -44.2693 -20.9159
0.4750 -44.2693 -20.9174
0.5000 -44.2693 -20.9179
0.5250 -44.2693 -20.9179
0.5500 -44.2692 -20.9178
0.5750 -44.2692 -20.9175
0.6000 -44.2691 -20.9172
0.6250 -44.2690 -20.9169
0.6500 -44.2689 -20.9164
0.6750 -44.2688 -20.9159
0.7000 -44.2687 -20.9154
0.7250 -44.2686 -20.9149
0.7500 -44.2685 -20.9143
0.7750 -44.2683 -20.9137
0.8000 -44.2682 -20.9132
0.8250 -44.2681 -20.9126
0.8500 -44.2680 -20.9122
0.8750 -44.2679 -20.9117
0.9000 -44.2678 -20.9113
0.9250 -44.2678 -20.9110
0.9500 -44.2677 -20.9108
0.9750 -44.2677 -20.9107
1.0000 -44.2677 -20.9106
1.0354 -44.2677 -20.9102
1.0707 -44.2677 -20.9089
1.1061 -44.2678 -20.9068
1.1414 -44.2680 -20.9039
1.1768 -44.2681 -20.9003
1.2121 -44.2683 -20.8959
1.2475 -44.2686 -20.8910
1.2828 -44.2688 -20.8855
1.3182 -44.2690 -20.8797
1.3536 -44.2693 -20.8736
1.3889 -44.2695 -20.8673
1.4243 -44.2698 -20.8611
1.4596 -44.2700 -20.8551
1.4950 -44.2702 -20.8495
1.5303 -44.2704 -20.8444
1.5657 -44.2706 -20.8400
1.6010 -44.2707 -20.8365
1.6364 -44.2708 -20.8338
1.6718 -44.2709 -20.8322
1.7071 -44.2709 -20.8317
1.7504 -44.2709 -20.8322
1.7937 -44.2708 -20.8338
1.8370 -44.2706 -20.8365
1.8803 -44.2704 -20.8400
1.9236 -44.2702 -20.8443
1.9669 -44.2699 -20.8492
2.0102 -44.2696 -20.8545
2.0535 -44.2692 -20.8601
2.0968 -44.2689 -20.8659
2.1401 -44.2685 -20.8716
2.1834 -44.2681 -20.8772
2.2267 -44.2677 -20.8826
2.2700 -44.2674 -20.8876
2.3133 -44.2671 -20.8922
2.3566 -44.2668 -20.8962
2.3999 -44.2665 -20.8997
2.4432 -44.2663 -20.9025
2.4865 -44.2662 -20.9045
2.5298 -44.2661 -20.9058
2.5731 -44.2661 -20.9062
2.6085 -44.2661 -20.9063
2.6438 -44.2661 -20.9064
2.6792 -44.2662 -20.9067
2.7146 -44.2664 -20.9071
2.7499 -44.2665 -20.9076
2.7853 -44.2667 -20.9082
2.8206 -44.2669 -20.9089
2.8560 -44.2672 -20.9096
2.8913 -44.2674 -20.9105
2.9267 -44.2677 -20.9114
2.9620 -44.2679 -20.9123
2.9974 -44.2682 -20.9132
3.0328 -44.2684 -20.9142
3.0681 -44.2686 -20.9151
3.1035 -44.2688 -20.9159
3.1388 -44.2690 -20.9166
3.1742 -44.2691 -20.9171
3.2095 -44.2692 -20.9176
3.2449 -44.2693 -20.9178
3.2802 -44.2693 -20.9179
3.2802 -44.2677 -20.9106
3.3052 -44.2677 -20.9106
3.3302 -44.2676 -20.9105
3.3552 -44.2676 -20.9104
3.3802 -44.2675 -20.9102
3.4052 -44.2674 -20.9100
3.4302 -44.2673 -20.9097
3.4552 -44.2672 -20.9094
3.4802 -44.2671 -20.9091
3.5052 -44.2670 -20.9088
3.5302 -44.2669 -20.9084
3.5552 -44.2667 -20.9081
3.5802 -44.2666 -20.9078
3.6052 -44.2665 -20.9074
3.6302 -44.2664 -20.9071
3.6552 -44.2663 -20.9069
3.6802 -44.2662 -20.9066
3.7052 -44.2662 -20.9065
3.7302 -44.2661 -20.9063
3.7552 -44.2661 -20.9063
3.7802 -44.2661 -20.9062
</code></pre>
<p>But there is a catch! I have managed to do something close with <code>numpy.unique</code>, but I have noticed that, for some reason, Quantum Espresso will sometimes write two or more equal values in the first column of a block while the corresponding values in the second column differ, so with <code>numpy.unique</code> I lose data.</p>
<p>I have tried this: <code>kp_bands=np.take(bands[:,0],range(0,122),axis=0)</code>. Here <code>bands</code> is where I loaded the data with <code>numpy.loadtxt</code> and <code>122</code> is the number of values in each block. The trouble is that the block length is not always the same; it can differ depending on the studied system.</p>
<p>My question is:</p>
<p>How can I do this without losing data and without knowing how many lines are in each block?</p>
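A direction I'm considering (plain-Python sketch, my own naming): split on blank lines instead of relying on unique values or a fixed block length:

```python
def read_bands(text):
    """Split two-column data into blocks separated by blank lines,
    keep the first column of the first block as the k-point axis,
    and collect each block's second column as a separate band."""
    blocks, current = [], []
    for line in text.splitlines():
        if line.strip():
            current.append([float(v) for v in line.split()])
        elif current:
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    k = [row[0] for row in blocks[0]]
    bands = [[row[1] for row in block] for block in blocks]
    return k, bands
```

From there, <code>numpy.column_stack([k] + bands)</code> would presumably give the merged table, with no assumption about block length or uniqueness.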
|
<python><numpy>
|
2024-04-19 10:19:55
| 3
| 454
|
lucian
|
78,352,905
| 621,591
|
Pydantic BaseModel validation order for Json vs str
|
<p>Can someone tell me why Pydantic is validating a field as a string even though the field type is <code>Json[Any] | str</code>? And is there a way to have it return a dict instead?</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any

from pydantic import BaseModel, Json


class FooStr(BaseModel):
    json_or_str: Json[Any] | str


class FooInt(BaseModel):
    json_or_int: Json[Any] | int


if __name__ == "__main__":
    print(type(FooStr(json_or_str='{"a": 1}').json_or_str))  # prints <class 'str'>
    print(type(FooInt(json_or_int='{"a": 1}').json_or_int))  # prints <class 'dict'>
</code></pre>
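As a fallback I'm considering (untested, plain stdlib, mirroring the left-to-right order I expected from <code>Json[Any] | str</code>):

```python
import json


def parse_json_or_str(value):
    """Try to decode JSON first and fall back to the raw string,
    i.e. the left-to-right union behaviour I expected."""
    try:
        return json.loads(value)
    except (ValueError, TypeError):
        return value


print(parse_json_or_str('{"a": 1}'))  # {'a': 1}
print(parse_json_or_str('hello'))     # hello
```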
|
<python><pydantic><pydantic-v2>
|
2024-04-19 10:06:15
| 1
| 4,621
|
Brendan Maguire
|
78,352,595
| 10,396,491
|
Angular transformation matrix for 6 DoF simulation using scipy.spatial.transform.Rotation
|
<p>I am writing a simulator for 6 DoF motion of a vehicle and need to transform the moments and angular velocities defined in the global coordinate system into the vehicle reference frame and back. Normally this is done using:</p>
<p><a href="https://i.sstatic.net/4wcRI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4wcRI.png" alt="enter image description here" /></a></p>
<p>and an inverse of that. I have implemented this myself, taking care of the singular cases for the pitch angle, but would like to rely on the scipy module instead. That's what I'm already using for vector variables. Is this possible? I don't see anything in the docs.</p>
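In case it helps clarify what I'm after: the matrix above, written out as a plain-Python helper (my own naming; singular at θ = ±90°):

```python
import math


def euler_rate_matrix(phi, theta):
    """Matrix mapping body angular rates (p, q, r) to Euler angle rates
    (phi_dot, theta_dot, psi_dot); singular at theta = +-90 degrees."""
    t, c = math.tan(theta), math.cos(theta)
    sp, cp = math.sin(phi), math.cos(phi)
    return [[1.0, sp * t, cp * t],
            [0.0, cp,     -sp   ],
            [0.0, sp / c, cp / c]]
```

At zero roll and pitch this reduces to the identity, as expected.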
|
<python><scipy><simulation><physics><motion>
|
2024-04-19 09:11:29
| 1
| 457
|
Artur
|
78,352,556
| 1,654,955
|
milvus.py: pymilvus.exceptions.MilvusException: <MilvusException: (code=1, message=Field full_length is not in the hit entity)>
|
<p>I query a collection in a zilliz milvus db like this:</p>
<pre><code>documents = vector_store.similarity_search_with_score(query)
</code></pre>
<p>The query is successful but in line 777 of milvus.py the value <code>result.full_length</code> is retrieved, which is not available:</p>
<pre><code>for result in res[0]:
    data = {x: result.entity.get(x) for x in output_fields}
    doc = self._parse_document(data)
    pair = (doc, result.full_length)
    ret.append(pair)
</code></pre>
<p>which then leads to this exception</p>
<pre><code>File "/Users/tilman/LangchainCorsera/venv/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py", line 644, in similarity_search
res = self.similarity_search_with_score(
File "/Users/tilman/LangchainCorsera/venv/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py", line 717, in similarity_search_with_score
res = self.similarity_search_with_score_by_vector(
File "/Users/tilman/LangchainCorsera/venv/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py", line 777, in similarity_search_with_score_by_vector
pair = (doc, result.full_length)
File "/Users/tilman/LangchainCorsera/venv/lib/python3.9/site-packages/pymilvus/client/abstract.py", line 588, in __getattr__
raise MilvusException(message=f"Field {item} is not in the hit entity")
pymilvus.exceptions.MilvusException: <MilvusException: (code=1, message=Field full_length is not in the hit entity)>
</code></pre>
<p>Any clues?</p>
|
<python><langchain><milvus>
|
2024-04-19 09:02:53
| 3
| 312
|
Tilman Rossmy
|
78,352,361
| 2,411,320
|
Docker build stopped working after Ubuntu software update
|
<p>Docker build was ok, and the Python script (app) that was inside could be executed perfectly too. I hadn't touched the code for some months now, but I recently did a general, across-system software update in my Oracle VM VirtualBox Ubuntu 20.04.3 LTS x86_64. I thought this wouldn't affect my docker, since I had specified package versions, etc.</p>
<p>However, when I try to docker build now, it cannot find TensorFlow 2.12.0 and says that only 2.16.0rc0 and 2.16.1 are available. How can I make it see the 2.12.0 version again? Here is my Dockerfile:</p>
<pre><code># Build stage
FROM python:3.7-slim AS build
WORKDIR /todo
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
RUN apt-get update
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
RUN wget \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN conda --version
RUN apt-get --purge autoremove -y wget
RUN python -m pip install --no-cache-dir --no-deps tensorflow-cpu==2.12.0 # <--- ERROR HERE
# Runtime stage
FROM python:3.7-slim
WORKDIR /todo
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
COPY --from=build /root/miniconda3 /root/miniconda3
COPY requirements.txt requirements.txt
COPY get_predict_m_data.py get_predict_m_data.py
ENV PATH="/root/miniconda3/bin:${PATH}"
ENV PATH="/usr/local/lib/:${PATH}"
RUN pip3 install --no-cache-dir -r requirements.txt
ENV PORT 5556
EXPOSE 5556
RUN rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
CMD ["python", "-u", "get_predict_m_data.py"]
</code></pre>
<p>Error:</p>
<pre><code>gsamaras@lv74744332234a:~/Code$ sudo docker build --no-cache -t predict_api .
[sudo] password for gsamaras:
[+] Building 36.1s (14/23) docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.68kB 0.0s
=> [internal] load metadata for docker.io/library/python:3.7-slim-buster 3.4s
=> [auth] library/python:pull token for registry-1.docker.io 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [build 1/8] FROM docker.io/library/python:3.7-slim-buster@sha256:9bd2bfc822a533f99cbe6b1311d5bf0ff136f776ebac9b985407829f17278935 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 533B 0.0s
=> CACHED [build 2/8] WORKDIR /todo 0.0s
=> [build 3/8] RUN apt-get update 6.5s
=> CANCELED [stage-1 3/12] RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/* 32.7s
=> [build 4/8] RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/* 2.9s
=> [build 5/8] RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && mkdir /root/.conda && bash Miniconda3-latest-Linux-x86_64.sh -b && rm -f Miniconda3-latest-Linux- 20.8s
=> [build 6/8] RUN conda --version 0.5s
=> [build 7/8] RUN apt-get --purge autoremove -y wget 0.5s
=> ERROR [build 8/8] RUN python -m pip install --no-cache-dir --no-deps tensorflow-cpu==2.12.0 1.4s
------
> [build 8/8] RUN python -m pip install --no-cache-dir --no-deps tensorflow-cpu==2.12.0:
1.094 ERROR: Could not find a version that satisfies the requirement tensorflow-cpu==2.12.0 (from versions: 2.16.0rc0, 2.16.1)
1.094 ERROR: No matching distribution found for tensorflow-cpu==2.12.0
------
Dockerfile:22
--------------------
20 | RUN apt-get --purge autoremove -y wget
21 |
22 | >>> RUN python -m pip install --no-cache-dir --no-deps tensorflow-cpu==2.12.0
23 |
24 | # Runtime stage
--------------------
ERROR: failed to solve: process "/bin/sh -c python -m pip install --no-cache-dir --no-deps tensorflow-cpu==2.12.0" did not complete successfully: exit code: 1
</code></pre>
<hr />
<p>If I go with a newer TF version, a chain reaction happens with many dependencies getting broken; I can show the requirements.txt too if needed. If I update all the dependencies (and install new required packages), I get a series of runtime errors, which, once fixed, turn into compatibility issues (Keras 2 vs. Keras 3 trained models) that would require a lot of work; thus I seek a way to work with TF 2.12.0, so that everything stays harmonized.</p>
|
<python><linux><docker><tensorflow><ubuntu>
|
2024-04-19 08:31:31
| 1
| 73,655
|
gsamaras
|
78,352,244
| 4,847,250
|
How to put a dictionary into the clipboard to copy/paste it in another pyqt6 window?
|
<p>I would like to use the clipboard to pass a dict from one QApplication to another.</p>
<p>I can copy text, but I don't understand how to pass anything else. I need to pass a dict instead of a string.</p>
<p>Here is a minimal example: I can launch the application twice and copy the text from one instance to the other:</p>
<pre><code>import sys

import numpy as np
from PyQt6.QtGui import *
from PyQt6.QtCore import *
from PyQt6.QtWidgets import *


class MainWindow(QMainWindow):
    def __init__(self, parent=None):
        super(MainWindow, self).__init__()
        self.centralWidget = QWidget()
        self.setCentralWidget(self.centralWidget)
        self.mainHBOX = QVBoxLayout()
        self.layout = QVBoxLayout()

        self.Text_EL = QLineEdit('Some Text')
        self.Copy_PB = QPushButton('Copy to ClipBoard')
        self.Paste_PB = QPushButton('Paste from ClipBoard')

        self.layout.addWidget(self.Text_EL)
        self.layout.addWidget(self.Copy_PB)
        self.layout.addWidget(self.Paste_PB)

        self.mainHBOX.addLayout(self.layout)
        self.centralWidget.setLayout(self.mainHBOX)
        self.centralWidget.setFixedSize(QSize(300, 400))

        self.Copy_PB.clicked.connect(self.Copy_fun)
        self.Paste_PB.clicked.connect(self.Paste_fun)

    def Copy_fun(self):
        # copy into clipboard
        text = self.Text_EL.text()
        # send a dictionary instead of a string
        MyDict = {'text': text}
        cb = QApplication.clipboard()
        cb.clear(mode=QClipboard.Mode.Clipboard)
        cb.setText(text, mode=QClipboard.Mode.Clipboard)

    def Paste_fun(self):
        # Paste from clipboard
        cb = QApplication.clipboard()
        text = cb.text()
        self.Text_EL.setText(text)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec())
</code></pre>
<p>I tried something with QMimeData, but it seems it cannot handle a dict directly:</p>
<pre><code>def Copy_fun(self):
    # copy into clipboard
    text = self.Text_EL.text()
    # send a dictionary instead of a string
    MyDict = {'text': text}

    clipboard = QGuiApplication.clipboard()
    data = QMimeData()
    data.setData(MyDict)
    clipboard.setMimeData(data, mode=QClipboard.Mode.Clipboard)
</code></pre>
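A direction I'm considering (untested sketch): since QMimeData carries bytes under a MIME type rather than Python objects, serialize the dict to JSON; the MIME type name below is my own choice. The stdlib round-trip part:

```python
import json

# Custom MIME type under which the serialized dict would travel
# (hypothetical name, any application/x-* string should do).
MIME_TYPE = "application/x-mydict+json"


def dict_to_bytes(d):
    """Serialize a dict to UTF-8 JSON bytes for QMimeData.setData()."""
    return json.dumps(d).encode("utf-8")


def bytes_to_dict(b):
    """Deserialize bytes read back from the clipboard."""
    return json.loads(bytes(b).decode("utf-8"))


payload = dict_to_bytes({"text": "Some Text"})
print(bytes_to_dict(payload))  # {'text': 'Some Text'}
```

On the Qt side this would presumably go through <code>data.setData(MIME_TYPE, QByteArray(payload))</code> when copying and <code>clipboard.mimeData().data(MIME_TYPE)</code> in the receiving window.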
|
<python><clipboard><pyqt6>
|
2024-04-19 08:09:39
| 0
| 5,207
|
ymmx
|
78,352,141
| 3,070,181
|
Modifying an environment variable when running a Python subprocess
|
<p>I am attempting to run a Python subprocess in a different virtualenv from the default. I have based my solution on <a href="https://stackoverflow.com/a/4453495/3070181">this answer</a>. My script is</p>
<pre><code>import os
import subprocess
from pathlib import Path
from icecream import ic
VIRTUAL_ENV = str(Path(Path.home(), '.pyenv/versions/wx'))
script_path = 'scripts/foo'
script_env = os.environ.copy()
script_env["VIRTUAL_ENV"] = VIRTUAL_ENV
ic(os.environ['VIRTUAL_ENV'])
ic(script_env['VIRTUAL_ENV'])
subprocess.run([script_path], env=script_env)
</code></pre>
<p>where <em>foo</em> is a bash script that calls a Python program requiring the virtualenv <em>wx</em>. The value of <code>script_env['VIRTUAL_ENV']</code> is correct; however, I get the error</p>
<blockquote>
<p>Traceback (most recent call last):
File "/home/jeff/projects/scripts/wx_python/wx_python_standard_ids.py", line 1, in
import wx
ModuleNotFoundError: No module named 'wx'</p>
</blockquote>
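For reference, what I've also been sketching (untested): prepending the venv's <code>bin</code> directory to <code>PATH</code>, since setting <code>VIRTUAL_ENV</code> alone presumably doesn't change which interpreter <code>foo</code> resolves:

```python
import os
from pathlib import Path


def venv_env(venv_dir):
    """Build an environment where the venv's bin directory comes first on
    PATH, so a subprocess's plain `python` resolves inside that venv."""
    env = os.environ.copy()
    env["VIRTUAL_ENV"] = str(venv_dir)
    env["PATH"] = str(Path(venv_dir, "bin")) + os.pathsep + env.get("PATH", "")
    env.pop("PYTHONHOME", None)  # avoid overriding the venv's stdlib lookup
    return env
```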
|
<python><subprocess>
|
2024-04-19 07:49:36
| 2
| 3,841
|
Psionman
|
78,352,063
| 6,376,297
|
python concurrent.futures.ProcessPoolExecutor causes BrokenProcessPool error when run in a Visual Studio Code notebook with Windows 10
|
<p>The example code in the <a href="https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor-example" rel="nofollow noreferrer">official doc</a> of <code>concurrent.futures.ProcessPoolExecutor()</code>:</p>
<pre><code>import concurrent.futures
import math

PRIMES = [
    112272535095293,
    112582705942171,
    112272535095293,
    115280095190773,
    115797848077099,
    1099726899285419]

def is_prime(n):
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False

    sqrt_n = int(math.floor(math.sqrt(n)))
    for i in range(3, sqrt_n + 1, 2):
        if n % i == 0:
            return False
    return True

def main():
    with concurrent.futures.ProcessPoolExecutor() as executor:
        for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
            print('%d is prime: %s' % (number, prime))

if __name__ == '__main__':
    main()
</code></pre>
<p>runs without any issues in a jupyter notebook in a ubuntu system, but fails with a <em>"BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending."</em> error in a jupyter notebook in Visual Studio Code in a Windows 10 system.<br />
<em>"Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)] on win32"</em></p>
<p>I have consulted several posts that discuss this.<br />
So yes, I know from <a href="https://stackoverflow.com/questions/62488423/brokenprocesspool-while-running-code-in-jupyter-notebook">this post</a> and from my own tests that I can put the code in a .py script and call it from the notebook. That works.<br />
And yes, I know (vaguely) that Windows behaves differently from other systems in creating processes, threads, etc.; <a href="https://stackoverflow.com/questions/43836876/processpoolexecutor-works-on-ubuntu-but-fails-with-brokenprocesspool-when-r">this post</a> briefly mentions it; but there does not appear to be a conclusion/solution.</p>
<p>So the point remains: <strong>given that it's claimed that this code should work in Windows, is there anything one can do to be able to run the code <em>as per official documentation</em>, in a jupyter notebook, in Visual Studio Code, in Windows 10?</strong></p>
<p>Using a separate .py script has many inconveniences vs running the code in a notebook cell.</p>
<p>It just seems bizarre to me that <em>official documentation</em> should have code that clearly does not work in a major operating system, forcing every single person who encounters this error to go looking for workarounds and finding N different and quite inconvenient solutions. Unless there is something wrong with my python installation or settings, but most other code works fine, so...<br />
BTW, for the record, I did also try some of the solutions that recommended to use <code>multiprocessing</code> <code>Pool</code> and similar; that resulted in the notebook cell freezing forever.</p>
<hr />
<p><strong>EDIT</strong> after a lot of further browsing, I found a possible solution that still allows to run the <em>multiprocessing part</em> in a notebook, only moving the <em>worker function</em> to a .py</p>
<p><a href="https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac" rel="nofollow noreferrer">https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac</a></p>
<p>The example shown in the above post works in Visual Studio Code.</p>
<p>I imagine that for the example in the <code>concurrent.futures.ProcessPoolExecutor()</code> one would have to move the definition of <code>is_prime</code> to a .py and import it; not sure about the <code>with</code> part. To be tried.</p>
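<p>For what it's worth, that split can be sketched like this (the file name <code>prime_worker.py</code> is just my guess at how one would organize it):</p>

```python
# prime_worker.py -- hypothetical helper module. On Windows, child processes
# re-import the worker function, so it must live in an importable .py file,
# not in the notebook itself.
import math

def is_prime(n):
    # Same trial-division logic as the docs example.
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    sqrt_n = int(math.floor(math.sqrt(n)))
    for i in range(3, sqrt_n + 1, 2):
        if n % i == 0:
            return False
    return True

# --- notebook cell (sketch) ---
# import concurrent.futures
# from prime_worker import is_prime
#
# PRIMES = [112272535095293, 115280095190773]
# with concurrent.futures.ProcessPoolExecutor() as executor:
#     for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
#         print('%d is prime: %s' % (number, prime))
```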
<hr />
<p><strong>EDIT 2</strong></p>
<p>Yes, it works. See my answer below.</p>
|
<python><jupyter-notebook><parallel-processing>
|
2024-04-19 07:34:41
| 1
| 657
|
user6376297
|
78,352,004
| 2,473,382
|
Global fixture in pytest
|
<h1>Question</h1>
<p>I want, with <em>as little boilerplate as possible</em>, to mock one of my functions.</p>
<h1>Project Setup</h1>
<p>In my (simplified) project, I have the following files/functions:</p>
<ul>
<li><code>utils.py</code>, with function <code>get_id(param1, param2)</code> (that is what I want to mock)</li>
<li><code>work.py</code> with function <code>do_work()</code> importing and using <code>utils.get_id</code></li>
<li><code>tests/test_work.py</code> with the following test:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from work import do_work

def test_work():
    # somehow have get_id patched
    ...
    do_work()
    ...
</code></pre>
<h1>Solutions</h1>
<h2>Easy, too much boilerplate</h2>
<p>It is very easy to patch <code>get_id</code> from the <code>work</code> module. Note that whatever the final solution is, the parameters are important, so <code>return_value</code> will not do.</p>
<pre class="lang-py prettyprint-override"><code>from work import do_work
import mock

def mock_id(param1, param2): return f"{param1} {param2}"

def test_work():
    with mock.patch("work.get_id", side_effect=mock_id):
        ...
        do_work()
        ...

# A variation is:
@mock.patch("work.get_id", side_effect=mock_id)
def test_another_work(mock_id):
    ...
</code></pre>
<p>Does work, but it is a lot of boilerplate:</p>
<ul>
<li>need to define or import <code>mock_id()</code> in each test file</li>
<li>need a full <code>patch</code> line, and possibly a useless parameter to a test function</li>
<li>the tested module is patched, so each test file will be different, because a lot of modules use <code>get_id()</code></li>
</ul>
<h2><code>conftest.py</code> and global fixture</h2>
<p>I can add a <code>conftest.py</code> file, defining a pytest fixture once and for all</p>
<pre><code>@pytest.fixture()
def _get_id():
    def mock_get_id(param1, param2):
        return f"{param1} {param2}"

    with mock.patch("utils.get_id", side_effect=mock_get_id):
        yield
</code></pre>
<p>I thought that then I could just have my tests written as such:</p>
<pre><code>@pytest.mark.usefixtures("_get_id")
def test_work():
    ...
</code></pre>
<p>I explicitly do not want it with <code>autouse=True</code>, and this one <code>@pytest.mark.usefixtures("_get_id")</code> line seems to me like a good balance between boilerplate and explicitness.</p>
<h2>alternative fixture</h2>
<p>While looking around, this looked like it could have worked as well:</p>
<pre><code>@pytest.fixture()
def _get_id(monkeypatch):
    def mock_get_id(param1, param2):
        return f"{param1} {param2}"

    monkeypatch.setattr("utils.get_id", mock_get_id)
</code></pre>
<h1>Problem</h1>
<p>The fixture is called, used, but the original get_id is always used, not the mocked version. How can I ensure that <code>get_id</code> is globally patched?</p>
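<p>For context on why this usually happens: if <code>work.py</code> does <code>from utils import get_id</code>, that name is copied into <code>work</code>'s namespace, so patching <code>utils.get_id</code> leaves <code>work.get_id</code> pointing at the original. A minimal stdlib sketch of the effect (the in-memory modules just stand in for <code>utils.py</code> and <code>work.py</code>):</p>

```python
import types
from unittest import mock

# Toy stand-ins for utils.py and work.py.
utils = types.ModuleType("utils")
utils.get_id = lambda p1, p2: "real"

work = types.ModuleType("work")
work.get_id = utils.get_id          # what `from utils import get_id` does
work.do_work = lambda: work.get_id(1, 2)

with mock.patch.object(utils, "get_id", side_effect=lambda p1, p2: "fake"):
    print(work.do_work())           # "real" -- work kept its own reference

with mock.patch.object(work, "get_id", side_effect=lambda p1, p2: "fake"):
    print(work.do_work())           # "fake" -- the name actually looked up is patched
```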
|
<python><python-3.x><unit-testing><pytest><monkeypatching>
|
2024-04-19 07:24:54
| 1
| 3,081
|
Guillaume
|
78,351,990
| 11,082,237
|
Is there a way to safely import a Python module from untrusted user input?
|
<p>I want to dynamically import a module, based on untrusted user input, and be confident that the submodule that I will import comes from a specific trusted module directory.</p>
<p>I intend to process user input in the following way :</p>
<pre class="lang-py prettyprint-override"><code>import importlib
submodule_name = user_input.split(".")[0]
# Are the 2 following lines safe ? (considering I trust all code in `trusted_directory`)
imported_module = importlib.import_module(f"trusted_directory.{submodule_name}")
imported_module.main()
</code></pre>
<p><code>trusted_directory</code> will contain Python modules that I write myself, each of them implementing a <code>main()</code> function. I'm mainly worried about two things:</p>
<ol>
<li>Directory traversal : Are there values for <code>user_input</code> that lead to a module outside of <code>trusted_directory</code> to be imported ?</li>
<li>Characters that are not dots having a special meaning in imports : <a href="https://peps.python.org/pep-0328/" rel="nofollow noreferrer">PEP 328</a> only mentions dots. Is there any other PEP that introduces other special-meaning characters in imports ?</li>
</ol>
<p>More generally, is my approach safe? Can you find a way to break out of <code>trusted_directory</code>? Is there any recommended sanitation method for Python imports that I did not find?</p>
<p>I looked around for specs about absolute/relative imports in Python and could only find <a href="https://peps.python.org/pep-0328/" rel="nofollow noreferrer">PEP 328</a>. From my understanding of this spec, it seems that my approach is safe, but I'm worried I missed other important bits of specification.</p>
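<p>Not a full answer, but one defense-in-depth check to consider (a sketch, under the assumption that every legitimate submodule name is a plain Python identifier) is rejecting anything that is not a bare identifier before it ever reaches <code>importlib</code>:</p>

```python
def safe_submodule_name(user_input):
    """Return the first dotted component if it is a plain identifier.

    Rejects empty strings, path separators, dashes, etc., so nothing
    path-like or relative can reach import_module().
    """
    name = user_input.split(".")[0]
    if not name.isidentifier():
        raise ValueError(f"invalid module name: {user_input!r}")
    return name

# importlib.import_module(f"trusted_directory.{safe_submodule_name(ui)}")
```

An explicit allowlist of known module names would be stricter still.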
|
<python><input><python-import>
|
2024-04-19 07:22:23
| 1
| 863
|
Pierre couy
|
78,351,963
| 11,441,731
|
Why am I getting an error when backpropagating through a compiled PyTorch module multiple times?
|
<p>I'm working on a PyTorch project where I have a custom module that applies axial rotary position embeddings to tensors. I've implemented both a regular and a compiled version of this module using torch.compile. When I try to backpropagate through the compiled version multiple times, I encounter the following error:</p>
<pre><code>RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
</code></pre>
<p>However, the regular (uncompiled) version of the module works fine with multiple backpropagation calls.</p>
<p>Here's the relevant code:</p>
<pre class="lang-py prettyprint-override"><code>import math
from functools import reduce

import torch
from einops import rearrange
from torch import nn


def bounding_box(
    h: int, w: int, pixel_ar: float = 1.0
) -&gt; tuple[float, float, float, float]:
    # Compute the adjusted aspect ratio based on the pixel aspect ratio
    ar = w / (h * pixel_ar)
    # Compute the bounding box
    h_bounds = (-1.0 / ar, 1.0 / ar) if ar &gt; 1.0 else (-1.0, 1.0)
    w_bounds = (-ar, ar) if ar &lt; 1.0 else (-1.0, 1.0)
    return h_bounds + w_bounds


def centered_linspace(
    start: float,
    end: float,
    steps: int,
    *,
    dtype: torch.dtype = None,
    device: torch.device = None,
) -&gt; torch.Tensor:
    edges = torch.linspace(start, end, steps + 1, dtype=dtype, device=device)
    # Compute the midpoint between each pair of edges
    return (edges[:-1] + edges[1:]) / 2


def make_axial_positions(
    h: int,
    w: int,
    pixel_ar: float = 1.0,
    align_corners: bool = False,
    dtype: torch.dtype = None,
    device: torch.device = None,
) -&gt; torch.Tensor:
    h_min, h_max, w_min, w_max = bounding_box(h, w, pixel_ar)
    # If align_corners is set to True, the grid will include the corners of the bounding box
    # Otherwise, the grid boundaries will include the centers of the pixels
    linspace_fn = torch.linspace if align_corners else centered_linspace
    h_grid = linspace_fn(h_min, h_max, h, dtype=dtype, device=device)
    w_grid = linspace_fn(w_min, w_max, w, dtype=dtype, device=device)
    # Create a grid of positions
    h_positions, w_positions = torch.meshgrid(h_grid, w_grid, indexing="ij")
    return torch.stack((h_positions, w_positions), dim=-1)


def apply_axial_rope(
    x: torch.Tensor, theta: torch.Tensor, conjugate: bool = False
) -&gt; None:
    # Ensure the operations are performed in float32
    dtype = reduce(torch.promote_types, (x.dtype, theta.dtype, torch.float32))
    # Ensure that the dimensions of x and theta are compatible
    dim = theta.shape[-1]
    assert dim * 2 &lt;= x.shape[-1], f"x must have at least {2 * dim} channels"
    # Extract tensor components and ensure they have the correct dtype
    x_1, x_2, x_3 = x[..., :dim], x[..., dim : dim * 2], x[..., dim * 2 :]
    x_1, x_2, theta = map(lambda t: t.to(dtype), (x_1, x_2, theta))
    # Compute the rotation angles
    cos, sin = theta.cos(), theta.sin()
    sin = -sin if conjugate else sin
    # Rotate the tensors
    x_1 = (cos * x_1 - sin * x_2).to(x.dtype)
    x_2 = (sin * x_1 + cos * x_2).to(x.dtype)
    return torch.cat((x_1, x_2, x_3), dim=-1)


class AxialRoPE(nn.Module):
    def __init__(self, dim: int, n_heads: int) -&gt; None:
        super().__init__()
        freqs_min, freqs_max = math.log(math.pi), math.log(10.0 * math.pi)
        freqs = torch.linspace(freqs_min, freqs_max, n_heads * dim // 4 + 1)[:-1].exp()
        freqs = freqs.view(n_heads, dim // 4).contiguous()
        self.register_buffer("freqs", freqs)

    def forward(
        self, q: torch.Tensor, k: torch.Tensor, positions: torch.Tensor
    ) -&gt; torch.Tensor:
        # Compute the rotation angles
        freqs = self.freqs.to(positions.dtype)
        h_theta = positions[..., None, 0:1] * freqs
        w_theta = positions[..., None, 1:2] * freqs
        theta = torch.cat((h_theta, w_theta), dim=-1)
        theta = rearrange(theta, "... x y h d -&gt; ... h x y d")
        # Apply the RoPE to the queries and keys
        q = apply_axial_rope(q, theta)
        k = apply_axial_rope(k, theta)
        return q, k


if __name__ == "__main__":
    with torch.device("cuda:0" if torch.cuda.is_available() else "cpu"):
        positions = make_axial_positions(32, 32)
        q = torch.randn(1, 8, 32, 32, 64, requires_grad=True)
        k = torch.randn(1, 8, 32, 32, 64, requires_grad=True)
        axial_rope = AxialRoPE(64, 8)
        q, k = axial_rope(q, k, positions)

        # Test the backward pass
        q.sum().backward()
        k.sum().backward()

        positions = make_axial_positions(32, 32)
        q = torch.randn(1, 8, 32, 32, 64, requires_grad=True)
        k = torch.randn(1, 8, 32, 32, 64, requires_grad=True)
        axial_rope_compiled = torch.compile(AxialRoPE(64, 8))
        q, k = axial_rope_compiled(q, k, positions)

        # Test the backward pass for the compiled version
        q.sum().backward()
        k.sum().backward()
</code></pre>
<p>Here are my environment details:</p>
<pre><code>PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 23.10 (x86_64)
GCC version: (Ubuntu 13.2.0-4ubuntu3) 13.2.0
Clang version: Could not collect
CMake version: version 3.20.3
Libc version: glibc-2.38
Python version: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.5.0-27-generic-x86_64-with-glibc2.38
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-14900KF
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 20%
CPU max MHz: 6000.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.15.1+torch220cu121
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==1.9.5
[pip3] rotary-embedding-torch==0.5.3
[pip3] torch==2.2.1
[pip3] torchaudio==2.2.1
[pip3] torchdiffeq==0.2.3
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.3.2
[pip3] torchsde==0.2.6
[pip3] torchvision==0.17.1
[pip3] triton==2.2.0
[conda] clip-anytorch 2.6.0 pypi_0 pypi
[conda] dctorch 0.1.2 pypi_0 pypi
[conda] natten 0.15.1+torch220cu121 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-lightning 1.9.5 pypi_0 pypi
[conda] rotary-embedding-torch 0.5.3 pypi_0 pypi
[conda] torch 2.2.1 pypi_0 pypi
[conda] torchaudio 2.2.1 pypi_0 pypi
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchmetrics 1.3.2 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchvision 0.17.1 pypi_0 pypi
[conda] triton 2.2.0 pypi_0 pypi
</code></pre>
|
<python><pytorch><autograd>
|
2024-04-19 07:16:17
| 1
| 471
|
Kinyugo
|
78,351,628
| 10,216,112
|
CMake error: Cannot find python executable
|
<p>I have a CMake file which searches for a python executable as follows:</p>
<pre><code>find_program(PYTHON python)

if(NOT PYTHON)
    if(MSVC)
        set(PYTHON "python3.12")
    else()
        message(FATAL_ERROR "could not find python executable")
    endif()
endif()

get_filename_component(PYTHON_PATH "${PYTHON}" DIRECTORY)
</code></pre>
<p>While generating the CMake cache for the above script, I get an error in the build logs:</p>
<pre><code>Severity Code Description Project File Line Suppression State
Error CMake Error at C:/Users/himanshu/CMCommon/index_common.cmake:545 (message):
could not find python executable	C:/Users/himanshu/CMCommon/index_common.cmake	545
</code></pre>
<p>I can see that the python executable is present in my PATH environment variable as well.</p>
<p><a href="https://i.sstatic.net/w0QDa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w0QDa.png" alt="enter image description here" /></a></p>
<p>How can I debug the above error?</p>
<p><strong>EDIT:</strong> Posting the error that I got after executing the cmake command given by OrenishShalom</p>
<pre><code>CMake Error at CMakeLists.txt:2 (project):
The CMAKE_C_COMPILER:
cl
is not a full path and was not found in the PATH.
To use the NMake generator with Visual C++, cmake must be run from a shell
that can use the compiler cl from the command line. This environment is
unable to invoke the cl compiler. To fix this problem, run cmake from the
Visual Studio Command Prompt (vcvarsall.bat).
Tell CMake where to find the compiler by setting either the environment
variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
the compiler, or to the compiler name if it is in the PATH.
</code></pre>
<p>If I type just the cmake command in a shell, it works fine, but I don't know why it gives the above error here.</p>
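<p>In case it helps to compare: rather than a bare <code>find_program(PYTHON python)</code>, CMake's built-in <code>FindPython3</code> module (available since CMake 3.12) handles registry and PATH lookup itself on Windows. A sketch of the equivalent logic:</p>

```cmake
# Sketch: let CMake's FindPython3 module locate the interpreter.
find_package(Python3 COMPONENTS Interpreter REQUIRED)
message(STATUS "Python interpreter: ${Python3_EXECUTABLE}")
get_filename_component(PYTHON_PATH "${Python3_EXECUTABLE}" DIRECTORY)
```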
|
<python><visual-studio><cmake>
|
2024-04-19 06:09:41
| 1
| 7,901
|
Himanshu Poddar
|
78,351,590
| 9,522,658
|
sqlalchemy-file package FileField is not allowing to give custom filename, automatically saves the file as unnamed
|
<p>I am using the sqlalchemy-file package's FileField to save files in local storage. The files are saved, but I cannot give them a custom filename; each file is automatically saved as "unnamed". I need to fix that.</p>
<p>I am working on FastAPI project and using file variable as shown below to save files</p>
<pre><code>from sqlalchemy_file import FileField

class UserDoc(Base):
    __tablename__ = "user_doc"

    id = Column(Integer, primary_key=True)
    file = Column(FileField)
</code></pre>
<p>this is how I save the file</p>
<pre><code>@router.post(
    "doc_upload"
)
async def document_upload(
    user_file: UploadFile = File(...),
    db: Session = Depends(deps.get_db),
):
    try:
        # code logic
        file_contents = await user_file.read()
        document_data = {"file": file_contents}
        db_document = UserDocument(**document_data)
        db.add(db_document)
        db.commit()
        db.refresh(db_document)
    except Exception as _:
        # exception logic
</code></pre>
<p>the file contents are stored in the database as</p>
<pre><code>{
    "content_path": null,
    "filename": "unnamed",
    "content_type": "application/octet-stream",
    "size": 267226,
    "files": [
        "upload_folder/121e7cbf-f19c-4538-8fcf-2f323f31e53e"
    ],
    "file_id": "121e7cbf-f19c-4538-8fcf-2f323f31e53e",
    "upload_storage": "upload_folder",
    "uploaded_at": "2024-04-19T05:21:35.429745",
    "path": "upload_folder/121e7cbf-f19c-4538-8fcf-2f323f31e53e",
    "url": "/base_path/upload_folder/121e7cbf-f19c-4538-8fcf-2f323f31e53e",
    "saved": true
}
</code></pre>
<p>So when I try to download the file, it comes back as "unnamed"; I want to save it with a proper name.</p>
|
<python><file><file-upload><sqlalchemy><fastapi>
|
2024-04-19 05:58:48
| 1
| 570
|
Avin Mathew
|
78,351,569
| 51,816
|
Aligning/syncing one audio file to another using Python
|
<p>I have two audio files: one recorded with a laptop mic and one with an external mic. The laptop mic recording starts after the external mic one; the time difference could be 2-60 seconds.</p>
<p>So I wrote this code, which is pretty accurate, but when I line up both audio tracks (the one from the video, which uses the laptop mic, and the newly adjusted audio), there is still about a 50 ms delay. Why could this be?</p>
<p><a href="https://i.sstatic.net/z7Dsj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z7Dsj.png" alt="enter image description here" /></a></p>
<p>Sometimes it seems there is positive delay, and sometimes negative.
<a href="https://i.sstatic.net/6eAVm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6eAVm.png" alt="enter image description here" /></a></p>
<pre><code>from scipy.signal import correlate
from scipy.signal import fftconvolve
from scipy import signal
from pydub import AudioSegment
from pydub.utils import mediainfo
import numpy as np


def findOffset(audio1, audio2):
    correlation = signal.correlate(audio2, audio1, mode="full")
    lags = signal.correlation_lags(audio2.size, audio1.size, mode="full")
    lag = lags[np.argmax(correlation)]
    return lag


def adjustAudio(audio_segment, lag, frame_rate):
    # Convert lag from samples to milliseconds, rounding to nearest integer at the last step
    ms_lag = round((lag / frame_rate) * 1000)
    if lag &gt; 0:
        # Audio needs to start later: pad audio at the beginning
        silence = AudioSegment.silent(duration=ms_lag, frame_rate=frame_rate)
        adjusted_audio = silence + audio_segment
    else:
        # Audio needs to start earlier: trim audio from the beginning
        adjusted_audio = audio_segment[abs(ms_lag):]  # Use abs to convert negative lag to positive
    return adjusted_audio


def alignAudioTrack(audioFile, newAudioFile, lag):
    audio_data, rate, audio_segment = loadAudio(audioFile, return_segment=True)
    # Adjust the AudioSegment based on lag, ensuring frame_rate is passed correctly
    adjusted_audio = adjustAudio(audio_segment, lag, rate)
    # Fetch original bitrate
    bitrate = mediainfo(audioFile)['bit_rate']
    # Save the adjusted audio preserving the original bitrate
    adjusted_audio.export(newAudioFile, format="mp3", bitrate=bitrate)


audio1, rate1 = loadAudio(audioFile1)
audio2, rate2 = loadAudio(audioFile2)

lag = findOffset(audio1, audio2)

alignedAudioFile = os.path.join(newAudioDir, f"{baseName}_aligned.mp3")
alignAudioTrack(origAudioFile, alignedAudioFile, lag)
</code></pre>
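<p>A likely source of the residual offset is the millisecond rounding in <code>adjustAudio</code>: pydub slicing and <code>AudioSegment.silent</code> work at millisecond granularity, and MP3 adds its own frame-sized granularity (an MPEG-1 Layer III frame is 1152 samples, about 26 ms at 44.1 kHz). A sample-accurate sketch of the shift, operating directly on the raw sample array (mono samples assumed; loading and exporting are left out):</p>

```python
import numpy as np

def align_samples(audio, lag, dtype=None):
    """Shift a 1-D sample array by `lag` samples.

    Positive lag pads silence at the start (delay); negative lag trims
    samples from the start (advance). No millisecond rounding involved.
    """
    audio = np.asarray(audio, dtype=dtype)
    if lag > 0:
        return np.concatenate([np.zeros(lag, dtype=audio.dtype), audio])
    if lag < 0:
        return audio[-lag:]
    return audio
```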
|
<python><audio><scipy><signal-processing><correlation>
|
2024-04-19 05:52:40
| 0
| 333,709
|
Joan Venge
|
78,351,491
| 16,723,655
|
How numpy array full show in jupyter notebook without wrapping?
|
<p>I already referred to <a href="https://stackoverflow.com/questions/55466277/how-to-print-the-full-numpy-array-without-wrapping-in-jupyter-notebook">'How to print the full NumPy array without wrapping (in Jupyter Notebook)'</a>.</p>
<p>However, my array size is 256x256.</p>
<p>Therefore, it is automatically wrapping in jupyter as below.</p>
<p><a href="https://i.sstatic.net/JRlnQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JRlnQ.png" alt="enter image description here" /></a></p>
<p>My array is based on an image and consists only of 0s and 1s.</p>
<p>I just want to see the image's shape as 0s and 1s, as shown below.</p>
<p>I also tried the code below.</p>
<pre><code>np.set_printoptions(threshold = np.inf)
np.set_printoptions(linewidth = np.inf)
</code></pre>
<p><a href="https://i.sstatic.net/iyUy1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iyUy1.png" alt="enter image description here" /></a> Reference for <a href="https://stackoverflow.com/q/51205502/8508004">that post is here</a>.</p>
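<p>Since the goal is just to see the 0/1 "image" as digits, another option is to bypass NumPy's <code>repr</code> entirely and format the rows yourself (a sketch; <code>show_binary</code> is my own helper name):</p>

```python
import numpy as np

def show_binary(arr):
    """Render a 2-D 0/1 array as rows of digits, one line per row."""
    return "\n".join("".join(str(int(v)) for v in row) for row in arr)

a = np.eye(4, dtype=int)  # toy 4x4 stand-in for the 256x256 mask
print(show_binary(a))
# 1000
# 0100
# 0010
# 0001
```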
|
<python><numpy><jupyter-notebook>
|
2024-04-19 05:19:01
| 1
| 403
|
MCPMH
|
78,351,381
| 15,233,792
|
How to measure Flask API response time and record them in logs?
|
<p>I have a logging setup for my Flask API:</p>
<pre class="lang-py prettyprint-override"><code>import logging

def setup_logger(logp=None, debug=False):
    log_level = logging.DEBUG if debug else logging.INFO
    form = "[%(asctime)s][%(name)s][%(levelname)s][%(filename)s] %(message)s"
    datefmt = "%Y-%m-%d-%H:%M:%S"
    logging.basicConfig(level=log_level, format=form, datefmt=datefmt)
    if logp is not None:
        fhandler = logging.StreamHandler(open(logp, 'a'))
        fhandler.setFormatter(logging.Formatter(form, datefmt))
        logging.root.addHandler(fhandler)

logger = logging.getLogger("MY_APP")
setup_logger()
logger.debug(f"Import {__file__}")
</code></pre>
<p>And the logs look like:</p>
<pre><code>[2024-04-19-04:20:02][werkzeug][INFO][_internal.py] 172.23.0.2 - - [19/Apr/2024 04:20:02] "GET /available/space HTTP/1.1" 200 -
</code></pre>
<p>However, I want to measure my Flask API's response time and record it in the logs as well, e.g. <code>0.45 s</code>, in order to see which API handler is slow.</p>
<p>The expecting behavior is like this:</p>
<pre><code>[2024-04-19-04:20:02][werkzeug][INFO][_internal.py][0.45 s] 172.23.0.2 - - [19/Apr/2024 04:20:02] "GET /available/space HTTP/1.1" 200 -
</code></pre>
<p>Is there any method to conduct that? Thanks</p>
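<p>For reference, the timing itself only needs <code>time.perf_counter</code>. Below is a framework-agnostic sketch using only the stdlib (the route in the comment is illustrative; Flask's documented <code>before_request</code>/<code>after_request</code> hooks would let you do this once for every handler instead of per route):</p>

```python
import functools
import logging
import time

logger = logging.getLogger("MY_APP")

def timed(func):
    """Log how long the wrapped handler took, in seconds."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            logger.info("[%.2f s] %s", elapsed, func.__name__)
    return wrapper

# Illustrative usage on a route:
# @app.route("/available/space")
# @timed
# def available_space():
#     ...
```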
|
<python><flask><logging>
|
2024-04-19 04:40:42
| 1
| 2,713
|
stevezkw
|
78,351,374
| 2,444,008
|
Django " [ was not closed " error in url routing
|
<p>I'm new to Python and trying to learn from tutorials. In one of the many tutorials I follow, for URL routing in Django we create urls.py and put our routing logic inside that file, as below:</p>
<pre><code>urlpatterns = [
    path=("",views.index),
    path=("/index",views.index)
]
</code></pre>
</code></pre>
<p>In every tutorial that syntax works, but in my example it throws an error, as below:
<a href="https://i.sstatic.net/fFYe6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fFYe6.png" alt="enter image description here" /></a></p>
<p>I did not understand why it is showing me error and not for others.</p>
<p>I have also tried assigning the path call to a variable, and after that it worked.
<a href="https://i.sstatic.net/CtcNm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CtcNm.png" alt="enter image description here" /></a></p>
<p>Am I missing a syntax rule or something?</p>
|
<python><django>
|
2024-04-19 04:34:39
| 1
| 1,093
|
ftdeveloper
|
78,351,077
| 20,122,390
|
Is asyncio in Python user-level threading model, cooperative scheduling?
|
<p>I have been working with asyncio in Python for a long time; however, I would like to clarify some thoughts on how asyncio actually works. I will break down my thoughts so that I can give context and you can correct me if there are errors in my premises.
I understand that Python threads follow a kernel-level threading model, but with the GIL: each thread created in a Python program maps to an OS thread, but due to the GIL only one of those threads runs Python bytecode at a time. I also understand that the only way to get multiple threads in a Python program is through the "threading" module; that is, a normal Python program that does not use this module is simply a process with a single running thread.
Then the asyncio library arrives. My question is whether asyncio is an implementation of the user-level threading model with cooperative scheduling. The event loop manages all user threads (coroutines), and each of these coroutines cooperates, since through await it determines when it returns control to the scheduler. Additionally, all of these user threads are mapped to a single OS thread. Am I right? Is this how it works?</p>
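<p>(For what it's worth, a tiny script makes the single-thread, cooperative behaviour described above observable: every coroutine step runs on the same OS thread, and control only changes hands at <code>await</code> points.)</p>

```python
import asyncio
import threading

async def worker(name, log):
    for i in range(2):
        log.append((name, i, threading.get_ident()))
        await asyncio.sleep(0)   # cooperative yield back to the event loop

async def main():
    log = []
    await asyncio.gather(worker("a", log), worker("b", log))
    return log

log = asyncio.run(main())
thread_ids = {t for _, _, t in log}
print(len(thread_ids))        # 1: every coroutine step ran on the same OS thread
print([e[:2] for e in log])   # "a" and "b" interleave at the await points
```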
|
<python><python-asyncio><python-multithreading>
|
2024-04-19 02:28:17
| 1
| 988
|
Diego L
|