| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (string date, 2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
78,024,389
| 651,174
|
How to insert element with text?
|
<p>I can insert an element like this:</p>
<pre><code>product_node.insert(cnt, etree.Element("vod_type"))
</code></pre>
<p>Is it possible to do the following:</p>
<pre><code>product_node.insert(cnt, etree.Element("vod_type").text="hello")
</code></pre>
<p>Or something of the sort. Or do I have to split it up into three lines?</p>
<pre><code>elem = etree.Element("vod_type")
elem.text = "hello"
product_node.insert(cnt, elem)
</code></pre>
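<p>Hedged aside: one way to keep the insert to a single call site is a tiny helper. A sketch using the stdlib <code>xml.etree.ElementTree</code>, which shares lxml's Element API (<code>make_elem</code> is a made-up helper name, not an lxml function; lxml users can also look at <code>lxml.builder.E</code>):</p>

```python
import xml.etree.ElementTree as ET

def make_elem(tag, text):
    # build an element and set its text in one expression at the call site
    e = ET.Element(tag)
    e.text = text
    return e

product_node = ET.Element("product")
product_node.insert(0, make_elem("vod_type", "hello"))
```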
|
<python><python-3.x><xml><lxml>
|
2024-02-20 01:51:33
| 2
| 112,064
|
David542
|
78,024,250
| 10,975,845
|
Difference Between Linters, Sanitizers and Analyzers
|
<p>What is the difference between linters, sanitizers and analyzers?
And what are some examples of linters, sanitizers and analyzers for Python?
Also, are they dependent on the IDE that you are using? I am currently using neovim and I have downloaded a few of what I thought were linters. But now, when searching for analyzers, I found that most of them weren't linters but analyzers, like pylint, pyflakes, mypy.</p>
<p>I am following this tutorial:
<a href="https://youtu.be/vdn_pKJUda8?t=3649&feature=shared" rel="nofollow noreferrer">https://youtu.be/vdn_pKJUda8?t=3649&feature=shared</a></p>
<p>Thanks!</p>
|
<python><linter><eclipse-memory-analyzer><sanitizer>
|
2024-02-20 00:51:51
| 1
| 344
|
Allie
|
78,024,239
| 2,152,371
|
Python Program not listening to keystrokes when Active Window is External Program (Linux)
|
<p>I'm trying to have a Python program listen for keystrokes sent to a program that is launched as a subprocess (Popen).</p>
<p>The problem is that when the correct window is focused nothing happens. No acknowledgement that a key has been pressed. If I click off from the window to something else it works.</p>
<p>This listening is meant to debug a problem of not being able to send keystrokes to the program - pyautogui, pynput, keyboard, even <code>Popen.communicate</code> do not work. The keystrokes sent are ignored by the Python script, even if I enter them manually on the keyboard.</p>
<p>I'm running Retroarch through Lutris. My computer's OS is Ubuntu Linux 22.04, and the display server is Wayland. Below is the code:</p>
<pre><code>from pynput import keyboard
import subprocess, time

def on_press(key):
    print(key)
    return False  # stop listener; remove this if want more keys

lutris = f'LUTRIS_SKIP_INIT=1 lutris/bin/lutris lutris:rungameid/3'
p = subprocess.Popen(lutris, shell=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
time.sleep(5)

listener = keyboard.Listener(on_press=on_press)
listener.start()  # start to listen on a separate thread
listener.join()   # remove if main thread is polling self.keys
</code></pre>
<p>I expect the cited block of code to wait for a keystroke, and exit once one is received.</p>
<p>If I have the Retroarch/Lutris window focused (active) all keystrokes are ignored.</p>
|
<python><input><automation><listener>
|
2024-02-20 00:46:10
| 1
| 470
|
Miko
|
78,024,121
| 497,403
|
Cannot convince Pytorch to install with Cuda (Windows 11)
|
<p>I am trying to install PyTorch with Cuda using Anaconda3, on Windows 11:</p>
<ul>
<li>My GPU is RTX <strong>3060</strong>.</li>
<li>My conda environment is Python <strong>3.10.13</strong>.</li>
<li>nvidia-smi outputs Driver Version: <strong>551.23</strong>, CUDA Version: <strong>12.4</strong>.</li>
</ul>
<p>What I tried:</p>
<ul>
<li>Following the instructions on <a href="https://pytorch.org/get-started/locally/" rel="nofollow noreferrer">https://pytorch.org/get-started/locally/</a>, picking CUDA 12.1 (and 11.8)</li>
<li>Reading through a bunch of posts on <a href="https://discuss.pytorch.org/" rel="nofollow noreferrer">https://discuss.pytorch.org/</a> (including <a href="https://discuss.pytorch.org/t/torch-cuda-is-available-gives-false/197333" rel="nofollow noreferrer">https://discuss.pytorch.org/t/torch-cuda-is-available-gives-false/197333</a>)</li>
<li>Reading through a bunch of posts on SO (e.g. <a href="https://stackoverflow.com/questions/73250147/pytorch-cuda-is-not-available">PyTorch: CUDA is not available</a>)</li>
</ul>
<p>Many articles say that I don't need an independently installed CUDA toolkit, so I uninstalled the system-wide version - but it predictably didn't make a difference. No matter what I try, <code>torch.cuda.is_available()</code> is always <code>False</code>.</p>
<p>I think my conclusion is that the nVidia driver must match the version of CUDA, but for me it's counterintuitive. I've always trained myself to install the latest and greatest nVidia driver (I use my machine for gaming as much as I want to use it for ML) - is my understanding correct, and the only way to get PyTorch to work with CUDA is to <em>downgrade</em> my GPU driver from 551.23 to something that supports CUDA 12.1?</p>
|
<python><windows><pytorch><anaconda>
|
2024-02-19 23:56:12
| 2
| 5,708
|
David Airapetyan
|
78,024,054
| 18,247,317
|
Convert SVG with embedded image into PNG with Python
|
<p>My aim is to create PNG images from an SVG with an embedded (as a link) image.
I split the process in two: first create the SVGs, then convert them to PNG.
The code below successfully creates an SVG file for each of the listed paths of a source SVG containing a linked image. It also correctly inserts the original image into the new files; however, each of the converted PNGs contains only the path, without the image.
Embedding the image as base64 is not an option as it is very heavy: I have already tried, and the error was that it was too big (~13 million columns on one line). Is there any other way to achieve the result?</p>
<pre><code>import os
from shutil import copyfile
from lxml import etree

os.environ['path'] += r';C:\Program Files\Inkscape'  # only way I found to have cairo work
import cairosvg

def create_svg_from_path(svg_path, output_folder):
    tree = etree.parse(svg_path)
    root = tree.getroot()
    width = root.get('width')
    height = root.get('height')
    image = root.find('.//{http://www.w3.org/2000/svg}image')
    if image is not None:
        # Get image attributes
        image_width = image.get('width')
        image_height = image.get('height')
        image_x = image.get('x')
        image_y = image.get('y')
        image_href = image.get('{http://www.w3.org/1999/xlink}href')
        image_filename = os.path.basename(image_href)
        target_image_path = os.path.join(output_folder, image_filename)
        if not os.path.exists(target_image_path):
            # Assuming the script is in the same directory as the image file
            source_image_path = os.path.join(os.path.dirname(os.path.abspath(svg_path)), image_filename)
            copyfile(source_image_path, target_image_path)
    for path in root.findall('.//{http://www.w3.org/2000/svg}path'):
        # Get the path ID
        path_id = path.get('id')
        output_svg_filename = os.path.join(output_folder, f"{path_id}.svg")
        new_root = etree.Element('svg', version="1.0", width=width, height=height, xmlns="http://www.w3.org/2000/svg")
        new_image = etree.SubElement(new_root, 'image', attrib={
            'x': image_x, 'y': image_y,
            'width': image_width, 'height': image_height,
            '{http://www.w3.org/1999/xlink}href': f"{image_filename}",
            'preserveAspectRatio': 'none'
        })
        group = etree.SubElement(new_root, 'g')
        new_path = etree.SubElement(group, 'path')
        new_path.attrib.update(path.attrib)
        with open(output_svg_filename, 'wb') as file:
            file.write(etree.tostring(new_root, pretty_print=True, xml_declaration=True, encoding='utf-8'))
        # Convert the SVG file to PNG
        output_png_filename = os.path.join(output_folder, f"{path_id}.png")
        cairosvg.svg2png(url=output_svg_filename, write_to=output_png_filename)

if __name__ == "__main__":
    input_svg_file = "file.svg"
    output_folder = "output_svgs"
    if not os.path.exists(output_folder):
        os.makedirs(output_folder)
    create_svg_from_path(input_svg_file, output_folder)
</code></pre>
|
<python><svg>
|
2024-02-19 23:27:20
| 0
| 491
|
Oran G. Utan
|
78,024,045
| 784,044
|
Docker container build is failing in Gitlab CI
|
<p>I have a container that has no issues building locally; however when I try to build it in a Gitlab CI pipeline, I get the following error:</p>
<p><code>ERROR: failed to solve: failed to solve with frontend dockerfile.v0: failed to solve with frontend gateway.v0: rpc error: code = Unknown desc = failed to build LLB: executor failed running [/bin/sh -c python3 -m pip install -r requirements.txt]: runc did not terminate sucessfully</code></p>
<p>Here is what I have in my Dockerfile:</p>
<pre><code>ARG PYTHON_VERSION=3.9.0
FROM python:3

# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1

# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1

WORKDIR /app

# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#user
ARG UID=10001
RUN adduser \
    --disabled-password \
    --gecos "" \
    --home "/nonexistent" \
    --shell "/sbin/nologin" \
    --no-create-home \
    --uid "${UID}" \
    appuser

# Copy the source code into the container.
COPY . .

# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy it
# into this layer.
RUN --mount=type=cache,target=/root/.cache/pip \
    --mount=type=bind,source=requirements.txt,target=requirements.txt \
    python3 -m pip install -r requirements.txt

# Switch to the non-privileged user to run the application.
USER appuser

# Expose the port that the application listens on.
#EXPOSE 8000

# Run the application.
CMD python3 main.py
</code></pre>
<p>Here's what I have in my .gitlab-ci.yml:</p>
<pre><code>stages:
  - deploy

publish:
  stage: deploy
  image:
    name: docker:latest
  services:
    - docker:19-dind
  before_script:
    - apk add --no-cache curl jq g++ make python3 py3-pip
    - pip install awscli --break-system-packages
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - aws --version
    - docker info
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:$CI_PIPELINE_IID .
    - docker push $DOCKER_REGISTRY/$APP_NAME:$CI_PIPELINE_IID
</code></pre>
<p>Any help would be greatly appreciated.</p>
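<p>Hedged aside: this kind of opaque <code>runc</code>/LLB failure often traces back to the dind service being unreachable or the runner not being privileged, rather than the Dockerfile itself. A commonly used docker-in-docker setup, shown as a sketch (these are the documented GitLab defaults, not a confirmed fix for this pipeline; the runner also needs <code>privileged = true</code> in its <code>config.toml</code>):</p>

```yaml
publish:
  image: docker:latest
  services:
    - docker:dind            # a current dind tag instead of the old docker:19-dind
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
```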
|
<python><docker><gitlab-ci>
|
2024-02-19 23:23:38
| 0
| 886
|
Sidd Menon
|
78,023,985
| 5,615,873
|
How can I disable OpenCV writing image windows position and size to Windows Registry in Python?
|
<p>Every time I show an image with OpenCV (cv2) and use an image window label with <code>cv2.imshow(label, image)</code>, OpenCV creates or updates keys with that window label and the image position and size as values in Windows Registry. The key <strong>is HKEY_CURRENT_USER\SOFTWARE\OpenCV\HighGUI\Windows</strong>. (See screenshot below.)</p>
<p>Is there a way to stop OpenCV from doing this?</p>
<p>(My opencv-python version is 4.9.0.80.)</p>
<p><a href="https://i.sstatic.net/4E1yj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4E1yj.png" alt="enter image description here" /></a></p>
<h2>Update</h2>
<p>Note: I was asked to check if the question "How to combine imshow() and moveWindow() in openCV with python?" could solve my problem. I can't see how the suggested question is related to mine. (Moving an image window does not solve the problem of window names being written to the Windows registry.)</p>
|
<python><windows><opencv><user-interface><registry>
|
2024-02-19 23:02:33
| 1
| 3,537
|
Apostolos
|
78,023,800
| 4,269,851
|
Python dynamic argument processing inside function
|
<p>I have the following code, which is not working.</p>
<pre><code>def myFunction(*args):
    total_lists = len(args)
    for n in range(total_lists):
        print(list(zip(args[0], args[1])))

myFunction([1,2,3],["a", "b", "c"])
</code></pre>
<p>Basically the function receives an unknown number of arguments as <code>lists</code> with an equal number of values, and needs to output every list line by line, like so:</p>
<pre><code>1,a
2,b
3,c
</code></pre>
<p>if called with three arguments then</p>
<pre><code>myFunction([1,2,3],["a", "b", "c"], ["one", "two", "three"])
</code></pre>
<p>should output</p>
<pre><code>1,a,one
2,b,two
3,c,three
</code></pre>
<p>I don't have control over how many lists will be passed to the function, so I can't combine them prior to sending; I can only process them inside the function. How do I make it work dynamically?</p>
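<p>Hedged aside: the usual trick is to unpack the argument tuple into <code>zip</code>, which pairs up the n-th items of every list no matter how many lists were passed. A minimal sketch (<code>my_function</code> mirrors the question's <code>myFunction</code>):</p>

```python
def my_function(*args):
    # zip(*args) transposes: it yields one tuple per "row" across all the lists
    for row in zip(*args):
        print(",".join(map(str, row)))

my_function([1, 2, 3], ["a", "b", "c"], ["one", "two", "three"])
```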
|
<python><function><loops><for-loop><tuples>
|
2024-02-19 22:07:29
| 3
| 829
|
Roman Toasov
|
78,023,739
| 13,135,901
|
TypeError: Abstract models cannot be instantiated (Django)
|
<p>I have abstract models <code>A</code> and <code>B</code> and a child model <code>C</code>.</p>
<pre><code>class A(models.Model):
    field1 = models.CharField(max_length=24)

    class Meta:
        abstract = True


class B(models.Model):
    field1 = models.CharField(max_length=24)

    class Meta:
        abstract = True

    def make_A(self):
        A(field1=self.field1).save()


class C(B):
    pass
</code></pre>
<p>If I try to run <code>C.make_A()</code> I get a <code>TypeError: Abstract models cannot be instantiated</code> error, so I am forced to re-write the <code>make_A</code> method on the child. Is there a way to write a method that instantiates <code>A</code> under the abstract <code>B</code>?</p>
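<p>Hedged aside: since an abstract model can never be instantiated, one common pattern is to let each concrete child tell the base which concrete class to build via a class attribute. A plain-Python sketch of the pattern, with no Django involved (<code>a_model</code> and <code>AConcrete</code> are made-up names for illustration):</p>

```python
class B:
    a_model = None  # concrete subclasses point this at a concrete "A" class

    def __init__(self, field1):
        self.field1 = field1

    def make_a(self):
        # refuse to run on classes that never chose a concrete target
        if self.a_model is None:
            raise TypeError("make_a() needs a concrete a_model on the subclass")
        return self.a_model(field1=self.field1)

class AConcrete:
    def __init__(self, field1):
        self.field1 = field1

class C(B):
    a_model = AConcrete

c = C("x")
a = c.make_a()
```

In Django terms, the same idea would mean declaring a concrete subclass of <code>A</code> and referencing it from the child of <code>B</code>.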
|
<python><django>
|
2024-02-19 21:57:11
| 1
| 491
|
Viktor
|
78,023,622
| 2,263,683
|
Mock a module property
|
<p>I'm trying to mock a property which is at the module root:</p>
<p><code>some_module.py</code></p>
<pre><code>@property
def conn(self):
    if not get_request():
        raise SomeError(
            "Some error happened"
        )
    return get_request().connection


Database.conn = conn
</code></pre>
<p>I've tried many different ways like:</p>
<pre><code>@mock.patch("some_module.conn")
def test_some_test(mock_conn):
    mock_conn.return_value = mock.Mock()
</code></pre>
<p>..</p>
<pre><code>@mock.patch("some_module.conn", new_callable=PropertyMock)
def test_some_test(mock_conn):
    mock_conn.return_value = mock.Mock()
</code></pre>
<p>..</p>
<pre><code>@mock.patch("some_module.conn", new_callable=PropertyMock, return_value=mock.Mock())
def test_some_test():
    ...
</code></pre>
<p>But I wasn't successful. I know that some of those approaches would work if the property belonged to a class like this:</p>
<pre><code>class MyClass:
    @property
    def conn(self):
        if not get_request():
            raise SomeError(
                "Some error happened"
            )
        return get_request().connection
</code></pre>
<p>but in my case the property has to be at the module's root, and now I'm struggling with mocking it. Does anyone have any idea how to mock that?</p>
<p><strong>Update</strong>:</p>
<p>Considering <code>conn</code> as a variable and trying to mock it like a global variable didn't work either:</p>
<pre><code>@mock.patch("some_module.conn", mock.Mock())
def test_some_test():
    ...
</code></pre>
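<p>Hedged aside: since the <code>@property</code> in the snippet is ultimately attached to <code>Database</code> (the module-level function is just plumbing), one approach is to patch the class attribute rather than the module name. A minimal self-contained sketch (<code>Database</code> here is a stand-in class, not the one from the question's codebase):</p>

```python
from unittest import mock
from unittest.mock import PropertyMock

class Database:
    @property
    def conn(self):
        # stands in for the real getter that needs a live request
        raise RuntimeError("no request in flight")

with mock.patch.object(Database, "conn", new_callable=PropertyMock, return_value="fake-conn"):
    result = Database().conn  # the PropertyMock answers instead of the real getter
```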
|
<python><mocking><pytest>
|
2024-02-19 21:31:02
| 0
| 15,775
|
Ghasem
|
78,023,568
| 52,917
|
how can I exclude a pydantic field from the schema?
|
<p>Some of the fields in a pydantic class are actually internal representations and not something I want to serialize or put in a schema. I am trying various methods to exclude them but nothing seems to work.</p>
<p>Here's one approach where I use the <code>exclude=True</code> and <code>exclude_schema=True</code> arguments of a <code>Field</code>:</p>
<pre class="lang-py prettyprint-override"><code>class MyItem(BaseModel):
    name: str = Field(...)
    data: str = Field(...)


class MyObject(BaseModel):
    items_dict: dict[str, MyItem] = Field(..., exclude=True, exclude_schema=True)

    @property
    def items_list(self):
        return list(self.items_dict.values())

    @root_validator(pre=True)
    @classmethod
    def encode_decode_items(cls, values):
        items = values.get('items_dict')
        if isinstance(items, dict):
            values['items_dict'] = list(items.values())
        elif isinstance(items, list):
            values['items_dict'] = validate_items(items)
        return values


obj = MyObject(items_dict=[MyItem(name='item1', data='data1'), MyItem(name='item2', data='data2')])
print(json.dumps(obj.model_json_schema(), indent=2))
</code></pre>
<p>However, in the schema the field is still there:</p>
<pre><code>{
  "$defs": {
    "MyItem": {
      "properties": {
        "name": {
          "title": "Name",
          "type": "string"
        },
        "data": {
          "title": "Data",
          "type": "string"
        }
      },
      "required": [
        "name",
        "data"
      ],
      "title": "MyItem",
      "type": "object"
    }
  },
  "properties": {
    "items_dict": {
      "additionalProperties": {
        "$ref": "#/$defs/MyItem"
      },
      "exclude_schema": true,
      "title": "Items Dict",
      "type": "object"
    }
  },
  "required": [
    "items_dict"
  ],
  "title": "MyObject",
  "type": "object"
}
</code></pre>
|
<python><pydantic><pydantic-v2>
|
2024-02-19 21:16:46
| 1
| 2,449
|
Aviad Rozenhek
|
78,023,380
| 10,037,034
|
ImportError: cannot import name 'BaseOuputParser' from 'langchain.schema'
|
<p>Hello, I am trying to run the following code but I am getting an error:</p>
<pre><code>from langchain.schema import BaseOuputParser
</code></pre>
<p>Error;</p>
<blockquote>
<p>ImportError: cannot import name 'BaseOuputParser' from
'langchain.schema'</p>
</blockquote>
<p>My langchain version is ; <strong>'0.1.7'</strong></p>
|
<python><openai-api><langchain>
|
2024-02-19 20:39:19
| 2
| 1,311
|
Sevval Kahraman
|
78,023,374
| 735,926
|
How to check if a Python module with a custom __getattr__ has a given attribute?
|
<p>I implemented a custom <code>__getattr__</code> on <a href="https://github.com/bfontaine/Anycode/blob/d7442b42fabbc6922f06bda284e0844d79f4b0c3/anycode/__init__.py#L12" rel="nofollow noreferrer">a module</a> that retrieves a value and caches it with <code>setattr</code>. Here is the simplified version:</p>
<pre class="lang-py prettyprint-override"><code># mymodule.py
import sys

__this = sys.modules[__name__]


def __getattr__(name: str):
    print(f"Generating {name}")
    value = "some value"
    setattr(__this, name, value)
    return value
</code></pre>
<p>This works fine:</p>
<pre><code>>>> import mymodule
>>> mymodule.foo
Generating foo
'some value'
>>> mymodule.bar
Generating bar
'some value'
>>> mymodule.bar
'some value'
</code></pre>
<p>Note how <code>__getattr__</code> is called only on attributes that don’t already exist on the module. How can I identify these attributes? Python obviously knows about them, but I can’t get that information.</p>
<p><code>hasattr</code> doesn’t work, because under the hood it calls <code>getattr</code> which calls my function:</p>
<pre><code>>>> hasattr(mymodule, "foo")
True
>>> hasattr(mymodule, "something")
Generating something
True
</code></pre>
<p><code>"foo" in dir(mymodule)</code> works, but it requires to enumerate <em>all</em> attributes with <code>dir()</code> to test for a single one. One can delete attributes with <code>del mymodule.foo</code>, so keeping track of them in <code>__getattr__</code> using a global <code>set</code> doesn’t work (and having a custom <code>__delattr__</code> <a href="https://peps.python.org/pep-0726/" rel="nofollow noreferrer">is not supported yet</a>).</p>
<p>How can I test if "something" is an attribute of <code>mymodule</code> without calling <code>mymodule.__getattr__</code>?</p>
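<p>Hedged aside: one route that avoids triggering module <code>__getattr__</code> is to look in the module's namespace dict directly, since <code>__getattr__</code> only fires when normal dict lookup fails. A sketch using a throwaway module object (so it stands alone without <code>mymodule</code>):</p>

```python
import types

mod = types.ModuleType("mymodule_demo")
mod.foo = "some value"

# membership in the module's __dict__ does not invoke module __getattr__
has_foo = "foo" in vars(mod)            # True
has_something = "something" in vars(mod)  # False, and no attribute is generated
```

<p><code>vars(mod)</code> is the same object as <code>mod.__dict__</code>, so either spelling works; note this also reflects later <code>del mod.foo</code> deletions, unlike a hand-maintained set.</p>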
|
<python><python-module><getattr>
|
2024-02-19 20:38:10
| 1
| 21,226
|
bfontaine
|
78,023,207
| 12,178,630
|
Match close element pairs from two numpy arrays
|
<p>I have two numpy arrays. I would like a one-to-one mapping between the two arrays so that elements that are closest to each other get mapped to each other. I have written the following script that basically does that, but I do not like that the mapping is not deterministically unique; for some configurations it might print the same index twice.</p>
<pre><code>array1 = np.array([[324, 274], [542, 274], [99, 275]])
array2 = np.array([[571, 266], [67, 265], [320, 266]])

for i in range(len(array1)):
    print(np.argmin(np.linalg.norm(array2 - array1[i], axis=1)))
</code></pre>
<p>I am not yet sure how specifying the axis actually works in this context. I know that axis=0 means operating along rows and axis=1 along columns, but since it is a 2D array I am not sure how to interpret that.</p>
<p>I also thought about KD-trees, but I am not sure if using them would be the appropriate choice, or overkill for this task.</p>
<p>I would like to note that when I refer to the closest element, I mean in both dimensions, so it is essentially Euclidean distance. I would also appreciate it if someone could shed light on axis=1 and axis=0 in the code I provided, as well as getting the closest element considering only one axis, and zipping it all into a one-liner, without loops if possible.</p>
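<p>Hedged aside: with broadcasting, the whole pairwise distance matrix fits in one expression; <code>axis=2</code> then reduces over the (x, y) coordinate pair and <code>axis=1</code> picks the minimum over <code>array2</code>'s rows. For a guaranteed one-to-one assignment, <code>scipy.optimize.linear_sum_assignment</code> on this matrix is the standard tool; the numpy-only sketch below stops at per-row argmin (not guaranteed unique in general):</p>

```python
import numpy as np

array1 = np.array([[324, 274], [542, 274], [99, 275]])
array2 = np.array([[571, 266], [67, 265], [320, 266]])

# (3, 1, 2) - (1, 3, 2) broadcasts to (3, 3, 2): the difference for every pair of points
diff = array1[:, None, :] - array2[None, :, :]
dists = np.linalg.norm(diff, axis=2)   # reduce over the (x, y) coordinates -> shape (3, 3)
nearest = dists.argmin(axis=1)         # for each array1 row, index of the closest array2 row
```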
|
<python><arrays><numpy><scipy>
|
2024-02-19 19:52:44
| 1
| 314
|
Josh
|
78,022,620
| 8,474,284
|
systemd-python AttributeError: module 'systemd.journal' has no attribute 'JournaldLogHandler'
|
<p>I installed <code>systemd-python</code>:</p>
<p><code>pip install systemd-python</code></p>
<p>My python code:</p>
<pre><code>from systemd import journal
...
log = logging.getLogger('lookup')
log.addHandler(journal.JournaldLogHandler())
log.setLevel(logging.INFO)
</code></pre>
<p>I got this error:</p>
<pre><code>Traceback (most recent call last):
  File "/app/lookup.py", line 12, in <module>
    log.addHandler(journal.JournaldLogHandler())
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'systemd.journal' has no attribute 'JournaldLogHandler'. Did you mean: 'JournalHandler'?
</code></pre>
|
<python><pip>
|
2024-02-19 17:48:14
| 1
| 1,723
|
DaWe
|
78,022,553
| 17,654,424
|
Docker Image for displaying graphics (matplotlib, plotly)
|
<p>I'm trying to containerize my Python project, which uses matplotlib and plotly to display some graphics and tables.</p>
<p>However, I'm having trouble displaying those graphics... My script just runs and won't display anything.</p>
<p>Here is my dockerfile:</p>
<pre><code># Use an official Python runtime as a parent image
FROM ubuntu:20.04

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Update apt package lists and upgrade installed packages
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y build-essential python3-pip && \
    pip install --upgrade pip && \
    rm -rf /var/lib/apt/lists/*

# Set DEBIAN_FRONTEND to noninteractive to prevent prompts
ENV DEBIAN_FRONTEND=noninteractive

# Install Pillow and its ImageTk module
RUN apt-get update && \
    apt-get install -y python3-pil python3-pil.imagetk

# Install tkinter for Python 3.9
RUN apt-get install -y python3-tk

# Install any needed dependencies specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Define the command to run your application
CMD ["python3", "app.py"]
</code></pre>
<p>And here are some of the functions I'm trying to use:</p>
<pre class="lang-py prettyprint-override"><code>
def display_num_occurences(interesting_words: dict) -> None:
    """
    Display a bar plot with the occurrences of the interesting words
    """
    words = list(interesting_words.keys())
    occurrences = [interesting_words[word]['occurences'] for word in words]
    plt.figure(figsize=(10, 5))
    plt.bar(words, occurrences, color='blue')
    plt.title('Occurrences of Words')
    plt.xlabel('Words')
    plt.ylabel('Occurrences')
    plt.grid(axis='y')
    plt.xticks(rotation=45)
    plt.yticks(range(0, max(occurrences) + 1, 1))
    plt.show()


def display_table(interesting_words: dict) -> None:
    """
    Display a table using plotly with the data related to the interesting words
    """
    # Create the table data
    table_data = [['Word', 'Occurrences', 'Documents', 'Sentences']]
    table_data.append(list(interesting_words.keys()))
    table_data.append([interesting_words[word]['occurences'] for word in interesting_words.keys()])
    table_data.append(['\n'.join(interesting_words[word]['documents']) for word in interesting_words.keys()])
    table_data.append(['\n'.join(interesting_words[word]['sentences']) for word in interesting_words.keys()])

    # Create the Plotly figure
    fig = go.Figure(data=[go.Table(
        header=dict(values=table_data[0],
                    fill_color='lightblue',
                    align='center'),
        cells=dict(values=table_data[1:],
                   fill_color='white',
                   align='left'))
    ])

    # Update layout
    fig.update_layout(
        title="Table of Word Occurrences",
        width=2000,
        height=1000,
    )

    # Show the figure
    fig.show()
</code></pre>
<p>I have tried a couple of different things but none seem to work.</p>
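<p>Hedged aside: a container normally has no display server, so <code>plt.show()</code> has nothing to draw on. One common workaround, assuming writing files instead of interactive windows is acceptable, is matplotlib's headless Agg backend plus <code>savefig</code> (for plotly, <code>fig.write_image(...)</code> with the kaleido package plays a similar role):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: render to buffers/files, no display needed
import matplotlib.pyplot as plt
import os
import tempfile

fig, ax = plt.subplots(figsize=(10, 5))
ax.bar(["alpha", "beta"], [3, 1], color="blue")
ax.set_title("Occurrences of Words")

out_path = os.path.join(tempfile.gettempdir(), "occurrences.png")
fig.savefig(out_path)  # write the chart instead of showing a window
```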
|
<python><docker><matplotlib><plotly>
|
2024-02-19 17:36:47
| 0
| 651
|
Simao
|
78,022,494
| 11,692,124
|
Pytorch lightning on_save_checkpoint is not called
|
<p><a href="https://github.com/FarhangAmaji/versatileAnn/tree/on_save_checkpointNotWorking" rel="nofollow noreferrer">On my project (which is in the development phase)</a> (if you want to take a look or clone it, make sure it's the "on_save_checkpointNotWorking" branch):</p>
<p>I have a class named <code>BrazingTorch</code> (in <code>brazingTorchFolder/brazingTorch.py</code>) inherited from PyTorch Lightning's <code>LightningModule</code>, and I have defined an <code>on_save_checkpoint</code> method on this class (in <code>brazingTorchFolder/brazingTorchParents/saveLoad.py</code>), but it is not called by the <code>.fit</code> method (which, as you have probably guessed, runs the whole pipeline, including <code>training_step</code> etc.).</p>
<p>Note: I am using <code>ModelCheckpoint</code> in <code>.fit</code>; even though it actually saves the model, <code>on_save_checkpoint</code> is not called!</p>
<p>I have checked that <code>on_save_checkpoint</code> should be defined on <code>BrazingTorch</code> (the class which inherits from <code>LightningModule</code>), and it is, and it exists on the model. I have also asked Gemini and ChatGPT about this problem, and they suggested some things to check, which were trivial and already correctly implemented.</p>
<p>You may try the <code>.fit</code> method in <code>tests\brazingTorchTests\fitTests.py</code>; there I am calling the <code>.fit</code> method which is in <code>brazingTorchFolder\brazingTorchParents\modelFitter.py</code> (note it's closely related to <code>.baseFit</code> in the same file).</p>
<p>Note: the loggerPath, and where the checkpoint is saved, is <code>tests\brazingTorchTests\NNDummy1\arch1\mainRun_seed71</code>.</p>
<pre><code>def fit(self, trainDataloader: DataLoader,
        valDataloader: Optional[DataLoader] = None,
        *, lossFuncs: List[nn.modules.loss._Loss],
        seed=None, resume=True, seedSensitive=False,
        addDefaultLogger=True, addDefault_gradientClipping=True,
        preRunTests_force=False, preRunTests_seedSensitive=False,
        preRunTests_lrsToFindBest=None,
        preRunTests_batchSizesToFindBest=None,
        preRunTests_fastDevRunKwargs=None, preRunTests_overfitBatchesKwargs=None,
        preRunTests_profilerKwargs=None, preRunTests_findBestLearningRateKwargs=None,
        preRunTests_findBestBatchSizesKwargs=None,
        **kwargs):
    if not seed:
        seed = self.seed
    self._setLossFuncs_ifNot(lossFuncs)
    architectureName, loggerPath, shouldRun_preRunTests = self._determineShouldRun_preRunTests(
        False, seedSensitive)
    loggerPath = loggerPath.replace('preRunTests', 'mainRun_seed71')
    checkpointCallback = ModelCheckpoint(
        monitor=f"{self._getLossName('val', self.lossFuncs[0])}",
        mode='min',  # Save the model when the monitored quantity is minimized
        save_top_k=1,  # Save the top model based on the monitored quantity
        every_n_epochs=1,  # Checkpoint every 1 epoch
        dirpath=loggerPath,  # Directory to save checkpoints
        filename=f'BrazingTorch',
    )
    callbacks_ = [checkpointCallback, StoreEpochData()]
    kwargsApplied = {
        'logger': pl.loggers.TensorBoardLogger(self.modelName, name=architectureName,
                                               version='preRunTests'),
        'callbacks': callbacks_, }
    return self.baseFit(trainDataloader=trainDataloader, valDataloader=valDataloader,
                        addDefaultLogger=addDefaultLogger,
                        addDefault_gradientClipping=addDefault_gradientClipping,
                        listOfKwargs=[kwargsApplied], **kwargs)


@argValidator
def baseFit(self, trainDataloader: DataLoader,
            valDataloader: Union[DataLoader, None] = None,
            addDefaultLogger=True, addDefault_gradientClipping=True,
            listOfKwargs: List[dict] = None,
            **kwargs):
    # cccUsage
    # - this method accepts kwargs related to trainer, trainer.fit, and self.log and
    #   passes them accordingly
    # - the order in listOfKwargs is important
    # - _logOptions phase based values feature:
    #   - args related to self.log may be a dict with these keys 'train', 'val', 'test',
    #     'predict' or 'else'
    #   - this way u can specify what phase uses what values and if not specified with
    #     'else' it's gonna know

    # put together all kwargs user wants to pass to trainer, trainer.fit, and self.log
    listOfKwargs = listOfKwargs or []
    listOfKwargs.append(kwargs)
    allUserKwargs = {}
    for kw in listOfKwargs:
        self._plKwargUpdater(allUserKwargs, kw)

    # add default logger if allowed and no logger is passed
    # because by default we are logging some metrics
    if addDefaultLogger and 'logger' not in allUserKwargs:
        allUserKwargs['logger'] = pl.loggers.TensorBoardLogger(self.modelName)
    # bugPotentialCheck1
    #  shouldn't this default logger have architectureName
    appliedKwargs = self._getArgsRelated_toEachMethodSeparately(allUserKwargs)
    notAllowedArgs = ['self', 'overfit_batches', 'name', 'value']
    self._removeNotAllowedArgs(allUserKwargs, appliedKwargs, notAllowedArgs)
    self._warnForNotUsedArgs(allUserKwargs, appliedKwargs)

    # add gradient clipping by default
    if not self.noAdditionalOptions and addDefault_gradientClipping \
            and 'gradient_clip_val' not in appliedKwargs['trainer']:
        appliedKwargs['trainer']['gradient_clip_val'] = 0.1
        Warn.info('gradient_clip_val is not provided to fit;' + \
                  ' so by default it is set to default "0.1"' + \
                  '\nto cancel it, you may either pass noAdditionalOptions=True to model or ' + \
                  'pass addDefault_gradientClipping=False to fit method.' + \
                  '\nor set another value to "gradient_clip_val" in kwargs passed to fit method.')
    trainer = pl.Trainer(**appliedKwargs['trainer'])
    self._logOptions = appliedKwargs['log']
    if 'train_dataloaders' in appliedKwargs['trainerFit']:
        del appliedKwargs['trainerFit']['train_dataloaders']
    if 'val_dataloaders' in appliedKwargs['trainerFit']:
        del appliedKwargs['trainerFit']['val_dataloaders']
    trainer.fit(self, trainDataloader, valDataloader, **appliedKwargs['trainerFit'])
    self._logOptions = {}
    return trainer


def on_save_checkpoint(self, checkpoint: dict):
    # reimplement this method to save additional information to the checkpoint
    # Add additional information to the checkpoint
    checkpoint['brazingTorch'] = {
        '_initArgs': self._initArgs,
        'allDefinitions': self.allDefinitions,
        'warnsFrom_getAllNeededDefinitions': self.warnsFrom_getAllNeededDefinitions,
    }
    return checkpoint
</code></pre>
|
<python><pytorch><pytorch-lightning>
|
2024-02-19 17:27:48
| 1
| 1,011
|
Farhang Amaji
|
78,022,474
| 8,474,284
|
ERROR: Could not build wheels for cysystemd
|
<p>I'm facing an error installing the cysystemd package:</p>
<p><code>pip install cysystemd</code></p>
<pre><code>  error: subprocess-exited-with-error

  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [23 lines of output]
      /usr/local/lib/python3.12/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'build_requires'
        warnings.warn(msg)
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build/lib.linux-x86_64-cpython-312
      creating build/lib.linux-x86_64-cpython-312/cysystemd
      copying cysystemd/__init__.py -> build/lib.linux-x86_64-cpython-312/cysystemd
      copying cysystemd/async_reader.py -> build/lib.linux-x86_64-cpython-312/cysystemd
      copying cysystemd/daemon.py -> build/lib.linux-x86_64-cpython-312/cysystemd
      copying cysystemd/journal.py -> build/lib.linux-x86_64-cpython-312/cysystemd
      copying cysystemd/py.typed -> build/lib.linux-x86_64-cpython-312/cysystemd
      running build_ext
      building 'cysystemd._daemon' extension
      creating build/temp.linux-x86_64-cpython-312
      creating build/temp.linux-x86_64-cpython-312/cysystemd
      gcc -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -fPIC -I/usr/local/include/python3.12 -c cysystemd/_daemon.c -o build/temp.linux-x86_64-cpython-312/cysystemd/_daemon.o
      cysystemd/_daemon.c:1202:10: fatal error: systemd/sd-daemon.h: No such file or directory
       1202 | #include <systemd/sd-daemon.h>
            |          ^~~~~~~~~~~~~~~~~~~~~
      compilation terminated.
      error: command '/usr/bin/gcc' failed with exit code 1
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for cysystemd
ERROR: Could not build wheels for cysystemd, which is required to install pyproject.toml-based projects
</code></pre>
|
<python><pip><systemd>
|
2024-02-19 17:25:18
| 1
| 1,723
|
DaWe
|
78,022,472
| 9,983,652
|
What is the Windows equivalent of pwd.getpwuid?
|
<p>What is the Windows equivalent of pwd.getpwuid? I believe pwd is a UNIX-only package. Here is the code and the error when running on Windows:</p>
<pre><code>import pwd
file_owner_name = pwd.getpwuid(file_owner_uid).pw_name
import pwd
ModuleNotFoundError: No module named 'pwd'
</code></pre>
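<p>A possible cross-platform sketch (the Windows branch is an untested assumption on my part and requires <code>pip install pywin32</code>; on POSIX it just wraps <code>pwd</code>):</p>

```python
import os
import sys

def get_file_owner(path: str) -> str:
    """Return the owner name of a file; sketch of a cross-platform helper."""
    if sys.platform != "win32":
        import pwd  # POSIX-only module
        return pwd.getpwuid(os.stat(path).st_uid).pw_name
    # Windows branch (assumption): pywin32's win32security can look up the owner SID
    import win32security
    sd = win32security.GetFileSecurity(path, win32security.OWNER_SECURITY_INFORMATION)
    owner_sid = sd.GetSecurityDescriptorOwner()
    name, domain, _ = win32security.LookupAccountSid(None, owner_sid)
    return f"{domain}\\{name}"
```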
|
<python>
|
2024-02-19 17:25:13
| 1
| 4,338
|
roudan
|
78,022,405
| 5,462,743
|
Azure Machine Learning SDK V2 changer version of python in Compute Cluster
|
<p>In SDK V1, I am using an environment for a compute cluster with a dockerfile string like this:</p>
<pre><code>azureml_env = Environment("my_experiment")
azureml_env.python.conda_dependencies = CondaDependencies.create(
pip_packages=["pandas", "databricks-connect==10.4"],
)
dockerfile = rf"""
FROM mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu22.04:latest
RUN mkdir -p /usr/share/man/man1
RUN apt-get -y update \
&& apt-get install openjdk-19-jdk -y \
&& rm -rf /var/lib/apt/lists/*
"""
azureml_env.docker.base_image = None
azureml_env.docker.base_dockerfile = dockerfile
</code></pre>
<p>So I am using <code>mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu22.04:latest</code> where it gives me a python 3.8.</p>
<p>But when I switch to SDK V2, I get Python 3.10, which is not compatible with my <code>databricks runtime</code>, which needs Python 3.8.</p>
<p>Here is my dockerfile:</p>
<pre><code>FROM mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu22.04:latest
RUN mkdir -p /usr/share/man/man1
RUN apt-get -y update \
&& apt-get install openjdk-19-jdk -y \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install -r requirements.txt && rm requirements.txt
# set command
CMD ["bash"]
</code></pre>
<p>I call it like this in python:</p>
<pre><code>azureml_env = Environment(
build=BuildContext(
path="deploy/utils/docker_context", # Where is my dockerfile and other file to copy inside
),
name="my_experiment",
)
azureml_env.validate()
self.ml_client.environments.create_or_update(azureml_env)
</code></pre>
<p>Why do I get Python 3.10 instead of Python 3.8?</p>
|
<python><azure-machine-learning-service><azure-ai>
|
2024-02-19 17:13:58
| 1
| 1,033
|
BeGreen
|
78,022,329
| 353,337
|
Create combined numpy array without explicit loop/comprehension
|
<p>I have a long NumPy array that I need to organize in a "pattern" like so:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
# long array:
a = np.array(
[
1.3,
-1.8,
0.3,
11.4,
# ...
]
)
def pattern(x: float):
return np.array(
[
[x, 0, 0],
[0, x, 0],
[0, 0, x],
[+x, +x, +x],
[-x, -x, -x],
]
)
out = np.array([pattern(x) for x in a])
print(out.shape)
</code></pre>
<pre><code>(4, 5, 3)
</code></pre>
<p>I'm wondering if there's a way to construct <code>out</code> <em>without</em> the explicit loop over <code>a</code>.</p>
<p>Any ideas?</p>
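<p>One loop-free sketch: since <code>pattern</code> is linear in <code>x</code>, build <code>pattern(1.0)</code> once and scale it by broadcasting (this shortcut assumes the real pattern stays linear in <code>x</code>; otherwise it doesn't apply):</p>

```python
import numpy as np

a = np.array([1.3, -1.8, 0.3, 11.4])

# pattern(x) == x * pattern(1), so build the unit template once...
template = np.array(
    [
        [1, 0, 0],
        [0, 1, 0],
        [0, 0, 1],
        [+1, +1, +1],
        [-1, -1, -1],
    ],
    dtype=float,
)

# ...and broadcast-multiply: (4, 1, 1) * (5, 3) -> (4, 5, 3)
out = a[:, None, None] * template
print(out.shape)  # (4, 5, 3)
```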
|
<python><numpy><matrix>
|
2024-02-19 16:58:09
| 4
| 59,565
|
Nico Schlömer
|
78,022,295
| 1,501,700
|
Show function parameters
|
<p>I have an AWS MQTT client Python project in Visual Studio Code. I have installed the AWSIoTPythonSDK library that allows communicating with the AWS server, but I can't get the library's function parameters by pressing <code>ctrl+shift+space</code>. What else should I do in order to see <code>AWSIoTPythonSDK</code> library function parameters in the VS Code editor?</p>
|
<python><amazon-web-services><visual-studio-code>
|
2024-02-19 16:53:03
| 1
| 18,481
|
vico
|
78,022,291
| 10,215,160
|
Pandera slow synthetic Dataframe generation if not eq schema
|
<p>I am trying to learn the Pandera library. When I try to automatically generate test data from a DataFrameModel, it suddenly takes very long or crashes if I deviate from the minimal example in terms of limit checks.</p>
<p>Consider the Base example from the Pandera documentation: <a href="https://pandera.readthedocs.io/en/latest/data_synthesis_strategies.html#strategies-and-examples-from-dataframe-models" rel="nofollow noreferrer">https://pandera.readthedocs.io/en/latest/data_synthesis_strategies.html#strategies-and-examples-from-dataframe-models</a></p>
<p>I can expand it with lots of additional columns. Consider the Code:</p>
<pre class="lang-py prettyprint-override"><code>from pandera.typing import Series, DataFrame
import pandera as pa
import hypothesis
class InSchema(pa.DataFrameModel):
column1: Series[int] = pa.Field(eq=10)
column2: Series[float] = pa.Field(eq=0.25)
column3: Series[str] = pa.Field(eq="foo")
column4: Series[int] = pa.Field()
column5: Series[int] = pa.Field(eq=123)
column6: Series[int] = pa.Field(eq=123)
column7: Series[int] = pa.Field(eq=123)
class OutSchema(InSchema):
column4: Series[float]
@pa.check_types
def processing_fn(df: DataFrame[InSchema]) -> DataFrame[OutSchema]:
return df.assign(column4=df.column1 * df.column2)
def test_processing_fn():
dataframe = InSchema.example(size=5)
processing_fn(dataframe)
print(dataframe)
if __name__ == "__main__":
gg= InSchema.strategy(size=5)
test_processing_fn()
print("Done!")
</code></pre>
<p>Alright. Now observe what happens if I change one column's limits to <code>unique=True</code>:</p>
<pre class="lang-py prettyprint-override"><code>from pandera.typing import Series, DataFrame
import pandera as pa
import hypothesis
class InSchema(pa.DataFrameModel):
# NOW UNIQUE
column1: Series[int] = pa.Field(unique=True)
column2: Series[float] = pa.Field(eq=0.25)
column3: Series[str] = pa.Field(eq="foo")
column4: Series[int] = pa.Field()
column5: Series[int] = pa.Field(eq=123)
column6: Series[int] = pa.Field(eq=123)
column7: Series[int] = pa.Field(eq=123)
class OutSchema(InSchema):
column4: Series[float]
@pa.check_types
def processing_fn(df: DataFrame[InSchema]) -> DataFrame[OutSchema]:
return df.assign(column4=df.column1 * df.column2)
def test_processing_fn():
dataframe = InSchema.example(size=5)
processing_fn(dataframe)
print(dataframe)
if __name__ == "__main__":
gg= InSchema.strategy(size=5)
test_processing_fn()
print("Done!")
</code></pre>
<p>The software is able to generate a dataframe.
But if I now change one check to an equivalent check (eq=123 <=> le=123 && ge=123), it fails:</p>
<pre class="lang-py prettyprint-override"><code>from pandera.typing import Series, DataFrame
import pandera as pa
import hypothesis
class InSchema(pa.DataFrameModel):
column1: Series[int] = pa.Field(unique=True)
column2: Series[float] = pa.Field(eq=0.25)
column3: Series[str] = pa.Field(eq="foo")
column4: Series[int] = pa.Field()
# EQUIVALENT CHECKS
column5: Series[int] = pa.Field(le=123, ge=123)
column6: Series[int] = pa.Field(eq=123)
column7: Series[int] = pa.Field(eq=123)
class OutSchema(InSchema):
column4: Series[float]
@pa.check_types
def processing_fn(df: DataFrame[InSchema]) -> DataFrame[OutSchema]:
return df.assign(column4=df.column1 * df.column2)
def test_processing_fn():
dataframe = InSchema.example(size=5)
processing_fn(dataframe)
print(dataframe)
if __name__ == "__main__":
gg= InSchema.strategy(size=5)
test_processing_fn()
print("Done!")
</code></pre>
<p>Now i get an error:</p>
<blockquote>
<p>hypothesis.errors.Unsatisfiable: Unable to satisfy assumptions of example_generating_inner_function</p>
</blockquote>
<p>My question is: why? If I use the two restrictions (unique and ge/le) separately, each works on its own, so why can't both be satisfied together?</p>
|
<python><pandas><dataframe><python-hypothesis><pandera>
|
2024-02-19 16:52:15
| 1
| 1,486
|
Sandwichnick
|
78,022,239
| 10,695,731
|
Selenium wait for element_to_be_clickable throws error, manual waiting works
|
<p>In the HTML I have a table where the first column includes a radio button:</p>
<p><a href="https://i.sstatic.net/QBys4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QBys4.png" alt="enter image description here" /></a></p>
<p>I want to select and click the button with the following code:</p>
<pre><code>rdoBtn = wait.until(EC.element_to_be_clickable((By.NAME, "rdoSelection")))
rdoBtn.click()
</code></pre>
<p>When I run this I get the error:</p>
<pre><code>NoSuchWindowException: 'Browsing context has been discarded'
</code></pre>
<p>However, when I step through this part slowly in my debugger, it works as intended.
Also, <code>element_to_be_clickable</code> already worked on other normal buttons before.</p>
<p>Why does it behave this way and how can I fix it?</p>
|
<python><html><selenium-webdriver><web-scraping>
|
2024-02-19 16:43:30
| 0
| 691
|
NicO
|
78,022,111
| 1,422,096
|
Python .pth and ._pth files (standard install vs. embedded Python)
|
<p>I see two different behaviours for .pth and ._pth files:</p>
<ul>
<li><p>the official Python install for Windows uses <code>.pth</code> files in <code>lib\site-packages</code> as documented in <a href="https://docs.python.org/3.12/library/site.html#module-site" rel="nofollow noreferrer">https://docs.python.org/3.12/library/site.html#module-site</a>. It seems that adding multiple <code>.pth</code> files works and <strong>is used</strong> at the Python interpreter startup.</p>
</li>
<li><p>the Python <em>Embedded</em> package for Windows seems to use <code>._pth</code> files (and not <code>.pth</code>!) for the same thing: <code>python38._pth</code>, but <em>in the same folder</em> as <code>python.exe</code>. In this case it seems that adding a second <code>.pth</code> or <code>._pth</code> file in the same folder <strong>isn't used</strong> by the interpreter. Why? Is it documented that, for embedded Python, only one ._pth will be used?</p>
</li>
</ul>
<p><strong>More generally, what are the rules about <code>.pth</code> vs <code>._pth</code> files? In which case should we use one or the other, what is the reason for the underscore _pth?</strong></p>
<p>Note: the rules here seem complicated (.pth vs. ._pth; multiple files allowed vs. only one file used).</p>
<p>NB: <a href="https://docs.python.org/3.11/using/windows.html#finding-modules" rel="nofollow noreferrer">Documentation about ._pth</a> files</p>
|
<python><path><python-embedding>
|
2024-02-19 16:21:02
| 0
| 47,388
|
Basj
|
78,021,989
| 7,116,108
|
pandas to_dict - list of rows by key
|
<p>I have a DataFrame of the form:</p>
<pre><code>df = pd.DataFrame({
"key": ["a", "b", "b", "c", "c", "c"],
"var1": [1, 2, 3, 4, 5, 6],
"var2": ["x", "y", "x", "y", "x", "y"],
})
</code></pre>
<p>i.e.</p>
<pre><code> key var1 var2
0 a 1 x
1 b 2 y
2 b 3 x
3 c 4 y
4 c 5 x
5 c 6 y
</code></pre>
<p>I am trying to transform the data to generate a dict with unique <code>key</code> values as top level keys, and a list of records for the other columns (<code>var1</code> and <code>var2</code>)</p>
<p>Expected output:</p>
<pre><code>{
"a": [{"var1": 1, "var2": "x"}],
"b": [{"var1": 2, "var2": "y"}, {"var1": 3, "var2": "x"}],
"c": [{"var1": 4, "var2": "y"}, {"var1": 5, "var2": "x"}, {"var1": 6, "var2": "y"}],
}
</code></pre>
<p>I tried using the code below, which works, but uses a for loop which makes it slow for large dataframes. How can I achieve the expected result in a more idiomatic way with pandas?</p>
<pre><code>result = {}
for key in df["key"].unique():
key_df = df[df["key"] == key]
result[key] = key_df.drop("key", axis=1).to_dict(orient="records")
</code></pre>
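<p>A sketch using <code>groupby</code> instead of repeated boolean masks (the dict comprehension still iterates, but over already-grouped frames, which is typically far faster on large dataframes than filtering the whole frame once per key):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "key": ["a", "b", "b", "c", "c", "c"],
    "var1": [1, 2, 3, 4, 5, 6],
    "var2": ["x", "y", "x", "y", "x", "y"],
})

# One pass over the grouped frames; each group drops the key column
# and is flattened to a list of row dicts.
result = {
    key: group.drop(columns="key").to_dict(orient="records")
    for key, group in df.groupby("key")
}
print(result)
```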
|
<python><pandas>
|
2024-02-19 16:01:11
| 1
| 1,587
|
Dan
|
78,021,828
| 2,707,864
|
Sympy: Integral(0, (R, b, r)) not simplifying to zero when stemming from the Leibniz rule
|
<p>I am differentiating under an integral. Since the integrand does not depend explicitly on the variable, the corresponding term from the Leibniz rule would not show up.
Nevertheless, sympy does not simplify it to zero. Why is that? Can it be somehow coerced to do so?
That is preventing further simplification of my expressions down the road.
This is my MCVE.</p>
<pre><code># Test: Check why integrating the null function does not simplify to zero
r = sym.symbols('r', real=True, positive=True)
b = sym.symbols('b', real=True, positive=True)
# Dummy integration variable
R = sym.symbols('R', real=True, positive=True)
# 1. Simple integral... simplifies ok
i1s = sym.simplify(sym.integrate(0, (R, b, r)))
print('i1s =', i1s)
# 2. Integral stemming from Leibniz rule... does not simplify
p = sym.Function('p', real=True)
i2s = sym.simplify(sym.integrate(R*p(R), (R, b, r)))
print('i2s =', i2s)
i3s = sym.simplify(sym.diff(i2s, r))
print('i3s =', i3s)
</code></pre>
<p>which produces</p>
<pre><code>i1s = 0
i2s = Integral(R*p(R), (R, b, r))
i3s = r*p(r) + Integral(0, (R, b, r))
</code></pre>
<p>I would expect</p>
<pre><code>i3s = r*p(r)
</code></pre>
<p>Note: all uses of <code>sym.simplify</code> turn out to be superfluous here.</p>
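<p>A sketch of one possible workaround (an assumption on my part, not a confirmed answer): calling <code>doit()</code> on the differentiated expression re-evaluates the unevaluated <code>Integral</code> nodes, and the integral of 0 then collapses to 0:</p>

```python
import sympy as sym

r = sym.symbols('r', real=True, positive=True)
b = sym.symbols('b', real=True, positive=True)
R = sym.symbols('R', real=True, positive=True)  # dummy integration variable
p = sym.Function('p', real=True)

i2s = sym.integrate(R * p(R), (R, b, r))
# doit() forces evaluation of the leftover Integral(0, (R, b, r)) term
i3s = sym.diff(i2s, r).doit()
print(i3s)
```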
|
<python><sympy><symbolic-math><numerical-integration>
|
2024-02-19 15:36:33
| 1
| 15,820
|
sancho.s ReinstateMonicaCellio
|
78,021,659
| 3,650,983
|
Running DepthEstimationPipeline example from huggingface.co
|
<p>getting the following error when running</p>
<pre><code>depth_estimator = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-base-hf")
</code></pre>
<blockquote>
<p>ValueError: The checkpoint you are trying to load has model type
<code>depth_anything</code> but Transformers does not recognize this
architecture. This could be because of an issue with the checkpoint,
or because your version of Transformers is out of date.</p>
</blockquote>
|
<python><machine-learning><deep-learning><pytorch><huggingface-transformers>
|
2024-02-19 15:10:09
| 1
| 4,119
|
ChaosPredictor
|
78,021,630
| 5,070,526
|
Airflow : Task wise dependency check with another Dag using ExternalTaskSensor
|
<p><code>ExternalTaskSensor</code> not working as expected. Here is example of <code>dags</code>.</p>
<p><strong>Dag1(test_first_dag.py)</strong>:</p>
<pre><code>Task 1 -> print("Dag1-Task1-Hello World")
Task 2 -> print("Dag1-Task2-Hello World")
</code></pre>
<p><strong>Dag2(test_second_dag.py)</strong>:</p>
<pre><code>Task 1 -> print("Dag2-Task1-Hello World")
Task 2 -> print("Dag2-Task2-Hello World")
</code></pre>
<p>Once Dag1 is triggered, it runs as normal. Once the second DAG is triggered, task1 of Dag2 should check whether task1 of Dag1 has completed. For testing, I run Dag2 first, so it should start and then wait until task1 of Dag1 completes. In the code below I have two DAGs. On execution, task1 of Dag2 keeps waiting even though Dag1 completes.<br />
Can someone help explain why the trigger is not working properly?
<strong>test_first_dag.py</strong></p>
<pre><code>from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from datetime import datetime, timedelta
import time
# Define the default arguments for the DAG
default_args = {
'depends_on_past': False,
'start_date': datetime(2022, 1, 1),
}
dag = DAG(
'test_first_dag',
default_args=default_args,
schedule_interval=timedelta(days=1),
)
def task1():
print("Dag1-Task1-Hello World")
def task2():
time.sleep(10) # Delay for 10 seconds
print("Dag1-Task2-Hello World")
task1 = PythonOperator(
task_id='task1',
python_callable=task1,
dag=dag,
)
task2 = PythonOperator(
task_id='task2',
python_callable=task2,
dag=dag,
)
task1 >> task2
</code></pre>
<p><strong>test_second_dag.py</strong></p>
<pre><code>from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.sensors.external_task_sensor import ExternalTaskSensor
from datetime import datetime, timedelta
default_args = {
'start_date': datetime(2022, 1, 2), # Make sure it's after the start date of Dag1
}
dag = DAG(
'test_second_dag',
default_args=default_args,
schedule_interval=timedelta(days=1),
)
def task1():
print("Dag2-Task1-Hello World")
def task2():
print("Dag2-Task2-Hello World")
# Wait for Dag1's task1 to be completed
wait_for_dag1_task1 = ExternalTaskSensor(
task_id='wait_for_dag1_task1',
external_dag_id='test_first_dag',
external_task_id='task1',
dag=dag,
)
task1 = PythonOperator(
task_id='task1',
python_callable=task1,
dag=dag,
)
task2 = PythonOperator(
task_id='task2',
python_callable=task2,
dag=dag,
)
wait_for_dag1_task1 >> task1 >> task2
</code></pre>
|
<python><airflow><airflow-2.x><airflow-taskflow>
|
2024-02-19 15:07:29
| 1
| 725
|
Sachin Sukumaran
|
78,021,585
| 4,406,532
|
Colab Google using qiskit Aer
|
<p>Here is my code:</p>
<pre><code>#!pip3 install qiskit
!pip3 install qiskit-aer
from qiskit import Aer, ClassicalRegister, QuantumCircuit, QuantumRegister
</code></pre>
<p>This is the error I get:</p>
<pre><code>ImportError Traceback (most recent call last)
<ipython-input-14-2acb4a3b8d5d> in <cell line: 4>()
2 !pip3 install qiskit
3 get_ipython().system('pip3 install qiskit-aer')
----> 4 from qiskit import Aer, ClassicalRegister, QuantumCircuit, QuantumRegister
5
6 # Load training data
ImportError: cannot import name 'Aer' from 'qiskit' (/usr/local/lib/python3.10/dist-packages/qiskit/__init__.py)
</code></pre>
<p>I use Google Colab. I also used the following:</p>
<pre><code>!pip3 install qiskit
and !pip install qiskit
</code></pre>
<p>And I got the same error.</p>
|
<python><google-colaboratory><qiskit>
|
2024-02-19 14:59:47
| 2
| 2,293
|
Avi
|
78,021,416
| 16,627,522
|
PyBind11: How to use a class as a data type
|
<p>How do I convert a C++ class into a 'data type'? For instance, if I had a vector <code>dog_vector</code>, I could convert it to a <code>py::array</code> and expose it to Python, where it would be read as a NumPy array. How do I expose a class to Python, allowing its methods to also be called from Python?</p>
<p>I am trying to create a callback function in Python which takes in a class defined in my C++ code i.e.</p>
<pre class="lang-py prettyprint-override"><code># Python side
import myextension
def make_dog_bark(dog: myextension.Dog):
dog.bark()
</code></pre>
<p>How do I convert a class into a data type that can be used as such in Python? If Dog was a vector, for instance, my C++ binding for the callback would have been:</p>
<pre><code>make_dog_bark(py::array_t<int64_t>(dog_vector.size(), dog_vector.data()))
</code></pre>
<p>and I would use in Python as</p>
<pre><code>def make_dog_bark(dog_vector: npt.npt.NDArray):
print(dog_vector)
</code></pre>
<p>I'm not sure if this is clear. I have tried to keep the example minimal and to leave out the callback-registration boilerplate, but I can add more information if required.</p>
|
<python><c++><pybind11><nanobind>
|
2024-02-19 14:31:10
| 0
| 634
|
Tommy Wolfheart
|
78,021,371
| 18,159,603
|
Pytorch advanced indexing with list of lists as indices
|
<p>Here is some python code to reproduce my issue:</p>
<pre class="lang-py prettyprint-override"><code>import torch
n, m = 9, 4
x = torch.arange(0, n * m).reshape(n, m)
print(x.shape)
print(x)
# torch.Size([9, 4])
# tensor([[ 0, 1, 2, 3],
# [ 4, 5, 6, 7],
# [ 8, 9, 10, 11],
# [12, 13, 14, 15],
# [16, 17, 18, 19],
# [20, 21, 22, 23],
# [24, 25, 26, 27],
# [28, 29, 30, 31],
# [32, 33, 34, 35]])
list_of_indices = [
[],
[2, 3],
[1],
[],
[],
[],
[0, 1, 2, 3],
[],
[0, 3],
]
print(list_of_indices)
for i, indices in enumerate(list_of_indices):
x[i, indices] = -1
print(x)
# tensor([[ 0, 1, 2, 3],
# [ 4, 5, -1, -1],
# [ 8, -1, 10, 11],
# [12, 13, 14, 15],
# [16, 17, 18, 19],
# [20, 21, 22, 23],
# [-1, -1, -1, -1],
# [28, 29, 30, 31],
# [-1, 33, 34, -1]])
</code></pre>
<p>I have a list of lists of indices. I want to set the corresponding entries of <code>x</code> to a specific value (here <code>-1</code>) using the indices in <code>list_of_indices</code>. In this list, each sublist corresponds to a row of <code>x</code> and contains the column indices to set to <code>-1</code> for that row. This can easily be done using a for-loop, but I feel like PyTorch would allow doing this much more efficiently.</p>
<p>I tried the following:</p>
<pre class="lang-py prettyprint-override"><code>x[torch.arange(len(list_of_indices)), list_of_indices] = -1
</code></pre>
<p>but it resulted in</p>
<pre class="lang-py prettyprint-override"><code>IndexError: shape mismatch: indexing tensors could not be broadcast together with shapes [9], [9, 0]
</code></pre>
<p>I tried to find people having the same problem, but the number of questions about indexing tensors is so large that I might have missed it.</p>
|
<python><indexing><pytorch><tensor>
|
2024-02-19 14:22:54
| 2
| 1,036
|
leleogere
|
78,021,309
| 1,914,034
|
Create image patches with overlap using torch.unfold
|
<p>I would like to split an image into smaller images of 1024x1024 pixels, with an overlap of 100 pixels at the top/bottom and left/right of the patches.</p>
<p><img src="https://i.sstatic.net/tDc9P.png" alt="enter image description here" /></p>
<p>I am using padding and torch.unfold to create even sized patches.</p>
<pre><code>def create_patches(image, patch_size):
# Pad right and bottom to fit patch_size
padding_right = patch_size - image.shape[2]%patch_size
padding_bottom = patch_size - image.shape[1]%patch_size
image = F.pad(image, (0, 0, padding_right, padding_bottom))
return image.unfold(0, 3, 3).unfold(1, patch_size, patch_size).unfold(2, patch_size, patch_size)
</code></pre>
<p>How can I make the patches to overlap?</p>
|
<python><pytorch>
|
2024-02-19 14:14:46
| 1
| 7,655
|
Below the Radar
|
78,021,029
| 424,957
|
May I use sequence number in Python Pool.imap?
|
<p>I use the code below to download ts files. Because the original ts filenames are too long, merging the ts files into an mp4 file fails, so I want to save them in sequence as 001.ts, 002.ts, and so on. How can I pass this sequence number to download_ts_file?</p>
<pre><code>def download_ts_file(ts_url: str, store_dir: str, attempts: int = 10):
ts_fname = ts_url.split('/')[-1]
ts_dir = os.path.join(store_dir, ts_fname)
ts_res = None
for _ in range(attempts):
try:
ts_res = requests.get(ts_url, headers=header)
if ts_res.status_code == 200:
break
except Exception:
pass
time.sleep(.5)
if isinstance(ts_res, Response) and ts_res.status_code == 200:
with open(ts_dir, 'wb+') as f:
f.write(ts_res.content)
else:
print(f"Failed to download streaming file: {ts_fname}.")
pool = Pool(20)
gen = pool.imap(partial(download_ts_file, store_dir='.'), ts_url_list)
for _ in tqdm.tqdm(gen, total=len(ts_url_list)):
pass
pool.close()
pool.join()
</code></pre>
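<p>One sketch: iterate over <code>enumerate(ts_url_list)</code> so each task receives a (sequence, url) pair, and unpack it in a small top-level wrapper (top-level so it stays picklable for process pools). The demo below uses the thread-based <code>multiprocessing.dummy.Pool</code> and a fake download so it runs standalone; the real function would fetch the URL and write the file:</p>

```python
import os
from functools import partial
from multiprocessing.dummy import Pool  # thread pool; same API as multiprocessing.Pool

def download_ts_file(ts_url, store_dir, seq):
    # Fake "download" for the sketch: name the file by its sequence number
    ts_path = os.path.join(store_dir, f"{seq:03d}.ts")
    return ts_path  # a real version would request ts_url and write the bytes here

def download_indexed(seq_and_url, store_dir):
    seq, ts_url = seq_and_url  # unpack the (index, url) pair produced by enumerate()
    return download_ts_file(ts_url, store_dir, seq)

ts_url_list = ["http://example.com/a.ts", "http://example.com/b.ts"]
with Pool(4) as pool:
    paths = list(pool.imap(partial(download_indexed, store_dir="."),
                           enumerate(ts_url_list)))
print(paths)
```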
|
<python><threadpool>
|
2024-02-19 13:28:57
| 1
| 2,509
|
mikezang
|
78,020,975
| 7,133,942
|
Can't import an installed package in Python with Jupyter Notebook
|
<p>I have a strange problem. I want to use the package "copulas" in Python (<a href="https://sdv.dev/Copulas/" rel="nofollow noreferrer">https://sdv.dev/Copulas/</a>). For that I first install it by using</p>
<pre><code>!pip install copulas
</code></pre>
<p>The output is</p>
<pre><code>Requirement already satisfied: copulas in c:\users\wi9632\appdata\local\programs\python\python311\lib\site-packages (0.10.0)
Requirement already satisfied: plotly<6,>=5.10.0 in c:\users\wi9632\appdata\local\programs\python\python311\lib\site-packages (from copulas) (5.19.0)
Requirement already satisfied: numpy<2,>=1.23.3 in c:\users\wi9632\appdata\local\programs\python\python311\lib\site-packages (from copulas) (1.26.4)
Requirement already satisfied: scipy<2,>=1.9.2 in c:\users\wi9632\appdata\local\programs\python\python311\lib\site-packages (from copulas) (1.12.0)
Requirement already satisfied: pandas>=1.5.0 in c:\users\wi9632\appdata\local\programs\python\python311\lib\site-packages (from copulas) (2.2.0)
Requirement already satisfied: python-dateutil>=2.8.2 in c:\users\wi9632\appdata\local\programs\python\python311\lib\site-packages (from pandas>=1.5.0->copulas) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in c:\users\wi9632\appdata\local\programs\python\python311\lib\site-packages (from pandas>=1.5.0->copulas) (2024.1)
Requirement already satisfied: tzdata>=2022.7 in c:\users\wi9632\appdata\local\programs\python\python311\lib\site-packages (from pandas>=1.5.0->copulas) (2024.1)
Requirement already satisfied: tenacity>=6.2.0 in c:\users\wi9632\appdata\local\programs\python\python311\lib\site-packages (from plotly<6,>=5.10.0->copulas) (8.2.3)
Requirement already satisfied: packaging in c:\users\wi9632\appdata\local\programs\python\python311\lib\site-packages (from plotly<6,>=5.10.0->copulas) (23.2)
Requirement already satisfied: six>=1.5 in c:\users\wi9632\appdata\local\programs\python\python311\lib\site-packages (from python-dateutil>=2.8.2->pandas>=1.5.0->copulas) (1.16.0)
[notice] A new release of pip available: 22.3.1 -> 24.0
[notice] To update, run: python.exe -m pip install --upgrade pip
</code></pre>
<p>Then I check if it was installed by using the following code:</p>
<pre><code>!python --version
!pip list
</code></pre>
<p>Which leads to the output:</p>
<pre><code>Python 3.11.3
Package Version
--------------- -------
copula 0.0.4
copulas 0.10.0
numpy 1.26.4
packaging 23.2
pandas 2.2.0
pip 22.3.1
plotly 5.19.0
python-dateutil 2.8.2
pytz 2024.1
scipy 1.12.0
setuptools 65.5.0
six 1.16.0
tenacity 8.2.3
tzdata 2024.1
[notice] A new release of pip available: 22.3.1 -> 24.0
[notice] To update, run: python.exe -m pip install --upgrade pip
</code></pre>
<p>When I then try to import the module by using the code:</p>
<pre><code># import all relevant libraries
import scipy.stats as stats
import numpy as np
import pandas as pd
import datetime as dt
import re
from matplotlib import pyplot as plt
import pickle
import copulas
</code></pre>
<p>I get the error:</p>
<pre><code>ModuleNotFoundError Traceback (most recent call last)
<ipython-input-8-cec22cbc11bb> in <module>
7 from matplotlib import pyplot as plt
8 import pickle
----> 9 import copulas
ModuleNotFoundError: No module named 'copulas'
</code></pre>
<p>Here is some additional information:</p>
<pre><code>import sys
print(sys.path)
</code></pre>
<p>leads to the output:</p>
<pre><code>['C:\\Users\\wi9632\\bwSyncShare3\\Eigene Arbeit\\Code\\Python\\lademodellierung-main', 'c:\\users\\wi9632\\python\\python39\\python39.zip', 'c:\\users\\wi9632\\python\\python39\\DLLs', 'c:\\users\\wi9632\\python\\python39\\lib', 'c:\\users\\wi9632\\python\\python39', '', 'c:\\users\\wi9632\\python\\python39\\lib\\site-packages', 'c:\\users\\wi9632\\python\\python39\\lib\\site-packages\\win32', 'c:\\users\\wi9632\\python\\python39\\lib\\site-packages\\win32\\lib', 'c:\\users\\wi9632\\python\\python39\\lib\\site-packages\\Pythonwin', 'c:\\users\\wi9632\\python\\python39\\lib\\site-packages\\IPython\\extensions', 'C:\\Users\\wi9632\\.ipython']
</code></pre>
<p>and</p>
<pre><code>import sys
sys.executable
</code></pre>
<p>leads to the output:</p>
<pre><code>'c:\\users\\wi9632\\python\\python39\\python.exe'
</code></pre>
<p>Can you tell me why I can install the library but not import it?</p>
|
<python><jupyter-notebook><python-packaging><copula>
|
2024-02-19 13:21:50
| 1
| 902
|
PeterBe
|
78,020,935
| 2,391,712
|
Default value based on other field
|
<p>Is there a way to have the following:</p>
<pre class="lang-python prettyprint-override"><code>from dataclasses import dataclass, fields
@dataclass
class ExampleCls:
category: str
subcategory: str = f"{category} level 2"
</code></pre>
<ol>
<li><code>ExampleCls(category="hi")</code>
<ul>
<li>works with <code>field(init=False)</code> and returns <code>ExampleCls(category="hi", subcategory=f"{category} level 2")</code></li>
</ul>
</li>
<li><code>ExampleCls(category="hi", subcategory="greetings")</code>
<ul>
<li>here I expect <code>ExampleCls(category="hi", subcategory="greetings")</code></li>
<li>with <code>field(init=False)</code>, I will get an error: <code>got an unexpected keyword argument 'subcategory'</code></li>
</ul>
</li>
</ol>
<p>The <code>field(default_factory)</code> mechanism does not accept arguments from the other fields.</p>
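<p>The usual sketch for this is a sentinel default plus <code>__post_init__</code>, which covers both call patterns:</p>

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExampleCls:
    category: str
    subcategory: Optional[str] = None  # sentinel meaning "not passed"

    def __post_init__(self):
        if self.subcategory is None:
            # Default derived from another field
            self.subcategory = f"{self.category} level 2"

print(ExampleCls(category="hi"))                           # subcategory derived
print(ExampleCls(category="hi", subcategory="greetings"))  # explicit value kept
```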
|
<python><python-dataclasses>
|
2024-02-19 13:15:43
| 0
| 2,515
|
5th
|
78,020,921
| 1,422,096
|
How do .pth files distinguish between lines with path information and lines with Python code?
|
<p>It seems a .pth file can also mix <em>"folders to add to path"</em> information + also, some Python code, such as <code>import foo</code>. Example in <code>pywin32.pth</code>:</p>
<pre><code># .pth file for the PyWin32 extensions
win32
win32\lib
Pythonwin
# And some hackery to deal with environments where the post_install script
# isn't run.
import pywin32_bootstrap
</code></pre>
<p><strong>How does the .pth file parser distinguish between lines containing folders and lines containing Python code (import ...)?</strong> Here it seems ambiguous.</p>
<p>Note: Is there an official specification for the Python .pth file format? (I found some information on <a href="https://docs.python.org/3/library/site.html" rel="nofollow noreferrer">https://docs.python.org/3/library/site.html</a> but it's incomplete)</p>
<p>Note 2: it seems the .pth files can also include <code>zip</code> files instead of folders, see <code>python38._pth</code> in embedded Python:</p>
<pre><code>python38.zip
.
# Uncomment to run site.main() automatically
#import site
</code></pre>
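<p>The rule (implemented in CPython's <code>site.addpackage</code>) is line-by-line: blank lines and lines starting with <code>#</code> are skipped, lines starting with <code>import </code> (or <code>import\t</code>) are executed as Python, and everything else is treated as a path entry relative to the site directory. A rough sketch of that logic (simplified from <code>site.py</code>, so treat details as approximate):</p>

```python
import os
import sys

def add_pth(sitedir, pth_name):
    """Simplified sketch of site.addpackage()'s per-line rule."""
    with open(os.path.join(sitedir, pth_name)) as f:
        for line in f:
            line = line.rstrip("\n")
            if not line.strip() or line.startswith("#"):
                continue  # blank line or comment
            if line.startswith(("import ", "import\t")):
                exec(line)  # a single line of arbitrary Python code
            else:
                # anything else: a (possibly relative) folder or zip to add to sys.path
                full = os.path.join(sitedir, line)
                if full not in sys.path:
                    sys.path.append(full)
```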
|
<python><path><python-import><pth>
|
2024-02-19 13:13:10
| 1
| 47,388
|
Basj
|
78,020,884
| 3,973,175
|
hist2d plots with vmin/vmax unknown until plotting, with combined colorbar; previous solution not working
|
<p>I am attempting to make plots of density in rows. The challenging part is that each of the plots must have the same color-scale min and max, and I do not know beforehand what the min and max are. I must plot the values first in order to find out the min and max, remove those plots, and then make new ones with the determined min/max.</p>
<p>Something like this, but with hist2d and in rows (image from another S.O. page):
<a href="https://i.sstatic.net/SFozD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SFozD.png" alt="enter image description here" /></a></p>
<p>I've come up with a minimal working example based on <a href="https://stackoverflow.com/questions/13784201/how-to-have-one-colorbar-for-all-subplots">How to have one colorbar for all subplots</a>, <a href="https://stackoverflow.com/questions/66443606/vmin-and-vmax-in-hist2d">vmin and vmax in hist2d</a></p>
<p>and this solution from "SpinUp __ A Davis", which I don't see how I can implement with real data:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=1, ncols=3)
for ax in axes.flat: # I don't see how to implement this part with real data
im = ax.imshow(np.random.random((10,10)), vmin=0, vmax=1)
fig.colorbar(im, ax=axes.ravel().tolist())
plt.show()
</code></pre>
<p>but the best I can come up with is this, which doesn't work:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np # only for MWE, not present in real file
# https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.hist2d.html
# getting min/max
fig, (ax0,ax1) = plt.subplots(nrows = 2, ncols = 1, sharex = True)
colormax = float("-inf")
colormin = float("inf")
h0 = ax0.hist2d(np.random.random(100), np.random.random(100))
colormax = max(h0[0].max(), colormax)
colormin = min(h0[0].min(), colormin)
h1 = ax1.hist2d(np.random.random(200), np.random.random(200))
colormax = max(h1[0].max(), colormax)
colormin = min(h1[0].min(), colormin)
# starting real plot, not just to get min/max
fig, axes = plt.subplots(nrows = 2, ncols = 1, sharex = True, figsize = (6.4,4.8))
i = 0
h = [h0, h1]
for ax in axes.flat: # I don't see how I can implement this
im = ax.imshow(h[i], vmin = colormin, vmax = colormax)
i += 1
fig.colorbar(im, ax= axes.ravel().tolist())
plt.savefig('imshow.debug.png')
</code></pre>
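<p>A possible sketch (an assumption, not a tested answer to the exact layout above): skip the throwaway plotting pass entirely by computing the counts with <code>np.histogram2d</code> first, then pass the shared limits as <code>vmin</code>/<code>vmax</code> to <code>hist2d</code>; <code>fig.colorbar</code> attached to the last image then serves all subplots. The Agg backend is used so it runs headless:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

datasets = [
    (np.random.random(100), np.random.random(100)),
    (np.random.random(200), np.random.random(200)),
]
bins = 10

# Pass 1: counts only, no plotting, to find the shared color limits
counts = [np.histogram2d(xd, yd, bins=bins)[0] for xd, yd in datasets]
vmin = min(c.min() for c in counts)
vmax = max(c.max() for c in counts)

# Pass 2: the real plots, with a shared color scale and one colorbar
fig, axes = plt.subplots(nrows=len(datasets), ncols=1, sharex=True)
for ax, (xd, yd) in zip(axes.flat, datasets):
    *_, im = ax.hist2d(xd, yd, bins=bins, vmin=vmin, vmax=vmax)
fig.colorbar(im, ax=axes.ravel().tolist())
fig.savefig("hist2d_shared.png")
```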
|
<python><python-3.x><matplotlib>
|
2024-02-19 13:05:34
| 1
| 6,227
|
con
|
78,020,841
| 2,440,388
|
Is possible to read environment variables while running python code on a PEX file?
|
<p>I am trying to have my python application running on a <a href="https://github.com/pex-tool/pex" rel="nofollow noreferrer">PEX</a> file, inside a docker container.</p>
<p>I was wondering if PEX files support any kind of configuration in order to read environment variables that get provided to the docker container, just to avoid packaging a separate PEX file per deployment environment with different configuration.</p>
<p>I couldn't find any type of documentation on this on the PEX site.</p>
<p>Expectation was to be able to consume defined environment variables for the docker container from the python process running on the PEX file, but the PEX execution seems self-isolated.</p>
<p>The only documentation about runtime environment variables I could find is about PEX_* environment variables, which don't seem to serve that purpose.</p>
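<p>For reference, this is the kind of access I expect to work inside the PEX: a plain stdlib <code>os.environ</code> lookup, nothing PEX-specific (the variable name here is just an example):</p>

```python
import os

def get_setting(name: str, default: str) -> str:
    """Read a deployment setting from the process environment."""
    return os.environ.get(name, default)

# Simulate a variable the docker container would provide
os.environ["APP_ENV"] = "staging"
print(get_setting("APP_ENV", "dev"))      # staging
print(get_setting("MISSING_VAR", "dev"))  # dev
```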
|
<python><docker><environment-variables><pex>
|
2024-02-19 12:59:19
| 1
| 343
|
jarey
|
78,020,823
| 10,695,731
|
Selenium find element by link_text or Xpath - ElementNotFoundException
|
<p>I have an href on the website which which looks like this:</p>
<pre><code><a href="Redirect.jsp?ScreenCode=L0000" target="LinkFrame">Transfer Doc</a>
</code></pre>
<p>This element is even visible when I inspect the browser window opened by Selenium.</p>
<p><strong>Problem:</strong></p>
<p>I am trying to find this element and tried the following approaches:</p>
<pre><code>driver = webdriver.Firefox()
... #navigation to current site
transDocBtn = driver.find_element(By.XPATH, '//a[contains(@href,"L0000")]')
transDocBtn = driver.find_element(By.PARTIAL_LINK_TEXT,"Transfer Doc")
</code></pre>
<p>Neither of the two lines finds the element; both raise an ElementNotFoundException.</p>
<p>How can I find my link? Why do both lines above not work?</p>
<p><strong>Edit:</strong>
For reference, this is roughly what the structure looks like. My element is within the <code>body class=menu</code></p>
<p><a href="https://i.sstatic.net/Ijn0D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ijn0D.png" alt="enter image description here" /></a></p>
|
<python><html><selenium-webdriver><web-scraping>
|
2024-02-19 12:56:22
| 1
| 691
|
NicO
|
78,020,643
| 11,211,041
|
Type 'str' doesn't have expected attribute '__setitem__' in Python
|
<p>Why do I get "<strong>Type 'str' doesn't have expected attribute '__setitem__'</strong>" when I call my static class method?</p>
<pre><code>@staticmethod
def replace_region_decimal_sign(to_convert):
    to_convert[to_convert.index(',')] = '.'  # Replace comma ("region" decimal sign) with a dot
    return to_convert
</code></pre>
<p><a href="https://i.sstatic.net/dkPB5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dkPB5.png" alt="enter image description here" /></a></p>
<pre><code>if __name__ == "__main__":
    print(Calculator.replace_region_decimal_sign("1,23"))
</code></pre>
<p><a href="https://i.sstatic.net/mFB1W.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mFB1W.png" alt="enter image description here" /></a></p>
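<p>What I've been experimenting with instead, since <code>str</code> is immutable and has no <code>__setitem__</code> (building a new string rather than assigning into the old one):</p>

```python
def replace_region_decimal_sign(to_convert: str) -> str:
    # Strings are immutable in Python, so produce a new string
    # instead of assigning into the old one by index
    return to_convert.replace(",", ".")

print(replace_region_decimal_sign("1,23"))  # 1.23
```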
|
<python><string><attributes>
|
2024-02-19 12:26:22
| 1
| 892
|
Snostorp
|
78,020,395
| 1,030,951
|
How to specify Nested JSON using Langchain
|
<p>I am using <a href="https://python.langchain.com/docs/modules/model_io/output_parsers/types/structured" rel="nofollow noreferrer">StructuredParser</a> of Langchain library. I am getting flat dictionary from parser. Please guide me to get a list of dictionaries from output parser.</p>
<pre class="lang-py prettyprint-override"><code>PROMPT_TEMPLATE = """
You are an android developer.
Parse this error message and provide me the identifiers & texts mentioned in the error message.
--------
Error message is {msg}
--------
{format_instructions}
"""
def get_output_parser():
    missing_id = ResponseSchema(name="identifier", description="This is missing identifier.")
    missing_text = ResponseSchema(name="text", description="This is missing text.")
    response_schemas = [missing_id, missing_text]
    output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
    return output_parser

def predict_result(msg):
    model = ChatOpenAI(openai_api_key="", openai_api_base="", model="llama-2-70b-chat-hf", temperature=0, max_tokens=2000)
    output_parser = get_output_parser()
    format_instructions = output_parser.get_format_instructions()
    prompt = ChatPromptTemplate.from_template(template=PROMPT_TEMPLATE)
    message = prompt.format_messages(msg=msg, format_instructions=format_instructions)
    response = model.invoke(message)
    response_as_dict = output_parser.parse(response.content)
    print(response_as_dict)

predict_result("ObjectNotFoundException AnyOf(AllOf(withId:identifier1, withText:text1),AllOf(withId:identifier2, withText:text1),AllOf(withId:identifier3, withText:text1))")
</code></pre>
<p>The output I get is</p>
<pre class="lang-json prettyprint-override"><code>{
"identifier":"identifier1",
"text":"text1"
}
</code></pre>
<p>Expected output is</p>
<pre class="lang-json prettyprint-override"><code>[
{
"identifier":"identifier1",
"text":"text1"
},
{
"identifier":"identifier2",
"text":"text1"
},
{
"identifier":"identifier3",
"text":"text1"
}
]
</code></pre>
<p>How to specify such nested JSON in OutputParser</p>
|
<python><langchain>
|
2024-02-19 11:44:37
| 2
| 4,935
|
HarshIT
|
78,020,346
| 22,221,987
|
Python Sphinx inheritance path display and view configuration
|
<p>I'm trying to generate a project documentation with Sphinx. Here is my project tree:</p>
<pre><code>project_folder
docs
package_0
__init__.py
script_0.py
package_1
__init__.py
package_2
script_2.py
package_3
script_3.py
script_33.py
package_4
script_4.py
script_44.py
</code></pre>
<p>I execute <code>sphinx-quickstart</code> with <code>Separate source and build directories?: No</code> flag in <code>/docs</code> folder.<br />
Then I make some edits in <code>/docs/conf.py</code>:</p>
<pre><code>import os
import sys
sys.path.insert(0, os.path.abspath('..'))
project = 'RC API'
copyright = '2024, Mike'
author = 'Mike'
release = '1.0.0'
extensions = ['sphinx.ext.todo', 'sphinx.ext.viewcode', 'sphinx.ext.autodoc']
templates_path = ['_templates']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
language = 'en'
html_theme = 'alabaster'
html_static_path = ['_static']
</code></pre>
<p>In <code>/docs/index.rst</code> I add <code>modules</code> under the <code>toctree</code>. The file looks like this:</p>
<pre><code>.. toctree::
   :maxdepth: 2
   :caption: Contents:

   modules
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
</code></pre>
<p>After that I execute <code>sphinx-apidoc -o docs package_0</code> in the <code>/project_folder</code>. This causes <code>rst</code> files generation in <code>/docs</code> folder.</p>
<p>Finally I run <code>make html</code> in <code>/docs</code> folder. All docs are done.</p>
<p><strong>Here are my questions</strong>:
How can I shorten the inheritance path of every method/class? We are already inside the package, so there is no reason to write the full method/class path. Here is an example:<br />
<code>Here are some scripts and functions inside the packages (and what I want to do with them).</code>
<a href="https://i.sstatic.net/cFBjx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cFBjx.png" alt="enter image description here" /></a></p>
<p>This style is overloaded with information. Is it possible to make the structure look like the project tree in an IDE? With no additional information (telling "this is the package" and "this is the module") and no long inheritance paths?<br />
And one more question: how should I set up docstring parsing for functions so that all the text is not compressed into a single row? Example:<br />
<a href="https://i.sstatic.net/p32SR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p32SR.png" alt="enter image description here" /></a></p>
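<p>One setting I came across that might address the long dotted paths (my understanding from the Sphinx configuration docs, not yet verified on this project) is the <code>add_module_names</code> option in <code>conf.py</code>:</p>

```python
# If set to False, autodoc drops the module prefix from documented
# object names, e.g. "package_1.package_2.script_2.func()" -> "func()"
add_module_names = False
```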
|
<python><python-3.x><documentation><python-sphinx>
|
2024-02-19 11:35:43
| 0
| 309
|
Mika
|
78,020,312
| 8,010,921
|
Signature Inheritance for **kwarg in constructor in python
|
<p>I have two TypedDicts defined as follows:</p>
<pre class="lang-py prettyprint-override"><code>class TypeFilter1(TypedDict):
c: str
d: float
class TypeFilter2(TypedDict):
e: float
f: int
</code></pre>
<p>that I would like to use as types to construct two classes</p>
<pre class="lang-py prettyprint-override"><code>class MyClass1():
def __init__(self, **kwargs:Unpack[TypeFilter1]) -> None:
self.filters = [f"{key}:{values}" for key, values in kwargs.items() if key in TypeFilter1.__annotations__ ]
self.params = ' '.join(self.filters)
class MyClass2():
def __init__(self, **kwargs:Unpack[TypeFilter2]) -> None:
self.filters = [f"{key}:{values}" for key, values in kwargs.items() if key in TypeFilter2.__annotations__ ]
self.params = ' '.join(self.filters)
</code></pre>
<p>The role of the constructors is to parse <code>**kwargs</code> and format a <code>str</code> containing only those <code>kwargs.items()</code> that are members of their respective <code>TypeFilterX.__annotations__</code>:</p>
<p>A use case could be:</p>
<pre class="lang-py prettyprint-override"><code>a = MyClass1(c="hello", d=2.0) # self.params -> "c:hello d:2.0
b = MyClass2(e=2.1, g=3) # self.params -> "e:2.1"
</code></pre>
<p>Since both classes do the same thing, I thought that they could inherit the constructor from a <code>BaseClass</code> but I wouldn't know how to implement <code>BaseClass</code> so that it could parse <code>**kwargs</code> using different <code>TypeFilters</code>.</p>
<p>I tried to</p>
<ul>
<li>create a <code>GenericTypeFilter</code> as a parent of both <code>TyperFilterX</code> and sign the <code>super().__init__(**kwargs)</code> as <code>self, **kwargs:Unpack[GenericTypeFilter]</code>, but both classes return an empty <code>str</code></li>
<li>define <code>**kwargs:Unpack[T]</code> in <code>super().__init__(**kwargs)</code> where <code>T = TypeVar("T", TypeFilter1,TypeFilter2)</code>, but `'TypeVar' object has no attribute</li>
<li>use <code>Any</code>. Same problem as above.</li>
</ul>
<p>How should I do it? Moreover: now I have <strong>2</strong> <code>TypeFilterX</code>, but how would you handle the situation if I have <strong>N</strong> of them?</p>
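<p>For completeness, here is a runtime-only sketch of the shared base class I'm aiming at (the static typing via <code>Unpack</code> is omitted, and <code>FilterBase</code>/<code>FILTER</code> are names I made up):</p>

```python
from typing import TypedDict

class TypeFilter1(TypedDict):
    c: str
    d: float

class FilterBase:
    """Hypothetical shared base: each subclass points FILTER at its TypedDict."""
    FILTER: type

    def __init__(self, **kwargs):
        allowed = self.FILTER.__annotations__
        self.filters = [f"{k}:{v}" for k, v in kwargs.items() if k in allowed]
        self.params = " ".join(self.filters)

class MyClass1(FilterBase):
    FILTER = TypeFilter1

a = MyClass1(c="hello", d=2.0, unknown=5)  # unknown key is silently dropped
print(a.params)  # c:hello d:2.0
```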
<p>Thank you!</p>
|
<python>
|
2024-02-19 11:29:03
| 3
| 327
|
ddgg
|
78,020,254
| 1,029,469
|
Python / Pillow: treat an animated GIF or WEBP in place
|
<p>I want to resize a complete GIF (all frames) in place/in memory, without saving it to disk. I'm using Pillow/PIL. I've seen the other examples of "how to create or modify an animated GIF with Pillow", but they all use the Image's save method like this: <code>first_frame.save("anim.gif", save_all=True, append_images=additional_frames)</code>. I cannot do it this way, for several reasons (it would be a big overhead for my program).</p>
<p>Example (what I tried):</p>
<pre><code>from PIL import Image

im = Image.open("my.gif")
for i in range(im.n_frames):
    im.seek(i)
    result = im.resize((new_width, new_height))
    im.paste(result)
print(im.size)  # >> old size
</code></pre>
<p>It looks like the <code>result</code> is not pasted?</p>
<p>I've also tried this: <code>im.paste(result, (0, 0, new_width, new_height))</code> to explicitly define how the resulting frame should be pasted, but this doesnt help as well.</p>
|
<python><python-imaging-library>
|
2024-02-19 11:18:05
| 0
| 1,957
|
benzkji
|
78,020,185
| 3,828,463
|
Why am I getting this error? <class 'TypeError'>: wrong type
|
<p>I have a Python driver calling a Fortran subroutine. The subroutine takes 1 character argument, then 3 integers. Before I added the character variable at the start and didn't have the argtypes call, it ran fine. But adding the character (and its length) plus the argtypes call at the start gives a TypeError:</p>
<pre><code>import sys
import ctypes as ct
lib = ct.CDLL('FTN_lib.dll')
fs = getattr(lib,'PY_OBJFUN_MOD_mp_GETSIZE')
fs.restype = None
fs.argtypes = [ct.c_char_p,ct.c_int,ct.c_int,ct.c_int,ct.c_int]
x = b'x ' + sys.argv[1].encode('UTF-8') + b' debug=900'
n = ct.c_int(0)
nl = ct.c_int(0)
nnl = ct.c_int(0)
fs(x, len(x), ct.byref(n), ct.byref(nl), ct.byref(nnl))
</code></pre>
<blockquote>
<pre><code>fs(x, len(x), ct.byref(n), ct.byref(nl), ct.byref(nnl))
</code></pre>
<p>ctypes.ArgumentError: argument 3: <class 'TypeError'>: wrong type</p>
</blockquote>
<p>It seems the addition of fs.argtypes is causing the problem. The test code didn't have it, and it worked fine. Adding fs.argtypes causes the test code to fail too. Hence there is some problem with the ct.c_int entries, which are passed via ct.byref in the call.</p>
<p>The Fortran code header is:</p>
<pre><code> subroutine getsize(file,n,nl,nnl)
implicit none
character(*), intent(in ) :: file
integer , intent(out) :: n
integer , intent(out) :: nl
integer , intent(out) :: nnl
</code></pre>
|
<python><fortran><ctypes>
|
2024-02-19 11:06:28
| 1
| 335
|
Adrian
|
78,020,103
| 8,068,825
|
Pandas - Duplicate rows based on the last character in string column
|
<p>so if I have a pandas dataframe</p>
<pre><code>msg | label
"hello" | 1
"hi!" | 0
</code></pre>
<p>I need something that'll duplicate any row ending with [?,.,!] and duplicate the row without the punctuation so the above row would become</p>
<pre><code>msg | label
"hello" | 1
"hi!" | 0
"hi" | 0
</code></pre>
<p>after the transformation. How would I do this?</p>
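<p>A sketch of the direction I've been trying, though I'm not sure it's idiomatic: mask the rows whose last character is punctuation, strip it on a copy, and concat back (column names as above):</p>

```python
import pandas as pd

df = pd.DataFrame({"msg": ["hello", "hi!"], "label": [1, 0]})

# Rows whose msg ends in ?, . or ! get an extra copy with the
# trailing punctuation character stripped
mask = df["msg"].str[-1].isin(["?", ".", "!"])
dupes = df[mask].assign(msg=lambda d: d["msg"].str[:-1])
out = pd.concat([df, dupes], ignore_index=True)
print(out)
```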
|
<python><pandas>
|
2024-02-19 10:52:01
| 3
| 733
|
Gooby
|
78,019,997
| 4,269,851
|
Possible to use python native lists to replace array altogether?
|
<p>I come from a PHP background, so I am very used to arrays with key->value pairs, e.g.</p>
<pre><code>array[0] = "One"
array[1] = "Two"
array[3] = "Three"
</code></pre>
<p>Then I could <code>unset(array[1])</code> and still have the remaining keys untouched</p>
<pre><code>array[0] -> "One"
array[3] -> "Three"
</code></pre>
<p>For all my coding experience I never used arrays without keys, because of the additional flexibility they give. When I started coding with Python, I found it natively does not have arrays, only <a href="https://docs.python.org/3/library/stdtypes.html#typesseq-list" rel="nofollow noreferrer">Lists</a>.</p>
<p>Now lists behave differently key is assigned automatically as data is added to list</p>
<pre><code>list = []
list.append("One") # list[0] -> "One"
list.append("Two") # list[1] -> "Two"
list.append("Three") # list[2] -> "Three"
</code></pre>
<p>If I do <code>list.pop(1)</code> then the list indexes shift down, so I basically can't rely on the index anymore.</p>
<pre><code>list[0] -> "One"
list[1] -> "Three"
</code></pre>
<p>I could have used the <a href="https://docs.python.org/3/library/array.html" rel="nofollow noreferrer">array</a> module for Python, or whatever module replicates the PHP functionality, but first I want to see if there is a way to achieve what I want using pure Python data types.</p>
<p>Let's talk about this example: I am going to import a .CSV list with four columns, e.g.</p>
<pre><code>John 31 USA 516
Sam 27 UK 517
Mike 45 Germany 521
</code></pre>
<p>Now if I worked with <strong>PHP</strong> I'd store the data as a multi-dimensional array</p>
<pre><code>array[0]["name"] = "John"
array[0]["age"] = 31
array[0]["country"] = "USA"
array[0]["membership_id"] = "516"
array[1]["name"] = "Sam"
array[1]["age"] = 27
array[1]["country"] = "UK"
array[1]["membership_id"] = "517"
array[2]["name"] = "Mike"
array[2]["age"] = 45
array[2]["country"] = "Germany"
array[2]["membership_id"] = "523"
</code></pre>
<p>or with the unique <code>membership_id</code> as the key, which is also handy in some cases.</p>
<pre><code>array[516]["name"] = "John"
array[516]["age"] = 31
array[516]["country"] = "USA"
array[517]["name"] = "Sam"
array[517]["age"] = 27
array[517]["country"] = "UK"
array[517]["membership_id"] = "517"
array[523]["name"] = "Mike"
array[523]["age"] = 45
array[523]["country"] = "Germany"
</code></pre>
<p>Or I could use four single-dimensional arrays, because in PHP if I delete key <code>[1]</code> in every array, the other keys <code>[0]</code> and <code>[2]</code> stay and do not shift.</p>
<pre><code>name[0] = "John"
name[1] = "Sam"
name[2] = "Mike"
age[0] = 31
age[1] = 27
age[2] = 45
country[0] = "USA"
country[1] = "UK"
country[2] = "Germany"
membership_id[0] = "516"
membership_id[1] = "517"
membership_id[2] = "523"
</code></pre>
<p>1. What's the best approach to store this using <strong>Python</strong> data types (no extra libraries)? The only option I came up with is to use single-dimensional lists; are there other ways to store this data?</p>
<pre><code>name = []
name.append("John")
name.append("Sam")
name.append("Mike")
age = []
age.append(31)
age.append(27)
age.append(45)
country = []
country.append("USA")
country.append("UK")
country.append("Germany")
membership_id = []
membership_id.append("516")
membership_id.append("517")
membership_id.append("523")
</code></pre>
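<p>An alternative I've been considering is a dict keyed by <code>membership_id</code>, which behaves like PHP's keyed arrays (deleting one entry leaves the other keys untouched):</p>

```python
people = {
    "516": {"name": "John", "age": 31, "country": "USA"},
    "517": {"name": "Sam", "age": 27, "country": "UK"},
    "523": {"name": "Mike", "age": 45, "country": "Germany"},
}

del people["516"]              # remove John; the other keys are untouched
print(sorted(people.keys()))   # ['517', '523']
print(people["523"]["name"])   # Mike
```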
<p>2. Imagine I have lists with 1000 entries; if at some point I decide to delete <code>John</code> from the lists, how do I make this happen?</p>
<pre><code>for n in range(0, len(membership_id) - 1):
    if membership_id[n] == "516":
        name.pop(n)
        age.pop(n)
        country.pop(n)
        membership_id.pop(n)
</code></pre>
<p>The above code raises <code>IndexError: list index out of range</code>, so how do I make it work?</p>
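<p>For reference, the reverse-iteration variant I've been playing with (popping from the end keeps the remaining indexes valid):</p>

```python
name = ["John", "Sam", "Mike"]
age = [31, 27, 45]
country = ["USA", "UK", "Germany"]
membership_id = ["516", "517", "523"]

# Walk backwards so popping an item never shifts the indexes
# we still have to visit
for n in range(len(membership_id) - 1, -1, -1):
    if membership_id[n] == "516":
        name.pop(n)
        age.pop(n)
        country.pop(n)
        membership_id.pop(n)

print(name)  # ['Sam', 'Mike']
```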
<p>3. At some point I want to output the data from the lists in the console as a table; how do I do it (please no answers with the fancy <code>zip()</code> function, I am trying to understand basics, not optimize)?</p>
<p>If I print each list on its own line I am getting</p>
<pre><code>John Sam Mike
31 27 45
USA UK Germany
516 517 521
</code></pre>
<p>The format I need is</p>
<pre><code>John 31 USA 516
Sam 27 UK 517
Mike 45 Germany 521
</code></pre>
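<p>And for question 3, the basic row-by-row formatting I've attempted (plain indexing, no <code>zip()</code>; values as in the list example above):</p>

```python
name = ["John", "Sam", "Mike"]
age = [31, 27, 45]
country = ["USA", "UK", "Germany"]
membership_id = ["516", "517", "523"]

lines = []
for n in range(len(name)):
    # Pad each column to a fixed width so the table lines up
    lines.append(f"{name[n]:<6}{age[n]:<4}{country[n]:<9}{membership_id[n]}")
print("\n".join(lines))
```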
|
<python><arrays><list><types>
|
2024-02-19 10:35:19
| 3
| 829
|
Roman Toasov
|
78,019,970
| 17,471,060
|
Pythonic way to get polars data frame absolute max values of all relevant columns
|
<p>I want to compute the absolute maximum of every Polars data frame column. Here is a way that works, but it surely could be improved.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import polars as pl
df = pl.DataFrame({
    "name": ["one", "one", "one", "two", "two", "two"],
    "val1": [1.2, -2.3, 3, -3.3, 2.2, -1.3],
    "val2": [1, 2, 3, -4, -3, -2]
})

absVals = []
for col in df.columns:
    try:
        absVals.append((lambda arr: max(arr.min(), arr.max(), key=abs))(df[col]))
    except:
        absVals.append(np.NaN)

df_out = pl.DataFrame(data=absVals).transpose()
df_out.columns = df.columns
print(df_out)
</code></pre>
<p>Output:</p>
<pre><code>shape: (1, 3)
┌──────┬──────┬──────┐
│ name ┆ val1 ┆ val2 │
│ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 │
╞══════╪══════╪══════╡
│ NaN ┆ -3.3 ┆ -4.0 │
└──────┴──────┴──────┘
</code></pre>
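<p>For what it's worth, here is the core trick the lambda relies on, isolated in plain Python: the signed value with the largest magnitude is one of the column's min or max:</p>

```python
def signed_abs_max(values):
    """Return the element with the largest absolute value, keeping its sign."""
    return max(min(values), max(values), key=abs)

print(signed_abs_max([1.2, -2.3, 3, -3.3, 2.2, -1.3]))  # -3.3
print(signed_abs_max([1, 2, 3, -4, -3, -2]))            # -4
```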
|
<python><dataframe><python-polars>
|
2024-02-19 10:30:50
| 4
| 344
|
beta green
|
78,019,932
| 23,051,231
|
AttributeError: module 'typing' has no attribute 'Literal'
|
<p>I have Python 3.9 installed on my Windows machine with a Flask script working as expected. Then I tried to replicate the environment on my Linux machine. I have the same Python version and the exact same <em>pip list</em>. But on the Linux machine I receive this error from mod_wsgi:</p>
<pre><code>from flask import Flask, abort
File "/var/www/VirtualEnvs/newEnv/lib/python3.9/site-packages/flask/__init__.py", line 1, in <module>
from . import json as json
File "/var/www/VirtualEnvs/newEnv/lib/python3.9/site-packages/flask/json/__init__.py", line 6, in <module>
from ..globals import current_app
File "/var/www/VirtualEnvs/newEnv/lib/python3.9/site-packages/flask/globals.py", line 6, in <module>
from werkzeug.local import LocalProxy
File "/var/www/VirtualEnvs/newEnv/lib/python3.9/site-packages/werkzeug/__init__.py", line 1, in <module>
from .serving import run_simple as run_simple
File "/var/www/VirtualEnvs/newEnv/lib/python3.9/site-packages/werkzeug/serving.py", line 76, in <module>
t.Union["ssl.SSLContext", t.Tuple[str, t.Optional[str]], t.Literal["adhoc"]]
AttributeError: module 'typing' has no attribute 'Literal'
</code></pre>
<p>I even tried upgrading to python3.10 and I have the same result:</p>
<pre><code>from flask import Flask, abort
File "/usr/local/lib/python3.10/dist-packages/flask/__init__.py", line 5, in <module>
from . import json as json
File "/usr/local/lib/python3.10/dist-packages/flask/json/__init__.py", line 6, in <module>
from ..globals import current_app
File "/usr/local/lib/python3.10/dist-packages/flask/globals.py", line 6, in <module>
from werkzeug.local import LocalProxy
File "/usr/local/lib/python3.10/dist-packages/werkzeug/__init__.py", line 5, in <module>
from .serving import run_simple as run_simple
File "/usr/local/lib/python3.10/dist-packages/werkzeug/serving.py", line 76, in <module>
t.Union["ssl.SSLContext", t.Tuple[str, t.Optional[str]], t.Literal["adhoc"]]
AttributeError: module 'typing' has no attribute 'Literal'
</code></pre>
<p>My last idea was to <em>pip install --upgrade typing</em> which gave me this new error:</p>
<pre><code>from flask import Flask, abort
File "/usr/local/lib/python3.10/dist-packages/flask/__init__.py", line 3, in <module>
import typing as t
File "/usr/local/lib/python3.10/dist-packages/typing.py", line 1359, in <module>
class Callable(extra=collections_abc.Callable, metaclass=CallableMeta):
File "/usr/local/lib/python3.10/dist-packages/typing.py", line 1007, in __new__
self._abc_registry = extra._abc_registry
AttributeError: type object 'Callable' has no attribute '_abc_registry'
</code></pre>
<p>As far as I know, this <em>pip install --upgrade typing</em> should not be used on this Python version. <a href="https://stackoverflow.com/questions/55833509/attributeerror-type-object-callable-has-no-attribute-abc-registry">Here</a> more details can be found.</p>
<p>Flask version 3.0.2
Werkzeug version 3.0.1</p>
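<p>This is the sanity check I ran to see which <code>typing</code> module is actually being picked up (a pip-installed <code>typing</code> backport shadowing the stdlib would explain the missing <code>Literal</code>):</p>

```python
import typing

# On a clean Python 3.8+ install this points into the standard library
# and Literal is present; a path inside site-packages would mean a stale
# "typing" backport is shadowing the stdlib module
print(typing.__file__)
print(hasattr(typing, "Literal"))  # True for stdlib typing on 3.8+
```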
|
<python><flask><mod-wsgi><werkzeug>
|
2024-02-19 10:25:02
| 1
| 403
|
galex
|
78,019,900
| 7,553,746
|
How can I use list comprehension to count high or low pair?
|
<p>I am trying to make my code more elegant/less verbose with list comprehension. I have the following so far:</p>
<pre><code>a = [47, 67, 22]
b = [26, 47, 12]
at = 0
bt = 0
for a, b in [c for c in zip(a, b)]:
    if a > b:
        at += 1
    elif a < b:
        bt += 1
</code></pre>
<p>I've been trying to do something like this to remove the if elif statements:</p>
<pre><code>[at+=1 if (a>b) else bt+=1 if (a<b) for a, b in [c for c in zip(a, b)]]
</code></pre>
<p>Which gives me an invalid syntax error highlighting the <code>at+=1</code> (so I've gone down a rabbit hole):</p>
<pre><code>[at+=1 if (a>b) else bt+=1 if (a<b) else hello=None for a, b in [c for c in zip(a, b)]]
^^
SyntaxError: invalid syntax
</code></pre>
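<p>For context, the counter-free variant I eventually tried: <code>sum()</code> over boolean comparisons, since assignments like <code>at+=1</code> are statements, not expressions, and can't live inside a comprehension:</p>

```python
a = [47, 67, 22]
b = [26, 47, 12]

# True counts as 1 when summed, so no explicit counters are needed
at = sum(x > y for x, y in zip(a, b))
bt = sum(x < y for x, y in zip(a, b))
print(at, bt)  # 3 0
```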
|
<python>
|
2024-02-19 10:19:40
| 3
| 3,326
|
Johnny John Boy
|
78,019,666
| 9,809,542
|
How to avoid splitting on specific text block in LangChain
|
<p>I'm using langchain's RecursiveCharacterTextSplitter to split a string into chunks. Within this string is a substring which I can demarcate. I want this substring not to be split up, whether it becomes entirely its own chunk, is appended to the previous chunk, or prepended to the next chunk. Is there a relatively simple way to do this?</p>
<p>For example, the following code:</p>
<pre><code>from langchain.text_splitter import RecursiveCharacterTextSplitter
splitter = RecursiveCharacterTextSplitter(chunk_size=5, chunk_overlap=2, separators=[' '], keep_separator=False)
nosplit = '<nosplit>Keep all this together, very important! Seriously though it is...<nosplit>'
text = 'Giggity! ' + nosplit + 'Ahh yeah...\nI just buy a jetski.'
chunks = splitter.split_text(text)
print(chunks)
</code></pre>
<p>Prints:</p>
<pre><code>['Giggity!', 'Keep', 'all', 'this', 'together,', 'very', 'important!', 'Seriously', 'though', 'it', 'is...Ahh', 'yeah...\nI', 'just', 'buy a', 'jetski.']
</code></pre>
<p>Whereas I would like it to print:</p>
<pre><code>['Giggity!', 'Keep all this together, very important! Seriously though it is...', 'Ahh', 'yeah...\nI', 'just', 'buy a', 'jetski.']
</code></pre>
<p>The only partial solution I have is to give <code><nosplit></code> priority in my separators list and then temporarily replace all other separators within the nosplit text with non-separator placeholders, and then put them back in. E.g.:</p>
<pre><code>from langchain.text_splitter import RecursiveCharacterTextSplitter
splitter = RecursiveCharacterTextSplitter(chunk_size=5, chunk_overlap=2, separators=['<nosplit>', ' '], keep_separator=False)
nosplit = '<nosplit>Keep all this together, very important! Seriously though it is...<nosplit>'
space_word = 'x179lp'
nosplit = nosplit.replace(' ', space_word)
text = 'Giggity!' + nosplit + 'Ahh yeah...\nI just buy a jetski.'
chunks = splitter.split_text(text)
for i, chunk in enumerate(chunks):
    chunks[i] = chunk.replace(space_word, ' ')
print(chunks)
</code></pre>
<p>The issue with this method is that I can't use character-level splitting, i.e. '', as a possible separator for the rest of the document (I'd like to use the default separators list along with nosplit: ["<nosplit>", "\n\n", "\n", " ", ""]).</p>
<p>Thank you!</p>
|
<python><nlp><langchain><retrieval-augmented-generation><text-chunking>
|
2024-02-19 09:43:19
| 1
| 587
|
DMcC
|
78,019,567
| 1,422,096
|
How does "import pythoncom" find the right files?
|
<p>NB: I know the proper solution is just to do <code>pip install pywin32</code> and let everything be done automatically, but here this question is about the internals of <code>pythoncom</code>.</p>
<p>When doing <code>import pythoncom</code>, it works, but:</p>
<ul>
<li><p>in "C:\Python38\Lib\site-packages\pythoncom.py", there is <code>import pywintypes</code>, but <strong>no pywintypes.py</strong> or .pyd or .dll is present in the <code>site-packages</code> directory. How can it "magically" find <code>pywintypes</code>?</p>
</li>
<li><p>when doing <code>print(pythoncom.__file__)</code>, we see:</p>
<pre><code>'C:\\Python38\\lib\\site-packages\\pywin32_system32\\pythoncom38.dll'
</code></pre>
<p>How is this stunning behaviour possible internally? (i.e. pythoncom.py that we imported is now recognized as another file, a .dll)</p>
</li>
<li><p>Also, <code>pythoncom.py</code> contains:</p>
<pre><code># Magic utility that "redirects" to pythoncomxx.dll
import pywintypes
pywintypes.__import_pywin32_system_module__("pythoncom", globals())
</code></pre>
</li>
</ul>
<p><strong>What and where is this (I quote) "magic utility" that redirects to <code>pythoncomxx.dll</code>?</strong></p>
<p>I don't see where this utility is called when doing just <code>import pythoncom</code>.</p>
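<p>To convince myself a module can replace itself like this, I tried a small stdlib-only experiment: installing a hand-built module object in <code>sys.modules</code>, which is the mechanism I suspect the pywin32 bootstrap uses after loading the DLL (the module name and DLL path here are made up):</p>

```python
import sys
import types

# Create a module object by hand and register it under a chosen name;
# a later "import" of that name resolves from sys.modules, no .py needed
stub = types.ModuleType("fake_pythoncom")
stub.__file__ = r"C:\Python38\lib\site-packages\pywin32_system32\pythoncom38.dll"
sys.modules["fake_pythoncom"] = stub

import fake_pythoncom  # resolved straight from sys.modules
print(fake_pythoncom.__file__)
```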
|
<python><windows><python-import><pywin32><pth>
|
2024-02-19 09:27:29
| 1
| 47,388
|
Basj
|
78,019,480
| 4,875,766
|
How to dynamically provide arguments to a class __init__ function based on attributes?
|
<p>How do tools like <code>dataclasses</code> and <code>pydantic</code> create the <code>__init__()</code> functions for the classes they create?</p>
<p>I know I can use these tools, but I want to learn how to use Python metaprogramming to parse my class and create a dynamic init function based on the attributes and their types.</p>
<p>an example:</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
    x: int
    y: str | None
    z: float = 0.1

    # I don't want to manually make this
    def __init__(self, x: int, y: str | None, z: float = 0.1):
        self.x = x
        self.y = y
        self.z = z
</code></pre>
<p>Is it possible to do something similar using metaclasses and type annotations?</p>
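<p>A rough sketch of the direction I've found so far: generating <code>__init__</code> from <code>__annotations__</code> inside <code>__init_subclass__</code> (no metaclass needed; <code>AutoInit</code> is my own name, and this ignores many edge cases that dataclasses handle):</p>

```python
class AutoInit:
    def __init_subclass__(cls, **kw):
        super().__init_subclass__(**kw)
        fields = dict(cls.__annotations__)

        def __init__(self, **values):
            for name in fields:
                if name in values:
                    setattr(self, name, values[name])
                elif hasattr(cls, name):
                    # fall back to the class-level default (e.g. z = 0.1)
                    setattr(self, name, getattr(cls, name))
                else:
                    raise TypeError(f"missing required argument: {name!r}")
        cls.__init__ = __init__

class MyClass(AutoInit):
    x: int
    y: "str | None"
    z: float = 0.1

obj = MyClass(x=1, y=None)
print(obj.x, obj.y, obj.z)  # 1 None 0.1
```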
|
<python><oop><metaprogramming>
|
2024-02-19 09:12:11
| 1
| 331
|
TobyStack
|
78,019,304
| 5,259,592
|
SSL error in python requests on POST API call
|
<p>I have to make an API request using Airflow. So I was using HttpOperator but I'm facing the error below:</p>
<pre><code>urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='<api host>', port=443): Max retries exceeded with url: <endpoint path> (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)')))
</code></pre>
<p>So I looked at the HttpHook code, tried to replicate it, and also wrote my own code to hit the API. Below is the sample I wrote:</p>
<pre><code>import requests
data = {}
data['subject'] = "test mail"
data['fromUserEmail'] = "<sender id>"
data['toUserEmail'] = "<my mail id>"
data['emailContent'] = "test mail"
cookies = "<some headers>"
location = "USA"
content_type = "application/json"
apikey = '<some api key>'
auth = '"Basic <auth key>'
headers = {'X-COM-LOCATION': location, 'Content-Type': content_type, 'apikey': apikey, 'Authorization': auth,
'Cookie': cookies}
endpoint = "<host> + <path>"
req = requests.Request("POST",url=endpoint,headers=headers,data=data)
ses = requests.Session()
pr = ses.prepare_request(req)
settings = ses.merge_environment_settings(pr.url, {}, None, "/home/airflow/.local/lib/python3.10/site-packages/certifi/cacert.pem",None)
pr.body = 'No, I want exactly this as the body.'
resp = ses.send(pr,**settings)
</code></pre>
<p>but I am still getting the same error; below is the stack trace</p>
<pre><code>>>> pr.body = 'No, I want exactly this as the body.'
>>> resp = ses.send(pr,**settings)
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 714, in urlopen
httplib_response = self._make_request(
File "/home/airflow/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 403, in _make_request
self._validate_conn(conn)
File "/home/airflow/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1053, in _validate_conn
conn.connect()
File "/home/airflow/.local/lib/python3.10/site-packages/urllib3/connection.py", line 419, in connect
self.sock = ssl_wrap_socket(
File "/home/airflow/.local/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(
File "/home/airflow/.local/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/usr/local/lib/python3.10/ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "/usr/local/lib/python3.10/ssl.py", line 1104, in _create
self.do_handshake()
File "/usr/local/lib/python3.10/ssl.py", line 1375, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/home/airflow/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 798, in urlopen
retries = retries.increment(
File "/home/airflow/.local/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='<host>', port=443): Max retries exceeded with url: <path> (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/airflow/.local/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/requests/adapters.py", line 517, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='<host>', port=443): Max retries exceeded with url: <path> (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)')))
>>>
>>> print(resp.status_code)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'resp' is not defined
</code></pre>
<p>I have tried many things from Stack Overflow and the requests official documentation, but nothing helps. Can anyone help me here?</p>
|
<python><ssl-certificate><airflow>
|
2024-02-19 08:36:37
| 0
| 400
|
Ayush Goyal
|
78,019,123
| 8,068,825
|
Pandas - Split multiple columns of numpy arrays into multiple columns
|
<p>I saved a numpy array to a pandas column and it gets saved as a string. I parse the string and convert it back to a numpy array below. The issue is that the code below is painfully slow. Is there a way to speed this up?</p>
<pre><code>def f(x):
# Remove "[" and "]" and split on space
return np.fromstring(x[1 : -1], sep=" ")
def split_df(df: pd.DataFrame):
output_df = pd.DataFrame(np.float32(df["output"].str.replace("\n", "").apply(f).to_list()))
other_output_df = pd.DataFrame(np.float32(df["other_output"].str.replace("\n", "").apply(f).to_list())).add_prefix("prev_")
new_df = pd.concat((output_df, other_output_df), axis=1, ignore_index=True)
return new_df
</code></pre>
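<p>One way to speed this up (a sketch, assuming every row holds the same number of values, as the original code already implies) is to clean and split the whole column in one pass instead of calling <code>np.fromstring</code> once per row:</p>

```python
import numpy as np
import pandas as pd

# toy stand-in for the real DataFrame
df = pd.DataFrame({"output": ["[1. 2. 3.]", "[4. 5.\n 6.]"]})

def parse_array_column(col: pd.Series) -> np.ndarray:
    # strip brackets/newlines, join everything into one string, split once
    cleaned = col.str.replace("\n", " ", regex=False).str.strip("[] ")
    flat = np.array(" ".join(cleaned).split(), dtype=np.float32)
    return flat.reshape(len(col), -1)

arr = parse_array_column(df["output"])
print(arr.shape)  # (2, 3)
```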
|
<python><pandas><numpy>
|
2024-02-19 08:02:58
| 1
| 733
|
Gooby
|
78,019,115
| 839,837
|
Use values in a list to extract specific rows from an excel spreadsheet
|
<p>I have a spreadsheet where I'm trying to extract specific rows that contain a value from a list. The values checked are in the Task column of the spreadsheet. The spreadsheet has an unusual layout and is not friendly for parsing.</p>
<p>I converted the excel spreadsheet I need to parse into a JSON file so I could show it here in Stack Overflow since attachments are not possible.</p>
<p>I want to run the list of Task numbers against the Excel spreadsheet and create a new spreadsheet containing only the rows that contain the Task numbers from the task_number list.</p>
<p>Looking something like this:-</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Task</th>
<th>Description</th>
<th>A</th>
<th>B</th>
<th>C</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>130101</td>
<td>something</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>131269</td>
<td>something</td>
<td>4</td>
<td>5</td>
<td>6</td>
<td>7</td>
</tr>
</tbody>
</table></div>
<p>I've been trying to get this to work using Python and Pandas.</p>
<p>This is what I have so far:-</p>
<pre><code>import pandas as pd
# the excel spreadsheet in JSON form
excel_sheet = '''{"Unnamed: 0.1":0,"Unnamed: 0":"Some sentence titling the page","Unnamed: 1":null,"Unnamed: 2":null,"Unnamed: 3":null,"Unnamed: 4":null,"Unnamed: 5":null},{"Unnamed: 0.1":1,"Unnamed: 0":null,"Unnamed: 1":null,"Unnamed: 2":null,"Unnamed: 3":null,"Unnamed: 4":null,"Unnamed: 5":null},{"Unnamed: 0.1":2,"Unnamed: 0":null,"Unnamed: 1":null,"Unnamed: 2":null,"Unnamed: 3":null,"Unnamed: 4":null,"Unnamed: 5":null},{"Unnamed: 0.1":3,"Unnamed: 0":"a thing:","Unnamed: 1":"sdftrhg","Unnamed: 2":null,"Unnamed: 3":null,"Unnamed: 4":null,"Unnamed: 5":null},{"Unnamed: 0.1":9,"Unnamed: 0":null,"Unnamed: 1":null,"Unnamed: 2":null,"Unnamed: 3":null,"Unnamed: 4":null,"Unnamed: 5":null},{"Unnamed: 0.1":10,"Unnamed: 0":"Task","Unnamed: 1":"Description","Unnamed: 2":"A","Unnamed: 3":"B","Unnamed: 4":"C","Unnamed: 5":null},{"Unnamed: 0.1":11,"Unnamed: 0":null,"Unnamed: 1":null,"Unnamed: 2":1,"Unnamed: 3":1,"Unnamed: 4":1,"Unnamed: 5":2.0},{"Unnamed: 0.1":12,"Unnamed: 0":"130","Unnamed: 1":null,"Unnamed: 2":null,"Unnamed: 3":null,"Unnamed: 4":null,"Unnamed: 5":null},{"Unnamed: 0.1":13,"Unnamed: 0":"130101","Unnamed: 1":"something","Unnamed: 2":1,"Unnamed: 3":2,"Unnamed: 4":3,"Unnamed: 5":4.0},{"Unnamed: 0.1":14,"Unnamed: 0":null,"Unnamed: 1":null,"Unnamed: 2":null,"Unnamed: 3":null,"Unnamed: 4":null,"Unnamed: 5":null},{"Unnamed: 0.1":15,"Unnamed: 0":null,"Unnamed: 1":"bam","Unnamed: 2":8,"Unnamed: 3":0,"Unnamed: 4":0,"Unnamed: 5":null},{"Unnamed: 0.1":16,"Unnamed: 0":null,"Unnamed: 1":null,"Unnamed: 2":null,"Unnamed: 3":null,"Unnamed: 4":null,"Unnamed: 5":null},{"Unnamed: 0.1":17,"Unnamed: 0":"131","Unnamed: 1":null,"Unnamed: 2":null,"Unnamed: 3":null,"Unnamed: 4":null,"Unnamed: 5":null},{"Unnamed: 0.1":18,"Unnamed: 0":"131269","Unnamed: 1":"something","Unnamed: 2":4,"Unnamed: 3":5,"Unnamed: 4":6,"Unnamed: 5":7.0},{"Unnamed: 0.1":43,"Unnamed: 0":null,"Unnamed: 1":null,"Unnamed: 2":null,"Unnamed: 3":null,"Unnamed: 4":null,"Unnamed: 5":null},{"Unnamed: 0.1":44,"Unnamed: 
0":null,"Unnamed: 1":"bam","Unnamed: 2":8,"Unnamed: 3":0,"Unnamed: 4":0,"Unnamed: 5":null}'''
# convert JSON back to Excel
pd.read_json(excel_sheet , lines = True).to_excel("excel_sheet.xlsx")
# read in excel spreadsheet
excel_sheet = pd.read_excel("excel_sheet.xlsx")
task_number = [130101, 131269]
# create an empty dataframe with appropriate headings
new_data = pd.DataFrame({'Task':[], 'Description':[], 'A':[], 'B':[], 'C':[], ' ':[]})
# search rows in dataframe and append rows containing values in task_number into
# new_data dataframe (this is where it falls over). I use iloc to select the column by
# index instead of by the Task name due to it not being at the top of the column. It doesn't work.
#for task in task_number:
# new_data = new_data.append(excel_sheet[excel_sheet.iloc[:, 0] == task], ignore_index=True)
# This is another way as suggested by e-motta below, but it returns a empty dataframe
filtered_df = excel_sheet[excel_sheet.iloc[:, 0].isin(task_number)]
</code></pre>
<p>As my skill in parsing spreadsheets is limited, to say the least, I expect there are probably much better ways to do this, so any help would be greatly appreciated.</p>
<p>Thanks for taking the time to have a look at my problem.</p>
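<p>For what it's worth, a likely reason <code>filtered_df</code> comes back empty is a type mismatch: the Task column is read as strings (the sheet mixes text and numbers), while <code>task_number</code> holds ints. A minimal sketch of the comparison with both sides coerced to strings (the toy frame below stands in for the real sheet):</p>

```python
import pandas as pd

# toy stand-in for the messy sheet: Task codes arrive as strings
excel_sheet = pd.DataFrame({
    0: ["Some sentence titling the page", None, "Task", "130", "130101", "131269"],
    1: [None, None, "Description", None, "something", "something"],
})

task_number = [130101, 131269]

# coerce both sides to str before comparing
mask = excel_sheet.iloc[:, 0].astype(str).isin([str(t) for t in task_number])
filtered_df = excel_sheet[mask]
print(len(filtered_df))  # 2
```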
|
<python><python-3.x><pandas><excel><dataframe>
|
2024-02-19 08:00:45
| 1
| 727
|
Markus
|
78,019,004
| 4,215,176
|
Python: Reading bytes from a file and converting it to the needed type
|
<p>I am trying to implement code which reads the bytes from a file and computes a histogram of the byte count frequencies.</p>
<p>I am using this code to read from a file and writing to a csv file:</p>
<pre><code>block_size = int(block_size)
data_block = data[:block_size]
encoded_data_block = base64.b64encode(data_block).decode("utf-8")
csv_writer.writerow([encoded_data_block, label])
</code></pre>
<p>later on, I am trying to read from the csv file to compute the byte count:</p>
<p>for example, one of the rows in my csv file looks something like this:</p>
<pre><code>b'a2pUuzNscSX+UaAz9KAMtls+OeN08DtLa4WYri76IouKLNp+iXaKntSqKBQZLqIdG/xt4GWiXD08nI
</code></pre>
<p>in an example dataset and code that i am referring to, the rows of the csv file are looking like this:</p>
<pre><code>b'x\x9c\x9c\xbd\xcb\x8e\x1cW\x96-8\xe7W\x18\x1c(\x88\x04<\x02\x0c\xea\x9d\x1aQ$\xf5*QR\x89\xcc\xd2\xbdU\xa8\x81\xb9\xdb\xf1p\x13\xcd\xcd<\xed\x11A\xd7(q\xd1
</code></pre>
<p>I want my dataset rows to look like the above as well, so that I can test my code against theirs. Can anyone please help me get my data into that format?</p>
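<p>If I read the two samples correctly, your rows are base64 text, while the reference dataset appears to store the <code>repr</code> of the raw bytes. A minimal sketch of writing the repr form instead, and of recovering the bytes from it later (the sample bytes here are made up):</p>

```python
import ast
import base64

data_block = bytes([120, 156, 1, 65, 10, 255])  # made-up sample

# what the question's code currently writes:
b64_row = base64.b64encode(data_block).decode("utf-8")

# what the reference dataset appears to contain: repr of the raw bytes
repr_row = repr(data_block)

# recovering the original bytes from the repr form for the histogram
recovered = ast.literal_eval(repr_row)
print(recovered == data_block)  # True
```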
|
<python><python-3.x><byte>
|
2024-02-19 07:35:34
| 0
| 511
|
Sanku
|
78,018,922
| 9,495,110
|
Could not deserialize class 'Generator' because its parent module networks cannot be imported
|
<p>I have a subclassed model <code>Generator</code> that uses other subclassed layers declared in a different file <code>ops.py</code>. I tried to save the model by calling <code>model.save()</code> in <code>keras</code> format. Now when I am trying to load the model by calling <code>tf.keras.models.load_model()</code>, I get the following error:</p>
<pre><code>TypeError: Could not deserialize class 'Generator' because its parent module networks cannot be imported. Full object config: {'module': 'networks', 'class_name': 'Generator', 'config': {'name': 'Generator', 'max_conv_dim': 512, 'sn': False, 'img_size': 256, 'img_ch': 3, 'style_dim': 64}, 'registered_name': 'Generator', 'build_config': {'input_shape': [[8, 256, 256, 3], [8, 64]]}}
</code></pre>
<p>I am basically trying to save the <code>Generator</code> so that I can use it without importing the classes. This is the code that I am using: <a href="https://github.com/clovaai/stargan-v2-tensorflow" rel="nofollow noreferrer">https://github.com/clovaai/stargan-v2-tensorflow</a></p>
<p>How can I solve this issue?</p>
|
<python><tensorflow><keras><serialization>
|
2024-02-19 07:16:42
| 0
| 1,027
|
let me down slowly
|
78,018,799
| 16,405,935
|
How to transform multi-columns to rows
|
<p>I have an Excel file with multiple columns as below (sorry, but I don't know how to recreate it with pandas):
<a href="https://i.sstatic.net/lzTs9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lzTs9.png" alt="enter image description here" /></a></p>
<p>Below is my expected Output:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'Code': ['11000000000', '11200100000', '11710000000', '11000000000', '11200100000', '11710000000', '11000000000', '11200100000', '11710000000'],
'Code Name': ['Car', 'Motorbike', 'Bike', 'Car', 'Motorbike', 'Bike', 'Car', 'Motorbike', 'Bike'],
'Date': ['19-02-2024', '19-02-2024', '19-02-2024', '19-02-2024', '19-02-2024', '19-02-2024', '19-02-2024', '19-02-2024', '19-02-2024'],
'Customer': ['Customer A', 'Customer A', 'Customer A', 'Customer B', 'Customer B', 'Customer B', 'Customer ...', 'Customer ...', 'Customer ...'],
'Point_1': [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
'Point_2': [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]})
df
Code Code Name Date Customer Point_1 Point_2
0 11000000000 Car 19-02-2024 Customer A NaN NaN
1 11200100000 Motorbike 19-02-2024 Customer A NaN NaN
2 11710000000 Bike 19-02-2024 Customer A NaN NaN
3 11000000000 Car 19-02-2024 Customer B NaN NaN
4 11200100000 Motorbike 19-02-2024 Customer B NaN NaN
5 11710000000 Bike 19-02-2024 Customer B NaN NaN
6 11000000000 Car 19-02-2024 Customer ... NaN NaN
7 11200100000 Motorbike 19-02-2024 Customer ... NaN NaN
8 11710000000 Bike 19-02-2024 Customer ... NaN NaN
</code></pre>
<p>What should I do to get this result. Thank you</p>
|
<python><pandas><dataframe>
|
2024-02-19 06:47:47
| 1
| 1,793
|
hoa tran
|
78,018,605
| 8,645,552
|
Write a numpy array to headerless wav file?
|
<p>I have a numpy audio array, I would like to write it in chunks as <strong>wav</strong> with <strong>ulaw</strong> encoding. For now I am using the <a href="https://python-soundfile.readthedocs.io/en/0.11.0/index.html?highlight=write#soundfile.SoundFile.write" rel="nofollow noreferrer">soundfile</a> library:</p>
<pre class="lang-py prettyprint-override"><code>import soundfile as sf
# ...
audio_len = len(audio) # numpy array
chunkLen = int(0.030 * 8000)
for i in range(0, audio_len, chunkLen):
chunk = audio[i:min(i+chunkLen, audio_len)]
buffered = BytesIO()
sf.write(buffered, chunk, samplerate=8000, format="WAV", subtype="ULAW")
buffered.seek(0)
yield buffered.read()
</code></pre>
<p>However, the returned raw data file includes a lot of <code>\bRIFF</code>, which means it is still writing headers. Is there any way I can write it headerless?</p>
|
<python><numpy><wav><soundfile><mu-law>
|
2024-02-19 06:02:09
| 0
| 370
|
demid
|
78,018,029
| 1,574,054
|
How to delete all parents with no children in sqlalchemy?
|
<p>Let's consider the SQLAlchemy <a href="https://docs.sqlalchemy.org/en/20/orm/basic_relationships.html" rel="nofollow noreferrer">example for a basic relationship</a>:</p>
<pre><code>class Parent(Base):
__tablename__ = "parent_table"
id: Mapped[int] = mapped_column(primary_key=True)
children: Mapped[List["Child"]] = relationship(back_populates="parent")
class Child(Base):
__tablename__ = "child_table"
id: Mapped[int] = mapped_column(primary_key=True)
parent_id: Mapped[int] = mapped_column(ForeignKey("parent_table.id"))
parent: Mapped["Parent"] = relationship(back_populates="children")
</code></pre>
<p>How can I delete all <code>Parent</code>s without <code>Child</code>ren?</p>
<p>I tried this:</p>
<pre><code>from sqlalchemy import func
from sqlalchemy.orm import Session
with Session(engine) as sesssion:
session.query(Parent).filter(
func.count(Parent.children) == 0
).execution_options(is_delete_using=True).delete()
</code></pre>
<p>But this causes:</p>
<blockquote>
<p>sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) misuse of aggregate function count()<br>
[SQL: SELECT parent_table.id<br>
FROM parent_table, child_table<br>
WHERE count(parent_table.id = child_table.parent_id) = ?]<br>
[parameters: (0,)]<br>
(Background on this error at: <a href="https://sqlalche.me/e/20/e3q8" rel="nofollow noreferrer">https://sqlalche.me/e/20/e3q8</a>)<br></p>
</blockquote>
|
<python><sql><sqlite><sqlalchemy>
|
2024-02-19 01:42:34
| 2
| 4,589
|
HerpDerpington
|
78,017,871
| 6,300,467
|
How to access all the holdings in an ETF with yahooquery?
|
<p>Is it possible to grab <strong>all</strong> of the holdings of an ETF within <code>yahooquery</code>?</p>
<p>The example code below shows the top 10 holdings of an example ETF, but not all of them. Is it possible to get all the holdings of an ETF within the <code>yahooquery</code>/<code>yfinance</code> packages?</p>
<pre><code>from yahooquery import Ticker
t=Ticker("QQQ")
holdings = t.fund_holding_info['QQQ']['holdings']
holdings = pd.DataFrame.from_records(holdings)
print(holdings)
</code></pre>
<p>which returns the top 10 holdings of the given ETF.</p>
<pre><code> symbol holdingName holdingPercent
0 MSFT Microsoft Corp 0.089336
1 AAPL Apple Inc 0.086706
2 AMZN Amazon.com Inc 0.048489
3 NVDA NVIDIA Corp 0.045946
4 AVGO Broadcom Inc 0.043041
5 META Meta Platforms Inc Class A 0.041650
6 TSLA Tesla Inc 0.027246
7 GOOGL Alphabet Inc Class A 0.025067
8 GOOG Alphabet Inc Class C 0.024543
9 COST Costco Wholesale Corp 0.024024
</code></pre>
<p>If it's not possible with <code>yahooquery</code> is there another package that can do it?</p>
|
<python><pandas><web-scraping><yahoo-finance><yfinance>
|
2024-02-19 00:12:25
| 0
| 785
|
AlphaBetaGamma96
|
78,017,860
| 1,412,564
|
Django - how do I update the database with an expression of a field?
|
<p>We are using Django for our website, and I want to remove personal information from one of the databases. For example I want to change email addresses to their <code>id</code> + <code>"@example.com"</code>. I can do it with Python:</p>
<pre class="lang-py prettyprint-override"><code>for e in UserEmailAddress.objects.all():
e.email="{}@example.com".format(e.id)
e.save()
</code></pre>
<p>But it takes a long time if I have many users. Is there a way to do it with <code>UserEmailAddress.objects.all().update(...)</code> with one command that will do the same thing? I tried to use the <code>F</code> class but it didn't work, I probably didn't use it correctly.</p>
<p>I also want to filter and exclude by these expressions - for example, query all users except users where their <code>e.email=="{}@example.com".format(e.id)</code>. I tried both queries:</p>
<pre class="lang-py prettyprint-override"><code>from django.db.models import F
el = UserEmailAddress.objects.all().filter(email="{}@example.com".format(F('id')))
print(len(el))
el = UserEmailAddress.objects.all().exclude(email="{}@example.com".format(F('id')))
print(len(el))
</code></pre>
<p>Both of them return incorrect results (not what I expected).</p>
|
<python><django><django-queryset>
|
2024-02-19 00:07:02
| 1
| 3,361
|
Uri
|
78,017,850
| 965,372
|
Upgrading PAHO MQTT to v2, where to add callback method?
|
<p>I'm trying to update some code someone else wrote that I've been providing a Docker file for, as well as trying to keep it up to date, since they no longer support it. In the paho docs, for v2, you need to add "mqtt.CallbackAPIVersion.VERSION1" as per <a href="https://eclipse.dev/paho/files/paho.mqtt.python/html/migrations.html" rel="nofollow noreferrer">this page</a>.</p>
<p>In <a href="https://stackoverflow.com/questions/77984857/paho-mqtt-unsupported-callback-api-version-error">this SO question</a>, I kind of had an idea, but I'm not a python expert and not sure exactly where to add the callback method.</p>
<p>The code I'm trying to update that calls <code>paho.mqtt.client</code> is</p>
<pre><code>#===========================================================================
#
# Broker connection
#
#===========================================================================
from . import config
import paho.mqtt.client as mqtt
#===========================================================================
class Client( mqtt.Client ):
"""Logging client
"""
def __init__( self, log=None ):
mqtt.Client.__init__(mqtt.CallbackAPIVersion.VERSION1, self )
self._logger = log
# Restore callbacks overwritten by stupid mqtt library
self.on_log = Client.on_log
def on_log( self, userData, level, buf ):
if self._logger:
self._logger.log( level, buf )
#===========================================================================
def connect( configDir, log, client=None ):
cfg = config.parse( configDir )
if client is None:
client = Client( log )
if cfg.user:
client.username_pw_set( cfg.user, cfg.password )
if cfg.ca_certs:
client.tls_set( cfg.ca_certs, cfg.certFile, cfg.keyFile )
log.info( "Connecting to broker at %s:%d" % ( cfg.host, cfg.port ) )
client.connect( cfg.host, cfg.port, cfg.keepAlive )
return client
#===========================================================================
</code></pre>
<p>Am I correct in thinking that to make this work with client v2, i need to do this?</p>
<pre><code>class Client( mqtt.Client ):
"""Logging client
"""
def __init__( self, log=None ):
mqtt.Client.__init__(mqtt.CallbackAPIVersion.VERSION1, self )
self._logger = log
# Restore callbacks overwritten by stupid mqtt library
self.on_log = Client.on_log
def on_log( self, userData, level, buf ):
if self._logger:
self._logger.log( level, buf )
</code></pre>
<p>I tried asking chatgpt but got the following unhelpful code:</p>
<pre><code>class Client(mqtt.Client):
"""Logging client"""
def __init__(self, log=None):
super().__init__(protocol=mqtt.MQTTv31, transport="tcp", clean_session=True, userdata=None,
protocol_version=mqtt.PROTOCOL_V31, callback=None)
self._logger = log
def on_log(self, userData, level, buf):
if self._logger:
self._logger.log(level, buf)
</code></pre>
<p>I also asked it to "rewrite it using version 2 of the library", and it came up with this:</p>
<pre><code>class Client(mqtt.Client):
"""Logging client"""
def __init__(self, log=None):
super().__init__(protocol=mqtt.MQTTv5) # Specify MQTT version 5
self._logger = log
def on_log(self, userdata, level, buf):
if self._logger:
self._logger.log(level, buf)
</code></pre>
<p>which might be correct, but I'm not sure.</p>
|
<python><mqtt><paho>
|
2024-02-19 00:01:50
| 0
| 1,264
|
Evan R.
|
78,017,794
| 651,174
|
Set two different debug levels in logger module
|
<p>I have the following which creates two loggers -- stream logger and file logger.</p>
<pre><code># Set up the Logging modules
tmp_file = tempfile.NamedTemporaryFile(mode='w+')
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
c_handler = logging.StreamHandler()
f_handler = logging.FileHandler(tmp_file.name)
c_handler.setFormatter(logging.Formatter('[%(asctime)s %(levelname)s]\t%(message)s'))
f_handler.setFormatter(logging.Formatter('[%(asctime)s] - %(levelname)s %(funcName)s:L%(lineno)s: %(message)s'))
logger.addHandler(c_handler)
logger.addHandler(f_handler)
</code></pre>
<p>I would like the <code>Stream</code> loger to have level of <code>logging.INFO</code> (it is currently correct) and the <code>File</code> logger to have level of <code>logging.DEBUG</code>. How would I do this?</p>
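<p>Set the level on each handler, and make sure the logger itself is at the <em>lowest</em> level any handler needs (a record must pass the logger's level check before any handler sees it). A runnable sketch based on the setup above:</p>

```python
import logging
import tempfile

tmp_file = tempfile.NamedTemporaryFile(mode="w+")
logger = logging.getLogger("two_level_demo")
logger.setLevel(logging.DEBUG)          # lowest level any handler needs

c_handler = logging.StreamHandler()
c_handler.setLevel(logging.INFO)        # console: INFO and above
f_handler = logging.FileHandler(tmp_file.name)
f_handler.setLevel(logging.DEBUG)       # file: everything

logger.addHandler(c_handler)
logger.addHandler(f_handler)

logger.debug("file only")               # skipped by the stream handler
logger.info("both handlers")

f_handler.flush()
with open(tmp_file.name) as fh:
    contents = fh.read()
print("file only" in contents)  # True: the file got the DEBUG record
```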
|
<python><python-3.x><logging>
|
2024-02-18 23:34:28
| 2
| 112,064
|
David542
|
78,017,670
| 5,414,176
|
Is it possible to use the depthSort argument in breadthfirst layout in dash-cytoscape?
|
<p>I'm plotting a network using <a href="https://dash.plotly.com/cytoscape" rel="nofollow noreferrer">Dash-Cytoscape</a>, using the <code>breadthfirst</code> layout, as documented <a href="https://js.cytoscape.org/#layouts/breadthfirst" rel="nofollow noreferrer">here</a>.</p>
<p>I would like to control the order of appearance of elements on the same level. The JS API has the <code>depthSort</code> argument to achieve this, but I couldn't figure out how to pass a callback in Python that the frontend can live with.</p>
<p>Things I've tried:</p>
<ol>
<li><code>"depthSort": lambda a,b: a - b</code></li>
<li><code>"depthSort": "(a,b) => a - b"</code></li>
<li><code>"depthSort": "function (a,b) { return a - b}"</code></li>
</ol>
<h2>Minimal example:</h2>
<p>I would like to get this:</p>
<pre><code> 1
3 2
</code></pre>
<p>but what I'm getting is this:</p>
<pre><code> 1
2 3
</code></pre>
<pre class="lang-py prettyprint-override"><code>from dash import Dash
import dash_cytoscape as cyto
app = Dash(__name__)
app.layout = cyto.Cytoscape(
elements=[
{"data": {"id": "1", "label": "1"}},
{"data": {"id": "2", "label": "2"}},
{"data": {"id": "3", "label": "3"}},
{"data": {"source": "1", "target": "2"}},
{"data": {"source": "1", "target": "3"}},
],
layout={
"name": "breadthfirst",
"roots": ["1"],
# "depthSort": ?
},
)
app.run_server(debug=True)
</code></pre>
|
<python><plotly-dash><cytoscape><dash-cytoscape>
|
2024-02-18 22:41:53
| 1
| 801
|
Alon
|
78,017,570
| 530,714
|
Expressive differences and similarities of Python's and C++20's modules? `import` vs `export`/`import`
|
<p>I am familiar with Python's <code>import</code> mechanism (no expert though). I am not at all familiar with the <a href="https://en.cppreference.com/w/cpp/language/modules" rel="nofollow noreferrer">modules</a> mechanism that C++ has finally received in C++20. Despite them being two very different languages, I am sure that many idioms I am used to with Python's <code>import</code> can be approximated by C++'s module mechanism. Can you help me find (approximate) equivalents or explain why there are no close similarities between different ways an import can be done in Python?</p>
<ol>
<li><p>From what I've read, Python's <code>from foobar import *</code> should roughly be equivalent to C++'s <code>import foobar;</code>. That is, all public symbols from <code>foobar</code> become available for use at the top-level namespace. Are there any principal differences between them though?</p>
</li>
<li><p>Python's <code>import foobar</code> makes symbols available, but they are all put inside a namespace <code>foobar</code>. Can the same thing be achieved in C++, i.e. results of import are contained within a namespace?</p>
</li>
<li><p>Python's <code>from foobar import barbaz</code> makes only a single symbol <code>barbaz</code> visible. Can C++'s <code>import</code> be instructed to operate on a limited number of external symbols, instead of importing the whole contents of the module?</p>
</li>
<li><p>Finally, <code>from foobar import barbaz as qux</code> imports a symbol and makes it available under a different name. Can the same effect be achieved for C++ imports?</p>
</li>
</ol>
|
<python><c++><import><c++20>
|
2024-02-18 22:01:47
| 1
| 2,418
|
Grigory Rechistov
|
78,017,556
| 11,280,068
|
python os.makedirs not working with fastapi app
|
<p>I'm having a problem where in my fastapi app, when I run
<code>os.makedirs(dir, exist_ok=True)</code> no dirs get created. When I run just this command alone in a new file, it works, but in my fastapi app, it doesn't work.</p>
<p>I have made sure that my file permissions are okay, and that the logic is being accessed in the app itself.</p>
<p>Why isn't it working?</p>
|
<python><python-3.x><docker><operating-system><fastapi>
|
2024-02-18 21:56:47
| 1
| 1,194
|
NFeruch - FreePalestine
|
78,017,555
| 5,423,080
|
NumPy create a 3D array using other 2 3D arrays and a 1D array to discriminate
|
<p>I have 2 3D numpy arrays:</p>
<pre><code>a = np.array([[[1, 2, 3],
[4, 5, 6]],
[[7, 8, 9],
[10, 11, 12]],
[[13, 14, 15],
[16, 17, 18]]])
b = a + 100
</code></pre>
<p>and 1 1D array:</p>
<pre><code>c = np.array([0, 1, 0])
</code></pre>
<p>I want to create another 3D array which elements are from <code>a</code> and <code>b</code> based on <code>c</code>, i.e. if <code>c</code> is <code>0</code> take from <code>a</code>, if <code>1</code> take from <code>b</code>.</p>
<p>The result should be:</p>
<pre><code>array([[[ 1, 2, 3],
[ 4, 5, 6]],
[[107, 108, 109],
[110, 111, 112]],
[[ 13, 14, 15],
[ 16, 17, 18]]])
</code></pre>
<p>As I want to use just <code>numpy</code> and nothing else, I was trying with <code>np.where</code>, but the result is not what I want:</p>
<pre><code>>>> np.where(c==0, a, b)
array([[[ 1, 102, 3],
[ 4, 105, 6]],
[[ 7, 108, 9],
[ 10, 111, 12]],
[[ 13, 114, 15],
[ 16, 117, 18]]])
</code></pre>
<p>Any suggestion?</p>
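<p>The condition in <code>np.where</code> broadcasts element-wise, so <code>c</code> (shape <code>(3,)</code>) is being matched against the last axis. Reshaping it to <code>(3, 1, 1)</code> makes each entry select a whole 2×3 block:</p>

```python
import numpy as np

a = np.arange(1, 19).reshape(3, 2, 3)  # same values as in the question
b = a + 100
c = np.array([0, 1, 0])

# c[:, None, None] has shape (3, 1, 1) and broadcasts over each block
out = np.where(c[:, None, None] == 0, a, b)
print(out[1])  # the middle block comes entirely from b
```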
|
<python><numpy>
|
2024-02-18 21:56:32
| 1
| 412
|
cicciodevoto
|
78,017,540
| 2,905,108
|
Tkinter image doesn't display without matplotlib
|
<p>My code generates an image as a numpy array, and then displays it in a canvas object. I use matplotlib for debugging and when when display the matplotlib window, it seems to work correctly.</p>
<p>However, when I close the matplotlib window, the image object is cleared and becomes fully white.</p>
<p>You can see with both windows open the canvas is correctly displayed. However when closing the matplotlib window, it becomes fully white.</p>
<p>Note: I've tried completely removing the matplotlib calls, and now the image is always white and never displays anything.</p>
<p><a href="https://i.sstatic.net/dYKos.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dYKos.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/4plwn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4plwn.png" alt="enter image description here" /></a></p>
<pre><code>from skimage.draw import polygon
from PIL import Image, ImageTk
import matplotlib.pyplot as plt
import tkinter as tk
import numpy as np
def generate():
# clear canvas
canvas.delete("all")
# make clean array
img = np.ones((width, height, 3), dtype=np.float32) * 0.75
# randomly generate positions
x_generated = np.random.uniform(0, width, 5)
y_generated = np.random.uniform(0, height, 5)
for x, y in zip(x_generated, y_generated):
base_height = 70
base_width = 70
# generate the corners
x0 = x - base_width/2
x1 = x + base_width/2
y0 = y - base_height/2
y1 = y + base_height/2
# create rectangle
poly = np.array((
(x0, y0),
(x1, y0),
(x1, y1),
(x0, y1),
))
rr, cc = polygon(poly[:, 0], poly[:, 1], img.shape)
img[rr, cc, :] = 0.5
# # canvas method. I don't want to use this!
# canvas.create_rectangle(x0, y0, x1, y1, fill="red", outline="")
# display generated image in tkinter canvas
scaled_to_8bit = img * 255
newarr = scaled_to_8bit.astype(np.uint8)
imgarray = ImageTk.PhotoImage(image=Image.fromarray(newarr, 'RGB'))
# create the canvas image object from the numpy array
canvas.create_image(20, 20, anchor="nw", image=imgarray)
# display pixel data for debugging
plt.imshow(img)
plt.show()
if __name__ == "__main__":
width = 1000
height = 1000
# Create a root window
root = tk.Tk()
root.configure(background="green")
# Create a canvas widget
canvas = tk.Canvas(root, width=width, height=height)
canvas.pack()
# Create a button widget
button = tk.Button(root, text="Generate", command=generate)
button.pack()
# start main tk loop
root.mainloop()
</code></pre>
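<p>A likely culprit: <code>imgarray</code> is a local variable, so the <code>PhotoImage</code> is garbage-collected as soon as <code>generate()</code> returns and the canvas goes blank; the matplotlib window merely delays this long enough to mask it. The usual fix is to keep a reference on a long-lived object, e.g. <code>canvas.image = imgarray</code> right after <code>create_image</code>. A small Tk-free sketch of the underlying mechanism (the class is a stand-in for <code>ImageTk.PhotoImage</code>):</p>

```python
import gc
import weakref

class FakePhotoImage:
    """Stand-in for ImageTk.PhotoImage."""

def make_canvas_image():
    img = FakePhotoImage()
    # the PhotoImage's lifetime is governed by Python references alone;
    # only a weak handle survives the function returning
    return weakref.ref(img)

ref = make_canvas_image()
gc.collect()
print(ref() is None)  # True: the image died when the function returned
```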
|
<python><matplotlib><tkinter><tkinter-canvas>
|
2024-02-18 21:52:08
| 1
| 487
|
Gordon13
|
78,017,384
| 2,905,108
|
Image is mangled when displaying numpy array image in tkinter
|
<p>I'm trying to display an image, procedurally generated by my code, into a canvas object. I don't want to use the canvas functions to generate these images because there's no good way (the screenshot and ghostcript methods are no good) to get pixel data out of the canvas.</p>
<p>I looked around for ways to generate the image pixels and settled on scikit-image, and then you can load the array as data in an image object in tkinter.</p>
<p>On the left is the pixel data displayed with matplotlib, on the right is the tkinter canvas with the same data displayed as a PhotoImage.</p>
<p>It's clear something is wrong.</p>
<p><a href="https://i.sstatic.net/iuzkv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iuzkv.png" alt="enter image description here" /></a></p>
<p>This is my code:</p>
<pre><code>from skimage.draw import polygon
from PIL import Image, ImageTk
import matplotlib.pyplot as plt
import tkinter as tk
import numpy as np
def generate():
# clear canvas
canvas.delete("all")
# make clean array
img = np.ones((width, height, 3), dtype=np.double) * 0.75
# randomly generate positions
x_generated = np.random.uniform(0, width, 5)
y_generated = np.random.uniform(0, height, 5)
for x, y in zip(x_generated, y_generated):
base_height = 70
base_width = 70
# generate the corners
x0 = x - base_width/2
x1 = x + base_width/2
y0 = y - base_height/2
y1 = y + base_height/2
# create rectangle
poly = np.array((
(x0, y0),
(x1, y0),
(x1, y1),
(x0, y1),
))
rr, cc = polygon(poly[:, 0], poly[:, 1], img.shape)
img[rr, cc, :] = 0.5
# # canvas method. I don't want to use this!
# canvas.create_rectangle(x0, y0, x1, y1, fill="red", outline="")
# display generated image in tkinter canvas
scaled_to_8bit = img * 255
imgarray = ImageTk.PhotoImage(image=Image.fromarray(scaled_to_8bit, 'RGB'))
# create the canvas image object from the numpy array
canvas.create_image(20, 20, anchor="nw", image=imgarray)
# display pixel data for debugging
plt.imshow(img)
plt.show()
if __name__ == "__main__":
width = 1000
height = 1000
# Create a root window
root = tk.Tk()
root.configure(background="green")
# Create a canvas widget
canvas = tk.Canvas(root, width=width, height=height)
canvas.pack()
# Create a button widget
button = tk.Button(root, text="Generate", command=generate)
button.pack()
# start main tk loop
root.mainloop()
</code></pre>
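<p>A likely cause of the mangling: the array is <code>float64</code> (<code>np.double</code>), but <code>Image.fromarray(..., 'RGB')</code> expects an 8-bit buffer, so PIL reinterprets the raw float bytes as pixels. Converting to <code>uint8</code> before handing the array to PIL (a sketch on a tiny array) fixes the data layout:</p>

```python
import numpy as np
from PIL import Image

img = np.ones((4, 4, 3), dtype=np.double) * 0.75

scaled = (img * 255).astype(np.uint8)   # 0.75 -> 191 per channel
pil_img = Image.fromarray(scaled, "RGB")
print(pil_img.size, pil_img.getpixel((0, 0)))  # (4, 4) (191, 191, 191)
```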
|
<python><numpy><tkinter><tkinter-canvas>
|
2024-02-18 20:59:59
| 1
| 487
|
Gordon13
|
78,017,366
| 5,790,653
|
python local file module not found but with cli finds module
|
<p>This is my <code>file.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import os
os.chdir('/home/python3.10')
print(os.getcwd())
print(os.system('pwd; ls'))
import template
print(template.text)
</code></pre>
<p>I have a virtual environment in <code>/home/python3.10</code>.</p>
<p>If I run file like this:</p>
<pre class="lang-bash prettyprint-override"><code>/home/python3.10/venv/bin/python3.10 /home/another_dir/file.py
</code></pre>
<p>I get this error:</p>
<pre><code>/home/python3.10
/home/python3.10
__pycache__ file1.txt requirements.txt template.py test.py venv
0
Traceback (most recent call last):
File "/home/another_dir/file.py", line 5, in <module>
import template
ModuleNotFoundError: No module named 'template'
</code></pre>
<p>But if I run <code>/home/python3.10/venv/bin/python3.10</code> in the shell, then I run all of above lines, it works:</p>
<pre><code>Python 3.10.13 (main, Aug 25 2023, 13:20:03) [GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>>
>>> os.chdir('/home/python3.10')
>>> print(os.getcwd())
/home/python3.10
>>> print(os.system('pwd; ls'))
/home/python3.10
__pycache__ file1.txt requirements.txt template.py test.py venv
0
>>> import template
>>> print(template.text)
{text}
</code></pre>
<p>What's the reason I have this error? I see that <code>getcwd()</code> is the correct directory, but I'm not sure why it cannot see <code>template.py</code> to import it as a module.</p>
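<p>When a script is run by path, <code>sys.path[0]</code> is set to the <em>script's</em> directory (<code>/home/another_dir</code> here), and a later <code>os.chdir</code> does not touch <code>sys.path</code>; the interactive interpreter instead puts the current directory on <code>sys.path</code>, which is why the REPL works. Adding the directory explicitly fixes the import — a runnable sketch using a throwaway directory standing in for <code>/home/python3.10</code>:</p>

```python
import os
import sys
import tempfile

proj = tempfile.mkdtemp()  # stands in for /home/python3.10
with open(os.path.join(proj, "template.py"), "w") as f:
    f.write("text = 'hello'\n")

os.chdir(proj)            # changes the cwd, but NOT sys.path
sys.path.insert(0, proj)  # this is what actually enables the import

import template
print(template.text)  # hello
```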
|
<python>
|
2024-02-18 20:54:32
| 1
| 4,175
|
Saeed
|
78,017,170
| 5,864,582
|
Repeated calls to the same _target_ with different parameter combinations using hydra
|
<p>I have the following python class:</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
@classmethod
def from_config(cls, configs: dict):
calculation = hydra.utils.instantiate(configs)
return calculation
def __call__(self, a, b):
return a + b
</code></pre>
<p>I want to create objects of the class and call them with different <code>a</code> and <code>b</code> values.</p>
<p>The current solution is as follows:</p>
<p><code>calculation.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>- _target_: mymodule.MyClass
a: 1
b: 2
- _target_: mymodule.MyClass
a: 3
b: 4
</code></pre>
<p><code>main.py</code>:</p>
<pre class="lang-py prettyprint-override"><code> calc_cfgs = cfg.calculation
for calc_cfg in calc_cfgs:
calculation = mymodule.MyClass.from_config(configs=calc_cfg)
calculated_output = calculation(
            a=calc_cfg['a'],
            b=calc_cfg['b']
)
</code></pre>
<p>This approach works fine, but the downside is that the <code>calculation.yaml</code> file gets too long when the user wants to experiment with a lot of combinations of <code>a</code> and <code>b</code>.</p>
<p>Is there a better way to do it?</p>
<p>PS: Maybe something like this:</p>
<pre class="lang-yaml prettyprint-override"><code>- _target_: mymodule.MyClass
a: SOME_KEYWORD[1, 2]
b: SOME_KEYWORD[3, 4]
</code></pre>
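<p>One possibility (a plain-Python sketch, not hydra's built-in multirun sweeper, which can also sweep <code>a=1,3 b=2,4</code> from the command line) is to keep only the value lists in the YAML and expand the combinations in code:</p>

```python
import itertools

# Hypothetical config: value lists instead of one YAML entry per combination.
cfg = {"a": [1, 3], "b": [2, 4]}

# Expand the Cartesian product; each (a, b) pair stands in for one MyClass call.
for a, b in itertools.product(cfg["a"], cfg["b"]):
    print(a + b)   # -> 3, 5, 5, 7
```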
|
<python><fb-hydra>
|
2024-02-18 19:42:35
| 1
| 5,736
|
akilat90
|
78,017,078
| 1,440,839
|
Using xarray with resample and interpolate to animate the movement of contours
|
<p>Let's say I have the following Python code, showing a contour value at 12pm and 1pm, indicating that its position has moved:</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
%matplotlib widget
ani = None
# Create the data array for the first time point
data_12 = np.zeros((1, 6, 6)) # Creating a 3D array with time dimension (1), and size 6x6
data_12[0, 1, 1] = 6 # Setting the value at 1x1 location to 6
# Create the xarray dataset for the first time point
ds_12 = xr.Dataset(
{
"value": (["time", "x", "y"], data_12)
},
coords={
"time": [np.datetime64("2024-01-01T12:00")],
"x": np.arange(6),
"y": np.arange(6),
},
)
# Create the data array for the second time point
data_13 = np.zeros((1, 6, 6))
data_13[0, 0, 0] = 0
data_13[0, 4, 4] = 6
ds_13 = xr.Dataset(
{
"value": (["time", "x", "y"], data_13)
},
coords={
"time": [np.datetime64("2024-01-01T13:00")],
"x": np.arange(6),
"y": np.arange(6),
},
)
ds_concat = xr.concat([ds_12, ds_13], dim="time")
ds_interp = ds_concat.resample(time="1min").interpolate("linear")
fig, ax = plt.subplots()
ax.set_xlim(0, 5)
ax.set_ylim(0, 5)
data_selected = ds_interp.isel(time=1)
cont = ax.contourf(data_selected["x"], data_selected["y"],data_selected["value"], cmap='viridis')
def animatehere(i):
global cont
data_selected = ds_interp.isel(time=i)
cont = ax.contourf(data_selected["x"], data_selected["y"],data_selected["value"], cmap='viridis')
return cont
def init():
pass
ani = FuncAnimation(fig,animatehere,init_func=init, frames=60,interval=30, repeat=False)
ani.save('animation.gif', writer='pillow')
plt.show()
</code></pre>
<p>This creates the following output.</p>
<p><a href="https://i.sstatic.net/t5rk3.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t5rk3.gif" alt="enter image description here" /></a></p>
<p>I want to know how I can interpolate this data so that it shows the contour moving from point to point rather than it simply disappearing in one location and re-appearing in another. In this example the intensity does not change, it's 6 in one location and 6 in the other but any solution should also be able to handle the intensity changing over time.</p>
<p>Can someone give me an example of how this might work or point me in the right direction?</p>
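<p>For reference, one direction that might work (an untested sketch assuming a single feature whose peak location is known, and requiring <code>scipy</code>): interpolate the feature's <em>position</em> with <code>scipy.ndimage.shift</code> instead of interpolating the values in place, and blend the peak intensity separately:</p>

```python
import numpy as np
from scipy import ndimage

# The two fields from the question: a peak of 6 at (1, 1) and at (4, 4).
start = np.zeros((6, 6)); start[1, 1] = 6.0
end   = np.zeros((6, 6)); end[4, 4]   = 6.0

def frame(t):
    """Field at fraction t between the two times (t in [0, 1])."""
    x0, y0 = np.unravel_index(start.argmax(), start.shape)
    x1, y1 = np.unravel_index(end.argmax(), end.shape)
    # Move the field a fraction t of the way toward the end position:
    shifted = ndimage.shift(start, ((x1 - x0) * t, (y1 - y0) * t), order=1)
    # Blend the peak intensity linearly as well:
    peak = (1 - t) * start.max() + t * end.max()
    return shifted * peak / max(shifted.max(), 1e-12)

mid = frame(0.5)
print(np.unravel_index(mid.argmax(), mid.shape))   # peak lies between the two
```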
|
<python><numpy><matplotlib><scipy><python-xarray>
|
2024-02-18 19:15:06
| 2
| 401
|
John
|
78,017,053
| 9,983,652
|
\\langchain\\agents\\agent_toolkits' is not in the subpath of 'site-packages\\langchain_core' OR one path is relative and the other is absolute
|
<p>I just upgraded LangChain and OpenAI using the conda installs below. Then I got the error below; any idea how to solve it? Thanks</p>
<p><a href="https://anaconda.org/conda-forge/langchain" rel="nofollow noreferrer">https://anaconda.org/conda-forge/langchain</a>
conda install conda-forge::langchain</p>
<p><a href="https://anaconda.org/conda-forge/openai" rel="nofollow noreferrer">https://anaconda.org/conda-forge/openai</a>
conda install conda-forge::openai</p>
<pre><code>from langchain.agents.agent_toolkits import create_python_agent
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[2], line 1
----> 1 from langchain.agents.agent_toolkits import create_python_agent
2 from langchain.tools.python.tool import PythonREPLTool
3 from langchain.llms.openai import OpenAI
File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive)
File c:\Users\yongn\miniconda3\envs\langchain_ai\lib\site-packages\langchain\agents\agent_toolkits\__init__.py:50, in __getattr__(name)
48 """Get attr name."""
49 if name in DEPRECATED_AGENTS:
---> 50 relative_path = as_import_path(Path(__file__).parent, suffix=name)
51 old_path = "langchain." + relative_path
52 new_path = "langchain_experimental." + relative_path
File c:\Users\test\miniconda3\envs\langchain_ai\lib\site-packages\langchain_core\_api\path.py:30, in as_import_path(file, suffix, relative_to)
28 if isinstance(file, str):
29 file = Path(file)
---> 30 path = get_relative_path(file, relative_to=relative_to)
31 if file.is_file():
32 path = path[: -len(file.suffix)]
File c:\Users\test\miniconda3\envs\langchain_ai\lib\site-packages\langchain_core\_api\path.py:18, in get_relative_path(file, relative_to)
16 if isinstance(file, str):
17 file = Path(file)
---> 18 return str(file.relative_to(relative_to))
File c:\Users\test\miniconda3\envs\langchain_ai\lib\pathlib.py:818, in PurePath.relative_to(self, *other)
816 if (root or drv) if n == 0 else cf(abs_parts[:n]) != cf(to_abs_parts):
817 formatted = self._format_parsed_parts(to_drv, to_root, to_parts)
--> 818 raise ValueError("{!r} is not in the subpath of {!r}"
819 " OR one path is relative and the other is absolute."
820 .format(str(self), str(formatted)))
821 return self._from_parsed_parts('', root if n == 1 else '',
822 abs_parts[n:])
ValueError: 'c:\\Users\\test\\miniconda3\\envs\\langchain_ai\\lib\\site-packages\\langchain\\agents\\agent_toolkits' is not in the subpath of 'c:\\Users\\test\\miniconda3\\envs\\langchain_ai\\lib\\site-packages\\langchain_core' OR one path is relative and the other is absolute.
</code></pre>
|
<python><langchain>
|
2024-02-18 19:09:26
| 1
| 4,338
|
roudan
|
78,016,891
| 20,771,478
|
SQL Alchemy does not execute queries
|
<p>I am using SQLAlchemy to run a query in SQL Server.
The query is supposed to create a table.
The below example is meant to be easily reproducible for you.
It does not contain the actual queries I use.</p>
<p>If I run the following code, no table gets created. No error or message of any kind is returned.
(Please assume the URL_OBJECT is created properly)</p>
<pre><code>from sqlalchemy import create_engine
engine = create_engine(URL_OBJECT, fast_executemany=True)
with engine.connect() as connection:
connection.execute("Select 1 as an_integer into database.schema.test_table")
</code></pre>
<p>--> If the table "test_table" already exists I get an error telling me that the table already exists.</p>
<p>In contrast, the following is working:</p>
<pre><code>from sqlalchemy import create_engine
engine = create_engine(URL_OBJECT, fast_executemany=True)
with engine.connect() as connection:
print(connection.execute("Select 1").fetchone())
</code></pre>
<p>Returns: (1,)</p>
<p>This one also does what it should.</p>
<pre><code>from sqlalchemy import create_engine
engine = create_engine(URL_OBJECT, fast_executemany=True)
with engine.connect() as connection:
 print(connection.execute("Select * from database.schema.table_existing_in_database").fetchone())
</code></pre>
<p>Returns: That table's first row's data.</p>
<p>Why is my table creation code not executed while the Select statements are?</p>
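<p>What seems to be happening: a connection from <code>engine.connect()</code> runs inside a transaction that is rolled back when the block ends unless it is committed, so a <code>SELECT ... INTO</code> (or any DDL/DML) silently disappears, while plain <code>SELECT</code>s still return rows. A runnable sketch using an in-memory SQLite database in place of the SQL Server URL, and <code>engine.begin()</code>, which commits on exit:</p>

```python
from sqlalchemy import create_engine, text

# In-memory SQLite stands in for the SQL Server URL from the question.
engine = create_engine("sqlite://")

# begin() opens a transaction and commits it when the block ends,
# so the table actually persists; a bare connect() would roll back.
with engine.begin() as connection:
    connection.execute(text("CREATE TABLE test_table (an_integer INTEGER)"))
    connection.execute(text("INSERT INTO test_table VALUES (1)"))

with engine.connect() as connection:
    row = connection.execute(text("SELECT an_integer FROM test_table")).fetchone()
    print(tuple(row))   # -> (1,)
```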
|
<python><sql-server><sqlalchemy>
|
2024-02-18 18:22:00
| 1
| 458
|
Merlin Nestler
|
78,016,833
| 10,037,034
|
How to handle errors and messages without raising exception? (pythonic way)
|
<p>I have a program with a lot of functions, and these functions call other functions. The functions have try/except blocks to handle errors. I have to handle these errors without raising them, because I am building an API application, and I am losing control of the error handling.</p>
<p>So I wrote the following code block. How can I solve this problem in a pythonic way?</p>
<pre><code>def second_funct(param1):
_return = True
_error_message = ""
try:
print("second_funct")
#some processing
print(1/0)
except Exception as e:
_return = False
_error_message = e
return _return, _error_message
def third_funct(param2):
_return = True
_error_message = ""
try:
print("third_funct")
#some processing
except Exception as e:
_return = False
_error_message = e
return _return, _error_message
def main():
_return = True
_error_message = ""
_param1 = 0
_param2 = 0
try:
print("main_funct")
#some processing
_return, _error_message = second_funct(_param1)
if(_return):
print("go third_funct")
_return, _error_message = third_funct(_param2)
if(_return):
#some process
print("ok")
else:
print(_error_message)
else:
print(_error_message)
except Exception as e:
print(e)
main()
</code></pre>
<p>Result of this code ;</p>
<pre><code>main_funct
second_funct
division by zero
</code></pre>
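<p>One pythonic alternative (a sketch of the general pattern, not the only option): let exceptions propagate, wrap them in a domain-specific exception, and handle them once at the boundary (e.g. the API layer) instead of threading <code>(flag, message)</code> tuples through every function:</p>

```python
class ProcessingError(Exception):
    """Domain-specific error for this pipeline."""

def second_funct(param1):
    try:
        return 1 / param1
    except ZeroDivisionError as e:
        # Wrap the low-level error in a domain exception and let it propagate.
        raise ProcessingError(f"second_funct failed: {e}") from e

def main():
    try:
        return second_funct(0)
    except ProcessingError as e:
        # One handler at the boundary replaces the per-function
        # (flag, message) plumbing.
        print(e)
        return None

main()   # prints: second_funct failed: division by zero
```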
|
<python><error-handling><try-catch>
|
2024-02-18 18:05:49
| 1
| 1,311
|
Sevval Kahraman
|
78,016,744
| 3,333,319
|
Random ConnectionError using python-socketio on the client side
|
<p>I am using <code>python-socketio</code> to connect to a third-party server using the Socket.IO protocol.
The server is out of my control, so I do not have access to server-side logs, etc.
The error below happens randomly (say half of the time), so my current workaround is to retry until the connection succeeds.</p>
<p>I have found other posts where the same exception is thrown, but in those cases it was a systematic error and the corresponding solutions do not apply to my case.</p>
<p>Any help to completely avoid the exception?</p>
<p><strong>Code</strong></p>
<pre class="lang-py prettyprint-override"><code># sio_mwe.py
from socketio import Client
sio_url = 'https://r.thewebsite.net?user=myuser&name=myname&credentials=mysecret'
sio = Client(logger=True, engineio_logger=True, reconnection_attempts=5)
sio.on('*', lambda x: print("Inside handler:", x))
sio.connect(sio_url, socketio_path='r')
print("-- CONNECTED --")
sio.emit('signal', data=f'mydata')
print("-- EMITTED --")
</code></pre>
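<p>The retry workaround currently in use can at least be factored out into a helper; a sketch (the <code>connect</code> callable stands in for a wrapper around <code>sio.connect(...)</code>, and the caught exception would be <code>socketio.exceptions.ConnectionError</code> in practice):</p>

```python
import time

def connect_with_retry(connect, attempts=5, delay=1.0):
    """Call `connect` until it stops raising; re-raise after the last attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)
```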
<p><strong>Logs</strong></p>
<p>Below you can find the logs generated when the above MWE has been repeatedly executed.</p>
<pre><code>$ python sio_mwe.py
Attempting polling connection to https://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=polling&EIO=4
Polling connection accepted with {'sid': 'ghfrdZaKRjVS2vcBACqE', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000, 'maxPayload': 1000000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=websocket&EIO=4
WebSocket upgrade was successful
Received packet MESSAGE data 0{"sid":"7TWzrBV3ehhR212PACqG"}
Namespace / is connected
-- CONNECTED --
Emitting event "signal" [/]
Sending packet MESSAGE data 2["signal","mydata"]
-- EMITTED --
$ python sio_mwe.py
Attempting polling connection to https://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=polling&EIO=4
Polling connection accepted with {'sid': 'yiL2tFH4gy4FjExvACqI', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000, 'maxPayload': 1000000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=websocket&EIO=4
WebSocket upgrade was successful
Received packet MESSAGE data 0{"sid":"UpcoKLNq0X_ifz7oACqJ"}
Namespace / is connected
-- CONNECTED --
Emitting event "signal" [/]
Sending packet MESSAGE data 2["signal","mydata"]
-- EMITTED --
$ python sio_mwe.py
Attempting polling connection to https://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=polling&EIO=4
Polling connection accepted with {'sid': 'qmWjIe407V-92_7_ACqK', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000, 'maxPayload': 1000000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=websocket&EIO=4
WebSocket upgrade was successful
WebSocket connection was closed, aborting
Waiting for write loop task to end
Exiting write loop task
Engine.IO connection dropped
Connection failed, new attempt in 1.45 seconds
Exiting read loop task
Traceback (most recent call last):
File "**OMISSIS**/sio_mwe.py", line 39, in <module>
sio.connect(sio_url, socketio_path='r')
File "**OMISSIS**/python3.10/site-packages/socketio/client.py", line 163, in connect
raise exceptions.ConnectionError(
socketio.exceptions.ConnectionError: One or more namespaces failed to connect
$ python sio_mwe.py
Attempting polling connection to https://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=polling&EIO=4
Polling connection accepted with {'sid': 'bsU8bGZDK3C-dLEYACqS', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000, 'maxPayload': 1000000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=websocket&EIO=4
WebSocket upgrade was successful
WebSocket connection was closed, aborting
Waiting for write loop task to end
Exiting write loop task
Engine.IO connection dropped
Connection failed, new attempt in 1.33 seconds
Exiting read loop task
Traceback (most recent call last):
File "**OMISSIS**/sio_mwe.py", line 39, in <module>
sio.connect(sio_url, socketio_path='r')
File "**OMISSIS**/python3.10/site-packages/socketio/client.py", line 163, in connect
raise exceptions.ConnectionError(
socketio.exceptions.ConnectionError: One or more namespaces failed to connect
$ python sio_mwe.py
Attempting polling connection to https://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=polling&EIO=4
Polling connection accepted with {'sid': '6TqMXBDJspZBe92YACqY', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000, 'maxPayload': 1000000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=websocket&EIO=4
WebSocket upgrade was successful
Received packet MESSAGE data 0{"sid":"P10loKi4B23X79xqACqZ"}
Namespace / is connected
-- CONNECTED --
Emitting event "signal" [/]
Sending packet MESSAGE data 2["signal","mydata"]
-- EMITTED --
$ python sio_mwe.py
Attempting polling connection to https://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=polling&EIO=4
Polling connection accepted with {'sid': 'AmSCvIsjdRpnsdHwACqc', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000, 'maxPayload': 1000000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=websocket&EIO=4
WebSocket upgrade was successful
Received packet MESSAGE data 0{"sid":"veuDxr2k5dLLfX20ACqd"}
Namespace / is connected
-- CONNECTED --
Emitting event "signal" [/]
Sending packet MESSAGE data 2["signal","mydata"]
-- EMITTED --
$ python sio_mwe.py
Attempting polling connection to https://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=polling&EIO=4
Polling connection accepted with {'sid': '2tiYT9ZTqRiumIv4ACqe', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000, 'maxPayload': 1000000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=websocket&EIO=4
WebSocket upgrade was successful
WebSocket connection was closed, aborting
Waiting for write loop task to end
Exiting write loop task
Engine.IO connection dropped
Connection failed, new attempt in 0.85 seconds
Exiting read loop task
Attempting polling connection to https://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=polling&EIO=4
Polling connection accepted with {'sid': 'PAOF0QWb1rfW0f9-ACqg', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000, 'maxPayload': 1000000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=websocket&EIO=4
Sending packet CLOSE data None
Engine.IO connection dropped
Traceback (most recent call last):
File "**OMISSIS**/sio_mwe.py", line 39, in <module>
sio.connect(sio_url, socketio_path='r')
File "**OMISSIS**/python3.10/site-packages/socketio/client.py", line 163, in connect
raise exceptions.ConnectionError(
socketio.exceptions.ConnectionError: One or more namespaces failed to connect
$ python sio_mwe.py
Attempting polling connection to https://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=polling&EIO=4
Polling connection accepted with {'sid': 'H-81bc5akux_oi7MACqq', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000, 'maxPayload': 1000000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=websocket&EIO=4
WebSocket upgrade was successful
WebSocket connection was closed, aborting
Waiting for write loop task to end
Exiting write loop task
Engine.IO connection dropped
Connection failed, new attempt in 1.44 seconds
Exiting read loop task
Traceback (most recent call last):
File "**OMISSIS**/sio_mwe.py", line 39, in <module>
sio.connect(sio_url, socketio_path='r')
File "**OMISSIS**/python3.10/site-packages/socketio/client.py", line 163, in connect
raise exceptions.ConnectionError(
socketio.exceptions.ConnectionError: One or more namespaces failed to connect
$ python sio_mwe.py
Attempting polling connection to https://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=polling&EIO=4
Polling connection accepted with {'sid': 'vacxuBJMCJrBZq7xACqs', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000, 'maxPayload': 1000000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=websocket&EIO=4
WebSocket upgrade was successful
WebSocket connection was closed, aborting
Waiting for write loop task to end
Exiting write loop task
Engine.IO connection dropped
Connection failed, new attempt in 0.55 seconds
Exiting read loop task
Attempting polling connection to https://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=polling&EIO=4
Polling connection accepted with {'sid': '37uV5uncNZ69q-CJACqx', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000, 'maxPayload': 1000000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://r.thewebsite.net/r/?user=myuser&name=myname&credentials=mysecret&transport=websocket&EIO=4
WebSocket upgrade was successful
Received packet MESSAGE data 0{"sid":"pvYNYRBEBN0qVIDWACqy"}
Namespace / is connected
-- CONNECTED --
Reconnection successful
Emitting event "signal" [/]
Sending packet MESSAGE data 2["signal","mydata"]
-- EMITTED --
</code></pre>
<p><strong>Versions</strong></p>
<p>Client:</p>
<ul>
<li>Python: 3.10</li>
<li>python-engineio: 4.9.0</li>
<li>python-socketio: 5.11.1</li>
</ul>
<p>Server:</p>
<ul>
<li>SocketIO: 4.5.0</li>
</ul>
|
<python><socket.io><connection><python-socketio>
|
2024-02-18 17:34:56
| 0
| 973
|
Sirion
|
78,016,464
| 17,343
|
Performance of regex alternation
|
<p>The actual regex is more complex, but I am surprised by the cost of adding a second word:</p>
<pre><code>import timeit
SETUP = "import re; STRING = ' ' * 100000"
print(min(timeit.Timer("re.findall(r'(foo)' , STRING)", SETUP).repeat(10, 1000)))
print(min(timeit.Timer("re.findall(r'(foo|bar)' , STRING)", SETUP).repeat(10, 1000)))
print(min(timeit.Timer("re.findall(r'(foo|bar|qux)', STRING)", SETUP).repeat(10, 1000)))
</code></pre>
<p>The second regex is <strong>15 times slower</strong>, while the third one is <strong>faster</strong> than the second one (using <em>3.10.2</em>).</p>
<pre><code>0.012617316999239847
0.18311264598742127
0.146542029993725
</code></pre>
<p>The difference <a href="https://www.programiz.com/python-programming/online-compiler/" rel="nofollow noreferrer">here</a> (<em>3.11.8</em>) is even worse:</p>
<pre><code>0.022139030000005278
0.8239181079999867
0.6327433810000116
</code></pre>
<p>Any insight into why the second version is so much slower, even slower than the third?<br />
Is there a way to improve that speed?</p>
<p>Pre-compiling the regex and using non-capturing groups does not make much of a difference.</p>
<p>Update: <a href="https://pypi.org/project/regex/" rel="nofollow noreferrer">PyPI regex</a> is more consistent, but slower. Also a large difference between one/two words:</p>
<pre><code>0.022885548998601735
0.25088962000154424
0.2537434719997691
</code></pre>
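<p>One workaround that helps when matches are rare (as in the all-spaces benchmark): CPython's <code>re</code> appears to use a fast literal prescan for a single literal pattern but not for an alternation, so a cheap <code>in</code> membership test can skip the regex entirely. A sketch:</p>

```python
import re

def find_words(words, s):
    # Skip the (slow) alternation scan entirely when none of the
    # literals occur; str.__contains__ uses a fast substring search.
    if not any(w in s for w in words):
        return []
    return re.findall("|".join(map(re.escape, words)), s)

print(find_words(["foo", "bar"], " " * 100000))   # -> []
print(find_words(["foo", "bar"], "bar foo"))      # -> ['bar', 'foo']
```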
|
<python><python-3.x><regex>
|
2024-02-18 16:12:19
| 0
| 55,736
|
Peter Lang
|
78,016,325
| 2,707,864
|
Sympy: Get coefficients of generic functions in an expression
|
<p>I have equation (1), which linearly relates three functions of x: p, q, r.
I have equation (2), which linearly relates another three functions of x: s, q, r.
I want to obtain r in terms of p, q from (1), and then substitute r into eqn. (2) to get a relation between s, p, q.
This is what I did in <code>sympy</code>.</p>
<pre><code>import sympy as sym

x = sym.symbols('x', real=True)
B = sym.symbols('B', real=True, positive=True)

p = sym.Function('p', real=True)
q = sym.Function('q', real=True)
r = sym.Function('r', real=True)
a1, a2, a3 = sym.symbols('a1, a2, a3', real=True, positive=True)
# 1. Solve for r in terms of p, q, from Eqn. (1)
# (2*a2+1)*(a1-1)*px = (2*a2+1)*B*qx + (2*a2+1)*B*(a3-1)*rx
px = p(x)
qx = q(x)
rx = r(x)
expr_p_qr = (2*a2+1)*B*qx + (2*a2+1)*B*(a3-1)*rx
r_sol = sym.simplify(sym.solve((2*a2+1)*(a1-1)*px - expr_p_qr, rx, dict=True))
r1 = r_sol[0][rx]
print('r(x) =', r1)
# 2. Substitute r in Eqn. (2)
# sx + qx - rx = 0
s = sym.Function('s', real=True)
sx = s(x)
expr_sqr = sx + qx - rx
expr_sqrb = sym.simplify(expr_sqr.subs([(rx, r1)]))
print('expr_sqr:', expr_sqrb, '= 0')
</code></pre>
<p>and I get</p>
<pre><code>r(x) = (-B*q(x) + a1*p(x) - p(x))/(B*(a3 - 1))
expr_sqr: (B*(a3 - 1)*(q(x) + s(x)) + B*q(x) - a1*p(x) + p(x))/(B*(a3 - 1)) = 0
</code></pre>
<p>Now I want to get the simplified coefficients of s(x), p(x), q(x).
I couldn't manage to do so on the whole expression.</p>
<p>I also tried to do it somewhat "manually", by extracting the coefficients, and failed as well, with</p>
<pre><code>cs = expr_sqrb.coeff(sx)
cp = expr_sqrb.coeff(px)
cq = expr_sqrb.coeff(qx)
print(cs, cp, cq)
</code></pre>
<p>which gives</p>
<pre><code>0 0 0
</code></pre>
<p>How can I solve this?</p>
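<p>For reference, a sketch of one direction: <code>coeff</code> returns 0 here because the expression is a single fraction, so the linear terms are not at the top level. Taking the numerator with <code>as_numer_denom()</code> and expanding it exposes the structure (the symbol definitions below reproduce the question's final expression):</p>

```python
import sympy as sym

x = sym.symbols('x')
B, a1, a3 = sym.symbols('B a1 a3', positive=True)
p, q, s = (sym.Function(n)(x) for n in 'pqs')

# The combined expression from the question (set equal to 0):
expr = (B*(a3 - 1)*(q + s) + B*q - a1*p + p) / (B*(a3 - 1))

# Extract the numerator, expand it, then read off the coefficients:
num, _ = expr.as_numer_denom()
num = sym.expand(num)
print(num.coeff(s), num.coeff(p), num.coeff(q))
```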
<p><strong>Related</strong>:</p>
<ol>
<li><a href="https://stackoverflow.com/questions/59363674/sympy-how-to-collect-multi-variable-terms">SymPy: How to collect multi-variable terms?</a></li>
<li><a href="https://stackoverflow.com/questions/36818025/best-way-to-isolate-one-coefficient-of-a-multivariate-polynomial-in-sympy">Best way to isolate one coefficient of a multivariate polynomial in sympy</a></li>
</ol>
|
<python><sympy>
|
2024-02-18 15:31:00
| 1
| 15,820
|
sancho.s ReinstateMonicaCellio
|
78,016,158
| 11,613,489
|
Problems getting an element from a html
|
<p>I'm trying to use Selenium to (legally) scrape data from a website. I tried the code, but nothing appears as a result. I'd like to obtain the data; my attempt is shown at the bottom, after the HTML.</p>
<p>I just need to get "31 954 kr" from the html.
<a href="https://i.sstatic.net/X3uAu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X3uAu.png" alt="enter image description here" /></a></p>
<p>Full html:</p>
<pre><code><article class="c-product-tile h-full relative product-}">
<a class="c-product-tile__link" tabindex="0" title="Balance Spisebord" ng-click="$ctrl.handleClick($event)" pdp-link="c/03-102-01_10991348" outlet="$ctrl.product.type === 'outletProduct'" target="_self" href="/nb-no/mot-oss/butikker/online-outlet/produkt/c/03-102-01_10991348/">
</a>
<div class="flex h-full flex-col relative">
<div class="c-product-tile__image bg-wild-sand px-6 py-28 mb-4 relative md:px-14 md:py-32 lg:px-12 lg:py-60">
<!----><div class="c-aspect-ratio-16x9 anim-fade-in scale-" ng-if="::$ctrl.product.type !== 'legs' || !$ctrl.product.useLegImages" style="">
<tile-image base-src="/products/03-102-01_10991348.webp" template="tileImageProductTile" is-product-image="true" move-pressmode-btn=".product- .c-product-tile__image" cachebust="0" mode="pad" aspect-ratio="16x9" default-on-error-url="/layout/images/produkt-default-medium-1.png" bgcolor="transparent" description="Balance Spisebord"><!----><div class="c-tile-image" ng-if="$ctrl.readyForLoad" style="">
<!---->
<!---->
<img src="/layout/images/7x3.gif" lazy-src="https://images.bolia.com/cdn-cgi/image/background=transparent,fit=pad,width=500,format=auto,height=281,quality=81/products/03-102-01_10991348.webp?v=0" lazy-srcset="https://images.bolia.com/cdn-cgi/image/background=transparent,fit=pad,width=340,format=auto,height=191,quality=81/products/03-102-01_10991348.webp?v=0 340w,https://images.bolia.com/cdn-cgi/image/background=transparent,fit=pad,width=540,format=auto,height=303,quality=81/products/03-102-01_10991348.webp?v=0 540w" load-immediately="$ctrl.loadImmediately" sizes="153px" alt="Balance Spisebord" on-error="/layout/images/produkt-default-medium-1.png" ng-class="{'tile-image__takeover' : $ctrl.takeOver, '': $ctrl.cssClass !== undefined}" srcset="https://images.bolia.com/cdn-cgi/image/background=transparent,fit=pad,width=340,format=auto,height=191,quality=81/products/03-102-01_10991348.webp?v=0 340w,https://images.bolia.com/cdn-cgi/image/background=transparent,fit=pad,width=540,format=auto,height=303,quality=81/products/03-102-01_10991348.webp?v=0 540w">
</div><!---->
</tile-image>
</div><!---->
<div class="absolute pin my-6">
<div class="absolute pin-t pin-l">
<!---->
<!---->
</div>
<!---->
<div class="absolute pin-l pin-b pin-r flex justify-between">
<!---->
<div class="pl-2 pr-4 w-full flex flex-wrap-reverse flex-col justify-self-end justify-end">
<!---->
<!----><!----><!---->
</div>
</div>
</div>
</div>
<!----><div ng-if="!$ctrl.noPrice" class="" style="">
<!---->
<!----><div ng-if="::$ctrl.product.type === 'outletProduct'" class="mb-4">
<p class="c-product-tile__title c-text-caption m-0 font-bold">
<span class="c-text-caption" ng-bind="$ctrl.product.title">Balance Spisebord</span>
</p>
<p class="c-text-caption truncate m-0 hidden md:block" ng-bind="::$ctrl.product.designInformation">Brun marmor</p>
<p class="c-text-caption m-0" ng-bind="::$ctrl.product.details">God stand</p>
</div><!---->
<!----><div ng-if="::$ctrl.showFromPrice() || $ctrl.showSalesPrice()" class="m-0 flex flex-wrap items-baseline">
<!----><p ng-if="!$ctrl.isMyBoliaBoostDiscount()" class="flex flex-wrap c-text-caption m-0">
<!----><span ng-if="$ctrl.showSalesPrice()" class="flex flex-wrap items-baseline">
<!----><span ng-if="$ctrl.showListPrice()" class="mr-3 font-bold" ng-class="{'font-bold': $ctrl.showListPrice()}" ng-bind="::$ctrl.product.salesPrice.amount">31&nbsp;954 kr.</span><!---->
<span ng-class="{'line-through': $ctrl.showListPrice()}" class="mr-3 line-through" ng-bind="::$ctrl.product.listPrice.amount">63&nbsp;909 kr.</span>
<!----><span ng-if="$ctrl.showDiscountText()" ng-style="::$ctrl.splashLabelService.labelStyle()" class="c-text-caption mr-3 flex items-center my-0 px-2 py-1 bg-brandy" style="background: rgb(225, 221, 212); color: rgb(0, 0, 0);">
Spar 50%
</span><!---->
</span><!---->
<!---->
</p><!---->
<!---->
<!---->
</div><!---->
<!---->
<!---->
<!---->
</div><!---->
<!---->
</div>
</article>
</code></pre>
<p>My python code (selenium)</p>
<pre><code> from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
driver.get("wwww.scrapedatafromhere.com")
element = WebDriverWait(driver, 10).until(
EC.visibility_of_element_located((By.XPATH, "(//article[contains(@class, 'c-product-tile')]//p[contains(@class, 'c-product-tile__title')]/span)[1]"))
)
text = element.text
print("Text:", text)
</code></pre>
<p>The result is <em>empty</em> (no error).
What am I doing wrong here?</p>
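<p>For what it's worth, the XPath in the code targets the <em>title</em> span (<code>c-product-tile__title</code>), while the price lives in the span bound to <code>salesPrice.amount</code>; in Selenium, a locator like <code>//span[contains(@ng-bind,'salesPrice.amount')]</code> would be needed instead (untested against the live site). A standalone sketch against the trimmed markup, noting the non-breaking space inside the price:</p>

```python
import xml.etree.ElementTree as ET

# The product-tile markup from the question, trimmed to the price span.
snippet = (
    '<article class="c-product-tile">'
    '<span class="mr-3 font-bold" '
    'ng-bind="::$ctrl.product.salesPrice.amount">31\u00a0954 kr.</span>'
    '</article>'
)
root = ET.fromstring(snippet)
# Select by the ng-bind attribute that carries the sales price:
price = root.find('.//span[@ng-bind="::$ctrl.product.salesPrice.amount"]').text
print(price.replace('\u00a0', ' '))   # -> 31 954 kr.
```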
|
<python><selenium-webdriver><web-scraping>
|
2024-02-18 14:34:46
| 2
| 642
|
Lorenzo Castagno
|
78,016,050
| 2,743,931
|
Python loc multiindex series with repeated keys
|
<p>I have a multi-index Series defined like this:</p>
<pre><code>s = pd.Series([1, 2, 3, 4, 5, 6],
index=pd.MultiIndex.from_product([["A", "B"], ["c", "d", "e"]]))
s
</code></pre>
<pre><code>A c 1
d 2
e 3
B c 4
d 5
e 6
dtype: int64
</code></pre>
<p>I want to evaluate this series at two lists:</p>
<pre><code>l1 = ['A', 'B', 'A']
l2 = ['c', 'd', 'c']
</code></pre>
<p>How can I do it pythonically without a for loop, e.g. without</p>
<pre><code>[s.loc[x, y] for x, y in zip(l1, l2)]
</code></pre>
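<p>One vectorized possibility (a sketch): indexing <code>.loc</code> with a list of (level-0, level-1) tuples keeps repeated keys in the result, unlike a scalar lookup:</p>

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5, 6],
              index=pd.MultiIndex.from_product([["A", "B"], ["c", "d", "e"]]))

l1 = ['A', 'B', 'A']
l2 = ['c', 'd', 'c']

# A list of tuples selects the rows in order, duplicates included:
result = s.loc[list(zip(l1, l2))]
print(result.tolist())   # -> [1, 5, 1]
```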
|
<python><pandas>
|
2024-02-18 14:05:40
| 3
| 312
|
user2743931
|
78,016,047
| 1,754,307
|
Unable to unprotect worksheet in Xlwings
|
<p>I am running xlwings on Windows. When my code hits the unprotection of the sheet, it just sits there: no exceptions, no popups in Excel, just hanging. I can manually remove the protection; the password is fine.</p>
<pre><code> with xw.App() as app:
wb = app.books.open(local_filename)
alerts = wb.api.Application.DisplayAlerts
wb.api.Application.DisplayAlerts = False
ws_data = wb.sheets["Data Input (Do not rename)"]
if wb.api.ProtectStructure:
wb.api.Unprotect(Password="xxx")
if ws_data.api.ProtectContents:
ws_data.api.unprotect(Password="xxx") # hangs here forever
</code></pre>
|
<python><xlwings>
|
2024-02-18 14:04:31
| 0
| 3,082
|
smackenzie
|
78,015,891
| 23,219,369
|
How to fix 4 arc issue when converting pdf to dxf using inkscape?
|
<p>I'm trying to convert a PDF file to a DXF file using the <code>inkscape</code> command.
I converted the PDF to an SVG file first and then converted that to DXF.</p>
<p>This is the <code>inkscape</code> command I've used:</p>
<pre><code>inkscape input.svg --export-type=dxf --export-extension=org.ekips.output.dxf_outlines -o output.dxf
</code></pre>
<p>As you can see, instead of one perfect circle, I got a circle made up of 4 arcs.</p>
<p><img src="https://i.sstatic.net/Mae7G.png" alt="circles" /></p>
<p>I want a smooth, single-component circle. How can I fix this issue?</p>
|
<python><pdf><svg><inkscape><dxf>
|
2024-02-18 13:17:52
| 1
| 834
|
Temunel
|
78,015,804
| 3,943,162
|
How to use StreamlitCallbackHandler with Langgraph?
|
<p>I'm trying to use Streamlit to show the final output to the user together with the step-by-step thoughts. For this, I'm using StreamlitCallbackHandler, copied from the <a href="https://github.com/langchain-ai/streamlit-agent/blob/main/streamlit_agent/mrkl_demo.py" rel="nofollow noreferrer">MRKL example</a>.</p>
<p>However, it is not working, and the error messages are unclear to me.</p>
<p>The only other reference that I found about the problem is the <a href="https://github.com/langchain-ai/langgraph/issues/101" rel="nofollow noreferrer">GitHub issue</a>, but as pointed out in the comment, the example code is not using langgraph, so I'm unsure the problem is the same, even with very similar error messages.</p>
<p>This is the relevant part of my code (pretty much the same as the <a href="https://github.com/langchain-ai/langgraph/blob/main/examples/multi_agent/multi-agent-collaboration.ipynb" rel="nofollow noreferrer">multi-agent collaboration example</a> inside the Streamlit container from the MRKL example):</p>
<pre class="lang-py prettyprint-override"><code>#...Other Langgraph code from the example
workflow = StateGraph(AgentState)
workflow.add_node("Researcher", research_node)
workflow.add_node("Chart Generator", chart_node)
workflow.add_node("call_tool", tool_node)
workflow.add_conditional_edges(
"Researcher",
router,
{"continue": "Chart Generator", "call_tool": "call_tool", "end": END},
)
workflow.add_conditional_edges(
"Chart Generator",
router,
{"continue": "Researcher", "call_tool": "call_tool", "end": END},
)
workflow.add_conditional_edges(
"call_tool",
# Each agent node updates the 'sender' field
# the tool calling node does not, meaning
# this edge will route back to the original agent
# who invoked the tool
lambda x: x["sender"],
{
"Researcher": "Researcher",
"Chart Generator": "Chart Generator",
},
)
workflow.set_entry_point("Researcher")
graph = workflow.compile()
#... Other Streamlit configurations
with st.form(key="form"):
user_input = st.text_input("Define the task")
submit_clicked = st.form_submit_button("Execute")
output_container = st.empty()
if with_clear_container(submit_clicked):
output_container = output_container.container()
output_container.chat_message("user").write(user_input)
answer_container = output_container.chat_message("assistant", avatar="🦜")
st_callback = StreamlitCallbackHandler(answer_container)
cfg = RunnableConfig()
cfg["callbacks"] = [st_callback]
cfg["recursion_limit"] = 100
answer = graph.invoke({
"messages": [
HumanMessage(
content=user_input
)
],
}, cfg)
answer_container.write(answer["content"])
</code></pre>
<p>Error messages:</p>
<pre><code>2024-02-18 13:30:17.030 Thread 'ThreadPoolExecutor-5_0': missing ScriptRunContext
Error in StreamlitCallbackHandler.on_llm_start callback: NoSessionContext()
Error in StreamlitCallbackHandler.on_llm_end callback: RuntimeError('Current LLMThought is unexpectedly None!')
Error in StreamlitCallbackHandler.on_tool_end callback: RuntimeError('Current LLMThought is unexpectedly None!')
2024-02-18 13:30:18.630 Thread 'ThreadPoolExecutor-5_0': missing ScriptRunContext
Error in StreamlitCallbackHandler.on_llm_start callback: NoSessionContext()
Error in StreamlitCallbackHandler.on_llm_end callback: RuntimeError('Current LLMThought is unexpectedly None!')
</code></pre>
<p>Any tip on how to use StreamlitCallbackHandler with Langgraph?</p>
|
<python><streamlit><py-langchain><langgraph>
|
2024-02-18 12:51:02
| 1
| 1,789
|
James
|
78,015,787
| 9,546,656
|
AttributeError: 'Collection' object has no attribute '__pydantic_private__'. Did you mean: '__pydantic_complete__'?
|
<p>I am trying to create a PDF chat using an open-source LLM. I get the below error when I run ingest.py:</p>
<pre><code>(my_venv) PS C:\GithubRepository\Chat-with-PDF-Chatbot> python ingest.py
bill.pdf
splitting into chunks
Loading sentence transformers model
C:\GithubRepository\Chat-with-PDF-Chatbot\my_venv\lib\site-packages\bitsandbytes\cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
'NoneType' object has no attribute 'cadam32bit_grad_fp32'
Creating embeddings. May take some minutes...
Traceback (most recent call last):
File "C:\GithubRepository\Chat-with-PDF-Chatbot\ingest.py", line 33, in <module>
main()
File "C:\GithubRepository\Chat-with-PDF-Chatbot\ingest.py", line 26, in main
db = Chroma.from_documents(texts, embeddings, persist_directory="db", client_settings=CHROMA_SETTINGS)
File "C:\GithubRepository\Chat-with-PDF-Chatbot\my_venv\lib\site-packages\langchain\vectorstores\chroma.py", line 613, in from_documents
return cls.from_texts(
File "C:\GithubRepository\Chat-with-PDF-Chatbot\my_venv\lib\site-packages\langchain\vectorstores\chroma.py", line 568, in from_texts
chroma_collection = cls(
File "C:\GithubRepository\Chat-with-PDF-Chatbot\my_venv\lib\site-packages\langchain\vectorstores\chroma.py", line 126, in __init__
self._collection = self._client.get_or_create_collection(
File "C:\GithubRepository\Chat-with-PDF-Chatbot\my_venv\lib\site-packages\chromadb\api\local.py", line 141, in get_or_create_collection
return self.create_collection(
File "C:\GithubRepository\Chat-with-PDF-Chatbot\my_venv\lib\site-packages\chromadb\api\local.py", line 111, in create_collection
return Collection(
File "C:\GithubRepository\Chat-with-PDF-Chatbot\my_venv\lib\site-packages\chromadb\api\models\Collection.py", line 52, in __init__
self._client = client
File "C:\GithubRepository\Chat-with-PDF-Chatbot\my_venv\lib\site-packages\pydantic\main.py", line 768, in __setattr__
if self.__pydantic_private__ is None or name not in self.__private_attributes__:
File "C:\GithubRepository\Chat-with-PDF-Chatbot\my_venv\lib\site-packages\pydantic\main.py", line 756, in __getattr__
return super().__getattribute__(item) # Raises AttributeError if appropriate
AttributeError: 'Collection' object has no attribute '__pydantic_private__'. Did you mean: '__pydantic_complete__'?
(my_venv) PS C:\GithubRepository\Chat-with-PDF-Chatbot>
</code></pre>
<p>I tried installing different versions of pydantic and pydantic_settings with chromadb version 0.3.26, but nothing worked. I would like to know the compatible versions of pydantic and pydantic_settings. I am using Python 3.10.</p>
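For reference, chromadb 0.3.x predates pydantic v2, and the missing <code>__pydantic_private__</code> attribute is characteristic of v1-style models being loaded under pydantic 2.x. One commonly suggested direction is pinning pydantic below 2 alongside that chromadb release; treat the exact pins below as assumptions to verify against your environment:

```text
# requirements.txt fragment (versions are assumptions, not verified pins)
chromadb==0.3.26
pydantic<2
```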
|
<python><pydantic><large-language-model><chromadb><pydantic-settings>
|
2024-02-18 12:46:26
| 2
| 359
|
Akash
|
78,015,763
| 6,282,576
|
MongoEngine upsert and append to ListField of IntChoiceField (append to list if record exists, create otherwise)
|
<p>I have the following definition for an IntChoiceField:</p>
<pre class="lang-py prettyprint-override"><code>from mongoengine import IntField
class IntChoiceField(IntField):
def __init__(self, min_value=None, max_value=None, choices: list = None, **kwargs):
super().__init__(min_value, max_value, **kwargs)
self.choices = self.validate_choices(choices=choices)
def validate_choices(self, choices: list):
try:
choice_map = dict(choices)
choice_list = list(choice_map)
choice_list.sort()
return choice_list
except TypeError:
self.error("Choice list is not valid!")
def validate(self, value):
super().validate(value)
if value not in self.choices:
self.error("Invalid Choice")
</code></pre>
<p>Now I have a collection like this:</p>
<pre class="lang-py prettyprint-override"><code>import mongoengine_goodjson as gj
class Collection(gj.Document):
TYPE_1 = 1
TYPE_2 = 2
TYPE_3 = 3
TYPES = [
(TYPE_1, "type_1"),
(TYPE_2, "type_2"),
(TYPE_3, "type_3"),
]
user_id = ObjectIdField()
types = ListField(IntChoiceField(choices=TYPES))
</code></pre>
<p>I want to look up by <code>user_id</code> and create a new record if nothing is found, or update and append to the list of <code>types</code> if a record is found. <em>Upserting</em> the first found record is also fine.</p>
<p>I figured I could do something like this:</p>
<pre class="lang-py prettyprint-override"><code>Collection(user_id=user_id).update(add_to_set__types=1, upsert=True)
</code></pre>
<p>or this:</p>
<pre class="lang-py prettyprint-override"><code>Collection(user_id=user_id).update(push__types=1, upsert=True)
</code></pre>
<p>but in both cases a new record is always created, regardless of whether one already exists. How can I upsert and append on update?</p>
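For reference, calling <code>.update()</code> on a freshly constructed, unsaved document is keyed on that instance's primary key rather than on <code>user_id</code>, which would explain a new record appearing on every call. Operating on the queryset instead — e.g. <code>Collection.objects(user_id=user_id).update_one(add_to_set__types=1, upsert=True)</code> — matches on <code>user_id</code>; treat that exact call as an assumption to verify. The semantics being sought can be sketched in plain Python (no database involved; names hypothetical):

```python
# Plain-Python model of "upsert + add_to_set" keyed on user_id (no MongoDB involved).
store = {}  # user_id -> {"user_id": ..., "types": [...]}

def upsert_add_to_set(store, user_id, value):
    # upsert: fetch the record if present, otherwise create it
    record = store.setdefault(user_id, {"user_id": user_id, "types": []})
    # add_to_set: append only if the value is absent
    if value not in record["types"]:
        record["types"].append(value)
    return record

upsert_add_to_set(store, "u1", 1)   # creates the record
upsert_add_to_set(store, "u1", 2)   # appends
upsert_add_to_set(store, "u1", 1)   # no duplicate added
print(store["u1"]["types"])  # [1, 2]
```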
|
<python><mongodb><mongoengine>
|
2024-02-18 12:35:33
| 1
| 4,313
|
Amir Shabani
|
78,015,554
| 8,076,879
|
Cache python3-poetry installation in circleci
|
<p>I would like to cache poetry installation in <code>.circleci</code></p>
<p>The install command is:
<code>curl -sSL https://install.python-poetry.org | python3 -</code>
which creates a binary at
<code>/home/circleci/.local/bin/poetry</code>
that is actually a link to
<code>/home/circleci/.local/share/pypoetry/venv/bin/poetry</code>; the directory <code>/home/circleci/.local/share/pypoetry</code> is created by the installation.</p>
<p>After installation, doing <code>poetry config --list</code> results in</p>
<pre><code>cache-dir = "/home/circleci/.cache/pypoetry"
experimental.system-git-client = false
installer.max-workers = null
installer.modern-installation = true
installer.no-binary = null
installer.parallel = true
virtualenvs.create = true
virtualenvs.in-project = null
virtualenvs.options.always-copy = false
virtualenvs.options.no-pip = false
virtualenvs.options.no-setuptools = false
virtualenvs.options.system-site-packages = false
virtualenvs.path = "{cache-dir}/virtualenvs" # /home/circleci/.cache/pypoetry/virtualenvs
virtualenvs.prefer-active-python = false
virtualenvs.prompt = "{project_name}-py{python_version}"
warnings.export = true
</code></pre>
<p>In my config.yml I tried many variations of the following:</p>
<pre class="lang-yaml prettyprint-override"><code> - restore_cache: # **restores saved dependency cache if the Branch key template or requirements.txt files have not changed since the previous run**
key: deps1-{{ .Branch }}-{{ checksum "pyproject.toml" }}
paths:
- "/home/circleci/.cache/pypoetry"
- run:
name: Install poetry
command: |
curl -sSL https://install.python-poetry.org | python3 -
- save_cache: # ** special step to save dependency cache **
key: deps1-{{ .Branch }}-{{ checksum "pyproject.toml" }}
paths:
- "/home/circleci/.cache/pypoetry"
</code></pre>
<p>Where in the <code>paths</code>: key I tried</p>
<pre><code>- /home/circleci/.cache/pypoetry
- /home/circleci/.cache
- /home/circleci/.local/bin/
- /home/circleci/.local/share/pypoetry
- /home/circleci/.local/share/pypoetry/venv/bin
</code></pre>
<p>Caching in <code>.circleci</code> itself works well; for example, caching the libraries installed by poetry works flawlessly:</p>
<pre><code> - restore_cache: # **restores saved dependency cache if the Branch key template or requirements.txt files have not changed since the previous run**
key: deps1-{{ .Branch }}-{{ checksum "poetry.lock" }}
paths:
- "/root/.cache/pypoetry/virtualenvs"
- run:
name: Install dependencies with poetry
command: |
poetry install
- save_cache: # ** special step to save dependency cache **
key: deps1-{{ .Branch }}-{{ checksum "poetry.lock" }}
paths:
- "/root/.cache/pypoetry/virtualenvs"
</code></pre>
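For reference, the installer writes the venv to <code>~/.local/share/pypoetry</code> and the symlink to <code>~/.local/bin</code>, so both paths need to be in <code>save_cache</code>, and the install step should be skipped when the binary was already restored. Note also that <code>restore_cache</code> takes <code>keys</code>, not <code>paths</code> — <code>paths</code> belongs only to <code>save_cache</code>. A sketch (cache key names and paths are assumptions to adapt):

```yaml
- restore_cache:
    keys:
      - poetry-install-v1-{{ arch }}
- run:
    name: Install poetry if not cached
    command: |
      if [ ! -x /home/circleci/.local/bin/poetry ]; then
        curl -sSL https://install.python-poetry.org | python3 -
      fi
- save_cache:
    key: poetry-install-v1-{{ arch }}
    paths:
      - /home/circleci/.local/bin
      - /home/circleci/.local/share/pypoetry
```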
|
<python><cicd><circleci><python-poetry>
|
2024-02-18 11:30:19
| 0
| 2,438
|
DaveR
|
78,015,300
| 1,137,254
|
How to generate a 'stub' Python interface for a 3p library for development?
|
<p>I'm developing for RPi. I'd like to do the majority of my coding on a fast, local machine with a snappy IDE. To do this, I need to be able to <code>pip install</code> the same modules as I have on the RPi. Problem is, one of the most popular, <a href="https://sourceforge.net/projects/raspberry-gpio-python/" rel="nofollow noreferrer">RPi.GPIO</a> is only installable on the RPi for obvious reasons (my laptop doesn't have GPIO pins!)</p>
<p>As the library is primarily written in C, it's not easy to extract a Python interface from it. Is there any tool that can scan the C/compiled module and extract the Python interface, stubbed, and output that somewhere I can then easily import it?</p>
<p>I'd like to contribute back to the project and have a stubbed Python interface that is installable on non-RPis so that development is easy.</p>
<p>As I have never released a Python library before, bonus points for pointers on how to 'teach' pip to install different packages on different OSes.</p>
<p><a href="https://sourceforge.net/p/raspberry-gpio-python/tickets/215/" rel="nofollow noreferrer">Here's a ticket</a> of the issue I'd like to resolve.</p>
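For reference, mypy ships a <code>stubgen</code> tool that can generate <code>.pyi</code> stub files from a compiled extension module when run on the Pi itself. For quick off-device development, another route is registering a hand-written fake module before the application imports it; a sketch (the constants and signatures below are illustrative assumptions, not copied from RPi.GPIO):

```python
# fake_rpi.py -- hypothetical stand-in so code importing RPi.GPIO runs off-device.
import sys
import types

gpio = types.ModuleType("RPi.GPIO")
gpio.BCM, gpio.OUT, gpio.HIGH, gpio.LOW = 11, 0, 1, 0  # assumed constant values
gpio.setmode = lambda mode: None          # no-op stubs mirroring the real API shape
gpio.setup = lambda ch, direction: None
gpio.output = lambda ch, state: None

rpi = types.ModuleType("RPi")
rpi.GPIO = gpio
sys.modules["RPi"] = rpi                  # register before the application imports it
sys.modules["RPi.GPIO"] = gpio

import RPi.GPIO as GPIO  # now resolves to the fake module
GPIO.setmode(GPIO.BCM)
print(GPIO.BCM)  # 11
```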
|
<python><raspberry-pi><python-typing><python-bindings>
|
2024-02-18 10:02:57
| 0
| 3,495
|
Sam
|
78,015,113
| 1,920,003
|
How to run two blocking functions concurrently?
|
<p>I have two blocking functions that I want to keep separate, in separate threads or tasks.</p>
<pre><code>def foo():
# This can run as a background process
while True:
# do something
# write to disk
return data
def bar():
# read from the disk that was generated by foo
# do something
return data
</code></pre>
<p>Foo and bar are methods in different classes in different files.</p>
<p>I have a FastAPI websocket server. I want both functions to start and keep running concurrently for as long as the client hasn't disconnected and hasn't sent a <code>STOP</code> command.</p>
<p>The <code>bar()</code> function is supposed to keep running and scanning the disk. Every time I receive data back from <code>bar()</code>, I should send it to the client via <code>await websocket.send_text(data)</code>.</p>
<p>I tried running them in a pool or using a queue but <code>foo</code> would prevent <code>bar</code> from starting.</p>
<p>I can make both of them async, but that's not ideal, because I'd have to store a lot of data in memory rather than on disk, since reading from disk isn't easy when working with async functions. Besides, the logic works well; there's no need to introduce async.</p>
<p>If I were to do that I would have to remove the reading from the disk part and store the data in a shared dictionary for each websocket client.</p>
<p>I tried doing that, but I'm facing two issues:</p>
<ol>
<li><p>The dict stays empty when bar() tries to read from it, not sure how to share data between the two functions</p>
</li>
<li><p>I don't like asyncio because it often gets stuck and can't stop it when a stop command is sent or when the client disconnects.</p>
</li>
<li><p>I didn't use fastapi background tasks to run <code>foo</code> because fastapi background tasks cannot be stopped on demand.</p>
</li>
</ol>
<p>One of the versions I tried</p>
<pre><code>@app.websocket("/live-transcription")
async def websocket_endpoint(websocket: WebSocket):
await websocket.accept()
client_id = id(websocket)
tasks = []
stop_event = asyncio.Event()
try:
while True:
data = await websocket.receive_text()
message = json.loads(data)
command = message.get("command")
url = message.get("url", "")
if command == "STOP":
stop_event.set()
if tasks:
await asyncio.gather(*tasks, return_exceptions=True)
stop_event.clear()
tasks.clear()
if client_id in FILE_STORAGE:
del FILE_STORAGE[client_id]
elif command == "START":
FILE_STORAGE[client_id] = {}
bar_task = await asyncio.create_task(
bar(client_id, stop_event)
)
foo_task = await asyncio.create_task(
foo(url, client_id, stop_event)
)
tasks = [bar_task, foo_task]
while tasks and not websocket.closed:
while True:
try:
response = await bar_task.result()
if response:
await websocket.send_text(response)
else:
break
except asyncio.CancelledError:
break
# Clear finished tasks
tasks = [t for t in tasks if not t.done()]
if client_id in FILE_STORAGE:
del FILE_STORAGE[client_id]
except WebSocketDisconnect:
print("Client disconnected")
finally:
for task in tasks:
task.cancel()
await asyncio.gather(*tasks, return_exceptions=True)
if client_id in FILE_STORAGE:
del FILE_STORAGE[client_id]
await websocket.close()
</code></pre>
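For reference, <code>asyncio.to_thread</code> can run both blocking functions in worker threads while the event loop stays free to service the websocket; note also that <code>bar_task = await asyncio.create_task(...)</code> in the snippet above awaits the task to completion immediately, so the two never overlap. A stripped-down sketch with hypothetical stand-ins for <code>foo</code>/<code>bar</code> and a shared stop event:

```python
import asyncio
import threading
import time

def foo(stop: threading.Event, results: list) -> None:
    # blocking producer: keeps writing until asked to stop
    while not stop.is_set():
        results.append("chunk")
        time.sleep(0.01)

def bar(stop: threading.Event, results: list) -> str:
    # blocking consumer: waits until the producer has written something
    while not results and not stop.is_set():
        time.sleep(0.01)
    return results[0] if results else ""

async def main() -> str:
    stop = threading.Event()
    results: list = []
    # create_task is NOT awaited here, so both threads run concurrently
    foo_task = asyncio.create_task(asyncio.to_thread(foo, stop, results))
    bar_task = asyncio.create_task(asyncio.to_thread(bar, stop, results))
    data = await bar_task          # event loop stays free while both threads run
    stop.set()                     # signal foo to exit, then wait for it
    await foo_task
    return data

print(asyncio.run(main()))  # chunk
```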
|
<python><fastapi>
|
2024-02-18 08:49:22
| 1
| 5,375
|
Lynob
|
78,014,906
| 11,861,874
|
Excel Cell Operation using Python
|
<p>I am trying to perform cell-by-cell operations in Excel using Python. It's easy to do in VBA, but I'm not sure how to achieve the same in Python.</p>
<p>I have the following data and would like to perform a read-and-write kind of operation. I hope you guys can help me with the same.</p>
<pre><code>import pandas as pd
import xlwings as xw
wb = xw.Book("Database.xlsx")
ws = wb.sheets["Sheet1"]
for cells in range(len(Age)):
# Perform the row-wise operation.
wb.save("Database.xlsx")
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Age</th>
<th>Income</th>
<th>Tax</th>
</tr>
</thead>
<tbody>
<tr>
<td>Adam</td>
<td>36</td>
<td>50,000</td>
<td></td>
</tr>
<tr>
<td>Will</td>
<td>42</td>
<td>80,000</td>
<td></td>
</tr>
<tr>
<td>Rony</td>
<td>34</td>
<td>40,000</td>
<td></td>
</tr>
<tr>
<td>Phil</td>
<td>60</td>
<td>90,000</td>
<td></td>
</tr>
<tr>
<td>David</td>
<td>32</td>
<td>28,000</td>
<td></td>
</tr>
</tbody>
</table></div>
<p>In the above table from Sheet1 of the Excel workbook, I would like to read the cells one by one for the Income column; so if, say, Cell(2,3) = 50,000, then <strong>update in Excel (write)</strong> Cell(2,4) (Tax) = 5,000 (i.e., 10% of Income).</p>
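For reference, if the per-cell work is really a formula over a whole column, reading the sheet into pandas and writing the column back in one go avoids slow cell-by-cell round trips. A sketch using an in-memory stand-in for the sheet (with xlwings, the read might look like <code>ws.range("A1").expand().options(pd.DataFrame, index=False).value</code> — treat that exact call as an assumption):

```python
import pandas as pd

# Stand-in for the sheet contents shown in the question
df = pd.DataFrame({
    "Name": ["Adam", "Will", "Rony", "Phil", "David"],
    "Age": [36, 42, 34, 60, 32],
    "Income": [50000, 80000, 40000, 90000, 28000],
})

df["Tax"] = df["Income"] * 0.10  # vectorised: fills the whole column at once
print(df.loc[0, "Tax"])  # 5000.0
```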
|
<python><excel>
|
2024-02-18 07:26:04
| 1
| 645
|
Add
|
78,014,891
| 3,380,902
|
sqlalchemy + pandas parameterized query string execution error
|
<pre><code>import pandas as pd
from sqlalchemy import create_engine
host = "xxxx.us-west-2.rds.amazonaws.com"
user = "user1"
pwd = "xxx"
database = "db1"
engine = create_engine("mysql+pymysql://{}:{}@{}/{}".format(user,pwd,host,database), echo=True)
query1 = """
select *
from tbl1
where dt <= :cutoff_dt
"""
params={
"cutoff_dt": "2023-9-30"
}
# Begin transaction and create session
with engine.begin() as conn:
try:
# Execute your query
df1 = pd.read_sql(
query1,
conn,
params=params
)
except Exception as e:
# Handle any errors
print(f"Error occurred: {e}")
conn.rollback() # Rollback transaction if error occurs
finally:
# Close connection and rollback if not explicit (safer)
conn.close() # Explicitly close the connection even if no error
</code></pre>
<p>Error:</p>
<p>[parameters: {'cutoff_dt': '2023-9-30'}]
(Background on this error at: <a href="https://sqlalche.me/e/20/f405" rel="nofollow noreferrer">https://sqlalche.me/e/20/f405</a>)
2024-02-13 11:14:16,745 INFO sqlalchemy.engine.Engine ROLLBACK</p>
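For reference, under SQLAlchemy 2.x a plain query string with named <code>:param</code> placeholders must be wrapped in <code>sqlalchemy.text()</code> before pandas can bind the params dict. A self-contained sketch against an in-memory SQLite database (the table and rows are made up for illustration; note too that lexicographic date comparison on strings needs zero-padded values like <code>'2023-09-30'</code>, not <code>'2023-9-30'</code>):

```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # in-memory stand-in for the MySQL server
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE tbl1 (dt TEXT, v INTEGER)"))
    conn.execute(text("INSERT INTO tbl1 VALUES ('2023-09-01', 1), ('2023-10-15', 2)"))
    df1 = pd.read_sql(
        text("SELECT * FROM tbl1 WHERE dt <= :cutoff_dt"),  # text() enables named params
        conn,
        params={"cutoff_dt": "2023-09-30"},
    )

print(len(df1))
```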
|
<python><pandas><sqlalchemy>
|
2024-02-18 07:21:23
| 1
| 2,022
|
kms
|
78,014,727
| 13,176,726
|
AssertionError in django-allauth settings.py during Django application startup
|
<p>I'm encountering an AssertionError in my Django application when trying to start the development server. The error occurs in the <code>django-allauth</code> package, specifically in the <code>settings.py</code> file. I've already tried reinstalling <code>django-allauth</code>, checking dependencies, and reviewing my configuration without success.</p>
<p>The error occurs after changing the configuration option to <code>ACCOUNT_EMAIL_VERIFICATION = 'mandatory'</code> instead of <code>'none'</code>.</p>
<p>Here is the traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python38-32\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Users\User\AppData\Local\Programs\Python\Python38-32\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\User\Desktop\Main_project\venv\lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "C:\Users\User\Desktop\Main_project\venv\lib\site-packages\django\core\management\commands\runserver.py", line 125, in inner_run
autoreload.raise_last_exception()
File "C:\Users\User\Desktop\Main_project\venv\lib\site-packages\django\utils\autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "C:\Users\User\Desktop\Main_project\venv\lib\site-packages\django\core\management\__init__.py", line 398, in execute
autoreload.check_errors(django.setup)()
File "C:\Users\User\Desktop\Main_project\venv\lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "C:\Users\User\Desktop\Main_project\venv\lib\site-packages\django\__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\User\Desktop\Main_project\venv\lib\site-packages\django\apps\registry.py", line 116, in populate
app_config.import_models()
File "C:\Users\User\Desktop\Main_project\venv\lib\site-packages\django\apps\config.py", line 269, in import_models
self.models_module = import_module(models_module_name)
File "C:\Users\User\AppData\Local\Programs\Python\Python38-32\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\User\AppData\Local\Programs\Python\Python38-32\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\User\Desktop\Main_project\venv\lib\site-packages\allauth\account\models.py", line 9, in <module>
from . import app_settings, signals
  File "C:\Users\User\Desktop\Main_project\venv\lib\site-packages\allauth\account\app_settings.py", line 373, in <module>
app_settings = AppSettings("ACCOUNT_")
  File "C:\Users\User\Desktop\Main_project\venv\lib\site-packages\allauth\account\app_settings.py", line 27, in __init__
assert (
AssertionError
</code></pre>
<p><em><strong>My Question:</strong></em> How do I fix this error? It shows only when <code>ACCOUNT_EMAIL_VERIFICATION</code> is set to <code>mandatory</code>. If it is <code>none</code> the project runs smoothly.</p>
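For reference, the assertion at that point in <code>app_settings.py</code> enforces consistency between the account settings — in several allauth versions, <code>'mandatory'</code> verification requires that an email address actually be collected at signup. A <code>settings.py</code> fragment along those lines; treat it as an assumption to check against the installed allauth version:

```python
# settings.py fragment (hypothetical; verify against your allauth version)
ACCOUNT_EMAIL_REQUIRED = True            # mandatory verification needs an email address
ACCOUNT_EMAIL_VERIFICATION = "mandatory"
ACCOUNT_AUTHENTICATION_METHOD = "username_email"
```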
|
<python><django><django-allauth>
|
2024-02-18 06:08:19
| 2
| 982
|
A_K
|
78,014,684
| 2,822,041
|
How to replay keystrokes without modifiers?
|
<p>Using pynput, I am trying to capture keys, including combinations like shift-1 and cmd-1 (I am on macOS).</p>
<p>The problem is that if I press shift-1 on my keyboard, it is recorded as shift-! (the modified character), and shift-2 is recorded as shift-@. Hence, when I replay these keystrokes, they don't work as intended.</p>
<p>Keystrokes with the cmd key are also not working as intended.</p>
<p>Please point me in the right direction, as I need the keys to be replayed exactly as recorded.</p>
<p>Disclaimer: I got this code from the web and am trying to modify it.</p>
<pre><code>import time

from pynput import mouse, keyboard

recording = []
def on_press(key):
print(key)
try:
json_object = {
'action':'pressed_key',
'key':key.char,
'_time': time.time()
}
except AttributeError:
if key == keyboard.Key.esc:
print("Keyboard recording ended.")
return False
json_object = {
'action':'pressed_key',
'key':str(key),
'_time': time.time()
}
recording.append(json_object)
def on_release(key):
try:
json_object = {
'action':'released_key',
'key':key.char,
'_time': time.time()
}
except AttributeError:
json_object = {
'action':'released_key',
'key':str(key),
'_time': time.time()
}
recording.append(json_object)
keyboard_listener = keyboard.Listener(
on_press=on_press,
on_release=on_release)
keyboard_listener.start()
keyboard_listener.join()
</code></pre>
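For reference, pynput's <code>Listener.canonical(key)</code> is the built-in way to strip modifier effects from a captured key (so shift-1 is recorded as shift plus "1" rather than "!"). The idea can also be sketched without pynput by un-shifting recorded characters through a layout table — the table below covers a US layout and is an assumption, not pynput API:

```python
# Sketch: normalise recorded characters back to their unmodified keys so that
# replaying "shift + 1" presses shift and "1" rather than "!".
SHIFT_MAP = {"!": "1", "@": "2", "#": "3", "$": "4", "%": "5",
             "^": "6", "&": "7", "*": "8", "(": "9", ")": "0"}

def canonical_char(char: str, shift_down: bool) -> str:
    """Return the base key for a recorded character, given the modifier state."""
    if shift_down:
        return SHIFT_MAP.get(char, char)
    return char

print(canonical_char("!", shift_down=True))  # 1
```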
|
<python><python-3.x>
|
2024-02-18 05:38:36
| 0
| 13,738
|
Someone Special
|
78,014,487
| 939,259
|
How can I use PyTorch 2.2 with Google Colab TPUs?
|
<p>I'm having trouble getting PyTorch 2.2 running with TPUs on Google Colab. I'm getting an error about a JAX bug, but I'm confused about this because I'm not doing anything with JAX.</p>
<p>My setup process is very simple:</p>
<pre><code>!pip install torch~=2.2.0 torch_xla[tpu]~=2.2.0 -f https://storage.googleapis.com/libtpu-releases/index.html
</code></pre>
<p>And then</p>
<pre><code>import torch
import torch_xla.core.xla_model as xm
</code></pre>
<p>which gives the error</p>
<pre><code>/usr/local/lib/python3.10/dist-packages/jax/__init__.py:27: UserWarning: cloud_tpu_init failed: KeyError('')
This a JAX bug; please report an issue at https://github.com/google/jax/issues
_warn(f"cloud_tpu_init failed: {repr(exc)}\n This a JAX bug; please report "
/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:441: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
</code></pre>
<p>Then trying</p>
<pre><code>t1 = torch.tensor(100, device=xm.xla_device())
t2 = torch.tensor(200, device=xm.xla_device())
print(t1 + t2)
</code></pre>
<p>gives the error</p>
<pre><code>2 frames
/usr/local/lib/python3.10/dist-packages/torch_xla/runtime.py in xla_device(n, devkind)
121
122 if n is None:
--> 123 return torch.device(torch_xla._XLAC._xla_get_default_device())
124
125 devices = xm.get_xla_supported_devices(devkind=devkind)
RuntimeError: Bad StatusOr access: UNKNOWN: TPU initialization failed: No ba16c7433 device found.
</code></pre>
|
<python><pytorch><google-colaboratory><jax>
|
2024-02-18 03:36:44
| 1
| 11,918
|
Thomas Johnson
|
78,014,453
| 3,166,105
|
Python errno 13 permission error when reading windows 10 registry hive
|
<p>I have a simple Python script that tries to read the contents of a registry hive inside <code>C:\Windows\System32\config</code>. I have given the user running the script full control over the directory.</p>
<p><a href="https://i.sstatic.net/l67dz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l67dz.png" alt="enter image description here" /></a></p>
<p>I keep getting permission error.</p>
<p><a href="https://i.sstatic.net/N09U1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N09U1.png" alt="enter image description here" /></a></p>
|
<python><windows>
|
2024-02-18 03:15:59
| 0
| 2,036
|
holybull
|
78,014,389
| 721,998
|
module not found markupsafe
|
<p>I create a virtual env:</p>
<pre><code>> virtualenv -p python3 spectap
</code></pre>
<p>I activate it and install packages from requirements.txt in it:</p>
<pre><code>> . spectap/bin/activate
> pip3 install -r requirements.txt
</code></pre>
<p>Running my flask app in debug mode gives me this error:</p>
<pre><code>Traceback (most recent call last):
File "/home/ec2-user/app4/app/spectap/bin/flask", line 5, in <module>
from flask.cli import main
File "/home/ec2-user/app4/app/spectap/local/lib/python3.6/dist-packages/flask/__init__.py", line 1, in <module>
from markupsafe import escape
ModuleNotFoundError: No module named 'markupsafe'
</code></pre>
<p>I install <code>markupsafe</code>:</p>
<pre><code>(spectap) [ec2-user@ip-172-31-23-155 app]$ pip3 install MarkupSafe==2.0.1
Collecting MarkupSafe==2.0.1
Using cached MarkupSafe-2.0.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (30 kB)
Installing collected packages: MarkupSafe
WARNING: Value for scheme.purelib does not match. Please report this to <https://github.com/pypa/pip/issues/10151>
distutils: /home/ec2-user/app4/app/spectap/lib/python3.6/dist-packages
sysconfig: /home/ec2-user/app4/app/spectap/lib/python3.6/site-packages
WARNING: Additional context:
user = False
home = None
root = None
prefix = None
Successfully installed MarkupSafe-2.0.1
</code></pre>
<p>After installation in my virtual env, I run <code>pip show markupsafe</code> but it still does not seem to find the package:</p>
<pre><code>(spectap) [ec2-user@ip-172-31-23-155 app]$ pip3 show markupsafe
WARNING: Package(s) not found: markupsafe
</code></pre>
<p>Also re-running my flask app still shows the <code>no module named markupsafe</code> error.</p>
<p>What am I missing?</p>
<p>P.S. Here is what the path variable looks like from inside the virtual environment:</p>
<pre><code>(spectap) [ec2-user@ip-172-31-23-155 app]$ python -c "import sys; print(sys.path)"
['',
'/home/ec2-user/app5/app/spectap/local/lib64/python3.6/site-packages',
'/home/ec2-user/app5/app/spectap/local/lib/python3.6/site-packages',
'/home/ec2-user/app5/app/spectap/lib64/python3.6',
'/home/ec2-user/app5/app/spectap/lib/python3.6',
'/home/ec2-user/app5/app/spectap/lib64/python3.6/site-packages',
'/home/ec2-user/app5/app/spectap/lib/python3.6/site-packages',
'/home/ec2-user/app5/app/spectap/lib64/python3.6/lib-dynload',
'/home/ec2-user/app5/app/spectap/local/lib/python3.6/dist-packages',
'/usr/lib64/python3.6', '/usr/lib/python3.6']
</code></pre>
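For reference, the pip warning above shows the two install schemes disagreeing (<code>distutils</code> pointing at <code>dist-packages</code>, <code>sysconfig</code> at <code>site-packages</code>), which is consistent with pip installing the package where the interpreter does not look first. A quick diagnostic sketch to compare the interpreter's expected package directory with its search path:

```python
# Diagnostic sketch: see which directory this interpreter expects packages in,
# then check whether that directory is actually on the import search path.
import sys
import sysconfig

purelib = sysconfig.get_path("purelib")
print(purelib)                 # where pure-Python packages are expected to land
print(purelib in sys.path)     # a mismatch here suggests pip and the interpreter disagree
```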
|
<python><flask><pip><python-3.6>
|
2024-02-18 02:44:02
| 0
| 6,511
|
Darth.Vader
|
78,014,380
| 3,555,115
|
Combine DataFrames based on specific iteration numbers from two dataframes
|
<p>I have two dataframes, df1 and df2, that I would like to combine based on the iteration number column in each:</p>
<pre><code>df1 =
iteration IOPS Latency
0 1_1 46090 0.7300
1 2_2 12 0.0221
2 3_3 49164 0.1236
3 4_4 98311 0.1318
4 5_5 196604 0.2076
5 6_6 249843 0.1467
6 7_7 298974 0.1578
7 8_8 348108 0.1604
8 9_9 397230 0.1707
df2 =
iteration IOPS Latency
0 1_1 46074 0.6977
1 2_2 12 0.0279
2 3_3 49159 0.1921
3 4_4 98307 0.2189
4 5_5 298976 0.2337
5 6_6 397265 0.2622
</code></pre>
<p>I need to combine iterations 1_1, 2_2, 3_3, 9_9 from df1 with 1_1, 2_2, 5_5, 6_6 from df2:</p>
<pre><code>df3 =
iteration IOPS Latency iteration IOPS Latency
1_1 46090 0.7300 1_1 46074 0.6977
2_2 12 0.0221 2_2 12 0.0279
3_3 49164 0.1236 5_5 298976 0.2337
9_9 397230 0.1707 6_6 397265 0.2622
</code></pre>
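For reference, since the two selections are paired by position rather than by a shared key, one approach is to filter each frame by its iteration list, reset the indexes, and concatenate side by side (sketch, assuming pandas; the data below is typed in from the question):

```python
import pandas as pd

df1 = pd.DataFrame({"iteration": ["1_1", "2_2", "3_3", "9_9"],
                    "IOPS": [46090, 12, 49164, 397230],
                    "Latency": [0.7300, 0.0221, 0.1236, 0.1707]})
df2 = pd.DataFrame({"iteration": ["1_1", "2_2", "5_5", "6_6"],
                    "IOPS": [46074, 12, 298976, 397265],
                    "Latency": [0.6977, 0.0279, 0.2337, 0.2622]})

# keep only the wanted iterations, then realign indexes so rows pair up by position
keep1 = df1[df1["iteration"].isin(["1_1", "2_2", "3_3", "9_9"])].reset_index(drop=True)
keep2 = df2[df2["iteration"].isin(["1_1", "2_2", "5_5", "6_6"])].reset_index(drop=True)
df3 = pd.concat([keep1, keep2], axis=1)  # side-by-side concatenation

print(df3.shape)  # (4, 6)
```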
|
<python><pandas><dataframe>
|
2024-02-18 02:37:12
| 1
| 750
|
user3555115
|
78,014,354
| 12,744,116
|
How do I parallelize a set of matrix multiplications
|
<p>Consider the following operation, where I take 20 x 20 slices of a larger matrix and compute their dot product with a 10 x 20 matrix:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
a = np.random.rand(10, 20)
b = np.random.rand(20, 1000)
ans_list = []
for i in range(980):
ans_list.append(
np.dot(a, b[:, i:i+20])
)
</code></pre>
<p>I know that NumPy parallelizes the actual matrix multiplication, but how do I parallelize the outer for loop so that the individual multiplications are run at the same time instead of sequentially?</p>
<p>Additionally, how would I go about it if I wanted to do the same using a GPU? Obviously, I'll use CuPy instead of NumPy, but how do I submit the multiple matrix multiplications to the GPU either simultaneously or asynchronously?</p>
<hr />
<p><strong>PS:</strong> Please note that the sliding windows above are an example to generate multiple matmuls. I know one solution (shown below) in this particular case is to use NumPy built-in sliding windows functionality, but I'm interested in knowing the optimal way to run an arbitrary set of matmuls in parallel (optionally on a GPU), and not just a faster solution for this particular example.</p>
<pre class="lang-py prettyprint-override"><code>windows = np.lib.stride_tricks.sliding_window_view(b, (20, 20)).squeeze()
ans_list = np.dot(a, windows)
</code></pre>
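For reference, BLAS-backed <code>dot</code> releases the GIL, so a <code>ThreadPoolExecutor</code> over the outer loop lets several products run at once on the CPU; on a GPU, the CuPy analogue would be issuing each product on its own <code>cupy.cuda.Stream</code>, since kernel launches are already asynchronous (treat those CuPy specifics as a direction to verify, not tested code). A CPU-side sketch:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
a = rng.random((10, 20))
b = rng.random((20, 1000))

def one_product(i: int) -> np.ndarray:
    # the BLAS call inside @ releases the GIL, so threads overlap usefully
    return a @ b[:, i:i + 20]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(one_product, range(980)))  # order is preserved

print(len(results), results[0].shape)  # 980 (10, 20)
```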
|
<python><numpy><parallel-processing><gpu><cupy>
|
2024-02-18 02:22:53
| 2
| 1,350
|
anonymous1a
|