QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,990,203 | 10,669,558 | Decorators returning function as none if passed as an argument to the decorator | <p>I have been trying to wrap my head around decorators and I found this weird little thing, let's say we have a decorator like</p>
<pre><code>import functools

global_event_registration = []

def handler_all(func):
    if(func):
        global_event_registration.append(func)
    return func
</code></pre>
<p>which just adds the function to a global list and returns it, and if we run the function we get the expected output</p>
<pre><code>@handler_all
def print_yay():
    print('yay!')

print(global_event_registration)
print_yay()
</code></pre>
<p>output:</p>
<pre><code>[<function __main__.print_yay()>]
yay!
</code></pre>
<p>but if we change the <code>handler_all</code> to accept an arbitrary argument let's say <code>count</code>, and then run the same thing again, it errors out,</p>
<pre><code>def handler_all(func, count=2):
    if(func):
        global_event_registration.append(func)
    return func

@handler_all(count=2)
def print_yay():
    print('yay')
</code></pre>
<p>it'll error out saying that positional argument <code>func</code> is missing. My question: since <code>func</code> is passed in implicitly in the first case, why is it not passed in the second case?</p>
<p>Also I found this <a href="https://stackoverflow.com/questions/66851640/python-decorators-using-functools-partial-where-does-func-come-from">stackoverflow thread</a> which deals with it using <code>functools.partial</code> like</p>
<pre><code>def handler_all(func=None, count=2):
    if not func:
        return functools.partial(handler_all, count=count)
    if(func):
        func()
        global_event_registration.append(func)
    return func
</code></pre>
<p>I understood the thread's answer that we are just returning a partial function which has <code>func</code> passed to it implicitly, but the same question remains: how did this get the function passed to it, versus simply not receiving it at all? Why is it passed implicitly in the second case and not in the first, since the function definition is the same?</p>
<p>Also another weird thing is,</p>
<p>in this example</p>
<pre><code>def test(name="your_name"):
    def decorator(func):
        print(name)
        func()
        return func
    return decorator

@test(name='altair')
def print_yay():
    print('yay')
</code></pre>
<p>when you run this, the output is</p>
<pre><code>altair
yay
</code></pre>
<p>My question is: why? We are just returning a reference to the inner function and not calling it, so why does it get called?</p>
<p>and if we move the func to the outer function, it works as expected with no output.</p>
<pre><code>def test(func):
    def decorator():
        func()
        return func
    return decorator

@test
def print_yay():
    print('yay')
</code></pre>
<p>In the same vein, if I define a decorator like this and call the inner function explicitly, it should in turn return the function passed to it; but when I run this, the decorated function ends up as <code>NoneType</code>. Again, why?</p>
<pre><code>def test(func=None, name="your_name"):
    def decorator(func):
        print(name)
        func()
        return func
    return decorator(func)
</code></pre>
<p>the output is</p>
<pre><code>altair
<function print_yay at 0x7fcf60466440>
</code></pre>
<p>The second output line is printed by the decorator, which means the <code>print</code> that we passed in as <code>func</code> is working fine.
Yet if I call <code>print_yay</code>, it just says <code>'NoneType' object is not callable</code>.</p>
<pre><code>@test(func=print, name='altair')
def print_yay():
    print('yay')

print_yay()
</code></pre>
<p>output:</p>
<pre><code>      1 @test(func=print,name='altair')
      2 def print_yay():
      3     print('yay')
----&gt; 5 print_yay()

TypeError: 'NoneType' object is not callable
</code></pre>
<p>Can somebody explain these nuances to me? I am having a hard time figuring out what exactly is happening under the hood, and I can't find any resources which explain this at all.</p>
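<p>For reference, here is a minimal sketch (names are illustrative, not the thread's exact code) of the usual two-step decorator-factory pattern: <code>@handler_all(count=2)</code> first evaluates <code>handler_all(count=2)</code>, and only the object it returns is then called with the decorated function. That is why a one-argument <code>handler_all(func, count=2)</code> breaks: nothing supplies <code>func</code> in the first call.</p>

```python
# Two-step evaluation behind @factory(arg):
# 1) handler_all(count=2) runs first and must return a decorator;
# 2) that decorator is then called with the function being defined.
registry = []

def handler_all(count=2):            # the factory receives the arguments
    def decorator(func):             # the decorator receives the function
        registry.append((func, count))
        return func
    return decorator

@handler_all(count=2)                # same as: print_yay = handler_all(count=2)(print_yay)
def print_yay():
    print('yay')

print_yay()
```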
| <python><python-decorators> | 2023-08-28 05:27:40 | 1 | 645 | Altair21 |
76,990,120 | 7,535,168 | Would it be possible not to disable a button when opening a dialog in KivyMD? | <p>Basically, I have a button which I don't want to get disabled when I open <code>MDDialog</code>. Is that possible to do and what should be done to achieve it? I did some searching on this and haven't found anything useful.</p>
<p>Code example:</p>
<pre><code>from kivymd.app import MDApp
from kivy.lang import Builder
from kivymd.uix.dialog import MDDialog

kv = """
Screen:
    Button:
        text: 'dont disable me'
        pos_hint: {'x': 0, 'y': 0}
        size_hint: (0.2, 0.2)
    Button:
        text: 'open dialog'
        on_release: app.openDialog()
        pos_hint: {'x': 0.5, 'y': 0.5}
        size_hint: (0.2, 0.2)
"""

class app(MDApp):
    def build(self):
        return Builder.load_string(kv)

    def openDialog(self):
        dialog = MDDialog(title='hi',
                          size_hint=(0.2, 0.2))
        dialog.open()

app().run()
</code></pre>
| <python><kivy><kivymd> | 2023-08-28 04:57:51 | 0 | 601 | domdrag |
76,990,085 | 13,060,649 | Django: calling .only() on my model causing infinite loop? | <p>I am using <code>.only</code> to fetch the required fields from my model, it seems like my <code>__init__</code> method causing a inifinite loop when calling <code>only</code> on this model, and this is my model:</p>
<pre><code>class PodcastEpisode(models.Model):
    audio_metadata = models.JSONField(null=True)
    featured_artists = models.ManyToManyField(to=User, related_name='featured_artists')
    podcast_series = models.ForeignKey(to=PodcastSeries, on_delete=models.CASCADE, null=False)
    published = models.BooleanField(default=False)
    published_at = models.DateTimeField(blank=True, null=True)

    _original_audios = None  # To store current data
    _published = None

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # self._original_audios = self.audio_metadata
        # self._published = self.published
</code></pre>
<p>When I comment out the lines <code>self._original_audios = self.audio_metadata</code> and <code>self._published = self.published</code>, it doesn't cause an infinite loop. I am not sure how this is happening, even though I have included <code>audio_metadata</code> in my <code>.only()</code> fields. This is my query:</p>
<pre><code>PodcastEpisode.objects\
    .filter(id__in=id_list).prefetch_related(*prefetches).only(*['id', 'audio_metadata'])
</code></pre>
<p>Please suggest how I should use <code>.only()</code> and where I should place these <code>_original_audios</code> and <code>_published</code> variables.</p>
<p>For reference this is the whole stacktrace:</p>
<pre><code>  File "/Users/dev/Desktop/dev/Podsack_backend/mediacontent/models.py", line 152, in __init__
    self._original_audios = self.audio_metadata
                            ^^^^^^^^^^^^^^^^^^^
  File "/Users/dev/Library/Caches/pypoetry/virtualenvs/narratave-9vDc9ea1-py3.11/lib/python3.11/site-packages/django/db/models/query_utils.py", line 182, in __get__
    instance.refresh_from_db(fields=[field_name])
  File "/Users/dev/Library/Caches/pypoetry/virtualenvs/narratave-9vDc9ea1-py3.11/lib/python3.11/site-packages/django/db/models/base.py", line 707, in refresh_from_db
    ).filter(pk=self.pk)
    ^^^^^^^^^^^^^^^^^^
  File "/Users/dev/Library/Caches/pypoetry/virtualenvs/narratave-9vDc9ea1-py3.11/lib/python3.11/site-packages/django/db/models/manager.py", line 85, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dev/Library/Caches/pypoetry/virtualenvs/narratave-9vDc9ea1-py3.11/lib/python3.11/site-packages/django/db/models/query.py", line 1420, in filter
    return self._filter_or_exclude(False, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dev/Library/Caches/pypoetry/virtualenvs/narratave-9vDc9ea1-py3.11/lib/python3.11/site-packages/django/db/models/query.py", line 1438, in _filter_or_exclude
    clone._filter_or_exclude_inplace(negate, args, kwargs)
  File "/Users/dev/Library/Caches/pypoetry/virtualenvs/narratave-9vDc9ea1-py3.11/lib/python3.11/site-packages/django/db/models/query.py", line 1445, in _filter_or_exclude_inplace
    self._query.add_q(Q(*args, **kwargs))
  File "/Users/dev/Library/Caches/pypoetry/virtualenvs/narratave-9vDc9ea1-py3.11/lib/python3.11/site-packages/django/db/models/sql/query.py", line 1532, in add_q
    clause, _ = self._add_q(q_object, self.used_aliases)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dev/Library/Caches/pypoetry/virtualenvs/narratave-9vDc9ea1-py3.11/lib/python3.11/site-packages/django/db/models/sql/query.py", line 1562, in _add_q
    child_clause, needed_inner = self.build_filter(
                                 ^^^^^^^^^^^^^^^^^^
  File "/Users/dev/Library/Caches/pypoetry/virtualenvs/narratave-9vDc9ea1-py3.11/lib/python3.11/site-packages/django/db/models/sql/query.py", line 1478, in build_filter
    condition = self.build_lookup(lookups, col, value)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dev/Library/Caches/pypoetry/virtualenvs/narratave-9vDc9ea1-py3.11/lib/python3.11/site-packages/django/db/models/sql/query.py", line 1289, in build_lookup
    lookup_class = lhs.get_lookup(lookup_name)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dev/Library/Caches/pypoetry/virtualenvs/narratave-9vDc9ea1-py3.11/lib/python3.11/site-packages/django/db/models/expressions.py", line 377, in get_lookup
    return self.output_field.get_lookup(lookup)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dev/Library/Caches/pypoetry/virtualenvs/narratave-9vDc9ea1-py3.11/lib/python3.11/site-packages/django/db/models/query_utils.py", line 216, in get_lookup
    found = self._get_lookup(lookup_name)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dev/Library/Caches/pypoetry/virtualenvs/narratave-9vDc9ea1-py3.11/lib/python3.11/site-packages/django/db/models/query_utils.py", line 203, in _get_lookup
    return cls.get_lookups().get(lookup_name, None)
           ^^^^^^^^^^^^^^^^^
RecursionError: maximum recursion depth exceeded in comparison
</code></pre>
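<p>To make the mechanism concrete: reading a deferred field inside <code>__init__</code> triggers <code>refresh_from_db()</code>, which constructs another (still partially deferred) instance, whose <code>__init__</code> reads a deferred field again, and so on. A toy reproduction with a plain descriptor (no Django involved, names are illustrative):</p>

```python
class Deferred:
    """Toy stand-in for Django's DeferredAttribute (no Django required)."""
    def __get__(self, instance, owner):
        # Django's descriptor calls refresh_from_db(), which constructs a
        # fresh model instance; constructing one re-runs __init__ below.
        return owner()

class Episode:
    audio_metadata = Deferred()

    def __init__(self):
        # Same shape as: self._original_audios = self.audio_metadata
        self._original_audios = self.audio_metadata

hit_recursion = False
try:
    Episode()
except RecursionError:
    hit_recursion = True
print(hit_recursion)
```

<p>A possible fix (an assumption, not verified against this exact model) is to snapshot only fields that were actually loaded, e.g. <code>self._published = self.__dict__.get('published')</code>, since deferred fields are absent from the instance <code>__dict__</code> and reading them through the dict does not trigger a refresh.</p>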
| <python><django><django-models><django-queryset> | 2023-08-28 04:45:46 | 1 | 928 | suvodipMondal |
76,989,973 | 2,827,181 | Have graduations on the axes, and not [or not only] on the left and the bottom of the box where my curve is drawn | <p>Using <em>pandas</em> module, I'm drawing a curve on a <em>Jupyter</em> notebook:</p>
<p><a href="https://i.sstatic.net/przn6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/przn6.png" alt="enter image description here" /></a></p>
<p>For drawing this, I wrote this code:</p>
<pre class="lang-py prettyprint-override"><code>import pylab as pl
from numpy import *

pl.style.use('bmh')

# Df = ℝ - {-1, 1}
Df1 = [-4 + k/10 for k in range(28)]    # [-4, -1[ in steps of 0.1
Df2 = [-0.8 + k/10 for k in range(18)]  # ]-1, 1] in steps of 0.1
Df3 = [1.1 + k/10 for k in range(30)]   # ]1, 4] in steps of 0.1
Df = [Df1, Df2, Df3]

for D in Df:
    Y = [(-(x ** 2) + 2 * x) / (x ** 2 - 1) for x in D]  # For each x ∈ D, compute y = f(x)
    pl.plot(D, Y, color='red')

pl.legend([r'$\mathscr{C}_f$'], fontsize=18)  # This call matters: without it, the legend is not displayed

# x-axis: from -4 to 4, y-axis: from -6 to 6
pl.axis(xmin=-4, xmax=4, ymin=-6, ymax=6)
pl.axvline(x=0)
pl.axhline(y=0)
</code></pre>
<p>I would like to see graduations on the axes themselves,<br />
instead of [or also added to] the left and the bottom of the box where they are currently.</p>
<p>Is it possible?</p>
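<p>One way to do this in Matplotlib (a sketch, not tied to the code above) is to move the left and bottom spines to the origin with <code>set_position('zero')</code>, so the tick graduations sit on the axes themselves:</p>

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([-2, -1.5, -0.5, 0.5, 1.5, 2], [4, 2.25, 0.25, 0.25, 2.25, 4], color='red')

ax.spines['left'].set_position('zero')    # y-axis spine at x = 0
ax.spines['bottom'].set_position('zero')  # x-axis spine at y = 0
ax.spines['right'].set_visible(False)     # hide the frame on the other sides
ax.spines['top'].set_visible(False)

# keep the tick marks attached to the (now centred) spines
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')

fig.savefig('centered_axes.png')
```

<p>If you want the box <em>and</em> the centred graduations, skip the two <code>set_visible(False)</code> calls; with <code>pylab</code> the same calls are reachable through <code>pl.gca()</code>.</p>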
| <python><matplotlib><jupyter-notebook> | 2023-08-28 04:01:47 | 1 | 3,561 | Marc Le Bihan |
76,989,950 | 3,099,733 | How to load cells from other notebooks in Jupyter? | <p>Suppose there is a Jupyter notebook file named <code>template.ipynb</code> in the current directory. What I want to do is to load its cells into the current notebook. It's different from the <code>%run</code> magic command, which executes the code in the notebook file and shows the output, or the <code>%load</code> magic command, which loads the code into the current cell. I want to load the cells into the current notebook, so that I can edit the code and execute it later. Is there any way to do this?</p>
<p>The reason I look for such capabilities is to allow user to load some preset template into their current notebook by just running a command, for example <code>my_package.load_template('opencv')</code> and then cells from the preset opencv template will be created in the current notebook.</p>
<p>Is there any magic command, for example, named <code>%load_nb</code>, which works like <code>%load</code>, but instead of loading code into the current cell, <code>%load_nb</code> should load another notebook from local file system or internet into the current notebook.</p>
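<p>As far as I know there is no built-in <code>%load_nb</code> magic, but a sketch of <code>my_package.load_template</code> could parse the <code>.ipynb</code> JSON itself and queue each code cell with IPython's <code>set_next_input()</code> payload. How the frontend renders repeated calls varies, so treat the multi-cell behaviour as an untested assumption:</p>

```python
import json

def code_cells(nb_path):
    """Return the source of each code cell in an .ipynb file (nbformat 4 JSON)."""
    with open(nb_path) as f:
        nb = json.load(f)
    return ["".join(cell["source"]) for cell in nb["cells"]
            if cell["cell_type"] == "code"]

def load_template(nb_path):
    """Queue the template's code cells as new input cells in a running notebook."""
    from IPython import get_ipython  # only available inside Jupyter/IPython
    ip = get_ipython()
    for src in code_cells(nb_path):
        ip.set_next_input(src, replace=False)
```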
| <python><jupyter-notebook> | 2023-08-28 03:54:31 | 0 | 1,959 | link89 |
76,989,885 | 8,754,958 | ChatGPT API Custom-trained AI Chatbot answering "None" to Python Query | <p>I'm connecting to my first chatbot. Based on the process outlined here:
<a href="https://beebom.com/how-train-ai-chatbot-custom-knowledge-base-chatgpt-api/" rel="nofollow noreferrer">https://beebom.com/how-train-ai-chatbot-custom-knowledge-base-chatgpt-api/</a></p>
<p>I created the code he suggested to get ChatGPT to analyze my PDF. The code was a bit outdated though, and I had to make some adjustments. This is what I have now:</p>
<pre><code>from llama_index import *
from langchain.chat_models import ChatOpenAI
import gradio as gr
import sys
import os
import openai

os.environ["OPENAI_API_KEY"] = 'XXXX'
openai.api_key = "XXXX"

documents = ""
service_context = ""

def construct_index(directory_path):
    max_input_size = 4096
    num_outputs = 512
    max_chunk_overlap = 20
    chunk_size_limit = 600

    prompt_helper = PromptHelper(max_input_size, num_outputs, chunk_overlap_ratio=0.1, chunk_size_limit=chunk_size_limit)
    llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.7, model_name="gpt-3.5-turbo", max_tokens=num_outputs))

    documents = SimpleDirectoryReader(directory_path).load_data()
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
    index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

    # apparently this saves it to disk?
    index.storage_context.persist(persist_dir='docs')
    storage_context = StorageContext.from_defaults(persist_dir='docs')
    index = load_index_from_storage(storage_context)
    return index

def chatbot(input_text):
    index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
    index.storage_context.persist(persist_dir='docs')
    storage_context = StorageContext.from_defaults(persist_dir='docs')
    index = load_index_from_storage(storage_context)
    # tried this method as well with no success instead of above
    #index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

    query_engine = index.as_query_engine()
    response = query_engine.query(input_text)
    # am I returning the correct object here? I believe its supposed to be JSON?
    return response

iface = gr.Interface(fn=chatbot,
                     inputs=gr.components.Textbox(lines=7, label="Enter your text"),
                     outputs="text",
                     title="Custom-trained AI Chatbot")

index = construct_index("docs")
iface.launch(share=True)
</code></pre>
<p>When I run the program, there is no error, and it says it's running on my IP. When I get to the chatbot, everything looks OK until I ask a question. Then it just keeps saying "None".</p>
<p><a href="https://i.sstatic.net/nVxod.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nVxod.png" alt="enter image description here" /></a></p>
<p>There are no errors or warnings in the console, and the program keeps running; it just keeps saying "None" whenever I query it. Where am I going wrong? I don't 100% understand the code, by the way: it is a heavy modification of the original example to get all the libraries working. If someone could explain simply what is happening, it would be appreciated. Thanks, G</p>
| <python><openai-api><chatgpt-api><llama-index> | 2023-08-28 03:29:30 | 1 | 805 | Geoff L |
76,989,743 | 8,214,951 | RabbitMQ Python Producer and Consumer Microservice | <p>I've found numerous examples online of both Python RabbitMQ Consumers and Producers (as separate scripts). Is there a way to create a single Python Microservice that is able to act as both a consumer and producer?</p>
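<p>Yes: a single process can open one connection and channel, register a consumer callback, and publish from inside that callback. A hedged sketch with <code>pika</code> (the queue names and the <code>transform</code> logic are illustrative; the import is deferred so the processing function can be unit-tested without a broker):</p>

```python
def transform(body: bytes) -> bytes:
    """Placeholder business logic: replace with your own processing."""
    return body.upper()

def run_service():
    # pika is imported here so transform() stays testable without a broker;
    # 'work' and 'results' are assumed queue names, not a fixed convention.
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='work')
    channel.queue_declare(queue='results')

    def on_message(ch, method, properties, body):
        # Consumer half: receive a message ...
        result = transform(body)
        # ... producer half: publish the outcome to another queue
        ch.basic_publish(exchange='', routing_key='results', body=result)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue='work', on_message_callback=on_message)
    channel.start_consuming()  # blocks; call run_service() to start the loop
```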
| <python><architecture><rabbitmq><microservices> | 2023-08-28 02:33:20 | 1 | 430 | flying_loaf_3 |
76,989,457 | 8,121,824 | Iterate over dataframe and create new column | <p>I need to create a new column that takes the result from another column and gets the index value from a list. I've tried using itertuples which rather than gets the correct index, always gets the last index. I've also tried creating a column but it says the truth value of a series is ambiguous. The datasets could be large so efficiency is important.</p>
<pre><code>import pandas as pd

Year = [2020, 2021, 2022]
df = pd.DataFrame({'Year': [2020, 2021, 2021, 2022], 'Sales': [100000, 101000, 103000, 112000]})
</code></pre>
<p>This ends up setting the index for all years to 2, rather than giving each row its respective index:</p>
<pre><code>for row in df.itertuples():
    value = getattr(row, 'Year')
    df['Index_value'] = Year.index(value)
print(df)
</code></pre>
<p>This returns an error that the truth value of a Series is ambiguous:</p>
<pre><code>df['Index_value'] = Year.index(df['Year'])
print(df)
</code></pre>
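<p>A vectorized sketch of what the loop is trying to do. It sidesteps both problems: assigning <code>df['Index_value']</code> inside the loop overwrites the whole column on every iteration, and <code>list.index()</code> cannot take a whole Series:</p>

```python
import pandas as pd

Year = [2020, 2021, 2022]
df = pd.DataFrame({'Year': [2020, 2021, 2021, 2022],
                   'Sales': [100000, 101000, 103000, 112000]})

# Build a year -> position lookup once, then map it over the whole column
index_of = {year: i for i, year in enumerate(Year)}
df['Index_value'] = df['Year'].map(index_of)
print(df)
```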
| <python><pandas><dataframe> | 2023-08-28 00:27:51 | 4 | 904 | Shawn Schreier |
76,989,440 | 16,220,410 | relegate frozenset values to each set values | <p>I'm a beginner in python, from the code below I found on twitter</p>
<p>how would you relegate the frozenset values to each of the set value to get the desired output? shoud the output not be a frozenset?</p>
<pre><code>admin_permissions = frozenset(['view', 'edit', 'delete', 'add'])
editor_permissions = frozenset(['view', 'edit', 'add', 'deny'])
viewer_permissions = frozenset(['view'])

admins = {'Alice', 'Bob'}
editors = {'Bob', 'Charlie', 'Dave'}
viewers = {'Eve', 'Frank', 'Alice'}

user_permissions = {}
for user in admins:
    user_permissions[user] = admin_permissions
for user in editors:
    user_permissions.setdefault(user, frozenset()).union(editor_permissions)
for user in viewers:
    user_permissions.setdefault(user, frozenset()).union(viewer_permissions)

print(user_permissions)
</code></pre>
<p>output is</p>
<pre class="lang-py prettyprint-override"><code>{'Bob': frozenset({'edit', 'add', 'delete', 'view'}),
'Alice': frozenset({'edit', 'add', 'delete', 'view'}),
'Dave': frozenset(),
'Charlie': frozenset(),
'Frank': frozenset(),
'Eve': frozenset()
}
</code></pre>
<p><strong>desired output</strong></p>
<pre class="lang-py prettyprint-override"><code>{'Bob': frozenset({'edit', 'add', 'delete', 'view', 'deny'}),
'Alice': frozenset({'edit', 'add', 'delete', 'view'}),
'Dave': frozenset({'edit', 'add', 'view', 'deny'}),
'Charlie': frozenset({'edit', 'add', 'view', 'deny'}),
'Frank': frozenset({'view'}),
'Eve': frozenset({'view'})
}
</code></pre>
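<p>A sketch of the likely fix: <code>frozenset.union()</code> returns a <em>new</em> frozenset and leaves the original untouched, so the result has to be assigned back into the dict. Each value then remains a frozenset, which answers the second question:</p>

```python
admin_permissions = frozenset(['view', 'edit', 'delete', 'add'])
editor_permissions = frozenset(['view', 'edit', 'add', 'deny'])
viewer_permissions = frozenset(['view'])

admins = {'Alice', 'Bob'}
editors = {'Bob', 'Charlie', 'Dave'}
viewers = {'Eve', 'Frank', 'Alice'}

user_permissions = {}
for group, perms in [(admins, admin_permissions),
                     (editors, editor_permissions),
                     (viewers, viewer_permissions)]:
    for user in group:
        # | builds the union and the assignment stores it back;
        # frozensets are immutable, so there is no in-place update
        user_permissions[user] = user_permissions.get(user, frozenset()) | perms

print(user_permissions)
```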
| <python><frozenset> | 2023-08-28 00:20:12 | 0 | 1,277 | k1dr0ck |
76,989,290 | 11,163,122 | How to fool issubclass checks with a MagicMock? | <p>I have something like this:</p>
<pre class="lang-py prettyprint-override"><code>from unittest.mock import MagicMock

class A:
    pass

class B(A):
    pass

mock_B = MagicMock(spec_set=B)
assert issubclass(mock_B, A)  # TypeError: issubclass() arg 1 must be a class
</code></pre>
<p>How can I get this to pass?</p>
<p><a href="https://stackoverflow.com/questions/11146725/isinstance-and-mocking">isinstance and Mocking</a> had a lot of stuff various answers, but from there I can't figure it out.</p>
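<p>One workaround (a sketch, since <code>issubclass()</code> simply requires a real class as its first argument): instead of mocking the class object itself, define a real subclass whose attributes are mocks, much like <code>patch.object</code> does when it swaps a method for a <code>MagicMock</code>:</p>

```python
from unittest.mock import MagicMock

class A:
    pass

class B(A):
    def method(self):
        return "real"

# MockB is a genuine class, so issubclass() accepts it,
# while its behaviour is still controlled by mocks.
class MockB(B):
    method = MagicMock(return_value="mocked")

assert issubclass(MockB, A)
assert MockB().method() == "mocked"
```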
| <python><unit-testing><subclass><python-unittest.mock> | 2023-08-27 23:16:58 | 2 | 2,961 | Intrastellar Explorer |
76,989,227 | 436,721 | HOSTNAME env-var returns empty during Docker pod startup | <p>When I SSH onto a <strong>running</strong> Docker container in a Kubernetes cluster and run <code>os.getenv("HOSTNAME")</code> from within a python interpreter, I am able to see the name of the <strong>deployment</strong> being used.</p>
<p>But if I try and run <code>os.getenv("HOSTNAME")</code> in a script that gets run from the <code>Dockerfile</code>, the env-var is <code>null</code>.</p>
<p>Is this expected? Is there some workaround here?</p>
<p><strong>UPDATE</strong>: I tried to get the contents from <code>/etc/hostname</code> instead and to my surprise I got <code>debuerreotype</code>. After some googling I saw that that is the base Debian image in use and apparently <a href="https://github.com/debuerreotype/debuerreotype/blob/master/docker-run.sh#L50" rel="nofollow noreferrer">it passes that name as the hostname</a></p>
<ul>
<li>Opened an <a href="https://github.com/debuerreotype/debuerreotype/issues/160" rel="nofollow noreferrer">Issue</a> with them in the meantime</li>
<li>(I still don't understand why I get the correct value when I SSH into the container though)</li>
</ul>
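<p>For what it's worth, one common explanation is that <code>HOSTNAME</code> is a variable set by an interactive shell (e.g. <code>bash</code>) rather than something guaranteed to be in the environment of a process started directly, which would fit seeing it over SSH but not from the <code>Dockerfile</code>-launched script. A sketch of a fallback that asks the kernel instead of the environment:</p>

```python
import os
import socket

# HOSTNAME may be absent from the environment of a directly started process,
# but gethostname() reads the kernel's hostname, which the container runtime
# sets, so it works in both the SSH shell and the startup script.
hostname = os.getenv("HOSTNAME") or socket.gethostname()
print(hostname)
```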
| <python><docker><kubernetes><environment-variables><debian> | 2023-08-27 22:52:22 | 1 | 11,937 | Felipe |
76,989,170 | 10,771,559 | Change column value depending on how many times value appears in other column | <p>I have a dataframe that looks like this:</p>
<pre><code>Container  Event
A          Clean
B          Dry
A          Clean
A          Dry
B          Clean
C          Clean
C          Clean
C          Clean
</code></pre>
<p>I want to introduce a new column called 'Temperature', which has the value 4 the first time a container has the event 'Clean' and value 3 for all subsequent 'Clean' events. Dry would always have value 1. The dataframe should look like this:</p>
<pre><code>Container  Event  Temperature
A          Clean  4
B          Dry    1
A          Clean  3
A          Dry    1
B          Clean  4
C          Clean  4
C          Clean  3
C          Clean  3
</code></pre>
<p>Reproducible dataframe:</p>
<pre><code>d = {'Container': ['A', 'B', 'A', 'A', 'B', 'C', 'C', 'C'],
     'Event': ['Clean', 'Dry', 'Clean', 'Dry', 'Clean', 'Clean', 'Clean', 'Clean']}
df = pd.DataFrame(data=d)
</code></pre>
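<p>One possible vectorized sketch: mark the first "Clean" per container with a grouped cumulative count, then assign the three temperature values from the resulting masks:</p>

```python
import pandas as pd

d = {'Container': ['A', 'B', 'A', 'A', 'B', 'C', 'C', 'C'],
     'Event': ['Clean', 'Dry', 'Clean', 'Dry', 'Clean', 'Clean', 'Clean', 'Clean']}
df = pd.DataFrame(data=d)

clean = df['Event'].eq('Clean')
# cumulative count of Clean events within each container; 1 marks the first
first_clean = clean & clean.groupby(df['Container']).cumsum().eq(1)

df['Temperature'] = 1                    # default: Dry
df.loc[clean, 'Temperature'] = 3         # subsequent Clean events
df.loc[first_clean, 'Temperature'] = 4   # first Clean per container
print(df)
```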
| <python><pandas> | 2023-08-27 22:22:18 | 3 | 578 | Niam45 |
76,989,100 | 5,394,072 | Pandas: (By groups based on 1 column) How to both forward fill and backward fill a column based on the values in another column | <p>I Was looking to do the following by groups of another column (category column), sorted by date.</p>
<ol>
<li>Forward fill the value in the "value" column if the "value_stage" column has a "Start" value in the same row.</li>
<li>Then, after doing the above, backward fill the value in the "value" column if the "value_stage" column has an "End" value in the same row.</li>
</ol>
<p>Please find an example below.</p>
<p>This is my data</p>
<pre><code>pd.DataFrame([['2022-01-01','A','',''],['2022-01-02','A','3','End'],['2022-01-03','A','4','Start'],['2022-01-04','A','',''],['2022-01-05','A','2','Start'],['2022-01-06','A','',''],['2022-01-07','A','',''],
['2022-01-01','B','3','End'],['2022-01-02','B','',''],['2022-01-03','B','1','Start'],['2022-01-04','B','',''],['2022-01-05','B','',''],['2022-01-06','B','2','end'],['2022-01-07','B','',''],
['2022-01-01','C','',''],['2022-01-02','C','3','End'],['2022-01-03','C','',''],['2022-01-04','C','1','End'],], columns = ['date', 'category', 'value','value_stage'])
</code></pre>
<p>This is how the output looks like</p>
<pre><code> pd.DataFrame([['2022-01-01','A','3',''],['2022-01-02','A','3','End'],['2022-01-03','A','4','Start'],['2022-01-04','A','4',''],['2022-01-05','A','2','Start'],['2022-01-06','A','2',''],['2022-01-07','A','2',''],
['2022-01-01','B','3','End'],['2022-01-02','B','',''],['2022-01-03','B','1','Start'],['2022-01-04','B','1',''],['2022-01-05','B','1',''],['2022-01-06','B','2','end'],['2022-01-07','B','',''],
['2022-01-01','C','3','End'],['2022-01-02','C','3','End'],['2022-01-03','C','1','End'],['2022-01-04','C','1','End'],], columns = ['date', 'category', 'value','value_stage'])
</code></pre>
<p>In the output, category 'A' has the value 4 forward filled on Jan 4 because Jan 3's "value_stage" = "Start", and also has value 3 backfilled on Jan 1 since Jan 2's "value_stage" = "End" (the same applies to Jan 6 and Jan 7, forward filled from Jan 5 for category 'A').
Category 'B' has the value 1 forward filled on Jan 4 and Jan 5, since Jan 3's "value_stage" = "Start". Rows that are not covered by a "Start" forward fill or an "End" backward fill (e.g. category 'B' on Jan 2 and Jan 7) are left empty.</p>
<p>Update - Please note: a category can have multiple "Start" markers (as in category A above) and multiple "End" markers, each associated with a different value. (To illustrate this, I added 3 more days, Jan 5 to Jan 7, to categories A and B.)</p>
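<p>A possible sketch (with some assumptions about the intent: empty strings are treated as missing, a "Start" marker fills forward, an "End" marker fills backward, stage names are matched case-insensitively, and neither fill crosses the next marker within a category):</p>

```python
import pandas as pd

df = pd.DataFrame(
    [['2022-01-01', 'A', '', ''], ['2022-01-02', 'A', '3', 'End'],
     ['2022-01-03', 'A', '4', 'Start'], ['2022-01-04', 'A', '', ''],
     ['2022-01-05', 'A', '2', 'Start'], ['2022-01-06', 'A', '', ''],
     ['2022-01-07', 'A', '', ''],
     ['2022-01-01', 'B', '3', 'End'], ['2022-01-02', 'B', '', ''],
     ['2022-01-03', 'B', '1', 'Start'], ['2022-01-04', 'B', '', ''],
     ['2022-01-05', 'B', '', ''], ['2022-01-06', 'B', '2', 'end'],
     ['2022-01-07', 'B', '', ''],
     ['2022-01-01', 'C', '', ''], ['2022-01-02', 'C', '3', 'End'],
     ['2022-01-03', 'C', '', ''], ['2022-01-04', 'C', '1', 'End']],
    columns=['date', 'category', 'value', 'value_stage'])

value = df['value'].where(df['value'].ne(''))                       # '' -> NaN
stage = df['value_stage'].str.lower().where(df['value_stage'].ne(''))
grp = df['category']

# Carry the nearest marker (its value and its kind) forward and backward
fwd_val, fwd_stage = value.groupby(grp).ffill(), stage.groupby(grp).ffill()
bwd_val, bwd_stage = value.groupby(grp).bfill(), stage.groupby(grp).bfill()

# Forward fill only stretches whose last marker was a Start,
# then backward fill only where the next marker is an End
filled = value.fillna(fwd_val.where(fwd_stage.eq('start')))
filled = filled.fillna(bwd_val.where(bwd_stage.eq('end')))
df['value'] = filled.fillna('')
print(df)
```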
| <python><pandas><dataframe> | 2023-08-27 21:53:31 | 2 | 738 | tjt |
76,988,933 | 14,252,319 | Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d | <p>I am working on an OpenAI API application using Chainlit for a website but when I run my command</p>
<p><code>chainlit run main.py</code></p>
<p>The server runs but there are errors or some warnings like this</p>
<blockquote>
<p>E0828 02:21:10.694000000 20136 src/core/tsi/ssl_transport_security.cc:1446] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED.</p>
</blockquote>
<p>I gather it is caused by some error in the connection between the server and the client, but I did not understand how to fix it.</p>
<p>I have tried searching about this but didn't find any clue.
This is the code block:</p>
<pre><code>import chainlit as cl
import openai
import os

os.environ['openAi_API_KEY'] = 'API-kEY'
openai.api_key = 'API_KEY'

# pass the message into chatgpt api, send() the answer
# return everything that the user inputs
@cl.on_message
async def main(message: str):
    response = openai.ChatCompletion.create(
        model='gpt-3.5-turbo',
        messages=[
            {"role": "assistant", "content": "you are a form taker assistant, you ask questions to the user their name,email and phone number"},
            {"role": "user", "content": message}
        ],
        temprature=1
    )
    await cl.Message(content=str(response)).send()
</code></pre>
| <python><machine-learning><deep-learning><openai-api> | 2023-08-27 20:59:48 | 0 | 336 | Puranjay Kwatra |
76,988,796 | 16,420,204 | Python Polars: Number of rows since last value >0 | <p>Given a polars <code>DataFrame</code> column like</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({"a": [0, 29, 28, 4, 0, 0, 13, 0]})
</code></pre>
<p>how to get a new column like</p>
<pre><code>shape: (8, 2)
┌─────┬──────┐
│ a ┆ dist │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═════╪══════╡
│ 0 ┆ 1 │
│ 29 ┆ 0 │
│ 28 ┆ 0 │
│ 4 ┆ 0 │
│ 0 ┆ 1 │
│ 0 ┆ 2 │
│ 13 ┆ 0 │
│ 0 ┆ 1 │
└─────┴──────┘
</code></pre>
<p>The solution should preferably work with <code>.over()</code> for grouped values and optionally an additional rolling window function like <code>rolling_mean()</code>.</p>
<p>I know of the respective <a href="https://stackoverflow.com/questions/56942937/pandas-row-number-since-last-greater-than-0-value">question</a> for pandas but couldn't manage to translate it.</p>
| <python><dataframe><python-polars> | 2023-08-27 20:19:06 | 2 | 1,029 | OliverHennhoefer |
76,988,689 | 687,827 | PyQt5 - QGridLayout inside QFormLayout | <p>I have a form I am working on where all of the initial elements populate correctly but I am now trying to add a gridbox that will eventually populate a list of patterns. As I test I have done the following:</p>
<pre><code>def show_game_types(self):
    self.game_types = loadJSONFromFile(game_types_file)
    self.setStyleSheet("")

    game_types_page = QVBoxLayout()
    game_types_page.setContentsMargins(30, 30, 30, 10)
    game_types_page.setSpacing(0)

    # creating a group box
    self.gameTypesFormBox = QGroupBox("Game Types")
    self.gameTypesFormBox.setStyleSheet("font-size: 14px; font-weight: bold;")
    regular_font = "font-size: 11px; font-weight: normal;"
    titles_style = "font-size: 12px; font-weight: bold;"

    self.gt_layout = QFormLayout()

    create_new_gt_button = QPushButton("Create New")
    create_new_gt_button.setStyleSheet(regular_font)
    create_new_gt_button.clicked.connect(self.save_new_game_type)

    self.create_new_gt_textbox = QLineEdit()
    self.create_new_gt_textbox.setMinimumWidth(600)
    self.create_new_gt_textbox.setStyleSheet(regular_font)

    add_patterns_label = QLabel("Modify game type patterns")
    add_patterns_label.setStyleSheet(titles_style)

    self.this_gt_combo = QComboBox()
    self.this_gt_combo.setStyleSheet(regular_font)
    self.this_gt_combo.addItem("-- Select game type --")
    self.this_gt_combo.currentTextChanged.connect(self.load_gt_patterns)
    for type in range(len(self.game_types)):
        self.this_gt_combo.addItem(self.game_types[type]["name"])

    self.gt_layout.addRow(self.create_new_gt_textbox, create_new_gt_button)
    self.gt_layout.addRow(add_patterns_label)
    self.gt_layout.addRow(self.this_gt_combo)

    self.selected_label = QLabel("")
    self.gt_layout.addRow(self.selected_label)

    # for pattern in selected game patterns populate a grid
    # start with a blank for "new"
    self.grid_widget = QWidget()
    self.pattern_grid = QGridLayout()
    test = Color('red')
    self.pattern_grid.addWidget(test, 0, 0)
    test2 = Color('blue')
    self.pattern_grid.addWidget(test2, 0, 1)
    self.grid_widget.setLayout(self.pattern_grid)
    self.gt_layout.addRow(QLabel("test"), self.grid_widget)

    self.buttonBox = QDialogButtonBox(QDialogButtonBox.Cancel)
    self.buttonBox.rejected.connect(self.showHomePage)

    self.gameTypesFormBox.setLayout(self.gt_layout)
    game_types_page.addWidget(self.gameTypesFormBox)
    game_types_page.addWidget(self.buttonBox)

    widget = QWidget()
    widget.setLayout(game_types_page)
    self.setCentralWidget(widget)
</code></pre>
<p>For reference the Color class is</p>
<pre><code>class Color(QWidget):
    def __init__(self, color):
        super(Color, self).__init__()
        self.setAutoFillBackground(True)
        self.color = color

        palette = self.palette()
        palette.setColor(QPalette.Window, QColor(self.color))
        self.setPalette(palette)
</code></pre>
<p>When I run this, the label "test" shows up, but the grid widget added beside it does not appear:
<a href="https://i.sstatic.net/Giiyx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Giiyx.png" alt="missing grid in qformlayout" /></a></p>
<p>For testing I set the following and the gridlayout does populate the window correctly</p>
<pre><code>self.setCentralWidget(self.grid_widget)
</code></pre>
<p><a href="https://i.sstatic.net/Tt8Hl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tt8Hl.png" alt="gridLayout works" /></a></p>
<p>I am also able to get the gridlayout to show outside the QFormLayout but then it's not within the Form box and floats underneath it.</p>
<pre><code> game_types_page.addWidget(self.grid_widget)
</code></pre>
<p><a href="https://i.sstatic.net/PoImk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PoImk.png" alt="grid add to entire page" /></a></p>
<p>How do I add the QGridLayout to the QFormLayout?</p>
| <python><pyqt5> | 2023-08-27 19:45:23 | 0 | 484 | Scott Rowley |
76,988,589 | 20,200,927 | Creating Duplicate Subdirectories in Current Working Directory When Organizing Files Based on Categories | <p>I am currently working on a script aimed at organizing files based on their extensions, categories, and subcategories. The objective is to move these files to designated subdirectories within a target directory, following a well-defined category and subcategory structure. However, I have encountered a perplexing issue: the script is inadvertently generating a duplicated copy of my current working directory (cwd) within the existing cwd, rather than correctly forming the intended subdirectory structure within the designated target directory.</p>
<p>Here's a Minimal Reproducible Example (MRE) that illustrates the situation (please note that <code>files</code> is a <code>List</code> of strings):</p>
<pre class="lang-py prettyprint-override"><code>import os
import shutil
from typing import List

def get_extension(file_name: str) -> str:
    return "." + file_name.split(".")[-1].lower()

TEXT_ = (".txt", ".rtf", ".md")
TARGET_DIR_ = "TestDIR"

file_extensions = {
    TEXT_: "Text",
}

def establish_current_dir() -> List[str]:
    try:
        os.chdir(TARGET_DIR_)
        current_dir = os.getcwd()
        files = os.listdir(current_dir)
        return files
    except FileNotFoundError:
        print(f"{TARGET_DIR_} directory not found.")
        return []

def organize(files: List[str]) -> None:
    for file in files:
        extension = get_extension(file)
        for extensions, category in file_extensions.items():
            category_dir = category
            if extension in extensions:
                if extension in [".md"]:
                    subdir_name = "markdown"
                else:
                    subdir_name = "foobar"
                subcategory_dir = os.path.join(category_dir, subdir_name)
                if not os.path.exists(os.path.join(TARGET_DIR_, subcategory_dir)):
                    os.makedirs(subcategory_dir)
                shutil.move(file, os.path.join(TARGET_DIR_, subcategory_dir, os.path.basename(file)))

files = establish_current_dir()
organize(files)
</code></pre>
<p>While I am new to the <code>os</code> library, I've reviewed documentation and deduced that the following code snippet is intended to create a new directory:</p>
<pre><code> if not os.path.exists(os.path.join(TARGET_DIR_, subcategory_dir)):
os.makedirs(os.path.join(TARGET_DIR_, subcategory_dir))
</code></pre>
<p>However, the actual outcome is that a copy (subdirectory) of my <code>cwd</code> is created in the location where files should be sorted. Functionally, the script sorts the files into categories as intended, but it places them in subdirectories rather than in the main target directory.</p>
<p>I suspect that my usage of <code>os.makedirs(os.path.join(TARGET_DIR_, subcategory_dir))</code> might be causing the unintended duplication, essentially resulting in <code>TestDIR/TestDIR/Text</code>. I experimented with just <code>os.makedirs(os.path.join(subcategory_dir))</code>, but this modification did not resolve the issue.</p>
<p>I would greatly appreciate insights into what might be causing this behavior and suggestions for rectifying the problem. Thank you for your assistance!</p>
<p><strong>edit:</strong></p>
<p>I am using the <code>.chdir()</code> function from the <code>os</code> library to change the current directory to my <code>TARGET_DIR</code> and then grabbing all the files and storing them in a <code>List</code> of strings.</p>
<pre class="lang-py prettyprint-override"><code>def establish_current_dir() -> List[str]:
try:
os.chdir(TARGET_DIR_)
current_dir = os.getcwd()
files = os.listdir(current_dir)
return files
except FileNotFoundError:
print(f"{TARGET_DIR_} directory not found.")
return []
</code></pre>
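<p>A small, self-contained sketch (using a temporary directory instead of the real <code>TestDIR</code>; all names here are illustrative) shows why mixing <code>os.chdir(TARGET_DIR_)</code> with paths that prefix <code>TARGET_DIR_</code> again produces a nested <code>TestDIR/TestDIR</code>:</p>

```python
import os
import tempfile

old_cwd = os.getcwd()
with tempfile.TemporaryDirectory() as root:
    target = os.path.join(root, "TestDIR")
    os.makedirs(target)
    os.chdir(target)

    # After chdir, relative paths resolve against TestDIR itself, so
    # prefixing "TestDIR" again nests a second copy inside the first.
    os.makedirs(os.path.join("TestDIR", "Text", "markdown"))

    duplicated = os.path.isdir(os.path.join(target, "TestDIR", "Text", "markdown"))
    os.chdir(old_cwd)  # leave before the temporary directory is removed

print(duplicated)  # True: the unwanted nested TestDIR/... was created
```

<p>Under this reading, the fix is to pick one convention: either stay in the parent directory and always join <code>TARGET_DIR_</code> onto every path, or <code>chdir</code> into it once and drop the <code>TARGET_DIR_</code> prefix from <code>makedirs</code> and <code>shutil.move</code> alike.</p>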
| <python> | 2023-08-27 19:17:17 | 1 | 320 | i_hate_F_sharp |
76,988,440 | 1,850,007 | tKinter TreeView does not work with self.after when trying to make a row flash | <p>Below is my implementation of a treeview class which allows you to edit a cell upon double click. Upon an edit, I want the edited row to flash yellow for a fixed time. Here is my code:</p>
<pre><code>import tkinter as tk
from tkinter import ttk, filedialog
import pandas as pd


class EditableTreeView(ttk.Treeview):
    def __init__(self, application, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.bind('<Double-1>', self.on_double_click)
        self.application = application
        self.edit_stack = []
        self.tag_configure("flash_yellow", background="yellow")

    def on_double_click(self, event):
        region_clicked = self.identify_region(event.x, event.y)
        if region_clicked != "cell":
            return
        column = self.identify_column(event.x)
        selected_iid = self.focus()
        selected_value = self.item(selected_iid)
        self.get_children()
        column_index = int(column[1:]) - 1
        if column == "#0":
            return
        else:
            selected_text = selected_value.get("values")[column_index]
        column_box = self.bbox(selected_iid, column)

        entry_edit = ttk.Entry(self)
        entry_edit.editing_column_index = column_index
        entry_edit.editing_item_iid = selected_iid
        entry_edit.insert(0, selected_text)
        entry_edit.selection_range(0, tk.END)
        entry_edit.focus()
        entry_edit.place(x=column_box[0], y=column_box[1], w=column_box[2], h=column_box[3])
        entry_edit.bind("<FocusOut>", self.on_entry_confirm)
        entry_edit.bind("<Return>", self.on_entry_confirm)
        entry_edit.bind("<Escape>", self.on_escape)

    def flash_yellow(self, iid):
        #self.item(selected_iid, tags=("flash_yellow"))
        self.after(100, self.item(iid, tags=("flash_yellow",)))
        self.after(500, self.item(iid, tags=()))

    @staticmethod
    def on_escape(event):
        event.widget.destroy()

    def on_entry_confirm(self, event):
        new_text = event.widget.get()
        selected_iid = event.widget.editing_item_iid
        column_index = event.widget.editing_column_index
        current_values = self.item(selected_iid).get("values")
        self.edit_stack.append((selected_iid, current_values))
        current_values[column_index] = new_text
        self.item(selected_iid, values=current_values)

        # This adds a new row if last row is non-empty
        if selected_iid == self.get_children()[-1] and current_values != [""] * len(current_values):
            self.insert("", "end", values=("",) * len(current_values))

        self.flash_yellow(selected_iid)
        event.widget.destroy()
</code></pre>
<p>Note that if I replace the lines inside the <code>flash_yellow</code> function with <code>self.item(selected_iid, tags=("flash_yellow"))</code>, this works as expected. I do not quite understand how I am using <code>after</code> wrong. I also tried <code>self.after(100, lambda: self.item(iid, tags=("flash_yellow",)))</code>.</p>
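<p>The likely issue is that <code>self.after(100, self.item(iid, tags=("flash_yellow",)))</code> calls <code>self.item(...)</code> immediately and schedules its <em>return value</em>, not the call itself; <code>after</code> needs a callable. A widget-free sketch (with a hypothetical <code>schedule</code> standing in for <code>after</code>) shows the difference, using <code>functools.partial</code> as one way to defer the call:</p>

```python
from functools import partial

scheduled = []

def schedule(delay_ms, callback):
    # Stand-in for widget.after(delay_ms, callback): store for later.
    scheduled.append(callback)

log = []

def set_tags(iid, tags):
    log.append((iid, tags))

# Buggy pattern: set_tags runs *now*; its return value (None) is scheduled.
schedule(100, set_tags("row1", ("flash_yellow",)))

# Deferred pattern: a callable runs only when the scheduler fires it.
schedule(100, partial(set_tags, "row1", ("flash_yellow",)))
schedule(500, partial(set_tags, "row1", ()))

print(log)  # [('row1', ('flash_yellow',))] -- the buggy call already ran
for cb in scheduled:
    if callable(cb):
        cb()
print(log[-2:])  # the two deferred calls run only here
```

<p>In the Treeview this would mean <code>self.after(100, lambda: self.item(iid, tags=("flash_yellow",)))</code> and likewise for the 500 ms call; if only the first call is wrapped, the second still clears the tag immediately.</p>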
| <python><tkinter> | 2023-08-27 18:31:41 | 1 | 1,062 | Lost1 |
76,988,332 | 6,694,404 | Simultaneous Inverse Matrix Computations in TensorFlow for Orthogonal Matching Pursuit | <p>I am currently working on creating a version of Orthogonal Matching Pursuit (OMP) that can be performed simultaneously on different patches, utilizing the power of TensorFlow for optimization through static graph compilation.</p>
<p>I have provided the pseudocode for a single step of the main loop of the algorithm below:</p>
<pre><code>
support = tf.TensorArray(dtype=tf.int64, size=1, dynamic_size=True)
# For each element in patches, compute the projection on the dictionary using only TensorFlow API
dot_products = tf.abs(tf.matmul(A, patches, transpose_a=True))
max_index = tf.argmax(dot_products, axis=1)
support = support.write(i, max_index + 1)
support = support.write(i + 1, max_index + 1)
idx = support.stack()
non_zero_rows = tf.reduce_all(tf.not_equal(idx, 0), axis=1)
idx = tf.boolean_mask(idx, non_zero_rows) - 1
A_tilde = tf.gather(A, idx, axis=1)
m, n = A.shape
selected_atoms = tf.matmul(A_tilde, A_tilde, transpose_a=True)
</code></pre>
<p>After obtaining selected_atoms, which is a 3D tensor consisting of l matrices of size nxm, I need to solve the least-squares problem
<code>|patches - selected_atoms * x_coeff| ** 2</code>.
To do this, I need to compute the inverse of <code>selected_atoms.T @ selected_atoms</code>. Is there a method in TensorFlow's API to simultaneously compute the l inverse matrices and store them in a 3D tensor of shape lxnxm? I would greatly appreciate any guidance.</p>
<p>Thank you.</p>
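<p>For what it's worth, <code>tf.linalg.inv</code> already operates on the innermost two dimensions of a stacked tensor, so a single call inverts all <code>l</code> matrices at once (and <code>tf.linalg.lstsq</code> can solve the least-squares step directly). Since TensorFlow may not be available here, the sketch below illustrates the same batched semantics with NumPy's <code>np.linalg.inv</code>; the shapes are toy values:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
l, n = 4, 3

# Build l symmetric positive-definite Gram matrices (A_tilde^T A_tilde),
# shifted by n*I so every one of them is safely invertible.
a = rng.standard_normal((l, n, n))
gram = a @ a.transpose(0, 2, 1) + n * np.eye(n)

# One call inverts all l matrices along the last two axes.
inv = np.linalg.inv(gram)  # shape (l, n, n)

residual = np.abs(gram @ inv - np.eye(n)).max()
print(inv.shape, residual < 1e-8)  # (4, 3, 3) True
```

<p>In TensorFlow this would read <code>tf.linalg.inv(selected_atoms)</code> on the <code>(l, n, n)</code> stack, though it is often numerically preferable to avoid the explicit inverse and use a batched solve such as <code>tf.linalg.lstsq</code> instead.</p>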
| <python><tensorflow><linear-algebra><matrix-inverse> | 2023-08-27 17:56:20 | 1 | 681 | P.Carlino |
76,988,179 | 16,383,578 | How to colorize a grayscale image in Python OpenCV? | <p>So I have a grayscale image, it might have an alpha channel (transparency), it holds the intensity information but no color information.</p>
<p>Now I want to colorize it, meaning I have a single RGB color, any RGB color, and I want to combine the color and the image to produce a new monotone image. Without the grayscale image, the result would be an image with the same size as the grayscale image flood-filled with the given color. The grayscale image makes certain areas lighter or darker, so that the new image has discernible features.</p>
<p>I tried to do it but the result is wrong:</p>
<pre><code>import cv2
import numpy as np
def colorize(img, rgb):
grey = img[..., 0]
channels = [grey * (channel / 256) for channel in rgb[::-1]]
if img.shape[2] == 4:
channels.append(img[..., 3])
return np.dstack(channels).astype(np.uint8)
</code></pre>
<p>Original image:</p>
<p><a href="https://i.sstatic.net/5jFRo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5jFRo.png" alt="enter image description here" /></a></p>
<p>My original image has transparency and I loaded it with <code>cv2.IMREAD_UNCHANGED</code> flag.</p>
<p>I want to colorize it with (128, 192, 255), the expected result from GIMP 2 is:</p>
<p><a href="https://i.sstatic.net/UUY0Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UUY0Q.png" alt="enter image description here" /></a></p>
<p>But I got this instead, the picture is way too dark:</p>
<p><a href="https://i.sstatic.net/B95XZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B95XZ.png" alt="enter image description here" /></a></p>
<hr />
<p>I now realize to get the same effect as seen in GIMP 2 I need to create a flood filled image of the desired color and blend it with the source image with "hard light" as blending mode.</p>
<p>But semantically what I originally wanted to achieve is to replace the original image's hue component and saturation component of each pixel with the new color's HS in the HSV color model, and use the original images lightness as the new image's lightness to add shading.</p>
<p>What I have just described is the HSL color blending mode.</p>
<p>Google searching anything more than a few words provides nothing relevant, but searching keywords <code>"blend mode"</code> gives me <a href="https://en.wikipedia.org/wiki/Blend_modes" rel="nofollow noreferrer">Blend modes Wikipedia</a> and <a href="https://www.w3.org/TR/compositing-1" rel="nofollow noreferrer">Blend modes CSS specification</a>, I have also read <a href="https://docs.gimp.org/en/gimp-concepts-layer-modes.html" rel="nofollow noreferrer">GIMP 2 docs</a>.</p>
<p>There are no Python implementations listed, but that won't stop me. I am extremely intelligent, and the formulas don't look that complicated: I can implement all 38 of them in a few hours, and I will spend the rest of the day implementing them. I am also learning C++, so I will try to implement them in C++ as well.</p>
<p>However, I don't know whether my custom implementation will be the most efficient. There are likely library methods that do at least part of the job, and using them would increase efficiency, but I am unaware of those methods.</p>
<p>The <code>cv2</code> library provides 635 functions, and it would take a large amount of time for me to inspect all of them one by one.</p>
<p>I found them using the following:</p>
<pre><code>import cv2
from pathlib import Path
from typing import Callable
cv2_functions = [f'cv2.{k}' for k, v in cv2.__dict__.items() if isinstance(v, Callable)]
Path('D:/cv2_functions.txt').write_text('\n'.join(cv2_functions))
</code></pre>
<p>I didn't use <code>inspect</code> because:</p>
<pre><code>In [74]: import inspect
In [75]: inspect.isfunction(cv2.multiply)
Out[75]: False
In [76]: inspect.ismethod(cv2.multiply)
Out[76]: False
In [77]: inspect.iscode(cv2.multiply)
Out[77]: False
</code></pre>
<p>The existing answer implements the multiply blending mode, which has the same effect as my original code and fails to give the desired result.</p>
<p>New answers are required to implement hard light blend mode and HSL color blend mode, so I can know how to implement the blending modes more efficiently.</p>
<hr />
<p>I am not seeking software recommendations here. The scope of the question is limited to <code>numpy</code> and <code>cv2</code> only, I am implementing the blend modes in <code>numpy</code>, but the <code>cv2</code> module offers 423 methods, some of them might have at least already done part of the work here, and so using these methods can make code more efficient, but I don't know which methods I can use.</p>
<p>So I want examples of implementing the blend modes using methods provided by <code>cv2</code>.</p>
<p>As for my purpose, I am currently writing a Tic-Tac-Toe game with AI and GUI in Python with PyQt6.</p>
<p>I have already completed the artificial intelligence part, but I am not familiar with GUI programming and I am struggling to create the GUI.</p>
<p>I want the user to be fully able to customize the GUI to change the color and style of everything, so I need to find ways to colorize the pictures on the fly.</p>
<p>When the program is complete I will post the program on Code Review, and I will then try to re-implement the program in C++ in accordance with feedback.</p>
<p>I have already implemented most of the blend modes, and I am implementing the rest of them. Here is a selection; I won't post all of them so that the question won't be cluttered with code, and I will post a separate question about them because I don't know how to blend images with an alpha channel (the formulas say nothing about transparency):</p>
<pre><code>import numpy as np
import cv2


def scale_down(base: np.ndarray, top: np.ndarray) -> np.ndarray:
    return base / 255, top / 255


def scale_up(img: np.ndarray) -> np.ndarray:
    return (img * 255).astype(np.uint8)


def blend_overlay(base: np.ndarray, top: np.ndarray) -> np.ndarray:
    mask = base >= 0.5
    result = np.zeros_like(base)
    result[~mask] = (mult_2 := (2 * base * top))[~mask]
    result[mask] = (2 * base + 2 * top - mult_2 - 1)[mask]
    return result


def blend_hardlight(base: np.ndarray, top: np.ndarray) -> np.ndarray:
    return blend_overlay(top, base)


def blend_multiply(base: np.ndarray, top: np.ndarray) -> np.ndarray:
    return base * top


def blend_screen(base: np.ndarray, top: np.ndarray) -> np.ndarray:
    return base + top - base * top


def blend_softlight(base: np.ndarray, top: np.ndarray) -> np.ndarray:
    return (1 - 2 * top) * base**2 + 2 * base * top


def blend_colorburn(base: np.ndarray, top: np.ndarray) -> np.ndarray:
    result = np.zeros_like(base)
    result[ones := base == 1.0] = 1
    result[zeros := top == 0.0] = 0
    mask = (~ones == ~zeros) == (ones == 0)
    result[mask] = 1 - np.minimum(1, (1 - base[mask]) / top[mask])
    return result
</code></pre>
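<p>As a cross-check of the hard-light route, here is a <code>np.where</code>-based variant of the <code>blend_overlay</code>/<code>blend_hardlight</code> pair above (inputs assumed already scaled to [0, 1]). With the flood-filled colour as the base layer and the greyscale image on top, black stays black, white stays white, and mid-grey maps exactly to the fill colour, which is the colorize behaviour described earlier:</p>

```python
import numpy as np

def blend_hardlight(base: np.ndarray, top: np.ndarray) -> np.ndarray:
    # Hard light: multiply where the top layer is dark, screen where it is light.
    return np.where(top < 0.5,
                    2.0 * base * top,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - top))

grey = np.array([[0.0, 0.25, 0.5, 0.75, 1.0]])  # toy greyscale row
fill = np.full_like(grey, 192 / 255)            # one channel of (128, 192, 255)

out = blend_hardlight(fill, grey)
print(np.round(out, 3))  # endpoints preserved, mid-grey becomes the fill value
```

<p>Which layer counts as "top" is worth verifying against GIMP, since swapping the arguments changes the result; the formula itself matches <code>blend_overlay</code> with its arguments exchanged.</p>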
| <python><python-3.x><image><opencv><image-processing> | 2023-08-27 17:16:10 | 1 | 3,930 | Ξένη Γήινος |
76,988,177 | 4,348,400 | Why does `pip install elsie` fail? | <p>First I create a folder:</p>
<pre class="lang-bash prettyprint-override"><code>mkdir try_elsie
</code></pre>
<p>Go into folder:</p>
<pre class="lang-bash prettyprint-override"><code>cd try_elsie
</code></pre>
<p>Next I create a virtual environment:</p>
<pre class="lang-bash prettyprint-override"><code>python -m venv venv
</code></pre>
<p>Then I update pip:</p>
<pre class="lang-bash prettyprint-override"><code>pip install --upgrade pip
</code></pre>
<p>Then I try to install Elsie:</p>
<pre class="lang-bash prettyprint-override"><code>pip install elsie
</code></pre>
<p>This produces the error:</p>
<pre class="lang-bash prettyprint-override"><code>Collecting elsie
Using cached elsie-3.4-py3-none-any.whl (62 kB)
Collecting Pillow<9.1.0,>=9.0.0
Using cached Pillow-9.0.1.tar.gz (49.5 MB)
Preparing metadata (setup.py) ... done
Collecting lxml<4.7,>=4.6
Using cached lxml-4.6.5.tar.gz (3.2 MB)
Preparing metadata (setup.py) ... done
Collecting marko==1.2.0
Using cached marko-1.2.0-py3-none-any.whl (37 kB)
Collecting pygments<2.12,>=2.11
Using cached Pygments-2.11.2-py3-none-any.whl (1.1 MB)
Collecting pypdf2<1.27,>=1.26
Using cached PyPDF2-1.26.0.tar.gz (77 kB)
Preparing metadata (setup.py) ... done
Installing collected packages: pypdf2, pygments, Pillow, marko, lxml, elsie
DEPRECATION: pypdf2 is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for pypdf2 ... done
DEPRECATION: Pillow is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for Pillow ... done
DEPRECATION: lxml is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for lxml ... error
error: subprocess-exited-with-error
× Running setup.py install for lxml did not run successfully.
│ exit code: 1
╰─> [252 lines of output]
Building lxml version 4.6.5.
Building without Cython.
Building against libxml2 2.11.4 and libxslt 1.1.38
running install
/home/galen/projects/try_elsie/venv/lib/python3.11/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-311
creating build/lib.linux-x86_64-cpython-311/lxml
copying src/lxml/usedoctest.py -> build/lib.linux-x86_64-cpython-311/lxml
copying src/lxml/sax.py -> build/lib.linux-x86_64-cpython-311/lxml
copying src/lxml/pyclasslookup.py -> build/lib.linux-x86_64-cpython-311/lxml
copying src/lxml/doctestcompare.py -> build/lib.linux-x86_64-cpython-311/lxml
copying src/lxml/cssselect.py -> build/lib.linux-x86_64-cpython-311/lxml
copying src/lxml/builder.py -> build/lib.linux-x86_64-cpython-311/lxml
copying src/lxml/_elementpath.py -> build/lib.linux-x86_64-cpython-311/lxml
copying src/lxml/__init__.py -> build/lib.linux-x86_64-cpython-311/lxml
copying src/lxml/ElementInclude.py -> build/lib.linux-x86_64-cpython-311/lxml
creating build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/__init__.py -> build/lib.linux-x86_64-cpython-311/lxml/includes
creating build/lib.linux-x86_64-cpython-311/lxml/html
copying src/lxml/html/usedoctest.py -> build/lib.linux-x86_64-cpython-311/lxml/html
copying src/lxml/html/soupparser.py -> build/lib.linux-x86_64-cpython-311/lxml/html
copying src/lxml/html/html5parser.py -> build/lib.linux-x86_64-cpython-311/lxml/html
copying src/lxml/html/formfill.py -> build/lib.linux-x86_64-cpython-311/lxml/html
copying src/lxml/html/diff.py -> build/lib.linux-x86_64-cpython-311/lxml/html
copying src/lxml/html/defs.py -> build/lib.linux-x86_64-cpython-311/lxml/html
copying src/lxml/html/clean.py -> build/lib.linux-x86_64-cpython-311/lxml/html
copying src/lxml/html/builder.py -> build/lib.linux-x86_64-cpython-311/lxml/html
copying src/lxml/html/_setmixin.py -> build/lib.linux-x86_64-cpython-311/lxml/html
copying src/lxml/html/_html5builder.py -> build/lib.linux-x86_64-cpython-311/lxml/html
copying src/lxml/html/_diffcommand.py -> build/lib.linux-x86_64-cpython-311/lxml/html
copying src/lxml/html/__init__.py -> build/lib.linux-x86_64-cpython-311/lxml/html
copying src/lxml/html/ElementSoup.py -> build/lib.linux-x86_64-cpython-311/lxml/html
creating build/lib.linux-x86_64-cpython-311/lxml/isoschematron
copying src/lxml/isoschematron/__init__.py -> build/lib.linux-x86_64-cpython-311/lxml/isoschematron
copying src/lxml/etree.h -> build/lib.linux-x86_64-cpython-311/lxml
copying src/lxml/etree_api.h -> build/lib.linux-x86_64-cpython-311/lxml
copying src/lxml/lxml.etree.h -> build/lib.linux-x86_64-cpython-311/lxml
copying src/lxml/lxml.etree_api.h -> build/lib.linux-x86_64-cpython-311/lxml
copying src/lxml/includes/xslt.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/xpath.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/xmlschema.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/xmlparser.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/xmlerror.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/xinclude.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/uri.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/tree.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/schematron.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/relaxng.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/htmlparser.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/etreepublic.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/dtdvalid.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/config.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/c14n.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/__init__.pxd -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/lxml-version.h -> build/lib.linux-x86_64-cpython-311/lxml/includes
copying src/lxml/includes/etree_defs.h -> build/lib.linux-x86_64-cpython-311/lxml/includes
creating build/lib.linux-x86_64-cpython-311/lxml/isoschematron/resources
creating build/lib.linux-x86_64-cpython-311/lxml/isoschematron/resources/rng
copying src/lxml/isoschematron/resources/rng/iso-schematron.rng -> build/lib.linux-x86_64-cpython-311/lxml/isoschematron/resources/rng
creating build/lib.linux-x86_64-cpython-311/lxml/isoschematron/resources/xsl
copying src/lxml/isoschematron/resources/xsl/XSD2Schtrn.xsl -> build/lib.linux-x86_64-cpython-311/lxml/isoschematron/resources/xsl
copying src/lxml/isoschematron/resources/xsl/RNG2Schtrn.xsl -> build/lib.linux-x86_64-cpython-311/lxml/isoschematron/resources/xsl
creating build/lib.linux-x86_64-cpython-311/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_svrl_for_xslt1.xsl -> build/lib.linux-x86_64-cpython-311/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_skeleton_for_xslt1.xsl -> build/lib.linux-x86_64-cpython-311/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_message.xsl -> build/lib.linux-x86_64-cpython-311/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_dsdl_include.xsl -> build/lib.linux-x86_64-cpython-311/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_abstract_expand.xsl -> build/lib.linux-x86_64-cpython-311/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/readme.txt -> build/lib.linux-x86_64-cpython-311/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
running build_ext
building 'lxml.etree' extension
creating build/temp.linux-x86_64-cpython-311
creating build/temp.linux-x86_64-cpython-311/src
creating build/temp.linux-x86_64-cpython-311/src/lxml
gcc -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -ffat-lto-objects -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -fPIC -DCYTHON_CLINE_IN_TRACEBACK=0 -I/usr/include/libxml2 -Isrc -Isrc/lxml/includes -I/home/galen/projects/try_elsie/venv/include -I/usr/include/python3.11 -c src/lxml/etree.c -o build/temp.linux-x86_64-cpython-311/src/lxml/etree.o -w
src/lxml/etree.c: In function ‘__Pyx_PyErr_GetTopmostException’:
src/lxml/etree.c:261877:21: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
261877 | while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) &&
| ^~
src/lxml/etree.c:261877:51: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
261877 | while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) &&
| ^~
src/lxml/etree.c: In function ‘__Pyx__ExceptionSave’:
src/lxml/etree.c:261891:21: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
261891 | *type = exc_info->exc_type;
| ^~
src/lxml/etree.c:261893:19: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
261893 | *tb = exc_info->exc_traceback;
| ^~
src/lxml/etree.c: In function ‘__Pyx__ExceptionReset’:
src/lxml/etree.c:261907:24: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
261907 | tmp_type = exc_info->exc_type;
| ^~
src/lxml/etree.c:261909:22: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
261909 | tmp_tb = exc_info->exc_traceback;
| ^~
src/lxml/etree.c:261910:13: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
261910 | exc_info->exc_type = type;
| ^~
src/lxml/etree.c:261912:13: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
261912 | exc_info->exc_traceback = tb;
| ^~
src/lxml/etree.c: In function ‘__Pyx__GetException’:
src/lxml/etree.c:261994:28: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
261994 | tmp_type = exc_info->exc_type;
| ^~
src/lxml/etree.c:261996:26: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
261996 | tmp_tb = exc_info->exc_traceback;
| ^~
src/lxml/etree.c:261997:17: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
261997 | exc_info->exc_type = local_type;
| ^~
src/lxml/etree.c:261999:17: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
261999 | exc_info->exc_traceback = local_tb;
| ^~
src/lxml/etree.c: In function ‘__Pyx__ExceptionSwap’:
src/lxml/etree.c:262185:24: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
262185 | tmp_type = exc_info->exc_type;
| ^~
src/lxml/etree.c:262187:22: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
262187 | tmp_tb = exc_info->exc_traceback;
| ^~
src/lxml/etree.c:262188:13: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
262188 | exc_info->exc_type = *type;
| ^~
src/lxml/etree.c:262190:13: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
262190 | exc_info->exc_traceback = *tb;
| ^~
src/lxml/etree.c: In function ‘__Pyx_Coroutine_ExceptionClear’:
src/lxml/etree.c:264391:18: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
264391 | t = exc_state->exc_type;
| ^~
src/lxml/etree.c:264393:19: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
264393 | tb = exc_state->exc_traceback;
| ^~
src/lxml/etree.c:264394:14: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
264394 | exc_state->exc_type = NULL;
| ^~
src/lxml/etree.c:264396:14: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
264396 | exc_state->exc_traceback = NULL;
| ^~
src/lxml/etree.c: In function ‘__Pyx_Coroutine_SendEx’:
src/lxml/etree.c:264473:18: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
264473 | if (exc_state->exc_type) {
| ^~
src/lxml/etree.c:264476:22: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
264476 | if (exc_state->exc_traceback) {
| ^~
src/lxml/etree.c:264477:68: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
264477 | PyTracebackObject *tb = (PyTracebackObject *) exc_state->exc_traceback;
| ^~
src/lxml/etree.c:264481:14: error: invalid use of incomplete typedef ‘PyFrameObject’ {aka ‘struct _frame’}
264481 | f->f_back = PyThreadState_GetFrame(tstate);
| ^~
src/lxml/etree.c: In function ‘__Pyx_Coroutine_ResetFrameBackpointer’:
src/lxml/etree.c:264512:33: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
264512 | PyObject *exc_tb = exc_state->exc_traceback;
| ^~
In file included from /usr/include/python3.11/Python.h:38,
from src/lxml/etree.c:96:
src/lxml/etree.c:264518:19: error: invalid use of incomplete typedef ‘PyFrameObject’ {aka ‘struct _frame’}
264518 | Py_CLEAR(f->f_back);
| ^~
/usr/include/python3.11/pyport.h:24:38: note: in definition of macro ‘_Py_CAST’
24 | #define _Py_CAST(type, expr) ((type)(expr))
| ^~~~
/usr/include/python3.11/object.h:581:29: note: in expansion of macro ‘_PyObject_CAST’
581 | PyObject *_py_tmp = _PyObject_CAST(op); \
| ^~~~~~~~~~~~~~
src/lxml/etree.c:264518:9: note: in expansion of macro ‘Py_CLEAR’
264518 | Py_CLEAR(f->f_back);
| ^~~~~~~~
In file included from /usr/include/python3.11/Python.h:44:
src/lxml/etree.c:264518:19: error: invalid use of incomplete typedef ‘PyFrameObject’ {aka ‘struct _frame’}
264518 | Py_CLEAR(f->f_back);
| ^~
/usr/include/python3.11/object.h:583:14: note: in definition of macro ‘Py_CLEAR’
583 | (op) = NULL; \
| ^~
In file included from /usr/include/python3.11/Python.h:45:
src/lxml/etree.c: In function ‘__Pyx_Coroutine_traverse_excstate’:
src/lxml/etree.c:264824:23: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
264824 | Py_VISIT(exc_state->exc_type);
| ^~
/usr/include/python3.11/objimpl.h:199:13: note: in definition of macro ‘Py_VISIT’
199 | if (op) { \
| ^~
src/lxml/etree.c:264824:23: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
264824 | Py_VISIT(exc_state->exc_type);
| ^~
/usr/include/python3.11/pyport.h:24:38: note: in definition of macro ‘_Py_CAST’
24 | #define _Py_CAST(type, expr) ((type)(expr))
| ^~~~
/usr/include/python3.11/objimpl.h:200:30: note: in expansion of macro ‘_PyObject_CAST’
200 | int vret = visit(_PyObject_CAST(op), arg); \
| ^~~~~~~~~~~~~~
src/lxml/etree.c:264824:5: note: in expansion of macro ‘Py_VISIT’
264824 | Py_VISIT(exc_state->exc_type);
| ^~~~~~~~
src/lxml/etree.c:264826:23: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
264826 | Py_VISIT(exc_state->exc_traceback);
| ^~
/usr/include/python3.11/objimpl.h:199:13: note: in definition of macro ‘Py_VISIT’
199 | if (op) { \
| ^~
src/lxml/etree.c:264826:23: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
264826 | Py_VISIT(exc_state->exc_traceback);
| ^~
/usr/include/python3.11/pyport.h:24:38: note: in definition of macro ‘_Py_CAST’
24 | #define _Py_CAST(type, expr) ((type)(expr))
| ^~~~
/usr/include/python3.11/objimpl.h:200:30: note: in expansion of macro ‘_PyObject_CAST’
200 | int vret = visit(_PyObject_CAST(op), arg); \
| ^~~~~~~~~~~~~~
src/lxml/etree.c:264826:5: note: in expansion of macro ‘Py_VISIT’
264826 | Py_VISIT(exc_state->exc_traceback);
| ^~~~~~~~
src/lxml/etree.c: In function ‘__Pyx__Coroutine_NewInit’:
src/lxml/etree.c:265073:22: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
265073 | gen->gi_exc_state.exc_type = NULL;
| ^
src/lxml/etree.c:265075:22: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_traceback’
265075 | gen->gi_exc_state.exc_traceback = NULL;
| ^
src/lxml/etree.c: In function ‘__Pyx__ReturnWithStopIteration’:
src/lxml/etree.c:266058:32: error: ‘_PyErr_StackItem’ {aka ‘struct _err_stackitem’} has no member named ‘exc_type’
266058 | if (!__pyx_tstate->exc_info->exc_type)
| ^~
src/lxml/etree.c: In function ‘__Pyx_AddTraceback’:
src/lxml/etree.c:522:62: error: invalid use of incomplete typedef ‘PyFrameObject’ {aka ‘struct _frame’}
522 | #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
| ^~
src/lxml/etree.c:267283:5: note: in expansion of macro ‘__Pyx_PyFrame_SetLineNumber’
267283 | __Pyx_PyFrame_SetLineNumber(py_frame, py_line);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
Compile failed: command '/usr/bin/gcc' failed with exit code 1
creating tmp
cc -I/usr/include/libxml2 -I/usr/include/libxml2 -c /tmp/xmlXPathInitdq3mqvxo.c -o tmp/xmlXPathInitdq3mqvxo.o
/tmp/xmlXPathInitdq3mqvxo.c: In function ‘main’:
/tmp/xmlXPathInitdq3mqvxo.c:3:5: warning: ‘xmlXPathInit’ is deprecated [-Wdeprecated-declarations]
3 | xmlXPathInit();
| ^~~~~~~~~~~~
In file included from /tmp/xmlXPathInitdq3mqvxo.c:1:
/usr/include/libxml2/libxml/xpath.h:564:21: note: declared here
564 | xmlXPathInit (void);
| ^~~~~~~~~~~~
cc tmp/xmlXPathInitdq3mqvxo.o -lxml2 -o a.out
error: command '/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> lxml
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
</code></pre>
<p>I thought that</p>
<pre class="lang-bash prettyprint-override"><code>× Encountered error while trying to install package.
╰─> lxml
</code></pre>
<p>suggested that I should separately install <code>lxml</code>.</p>
<p>So I tried installing <code>lxml</code></p>
<pre class="lang-bash prettyprint-override"><code>pip install lxml
</code></pre>
<p>and got confirmation of the installation:</p>
<pre class="lang-bash prettyprint-override"><code>Collecting lxml
Obtaining dependency information for lxml from https://files.pythonhosted.org/packages/ed/62/ffc30348ae141f69f9f23b65ba769db7ca209856c9a9b3406279e0ea24de/lxml-4.9.3-cp311-cp311-manylinux_2_28_x86_64.whl.metadata
Using cached lxml-4.9.3-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (3.8 kB)
Using cached lxml-4.9.3-cp311-cp311-manylinux_2_28_x86_64.whl (7.9 MB)
Installing collected packages: lxml
Successfully installed lxml-4.9.3
</code></pre>
<p>So then I went back to install Elsie again:</p>
<pre class="lang-bash prettyprint-override"><code>pip install elsie
</code></pre>
<p>which resulted in a similar error (if not identical). I have not included it since SO limits character length. Hopefully my example should be reproducible with my system and Python info:</p>
<pre class="lang-bash prettyprint-override"><code>$ uname -a
Linux orcus 6.4.8-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 03 Aug 2023 16:02:01 +0000 x86_64 GNU/Linux
$ python --version
Python 3.11.3
$ pip --version
pip 23.2.1 from /home/galen/projects/try_elsie/venv/lib/python3.11/site-packages/pip (python 3.11)
</code></pre>
<p>I can see from the output that something is going wrong with the building of the <code>lxml</code> extension...</p>
<p>What is going wrong? Is there a maintainable fix such that Elsie installs?</p>
| <python><pip><lxml> | 2023-08-27 17:16:08 | 0 | 1,394 | Galen |
76,987,735 | 713,026 | Python patching order is unexpected | <p>Given the following code:</p>
<pre class="lang-py prettyprint-override"><code>import os
from unittest.mock import patch

def sys_exit_new1():
    print("sys_exit_new1:", os.environ.get("BANANA"))

def sys_exit_new2():
    print("sys_exit_new2:", os.environ.get("BANANA"))

@patch("sys.exit", new_callable=sys_exit_new1)
@patch.dict(os.environ, {"BANANA": "1"})
@patch.dict(os.environ, {"BANANA": "2"})
@patch("sys.exit", new_callable=sys_exit_new2)
@patch.dict(os.environ, {"BANANA": "3"})
def test_mytest(m1, m2):
    ...

test_mytest()
</code></pre>
<p>The test will produce:</p>
<pre><code>sys_exit_new2: 2
sys_exit_new1: 2
</code></pre>
<p>Can someone please explain why? The <a href="https://docs.python.org/3/library/unittest.mock.html#nesting-patch-decorators" rel="nofollow noreferrer">documentation</a> says:</p>
<blockquote>
<p>Note that the decorators are applied from the bottom upwards. This is the standard way that Python applies decorators. The order of the created mocks passed into your test function matches this order.</p>
</blockquote>
<p>If this was true, I would expect it to output:</p>
<pre><code>sys_exit_new2: 3
sys_exit_new1: 1
</code></pre>
<p>So <code>patch.dict</code> is behaving differently.</p>
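<p>For reference, here is a minimal, mock-free sketch of the standard decorator mechanics the documentation refers to: decorators are <em>applied</em> bottom-up at definition time, while their wrappers <em>run</em> top-down at call time. The <code>make</code>/<code>tag</code> names are illustrative only.</p>

```python
# Standard stacking of plain decorators: application is bottom-up,
# wrapper execution is top-down.
order = []

def make(tag):
    def deco(fn):
        order.append(f"applied-{tag}")          # runs at decoration time
        def wrapper(*args, **kwargs):
            order.append(f"enter-{tag}")        # runs at call time
            return fn(*args, **kwargs)
        return wrapper
    return deco

@make("outer")
@make("inner")
def f():
    order.append("body")

f()
print(order)
# ['applied-inner', 'applied-outer', 'enter-outer', 'enter-inner', 'body']
```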
| <python><python-unittest><patch> | 2023-08-27 15:20:40 | 1 | 4,779 | Blazes |
76,987,720 | 827,927 | Why can't I activate my virtual environment? | <p>I created a Python virtual environment in <code>D:\Envs\.venv1</code>.</p>
<p>I can activate it by specifying a relative path, for example:</p>
<pre><code>PS D:\> .\Envs\.venv1\Scripts\activate
(.venv1) PS D:\>
</code></pre>
<p>But, when I try to specify an absolute path, it does not work:</p>
<pre><code>PS D:\> \Envs\.venv1\Scripts\activate
PS D:\>
</code></pre>
<p>Here, after I press Enter, I see a flashing window that closes immediately; then I get the prompt back, and as you can see, I am not in my virtual environment.</p>
<p>Is there a way to activate my virtual environment through an absolute path?</p>
| <python><windows><powershell><virtualenv> | 2023-08-27 15:16:55 | 1 | 37,410 | Erel Segal-Halevi |
76,987,690 | 5,769,814 | Overriding enumerate for custom class | <p>I have a <a href="https://replit.com/@matedevita/CustomEnumerate#main.py" rel="nofollow noreferrer">custom class</a> that's essentially a list, but with negative indices being valid indices, rather than referring to elements from the rear of the list.</p>
<pre><code>from collections.abc import Sequence

class MultiFloorPlan(Sequence):
    def __init__(self):
        super().__init__()
        self._floors = []
        self._subfloors = []

    def __eq__(self, other):
        if not isinstance(other, MultiFloorPlan):
            return NotImplemented
        return self._subfloors == other._subfloors and self._floors == other._floors

    def _reindex(self, floor):
        if floor >= 0:
            return self._floors, floor
        return self._subfloors, -floor - 1

    def __len__(self):
        return len(self._subfloors) + len(self._floors)

    def __getitem__(self, floor):
        floor_list, floor = self._reindex(floor)
        return floor_list[floor]

    def __delitem__(self, floor):
        floor_list, floor = self._reindex(floor)
        del floor_list[floor]

    def __iter__(self):
        for plan in self._subfloors:
            yield plan
        for plan in self._floors:
            yield plan

    def __reversed__(self):
        for plan in reversed(self._floors):
            yield plan
        for plan in reversed(self._subfloors):
            yield plan

    def __contains__(self, value):
        return value in self._subfloors or value in self._floors

    def append(self, subfloor=False):
        if subfloor:
            return self._subfloors.append(None)  # For this example we append a dummy None
        return self._floors.append(None)         # value instead of an actual Plan instance
</code></pre>
<p>Is it possible to get the built-in <code>enumerate</code> to return the floor indices, rather than the values shifted to the non-negative integers? Example:</p>
<pre><code>mfp = MultiFloorPlan()
for _ in range(5):
    mfp.append(subfloor=False)
    mfp.append(subfloor=True)

for floor, _ in enumerate(mfp):
    print(floor)
# This prints 0 1 2 3 4 5 6 7 8 9, but I'd like it to print -5 -4 -3 -2 -1 0 1 2 3 4
</code></pre>
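<p>For reference, a standalone sketch of one idea with plain lists (the class itself is not needed to show it), assuming iteration yields the floors from lowest (-5) to highest (4) as in the desired output: <code>enumerate</code> accepts a <code>start</code> argument, so the negative floor indices fall out directly.</p>

```python
# Stand-ins for the subfloor/floor contents, for illustration only
subfloors = ["s-5", "s-4", "s-3", "s-2", "s-1"]
floors = ["f0", "f1", "f2", "f3", "f4"]
items = subfloors + floors  # lowest floor first, as in the desired printout

# enumerate with a negative start produces the floor indices directly
indices = [i for i, _ in enumerate(items, start=-len(subfloors))]
print(indices)  # [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]
```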
| <python><python-3.x><enumerate> | 2023-08-27 15:09:39 | 1 | 1,324 | Mate de Vita |
76,987,491 | 1,107,049 | How to use OR operator with DynamoDB KeyConditionExpression | <p>DynamoDB table has three data rows all with the same <code>PK</code> value but different <code>SK</code>:</p>
<p>Row 1: <code>PK: PROJECT</code> <code>SK: A-001</code></p>
<p>Row 2: <code>PK: PROJECT</code> <code>SK: B-001</code></p>
<p>Row 3: <code>PK: PROJECT</code> <code>SK: C-001</code></p>
<p>I want to make a query that returns only the two rows with an SK that starts with "A" or "C".</p>
<p>First I define it for the item with SK that begins with "A" and it works:</p>
<pre><code> resp = dynamodb.query(
    TableName='my-table',
    KeyConditionExpression="PK = :pk AND (begins_with(SK, :those_with_A))",
    ExpressionAttributeValues={
        ":pk": { "S": "PROJECT" },
        ":those_with_A": { "S": "A" }
    }
)
</code></pre>
<p>Next I add <code>OR</code> operator to query <code>SK</code> that begins with "C":</p>
<pre><code> resp = dynamodb.query(
    TableName='my-table',
    KeyConditionExpression="PK = :pk AND (begins_with(SK, :those_with_A) OR begins_with(SK, :those_with_C))",
    ExpressionAttributeValues={
        ":pk": { "S": "PROJECT" },
        ":those_with_A": { "S": "A" },
        ":those_with_C": { "S": "C" },
    }
)
</code></pre>
<p>Unfortunately it fails with error</p>
<pre><code>An error occurred (ValidationException) when calling the Query operation: Invalid operator used in KeyConditionExpression: OR
</code></pre>
<p>How to correct this error?</p>
| <python><node.js><amazon-web-services><amazon-dynamodb><aws-sdk-js> | 2023-08-27 14:09:52 | 1 | 19,609 | alphanumeric |
76,987,365 | 1,977,050 | python pandas perform calculation based on condition | <p>I have a challenge in my work with pandas.</p>
<p>I have a pandas dataframe with columns <code>Time, x, y, a, b</code>.</p>
<p>For simplicity, the dataframe has 5 records, of which 3 are fully filled (<code>Time, x, y, a, b</code> all have data). In the other 2, <code>a</code> and <code>b</code> are empty. <strong>Time is unidirectional (monotonically increasing).</strong></p>
<p>I'd like to perform a calculation based on a condition on <code>Time</code> (let's say <code>Time > 3</code>) and store the results in <code>a</code> and <code>b</code> (let's say the functions are <code>a=x^2, b=x^3</code>). The calculation of <code>a</code> and <code>b</code> shall be performed in a single function (I'm using a lambda function). For example:</p>
<pre><code>Time x y a b
0.3 0 1 2.0 3.0
1.5 4 5 6.0 7.0
2.8 8 9 10.0 11.0
3.3 8 13 None None
4.5 3 17 None None
</code></pre>
<p>Shall be converted to</p>
<pre><code>Time x y a b
0.3 0 1 2.0 3.0
1.5 4 5 6.0 7.0
2.8 8 9 10.0 11.0
3.3 8 13 64.0 512.0
4.5 3 17 9.0 27.0
</code></pre>
<p>Any assistance would be appreciated</p>
<p>Notes:</p>
<ul>
<li>The number of records here is for simplicity; the code should work for any number of records.</li>
<li>The code needs to be optimized for performance.</li>
</ul>
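<p>For reference, here is a vectorized sketch of the example transformation (<code>a = x^2</code>, <code>b = x^3</code> where <code>Time &gt; 3</code>), using a boolean mask rather than a row-wise lambda; masked assignment avoids per-row Python calls, which usually helps performance.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Time": [0.3, 1.5, 2.8, 3.3, 4.5],
    "x": [0, 4, 8, 8, 3],
    "y": [1, 5, 9, 13, 17],
    "a": [2.0, 6.0, 10.0, np.nan, np.nan],
    "b": [3.0, 7.0, 11.0, np.nan, np.nan],
})

# compute a and b only for the rows matching the condition
mask = df["Time"] > 3
df.loc[mask, "a"] = df.loc[mask, "x"] ** 2
df.loc[mask, "b"] = df.loc[mask, "x"] ** 3
print(df)
```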
| <python><pandas> | 2023-08-27 13:34:20 | 3 | 534 | user1977050 |
76,987,268 | 595,305 | Eclipse oddity trying to call Rust from Python using PyO3 | <p>I'm following the PyO3 Guide and have so far got to chapter 2, <a href="https://pyo3.rs/main/module" rel="nofollow noreferrer">Python modules</a>. OS is W10.</p>
<p>I am entering all commands using the CLI outside Eclipse and just using the latter to edit files. NB the Rust add-on, Corrosion, provides some "intellisense" for Rust, and syntactical highlighting. I <em>could</em> use an ordinary text editor if I can't solve this issue.</p>
<p>I've managed to get python modules created, but a funny thing happens: when I call <code>maturin develop</code> for a second time I get "Caused by: The process cannot access the file because it is being used by another process."</p>
<p>On inspection it turns out that this relates to a .pyd file created in the "target" location. I am unable to delete this... until I exit Eclipse. After the restart of Eclipse I can run <code>maturin develop</code> again and it concludes OK.</p>
<p>There is nothing obvious to "close down" in Eclipse which I can see. But it appears that some process in Eclipse has been activated and won't shut down. And this is despite the fact that everything is being run from the command line, outside Eclipse.</p>
<p>NB would be interested to know if anyone sees the same thing in Linux.</p>
| <python><eclipse><rust><process><pyo3> | 2023-08-27 13:10:26 | 0 | 16,076 | mike rodent |
76,987,203 | 15,632,586 | AttributeError: Adam object has no attribute '_decayed_lr' when fine-tuning T5 | <p>I am fine-tuning T5 LLM, with the model based on this Colab notebook from Google: <a href="https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb#scrollTo=dEutWnhiWRAq" rel="nofollow noreferrer">TF-T5- Training.ipynb</a>. My current model was defined like this:</p>
<pre><code>class SnapthatT5(TFT5ForConditionalGeneration):
    def __init__(self, *args, log_dir=None, cache_dir=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    @tf.function
    def train_step(self, data):
        x = data
        y = x["labels"]
        y = tf.reshape(y, [-1, 1])
        with tf.GradientTape() as tape:
            outputs = self(x, training=True)
            loss = outputs[0]
            logits = outputs[1]
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        lr = self.optimizer._decayed_lr(tf.float32)
        self.loss_tracker.update_state(loss)
        self.compiled_metrics.update_state(y, logits)
        metrics = {m.name: m.result() for m in self.metrics}
        metrics.update({'lr': lr})
        return metrics

    def test_step(self, data):
        x = data
        y = x["labels"]
        y = tf.reshape(y, [-1, 1])
        output = self(x, training=False)
        loss = output[0]
        loss = tf.reduce_mean(loss)
        logits = output[1]
        self.loss_tracker.update_state(loss)
        self.compiled_metrics.update_state(y, logits)
        return {m.name: m.result() for m in self.metrics}


class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, warmup_steps=1e4):
        super().__init__()
        self.warmup_steps = tf.cast(warmup_steps, tf.float32)

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        m = tf.maximum(self.warmup_steps, step)
        m = tf.cast(m, tf.float32)
        lr = tf.math.rsqrt(m)
        return lr
</code></pre>
<p>My training process looks like this though (I'm doing it with Google Colab):</p>
<pre><code>learning_rate = CustomSchedule()
# learning_rate = 0.001  # Instead set a static learning rate
optimizer = tf.keras.optimizers.Adam(learning_rate)

model = SnapthatT5.from_pretrained("t5-base")
model.compile(optimizer=optimizer, metrics=metrics)

epochs_done = 0
model.fit(tf_train_ds, epochs=5, steps_per_epoch=steps, callbacks=callbacks,
          validation_data=tf_valid_ds, validation_steps=valid_steps,
          initial_epoch=epochs_done)
</code></pre>
<p>However, when I tried to train the model (with TensorFlow 2.12.0), I got this error from Colab: <code>AttributeError: 'Adam' object has no attribute '_decayed_lr'</code>. I tried to change <code>_decayed_lr</code> to <code>lr</code>, but this was not recognized by Colab.</p>
<p>So, what could I do to get the decayed learning rate to the training step, and get the above problem fixed?</p>
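<p>For what it's worth, since the schedule is a plain function of the step counter, the decayed rate can always be recomputed from the current iteration rather than read from the removed private <code>_decayed_lr</code> helper. Here is a pure-Python analogue of the <code>CustomSchedule</code> above (rsqrt of <code>max(warmup_steps, step)</code>), just to show the values it produces; the <code>custom_schedule</code> name is mine, for illustration.</p>

```python
import math

# Pure-Python analogue of CustomSchedule.__call__: 1/sqrt(max(warmup, step))
def custom_schedule(step, warmup_steps=1e4):
    return 1.0 / math.sqrt(max(warmup_steps, float(step)))

print(custom_schedule(1))       # still in warmup: 1/sqrt(1e4) = 0.01
print(custom_schedule(40_000))  # decayed: 1/sqrt(4e4) = 0.005
```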
| <python><tensorflow><keras><huggingface-transformers> | 2023-08-27 12:53:07 | 0 | 451 | Hoang Cuong Nguyen |
76,986,984 | 19,600,130 | problem with logic in django code. read data from redis and database | <p>I have this Django code; it takes data from Redis by <code>inscode</code> and returns it. If Redis is down, it should jump out of the try/except block and read the data from the database (PostgreSQL).</p>
<pre><code>class GetOrderBook(APIView):
    def get(self, request, format=None):
        # get parameters
        inscode = None
        if 'i' in request.GET:
            inscode = request.GET['i']
        if inscode != None:
            try:
                #raise ('disabling redis!')
                r_handle = redisconnection
                data = r_handle.get(inscode)
                if data != None:
                    return Response(200)
                else:
                    print('not found in cache')
            except BaseException as err:
                print(err)
            orderbook = OrderBook.objects.filter(
                symbol__inscode=inscode).order_by('rownum')
            if len(orderbook) > 0:
                data = OrderBookSer(orderbook, many=True).data
                data_formatted = {'buynum': [0]*5, 'buyvol': [0]*5, 'buyprice': [0]*5,
                                  'sellnum': [0]*5, 'sellvol': [0]*5, 'sellprice': [0]*5}
                return Response(data_formatted, status=status.HTTP_200_OK)
            else:
                return Response({'Bad Request'}, status.HTTP_404_NOT_FOUND)
        return Response({'Bad Request'}, status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>But the problem is that it is not reading from the database. I also wrote this test, which should get a 200 response if the data is read; but when Redis is down it doesn't read the data and returns 404 to me.</p>
<pre><code>def test_redis_function(self, api_client, test_user):
    url = self.url
    api_client.force_authenticate(user=test_user)
    response = api_client.get(url, {'i': '2400322364771558'})
    assert response.status_code == status.HTTP_200_OK
</code></pre>
<p>I would be more than happy if you help me</p>
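<p>For reference, a stripped-down sketch of the intended cache-then-database pattern, with stub callables standing in for Redis and the ORM (all names here are illustrative): the cache lookup is wrapped in try/except, and both a cache miss and a cache failure fall through to the database read.</p>

```python
# Minimal cache-with-fallback pattern, independent of Django/Redis.
def get_with_fallback(cache_get, db_get, key):
    try:
        data = cache_get(key)
        if data is not None:
            return data
    except Exception as err:
        print(err)  # cache unavailable: fall through to the database
    return db_get(key)

def broken_cache(key):
    raise ConnectionError("redis is down")

result = get_with_fallback(broken_cache,
                           lambda k: {"inscode": k, "source": "db"},
                           "2400322364771558")
print(result["source"])  # db
```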
| <python><django><redis><pytest> | 2023-08-27 11:54:33 | 1 | 983 | HesamHashemi |
76,986,958 | 16,124,033 | How can I customize the region of a Firebase Cloud Function written in Python? | <p>I want to customize the region of a Firebase Cloud Function. I've searched the internet and found many solutions for Firebase Cloud Functions written in Node.js, but not for Firebase Cloud Functions written in Python.</p>
<p>In Node.js, you can use <code>.region('region')</code> before your Cloud Functions code. Can this functionality also be utilized in Python? Alternatively, is there an alternative approach to achieve this in Python? Can the region be customized using specific commands?</p>
<blockquote>
<p>Note that I'm using Firebase Cloud Functions, <strong>NOT</strong> Google Cloud Functions.</p>
</blockquote>
| <python><firebase><google-cloud-functions> | 2023-08-27 11:48:43 | 1 | 4,650 | My Car |
76,986,871 | 11,357,695 | Efficient Weighted Jaccard distance | <p>I am trying to find the weighted Jaccard distance for every pair of rows in a ~8000×8000 pandas DataFrame. I've tried the following:</p>
<pre><code>import pandas as pd
import numpy as np

def weighted_j_sim(array1, array2):
    return np.minimum(array1, array2).sum() / np.maximum(array1, array2).sum()

matrix = pd.DataFrame([[1, 2, 3],
                       [2, 1, 1],
                       [3, 1, 1]])

for index, (name, values) in enumerate(matrix.iterrows()):
    for other_index, (other_name, other_values) in enumerate(matrix.iterrows()):
        if other_index > index:  # don't check self or something already compared
            weighted_js = weighted_j_sim(values, other_values)
</code></pre>
<p>and</p>
<pre><code>import pandas as pd
import numpy as np

def weighted_j_sim(array1, array2):
    # https://stackoverflow.com/a/71276180/11357695
    q = np.concatenate((array1.T, array2.T), axis=1)
    return np.sum(np.amin(q, axis=1)) / np.sum(np.amax(q, axis=1))

matrix = pd.DataFrame([[1, 2, 3],
                       [2, 1, 1],
                       [3, 1, 1]])

for index, (name, values) in enumerate(matrix.iterrows()):
    for other_index, (other_name, other_values) in enumerate(matrix.iterrows()):
        if other_index > index:  # don't check self or something already compared
            weighted_jd = weighted_j_sim(np.array([values.values]),
                                         np.array([other_values.values]))
</code></pre>
<p>This is very slow - can anyone suggest some numpy magic to apply here?</p>
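<p>For reference, a broadcasting-based sketch that computes all pairwise similarities at once. Note it materialises an (N, N, D) intermediate array, so for an 8000-row matrix it would likely need to be run in row blocks to keep memory bounded.</p>

```python
import numpy as np

m = np.array([[1, 2, 3],
              [2, 1, 1],
              [3, 1, 1]], dtype=float)

# pairwise elementwise min/max via broadcasting, then sum over the feature axis
mins = np.minimum(m[:, None, :], m[None, :, :]).sum(axis=2)
maxs = np.maximum(m[:, None, :], m[None, :, :]).sum(axis=2)
sim = mins / maxs  # sim[i, j] is the weighted Jaccard similarity of rows i and j
print(sim[0, 1], sim[1, 2])  # 3/7 and 4/5, matching the loop version above
```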
| <python><arrays><numpy><scipy><numpy-ndarray> | 2023-08-27 11:26:03 | 1 | 756 | Tim Kirkwood |
76,986,869 | 17,315,212 | Firebase DatabaseURL not showing in config even after Realtime Database creation | <p>I am trying to run a python firebase_admin setup script to use Firestore. Obviously, this requires a <code>databaseURL</code>. However, my project does not appear to have a database URL in the configuration.</p>
<p>What I have tried:</p>
<ul>
<li>Inputting my database name as a <code>https://____.firebaseio.com</code> url (gives 404 error)</li>
<li>Many stackoverflow and github posts have suggested creating a Realtime Database first (even for Firestore, for some reason)</li>
</ul>
<p>(Such as <a href="https://stackoverflow.com/questions/66188976/setting-up-firebase-cant-find-databaseurl">this</a> or <a href="https://github.com/firebase/firebase-js-sdk/issues/4211" rel="nofollow noreferrer">this</a>)</p>
<p>My python code is nothing special (essentially just the default code)</p>
<pre class="lang-py prettyprint-override"><code>import firebase_admin
from firebase_admin import db
cred_obj = firebase_admin.credentials.Certificate('credentials.json')
default_app = firebase_admin.initialize_app(cred_obj, {'databaseURL':"https://{my project name}.firebaseio.com"})
</code></pre>
<p>Which gives the error:</p>
<pre><code>Traceback (most recent call last):
...
requests.exceptions.InvalidURL: URL has an invalid label.
During handling of the above exception, another exception occurred:
...
firebase_admin.exceptions.UnknownError: Unknown error while making a remote service call: URL has an invalid label.
</code></pre>
<p>Overall, I simply need advice on how to create / retrieve a databaseURL for Firestore.</p>
| <python><firebase-realtime-database><firebase-admin> | 2023-08-27 11:24:54 | 0 | 1,120 | Larry the Llama |
76,986,778 | 1,279,318 | Hebrew text disappears from PDF | <p>I have a PDF with a form inside (the PDF was created with master pdf creator). I'm using Python's <code>fillpdf</code> library to populate the values, at this stage everything is OK, except some applications show the text flipped. The main issue occurs when I'm creating a "flattened" version, where all the Hebrew text disappears.</p>
<p>Under the hood, the library uses <code>pdftoppm</code>, but I can't understand what went wrong.</p>
<p>The code is simply:</p>
<pre><code>fillpdfs.write_fillable_pdf("test.pdf", "output1.pdf", {"test": "מידע בעברית"})
fillpdfs.flatten_pdf("output1.pdf", "output2.pdf", as_images=True)
</code></pre>
<p><strong>Update</strong>:</p>
<p>It looks like the underlying use of <code>pdftoppm</code> omits the characters:</p>
<pre><code>$ pdftoppm -r 200 test.pdf -f 1 -l 5 > /dev/null
Syntax Error: AnnotWidget::layoutText, cannot convert U+05DB
Syntax Error: AnnotWidget::layoutText, cannot convert U+05D0
Syntax Error: AnnotWidget::layoutText, cannot convert U+05DF
Syntax Error: AnnotWidget::layoutText, cannot convert U+05E8
Syntax Error: AnnotWidget::layoutText, cannot convert U+05E9
Syntax Error: AnnotWidget::layoutText, cannot convert U+05D5
Syntax Error: AnnotWidget::layoutText, cannot convert U+05DD
Syntax Error: AnnotWidget::layoutText, cannot convert U+05DE
Syntax Error: AnnotWidget::layoutText, cannot convert U+05E9
Syntax Error: AnnotWidget::layoutText, cannot convert U+05D4
Syntax Error: AnnotWidget::layoutText, cannot convert U+05D5
</code></pre>
<p>Using a file filled via a third-party vendor (<a href="https://www.sejda.com" rel="nofollow noreferrer">www.sejda.com</a>) produced a readable file.</p>
| <python><pdf> | 2023-08-27 11:00:04 | 1 | 706 | eplaut |
76,986,565 | 5,585,075 | Ghidra Python script to print codeunits with symbols | <p>I'm using Ghidra to disassemble and study a 68000 binary. I want to write a Python script to get a pretty print version of the disassembly (<code>Save as</code> menu won't be sufficient here).</p>
<p>I thought about simply iterating through codeunits, printing labels if any, then the codeunit. But I get things like :</p>
<pre><code>move.w #-0x5d56,(0xffffa602).w
bsr.b 0x000002c2
</code></pre>
<p>while in Ghidra Listing window, it was :</p>
<pre><code>move.w #0xA2AA ,(ptr_to_last_updatable_bg_area).w
bsr.b set_reg_values_2e2
</code></pre>
<p>How can I, at least, recover symbols from addresses (<code>ptr_to_last_updatable_bg_area</code> and <code>set_reg_values_2e2</code>), and, at best, formatted values (unsigned <code>0xA2AA</code> rather than signed <code>-0x5d56</code>) ?</p>
| <python><ghidra> | 2023-08-27 10:04:10 | 1 | 318 | T. Tournesol |
76,986,545 | 11,426,624 | fillna with rolling mean of a group | <p>I have a data frame with datetimes and would like to fill the missing values with the rolling average of the two rows around the row with NaNs, but only within the same time of day (hence the groupby on time). The below unfortunately does not work.</p>
<pre><code>df = pd.DataFrame({'datetime':['2023-04-20 13:00', '2023-04-21 13:00','2023-04-22 13:00', '2023-04-23 13:00','2023-04-21 14:00', '2023-04-22 14:00', '2023-04-23 14:00'], 'var':[1, 2, np.nan, 3, np.nan, 4, 5]})
df = df.assign(datetime=pd.to_datetime(df.datetime))
df = df.assign(time=df['datetime'].dt.time)
#does not work
df.assign(var=df.groupby('time', sort=False).var.apply(lambda col: col.fillna(col.rolling(window=2, center=2).mean())))
</code></pre>
<p>so I have this</p>
<pre><code> datetime var
0 2023-04-20 13:00 1.0
1 2023-04-21 13:00 2.0
2 2023-04-22 13:00 NaN
3 2023-04-23 13:00 3.0
4 2023-04-21 14:00 NaN
5 2023-04-22 14:00 4.0
6 2023-04-23 14:00 5.0
</code></pre>
<p>and would like to have this</p>
<pre><code> datetime var
0 2023-04-20 13:00 1.0
1 2023-04-21 13:00 2.0
2 2023-04-22 13:00 2.5
3 2023-04-23 13:00 3.0
4 2023-04-21 14:00 4.0
5 2023-04-22 14:00 4.0
6 2023-04-23 14:00 5.0
</code></pre>
<p>Also is there an option to increase the window size over which I want to take the mean?</p>
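<p>For reference, a sketch of one way to get the desired output: within each time-of-day group, fill each NaN with the mean of the previous and next non-NaN values (at a group edge, the single existing neighbour is used, matching the 4.0 in the desired output). The <code>fill_neighbours</code> helper name is mine, for illustration.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "datetime": pd.to_datetime([
        "2023-04-20 13:00", "2023-04-21 13:00", "2023-04-22 13:00",
        "2023-04-23 13:00", "2023-04-21 14:00", "2023-04-22 14:00",
        "2023-04-23 14:00"]),
    "var": [1, 2, np.nan, 3, np.nan, 4, 5],
})

def fill_neighbours(s):
    # mean of the previous and next non-NaN value; mean(axis=1) skips a
    # missing side, so edge gaps take the one neighbour that exists
    neighbours = pd.concat([s.ffill(), s.bfill()], axis=1).mean(axis=1)
    return s.fillna(neighbours)

df["var"] = df.groupby(df["datetime"].dt.time)["var"].transform(fill_neighbours)
print(df["var"].tolist())  # [1.0, 2.0, 2.5, 3.0, 4.0, 4.0, 5.0]
```

To widen the window beyond the adjacent neighbours, one could average several <code>s.shift(i)</code>/<code>s.shift(-i)</code> offsets instead of a single ffill/bfill pair.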
| <python><pandas><group-by><moving-average> | 2023-08-27 09:55:46 | 1 | 734 | corianne1234 |
76,986,516 | 5,224,236 | how to retrieve all spark session config variables | <p>In databricks I can set a config variable at session level, but it is not found in the context variables:</p>
<pre><code>spark.conf.set(f"dataset.bookstore", '123') #dataset_bookstore
spark.conf.get(f"dataset.bookstore")#123
scf = spark.sparkContext.getConf()
allc = scf.getAll()
scf.contains(f"dataset.bookstore") # False
</code></pre>
<p>I understand there is a difference between session-level and context-level config variables; how can I retrieve all session-level variables using <code>spark.conf</code>?</p>
<p>Note: <code>all_session_vars = spark.conf.getAll()</code></p>
<p>returns</p>
<pre><code>AttributeError: 'RuntimeConfig' object has no attribute 'getAll'
</code></pre>
<p>so it looks like a runtime-level config</p>
| <python><apache-spark><databricks> | 2023-08-27 09:48:35 | 2 | 6,028 | gaut |
76,986,430 | 5,224,236 | List and download azure Blob storage files in python | <p>I am following a tutorial where files are downloaded into databricks environment using <code>dbutils</code>. I'd like to do the same outside databricks.</p>
<p>How can I adapt the <code>download_dataset</code> function to work locally? There are no other credentials defined in any envvar that I can see; only the URI is available, at <code>wasbs://course-resources@dalhussein.blob.core.windows.net/datasets/bookstore/v1/</code>.</p>
<pre><code>import os
import requests
import pandas as pd

def download_dataset(source, target):
    files = dbutils.fs.ls(source)
    for f in files:
        source_path = f"{source}/{f.name}"
        target_path = f"{target}/{f.name}"
        if not path_exists(target_path):
            print(f"Copying {f.name} ...")
            dbutils.fs.cp(source_path, target_path, True)

data_source_uri = "wasbs://course-resources@dalhussein.blob.core.windows.net/datasets/bookstore/v1/"
dataset_bookstore = 'dbfs:/mnt/demo-datasets/bookstore'
spark.conf.set(f"dataset.bookstore", dataset_bookstore)

download_dataset(data_source_uri, dataset_bookstore)
</code></pre>
<p>Tried with <code>response = requests.get(data_source_uri)</code></p>
<pre><code>requests.exceptions.InvalidSchema: No connection adapters were found for 'wasbs://course-resources@dalhussein.blob.core.windows.net/datasets/bookstore/v1/'
</code></pre>
<p>and of course <code>dbutils</code> isn't available outside databricks</p>
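<p>For reference, <code>requests</code> fails because <code>wasbs://</code> is not an HTTP scheme. Assuming the container allows anonymous reads, the URI can be rewritten to its HTTPS form (<code>wasbs://&lt;container&gt;@&lt;account&gt;/&lt;path&gt;</code> maps to <code>https://&lt;account&gt;/&lt;container&gt;/&lt;path&gt;</code>), sketched here; the <code>wasbs_to_https</code> helper name is mine.</p>

```python
from urllib.parse import urlparse

def wasbs_to_https(uri):
    # wasbs://<container>@<account>.blob.core.windows.net/<path>
    #   -> https://<account>.blob.core.windows.net/<container>/<path>
    parsed = urlparse(uri)
    return f"https://{parsed.hostname}/{parsed.username}{parsed.path}"

print(wasbs_to_https(
    "wasbs://course-resources@dalhussein.blob.core.windows.net/datasets/bookstore/v1/"
))
# https://dalhussein.blob.core.windows.net/course-resources/datasets/bookstore/v1/
```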
| <python><azure-blob-storage><databricks> | 2023-08-27 09:24:50 | 1 | 6,028 | gaut |
76,986,393 | 5,586,359 | How to get reassign column values from groupby.aggregrate back to original dataframe in dask? | <p>I have a dataset like this where each row is player data:</p>
<pre class="lang-py prettyprint-override"><code>>>> df.head()
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">game_size</th>
<th style="text-align: left;">match_id</th>
<th style="text-align: right;">party_size</th>
<th style="text-align: right;">player_assists</th>
<th style="text-align: right;">player_kills</th>
<th style="text-align: left;">player_name</th>
<th style="text-align: right;">team_id</th>
<th style="text-align: right;">team_placement</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">37</td>
<td style="text-align: left;">2U4GBNA0YmnNZYkzjkfgN4ev-hXSrak_BSey_YEG6kIuDG9fxFrrePqnqiM39pJO</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">SnuffIes</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">18</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">37</td>
<td style="text-align: left;">2U4GBNA0YmnNZYkzjkfgN4ev-hXSrak_BSey_YEG6kIuDG9fxFrrePqnqiM39pJO</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">Ozon3r</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">18</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">37</td>
<td style="text-align: left;">2U4GBNA0YmnNZYkzjkfgN4ev-hXSrak_BSey_YEG6kIuDG9fxFrrePqnqiM39pJO</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;">bovize</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">33</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">37</td>
<td style="text-align: left;">2U4GBNA0YmnNZYkzjkfgN4ev-hXSrak_BSey_YEG6kIuDG9fxFrrePqnqiM39pJO</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;">sbahn87</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">33</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: right;">37</td>
<td style="text-align: left;">2U4GBNA0YmnNZYkzjkfgN4ev-hXSrak_BSey_YEG6kIuDG9fxFrrePqnqiM39pJO</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">GeminiZZZ</td>
<td style="text-align: right;">14</td>
<td style="text-align: right;">11</td>
</tr>
</tbody>
</table>
</div>
<p>Source: <a href="https://github.com/OpenDebates/openskill.py/blob/main/benchmark/data/pubg.7z" rel="nofollow noreferrer">Full Dataset - Compressed 126MB, Decompressed 1.18GB</a></p>
<p>I need to create a new column called <code>weights</code> where each row is a number between 0 and 1. It needs to be calculated as the player's kill count (<code>player_kills</code>) divided by the total number of kills for that player's team.</p>
<h2>My Attempt</h2>
<p>My initial thought was to create a new column called <code>total_kills</code> from a groupby aggregation sum. Then it's easy to create the <code>weights</code> column, where each row is simply <code>player_kills</code> divided by <code>total_kills</code>. This is the code so far to calculate the groupby sum:</p>
<pre class="lang-py prettyprint-override"><code>import dask.dataframe as dd
from dask.diagnostics import ProgressBar

df = dd.read_csv("pubg.csv")
print(df.compute().head().to_markdown())

total_kills = df.groupby(
    ['match_id', 'team_id']
).aggregate({"player_kills": 'sum'}).reset_index()
print(total_kills.compute().head().to_markdown())
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">match_id</th>
<th style="text-align: right;">team_id</th>
<th style="text-align: right;">player_kills</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2U4GBNA0YmnNZYkzjkfgN4ev-hXSrak_BSey_YEG6kIuDG9fxFrrePqnqiM39pJO</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">2U4GBNA0YmnNZYkzjkfgN4ev-hXSrak_BSey_YEG6kIuDG9fxFrrePqnqiM39pJO</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">2U4GBNA0YmnNZYkzjkfgN4ev-hXSrak_BSey_YEG6kIuDG9fxFrrePqnqiM39pJO</td>
<td style="text-align: right;">14</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">2U4GBNA0YmnNZYkzjkfgN4ev-hXSrak_BSey_YEG6kIuDG9fxFrrePqnqiM39pJO</td>
<td style="text-align: right;">15</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">2U4GBNA0YmnNZYkzjkfgN4ev-hXSrak_BSey_YEG6kIuDG9fxFrrePqnqiM39pJO</td>
<td style="text-align: right;">17</td>
<td style="text-align: right;">1</td>
</tr>
</tbody>
</table>
</div>
<p>So far, so good. But trying to assign the aggregated <code>player_kills</code> column back to <code>df</code> as <code>total_kills</code> with this line of code doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>df['total_kills'] = total_kills['player_kills']
</code></pre>
<p>It produces this error:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "C:\Users\taven\PycharmProjects\openskill.py\benchmark\data\process.py", line 11, in <module>
df['total_kills'] = total_kills['player_kills']
~~^^^^^^^^^^^^^^^
File "C:\Users\taven\PycharmProjects\openskill.py\benchmark\venv\3.11\Lib\site-packages\dask\dataframe\core.py", line 4952, in __setitem__
df = self.assign(**{key: value})
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\taven\PycharmProjects\openskill.py\benchmark\venv\3.11\Lib\site-packages\dask\dataframe\core.py", line 5401, in assign
data = elemwise(methods.assign, data, *pairs, meta=df2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\taven\PycharmProjects\openskill.py\benchmark\venv\3.11\Lib\site-packages\dask\dataframe\core.py", line 6505, in elemwise
args = _maybe_align_partitions(args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\taven\PycharmProjects\openskill.py\benchmark\venv\3.11\Lib\site-packages\dask\dataframe\multi.py", line 176, in _maybe_align_partitions
dfs2 = iter(align_partitions(*dfs)[0])
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\taven\PycharmProjects\openskill.py\benchmark\venv\3.11\Lib\site-packages\dask\dataframe\multi.py", line 130, in align_partitions
raise ValueError(
ValueError: Not all divisions are known, can't align partitions. Please use `set_index` to set the index.
</code></pre>
<p>How do I solve this problem?</p>
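<p>For reference, this is the behaviour I'm after, shown on a small pandas frame with a groupby <code>transform</code> (I assume Dask has an equivalent, perhaps via a merge, but I haven't found the right incantation):</p>

```python
import pandas as pd

# toy stand-in for the PUBG data: one match, two teams
small = pd.DataFrame({
    "match_id": ["m1", "m1", "m1", "m1"],
    "team_id": [4, 4, 5, 5],
    "player_kills": [2, 2, 1, 0],
})

# per-row total kills of that row's team, aligned to the original index
small["total_kills"] = small.groupby(["match_id", "team_id"])["player_kills"].transform("sum")
small["weights"] = small["player_kills"] / small["total_kills"]
```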
| <python><pandas><dataframe><dask><dask-dataframe> | 2023-08-27 09:12:25 | 1 | 954 | Vivek Joshy |
76,986,209 | 8,176,763 | Alembic fails to autogenerate computed columns (GENERATED ALWAYS) in Postgres | <p>I have a column as such in my model:</p>
<pre><code>class Mem(BaseMem,table=True):
id: Optional[int] = Field(sa_column=Column(Integer,Identity(always=True,start=1,cycle=True),primary_key=True,nullable=False))
it_service_instance: Optional[str] = Column(String(),Computed("CASE WHEN environment = 'PROD' THEN it_service ELSE it_service || '-' || environment END",persisted=True),index=True)
hostname: Optional[str] = Field(default=None,index=True)
patch_version: Optional[str] = Field(default=None,index=True)
product_alias: Optional[str] = Field(default=None,index=True)
</code></pre>
<p>When I try to autogenerate a migration with alembic, I get the following:</p>
<pre><code>op.add_column('mem', sa.Column('it_service_instance', sqlmodel.sql.sqltypes.AutoString(), nullable=True))
</code></pre>
<p>Can alembic autogenerate computed columns?</p>
<p>I am getting this warning:</p>
<pre><code>Computed default on mem.it_service_instance cannot be modified
util.warn("Computed default on %s.%s cannot be modified" % (tname, cname))
</code></pre>
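<p>For context, this is roughly the column definition I expected autogenerate to pick up; building it directly in SQLAlchemy does carry the <code>Computed</code> construct, so I assume the information is getting lost somewhere on the SQLModel side:</p>

```python
import sqlalchemy as sa

# what I expected the migration to be generated from
expected = sa.Column(
    "it_service_instance",
    sa.String(),
    sa.Computed(
        "CASE WHEN environment = 'PROD' THEN it_service "
        "ELSE it_service || '-' || environment END",
        persisted=True,
    ),
    nullable=True,
)
```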
| <python><sqlalchemy><alembic><sqlmodel> | 2023-08-27 08:07:19 | 0 | 2,459 | moth |
76,986,042 | 3,685,918 | How to apply ffill() conditionally per group in pandas | <p>I'd like to fill in previous values, like <code>ffill()</code>, but conditionally.</p>
<p>For example, if the <code>day</code> column is <code>None</code>, I want to forward-fill <code>close</code> within each group of a <code>groupby</code>.</p>
<p>I attached example as below.</p>
<pre><code>df = pd.DataFrame({'name' : ['AAPL','AAPL','AAPL','AAPL','AAPL','AAPL','MSFT','MSFT','MSFT','MSFT','MSFT','MSFT'],
'day' : [None,'Fri', None, None, 'Mon', 'Thue', None,'Fri', None, None, 'Mon', 'Thue',],
'close' : [np.nan, 174.49, np.nan, np.nan, 175.84, np.nan, np.nan, 128.11, np.nan, np.nan, 128.93, np.nan]
})
# df
# Out[46]:
# AAPL None NaN
# AAPL Fri 174.49
# AAPL None NaN
# AAPL None NaN
# AAPL Mon 175.84
# AAPL Thue NaN
# MSFT None NaN
# MSFT Fri 128.11
# MSFT None NaN
# MSFT None NaN
# MSFT Mon 128.93
# MSFT Thue NaN
# What I want
# name day close
# AAPL None NaN
# AAPL Fri 174.49
# AAPL None 174.49 <- if 'day' is None then want to ffill()
# AAPL None 174.49 <- if 'day' is None then want to ffill()
# AAPL Mon 175.84
# AAPL Thue NaN
# MSFT None NaN
# MSFT Fri 128.11
# MSFT None 128.11 <- if 'day' is None then want to ffill()
# MSFT None 128.11 <- if 'day' is None then want to ffill()
# MSFT Mon 128.93
# MSFT Thue NaN
</code></pre>
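<p>One direction I've been experimenting with (not sure it's idiomatic): forward-fill per group first, then only take the filled value on rows where <code>day</code> is missing:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "name": ["AAPL"] * 6 + ["MSFT"] * 6,
    "day": [None, "Fri", None, None, "Mon", "Thue"] * 2,
    "close": [np.nan, 174.49, np.nan, np.nan, 175.84, np.nan,
              np.nan, 128.11, np.nan, np.nan, 128.93, np.nan],
})

# forward-fill within each name group, then keep it only where day is missing
filled = df.groupby("name")["close"].ffill()
df["close"] = df["close"].mask(df["day"].isna(), filled)
```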
| <python><pandas><dataframe> | 2023-08-27 07:13:59 | 1 | 427 | user3685918 |
76,985,993 | 1,609,514 | Find the sub-set of unique objects in a list which contains duplicate references to the objects | <p>I have an algorithm that creates a set of lists in the values of a dictionary. However, the number of lists is less than the number of dictionary keys because some of the values are references to the same list object.</p>
<p>When the algorithm is complete, I want to extract a list of only the remaining unique list objects. I want to avoid having to compare two lists since this is inefficient.</p>
<p>I came up with this way to do it using the <a href="https://docs.python.org/3/library/functions.html#id" rel="nofollow noreferrer">id</a> function. Maybe this is fine but I wasn't sure if this is an appropriate use of id and wondering if there's a simpler way to do it.</p>
<pre class="lang-python prettyprint-override"><code># Start with a full set of unique lists
groups = {i: [i] for i in range(1, 6)}
# Algorithm joins some of the lists together which
# reduces the total number
for a, b in [(1, 2), (2, 4)]:
groups[a] += groups[b]
groups[b] = groups[a]
print(groups)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>{1: [1, 2, 4], 2: [1, 2, 4], 3: [3], 4: [1, 2, 4], 5: [5]}
</code></pre>
<p>Note: Now some of the values of the dictionary contain the same list, <code>[1, 2, 4]</code>.</p>
<pre class="lang-python prettyprint-override"><code># Find the remaining unique lists
all_values = list(groups.values())
ids = [id(x) for x in all_values]
result = [all_values[ids.index(a)] for a in set(ids)]
print(result)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>[[1, 2, 4], [3], [5]]
</code></pre>
<p>This is a common question in other languages but I couldn't find one on how to do this in Python.</p>
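<p>For what it's worth, the shortest variant I've found so far builds a dict keyed by <code>id</code> and takes its values — though I'm still not sure whether keying on <code>id</code> like this is an appropriate use:</p>

```python
groups = {i: [i] for i in range(1, 6)}
for a, b in [(1, 2), (2, 4)]:
    groups[a] += groups[b]
    groups[b] = groups[a]

# dict keys deduplicate by object identity; values() keeps first-seen order
result = list({id(v): v for v in groups.values()}.values())
```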
| <python><list><object><unique> | 2023-08-27 06:51:42 | 1 | 11,755 | Bill |
76,985,804 | 8,176,763 | alembic and sqlalchemy server default problem | <p>I have a table structure as such, omitting some columns for clarity:</p>
<pre><code>from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
import sqlmodel
op.create_table('global_evg_report_history',
sa.Column('issue_status', sa.String(), server_default=sa.text('wrongly_closed'), nullable=True),
sa.PrimaryKeyConstraint('plada_service_alias', 'run_date', 'technology_version'),
sa.UniqueConstraint('plada_service_alias', 'technology_version', 'issue_id', 'run_date')
)
</code></pre>
<p>But after trying a migration with alembic I get the following error:</p>
<pre><code>sqlalchemy.exc.NotSupportedError: (psycopg2.errors.FeatureNotSupported) cannot use column reference in DEFAULT expression
issue_status VARCHAR DEFAULT wrongly_closed,
</code></pre>
<p>I have tried changing the quoting to double quotes, but it still fails with the same error.</p>
| <python><sqlalchemy> | 2023-08-27 05:19:37 | 1 | 2,459 | moth |
76,985,802 | 1,187,936 | Json normalize meta issue | <p>I am trying to convert a nested JSON to a dataframe. The JSON file is deeply nested and I am used the meta parameter to specify the nested structure so that all the JSON attributes can be stored as rows and columns- but I am getting the following error:</p>
<p>TypeError: string indices must be integers</p>
<p>Did I not set the meta parameter properly?</p>
<pre><code>import requests
import pandas as pd
import json
from pandas import json_normalize
requestObject=requests.get("https://dummyjson.com/carts")
#print(requestObject)
requestObject_JSON=requestObject.json()
requestObject_JSON_DF_2=pd.json_normalize(requestObject_JSON,record_path='carts',meta=['id','products'['id','title','price','quantity','total','discountPercentage','discountedPrice'],'total','discountedTotal','userId','totalProducts','totalQuantity'])
print(requestObject_JSON_DF_2)
</code></pre>
<pre class="lang-py prettyprint-override"><code>data =\
{'carts': [{'discountedTotal': 1941,
'id': 1,
'products': [{'discountPercentage': 8.71,
'discountedPrice': 55,
'id': 59,
'price': 20,
'quantity': 3,
'title': 'Spring and summershoes',
'total': 60},
{'discountPercentage': 3.19,
'discountedPrice': 56,
'id': 88,
'price': 29,
'quantity': 2,
'title': 'TC Reusable Silicone Magic Washing Gloves',
'total': 58},
{'discountPercentage': 13.1,
'discountedPrice': 70,
'id': 18,
'price': 40,
'quantity': 2,
'title': 'Oil Free Moisturizer 100ml',
'total': 80},
{'discountPercentage': 17.67,
'discountedPrice': 766,
'id': 95,
'price': 930,
'quantity': 1,
'title': 'Wholesale cargo lashing Belt',
'total': 930},
{'discountPercentage': 17.2,
'discountedPrice': 994,
'id': 39,
'price': 600,
'quantity': 2,
'title': 'Women Sweaters Wool',
'total': 1200}],
'total': 2328,
'totalProducts': 5,
'totalQuantity': 10,
'userId': 97},
{'discountedTotal': 1942,
'id': 2,
'products': [{'discountPercentage': 8.71,
'discountedPrice': 55,
'id': 59,
'price': 20,
'quantity': 3,
'title': 'Spring and summershoes',
'total': 60},
{'discountPercentage': 3.19,
'discountedPrice': 56,
'id': 88,
'price': 29,
'quantity': 2,
'title': 'TC Reusable Silicone Magic Washing Gloves',
'total': 58},
{'discountPercentage': 13.1,
'discountedPrice': 70,
'id': 18,
'price': 40,
'quantity': 2,
'title': 'Oil Free Moisturizer 100ml',
'total': 80},
{'discountPercentage': 17.67,
'discountedPrice': 766,
'id': 95,
'price': 930,
'quantity': 1,
'title': 'Wholesale cargo lashing Belt',
'total': 930},
{'discountPercentage': 17.2,
'discountedPrice': 994,
'id': 39,
'price': 600,
'quantity': 2,
'title': 'Women Sweaters Wool',
'total': 1200}],
'total': 2328,
'totalProducts': 5,
'totalQuantity': 10,
'userId': 98}
]}
</code></pre>
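<p>For a smaller repro, this variant (pointing <code>record_path</code> at the nested products, pulling cart-level fields in via <code>meta</code>, and prefixing to avoid the clashing <code>id</code> columns) runs without the error — but I'm not sure it's the intended way to use <code>meta</code>:</p>

```python
import pandas as pd

# trimmed copy of the payload above
mini = {"carts": [
    {"id": 1, "userId": 97, "total": 2328,
     "products": [{"id": 59, "title": "Spring and summershoes", "price": 20},
                  {"id": 88, "title": "TC Reusable Silicone Magic Washing Gloves", "price": 29}]},
    {"id": 2, "userId": 98, "total": 2328,
     "products": [{"id": 39, "title": "Women Sweaters Wool", "price": 600}]},
]}

flat = pd.json_normalize(
    mini["carts"],
    record_path="products",          # one row per product
    meta=["id", "userId", "total"],  # cart-level fields repeated on each row
    record_prefix="product_",        # keeps product id separate from cart id
)
```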
| <python><json><pandas><dictionary><json-normalize> | 2023-08-27 05:18:58 | 1 | 737 | Nidhin_toms |
76,985,504 | 3,579,144 | Poetry: Could not find a matching version of package abnumber | <p>I am new to the Poetry dependency manager in python and am trying to transition my repo to use it.</p>
<p>So far, most things have gone well, but one package it can't seem to resolve is <code>abnumber</code>, which is found here:</p>
<p><a href="https://anaconda.org/bioconda/abnumber" rel="nofollow noreferrer">https://anaconda.org/bioconda/abnumber</a></p>
<p>When I try to run:</p>
<pre><code>poetry add abnumber
</code></pre>
<p>I get:</p>
<pre><code>Could not find a matching version of package abnumber
</code></pre>
<p>Any ideas what I might be doing wrong here? I have tried manually specifying version numbers both in the command line and in the <code>pyproject.toml</code> file but I get the same result.</p>
| <python><python-poetry> | 2023-08-27 02:24:06 | 1 | 1,541 | Vranvs |
76,985,421 | 485,330 | How can I use Python to take a screenshot of a webpage given its URL? | <p>Is there a method to capture a screenshot of a website's main page using the CLI?</p>
<p>I know that Selenium offers this capability, but my code runs on a GUI-less Linux server.</p>
<p>I tried the url2png API as an alternative, but I want a native implementation instead.</p>
| <python> | 2023-08-27 01:31:42 | 0 | 704 | Andre |
76,985,399 | 8,842,262 | How to enable type hint for inherited Column class in SQLAlchemy? | <p>While working with FastAPI and SQLAlchemy 2.0, I understood that the default field type is nullable. As I have many non-nullable fields, I did something like this:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Column
class NotNullColumn(Column):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs, nullable=False)
@property
def _constructor(self):
return Column
</code></pre>
<p>So it is easier for me to implement non-nullable fields. Although the logic works fine, I came to realize that type hinting fails for this class. I have installed SQLAlchemy with mypy, and I have the following model:</p>
<pre class="lang-py prettyprint-override"><code>class Notification(Base):
__tablename__ = "core_notification"
id: int = Column(Integer, primary_key=True, autoincrement=True)
user_id: int = Column(ForeignKey("auth_user.id", ondelete="CASCADE"))
user: "User" = relationship("User", backref="notifications")
message = Column(Text)
sent_at = Column(DateTime, default=func.now())
status = NotNullColumn(Boolean)
</code></pre>
<p>While the type of <code>message</code> or <code>sent_at</code> shows <code>Column[str]</code> and <code>Column[datetime]</code> respectively, the type for status shows <code>NotNullColumn</code> (without any type). Because of this, mypy complains when I try to assign a boolean to the <code>status</code> field, even though other fields work just fine.</p>
<p>P. S. I cannot manually write <code>status: bool = NotNullColumn(Boolean)</code>, as mypy complains again, because of the mismatch of types.</p>
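<p>A workaround I'm weighing (not sure whether it defeats the purpose of subclassing): a plain factory function instead of a subclass, so mypy just sees an ordinary <code>Column</code>:</p>

```python
from typing import Any

from sqlalchemy import Boolean, Column


def not_null_column(*args: Any, **kwargs: Any) -> Column:
    """Build a regular Column with nullable forced to False."""
    kwargs["nullable"] = False
    return Column(*args, **kwargs)


status = not_null_column(Boolean)
```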
| <python><sqlalchemy><fastapi><python-typing><mypy> | 2023-08-27 01:15:43 | 0 | 513 | Miradil Zeynalli |
76,985,361 | 8,430,792 | DRF Create list of objects even when some of them not valid | <p>I'm currently working on a DRF project where I'm dealing with a scenario where I receive a list of objects through a POST request. The challenge I'm facing is that this list might contain some objects that are not valid according to certain criteria. However, I need to go ahead and create all the valid objects, and then perform a bulk creation of all these valid objects in one go.</p>
<p>I've tried a few approaches, but I'm running into issues with properly validating and handling the objects; in particular, with <code>many=True</code>, if one of the items is not valid then no items are created at all.</p>
<p>Has anyone encountered a similar situation or can provide insights on how to approach this? Any guidance on how to correctly structure the validation, filtering, and bulk creation process would be greatly appreciated! Thank you in advance.</p>
<p>Here is the code I've managed to write, but it doesn't look right to iterate over objects in the view:</p>
<pre><code>@action(methods=['post'], detail=False, url_path='custom_method')
def custom_method(self, request):
created = []
for item in request.data:
serializer = self.get_serializer(data=item)
try:
serializer.is_valid(raise_exception=True)
except ValidationError:
# ... make log
continue
self.perform_create(serializer)
created.append(serializer.data.get('id'))
# custom action for created objects
...
return Response(...)
</code></pre>
| <python><django><django-rest-framework> | 2023-08-27 00:50:36 | 0 | 881 | karambaq |
76,985,169 | 1,850,007 | Why does Treeview in tkinter change position and column width when I load data from Excel? | <p>The problem I have is that after updating a Treeview object, the width of the column changes. This is the code I have written. I am not sure why uploading changes how the table looks. In particular, it seems to me that the code for populate_treeview is identical to the way I populated it in the main function.</p>
<pre><code>import tkinter as tk
from tkinter import ttk, filedialog
import pandas as pd
class EdibleTreeviewWithUpload(tk.Frame):
def __init__(self, application, *args, **kwargs):
tk.Frame.__init__(self, *args, **kwargs)
self.tree = ttk.Treeview(self, show='headings', selectmode='browse')
self.tree.grid(row=0, column=0)
self.open_button = tk.Button(self, text="Open File", command=self.open_file)
self.open_button.grid(row=1, column=0)
def open_file(self):
file_path = filedialog.askopenfilename(filetypes=[
("Excel files", "*.xlsx *.xlsb *.xlsm"),
("CSV files", "*.csv")
])
if file_path:
if file_path.lower().endswith((".xlsx", ".xlsb", ".xlsm", ".csv")):
df = pd.read_excel(file_path) if file_path.lower().endswith(
(".xlsx", ".xlsm", ".xlsb")) else pd.read_csv(file_path)
self.clear_treeview()
self.populate_treeview(df)
def clear_treeview(self):
for item in self.tree.get_children():
self.tree.delete(item)
def populate_treeview(self, df):
columns = df.columns.tolist()
self.tree["columns"] = columns
for idx, col in enumerate(columns):
self.tree.heading(f"#{idx+1}", text=col)
self.tree.column(f"#{idx+1}", anchor=tk.CENTER, width=100)
for index, row in df.iterrows():
values = [str(row[col]) for col in columns]
self.tree.insert("", "end", values=values)
if __name__ == "__main__":
root = tk.Tk()
root.title("Editable TreeView Table")
tree = EdibleTreeviewWithUpload(root)
tree.tree['columns'] = ('Name', 'Age', 'Occupation')
tree.tree.heading('#1', text='Name')
tree.tree.heading('#2', text='Age')
tree.tree.heading('#3', text='Occupation')
tree.tree.column('#1', anchor=tk.CENTER, width=100)
tree.tree.column('#2', anchor=tk.CENTER, width=100)
tree.tree.column('#3', anchor=tk.CENTER, width=100)
# Sample data
data = [
("John Doe", "30", "Engineer"),
("Jane Smith", "25", "Teacher"),
("Bob Johnson", "45", "Doctor"),
]
for item in data:
tree.tree.insert('', 'end', values=item)
tree.grid(column=1, row=1)
root.mainloop()
</code></pre>
<p>This is how it looks like <a href="https://i.sstatic.net/GdQaf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GdQaf.png" alt="Before" /></a>,</p>
<p>comparing to how it looks <a href="https://i.sstatic.net/N8VdB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N8VdB.png" alt="After" /></a>.</p>
| <python><tkinter> | 2023-08-26 23:14:40 | 0 | 1,062 | Lost1 |
76,985,151 | 3,520,791 | Number of distinct islands | <p>There is a leetcode question as:</p>
<blockquote>
<p>You are given an m x n binary matrix grid. An island is a group of 1's (representing land) connected 4-directionally (horizontal or vertical.) You may assume all four edges of the grid are surrounded by water.</p>
<p>An island is considered to be the same as another if they have the same shape, or have the same shape after rotation (90, 180, or 270 degrees only) or reflection (left/right direction or up/down direction).</p>
<p>Return the number of distinct islands.</p>
</blockquote>
<p>I came up with a solution and it passes 508 test cases out of 510. I believe my solution should work and not sure where do i exactly miss, and was wondering if anyone can add some insights or fix my code?</p>
<p>following is my solution:</p>
<pre><code>def numDistinctIslands2(self, grid: List[List[int]]) -> int:
islands = []
visited = set()
def dfs(i, j, curr_island): # standard dfs to find all islands
if (
0 <= i < len(grid)
and 0 <= j < len(grid[0])
and grid[i][j] == 1
and (i, j) not in visited
):
visited.add((i, j))
curr_island.append((i, j))
dfs(i + 1, j, curr_island)
dfs(i - 1, j, curr_island)
dfs(i, j + 1, curr_island)
dfs(i, j - 1, curr_island)
for i in range(len(grid)): # loop over the entire grid
for j in range(len(grid[0])):
if grid[i][j] == 1 and (i, j) not in visited:
curr_island = []
dfs(i, j, curr_island)
islands.append(curr_island)
distinct_islands = set() # keep the representative of all similar islands
def get_distinct_islands(islands):
for island in islands:
island_signature_x = defaultdict(int) # x(row) signature of island
island_signature_y = defaultdict(int) # y(col) signature of island
for x, y in island:
# calculate x signature (number of "1" in each row of the island)
island_signature_x[x] += 1
# calculate y signature (number of "1" in each column of the island)
island_signature_y[y] += 1
x_val = []
# loop through sorted (we need to have the orders intact) signature of rows # ex. [1, 2, 1] is different than [1, 1, 2]
for k, v in sorted(island_signature_x.items(), key=lambda i: i[0]):
x_val.append(str(v))
x_sign = ".".join(x_val) # string of x signature
x_sign_r = ".".join(x_val[::-1]) # reverse string of x signature
y_val = []
# loop through sorted (we need to have the orders intact) signature of columns
for k, v in sorted(island_signature_y.items(), key=lambda i: i[0]):
y_val.append(str(v))
y_sign = ".".join(y_val) # string of y signature
y_sign_r = ".".join(y_val[::-1]) # reverse string of y signature
# if none of the rotations/reflections (8 possibilities) are registered then register it as a new island
if (
(x_sign, y_sign) not in distinct_islands
and (x_sign, y_sign_r) not in distinct_islands
and (x_sign_r, y_sign) not in distinct_islands
and (x_sign_r, y_sign_r) not in distinct_islands
and (y_sign, x_sign) not in distinct_islands
and (y_sign, x_sign_r) not in distinct_islands
and (y_sign_r, x_sign) not in distinct_islands
and (y_sign_r, x_sign_r) not in distinct_islands
):
distinct_islands.add((x_sign, y_sign))
get_distinct_islands(islands)
return len(distinct_islands)
</code></pre>
<p>An input example follows: my algorithm returns 68, but the expected answer is 69. It's the 509th out of 510 test cases, and all previous (508) test cases pass:</p>
<pre><code> [[0,1,0,1,0,1,1,1,1,0,1,1,1,1,0,0,0,1,1,0,1,1,1,1,1,0,0,0,1,1,1,1,0,0,1,1,1,0,1,1,1,0,1,0,0,0,1,1,1,1],[0,1,1,0,1,1,0,1,1,1,0,0,0,0,0,1,1,0,1,1,1,1,0,1,0,1,1,1,1,1,1,1,0,1,1,0,1,1,1,1,1,0,1,1,0,1,0,0,1,1],[0,0,1,1,0,0,0,0,0,1,1,1,0,0,0,0,0,1,1,0,1,0,0,1,0,0,1,1,1,1,1,0,0,1,0,0,1,0,0,0,0,0,1,0,0,1,0,1,1,1],[1,0,0,0,0,1,1,0,1,1,0,0,1,0,1,0,0,1,0,1,1,1,1,1,0,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,0,1,1,0,1,0,1,0,0],[0,0,1,0,0,1,1,1,1,1,1,1,0,0,1,0,1,1,1,0,0,0,1,0,0,0,0,1,0,0,1,1,1,0,1,1,1,0,1,1,0,1,1,1,0,1,1,0,1,1],[1,1,1,0,0,1,1,1,0,0,1,0,0,0,0,0,0,0,1,1,0,1,1,0,1,1,0,1,0,1,0,1,0,0,0,0,0,0,0,0,1,1,0,1,1,0,1,1,1,0],[0,0,0,0,1,1,0,1,0,0,0,1,1,1,1,0,1,0,1,1,1,0,0,1,0,1,1,1,1,1,0,0,0,0,1,0,1,0,1,1,1,1,1,1,0,0,1,1,1,0],[0,0,1,1,0,1,0,1,1,0,0,0,1,0,1,0,1,1,1,0,0,0,1,1,0,0,0,0,0,1,1,0,1,0,1,1,1,1,0,1,1,0,0,1,0,1,0,1,1,0],[1,1,1,0,0,1,0,1,1,0,0,1,1,1,0,1,0,1,1,0,0,1,0,0,1,1,0,0,0,1,0,1,0,0,1,1,0,1,0,0,0,0,1,0,0,1,1,1,1,0],[1,0,0,0,1,1,0,0,1,0,0,1,0,1,1,1,1,0,0,1,1,1,1,0,0,1,1,0,1,1,1,0,1,0,0,0,0,0,0,1,0,1,0,0,0,1,0,0,0,1],[0,1,1,1,0,0,1,1,1,0,0,0,1,1,0,1,0,0,0,0,1,0,0,1,1,0,0,0,0,0,0,0,1,0,1,1,1,0,0,0,0,0,1,1,0,0,0,0,0,1],[0,1,0,0,0,1,1,0,1,0,0,0,1,1,1,0,1,0,0,0,1,0,0,0,0,0,0,1,1,1,0,1,0,1,0,1,1,1,1,1,0,0,1,0,1,0,0,1,1,1],[1,1,0,1,1,1,0,0,1,1,0,1,0,0,0,0,0,0,0,1,0,1,1,0,1,1,0,0,1,0,0,1,0,1,1,1,0,1,0,1,0,1,1,1,0,0,1,0,0,1],[0,1,1,0,0,1,1,0,0,0,0,1,0,1,0,1,1,1,0,1,0,1,1,1,0,1,0,0,0,0,0,1,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,1],[0,0,1,0,0,0,0,1,1,1,1,1,0,1,1,0,1,0,1,0,0,1,1,1,0,1,0,0,0,0,1,0,1,1,0,0,1,1,1,0,0,0,0,0,1,1,1,1,1,0],[0,0,0,1,0,0,0,1,1,0,1,0,1,1,1,1,0,1,0,0,1,0,0,1,0,0,1,0,0,1,0,0,0,1,1,1,0,0,1,0,1,0,1,1,0,0,1,1,1,1],[1,1,1,0,0,0,0,1,1,1,1,0,1,1,0,1,1,1,1,0,0,0,1,1,1,1,1,0,0,0,0,1,1,0,0,0,1,0,0,1,1,1,1,0,1,1,0,1,1,0],[1,0,0,1,1,0,0,1,0,1,1,1,0,1,0,1,1,0,0,0,1,1,1,1,1,0,0,1,1,1,1,1,0,1,1,0,1,1,0,0,0,1,1,1,0,1,0,1,1,0],[1,0,1,0,0,0,0,1,0,1,1,0,0,0,1,0,0,0,0,1,1,0,0,1,1,0,0,1,1,0,1,0,0,1,0,1,1,0,0,0,0,0,0,0,0,1,1,1,1,1],[1,0,0,1,0,1,0,1,0,0,1,0,1,1,1,0,1,0,0,0,0,0,1,1,
1,0,1,0,1,0,1,1,0,1,1,1,0,1,1,0,0,0,0,1,0,0,1,0,1,1],[1,1,0,0,1,1,1,0,0,0,1,0,1,0,0,0,1,1,0,1,1,1,1,1,0,1,1,1,0,0,1,0,1,1,1,1,1,0,0,0,0,1,1,0,1,1,1,1,0,0],[1,0,0,1,1,0,1,0,1,0,0,1,0,1,0,1,1,1,0,0,0,0,0,1,1,0,1,1,0,0,0,0,1,0,1,0,0,1,1,0,1,1,1,0,0,1,0,0,0,1],[1,1,1,0,0,1,0,1,0,0,0,1,0,1,1,1,1,1,0,0,0,0,0,0,0,0,1,1,0,1,0,0,0,0,1,0,0,1,0,1,0,1,0,1,0,0,1,0,1,0],[1,0,1,1,1,0,1,0,0,0,1,1,0,1,1,1,1,1,0,0,1,1,1,0,0,1,1,1,1,0,0,1,1,1,0,1,1,0,0,1,1,1,0,0,0,1,1,0,0,1],[1,0,0,1,1,0,0,1,0,0,0,1,0,0,1,1,1,1,0,0,1,1,1,0,0,0,0,1,1,0,1,1,1,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0],[1,1,1,0,0,1,0,1,1,0,1,0,0,1,1,1,1,1,1,0,1,1,0,1,0,1,0,1,0,0,1,0,0,0,1,0,1,1,1,1,0,1,1,0,1,1,0,0,0,0],[1,1,0,1,0,1,1,1,0,0,1,1,0,1,0,0,1,1,1,1,0,1,1,1,1,1,0,1,1,1,0,0,0,1,0,0,0,1,1,1,0,0,1,1,1,0,1,1,0,0],[1,0,1,1,0,1,1,0,0,1,0,1,1,0,1,1,0,0,0,1,1,0,1,0,0,1,0,1,0,0,1,0,0,1,0,1,0,0,0,0,1,0,0,0,0,1,0,1,1,1],[0,1,1,0,1,1,1,1,0,0,1,0,1,0,0,0,1,0,0,1,1,1,1,0,1,1,1,1,1,0,0,1,0,0,1,0,1,1,1,1,0,0,0,0,1,1,1,0,0,1],[0,0,0,0,0,1,1,0,0,0,1,1,0,1,1,0,0,1,0,0,0,1,1,0,1,0,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,0,0,0,0,1,1,1,1,0],[0,1,0,1,1,0,1,0,1,1,0,0,1,1,1,0,0,1,1,1,0,0,0,0,1,0,1,1,0,0,0,0,0,1,0,0,1,0,1,0,1,0,1,1,1,0,1,1,0,0],[0,0,1,1,0,0,1,0,0,1,1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,0,0,0,1,1,1,1,1,1,1,1,0,1,1,0,0,1,0,0,0,0,0,0,1,0],[1,0,1,0,0,1,1,1,0,0,1,1,1,0,1,1,0,0,0,1,0,0,1,1,1,1,1,0,0,1,1,1,1,0,0,0,1,0,0,1,0,0,0,0,1,0,0,1,0,1],[0,0,0,0,1,0,0,0,1,1,1,0,0,1,1,1,1,0,0,0,1,0,0,0,0,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,1,1,1,0,1,0,1],[1,0,1,1,0,0,1,1,1,1,0,1,1,1,1,1,1,0,0,1,1,1,0,1,0,0,1,0,0,1,1,1,0,0,0,0,1,0,0,0,0,0,1,1,1,0,1,0,1,1],[0,1,0,0,0,0,1,1,1,0,1,0,0,1,0,1,0,0,1,0,1,1,1,0,0,1,0,1,1,0,1,0,1,0,1,1,1,0,1,1,1,1,0,1,1,0,0,1,0,0],[1,0,0,1,1,1,1,0,1,0,1,0,1,1,1,1,1,1,1,1,0,1,1,0,0,0,1,1,0,1,1,0,0,0,0,1,1,0,0,0,1,1,0,0,0,1,1,1,1,0],[0,1,1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,0,0,1,0,1,1,0,0,1,0,1,0,1,1,0,1,1,1,0,0,1,0,0,0,0,1,0,0,1,0,0,1,0],[0,0,1,0,0,0,1,1,0,1,0,0,0,0,0,0,1,1,1,1,1,1,0,1,0,1,0,1,0,1,0,1,0,0,0,1,0,1,1,0,1,0,1,0,1,0,0,1,1,1],[0,0,0,1,
0,1,1,1,1,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,0,1,0,1,0,1,0,0,0,1,1,0,1,0,0,0,0,1,1,1,0,1,1],[0,0,1,0,0,1,1,0,0,1,1,0,0,0,0,1,1,1,0,1,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,1,0,1,0,0,0,1,1,0,1,1,1,0,0,0],[1,0,0,1,0,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,0,0,1,0,0,1,0,0,1,0,1,0,0,1,0,1,1,0,0,1,0,1,0,1,1,1,1,1,1,0],[0,1,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,1,0,1,0,0,0,1,0,1,0,0,1,1,1,1,0,0,1,0,1,1,0,0,1,1,1,1,1,1,1,1,0,0],[1,0,1,0,1,1,0,0,0,1,1,0,1,1,1,1,0,1,0,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,1,1,0,0,0,1,1,1,1,1,1,0,1,0],[1,1,0,1,0,1,0,1,0,1,0,0,1,0,1,1,1,1,0,1,1,0,1,0,0,0,1,0,1,1,0,0,0,1,0,0,0,0,1,0,1,0,0,1,0,0,0,1,0,0],[1,0,0,1,0,1,0,0,1,0,0,0,0,0,1,0,1,1,0,1,0,1,0,1,0,0,1,0,0,1,0,0,0,1,0,1,0,1,1,0,0,0,1,0,0,0,1,0,0,1],[1,1,0,0,0,1,0,0,0,0,1,1,0,0,0,1,0,1,0,0,0,0,1,1,0,1,1,1,0,0,1,1,0,0,1,1,1,1,0,0,0,1,1,1,0,0,0,1,1,0],[1,1,0,1,0,1,1,0,0,0,0,1,1,1,1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,1,1,0,1,1,0,0,1,0,1,0,1,1,0,1,0,1,0,0],[1,0,0,1,0,1,0,1,0,0,0,1,1,0,0,1,0,1,0,1,0,1,0,1,1,1,0,0,1,1,1,1,1,0,1,0,1,0,0,1,0,0,1,0,1,0,0,0,0,0],[1,0,1,1,1,0,0,1,1,1,0,0,0,1,1,1,1,1,0,0,1,0,1,1,0,1,1,1,1,0,0,0,0,1,0,1,0,1,1,0,0,1,0,0,0,0,0,0,1,1]]
</code></pre>
<p>68 unique islands that my algorithm finds:</p>
<p><a href="https://i.sstatic.net/sXTBw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sXTBw.png" alt="enter image description here" /></a></p>
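<p>For reference, I know the more standard approach normalizes each island's cell set under all 8 rotations/reflections and keeps one canonical representative — something like the sketch below — but I was hoping the signature idea would avoid transforming every cell 8 times:</p>

```python
def canonical(cells):
    """Lexicographically smallest normalized form over the 8 symmetries."""
    transforms = [
        lambda x, y: (x, y),  lambda x, y: (x, -y),
        lambda x, y: (-x, y), lambda x, y: (-x, -y),
        lambda x, y: (y, x),  lambda x, y: (y, -x),
        lambda x, y: (-y, x), lambda x, y: (-y, -x),
    ]
    shapes = []
    for t in transforms:
        pts = [t(x, y) for x, y in cells]
        min_x = min(x for x, _ in pts)
        min_y = min(y for _, y in pts)
        # shift to the origin so translation doesn't matter, then sort
        shapes.append(tuple(sorted((x - min_x, y - min_y) for x, y in pts)))
    return min(shapes)


# an L-tromino and a rotated/reflected copy normalize identically
a = canonical([(0, 0), (1, 0), (1, 1)])
b = canonical([(0, 0), (0, 1), (1, 0)])
```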
| <python><algorithm><dictionary><depth-first-search> | 2023-08-26 23:05:33 | 3 | 469 | Alan |
76,985,073 | 14,492,001 | Why does casting a column with numeric Categorical datatype to an integer in Polars result in unexpected behavior? | <p>I have a <code>Categorical</code> column named <code>decile</code> in my polars DataFrame <code>df</code>, with its values ranging from "01" to "10". When attempting to convert that column into a numerical representation via:
<code>df.with_columns(pl.col('decile').cast(pl.Int8))</code>, the casted values are not mapped as expected (i.e., "01" doesn't get mapped to 1, and so on), and the range now also from 0 to 9, not 1 to 10.</p>
<p>The weird thing is that no matter what the original values of the column <code>decile</code> were, they always get mapped unexpectedly, into the range [0, 9], when casting to an integer datatype.</p>
<p>I am trying to cast the values into integer datatype for plotting purposes.</p>
<p>Here is a minimal reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import polars as pl

size = 1e3
df = pl.DataFrame({
"id": np.random.randint(50, size=int(size), dtype=np.uint16),
"amount": np.round(np.random.uniform(10, 100000, int(size)).astype(np.float32), 2),
"quantity": np.random.randint(1, 7, size=int(size), dtype=np.uint16),
})
df = (df
.group_by("id")
.agg(revenue=pl.sum("amount"), tot_quantity=pl.sum("quantity"))
)
df = (df.with_columns(
pl.col('revenue')
.qcut(10, labels=[f'q{i:02}' for i in range(10, 0, -1)])
.alias("decile")
))
</code></pre>
<p>How to have the casting be proper (as one would expect the values to be mapped), and in the same range as the original values?</p>
| <python><python-3.x><dataframe><python-polars> | 2023-08-26 22:24:44 | 1 | 1,444 | Omar AlSuwaidi |
76,984,731 | 14,509,604 | RAM usage rises after printing dataframe copy in pandas | <p>Whenever I slice a data frame with <code>loc</code>, <code>iloc</code>, <code>filter</code>, etc., it increases my RAM usage. What's happening? Why doesn't this just print the slice?
If it is being saved in <code>Out</code> as suggested <a href="https://stackoverflow.com/questions/67356512/memory-leak-in-python-jupyter-notebook">here</a>, how can I avoid that behavior?</p>
<p>I'm using jupyter notebook in vscode, <code>pandas==1.5.2</code>.</p>
| <python><pandas><jupyter><ram> | 2023-08-26 20:30:06 | 1 | 329 | juanmac |
76,984,560 | 8,030,794 | Pandas Dataframe add columns based on Date value | <p>I have <code>df</code> like this</p>
<pre><code> Open High Low Close Volume Date
3 25940.99 25972.27 25934.36 25938.65 176.08278 2023-08-23 02:20:00
4 25938.65 25938.65 25903.73 25921.67 67.24124 2023-08-23 02:25:00
5 25921.66 25963.83 25904.68 25951.51 83.37560 2023-08-23 02:30:00
6 25951.51 26011.07 25950.00 25998.00 119.22832 2023-08-23 02:35:00
7 25997.99 26050.50 25997.99 26015.14 242.94235 2023-08-23 02:40:00
</code></pre>
<p>Then I resampled <code>df</code> to 60 minutes as <code>df_resample</code>:</p>
<pre><code> Open High Low Close Volume
Date
2023-08-23 02:00:00 25940.99 26070.04 25903.73 26056.00 1055.06782
2023-08-23 03:00:00 26055.99 26187.99 26030.56 26040.38 2447.77226
2023-08-23 04:00:00 26040.38 26081.64 25994.12 26049.41 728.81260
2023-08-23 05:00:00 26049.41 26091.75 26033.00 26044.42 795.45411
2023-08-23 06:00:00 26044.41 26078.01 25990.15 26006.32 764.41941
</code></pre>
<p>How do I insert a new column <code>df_resample['High'] - df_resample['Low']</code> from <code>df_resample</code> into <code>df</code>, based on which hour each row's <code>Date</code> value falls in?</p>
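<p>The closest I've gotten is computing the hourly range and mapping it back through each row's floored hour, as in this small self-contained example (I assume the same idea carries over to the full data):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "High": [25972.27, 25938.65, 26187.99],
    "Low":  [25934.36, 25903.73, 26030.56],
    "Date": pd.to_datetime(["2023-08-23 02:20:00",
                            "2023-08-23 02:25:00",
                            "2023-08-23 03:05:00"]),
})

hourly = df.set_index("Date").resample("60min").agg({"High": "max", "Low": "min"})
hourly_range = hourly["High"] - hourly["Low"]

# look up each row's hour in the resampled series
df["hl_range"] = df["Date"].dt.floor("60min").map(hourly_range)
```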
| <python><pandas> | 2023-08-26 19:39:42 | 1 | 465 | Fresto |
76,984,531 | 6,502,077 | How to replace characters in a text file with items from a dictionary? (ASCII characters to Unicode) | <p>I have created a function that is supposed to read a text file and replace a lot of ASCII characters with equivalents in Unicode. The problem is that the function does not replace any characters in the string, only if I remove all of the items in the dictionary except one. I have experimented the whole day but cannot seem to find the solution to the problem.</p>
<p>Here is the function:</p>
<pre><code>import re
match = {
# the original dictionary contain over 100 items
"᾿Ι" : "Ἰ",
"᾿Α" : "Ἀ",
"´Α" : "Ά",
"`Α" : "Ὰ",
"᾿Α" : "Ἀ",
"᾿Ρ" : "ῤ",
"῾Ρ" : "Ῥ"
}
with open("file.txt", "r", encoding="utf-8") as file, open("OUT.txt", "w", encoding="utf-8") as newfile:
def replace_all(text, dict):
for i, j in dict.items():
result, count = re.subn(r"%s" % i, j, str(text))
return result, count
# start the function
string = file.read()
result, count = replace_all(string, match)
# write out the result
newfile.write(result)
print("Changes: " + str(count))
</code></pre>
<p>The text file contains a lot of rows similar to the one below:</p>
<blockquote>
<p>Βίβλος γενέσεως ᾿Ιησοῦ Χριστοῦ, υἱοῦ Δαυῒδ, υἱοῦ ᾿Αβραάμ.</p>
</blockquote>
<p>Here the characters "᾿Ι" and "᾿Α" are supposed to be replaced with "Ἰ" and "Ἀ".</p>
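<p>As a sanity check: applying the replacements sequentially, feeding each result into the next <code>replace</code> call, does produce the output I'm after on that sample line (shown here with two entries from the full dictionary):</p>

```python
match = {"᾿Ι": "Ἰ", "᾿Α": "Ἀ"}  # two entries from the full dictionary
text = "Βίβλος γενέσεως ᾿Ιησοῦ Χριστοῦ, υἱοῦ Δαυῒδ, υἱοῦ ᾿Αβραάμ."
for old, new in match.items():
    text = text.replace(old, new)  # each pass works on the already-updated string
```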
| <python><python-3.x><dictionary> | 2023-08-26 19:32:08 | 2 | 702 | Lavonen |
76,984,491 | 8,276,973 | How to write integers, not integers-as-strings, in Python | <p>I need to create file of 10,000 random integers for testing. I will be using the file in Python and C, so I can't have the data represented as strings because I don't want the extra overhead of integer conversion in C.</p>
<p>In Python I can use <code>struct.unpack</code> to convert the file to integer, but I can't use the <code>write()</code> method to write that to a file for use in C.</p>
<p>Is there any way in Python to write just integers, not integers-as-strings, to a file? I have used <code>print(val, file=f)</code> and <code>f.write(str(val))</code>, but in both cases it writes a string.</p>
<p>Here is where I am now:</p>
<pre><code>file_root = "[ file root ]"
file_name = file_root + "Random_int64"
if os.path.exists(file_name):
f = open(file_name, "wb")
f.seek(0)
for _ in range(10000):
val = random.randint(0, 10000)
f.write(bytes(val))
f.close()
f = open(file_name, "rb")
wholefile = f.read()
struct.unpack(wholefile, I)
</code></pre>
<p>My <code>unpack</code> format string is wrong, so I am working on that now. I'm not that familiar with <code>struct.unpack</code>.</p>
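<p>The main problem is that <code>bytes(val)</code> does not encode the value; it creates <code>val</code> zero bytes. An untested sketch using <code>struct.pack</code> with a fixed-width little-endian <code>int64</code> format (the file path is illustrative); C can read this file straight into an <code>int64_t</code> array with <code>fread</code>:</p>

```python
import os
import random
import struct
import tempfile

path = os.path.join(tempfile.gettempdir(), "Random_int64")

# Write 10,000 int64 values as raw little-endian bytes ('<q' = int64).
# Note: bytes(val) would create `val` zero bytes, not an encoding of val.
vals = [random.randint(0, 10000) for _ in range(10000)]
with open(path, "wb") as f:
    f.write(struct.pack("<%dq" % len(vals), *vals))

# Read it back and unpack: format string first, then the buffer
with open(path, "rb") as f:
    data = f.read()
back = struct.unpack("<%dq" % (len(data) // 8), data)
print(back[:3] == tuple(vals[:3]))  # True
```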
| <python><python-3.x> | 2023-08-26 19:19:21 | 1 | 2,353 | RTC222 |
76,984,486 | 5,305,242 | Convert "Unix Epoch Time" to human readable time using time.gmtime raises OSError "invalid argument" | <p>I am converting Unix epoch time to readable time information. My attempt is below. It throws "OSError: [Errno 22] Invalid argument". The method does not seem to like the given argument (1693063813031885), but that value works fine at <a href="https://unixtime.org/" rel="nofollow noreferrer">https://unixtime.org/</a>, shown under "Your Time Zone".</p>
<pre><code>import time
dt_ts = time.strftime("%m-%d-%Y %H:%M:%S", time.gmtime(1693063813031885))
print(dt_ts)
</code></pre>
<p>Output should be <code>2023-08-26-11:30:13.031_885(UTC-05:00)</code>.</p>
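<p>That value is almost certainly <em>microseconds</em> since the epoch, not seconds; <code>time.gmtime</code> expects seconds, and on Windows an out-of-range argument raises <code>OSError: [Errno 22]</code>. An untested sketch that splits the value into seconds and microseconds:</p>

```python
import time

ts_us = 1693063813031885                 # microseconds, not seconds
secs, micros = divmod(ts_us, 1_000_000)  # 1693063813 s plus the .031_885 part
print(time.strftime("%m-%d-%Y %H:%M:%S", time.gmtime(secs)))  # 08-26-2023 15:30:13
print(micros)                            # 31885
```

<p>That is 15:30:13 UTC, which matches the 11:30:13 local time for a UTC-04:00 zone. For a timezone-aware result that keeps the microseconds, <code>datetime.datetime.fromtimestamp(ts_us / 1_000_000, tz=datetime.timezone.utc)</code> may be more convenient.</p>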
| <python><datetime><unix-timestamp> | 2023-08-26 19:18:13 | 1 | 458 | OO7 |
76,984,484 | 6,626,441 | Highstock highcharts - How do flag positions properly map to csv column/headers and to chart series items using options.series[#] in Javascript? | <p><strong>Edit - Solution - Fixed Syntax Error!</strong></p>
<p>The question below has two answers expressed in the body and links to JSFiddle:</p>
<p><a href="https://stackoverflow.com/questions/73872553/highcharts-highstock-how-to-add-flags-to-chart-drawn-from-embedded-csv-data">Highcharts Highstock How to add flags to chart drawn from embedded CSV data?</a></p>
<p>In each answer the csv data has several columns with headers. The Javascript, for putting flags on the Highstock chart, has code lines such as options.series[n] where n is an integer. The Javascript also has a number of series included within brackets under the "series" tag such as: {},{}, {type: flag}. My question is how do the csv columns explicitly map to the options.series[#] in Javascript code and also explicitly map to the items listed under the series tag?</p>
<p>In particular I want to modify the csv to include one or more indicator columns without disrupting the flags on the high and low columns. This goal is shown at the bottom.</p>
<hr />
<p>This link <a href="https://jsfiddle.net/BlackLabel/8je3yfuc/" rel="nofollow noreferrer">https://jsfiddle.net/BlackLabel/8je3yfuc/</a> has relevant snippets shown below:</p>
<pre><code><pre id="csv" style="display: none">
date,adj_high,adj_low,flag
2018-02-27,180.48,178.16,flag1
</pre>
Highcharts.stockChart('chart-container', {
...,
data: {
csv: document.getElementById('csv').innerHTML,
complete: function(options) {
const flagSeries = options.series[2];
flagSeries.data = flagSeries.data.filter(
dataEl => dataEl[1]
);
}
},
series: [{}, {}, {
type: 'flags',
keys: ['x', 'title']
}]
});
</code></pre>
<p>There are four csv headers (columns), one reference to <code>options.series[2]</code>, and three series entries: {}, {}, {type: flags}. <strong>Why is there a "2" in <code>options.series[2]</code>?</strong></p>
<hr />
<p>This link <a href="https://jsfiddle.net/BlackLabel/ues10k8q/" rel="nofollow noreferrer">https://jsfiddle.net/BlackLabel/ues10k8q/</a> has relevant snippets shown below:</p>
<pre><code><pre id="csv" style="display: none">
date,adj_high,adj_low,flag
2018-02-27,180.48,178.16,flag1,0
2018-02-28,180.615,178.05
2018-03-01,179.775,172.66
2018-03-02,176.3,172.45
2018-03-05,177.74,174.52
2018-03-06,178.25,176.13
2018-03-07,175.85,174.27
2018-03-08,177.12,175.07
2018-03-09,180.0,177.39
2018-03-12,182.39,180.21,flag2,1
2018-03-13,183.5,179.24
2018-03-14,180.52,177.81
2018-03-15,180.24,178.0701
2018-03-16,179.12,177.62
2018-03-19,177.47,173.66,flag3,2
2018-03-20,176.8,174.94
2018-03-21,175.09,171.26
2018-03-22,172.68,168.6
2018-03-23,169.92,164.94
2018-03-26,173.1,166.44
2018-03-27,175.15,166.92
</pre>
Highcharts.stockChart('chart-container', {
...,
data: {
csv: document.getElementById('csv').innerHTML,
complete: function(options) {
const processedFlagData = {
low: [],
hight: [],
none: []
};
const flagData = options.series[2].data.filter(
dataEl => dataEl[1]
);
const positions = options.series[3].data.filter(
dataEl => dataEl[1]
);
flagData.forEach(dataEl => {
const matchedPos = positions.find(pos => pos[0] === dataEl[0]);
if (!matchedPos) {
processedFlagData.none.push(dataEl);
} else if (matchedPos[1] === 1) {
processedFlagData.low.push(dataEl);
} else if (matchedPos[1] === 2) {
processedFlagData.hight.push(dataEl);
}
});
options.series[2].data = processedFlagData.none;
options.series[3].name = 'flag on low';
options.series[3].data = processedFlagData.low;
options.series[4] = {
name: 'flag on high'
};
options.series[4].data = processedFlagData.hight;
}
},
legend: {
enabled: true
},
series: [{
id: 'high'
}, {
id: 'low'
}, {
type: 'flags',
keys: ['x', 'title']
}, {
type: 'flags',
keys: ['x', 'title'],
onSeries: 'high'
}, {
type: 'flags',
keys: ['x', 'title'],
onSeries: 'low'
}]
});
</code></pre>
<p>There are five csv columns and five series items (two csv data series and three flag series). <strong>How do options.series[2], options.series[3], and options.series[4] map explicitly to csv columns and/or series items? I do not see where the index values 2, 3, and 4 come from when writing custom code!</strong></p>
<hr />
<p>I am trying to write Javascript which enables the addition of one or more indicators in the csv column headers without disrupting the rendering of flags. When I derive code from the above examples, all it renders is a blank chart or the three data lines with no flags!</p>
<pre><code>date,indicator,adj_high,adj_low,flag,pos
2018-02-27,190,180.48,178.16,flag1,0
2018-02-28,190,180.615,178.05,NaN,NaN
2018-03-01,190,179.775,172.66
2018-03-02,188,176.3,172.45
2018-03-05,185,177.74,174.52
2018-03-06,187,178.25,176.13
2018-03-07,182,175.85,174.27
2018-03-08,184,177.12,175.07
2018-03-09,185,180.0,177.39
2018-03-12,187,182.39,180.21,Flag2,1
2018-03-13,190,183.5,179.24
2018-03-14,185,180.52,177.81
2018-03-15,185,180.24,178.0701
2018-03-16,188,179.12,177.62
2018-03-19,183,177.47,173.66,Flag3,2
2018-03-20,182,176.8,174.94
2018-03-21,180,175.09,171.26
2018-03-22,178,172.68,168.6
2018-03-23,175,169.92,164.94
2018-03-26,179,173.1,166.44
2018-03-27,185,175.15,166.92
</code></pre>
<hr />
<hr />
<hr />
<p><strong>Further Research - Syntax Error!</strong></p>
<pre><code> series: [{
color: '#000000', <!-- black -->
lineColor: '#000000',
lineWidth: 2
}, {
id: 'high',
color: '#0000FF', <!-- blue -->
lineColor: '#0000FF',
lineWidth: 2
}, {
id: 'low',
color: '#A9A9A9', <!-- gray -->
lineColor: '#A9A9A9',
lineWidth: 2
}, {
type: 'flags',
keys: ['x', 'title']
}, {
type: 'flags',
keys: ['x', 'title'],
onSeries: 'high'
}, {
type: 'flags',
keys: ['x', 'title'],
onSeries: 'low'
}]
});
</script>
</body>
</html>
</code></pre>
<p>The chart renders properly when I bump the index numbers up by 1 each, from 2-4 to 3-5, and fix the syntax error in the Javascript.</p>
| <python><pandas><csv><highcharts> | 2023-08-26 19:17:58 | 1 | 379 | SystemTheory |
76,984,435 | 2,890,683 | How to connect a streamlit app to multiple data sources | <p>I was going through <a href="https://docs.streamlit.io/knowledge-base/tutorials/databases" rel="nofollow noreferrer">https://docs.streamlit.io/knowledge-base/tutorials/databases</a> and wanted to use <a href="https://docs.streamlit.io/library/api-reference/connections/st.experimental_connection" rel="nofollow noreferrer">https://docs.streamlit.io/library/api-reference/connections/st.experimental_connection</a>, but I want the user to be given a choice of which data source to connect to before proceeding.</p>
<p>Is there a way in Streamlit to do this?</p>
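<p>One common pattern is a dispatch dictionary keyed by the user's <code>st.selectbox</code> choice. The sketch below is untested and uses placeholder functions in place of the real <code>st.experimental_connection(...)</code> calls:</p>

```python
# Sketch: let the user pick a source, then build the matching connection.
# The connect_* functions are hypothetical stand-ins for calls such as
# st.experimental_connection("pg", type="sql").
def connect_postgres():
    return "postgres-conn"     # placeholder for the real connection object

def connect_snowflake():
    return "snowflake-conn"

SOURCES = {
    "PostgreSQL": connect_postgres,
    "Snowflake": connect_snowflake,
}

def get_connection(choice):
    # In the app: choice = st.selectbox("Data source", list(SOURCES))
    return SOURCES[choice]()

print(get_connection("Snowflake"))  # snowflake-conn
```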
| <python><database><streamlit> | 2023-08-26 19:03:35 | 1 | 416 | user2890683 |
76,984,392 | 6,184,683 | The size of tensor a (100) must match the size of tensor b (64) at non-singleton dimension 2" | <p>I am trying to implement a simple linear network.</p>
<p>The input tensor size is (B,3,64,64)</p>
<p>My network is defined like this.</p>
<pre><code> tensorSize = x.size()
input = tensorSize[0] * tensorSize[1] * tensorSize[2]
self.linear_one = torch.nn.Linear(64, input)
self.linear_two = torch.nn.Linear(input, 32)
self.linear_output = torch.nn.Linear(32, 6)
self.layer_in = self.linear_one(x)
self.layer_in_two = self.linear_two(self.layer_in)
self.layer_out = self.linear_output(self.layer_in_two)
</code></pre>
<p>My output has size (B,3,64,6), but it needs to be (B,6).</p>
<p>Why is my network not outputting the correct results?</p>
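<p><code>nn.Linear</code> only transforms the <em>last</em> dimension, so <code>Linear(64, n)</code> maps <code>(B, 3, 64, 64)</code> to <code>(B, 3, 64, n)</code>; the input needs to be flattened to <code>(B, 3*64*64)</code> first. (Note also that <code>tensorSize[0] * tensorSize[1] * tensorSize[2]</code> includes the batch dimension and drops the last 64, which is probably not intended.) A numpy illustration of the shape rule, using zero weight matrices just to show the shapes:</p>

```python
import numpy as np

B = 2
x = np.zeros((B, 3, 64, 64))

# Multiplying along the last axis keeps the input 4-D, like nn.Linear(64, n):
W = np.zeros((64, 12288))             # stand-in for a Linear(64, 12288) weight
print((x @ W).shape)                  # (2, 3, 64, 12288) -- still 4-D

# Flattening first gives the desired (B, 6):
x_flat = x.reshape(B, -1)             # (B, 3*64*64) = (B, 12288)
W1 = np.zeros((12288, 32))            # stand-in for Linear(3*64*64, 32)
W2 = np.zeros((32, 6))                # stand-in for Linear(32, 6)
out = x_flat @ W1 @ W2
print(out.shape)                      # (2, 6)
```

<p>In PyTorch the equivalent fix is <code>x = x.view(x.size(0), -1)</code> (or an <code>nn.Flatten()</code> layer) before the first <code>nn.Linear(3*64*64, ...)</code>.</p>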
| <python><machine-learning><pytorch> | 2023-08-26 18:53:31 | 1 | 701 | Aeryes |
76,984,356 | 14,492,001 | Issues when sorting a polars DataFrame based on a Categorical column | <p>I am quite new to <code>polars</code>, and while messing around with grouping and sorting operations, I found that sorting a polars DataFrame based on a <em>created</em> <code>Categorical</code> column results in weird behaviors.</p>
<p>Specifically, given a polars DataFrame <code>df</code> containing a categorical column <code>decile</code> that is created via the <code>.qcut()</code> method with values ('q1', 'q2', ..., 'q10'), directly applying the following: <code>df.sort('decile')</code> correctly sorts the data.</p>
<p>However, if we were to group the DataFrame based on the <code>decile</code> column <strong>first</strong>, and then sort it based on <code>decile</code>, the resulting DataFrame doesn't sort properly!</p>
<p>The weird thing is that if you were to sort the DataFrame based on another column <strong>before</strong> creating the <code>decile</code> column, sorting the <code>decile</code> column now works <em>even after grouping</em> by it; though the sorting order is reversed (i.e., ascending sorts descending and vice-versa, which is weird).</p>
<p>Here's a minimal reproducible example to illustrate the case:</p>
<pre class="lang-py prettyprint-override"><code># Create a toy DataFrame
size = 1e5
df = pl.DataFrame({
"id": np.random.randint(50000, size=int(size), dtype=np.uint16),
"amount": np.round(np.random.uniform(10, 100000, int(size)).astype(np.float32), 2),
"quantity": np.random.randint(1, 7, size=int(size), dtype=np.uint16)
})
</code></pre>
<p>Illustrating the first case:</p>
<pre class="lang-py prettyprint-override"><code>df = (df.group_by("id")
.agg(revenue=pl.sum("amount"), tot_quantity=pl.sum("quantity"))
)
df = df.with_columns(
(df["revenue"].qcut(10, labels=[f'q{i}' for i in range(10, 0, -1)])).alias("decile")
)
df = df.group_by("decile").agg(pl.col("revenue").sum(), pl.col("tot_quantity").sum())
df = df.sort('decile') # This doesn't work properly!
</code></pre>
<p>However, sorting based on <code>revenue</code> first, fixes the problem, but the sorting order is reversed:</p>
<pre class="lang-py prettyprint-override"><code>df = (df.group_by("id")
.agg(revenue=pl.sum("amount"), tot_quantity=pl.sum("quantity"))
).sort("revenue") # Sort by "revenue" prior to creating "decile"
df = df.with_columns(
(df["revenue"].qcut(10, labels=[f'q{i}' for i in range(10, 0, -1)])).alias("decile")
)
df = df.group_by("decile").agg(pl.col("revenue").sum(), pl.col("tot_quantity").sum())
df = df.sort('decile') # It works now but order is reversed!
</code></pre>
<p>Does anyone have any idea what's going on? I've been trying to figure out why this is happening but to no avail; I tried casting to a different datatype but it still didn't work.</p>
<p>Appreciate any help!</p>
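<p>If I understand polars correctly, this is the default "physical" ordering of <code>Categorical</code> values: they sort by the order in which the categories were created, not alphabetically. Since <code>qcut</code> received <code>labels</code> in the order <code>q10, q9, ..., q1</code>, that is exactly the "reversed" order observed. A plain-Python illustration of physical ordering (no polars needed):</p>

```python
# Categories compare by creation order under "physical" ordering.
labels = [f"q{i}" for i in range(10, 0, -1)]     # same labels= as passed to qcut
physical_rank = {lab: i for i, lab in enumerate(labels)}

deciles = ["q1", "q5", "q10", "q3"]
print(sorted(deciles, key=physical_rank.get))    # ['q10', 'q5', 'q3', 'q1']
```

<p>Given that the physical order here is simply the reversed label order, one workaround is <code>df.sort("decile", descending=True)</code>. Depending on your polars version there may also be a lexical ordering option on the Categorical dtype, but note that lexical order would still sort <code>q10</code> between <code>q1</code> and <code>q2</code>.</p>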
| <python><python-3.x><dataframe><sorting><python-polars> | 2023-08-26 18:44:37 | 1 | 1,444 | Omar AlSuwaidi |
76,984,254 | 10,789,707 | Pyinstaller renders Wxpython GUI buttons and staticline colors differently on other computers | <p>My issue is similar to <a href="https://stackoverflow.com/questions/67151244/how-to-import-the-modules-you-need-when-converting-python-code-to-exe">this one</a> but the answer did not solve it.</p>
<p>My problem is my pyinstaller build renders my Wxpython GUI buttons and staticline colors differently on other computers.</p>
<p><strong>Here is the button and static line on the source computer:</strong></p>
<p><strong>Before hover:</strong></p>
<p><a href="https://i.sstatic.net/TZkb4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TZkb4.png" alt="bh" /></a></p>
<p><strong>On hover:</strong></p>
<p><a href="https://i.sstatic.net/WQY37.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WQY37.png" alt="oh" /></a></p>
<p><strong>And here it is on another computer:</strong></p>
<p><strong>Before hover:</strong></p>
<p><a href="https://i.sstatic.net/ZnV5q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZnV5q.png" alt="bn1" /></a></p>
<p><strong>On hover:</strong></p>
<p><a href="https://i.sstatic.net/qSg8E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qSg8E.png" alt="oh1" /></a></p>
<p>As you can see the button and lines are not the same colors.</p>
<p>How can I get the same static line and button colors as on the source computer, on every computer?</p>
<p>The source PC's specs:</p>
<pre><code>Windows-10-10.0.19044-SP0
python 3.9.12
pyinstaller 5.13.0
wxPython-4.2.1-cp39-cp39-win_amd64.whl
</code></pre>
<p><strong>The sequence I used to build the pyinstaller output:</strong></p>
<pre><code>cd /c/testwxpython
python -m venv virt
source virt/Scripts/activate
pip install wxPython
pip install -r requirements.txt
pip install pyinstaller
pyinstaller --name myapp --onefile --windowed --icon=icon.ico --add-data "YouthTouchDemoRegular-4VwY.ttf;." sample_one.py
</code></pre>
<p><strong>Here is the Git bash terminal output:</strong></p>
<pre><code>(virt)
GitT MINGW64 /c/testwxpython
$ pyinstaller --name myapp --onefile --windowed --icon=icon.ico --add-data "YouthTouchDemoRegular-4VwY.ttf;." sample_one.py
248 INFO: PyInstaller: 5.13.1
248 INFO: Python: 3.9.12
255 INFO: Platform: Windows-10-10.0.19044-SP0
256 INFO: wrote C:\testwxpython\myapp.spec
260 INFO: Extending PYTHONPATH with paths
['C:\\testwxpython']
575 INFO: Appending 'datas' from .spec
575 INFO: checking Analysis
586 INFO: Building because C:\testwxpython\sample_one.py changed
586 INFO: Initializing module dependency graph...
588 INFO: Caching module graph hooks...
600 INFO: Analyzing base_library.zip ...
1574 INFO: Loading module hook 'hook-encodings.py' from 'C:\\testwxpython\\virt\\lib\\site-packages\\PyInstaller\\hooks'...
2022 INFO: Loading module hook 'hook-heapq.py' from 'C:\\testwxpython\\virt\\lib\\site-packages\\PyInstaller\\hooks'...
2425 INFO: Loading module hook 'hook-pickle.py' from 'C:\\testwxpython\\virt\\lib\\site-packages\\PyInstaller\\hooks'...
3285 INFO: Caching module dependency graph...
3388 INFO: running Analysis Analysis-00.toc
3390 INFO: Adding Microsoft.Windows.Common-Controls to dependent assemblies of final executable
required by C:\Python39\python.exe
3452 INFO: Analyzing C:\testwxpython\sample_one.py
3494 INFO: Loading module hook 'hook-platform.py' from 'C:\\testwxpython\\virt\\lib\\site-packages\\PyInstaller\\hooks'...
3617 INFO: Loading module hook 'hook-xml.py' from 'C:\\testwxpython\\virt\\lib\\site-packages\\PyInstaller\\hooks'...
4258 INFO: Loading module hook 'hook-sqlite3.py' from 'C:\\testwxpython\\virt\\lib\\site-packages\\PyInstaller\\hooks'...
4543 INFO: Loading module hook 'hook-numpy.py' from 'C:\\testwxpython\\virt\\Lib\\site-packages\\numpy\\_pyinstaller'...
5098 INFO: Loading module hook 'hook-difflib.py' from 'C:\\testwxpython\\virt\\lib\\site-packages\\PyInstaller\\hooks'...
5226 INFO: Loading module hook 'hook-multiprocessing.util.py' from 'C:\\testwxpython\\virt\\lib\\site-packages\\PyInstaller\\hooks'...
5847 INFO: Loading module hook 'hook-sysconfig.py' from 'C:\\testwxpython\\virt\\lib\\site-packages\\PyInstaller\\hooks'...
6515 INFO: Processing module hooks...
6560 INFO: Looking for ctypes DLLs
6588 INFO: Analyzing run-time hooks ...
6590 INFO: Including run-time hook 'C:\\testwxpython\\virt\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_pkgutil.py'
6592 INFO: Including run-time hook 'C:\\testwxpython\\virt\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_inspect.py'
6593 INFO: Including run-time hook 'C:\\testwxpython\\virt\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_multiprocessing.py'
6601 INFO: Looking for dynamic libraries
C:\testwxpython\virt\lib\site-packages\PyInstaller\building\build_main.py:167: UserWarning: The numpy.array_api submodule is still experimental. See NEP 47.
__import__(package)
438 INFO: Extra DLL search directories (AddDllDirectory): ['C:\\testwxpython\\virt\\lib\\site-packages\\numpy\\.libs']
438 INFO: Extra DLL search directories (PATH): ['C', 'C:\\testwxpython\\virt\\Scripts', 'C:\\Users\\Head Rule\\bin', 'C:\\Program Files\\Git\\mingw64\\bin', 'C:\\Program Files\\Git\\usr\\local\\bin', 'C:\\Program Files\\Git\\usr\\bin', 'C:\\Program Files\\Git\\usr\\bin', 'C:\\Program Files\\Git\\mingw64\\bin', 'C:\\Program Files\\Git\\usr\\bin', 'C:\\Users\\user\\bin', 'C:\\Program Files\\Common Files\\Oracle\\Java\\javapath', 'C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath', 'C:\\Go\\bin\\go.exe', 'C:\\Windows\\System32', 'C:\\Go\\bin', 'C:\\Python39\\Scripts', 'C:\\Python39', 'C:\\Users\\user\\AppData\\Roaming\\Python\\Python311\\Scripts', 'C:\\Program Files (x86)\\Common Files\\Intel\\Shared Libraries\\redist\\intel64\\compiler', 'C:\\WINDOWS', 'C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0', 'C:\\WINDOWS\\System32\\OpenSSH', 'C:\\ProgramData\\chocolatey\\bin', 'C:\\Program Files\\dotnet', 'C:\\Users\\user\\Documents\\ffmpeg-5.0-essentials_build\\bin', 'C:\\Program Files\\Git\\cmd', 'C:\\Program Files\\Git\\mingw64\\bin', 'C:\\Program Files\\Git\\usr\\bin', 'C:\\Program Files (x86)\\Microsoft SQL Server\\150\\Tools\\Binn', 'C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn', 'C:\\Program Files (x86)\\Microsoft SQL Server\\150\\DTS\\Binn', 'C:\\Program Files\\Microsoft SQL Server\\150\\DTS\\Binn', 'C:\\Program Files\\Microsoft SQL Server\\Client SDK\\ODBC\\170\\Tools\\Binn', 'C:\\Program Files (x86)\\Microsoft SQL Server\\Client SDK\\ODBC\\130\\Tools\\Binn', 'C:\\Program Files (x86)\\Microsoft SQL Server\\140\\Tools\\Binn', 'C:\\Program Files (x86)\\Microsoft SQL Server\\140\\DTS\\Binn', 'C:\\Program Files (x86)\\Microsoft SQL Server\\140\\Tools\\Binn\\ManagementStudio', 'C:\\Program Files\\nodejs', 'C:\\Windows', 'C:\\Windows\\System32', 'C:\\Go\\bin', 'C:\\Go\\bin\\go.exe', 'C:\\Users\\user\\AppData\\Local\\Microsoft\\WindowsApps', 'C:\\Users\\user\\.dotnet\\tools', 'C:\\Users\\user\\AppData\\Local\\Programs\\Microsoft VS Code\\bin', 
'C:\\Users\\user\\AppData\\Roaming\\npm', 'C:\\Users\\user\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.9_pax3n2kfra5q2\\LocalCache\\local-packages\\Python39\\Scripts', 'C:\\Python39', 'C:\\Python39\\Scripts', 'C:\\Program Files\\Git\\usr\\bin\\vendor_perl', 'C:\\Program Files\\Git\\usr\\bin\\core_perl']
8127 INFO: Looking for eggs
8128 INFO: Using Python library C:\Python39\python39.dll
8128 INFO: Found binding redirects:
[]
8133 INFO: Warnings written to C:\testwxpython\build\myapp\warn-myapp.txt
8175 INFO: Graph cross-reference written to C:\testwxpython\build\myapp\xref-myapp.html
8184 INFO: checking PYZ
8200 INFO: checking PKG
8206 INFO: Building because toc changed
8207 INFO: Building PKG (CArchive) myapp.pkg
15082 INFO: Building PKG (CArchive) myapp.pkg completed successfully.
15084 INFO: Bootloader C:\testwxpython\virt\lib\site-packages\PyInstaller\bootloader\Windows-64bit-intel\runw.exe
15084 INFO: checking EXE
15084 INFO: Rebuilding EXE-00.toc because myapp.exe missing
15084 INFO: Building EXE from EXE-00.toc
15085 INFO: Copying bootloader EXE to C:\testwxpython\dist\myapp.exe.notanexecutable
15114 INFO: Copying icon to EXE
15114 INFO: Copying icons from ['C:\\testwxpython\\icon.ico']
15138 INFO: Writing RT_GROUP_ICON 0 resource with 20 bytes
15139 INFO: Writing RT_ICON 1 resource with 213032 bytes
15141 INFO: Copying 0 resources to EXE
15141 INFO: Embedding manifest in EXE
15142 INFO: Updating manifest in C:\testwxpython\dist\myapp.exe.notanexecutable
15168 INFO: Updating resource type 24 name 1 language 0
15170 INFO: Appending PKG archive to EXE
15194 INFO: Fixing EXE headers
15843 INFO: Building EXE from EXE-00.toc completed successfully.
(virt)
GitT MINGW64 /c/testwxpython
$
</code></pre>
<p><strong>Here is the output of the Build folder warn-myapp.txt file:</strong></p>
<pre><code>This file lists modules PyInstaller was not able to find. This does not
necessarily mean this module is required for running your program. Python and
Python 3rd-party packages include a lot of conditional or optional modules. For
example the module 'ntpath' only exists on Windows, whereas the module
'posixpath' only exists on Posix systems.
Types if import:
* top-level: imported at the top-level - look at these first
* conditional: imported within an if-statement
* delayed: imported within a function
* optional: imported within a try-except-statement
IMPORTANT: Do NOT post this list to the issue-tracker. Use it as a basis for
tracking down the missing module yourself. Thanks!
missing module named _frozen_importlib_external - imported by importlib._bootstrap (delayed), importlib (optional), importlib.abc (optional), zipimport (top-level)
excluded module named _frozen_importlib - imported by importlib (optional), importlib.abc (optional), zipimport (top-level)
missing module named pep517 - imported by importlib.metadata (delayed)
missing module named org - imported by pickle (optional)
missing module named posix - imported by os (conditional, optional), shutil (conditional), importlib._bootstrap_external (conditional)
missing module named resource - imported by posix (top-level)
missing module named grp - imported by shutil (optional), tarfile (optional), pathlib (delayed, optional), subprocess (optional)
missing module named pwd - imported by posixpath (delayed, conditional), shutil (optional), tarfile (optional), pathlib (delayed, conditional, optional), subprocess (optional), netrc (delayed, conditional), getpass (delayed), http.server (delayed, optional), webbrowser (delayed)
missing module named 'org.python' - imported by copy (optional), xml.sax (delayed, conditional)
missing module named 'java.lang' - imported by platform (delayed, optional), xml.sax._exceptions (conditional)
missing module named multiprocessing.BufferTooShort - imported by multiprocessing (top-level), multiprocessing.connection (top-level)
missing module named multiprocessing.AuthenticationError - imported by multiprocessing (top-level), multiprocessing.connection (top-level)
missing module named _posixshmem - imported by multiprocessing.resource_tracker (conditional), multiprocessing.shared_memory (conditional)
missing module named _posixsubprocess - imported by subprocess (optional), multiprocessing.util (delayed)
missing module named multiprocessing.get_context - imported by multiprocessing (top-level), multiprocessing.pool (top-level), multiprocessing.managers (top-level), multiprocessing.sharedctypes (top-level)
missing module named multiprocessing.TimeoutError - imported by multiprocessing (top-level), multiprocessing.pool (top-level)
missing module named multiprocessing.set_start_method - imported by multiprocessing (top-level), multiprocessing.spawn (top-level)
missing module named multiprocessing.get_start_method - imported by multiprocessing (top-level), multiprocessing.spawn (top-level)
missing module named pyimod02_importers - imported by C:\Lulls\virt\Lib\site-packages\PyInstaller\hooks\rthooks\pyi_rth_pkgutil.py (delayed)
missing module named _scproxy - imported by urllib.request (conditional)
missing module named termios - imported by getpass (optional), tty (top-level)
missing module named readline - imported by cmd (delayed, conditional, optional), code (delayed, conditional, optional), pdb (delayed, optional)
missing module named numpy.core.integer - imported by numpy.core (top-level), numpy.fft.helper (top-level)
missing module named numpy.core.conjugate - imported by numpy.core (top-level), numpy.fft._pocketfft (top-level)
missing module named pickle5 - imported by numpy.compat.py3k (optional)
missing module named _dummy_thread - imported by numpy.core.arrayprint (optional)
missing module named numpy.array - imported by numpy (top-level), numpy.ma.core (top-level), numpy.ma.extras (top-level), numpy.ma.mrecords (top-level)
missing module named numpy.recarray - imported by numpy (top-level), numpy.ma.mrecords (top-level)
missing module named numpy.ndarray - imported by numpy (top-level), numpy._typing._array_like (top-level), numpy.ma.core (top-level), numpy.ma.extras (top-level), numpy.ma.mrecords (top-level), numpy.ctypeslib (top-level)
missing module named numpy.dtype - imported by numpy (top-level), numpy._typing._array_like (top-level), numpy.array_api._typing (top-level), numpy.ma.mrecords (top-level), numpy.ctypeslib (top-level)
missing module named numpy.bool_ - imported by numpy (top-level), numpy._typing._array_like (top-level), numpy.ma.core (top-level), numpy.ma.mrecords (top-level)
missing module named numpy.expand_dims - imported by numpy (top-level), numpy.ma.core (top-level)
missing module named numpy.iscomplexobj - imported by numpy (top-level), numpy.ma.core (top-level)
missing module named numpy.amin - imported by numpy (top-level), numpy.ma.core (top-level)
missing module named numpy.amax - imported by numpy (top-level), numpy.ma.core (top-level)
missing module named numpy.histogramdd - imported by numpy (delayed), numpy.lib.twodim_base (delayed)
missing module named numpy.core.ufunc - imported by numpy.core (top-level), numpy.lib.utils (top-level)
missing module named numpy.core.ones - imported by numpy.core (top-level), numpy.lib.polynomial (top-level)
missing module named numpy.core.hstack - imported by numpy.core (top-level), numpy.lib.polynomial (top-level)
missing module named numpy.core.atleast_1d - imported by numpy.core (top-level), numpy.lib.polynomial (top-level)
missing module named numpy.core.atleast_3d - imported by numpy.core (top-level), numpy.lib.shape_base (top-level)
missing module named numpy.core.vstack - imported by numpy.core (top-level), numpy.lib.shape_base (top-level)
missing module named numpy.core.linspace - imported by numpy.core (top-level), numpy.lib.index_tricks (top-level)
missing module named numpy.core.transpose - imported by numpy.core (top-level), numpy.lib.function_base (top-level)
missing module named numpy.core.result_type - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.float_ - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.number - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.bool_ - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.inf - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.array2string - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.signbit - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.isscalar - imported by numpy.core (delayed), numpy.testing._private.utils (delayed), numpy.lib.polynomial (top-level)
missing module named numpy.core.isinf - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.isnat - imported by numpy.core (top-level), numpy.testing._private.utils (top-level)
missing module named numpy.core.ndarray - imported by numpy.core (top-level), numpy.testing._private.utils (top-level), numpy.lib.utils (top-level)
missing module named numpy.core.array_repr - imported by numpy.core (top-level), numpy.testing._private.utils (top-level)
missing module named numpy.core.arange - imported by numpy.core (top-level), numpy.testing._private.utils (top-level), numpy.fft.helper (top-level)
missing module named numpy.core.float32 - imported by numpy.core (top-level), numpy.testing._private.utils (top-level)
missing module named numpy.core.iinfo - imported by numpy.core (top-level), numpy.lib.twodim_base (top-level)
missing module named numpy.core.reciprocal - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.sort - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.argsort - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.sign - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.isnan - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (delayed)
missing module named numpy.core.count_nonzero - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.divide - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.swapaxes - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.fft._pocketfft (top-level)
missing module named numpy.core.matmul - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.object_ - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (delayed)
missing module named numpy.core.asanyarray - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.intp - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (top-level)
missing module named numpy.core.atleast_2d - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.product - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.amax - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.amin - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.moveaxis - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.geterrobj - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.errstate - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (delayed)
missing module named numpy.core.finfo - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.lib.polynomial (top-level)
missing module named numpy.core.isfinite - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (delayed)
missing module named numpy.core.sum - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.fastCopyAndTranspose - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.sqrt - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.fft._pocketfft (top-level)
missing module named numpy.core.multiply - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.add - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.dot - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.lib.polynomial (top-level)
missing module named numpy.core.Inf - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.all - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (delayed)
missing module named numpy.core.newaxis - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.complexfloating - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.inexact - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.cdouble - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.csingle - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.double - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.single - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.intc - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.empty_like - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.empty - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (top-level), numpy.fft.helper (top-level)
missing module named numpy.core.zeros - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.fft._pocketfft (top-level)
missing module named numpy.core.asarray - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.lib.utils (top-level), numpy.fft._pocketfft (top-level), numpy.fft.helper (top-level)
missing module named numpy.core.array - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (top-level), numpy.lib.polynomial (top-level)
missing module named numpy.eye - imported by numpy (delayed), numpy.core.numeric (delayed)
missing module named psutil - imported by numpy.testing._private.utils (delayed, optional)
missing module named win32pdh - imported by numpy.testing._private.utils (delayed, conditional)
missing module named asyncio.DefaultEventLoopPolicy - imported by asyncio (delayed, conditional), asyncio.events (delayed, conditional)
missing module named _ufunc - imported by numpy._typing (conditional)
missing module named numpy.bytes_ - imported by numpy (top-level), numpy._typing._array_like (top-level)
missing module named numpy.str_ - imported by numpy (top-level), numpy._typing._array_like (top-level)
missing module named numpy.void - imported by numpy (top-level), numpy._typing._array_like (top-level)
missing module named numpy.object_ - imported by numpy (top-level), numpy._typing._array_like (top-level)
missing module named numpy.datetime64 - imported by numpy (top-level), numpy._typing._array_like (top-level)
missing module named numpy.timedelta64 - imported by numpy (top-level), numpy._typing._array_like (top-level)
missing module named numpy.number - imported by numpy (top-level), numpy._typing._array_like (top-level)
missing module named numpy.complexfloating - imported by numpy (top-level), numpy._typing._array_like (top-level)
missing module named numpy.floating - imported by numpy (top-level), numpy._typing._array_like (top-level)
missing module named numpy.integer - imported by numpy (top-level), numpy._typing._array_like (top-level), numpy.ctypeslib (top-level)
missing module named numpy.unsignedinteger - imported by numpy (top-level), numpy._typing._array_like (top-level)
missing module named numpy.generic - imported by numpy (top-level), numpy._typing._array_like (top-level)
missing module named numpy.ufunc - imported by numpy (top-level), numpy._typing (top-level)
missing module named numpy.float64 - imported by numpy (top-level), numpy.array_api._typing (top-level)
missing module named numpy.float32 - imported by numpy (top-level), numpy.array_api._typing (top-level)
missing module named numpy.uint64 - imported by numpy (top-level), numpy.array_api._typing (top-level)
missing module named numpy.uint32 - imported by numpy (top-level), numpy.array_api._typing (top-level)
missing module named numpy.uint16 - imported by numpy (top-level), numpy.array_api._typing (top-level)
missing module named numpy.uint8 - imported by numpy (top-level), numpy.array_api._typing (top-level)
missing module named numpy.int64 - imported by numpy (top-level), numpy.array_api._typing (top-level)
missing module named numpy.int32 - imported by numpy (top-level), numpy.array_api._typing (top-level)
missing module named numpy.int16 - imported by numpy (top-level), numpy.array_api._typing (top-level)
missing module named numpy.int8 - imported by numpy (top-level), numpy.array_api._typing (top-level)
missing module named wx.GetMousePosition - imported by wx (top-level), rectshapedbitmapbuttonTwo (top-level)
missing module named StringIO - imported by six (conditional)
missing module named winxptheme - imported by wx.lib.agw.aui.dockart (conditional, optional), wx.lib.agw.aui.framemanager (conditional, optional)
missing module named 'Carbon.Appearance' - imported by wx.lib.agw.aui.aui_utilities (conditional, optional), wx.lib.agw.aui.tabart (conditional, optional)
missing module named winxpgui - imported by wx.lib.agw.artmanager (conditional, optional)
missing module named win32con - imported by wx.lib.agw.artmanager (conditional, optional)
missing module named win32api - imported by wx.lib.agw.artmanager (conditional, optional), wx.lib.agw.flatmenu (conditional, optional)
missing module named win32gui - imported by wx.lib.agw.flatmenu (conditional, optional)
missing module named UserDict - imported by wx.lib.agw.fmcustomizedlg (conditional)
missing module named Carbon - imported by wx.lib.colourutils (conditional, optional)
missing module named vms_lib - imported by platform (delayed, optional)
missing module named java - imported by platform (delayed)
missing module named _winreg - imported by platform (delayed, optional)
</code></pre>
<p><strong>Here is the myapp.spec output:</strong></p>
<pre><code># -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(
['sample_one.py'],
pathex=[],
binaries=[],
datas=[('YouthTouchDemoRegular-4VwY.ttf', '.')],
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(
pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[],
name='myapp',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=False,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
icon=['icon.ico'],
)
</code></pre>
<p>I suspect my requirements file might be part of the issue, since I can't account for why Qt5 is being pulled in for my wxPython app.</p>
<p><strong>Here is my requirements file:</strong></p>
<pre><code>click==7.1.2
numpy==1.23.5
pandas==1.5.2
PyQt5==5.15.4
pyqt5-plugins==5.15.4.2.2
PyQt5-Qt5==5.15.2
PyQt5-sip==12.11.0
pyqt5-tools==5.15.4.3.2
python-dateutil==2.8.2
python-dotenv==0.21.0
pytz==2022.6
qt5-applications==5.15.2.2.2
qt5-tools==5.15.2.1.2
six==1.16.0
</code></pre>
<p>I couldn't find any related question online and don't know what to try next to fix it.</p>
| <python><python-3.x><wxpython><pyinstaller><exe> | 2023-08-26 18:12:23 | 0 | 797 | Lod |
76,984,212 | 1,305,700 | How to apply IP lookup using polars? | <p>Given two tables, I'd like to look up each IP and find the network it belongs to:</p>
<p>I have two large tables:</p>
<p><a href="https://i.sstatic.net/dUCpF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dUCpF.png" alt="clients" /></a></p>
<p>and the following networks:</p>
<p><a href="https://i.sstatic.net/UbdTq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UbdTq.png" alt="networks" /></a></p>
<p>Regarding the ClientIP column (first table), I thought of casting the whole column with <code>ip_address</code>.</p>
<p>Regarding the second column (second table), I thought of casting the whole column with <code>ip_network</code>.</p>
<p>Something like this:</p>
<pre><code>import ipaddress
network = ipaddress.ip_network('99.96.0.0/13')
ip_obj = ipaddress.ip_address('99.87.29.96')
print(ip_obj in network)
</code></pre>
<p>and then applying this check row by row with <code>apply</code>, but that is very slow, especially for tables of this size.</p>
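<p>The row-wise check I was applying boils down to a helper like this (a sketch; the function and variable names here are placeholders, not my actual code):</p>

```python
import ipaddress

# Placeholder helper: linear scan over candidate networks for one IP.
# Applying this once per row is what makes the lookup slow on large tables.
def find_network(ip_str, networks):
    ip = ipaddress.ip_address(ip_str)
    for net_str in networks:
        if ip in ipaddress.ip_network(net_str):
            return net_str
    return None

print(find_network('99.97.1.1', ['10.0.0.0/8', '99.96.0.0/13']))  # -> 99.96.0.0/13
```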
<p>I noticed that some query languages, like KQL, have built-in support for this:
<a href="https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/ipv4-lookup-plugin" rel="nofollow noreferrer">ipv4-lookup</a></p>
<p>Is there any kind of built-in support for IP lookup in <strong>polars</strong> or <strong>pyarrow</strong>?
Any suggestions?</p>
| <python><python-polars><pyarrow> | 2023-08-26 18:01:20 | 1 | 762 | JammingThebBits |
76,983,840 | 7,699,683 | How to properly stop and start a bot in pyrogram (not restart) | <p>I want clients to be able to stop a bot and start it again after a while. If I use the stop() function and then the start() function again, the bot no longer receives any updates. I don't want to use restart(), because restart starts straight away, which is not what I want. What can you suggest to fix this issue?</p>
| <python><telegram><pyrogram> | 2023-08-26 16:18:48 | 0 | 333 | Efim Rubin |
76,983,477 | 2,867,882 | Struggling with class methods, cls, and self when using functions across classes in Python while updating a dictionary | <p>I am trying to build a small UI for updating some settings and then displaying some intake data on a Raspberry Pi. The Python version is 3.9.2. The most logical way to save the settings, I thought, would be to build JSON and then save it to a file.</p>
<p>I am struggling to get the json_data dictionary to update and retain the settings. I got rid of the <code>__init__</code> for some attempts using <code>@classmethod</code> with cls instead of self. I tried <code>@staticmethod</code>, but that didn't seem right based on what I think I understand about that decorator. I tried passing <code>*args</code> and <code>**kwargs</code> into the function.</p>
<p>The Settings.update_settings() method is called from multiple classes at different times to update the settings. I have been able to get the json_data to update, but the problem is that after each call to update_settings the json_data is reset when using classmethod. When using self, I get a missing positional argument 'self' error (which I think is expected) when it's called from outside the Settings class.</p>
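<p>For reference, the variants I tried look roughly like this (simplified names; both should hit the same class-level dict):</p>

```python
class Demo:
    shared = {}

    @staticmethod
    def update_static(d):
        Demo.shared.update(d)  # refers to the class explicitly

    @classmethod
    def update_cls(cls, d):
        cls.shared.update(d)   # same class-level dict, reached via cls

Demo.update_static({'a': 1})
Demo.update_cls({'b': 2})
print(Demo.shared)  # -> {'a': 1, 'b': 2}
```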
<p>The reason I need to save off the pieces of the JSON as they are entered is that I am destroying the widgets when moving between settings groups.</p>
<p>Here is the main screen (MainApp) where they input the first update for the JSON in the Settings class.</p>
<pre><code>from guizero import App, Box, info, Picture, PushButton, Text, TextBox, Window, yesno
from datetime import datetime as dt
import subprocess
import sys
from data_output_setup_page import LoggerSetupPage
from logging_data_display import DataDisplayPage
import logging as log
from utilities import Settings
class MainApp:
def __init__(self):
self.submit_site_text = 'Submit'
self.app = App('Epic One Water Pulse Logging',
width=800, height=480, bg='#050f2b')
# Maximize the app window
#self.app.tk.attributes('-fullscreen', True)
# Top Box for header
self.top_box = Box(self.app, layout="grid")
self.brand = Picture(self.top_box, image=r'/home/ect-one-user'
r'/Desktop/One_Water_Pulse_Logger'
r'/assets/Brand.png',
align='left', grid=[0, 0])
self.header = Text(self.top_box, text=dt.now().strftime('%Y-%m-%d %H:%M:%S'),
align='right', grid=[1, 0], width='fill')
self.header.text_color = 'white'
self.header.width = 90
self.welcome_box = Box(self.app, layout='grid', align='top')
self.welcome_text = Text(self.welcome_box, text="Welcome to the ECT Pulse Logger Prototype.",
size=14, grid=[0, 0])
self.welcome_text.text_color = 'white'
self.welcome_text.width = 90
self.welcome_text_l2 = Text(self.welcome_box, text="Please send any feedback to this_email@Gmail.com",
size=14, grid=[0, 1])
self.welcome_text_l2.text_color = 'white'
self.welcome_text_l2.width = 90
# Middle of Screen box
self.box = Box(self.app, layout='grid', align='top')
self.spacer = Text(self.box, text='', grid=[0, 1], width='fill')
self.site_name_label = Text(self.box, text='Site Name:', grid=[0, 2])
self.site_name_label.text_color = 'white'
self.l_spacer = Text(self.box, text='', grid=[1, 2])
self.site_name = TextBox(self.box, grid=[2, 2])
self.site_name.width = 20
self.site_name.bg = 'white'
self.r_spacer = Text(self.box, text='', grid=[3, 2])
self.submit_site = PushButton(self.box, text=self.submit_site_text,
command=self.site_lock, grid = [4, 2])
self.submit_site.text_color = 'white'
self.spacer = Text(self.box, text='', grid=[0, 3], width='fill')
self.sv_stg_to_file = PushButton(self.box, text='Save Settings File',
command=Settings.save_to_json, grid=[0, 4, 3, 1])
self.sv_stg_to_file.text_color = 'white'
# Create a button holder at bottom of screen
self.bottom_box = Box(self.app, layout='grid', align='bottom')
self.open_ds_button = PushButton(self.bottom_box, text='Logger Setup',
command=lambda: self.open_window(
LoggerSetupPage, 'Logger Setup'),
align='left', grid=[0, 0])
self.open_ds_button.text_color = 'white'
self.open_ds_button.hide()
self.open_db_button = PushButton(self.bottom_box, text='Open Logging',
command=lambda: self.open_window(
DataDisplayPage, 'Logging'),
align='left', grid=[1, 0])
self.open_db_button.text_color = 'white'
self.open_db_button.hide()
self.close_button = PushButton(self.bottom_box, text='Shutdown Logger',
command=self.exit_pgm,
align='left', grid=[2, 0])
self.close_button.text_color = 'white'
def run(self):
self.app.display()
def get_settings(self):
self.settings = Settings.retrieve_settings()
print(self.settings)
if not isinstance(self.settings, type(None)):
load_settings = yesno('Load Settings', 'Settings file found. Load settings?')
if load_settings:
self.site_name.value = self.settings['Site Name']
self.logger_setup.import_settings(self.settings)
elif isinstance(self.settings, type(None)):
info('Config', 'No settings file found. Please configure settings.')
def check_json(self):
self.local_settings = Settings.check_json()
print(self.local_settings)
if self.local_settings:
info('Config', 'Settings ready for save.')
self.sv_stg_to_file.show()
else:
self.sv_stg_to_file.hide()
def site_lock(self):
if self.submit_site_text == 'Submit':
self.site_name.disable()
self.submit_site_text = 'Alter Site Name'
Settings.update_settings({
'Settings':
{'Site Name': self.site_name.value}
})
self.get_settings()
self.open_ds_button.show()
self.open_db_button.show()
# Add a log statement
log.info('Site name updated to {0}'.format(self.site_name.value))
else:
self.site_name.enable()
self.submit_site_text = 'Submit'
self.open_ds_button.hide()
self.open_db_button.hide()
# Add a log statement
log.info('Site name updated to {0}'.format(self.site_name.value))
self.submit_site.text = self.submit_site_text
def open_window(self, module, wdw_name):
self.app.hide()
new_window = Window(
self.app, title=wdw_name, width=800, height=480, bg='#050f2b')
#new_window.tk.attributes('-fullscreen', True)
# Create an instance of DataDisplayPage
open_page = module(new_window, self)
new_window.show()
def exit_pgm(self):
self.app.destroy()
# subprocess.Popen(['shutdown','-h','now'])
sys.exit()
if __name__ == "__main__":
app = MainApp()
app.check_json()
app.run()
</code></pre>
<p>Here is my settings class</p>
<pre><code>from botocore.client import Config
import boto3
import collections.abc
import ftplib
import json
import logging
from base64 import b64decode as bd
from datetime import datetime as dt
class Settings:
settings_directory = '/home/ect-one-user/Desktop/One_Water_Pulse_Logger/config/'
settings_filename = '_logger_config.json'
json_data = {
'Settings': {
'Site Name': None,
'Sensor': {
},
'Data Output': {
},
'Email Address': {
}
}
}
@staticmethod
def update_settings(d):
Settings.json_data.update(d)
@staticmethod
def check_json():
print(Settings.json_data)
try:
if Settings.json_data['Settings']['Site Name'] is not None \
and Settings.json_data['Settings']['Sensor']['Name'] is not None \
and Settings.json_data['Settings']['Data Output']['Location'] is not None:
return True
except KeyError:
return False
</code></pre>
<p>LoggerSetupPage Class</p>
<pre><code>from base64 import b64encode as be
from base64 import b64decode as bd
from datetime import datetime as dt
from guizero import Box, Combo, info, Picture, ListBox, PushButton, Text, TextBox, Window
from utilities import Settings
class LoggerSetupPage:
def __init__(self, parent, main_app):
self.parent = parent
self.main_app = main_app
self.current_row = 0
self.settings_dict = {}
self.widgets_to_destroy = []
# Top Box for header
self.top_box = Box(self.parent, layout='grid')
# Display the brand logo in the top left corner of the main window.
self.brand = Picture(self.top_box,
image='/home/ect-one-user/Desktop/One_Water_Pulse_Logger/assets/Brand.png'
, align='left', grid=[0, 0])
self.header = Text(self.top_box,
text=dt.now().strftime('%Y-%m-%d %H:%M:%S'),
align='right', grid=[1, 0])
self.header.width = 90
self.header.text_color = 'white'
# Middle box
self.mid_box = Box(self.parent, layout='grid')
self.config_selection = Combo(
self.mid_box,
options=[ 'Data Output Config', 'Sensor Config'],
command=self.check_selection,
grid=[0, 0]
)
self.config_selection.text_color = 'white'
self.config_selection.text_size = 16
# Bottom box for buttons
self.bottom_box = Box(self.parent, layout='grid', align='bottom')
self.return_button = PushButton(self.bottom_box,
text='Return to Main Page',
command=self.return_to_main, align='bottom', grid=[0, 2])
self.return_button.text_color = 'white'
def return_to_main(self):
self.main_app.app.show()
self.main_app.check_json()
self.parent.destroy()
def create_input_list(self):
if self.config_selection.value == 'Data Output Config':
self.data_output_choice_label = Text(self.mid_box, text='Data Output:',
grid=[0, 0])
self.data_output_choice_label.text_color = 'white'
self.data_output_choice = Combo(self.mid_box,
options=['local', 's3', 'ftp'], command=self.check_sub_selection,
grid=[1, 0])
self.data_output_choice.text_color = 'white'
self.current_row += 1
self.widgets_to_destroy.extend([
self.data_output_choice_label,
self.data_output_choice
])
def create_inputs(self):
if self.config_selection.value == 'Sensor Config':
self.sn_label = Text(self.mid_box, text='Sensor Name:',
align='left', grid=[0, 1])
self.sn_label.text_color = 'white'
self.sn_input = TextBox(self.mid_box, grid=[1, 1], width=30,
align='left',)
self.sn_input.text_color = 'white'
self.current_row += 1
self.kf_label = Text(self.mid_box, text='K Factor:',
align='left', grid=[0, 2])
self.kf_label.text_color = 'white'
self.kf_input = TextBox(self.mid_box, grid=[1, 2], width=10,
align='left',)
self.kf_input.text_color = 'white'
self.current_row += 1
self.su_label = Text(self.mid_box, text='Sensor Units:',
align='left', grid=[0, 3])
self.su_label.text_color = 'white'
self.su_input = TextBox(self.mid_box, grid=[1, 3], width=10,
align='left',)
self.su_input.text_color = 'white'
self.current_row += 1
self.du_label = Text(self.mid_box, text='Desired Units:', grid=[0, 4])
self.du_label.text_color = 'white'
self.du_input = TextBox(self.mid_box, grid=[1, 4], width=10,
align='left',)
self.du_input.text_color = 'white'
self.current_row += 1
self.widgets_to_destroy.extend([
self.sn_label,
self.sn_input,
self.kf_label,
self.kf_input,
self.su_label,
self.su_input,
self.du_label,
self.du_input
])
elif self.data_output_choice.value == 's3':
self.l_spacer = Text(self.mid_box, text='', grid=[0, 1], width = 'fill')
self.current_row += 1
self.s3_bucket_label = Text(self.mid_box, text='S3 Bucket:',
grid=[0, 2], align='left')
self.s3_bucket_label.text_color = 'white'
self.s3_bucket_input = TextBox(self.mid_box, grid=[1, 2], width=30,
align='left')
self.s3_bucket_input.text_color = 'white'
self.current_row += 1
self.s3_prefix_label = Text(self.mid_box, text='S3 Folder:',
grid=[0, 3], align='left')
self.s3_prefix_label.text_color = 'white'
self.s3_prefix_input = TextBox(self.mid_box, grid=[1, 3], width=30,
align='left')
self.s3_prefix_input.text_color = 'white'
self.current_row += 1
self.s3_key_label = Text(self.mid_box, text='S3 Filename:',
grid=[0, 4], align='left')
self.s3_key_label.text_color = 'white'
self.s3_key_input = TextBox(self.mid_box, grid=[1, 4], width=30,
align='left')
self.s3_key_input.text_color = 'white'
self.current_row += 1
self.s3_ak_label = Text(self.mid_box, text='User Access Key:',
grid=[0, 5], align='left')
self.s3_ak_label.text_color = 'white'
self.s3_ak_input = TextBox(self.mid_box, grid=[1, 5], width=30,
align='left')
self.s3_ak_input.text_color = 'white'
self.current_row += 1
self.s3_sk_label = Text(self.mid_box, text='User Secret Key:',
grid=[0, 6], align='left')
self.s3_sk_label.text_color = 'white'
self.s3_sk_input = TextBox(self.mid_box, grid=[1, 6], width=30,
align='left')
self.s3_sk_input.text_color = 'white'
self.current_row += 1
self.s3_role_label = Text(self.mid_box, text='Role to Assume:',
grid=[0, 7], align='left')
self.s3_role_label.text_color = 'white'
self.s3_role_input = TextBox(self.mid_box, grid=[1, 7], width=30,
align='left')
self.s3_role_input.text_color = 'white'
self.current_row += 1
self.widgets_to_destroy.extend([
self.s3_bucket_label,
self.s3_bucket_input,
self.s3_prefix_label,
self.s3_prefix_input,
self.s3_key_label,
self.s3_key_input,
self.s3_ak_label,
self.s3_ak_input,
self.s3_sk_label,
self.s3_sk_input,
self.s3_role_label,
self.s3_role_input,
self.l_spacer
])
elif self.data_output_choice.value == 'ftp':
self.l_spacer = Text(self.mid_box, text='', grid=[0, 1], width = 'fill')
self.ftp_host_label = Text(self.mid_box, text='FTP Host:',
grid=[0, 2], align='left')
self.ftp_host_label.text_color = 'white'
self.ftp_host_input = TextBox(self.mid_box, grid=[1, 2], width=30,
align='left')
self.ftp_host_input.text_color = 'white'
self.current_row += 1
self.ftp_port_label = Text(self.mid_box, text='FTP Port:',
grid=[0, 3], align='left')
self.ftp_port_label.text_color = 'white'
self.ftp_port_input = TextBox(self.mid_box, grid=[1, 3], width=30,
align='left')
self.ftp_port_input.text_color = 'white'
self.current_row += 1
self.ftp_un_label = Text(self.mid_box, text='FTP Username:',
grid=[0, 4], align='left')
self.ftp_un_label.text_color = 'white'
self.ftp_un_input = TextBox(self.mid_box, grid=[1, 4], width=30,
align='left')
self.ftp_un_input.text_color = 'white'
self.current_row += 1
self.ftp_pwd_label = Text(self.mid_box, text='FTP Password:',
grid=[0, 5], align='left')
self.ftp_pwd_label.text_color = 'white'
self.ftp_pwd_input = TextBox(self.mid_box, grid=[1, 5], width=30,
align='left')
self.ftp_pwd_input.text_color = 'white'
self.current_row += 1
self.ftp_dir_label = Text(self.mid_box, text='Save Location:',
grid=[0, 6], align='left')
self.ftp_dir_label.text_color='white'
self.ftp_dir_input = TextBox(self.mid_box, grid=[1, 6], width=30,
align='left')
self.ftp_dir_input.text_color='white'
self.current_row += 1
self.widgets_to_destroy.extend([
self.ftp_host_label,
self.ftp_host_input,
self.ftp_port_label,
self.ftp_port_input,
self.ftp_un_label,
self.ftp_un_input,
self.ftp_pwd_label,
self.ftp_pwd_input,
self.ftp_dir_label,
self.ftp_dir_input,
self.l_spacer
])
elif self.data_output_choice.value == 'local':
self.l_spacer = Text(self.mid_box, text='', grid=[0, 1], width = 'fill')
self.email_address_label = Text(self.mid_box, text='Email Address:',
grid=[0, 2], align='left')
self.email_address_label.text_color = 'white'
self.email_address_input = TextBox(self.mid_box, grid=[1, 2], width=40,
align='left')
self.email_address_input.text_color = 'white'
self.current_row += 1
self.widgets_to_destroy.extend([
self.email_address_label,
self.email_address_input,
self.l_spacer
])
# Create a button to return the ListBox to visible
self.show_list_btn = PushButton(self.bottom_box, text='Back to List',
command=self.show_config, grid=[0, self.current_row+1],
align='bottom')
self.show_list_btn.text_color = 'white'
self.save_settings_btn = PushButton(self.bottom_box, text='Save Settings',
command=self.save_settings, grid=[1, self.current_row+1], align='bottom')
self.save_settings_btn.text_color = 'white'
self.widgets_to_destroy.extend([
self.show_list_btn,
self.save_settings_btn
])
def import_settings(self, kwargs):
if kwargs['Location'] == 's3':
self.data_output_choice.value = 's3'
self.s3_bucket_input.value = kwargs['Settings']['Data Output']['Bucket']
self.s3_prefix_input.value = kwargs['Settings']['Data Output']['Prefix']
self.s3_key_input.value = kwargs['Settings']['Data Output']['Key']
self.s3_ak_input.value = bd(kwargs['Settings']['Data Output']\
['Auth']['Access Key']).decode('utf-8')
self.s3_sk_input.value = bd(kwargs['Settings']['Data Output']\
['Auth']['Secret Key']).decode('utf-8')
self.s3_role_input.value = kwargs['Settings']['Data Output']\
['Auth']['Role']
elif kwargs['Location'] == 'ftp':
self.data_output_choice.value = 'ftp'
self.ftp_host_input.value = kwargs['Settings']['Data Output']['Host']
self.ftp_port_input.value = kwargs['Settings']['Data Output']['Port']
self.ftp_un_input.value = bd(kwargs['Settings']['Data Output']\
['Auth']['Username']).decode('utf-8')
self.ftp_pwd_input.value = bd(kwargs['Settings']['Data Output']\
['Auth']['Password']).decode('utf-8')
self.ftp_dir_input.value = kwargs['Settings']['Data Output']['Directory']
else:
self.data_output_choice.value = 'local'
self.email_address_input.value = kwargs['Email Address']
self.sn_input.value = kwargs['Settings']['Sensor']['Name']
self.kf_input.value = kwargs['Settings']['Sensor']['K Factor']
self.su_input.value = kwargs['Settings']['Sensor']['Standard Unit']
self.du_input.value = kwargs['Settings']['Sensor']['Desired Unit']
def save_settings(self):
if self.config_selection.value == 'Data Output Config':
if self.data_output_choice.value == 's3':
self.settings_dict.update({
'Settings': {
'Data Output': {
'Location': self.data_output_choice.value,
'Bucket': self.s3_bucket_input.value,
'Prefix': self.s3_prefix_input.value,
'Key': self.s3_key_input.value,
'Access Key': be(self.s3_ak_input.value.encode('utf-8')),
'Secret Key': be(self.s3_sk_input.value.encode('utf-8')),
'Role': self.s3_role_input.value
}
}
})
elif self.data_output_choice.value == 'ftp':
self.settings_dict.update({
'Settings': {
'Data Output': {
'Location': self.data_output_choice.value,
'Host': self.ftp_host_input.value,
'Port': self.ftp_port_input.value,
'Username': be(self.ftp_un_input.value.encode('utf-8')),
'Password': be(self.ftp_pwd_input.value.encode('utf-8')),
'Directory': self.ftp_dir_input.value
}
}
})
else:
self.settings_dict.update({
'Settings': {
'Data Output': {
'Location': self.data_output_choice.value,
'Email Address': self.email_address_input.value
}
}
})
elif self.config_selection.value == 'Sensor Config':
self.settings_dict.update({
'Settings': {
'Sensor': {
'Name': self.sn_input.value,
'K Factor': self.kf_input.value,
'Standard Unit': self.su_input.value,
'Desired Unit': self.du_input.value
}
}
})
Settings.update_settings(self.settings_dict)
info('success', 'settings staged.')
self.return_to_main()
def check_selection(self):
if self.config_selection.value == 'Data Output Config':
# Hide the ListBox
self.config_selection.hide()
self.return_button.hide()
# Create input widgets
self.create_input_list()
self.create_inputs()
elif self.config_selection.value == 'Sensor Config':
# Hide the ListBox
self.config_selection.hide()
self.return_button.hide()
# Create input widgets
self.create_inputs()
def check_sub_selection(self):
if self.data_output_choice.value in ['ftp', 's3'] \
and self.config_selection.visible == False:
# Destroy input widgets and the "Show List" button
self.destroy_widgets()
# Create input widgets
self.create_inputs()
def destroy_widgets(self):
# Destroy existing input widgets if there are any
for widget in self.widgets_to_destroy:
widget.destroy()
self.widgets_to_destroy = [] # Clear the list
def show_config(self):
# Destroy input widgets and the "Show List" button
self.destroy_widgets()
# Show the ListBox
self.config_selection.show()
self.return_button.show()
</code></pre>
<p>I have not used classes in pretty much any of my previous projects because they were single-use scripts for very specific purposes, generally related to moving data around, with only a few small functions. I am trying to expand my abilities, but I am really struggling to understand and fix this issue. I have spent way too much time on this and need to move on to the next hard part: using threading to display the data and then log it at certain intervals.</p>
<p>What is the most efficient way to change the inputs or function to retain the necessary settings and then make the save settings button visible for writing the JSON to a file?</p>
<p>Current status based on the provided answer:</p>
<p>On startup, <code>check_json</code> prints:</p>
<pre><code>{'Settings': {'Site Name': None, 'Sensor': {}, 'Data Output': {}, 'Email Address': {}}}
</code></pre>
<p>After the save settings button on LoggerSetupPage with local chosen, <code>check_json</code> prints:</p>
<pre><code>{'Settings': {'Data Output': {'Location': 'local', 'Email Address': ''}}}
</code></pre>
<p>After the save settings button on LoggerSetupPage with local chosen and the sensor setup entered, <code>check_json</code> prints:</p>
<pre><code>{'Settings': {'Sensor': {'Name': '123', 'K Factor': '123', 'Standard Unit': '2512', 'Desired Unit': '441'}}}
</code></pre>
<p>This is the behavior I saw before: it isn't retaining the previously added dictionary items. It is resetting itself, probably because I am updating an instance and not the actual class-level dictionary.</p>
| <python><python-class> | 2023-08-26 14:47:49 | 2 | 1,076 | Shenanigator |
76,983,078 | 349,550 | Error 400: redirect_uri_mismatch because of trailing / in redirect uri from google standard Quickstart.py | <p>I am using <code>developers.google.com/calendar/api/quickstart/python</code> code as is.</p>
<p>Here is my code after I changed the <code>flow.run_local_server</code> port to <code>4200</code>:</p>
<pre><code>from __future__ import print_function
import datetime
import os.path
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
# If modifying these scopes, delete the file token.json.
SCOPES = ['https://www.googleapis.com/auth/calendar.readonly']
def main():
"""Shows basic usage of the Google Calendar API.
Prints the start and name of the next 10 events on the user's calendar.
"""
creds = None
# The file token.json stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first
# time.
if os.path.exists('token.json'):
creds = Credentials.from_authorized_user_file('token.json', SCOPES)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(
'credentials.json', SCOPES)
creds = flow.run_local_server(port=4200)
print(creds)
# Save the credentials for the next run
with open('token.json', 'w') as token:
token.write(creds.to_json())
try:
service = build('calendar', 'v3', credentials=creds)
# Call the Calendar API
now = datetime.datetime.utcnow().isoformat() + 'Z' # 'Z' indicates UTC time
print('Getting the upcoming 10 events')
events_result = service.events().list(calendarId='primary', timeMin=now,
maxResults=10, singleEvents=True,
orderBy='startTime').execute()
events = events_result.get('items', [])
if not events:
print('No upcoming events found.')
return
# Prints the start and name of the next 10 events
for event in events:
start = event['start'].get('dateTime', event['start'].get('date'))
print(start, event['summary'])
except HttpError as error:
print('An error occurred: %s' % error)
if __name__ == '__main__':
main()
</code></pre>
<hr />
<p>Following is how my <code>credentials.json</code> looks like</p>
<pre><code>{"web":{"client_id":"***","project_id":"***","auth_uri":"https://accounts.google.com/o/oauth2/auth","token_uri":"https://oauth2.googleapis.com/token","auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs","client_secret":"***","redirect_uris":["http://localhost:4200"],"javascript_origins":["http://localhost:4200"]}}
</code></pre>
<hr />
<p>After I run this code, I get following error</p>
<pre><code>Error 400: redirect_uri_mismatch
You can't sign in to this app because it doesn't comply with Google's OAuth 2.0 policy.
If you're the app developer, register the redirect URI in the Google Cloud Console.
Request details: redirect_uri=http://localhost:4200/
Related developer documentation
</code></pre>
<p>Notice the trailing <code>/</code> in <code>Request details: redirect_uri=http://localhost:4200/</code> which is causing <code>Error 400: redirect_uri_mismatch</code></p>
<p><em>How I know the trailing <code>/</code> is causing this error</em>: I tried the following URL without the trailing <code>/</code> (<code>%2F</code>) and it works fine.</p>
<pre><code>https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=***&redirect_uri=http%3A%2F%2Flocalhost%3A4200%2F&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcalendar.readonly&state=***&access_type=offline
</code></pre>
<p>Looking forward to possible solutions.</p>
| <python><google-api> | 2023-08-26 12:59:52 | 1 | 395 | Saurabh |
76,983,039 | 1,050,619 | CORS preflight error in python pyramid application | <p>I have a webapp front end built in reactjs that calls an API built using python pyramid.</p>
<p>When I test this application in local development, I deploy the webapp at localhost:6543 and the python application at localhost:3000.</p>
<p>I get a CORS preflight error while calling the python application.</p>
<p>Here is my JS call -</p>
<pre><code>const onFinish = (values) => {
fetch(`${baseUrl}login`, {
method: "POST",
mode: 'no-cors',
headers: {
"Content-Type": "application/json",
"Access-Control-Allow-Origin": "http://localhost:3000",
},
body: JSON.stringify(values),
</code></pre>
<p>Python application adding cors header -</p>
<pre><code>from pyramid.config import Configurator
from invest_web.authentication.security import SecurityPolicy
from pyramid.response import Response
from pyramid.events import NewRequest
from invest_web.models.models import User, Issuer
def add_cors_headers_response_callback(event):
def cors_headers(request, response):
response.headers['Access-Control-Allow-Origin'] = '*'
response.headers['Access-Control-Allow-Methods'] = 'POST,GET,DELETE,PUT,OPTIONS'
response.headers['Access-Control-Allow-Headers'] = 'access-control-allow-origin,content-type'
response.headers['Access-Control-Allow-Credentials'] = 'true'
event.request.add_response_callback(cors_headers)
def main(global_config, **settings):
""" This function returns a Pyramid WSGI application.
"""
with Configurator(settings=settings) as config:
config.include('pyramid_jinja2')
config.set_security_policy(
SecurityPolicy(
secret=settings['invest_web.secret']
),
)
config.add_subscriber(add_cors_headers_response_callback, NewRequest)
config.include('.routes')
config.include('.models')
config.scan()
return config.make_wsgi_app()
</code></pre>
<p><a href="https://i.sstatic.net/Xdg9q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xdg9q.png" alt="enter image description here" /></a></p>
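<p>For reference, here is the preflight response header set I <em>think</em> the server needs (assuming credentialed requests, where a wildcard <code>*</code> origin is not allowed and the origin has to be echoed back explicitly). The helper below is just a plain function sketching that rule, not Pyramid code:</p>

```python
def cors_preflight_headers(origin, allowed_origins):
    # A credentialed request may not use '*' as the allowed origin;
    # echo the requesting origin back only when it is explicitly allowed.
    if origin not in allowed_origins:
        return {}
    return {
        'Access-Control-Allow-Origin': origin,
        'Access-Control-Allow-Methods': 'POST,GET,DELETE,PUT,OPTIONS',
        'Access-Control-Allow-Headers': 'content-type',
        'Access-Control-Allow-Credentials': 'true',
    }
```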
| <python><pyramid> | 2023-08-26 12:50:12 | 1 | 20,966 | user1050619 |
76,982,778 | 13,392,257 | Atomic transaction in django | <p>I have a view which updates an object in the database. Should I make this view atomic (<code>@transaction.atomic</code>)?
I want to eliminate data races (for example, two requests updating the value at the same time).</p>
<p>My code:</p>
<pre><code>@api_view(["POST"])
@transaction.atomic
def employee_increase(request):
logger.info(f"Increase sallary: {request.data}")
serializer = IdSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
employee = get_object_or_404(Employee, pk=request.data["id"])
old_sallary = employee.sallary
employee.sallary = int(old_sallary * (1 + int(request.data["increase_percent"]) / 100))
employee.save()
</code></pre>
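<p>To check my understanding of the race: with plain read-modify-write, two concurrent requests can both read the old salary and both write the same result, so one increase is lost. My understanding is that this is avoided by pushing the arithmetic into a single <code>UPDATE</code> statement (which I believe is what Django's <code>F()</code> expressions compile to). A runnable sketch with sqlite, table and numbers made up for illustration:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, sallary INTEGER)")
conn.execute("INSERT INTO employee (id, sallary) VALUES (1, 1000)")

# Race-prone pattern (two requests can both read 1000 and both write 1100):
#   old = SELECT sallary ...; then UPDATE with a Python-computed value.
# Single-statement pattern: the arithmetic happens inside the UPDATE, so the
# database applies concurrent increases one after another:
increase_percent = 10
conn.execute(
    "UPDATE employee SET sallary = CAST(sallary * (1 + ? / 100.0) AS INTEGER) "
    "WHERE id = 1",
    (increase_percent,),
)
new_sallary = conn.execute("SELECT sallary FROM employee WHERE id = 1").fetchone()[0]
```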
| <python><django> | 2023-08-26 11:35:54 | 1 | 1,708 | mascai |
76,982,776 | 2,300,597 | tk.Tk window goes behind my Windows taskbar so there's some part of it which I cannot see | <p>I have a window of type <code>tk.Tk</code> which I start up as 'zoomed'.</p>
<pre><code>import tkinter as tk
import tkinter.ttk as ttk
...
master = tk.Tk()
master.state('zoomed')
</code></pre>
<p>I work on a Windows laptop.</p>
<p>But I noticed that in that zoomed state there is a part of the Tk window which is behind my Windows OS taskbar.</p>
<p>How can I tell the <code>tk.Tk</code> object to fill my whole screen without going "behind the taskbar", i.e. to fill all the desktop space that I can actually see? The part behind the Windows taskbar I simply cannot see.</p>
<p>I noticed that other program windows (e.g. Excel) do what I want (when fully zoomed). So maybe there's some option I can use to make the <code>tk.Tk</code> window do that too.</p>
<p>The problem is that when the Tk window is fully zoomed, there's some strip of it (behind my taskbar) which I cannot see unless I tell Windows to hide my taskbar.</p>
| <python><tkinter><tk-toolkit> | 2023-08-26 11:34:54 | 1 | 39,631 | peter.petrov |
76,982,761 | 5,675,881 | Copy file to dict with ast.literal_eval without memory leak | <p>I am trying to do a "file to dict" conversion where each line of the file is a literal representation of a dict and I want to store each dict in a dict that references them by line number.</p>
<p>I thought it would be pretty straightforward, but it seems like <code>ast.literal_eval</code> is somehow leaking memory, or that something is not being garbage collected.</p>
<p>Here is a simple directly runnable way to reproduce the issue:</p>
<pre><code>import sys, os
from guppy import hpy
import ast
print("creating the file, each line is a python dict")
line="{'foo1':'bar1','foo2':'bar2','foo3':'bar3','foo4':'bar4','foo5':'bar5'}"
with open('each_line_is_a_python_dict.txt', 'w+') as file:
for i in range(10000000):
file.write('%s\n' % line)
print("the size of the file is", os.path.getsize("each_line_is_a_python_dict.txt")/1024/1024, "MB")
print("heap after creating the file: ")
h = hpy()
print(h.heap())
dict_of_dicts = {}
with open("each_line_is_a_python_dict.txt") as f:
for lineno,line in enumerate(f):
dict_of_dicts[lineno] = ast.literal_eval(line)
print("sys.getsizeof(dict_of_dicts)", sys.getsizeof(dict_of_dicts))
print("heap after creating the dict_of_dicts:")
h = hpy()
print(h.heap())
</code></pre>
<p>Here is the output I get (running with pipenv on Windows 10 PowerShell):</p>
<pre><code>creating the file, each line is a python dict
the size of the file is 696.1822509765625 MB
heap after creating the file:
Partition of a set of 40632 objects. Total size = 5515099 bytes.
Index Count % Size % Cumulative % Kind (class / dict of
class)
0 12414 31 1093030 20 1093030 20 str
1 2786 7 1002680 18 2095710 38 types.CodeType
2 615 2 620872 11 2716582 49 type
3 7811 19 554720 10 3271302 59 tuple
4 5164 13 464836 8 3736138 68 bytes
5 2541 6 386232 7 4122370 75 function
6 615 2 222264 4 4344634 79 dict of type
7 98 0 128704 2 4473338 81 dict of module
8 439 1 126936 2 4600274 83 dict (no owner)
9 1203 3 86616 2 4686890 85 types.WrapperDescriptorType
<145 more rows. Type e.g. '_.more' to view.>
sys.getsizeof(dict_of_dicts) 335544400
heap after creating the dict_of_dicts:
Partition of a set of 70040311 objects. Total size = 5111046479 bytes.
Index Count % Size % Cumulative % Kind (class / dict of
class)
0 50012416 71 2651093206 52 2651093206 52 str
1 10000421 14 2175667080 43 4826760286 94 dict (no owner)
2 10001078 14 280033684 5 5106793970 100 int
3 2786 0 1002680 0 5107796650 100 types.CodeType
4 615 0 620872 0 5108417522 100 type
5 7811 0 554720 0 5108972242 100 tuple
6 5164 0 464836 0 5109437078 100 bytes
7 2541 0 386232 0 5109823310 100 function
8 615 0 222264 0 5110045574 100 dict of type
9 98 0 128704 0 5110174278 100 dict of module
<145 more rows. Type e.g. '_.more' to view.>
</code></pre>
<p>Meanwhile, the memory use rises constantly, to values that far exceed the size of the file, here even 10GB:</p>
<p><a href="https://i.sstatic.net/Ustn6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ustn6.png" alt="enter image description here" /></a></p>
<p>Is it an issue with <code>ast.literal_eval</code> or is something else wrong?</p>
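<p>One mitigation I am considering (it would not explain the growth itself, but should shrink the footprint) is interning the repeated strings, so each distinct key/value is stored once instead of ten million times. I am not sure it addresses the root cause:</p>

```python
import ast
from sys import intern

def parse_line(line):
    d = ast.literal_eval(line)
    # intern() returns one canonical object per distinct string, so repeated
    # keys/values across millions of lines all share the same str objects
    return {intern(k): intern(v) for k, v in d.items()}

line = "{'foo1':'bar1','foo2':'bar2','foo3':'bar3','foo4':'bar4','foo5':'bar5'}"
a = parse_line(line)
b = parse_line(line)
```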
| <python> | 2023-08-26 11:30:03 | 2 | 1,576 | shrimpdrake |
76,982,728 | 6,484,726 | How would you develop a function for API response parsing using TDD? | <p>I just read <em>"Test Driven Development: By Example"</em> by Kent Beck, and I am trying to apply it to my current project.</p>
<p>Given an API response, I need to extract information from it. The API response is a JSON array:</p>
<pre class="lang-json prettyprint-override"><code>[
{"1789484079": "event1", "1531059415": "event2"},
{},
{},
{"1234256612": "event3"}
]
</code></pre>
<p>Each object in that array may have multiple keys; each key is a timestamp. An object may be empty.</p>
<p>The task is to find the most recent "event" according to a given timestamp. For example:</p>
<ul>
<li>if the given timestamp is 1889484079, "event1" shall be returned;</li>
<li>if the given timestamp is 1689484079, "event2" is the right choice.</li>
</ul>
<p>So, following the book's guidance, a test and a naive implementation which returns a constant:</p>
<pre class="lang-py prettyprint-override"><code>def parse_response(response, timestamp):
return "event1"
def test_api_response_parsing_returns_correct_events():
response = [{1789484079: "event1", 1531059415: "event2"}]
timestamp = 1889484079
assert parse_response(response, timestamp) == "event1"
</code></pre>
<p>Now I need to make the test fail, so I have room for improvement.</p>
<pre class="lang-py prettyprint-override"><code>def test_api_response_parsing_returns_correct_events():
response = [{1789484079: "event1", 1531059415: "event2"}]
timestamp = 1889484079
assert parse_response(response, timestamp) == "event1"
timestamp = 1689484079
assert parse_response(response, timestamp) == "event2"
</code></pre>
<p>What could the next step have been if we want to keep moving by little steps, as Kent Beck suggests in his book?</p>
<p>Introduce 2 more constants?</p>
<pre><code>def parse_response(response, timestamp):
if timestamp == 1889484079:
return "event1"
else:
return "event2"
</code></pre>
<p>This starts to seem like duplication, which we must get rid of. And how would we do it? My thought for the next step is that we have timestamps in <code>response</code>, so in order to remove the duplication we have to iterate over <code>response</code> while checking timestamps according to the rules the function has to follow, but isn't that a big leap rather than a small step?</p>
<p>How would you develop this function if you were following TDD principles?</p>
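<p>For context, this is what I imagine the fully generalized implementation might eventually look like, so the question is really about which intermediate TDD steps lead here:</p>

```python
def parse_response(response, timestamp):
    # Flatten all {timestamp: event} objects, keeping only events at or
    # before the given timestamp, then return the newest one.
    events = {
        int(ts): event
        for obj in response
        for ts, event in obj.items()
        if int(ts) <= timestamp
    }
    return events[max(events)]
```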
| <python><tdd> | 2023-08-26 11:18:05 | 0 | 398 | hardhypochondria |
76,982,684 | 919,872 | Flask dynamic import pattern | <p>I have a flask application with a directory structure like so (simplified for the question).</p>
<pre><code>app.py
/some-service
- __init__.py
- impl1.py
- impl2.py
- impl3.py
</code></pre>
<p>Based on the configuration, I want to import and instantiate one of the implementations for <code>some-service</code>. I currently do something that seems somewhat hacky. In my <code>app.py</code> I create <code>app.config</code> from a python module containing a string referencing the desired implementation.</p>
<pre class="lang-py prettyprint-override"><code>import importlib
...
app = Flask(__name__)
app.config.from_object(config.dev)
mod = importlib.import_module(f"services.{app.config['SERVICE_IMPL']}")
mod.service_init(app)
</code></pre>
<p>And in the implementation I have a <code>service_init</code> function that looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>def service_init(app):
app.config['SERVICE_INSTANCE'] = ServiceImpl1()
</code></pre>
<p>What feels hacky to me is storing an instance of the desired service implementation on the app config. However, I haven't seen anything that points to how to implement the desired behavior in the docs.</p>
<p>How can I dynamically configure a service class that can be one of many implementations? And how can I access it in some way other than storing it in the <code>app.config</code>?</p>
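<p>The cleanest alternative I can think of so far is a small registry. Sketched here without Flask so it is runnable on its own; the idea is that each implementation module registers itself, and the app keeps the chosen instance as a plain attribute (e.g. on <code>app.extensions</code>) rather than in <code>app.config</code>. All names below are made up for illustration:</p>

```python
SERVICE_REGISTRY = {}

def register_service(name):
    # Decorator: each implementation module registers its class under a key
    def decorator(cls):
        SERVICE_REGISTRY[name] = cls
        return cls
    return decorator

@register_service("impl1")
class ServiceImpl1:
    def ping(self):
        return "pong from impl1"

def make_service(name):
    # Called once at app-creation time with the configured key
    return SERVICE_REGISTRY[name]()
```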
| <python><flask> | 2023-08-26 11:05:38 | 0 | 40,688 | Zelazny7 |
76,982,672 | 9,778,828 | How to convert a tuple of dictionaries of pyTorch tensors into a dictionary of tensors? | <p>I have a tuple of dictionaries that hold PyTorch tensors:</p>
<pre><code>tuple_of_dicts_of_tensors = (
{'key_1': torch.tensor([1,1,1]), 'key_2': torch.tensor([4,4,4])},
{'key_1': torch.tensor([2,2,2]), 'key_2': torch.tensor([5,5,5])},
{'key_1': torch.tensor([3,3,3]), 'key_2': torch.tensor([6,6,6])}
)
</code></pre>
<p>Which I would like to transform into a dictionary of tensors:</p>
<pre><code>dict_of_tensors = {
'key_1': torch.tensor([[1,1,1], [2,2,2], [3,3,3]]),
'key_2': torch.tensor([[4,4,4], [5,5,5], [6,6,6]])
}
</code></pre>
<p>How would you recommend doing that? What is the most efficient way?
The tensors are on a GPU device, so the number of Python-level for loops should be kept to a minimum.</p>
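<p>What I have so far is a single dict comprehension with one <code>torch.stack</code> per key, assuming every dict has the same keys; the only Python-level loops are over the keys and the tuple, never over tensor elements:</p>

```python
import torch

def collate(dicts):
    # One torch.stack call per key; stacking adds a new leading dimension
    return {key: torch.stack([d[key] for d in dicts]) for key in dicts[0]}
```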
<p>Thanks!</p>
| <python><python-3.x><dictionary><pytorch><tuples> | 2023-08-26 11:02:48 | 1 | 505 | AlonBA |
76,982,240 | 14,457,833 | How to update widget attribute based on user role in Django | <p>I have this custom widget:</p>
<pre class="lang-py prettyprint-override"><code>class RelatedFieldWidgetCanAdd(widgets.Select):
def __init__(self, related_model, related_url=None, can_add_related=True, *args, **kw):
self.can_add_related = can_add_related
super(RelatedFieldWidgetCanAdd, self).__init__(*args, **kw)
if not related_url:
rel_to = related_model
info = (rel_to._meta.app_label, rel_to._meta.object_name.lower())
related_url = 'admin:%s_%s_add' % info
# Be cautious, as "reverse" is not allowed here
self.related_url = related_url
def render(self, name, value, *args, **kwargs):
self.related_url = reverse(self.related_url)
output = [u'<div class="d-flex">']
output.append(super(RelatedFieldWidgetCanAdd, self).render(name, value, *args, **kwargs))
if self.can_add_related:
output.append(u'<a href="%s?_to_field=id&_popup=1" class="add-another" id="add_id_%s" onclick="return showAddAnotherPopup(this);"> ' %
(self.related_url, name))
output.append(u'<img src="%sadmin/img/icon-addlink.svg" width="20" height="50" class="pb-2 mx-2" alt="%s"/></a>' %
(settings.STATIC_URL, _('Add Another')))
output.append(u'</div>')
return mark_safe(u''.join(output))
</code></pre>
<p>And this is how I'm using it in <strong>form.py</strong>:</p>
<pre class="lang-py prettyprint-override"><code>class LeaveForm(forms.ModelForm):
...
leave_type = forms.ModelChoiceField(
queryset=LeaveType.objects.all().order_by('-pk'), empty_label='--------',
widget=RelatedFieldWidgetCanAdd(
LeaveType,
related_url='leave_type_add'
)
)
</code></pre>
<p>Now, if I'm logged in as a superuser in my dashboard, I can see the <strong>+</strong> button to create a related object. That's fine, but when I'm logged in as a normal user, it still allows adding related data. I know it's because I've set <strong><code>can_add_related</code></strong> to <strong><code>True</code></strong> by default. I want to update it at render time, based on the user's <strong><code>is_superuser</code></strong> or <strong><code>is_admin</code></strong> attribute. I tried to access the <strong>Request</strong> inside the widget, but I don't have access to the <strong>Request</strong> object there, so I can't do it in <strong><code>RelatedFieldWidgetCanAdd</code></strong>. There's a method named <strong><code>get_form</code></strong> that I tried to use for this, but it removes the <strong>css</strong> class and sets the <strong>queryset</strong> empty. Even if I provide all values again in <strong><code>get_form</code></strong>, it doesn't work. Is there any other, simpler way to do this?</p>
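<p>The pattern I am imagining, sketched here without Django so it is runnable: the view passes <code>request.user</code> into the form, and the form's <code>__init__</code> flips the widget flag. The classes below are simplified stand-ins for the real ones:</p>

```python
class FakeWidget:
    # Stand-in for RelatedFieldWidgetCanAdd: render() consults the flag
    def __init__(self, can_add_related=True):
        self.can_add_related = can_add_related

    def render(self, name):
        html = '<select name="%s"></select>' % name
        if self.can_add_related:
            html += '<a id="add_id_%s">+</a>' % name
        return html

class FakeLeaveForm:
    # Stand-in for LeaveForm: __init__ receives the user and updates the widget
    def __init__(self, user=None):
        self.widget = FakeWidget()
        self.widget.can_add_related = bool(user and getattr(user, 'is_superuser', False))

class FakeUser:
    def __init__(self, is_superuser):
        self.is_superuser = is_superuser
```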
| <python><django><django-forms><django-permissions><django-widget> | 2023-08-26 08:57:05 | 1 | 4,765 | Ankit Tiwari |
76,982,144 | 8,849,755 | scale parameter is producing a displacement in scipy distribution | <p>I am doing some stuff with <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.levy_stable.html" rel="nofollow noreferrer">SciPy's <code>levy_stable</code></a> distribution. The documentation says:</p>
<blockquote>
<p><code>levy_stable.pdf(x, alpha, beta, loc, scale)</code> is identically equivalent to <code>levy_stable.pdf(y, alpha, beta) / scale</code> with <code>y = (x - loc) / scale</code>.</p>
</blockquote>
<p>Then, I understand that <code>levy_stable.pdf(x, alpha=1, beta=1, loc=0, scale=10)</code> should be a stretched-around-0 but not displaced version of <code>levy_stable.pdf(x, alpha=1, beta=1, loc=0, scale=1)</code>. However, I am getting this:</p>
<p><a href="https://i.sstatic.net/h3cbo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h3cbo.png" alt="enter image description here" /></a></p>
<p>Note how the maximum is at a negative value when <code>scale=1</code> but at a positive value when <code>scale>=5</code>, which means that it has been displaced. Why? Is this a bug, or am I missing something? In other, simpler distributions, such as <code>norm</code>, the <code>loc</code> and <code>scale</code> parameters do exactly what you expect given their names: translate and stretch the distribution. Actually, for <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.moyal.html" rel="nofollow noreferrer">the <code>moyal</code> distribution</a>, which is supposed to be an approximation to <code>levy_stable</code>, it behaves as expected:</p>
<p><a href="https://i.sstatic.net/IAAC0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IAAC0.png" alt="enter image description here" /></a></p>
<p>The code to produce those plots is this one:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.stats import levy_stable, moyal
import numpy
import plotly.express as px
import pandas
data = []
for loc in [0]:
for scale in [1,5,10]:
distributions = {
'levy_stable': levy_stable(alpha=1,beta=1,loc=loc,scale=scale),
'moyal': moyal(loc=loc,scale=scale),
}
for name,dist in distributions.items():
x = numpy.linspace(loc-5*scale,loc+30*scale,999)
df = pandas.DataFrame(
{
'x': x,
'x/scale': x/scale,
'y': dist.pdf(x),
}
)
df['loc'] = loc
df['scale'] = scale
df['distribution'] = name
data.append(df)
data = pandas.concat(data)
fig = px.line(
data,
x = 'x/scale',
y = 'y',
facet_row = 'scale',
color = 'distribution',
)
fig.update_yaxes(matches=None)
fig.write_html(f'deleteme/levy_stable_bug.html')
</code></pre>
| <python><scipy> | 2023-08-26 08:26:42 | 2 | 3,245 | user171780 |
76,981,764 | 5,902,284 | Using Concatenate and ParamSpec with a keyword argument | <p>NB: I first asked this in the <a href="https://discuss.python.org/t/using-concatenate-and-paramspec-with-a-keyword-argument/32283/1" rel="noreferrer">python forums</a></p>
<p>Hi all,</p>
<p>Do I understand PEP 612 right in that it allows annotating a decorator that "removes" the first parameter of the decorated function, but it's not possible (yet?) to fully annotate a decorator that would act on a keyword-only parameter?</p>
<p>An example of what I am trying to achieve:</p>
<pre class="lang-py prettyprint-override"><code>def call_authenticated(
func: Callable[Concatenate[AuthenticatedClient, P], Awaitable[R]],
client: AuthenticatedClient,
) -> Callable[P, Awaitable[R]]:
async def wrapped(*a, **k):
return await func(*a, **k, client=client)
return wrapped
</code></pre>
<p>The signature of the functions I would like to annotate looks like:</p>
<pre><code>async def func(arg1: str, arg2: int, *, client: AuthenticatedClient) -> Something:
...
</code></pre>
<p>But this triggers: <code>Argument "client" has incompatible type "AuthenticatedClient"; expected "P.kwargs"</code> from mypy.</p>
<p>What is the proper way to annotate my decorator here?</p>
<p>Here's a minimal (non-)working example (MNWE):</p>
<pre><code>from typing import Callable, TypeVar, ParamSpec, Awaitable, Concatenate
P = ParamSpec("P")
T = TypeVar("T")
async def to_be_wrapped(x: int, y: str, *, z: dict) -> str:
return str(x) + str(y) + str(z)
def force_z(
func: Callable[Concatenate[dict, P], Awaitable[T]]
) -> Callable[P, Awaitable[T]]:
async def wrapped(*args: P.args, **kwargs: P.kwargs):
# error: Argument 1 has incompatible type "*P.args"; expected "dict[Any, Any]" [arg-type]
# error: Argument "z" has incompatible type "dict[str, object]"; expected "P.kwargs" [arg-type]
return func(*args, **kwargs, z={"foo": "bar"})
return wrapped
async def main():
# error: Argument 1 to "force_z" has incompatible type
# "Callable[[int, str, NamedArg(dict[Any, Any], 'z')], Coroutine[Any, Any, str]]";
# expected "Callable[[dict[Any, Any], str, NamedArg(dict[Any, Any], 'z')], Awaitable[str]]" [arg-type]
await force_z(to_be_wrapped)(1, "")
</code></pre>
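<p>For what it's worth, the runtime behaviour is fine and only the typing fails; my current understanding is that <code>Concatenate</code> only models <em>positional</em> parameters, which is why the keyword-only <code>z</code> does not fit. Stripping the annotations shows the wrapper itself works:</p>

```python
import asyncio

def force_z_untyped(func):
    # Same wrapper as above, without annotations: inject z at call time
    async def wrapped(*args, **kwargs):
        return await func(*args, **kwargs, z={"foo": "bar"})
    return wrapped

async def to_be_wrapped(x: int, y: str, *, z: dict) -> str:
    return str(x) + str(y) + str(z)

result = asyncio.run(force_z_untyped(to_be_wrapped)(1, "-"))
```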
| <python><mypy><python-typing> | 2023-08-26 06:09:17 | 0 | 1,563 | nicoco |
76,981,702 | 3,380,902 | pandas dataframe aggregate based on json data | <p>I have a bunch of data saved as JSON strings in a Pandas DataFrame. I'd like to aggregate the DataFrame based on the JSON data. Here's some sample data:</p>
<pre><code>data = {
'id': [1, 2, 3],
'name': ['geo1', 'geo2', 'geo3'],
'json_data': [
'{"year": [2000, 2001, 2002], "val": [10, 20, 30]}',
'{"year": [2000, 2001, 2005], "val": [50, 60, 70]}',
'{"year": [2000, 2001, 2002], "val": [80, 90, 85]}'
]
}
df = pd.DataFrame(data)
</code></pre>
<p>I'd like to aggregate by <code>year</code> and calculate the <code>median</code> of <code>val</code>. So, if the JSON fields were columns, it would be something like:</p>
<pre><code>dff = df.groupby(['year'], as_index=False).agg({'val':'median'})
print(dff)
year val
2000 50
2001 60
2002 58
2005 70
</code></pre>
<p>In case of an even number of values, round up the median. Only integer values, no decimals.</p>
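<p>The direction I have tried so far is to parse each JSON string and explode it into a long year/val frame before grouping. I am not sure it is the idiomatic way:</p>

```python
import json
import math
import pandas as pd

data = {
    'id': [1, 2, 3],
    'name': ['geo1', 'geo2', 'geo3'],
    'json_data': [
        '{"year": [2000, 2001, 2002], "val": [10, 20, 30]}',
        '{"year": [2000, 2001, 2005], "val": [50, 60, 70]}',
        '{"year": [2000, 2001, 2002], "val": [80, 90, 85]}'
    ]
}
df = pd.DataFrame(data)

# Explode the per-row JSON into one long frame of (year, val) pairs
parsed = df['json_data'].apply(json.loads)
long_df = pd.DataFrame({
    'year': [y for p in parsed for y in p['year']],
    'val': [v for p in parsed for v in p['val']],
})
dff = long_df.groupby('year', as_index=False)['val'].median()
dff['val'] = dff['val'].apply(math.ceil)  # round up, integers only
```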
| <python><json><pandas> | 2023-08-26 05:37:51 | 6 | 2,022 | kms |
76,981,505 | 3,380,902 | Extract values from json data with condition and add them in as new columns | <p>I have a Pandas DataFrame with JSON data. I'd like to extract the most recent <code>year</code> and the corresponding <code>val</code> and add them in as new columns.</p>
<p>Sample DataFrame:</p>
<pre><code>data = {
'id': [1, 2, 3],
'name': ['Alice', 'Bob', 'Charlie'],
'json_data': [
'{"year": [2000, 2001, 2002], "val": [10, 20, 30]}',
'{"year": [2003, 2004, 2005], "val": [50, 60, 70]}',
'{"year": [2006, 2007, 2008], "val": [80, 90, 85]}'
]
}
df = pd.DataFrame(data)
</code></pre>
<p>Expected output:</p>
<p>New columns:</p>
<p><code>Most Recent Year</code>
<code>Most Recent val</code></p>
<p>For the row with <code>id</code> 1, this would be the year <code>2002</code> and val <code>30</code>.</p>
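<p>What I have tried so far (it works on the sample, but I am not sure it is the right approach for large frames) is applying a per-row parser that finds the index of the latest year:</p>

```python
import json
import pandas as pd

data = {
    'id': [1, 2, 3],
    'name': ['Alice', 'Bob', 'Charlie'],
    'json_data': [
        '{"year": [2000, 2001, 2002], "val": [10, 20, 30]}',
        '{"year": [2003, 2004, 2005], "val": [50, 60, 70]}',
        '{"year": [2006, 2007, 2008], "val": [80, 90, 85]}'
    ]
}
df = pd.DataFrame(data)

def most_recent(json_str):
    d = json.loads(json_str)
    # Index of the latest year (the lists are not assumed to be sorted)
    i = max(range(len(d['year'])), key=lambda k: d['year'][k])
    return pd.Series({'Most Recent Year': d['year'][i], 'Most Recent val': d['val'][i]})

df[['Most Recent Year', 'Most Recent val']] = df['json_data'].apply(most_recent)
```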
| <python><json><pandas><algorithm><data-structures> | 2023-08-26 04:01:17 | 3 | 2,022 | kms |
76,981,327 | 4,348,400 | Why did the documentation example for OWSLib not work? | <p>I would like to learn <a href="https://pypi.org/project/OWSLib/" rel="nofollow noreferrer"><code>OWSLib</code></a> or an equivalent library, with the idea that I can download GIS datasets as part of my <a href="https://kedro.org/" rel="nofollow noreferrer"><code>kedro</code></a> data processing pipelines.</p>
<p>I started following this example on the PyPI readme:</p>
<pre class="lang-py prettyprint-override"><code>>>> from owslib.wms import WebMapService
>>> wms = WebMapService('http://wms.jpl.nasa.gov/wms.cgi', version='1.1.1')
>>> wms.identification.type
'OGC:WMS'
>>> wms.identification.version
'1.1.1'
>>> wms.identification.title
'JPL Global Imagery Service'
>>> wms.identification.abstract
'WMS Server maintained by JPL, worldwide satellite imagery.'
Available layers::
>>> list(wms.contents)
['us_landsat_wgs84', 'modis', 'global_mosaic_base', 'huemapped_srtm',
'srtm_mag', 'daily_terra', 'us_ned', 'us_elevation', 'global_mosaic',
'daily_terra_ndvi', 'daily_aqua_ndvi', 'daily_aqua_721', 'daily_planet',
'BMNG', 'srtmplus', 'us_colordem', None, 'daily_aqua', 'worldwind_dem',
'daily_terra_721']
Details of a layer::
>>> wms['global_mosaic'].title
'WMS Global Mosaic, pan sharpened'
>>> wms['global_mosaic'].boundingBoxWGS84
(-180.0, -60.0, 180.0, 84.0)
>>> wms['global_mosaic'].crsOptions
['EPSG:4326', 'AUTO:42003']
>>> wms['global_mosaic'].styles
{'pseudo_bright': {'title': 'Pseudo-color image (Uses IR and Visual bands,
542 mapping), gamma 1.5'}, 'pseudo': {'title': '(default) Pseudo-color
image, pan sharpened (Uses IR and Visual bands, 542 mapping), gamma 1.5'},
'visual': {'title': 'Real-color image, pan sharpened (Uses the visual
bands, 321 mapping), gamma 1.5'}, 'pseudo_low': {'title': 'Pseudo-color
image, pan sharpened (Uses IR and Visual bands, 542 mapping)'},
'visual_low': {'title': 'Real-color image, pan sharpened (Uses the visual
bands, 321 mapping)'}, 'visual_bright': {'title': 'Real-color image (Uses
the visual bands, 321 mapping), gamma 1.5'}}
Available methods, their URLs, and available formats::
>>> [op.name for op in wms.operations]
['GetTileService', 'GetCapabilities', 'GetMap']
>>> wms.getOperationByName('GetMap').methods
{'Get': {'url': 'http://wms.jpl.nasa.gov/wms.cgi?'}}
>>> wms.getOperationByName('GetMap').formatOptions
['image/jpeg', 'image/png', 'image/geotiff', 'image/tiff']
That's everything needed to make a request for imagery::
>>> img = wms.getmap( layers=['global_mosaic'],
... styles=['visual_bright'],
... srs='EPSG:4326',
... bbox=(-112, 36, -106, 41),
... size=(300, 250),
... format='image/jpeg',
... transparent=True
... )
>>> out = open('jpl_mosaic_visb.jpg', 'wb')
>>> out.write(img.read())
>>> out.close()
</code></pre>
<p>But I almost immediately ran into an error.</p>
<pre class="lang-py prettyprint-override"><code>(vewnv) [galen@orcus Downloads]$ python
Python 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from owslib.wms import WebMapService
>>> wms = WebMapService('http://wms.jpl.nasa.gov/wms.cgi', version='1.1.1')
Traceback (most recent call last):
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/urllib3/connection.py", line 203, in _new_conn
sock = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/urllib3/util/connection.py", line 60, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/socket.py", line 962, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
socket.gaierror: [Errno -2] Name or service not known
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request(
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/urllib3/connection.py", line 395, in request
self.endheaders()
File "/usr/lib/python3.11/http/client.py", line 1278, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.11/http/client.py", line 1038, in _send_output
self.send(msg)
File "/usr/lib/python3.11/http/client.py", line 976, in send
self.connect()
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/urllib3/connection.py", line 243, in connect
self.sock = self._new_conn()
^^^^^^^^^^^^^^^^
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/urllib3/connection.py", line 210, in _new_conn
raise NameResolutionError(self.host, self, e) from e
urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPConnection object at 0x7f77cd407310>: Failed to resolve 'wms.jpl.nasa.gov' ([Errno -2] Name or service not known)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='wms.jpl.nasa.gov', port=80): Max retries exceeded with url: /wms.cgi?service=WMS&request=GetCapabilities&version=1.1.1 (Caused by NameResolutionError("<urllib3.connection.HTTPConnection object at 0x7f77cd407310>: Failed to resolve 'wms.jpl.nasa.gov' ([Errno -2] Name or service not known)"))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/owslib/wms.py", line 50, in WebMapService
return wms111.WebMapService_1_1_1(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/owslib/map/wms111.py", line 75, in __init__
self._capabilities = reader.read(self.url, timeout=self.timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/owslib/map/common.py", line 65, in read
u = openURL(spliturl[0], spliturl[1], method='Get',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/owslib/util.py", line 209, in openURL
req = requests.request(method.upper(), url_base, headers=headers, **rkwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/galen/Downloads/vewnv/lib/python3.11/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='wms.jpl.nasa.gov', port=80): Max retries exceeded with url: /wms.cgi?service=WMS&request=GetCapabilities&version=1.1.1 (Caused by NameResolutionError("<urllib3.connection.HTTPConnection object at 0x7f77cd407310>: Failed to resolve 'wms.jpl.nasa.gov' ([Errno -2] Name or service not known)"))
</code></pre>
| <python><gis> | 2023-08-26 02:16:46 | 1 | 1,394 | Galen |
76,981,232 | 21,575,627 | Unpacking for list indices? | <p>I often find I have something like this:</p>
<pre><code>cur = [0, 0] # the indices into array
matrix = [[1,1,1]]
</code></pre>
<p>where I do</p>
<pre><code>matrix[cur[0]][cur[1]]
</code></pre>
<p>Is there any sort of unpacking syntax here? Like:</p>
<pre><code>matrix[*cur]
</code></pre>
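<p>Plain nested lists don't accept a tuple (or starred) index, so <code>matrix[*cur]</code> won't help there even where the syntax parses. One standard-library way to get the same effect (a sketch, not the only option) is to fold the index list with <code>functools.reduce</code> and <code>operator.getitem</code>:</p>

```python
from functools import reduce
from operator import getitem

cur = [0, 0]          # the indices into the array
matrix = [[1, 1, 1]]

# Apply one index per nesting level: getitem(getitem(matrix, 0), 0)
value = reduce(getitem, cur, matrix)
print(value)  # 1
```

<p>NumPy arrays, by contrast, do accept tuple indexing directly (<code>np.asarray(matrix)[tuple(cur)]</code>), and on Python 3.11+ the starred form is valid syntax for them.</p>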
| <python> | 2023-08-26 01:24:22 | 1 | 1,279 | user129393192 |
76,981,157 | 471,376 | passing a python logger instance as a stream | <p><em><strong>tl;dr</strong></em> how do I pass a <a href="https://docs.python.org/3/library/logging.html#logging.Logger" rel="nofollow noreferrer"><code>logging.Logger</code></a> instance as a "stream" (similar to <code>sys.stdout</code>) that logs a message for each line of text received?</p>
<p>I'm using the <code>invoke</code> module <a href="https://docs.pyinvoke.org/en/stable/api/runners.html#invoke.runners.Runner.run" rel="nofollow noreferrer"><code>Runner.run</code> method</a> to run a program <code>/usr/bin/printf</code>. The <code>run</code> method has a parameter <code>out_stream</code>. The docs for <code>Runner.run</code> read:</p>
<blockquote>
<p><code>run(command: str, **kwargs: Any) → Optional[invoke.runners.Result]</code></p>
<p>Execute command, returning an instance of Result once complete.</p>
<p><code>out_stream</code> – A file-like stream object to which the subprocess’ standard output should be written. If <code>None</code> (the default), <code>sys.stdout</code> will be used.</p>
</blockquote>
<p>The call to <code>run</code> looks like:</p>
<pre class="lang-python prettyprint-override"><code>import invoke
def print_foobar(context: invoke.context):
context.run("/usr/bin/printf foobar")
</code></pre>
<p>I want to pass a <code>Logger</code> instance as the <code>out_stream</code> like this (pseudo-code):</p>
<pre class="lang-python prettyprint-override"><code>import invoke
import logging
log = logging.getLogger()
def print_foobar(context: invoke.context):
context.run("/usr/bin/printf foobar", out_stream=log.stream)
</code></pre>
<p>I want the standard out of the child process <code>printf</code> to be logged by <code>Logger</code> instance <code>log</code>. Maybe one <code>log.info</code> message per line of text from the <code>printf</code> child process, or something like that. In other words, how do I get something like a <code>log.stream</code> shown in the prior code snippet?</p>
<p>How can I use the <code>Logger</code> instance as a "<em>file-like stream object</em>" that logs a message for each line of text received?</p>
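<p>A minimal sketch of such an adapter (names are illustrative, and it assumes <code>invoke</code> only calls <code>write</code>/<code>flush</code>): a file-like object whose <code>write</code> buffers partial lines and emits one log record per complete line:</p>

```python
import io
import logging

class LoggerWriter(io.TextIOBase):
    """File-like adapter: each complete line written becomes one log record."""

    def __init__(self, logger, level=logging.INFO):
        self.logger = logger
        self.level = level
        self._buffer = ""

    def writable(self):
        return True

    def write(self, text):
        self._buffer += text
        # Emit one record per complete line; keep any partial line buffered.
        while "\n" in self._buffer:
            line, self._buffer = self._buffer.split("\n", 1)
            self.logger.log(self.level, line)
        return len(text)

    def flush(self):
        if self._buffer:
            self.logger.log(self.level, self._buffer)
            self._buffer = ""
```

<p>The call would then look like <code>context.run("/usr/bin/printf foobar", out_stream=LoggerWriter(log))</code>.</p>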
| <python><logging><python-logging> | 2023-08-26 00:35:20 | 0 | 7,289 | JamesThomasMoon |
76,980,974 | 18,150,609 | Python, Why is my variable implicitly becoming false for nested directories? | <pre><code>import os
class TreeGenerator:
def __init__(self, line_prefix = '|-- ', last_levels_line_prefix = '`-- ', object_prefix = '| '):
self.line_prefix = line_prefix
self.last_levels_line_prefix = last_levels_line_prefix
self.object_prefix = object_prefix
self.base_spaces = ' ' * len(self.line_prefix)
def generate(self, path, indent='', print_tree=False):
if os.path.exists(path):
tree = ''
items = sorted(os.listdir(path))
for index, item in enumerate(items):
full_item_path = os.path.join(path, item)
is_last = index == len(items) - 1
line_prefix = self.last_levels_line_prefix if is_last else self.line_prefix
object_prefix = self.object_prefix if not is_last else ' ' * len(self.line_prefix)
treeline = f'{indent}{line_prefix}{item}'
tree += treeline + '\n'
if print_tree:
print(treeline)
else:
print('here')
if os.path.isdir(full_item_path):
next_indent = f'{indent}{object_prefix}'
self.generate(full_item_path, indent=next_indent)
else:
raise FileNotFoundError(f"No such file or directory found '{path}'")
tg = TreeGenerator()
tree = tg.generate('./test', print_tree=True)
</code></pre>
<p>Output:</p>
<pre><code>|-- deps.edn
|-- resources
here
|-- src
here
here
here
`-- test
here
here
here
here
here
</code></pre>
<p>But, it will work without the <code>if</code> statement:</p>
<pre><code>import os
class TreeGenerator:
def __init__(self, line_prefix = '|-- ', last_levels_line_prefix = '`-- ', object_prefix = '| '):
self.line_prefix = line_prefix
self.last_levels_line_prefix = last_levels_line_prefix
self.object_prefix = object_prefix
self.base_spaces = ' ' * len(self.line_prefix)
def generate(self, path, indent='', print_tree=False):
if os.path.exists(path):
tree = ''
items = sorted(os.listdir(path))
for index, item in enumerate(items):
full_item_path = os.path.join(path, item)
is_last = index == len(items) - 1
line_prefix = self.last_levels_line_prefix if is_last else self.line_prefix
object_prefix = self.object_prefix if not is_last else ' ' * len(self.line_prefix)
treeline = f'{indent}{line_prefix}{item}'
tree += treeline + '\n'
print(treeline)
if os.path.isdir(full_item_path):
next_indent = f'{indent}{object_prefix}'
self.generate(full_item_path, indent=next_indent)
else:
raise FileNotFoundError(f"No such file or directory found '{path}'")
tg = TreeGenerator()
tree = tg.generate('./test', print_tree=True)
</code></pre>
<p>Output:</p>
<pre><code>|-- deps.edn
|-- resources
| `-- metabase-plugin.yaml
|-- src
| `-- metabase
| `-- driver
| `-- sqlite.clj
`-- test
`-- metabase
|-- driver
| `-- sqlite_test.clj
`-- test
`-- sqlite.clj
</code></pre>
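<p>The difference between the two versions is that the recursive call <code>self.generate(full_item_path, indent=next_indent)</code> never forwards <code>print_tree</code>, so every nested level falls back to the default <code>False</code> and hits the <code>else: print('here')</code> branch. A standalone sketch of the same walk with the flag forwarded:</p>

```python
import os

def generate(path, indent='', print_tree=False):
    lines = []
    items = sorted(os.listdir(path))
    for index, item in enumerate(items):
        full_item_path = os.path.join(path, item)
        is_last = index == len(items) - 1
        line_prefix = '`-- ' if is_last else '|-- '
        object_prefix = '    ' if is_last else '|   '
        lines.append(f'{indent}{line_prefix}{item}')
        if print_tree:
            print(lines[-1])
        if os.path.isdir(full_item_path):
            # The crucial change: pass print_tree down the recursion.
            lines.extend(generate(full_item_path, indent + object_prefix,
                                  print_tree=print_tree))
    return lines
```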
| <python><python-3.x> | 2023-08-25 23:18:06 | 1 | 364 | MrChadMWood |
76,980,898 | 1,169,010 | Python Pandas : Get unique records based on multiple columns | <p>Before I start, I am not sure which terminology can I use, so I may misuse terms like "unique" and "duplicate".</p>
<p>My pandas dataset has three columns: A, B, and C. Two rows are considered the same if they share the same value in any one of the columns A, B, or C.
If we have this table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>row num</th>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>A1</td>
<td>B1</td>
<td>C1</td>
</tr>
<tr>
<td>2</td>
<td>A1</td>
<td>B2</td>
<td>C2</td>
</tr>
<tr>
<td>3</td>
<td>A2</td>
<td>B2</td>
<td>C3</td>
</tr>
<tr>
<td>4</td>
<td>A3</td>
<td>B3</td>
<td>C3</td>
</tr>
</tbody>
</table>
</div>
<p>row 1 and 2 are the same because column A is the same, row 2 and 3 because of B, and row 3 and 4 because of C. This would mean that since 1 is duplicate of 2 which is duplicate of 3 which is duplicate of 4, I expect the number of unique records here is 1.</p>
<p>How would I write python pandas code to calculate that?</p>
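<p>"Same" here is transitive, so this is really a connected-components question: treat each row as a node and link rows that share a value in any column. A union-find sketch over row positions (assuming the shared-value rule described above):</p>

```python
import pandas as pd

def count_unique(df, cols=("A", "B", "C")):
    parent = list(range(len(df)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Rows sharing a value in any column join the same component.
    for col in cols:
        for positions in df.groupby(col).indices.values():
            root = find(positions[0])
            for p in positions[1:]:
                parent[find(p)] = root

    return len({find(i) for i in range(len(df))})

df = pd.DataFrame({"A": ["A1", "A1", "A2", "A3"],
                   "B": ["B1", "B2", "B2", "B3"],
                   "C": ["C1", "C2", "C3", "C3"]})
print(count_unique(df))  # 1
```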
| <python><pandas> | 2023-08-25 22:47:46 | 2 | 349 | Sale |
76,980,784 | 7,193,418 | Pyinstaller on macos doesn't load library with --add-binary | <p>I have a dev macOS VM and the builds work fine using pyinstaller 4.0.</p>
<p>As soon as I update pyinstaller anything other than 4.0, it fails to load my custom *.dylib files which are in the application folder when building pyinstaller.</p>
<p>I installed python using brew.</p>
<p>I am using this to build:</p>
<pre><code>/usr/local/opt/python@3.8/bin/python3.8 -m PyInstaller --add-binary *.dylib:. --clean --windowed --onedir --noupx --name "$AppName" --icon=main.icns main.py
</code></pre>
<p>I have this that adds the program path to system PATH and remember this works with pyinstaller 4.0:</p>
<pre><code>dllpath = os.path.dirname(os.path.realpath(sys.argv[0]))
if dllpath not in os.environ:
os.environ["PATH"] += os.pathsep + dllpath
</code></pre>
<p>but as soon as pyinstaller is a version greater than pyinstaller 4.0, it would show <code>cannot load library...</code>.</p>
<p>I have also tried installing the latest version of python and pyinstaller, but I am having the same issue!</p>
<p>Any suggestions?</p>
| <python><python-3.x><macos><pyinstaller><dylib> | 2023-08-25 22:08:53 | 1 | 415 | Amin Persia |
76,980,757 | 1,319,998 | Can a UDP socket hang on "connect" in Python | <p>Say I have this program in Python the "connects" a UDP socket:</p>
<pre class="lang-py prettyprint-override"><code>import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.connect(('8.8.8.8', 53))
</code></pre>
<p>Can the call to connect hang, and so it's better to set a timeout? So something like:</p>
<pre class="lang-py prettyprint-override"><code>import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2)
sock.connect(('8.8.8.8', 53))
</code></pre>
<p>If so, why? I would have thought connect when using UDP wouldn't connect anything, and at worse it would immediately fail for some reason, rather than hang.</p>
<p>This is in reference to a bug reported in an asyncio Python DNS resolver I wrote: <a href="https://github.com/michalc/aiodnsresolver/pull/35" rel="nofollow noreferrer">https://github.com/michalc/aiodnsresolver/pull/35</a> where is <em>does</em> seem to hang at connect, but I'm not sure why.</p>
| <python><sockets><dns><udp><python-asyncio> | 2023-08-25 21:58:20 | 1 | 27,302 | Michal Charemza |
76,980,633 | 2,836,259 | Python, how to create a zip of a directory while excluding hidden files? | <p>I have a situation where I want to create a zip of a directory while excluding all hidden files when doing this.</p>
<p>Specifics of my use case: I have a directory that contains a small static site build, but also contains <code>.git/</code> and other hidden files that are very large. When I create a zip without ignoring the hidden files, the zip ends up being large (GBs), when the site itself is <5MB.</p>
<p>What's an efficient way to create the zip while ignoring these hidden files?</p>
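<p>One straightforward approach (a sketch using only the standard library): walk the tree with <code>os.walk</code>, prune hidden directories in place so their contents are never visited at all (which also keeps the walk fast for a huge <code>.git/</code>), and skip hidden files:</p>

```python
import os
import zipfile

def zip_visible(src_dir, zip_path):
    """Zip src_dir, pruning hidden directories and skipping hidden files."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, dirs, files in os.walk(src_dir):
            # In-place edit prunes the walk itself: .git is never descended into.
            dirs[:] = [d for d in dirs if not d.startswith(".")]
            for name in files:
                if name.startswith("."):
                    continue
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, src_dir))
```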
| <python><zip> | 2023-08-25 21:29:41 | 1 | 7,375 | conner.xyz |
76,980,563 | 5,032,387 | Codegen causal ML inference too short compared to results in model card API | <p>I'm experimenting with the Salesforce/codegen-350M-mono model.</p>
<p>When I generate text using a prompt, the result on my machine is much shorter than what I get inputting the same prompt in the API window of the model card. On my machine it just adds 'return'</p>
<pre><code>import datasets
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "Salesforce/codegen-350M-mono"
device = "cpu"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True).to(device)
text = """def round_to_multiple(num, mult):
\"""Rounds to nearest multiple of another number.\"""
"""
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
</code></pre>
<p>Here is the output</p>
<pre><code>The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
def round_to_multiple(num, mult):
"""Rounds to nearest multiple of another number."""
return
</code></pre>
<p>When I enter the same prompt in the <a href="https://huggingface.co/Salesforce/codegen-350M-mono?text=def%20round_to_multiple%28num%2C%20mult%29%3A%0A%20%20%20%20%5C%22%22%22Rounds%20to%20nearest%20multiple%20of%20another%20number.%5C%22%22%22" rel="nofollow noreferrer">model card</a> code-generation window, I get the following:</p>
<pre><code>def round_to_multiple(num, mult):
\"""Rounds to nearest multiple of another number.\"""
if (num % mult == 0):
return int(num)
else:
return
</code></pre>
<p>How would I get the result to be the same (or similar but longer) as on the web API?</p>
| <python><nlp><huggingface> | 2023-08-25 21:10:59 | 1 | 3,080 | matsuo_basho |
76,980,417 | 5,131,394 | What causes "Request body doesn't fulfill schema" error from GoDaddy API? | <p>The secrets are printing fine, but I just can't get the format right.</p>
<pre><code>def search_domains_on_godaddy(domains):
GODADDY_KEY = os.getenv('GODADDY_KEY')
GODADDY_SECRET = os.getenv('GODADDY_SECRET')
print("GODADDY_KEY", GODADDY_KEY)
print("GODADDY_SECRET", GODADDY_SECRET)
headers = {
'Authorization': f'sso-key {GODADDY_KEY}:{GODADDY_SECRET}',
'Content-Type': 'application/json',
}
# Prepare the payload
payload = {"domains": domains, "checkType": "FAST"}
print("payload", payload)
# Send POST request to check domain availability
response = requests.post('https://api.ote-godaddy.com/v1/domains/available', json=payload, headers=headers)
print("Response Status Code: ", response.status_code)
print("Response Headers: ", response.headers)
print("Response from godaddy: ", response.json())
if 'domains' in response.json():
available_domains = [item['domain'] for item in response.json()['domains'] if item['available']]
else:
available_domains = []
return available_domains
</code></pre>
<p>The response I'm getting:</p>
<pre class="lang-none prettyprint-override"><code>payload {'domains': ['BootBazaar.com', 'StridePride.com', 'StompShop.com', 'FootlooseBoots.com', 'KickKiosk.com', 'TrekTrend.com', 'HeelHaven.com'], 'checkType': 'FAST'}
Response Status Code: 400
Response Headers: {'Content-Type': 'application/json', 'Content-Length': '72', 'Vary': 'origin', 'X-Request-Id': 'rFD9xX3Yj5jjaJqi6Ae13v', 'X-DataCenter': 'US_WEST_2', 'Expires': 'Fri, 25 Aug 2023 20:35:23 GMT', 'Cache-Control': 'max-age=0, no-cache, no-store', 'Pragma': 'no-cache', 'Date': 'Fri, 25 Aug 2023 20:35:23 GMT', 'Connection': 'close', 'Set-Cookie': '_abck=FA584D58EDE2768489DAFF6752811C91~-1~YAAQjDlAF2Fy5fWJAQAAcqFoLgooMKkDdcO6yCgjGHHbyZdJmaZxA39/vnI80ubzap3i/K0ZirPYGhUA7IKUjuGsJA0z9aWirL/73dcnqGObrupNJg+91cF8eaVbdDh9aDzGQ0/dxvoQxjjAr14u9Hkz6brOqwaGfhgDdXctg2sRFY3P/UJSAWdPlb6OT28fRnxP8HiNLBH4zIdXTG6XqHZOgrbK7h9QBBAQm2S2WcxfkmHCngBlDFwNuSsxpvRzvKcK37pMidX8FPtAqeSlcVHxKtEc3l4DM97yhQbR0JKN/2YZHTuGHCR977+I7/SvZVoqegS1GVaS7fvYyq8mtBIyj6HxBTcF9VGKN+rlA2qzvLBmUcol0DQU3dP25g==~-1~-1~-1; Domain=.ote-godaddy.com; Path=/; Expires=Sat, 24 Aug 2024 20:35:23 GMT; Max-Age=31536000; Secure, bm_sz=29A4DF4ABD11D16BBD4C90AA366EFCC6~YAAQjDlAF2Jy5fWJAQAAc6FoLhTcFq6oxYSZJ+GGlZSJz4Q+L4Obdq7Kpj0IKEICad4hkESCPKZDE6/rP992rhFqje5S1IS4XYV9aKsE9Qdgq5kUbktESw1LAUlya6BlFILlx1MJ8GUxuFWQd42YosDGI+vYfHU5oV0hso2KSYEMhQuojpkeGtklgEDtywCrVswDBJUz/6KNsnyZKq2PAfnhCWQlDmcdV+yGhoGEO0+M4BeHi7ptsMXpEJXP+Z6VFncdCsZaf3xgKmknJG6IKiPaf3li+2mgYqIZw496J+PnIHtMDA/T9g==~3354928~3686708; Domain=.ote-godaddy.com; Path=/; Expires=Sat, 26 Aug 2023 00:35:22 GMT; Max-Age=14399'}
**Response from godaddy: {'code': 'INVALID_BODY', 'message': "Request body doesn't fulfill schema"}**
</code></pre>
<p>The API docs I've been referencing: <a href="https://developer.godaddy.com/doc/endpoint/domains#/v1/availableBulk" rel="nofollow noreferrer">https://developer.godaddy.com/doc/endpoint/domains#/v1/availableBulk</a></p>
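<p>Reading the linked docs, the bulk availability endpoint appears to expect the request body to be a bare JSON array of domain names, with <code>checkType</code> passed as a query parameter rather than in the body — if that reading is right, wrapping everything in an object is what trips the schema check. A sketch of the difference (the actual <code>requests</code> call is shown as a comment):</p>

```python
import json

domains = ["BootBazaar.com", "StridePride.com"]

# What the code above sends -- an object; the schema rejects this:
wrong_body = json.dumps({"domains": domains, "checkType": "FAST"})

# What the bulk endpoint appears to expect -- a bare array,
# with checkType moved into the query string:
right_body = json.dumps(domains)

# requests.post('https://api.ote-godaddy.com/v1/domains/available',
#               params={'checkType': 'FAST'}, json=domains, headers=headers)
```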
| <python><json><godaddy-api> | 2023-08-25 20:40:49 | 1 | 435 | Norbert |
76,980,329 | 649,920 | Port XGBoost model with m2cgen: presence of nan | <p>I got into the same situation as the OP of <a href="https://stackoverflow.com/questions/58143075/port-xgboost-model-trained-in-python-to-another-system-written-in-c-c">this post</a>. I would definitely prefer just to see the doc on how to extract the data from the xgb model and how exactly to code up its forward propagation, but m2cgen sounded like a good alternative. I used the following code</p>
<pre><code>import xgboost as xgb
import seaborn as sns
import m2cgen as m2c
df = sns.load_dataset("diamonds")
X = df.drop(['cut', 'color', 'clarity', 'price'], axis = 1)
y = df.price
n = X.shape[0]
n_split = int(n*0.75)
model = xgb.XGBRegressor(objective ='reg:squarederror',
max_depth = 2,
n_estimators = 1,
eval_metric="rmse")
model.fit(X, y)
with open('./diamonds_model.c','w') as f:
code = m2c.export_to_c(model)
f.write(code)
</code></pre>
<p>and as a result I see</p>
<pre><code>double score(double * input) {
double var0;
if (input[0] >= 0.995) {
if (input[4] >= 7.1949997) {
var0 = 3696.243;
} else {
var0 = 1841.0602;
}
} else {
if (input[4] >= 5.535) {
var0 = 922.34973;
} else {
var0 = 317.401;
}
}
return nan + var0;
}
</code></pre>
<p>So I wonder what am I doing wrong and where does this nan come from. I'm on python 3.8.5 and xgb prints version 1.7.3</p>
| <python><machine-learning><xgboost> | 2023-08-25 20:22:44 | 1 | 357 | SBF |
76,980,237 | 5,228,890 | KeyError: 'mnli' error when using roberta model from meta hub | <p>I tried to use the <code>roberta</code> model from the torch hub, as:</p>
<pre><code>import torch
roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
tokens = roberta.encode('Roberta is a heavily optimized version of BERT.', 'Roberta is not very optimized.')
roberta.predict('mnli', tokens).argmax() # 0: contradiction
</code></pre>
<p>but I am getting the following torch error:</p>
<pre class="lang-none prettyprint-override"><code>│ 458 │
│ 459 │ @_copy_to_script_wrapper
│ 460 │ def __getitem__(self, key: str) -> Module:
│ ❱ 461 │ │ return self._modules[key]
│ 462 │
│ 463 │ def __setitem__(self, key: str, module: Module) -> None:
│ 464 │ │ self.add_module(key, module)
╰──────────────────────────────────────────────────────
KeyError: 'mnli'
</code></pre>
<p>How can I solve this?</p>
| <python><pytorch><fairseq> | 2023-08-25 20:05:15 | 0 | 1,464 | Afshin Oroojlooy |
76,980,131 | 10,308,255 | How to filter pandas dataframe so that the first and last rows within a group are retained? | <p>I have a <code>dataframe</code> like below:</p>
<pre><code>data = [
[123456, "2017", 150.235],
[123456, "2017", 160],
[123456, "2017", 135],
[123456, "2017", 135],
[123456, "2017", 135],
[123456, "2018", 202.5],
[123456, "2019", 168.526],
[123456, "2020", 175.559],
[123456, "2020", 176],
[123456, "2021", 206.667],
[789101, "2017", 228.9],
[789101, "2018", 208],
[789101, "2018", 208],
[789101, "2018", 208],
]
df = pd.DataFrame(
data,
columns=[
"ID",
"year",
"value",
],
)
df
</code></pre>
<p>In this <code>dataframe</code> I have an <code>ID</code> column and 2+ <code>years</code>. The <code>year</code> columns can contain 1 or more <code>value</code> columns.</p>
<p>I would like to filter this <code>dataframe</code> so that all of the <strong>earliest</strong> <code>year</code> rows (even if there are duplicate <code>values</code>) and all of the <strong>latest</strong> <code>year</code> rows (again, even if there are duplicate <code>values</code> I want them).</p>
<p>My desired output is:</p>
<p><a href="https://i.sstatic.net/NMtXD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NMtXD.png" alt="enter image description here" /></a></p>
<p>I found another <a href="https://stackoverflow.com/questions/53927414/get-only-the-first-and-last-rows-of-each-group-with-pandas">SO</a> question that was similar:</p>
<pre><code>g = df.groupby("ID")
(pd.concat([g.head(1), g.tail(1)])
.drop_duplicates()
.sort_values('ID')
.reset_index(drop=True))
</code></pre>
<p>but it only first to the first <code>value</code> within the first <code>year</code> and I want all of the <code>values</code>.</p>
<p>Can anyone please advise?!</p>
<p>Thank you !!</p>
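<p>One way that keeps the duplicate rows: compute, per <code>ID</code>, a boolean mask marking rows whose <code>year</code> equals that group's minimum or maximum, then filter with it (a sketch on a subset of the data above):</p>

```python
import pandas as pd

df = pd.DataFrame(
    [[123456, "2017", 150.235], [123456, "2017", 160.0],
     [123456, "2018", 202.5], [123456, "2021", 206.667],
     [789101, "2017", 228.9], [789101, "2018", 208.0], [789101, "2018", 208.0]],
    columns=["ID", "year", "value"],
)

# True for every row in an ID's earliest or latest year (duplicates kept).
keep = df.groupby("ID")["year"].transform(lambda s: (s == s.min()) | (s == s.max()))
result = df[keep].reset_index(drop=True)
print(result)
```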
| <python><pandas><dataframe><group-by> | 2023-08-25 19:46:24 | 6 | 781 | user |
76,980,079 | 14,183,155 | Proper way of running long sync task in python asyncio | <p>I have a setup where I pass messages asynchronously, but I also have some heavy, long-running tasks (up to minutes). I can't wrap my head around how I should do that with asyncio. For example, what I want to achieve:</p>
<ul>
<li>long running async function</li>
<li>receive a message</li>
<li>do heavy computation</li>
<li>send the result</li>
</ul>
<p>Currently, I am doing the heavy computation in the asyncio loop, but this blocks the communication.</p>
<p>How can I start a heavy sync computation? Some code:</p>
<pre class="lang-py prettyprint-override"><code>def heavy(i: num) -> num:
result = ...
return result
async def main():
while True:
req = await getInput()
result = ??? heavy(req.num)
await sendResult()
if __name__ == '__main__':
asyncio.run(main())
</code></pre>
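<p>The usual answer is to hand the blocking call to a worker via <code>asyncio.to_thread</code> (Python 3.9+) or <code>loop.run_in_executor</code>, so the event loop keeps handling messages while the computation runs. A minimal sketch (the <code>heavy</code> body is a stand-in):</p>

```python
import asyncio

def heavy(n: int) -> int:
    # Stand-in for the minutes-long synchronous computation.
    return sum(range(n))

async def main() -> None:
    # Runs heavy() in a worker thread; the loop stays responsive meanwhile.
    result = await asyncio.to_thread(heavy, 10)
    print(result)  # 45

asyncio.run(main())
```

<p>If <code>heavy</code> is CPU-bound rather than I/O-bound, a thread won't buy real parallelism under the GIL; <code>loop.run_in_executor(ProcessPoolExecutor(), heavy, n)</code> moves it to a separate process instead.</p>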
| <python><asynchronous><python-asyncio> | 2023-08-25 19:33:09 | 1 | 2,340 | Vivere |
76,980,074 | 4,231,821 | How to transform the dataframe in python | <p>I have a requirement where I need to transform the data coming from my API in the following way.</p>
<p>I have 7 fixed columns:</p>
<pre><code>EntityID , EntityName , ForYear , Labels , FiscalPeriodValue , ForDate , (ValueColumn)
</code></pre>
<p>My last column will always be the one that holds the value.</p>
<p>ForDate will be the same for each entity, and ForYear will be the same, but the labels can differ.</p>
<p>The API may return data for 2 entities, or for more than that (I don't know in advance).</p>
<p>For example, my API may return the data below:</p>
<pre><code>data = [
{
'EntityID': 3,
'EntityName': 'Trading Value',
'ForDate': '2023-07-13',
'Labels': '2023-07-13',
'FiscalPeriodValue': 'Daily',
'ForYear': 2023,
'DataValue': 7.7
},
{
'EntityID': 4,
'EntityName': 'Quarterly average',
'ForDate': '2023-07-13',
'Labels': '2023-Q3',
'FiscalPeriodValue': 'Daily',
'ForYear': 2023,
'DataValue': 7.05
},
{
'EntityID': 5,
'EntityName': 'Yearly average',
'ForDate': '2023-07-13',
'Labels': '2023',
'FiscalPeriodValue': 'Daily',
'ForYear': 2023,
'DataValue': 5.21
},
# ... (other data entries)
]
</code></pre>
<p>Now we can see that it has 3 unique entities, all of which share the same dates but have different labels.</p>
<p>What I want is to transform this data in this way</p>
<pre><code>data = [
{
'D': '2023-07-13',
'Y': 2023,
'L1': '2023-07-13',
'L2': '2023-Q3',
'L3': '2023',
'Trading Value': 7.7,
'Quarterly average': 7.05,
'Yearly average': 5.21
}
]
</code></pre>
<p>As we can see, the Trading Value label is a date, so L1 takes the label value of the Trading Value entity, and so on.</p>
<p>The order of L1, L2, L3 should follow the order of the entity names.</p>
<p>I have tried to build my desired dataset, but without L1, L2, L3:</p>
<pre><code>grouped_data = df.groupby('ForDate').apply(lambda x: dict(zip(x['EntityName'], x[ValueColumn]))).reset_index()
grouped_data.columns = ['ForDate', 'Data']
new_dataset = grouped_data.apply(lambda row: {
**{
'D': row['ForDate'],
'Y': df[df['ForDate'] == row['ForDate']]['ForYear'].iloc[0],
},
**row['Data']
}, axis=1).tolist()
</code></pre>
<p>The above code is what I have tried so far.</p>
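<p>A plain-Python sketch that also numbers the label columns: group records by <code>ForDate</code> and keep a per-date counter, so <code>L1</code>, <code>L2</code>, <code>L3</code> follow the order in which entities appear in the payload (this assumes that order matches the desired one):</p>

```python
def transform(records):
    """Group rows by ForDate; L1, L2, ... follow entity order in the payload."""
    rows, counters = {}, {}
    for rec in records:
        key = rec["ForDate"]
        row = rows.setdefault(key, {"D": rec["ForDate"], "Y": rec["ForYear"]})
        counters[key] = counters.get(key, 0) + 1
        row[f"L{counters[key]}"] = rec["Labels"]
        row[rec["EntityName"]] = rec["DataValue"]
    return list(rows.values())

# transform(data) on the payload above yields one dict per ForDate.
```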
| <python><python-3.x><dataframe> | 2023-08-25 19:32:11 | 3 | 527 | Faizan Naeem |
76,980,047 | 2,095,676 | How to resolve models in CUD operations not on PK but on UUID? | <p>The default CUD operation in <code>srawberry_django</code> identifies the model to be mutated by the model's id (the database's id column, which is the Primary Key). How can I change this behavior to point to a column's/model's UUID without overriding <code>strawberry_django.mutations.update</code> method?</p>
<p>In simple terms, how do I do a search on a field that is not the id/PK in <a href="https://github.com/strawberry-graphql/strawberry-graphql-django" rel="nofollow noreferrer">strawberry-graphql-django</a> without amending the <code>update</code> method?</p>
| <python><django><strawberry-graphql> | 2023-08-25 19:26:48 | 2 | 13,970 | Lukasz Dynowski |
76,979,995 | 12,213,872 | PyTorch MSE Loss differs from direct calculation by factor of 2 | <p>Why does the result of <code>torch.nn.functional.mse_loss(x1,x2)</code> result differ from the direct computation of the MSE?</p>
<p>My test code to reproduce:</p>
<pre><code>import torch
import numpy as np
# Think of x1 as predicted 2D coordinates and x2 of ground truth
x1 = torch.rand(10,2)
x2 = torch.rand(10,2)
mse_torch = torch.nn.functional.mse_loss(x1,x2)
print(mse_torch) # 0.1557
mse_direct = torch.nn.functional.pairwise_distance(x1,x2).square().mean()
print(mse_direct) # 0.3314
mse_manual = 0
for i in range(len(x1)) :
mse_manual += np.square(np.linalg.norm(x1[i]-x2[i])) / len(x1)
print(mse_manual) # 0.3314
</code></pre>
<p>As we can see, the result from torch's <code>mse_loss</code> is <code>0.1557</code>, differing from the manual MSE computation which yields <code>0.3314</code>.</p>
<p><strong>In fact, the direct result is precisely the <code>mse_loss</code> result times the dimension of the points (here 2).</strong></p>
<p>What's up with that?</p>
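<p>The factor is the point dimension <em>D</em>: <code>mse_loss</code> averages the squared differences over all <em>N·D</em> elements, while squaring the pairwise distances first sums over the <em>D</em> coordinates and only then averages over the <em>N</em> points. A NumPy sketch of the two reductions:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.random((10, 2)), rng.random((10, 2))

sq = (x1 - x2) ** 2
loss_style = sq.mean()                  # what mse_loss computes: sum / (N * D)
distance_style = sq.sum(axis=1).mean()  # mean squared distance: sum / N

assert np.isclose(distance_style, loss_style * x1.shape[1])
```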
| <python><pytorch><loss-function><mse> | 2023-08-25 19:15:01 | 1 | 329 | csstudent1418 |
76,979,954 | 2,590,824 | Django pymongo update_many ''Cannot use the part (y) of (x.y) to traverse the element ({x: []})'}' | <p>I have a collection like this with sample documents as:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"_id": {
"$oid": "64b28bafb2ea43dd940b920d"
},
"title": "Village Libraries",
"keywords": {
"bn": [
{
"$ref": "mcoll_keywords",
"$id": {
"$oid": "64b28badb2ea43dd940b920b"
}
}
],
"en": [
{
"$ref": "mcoll_keywords",
"$id": {
"$oid": "64b28aacb2ea43dd940b920a"
}
}
]
}
},
{
"_id": {
"$oid": "64b676b3b2ea43dd940b9230"
},
"title": "Folk Tales",
"keywords": {
"bn": [
{
"$ref": "mcoll_keywords",
"$id": {
"$oid": "64b67683b2ea43dd940b922d"
}
}
],
"en": [
{
"$ref": "mcoll_keywords",
"$id": {
"$oid": "64b676afb2ea43dd940b922e"
}
}
]
}
}
]
</code></pre>
<p>I would like to run a multi update query on this collection (using Python (Django and pymongo)) like:</p>
<pre><code>db.collection.update({},
{
"$pull": {
"keywords.bn": {
"$id": ObjectId("64b67683b2ea43dd940b922d")
}
}
},
{
"multi": true
})
</code></pre>
<p>But running the update query:</p>
<pre class="lang-python prettyprint-override"><code>...
from pymongo import MongoClient
...
print('q= '+str({'$pull': {'keywords.bn': {'$id': meta_bid}}}))
# OUTPUT: q= {'$pull': {'keywords.bn': {'$id': ObjectId('64b67683b2ea43dd940b922d')}}}
y= ent_col.update_many({}, {'$pull': {'keywords.bn': {'$id' : meta_bid}}})
</code></pre>
<p>results in the following error:</p>
<blockquote>
<p>full error: {'index': 0, 'code': 28, 'errmsg': 'Cannot use the part
(bn) of (keywords.bn) to traverse the element ({keywords: []})'}</p>
</blockquote>
<p>I have tested the query in Mongo Playground (<a href="https://mongoplayground.net/p/g5B7T9TtPY6" rel="nofollow noreferrer">link</a>) and it works fine there. So what am I doing wrong?</p>
<p>Thank you in advance for reading this so far and also thanks for giving a thought.</p>
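<p>The error message hints that at least one document stores <code>keywords</code> as an empty array (<code>{keywords: []}</code>) instead of an object, and <code>$pull</code> cannot traverse <code>keywords.bn</code> through an array element. Restricting the filter to documents where that path actually exists should sidestep them (a sketch; <code>ent_col</code> and <code>meta_bid</code> are the names from the question):</p>

```python
def build_pull(meta_bid):
    # Only touch documents that actually have a keywords.bn array.
    filt = {"keywords.bn": {"$exists": True}}
    update = {"$pull": {"keywords.bn": {"$id": meta_bid}}}
    return filt, update

# y = ent_col.update_many(*build_pull(meta_bid))
```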
| <python><django><mongodb><pymongo> | 2023-08-25 19:08:24 | 1 | 7,999 | sariDon |
76,979,503 | 8,921,867 | WKT moving polygon to center | <p>I have a WKT object and want to move its center to the origin (0,0).
Here is an example and what I tried:</p>
<pre><code>from shapely import wkt
poly_str = 'POLYGON ((14.217343909259455 -2.9030822376560224, 16.003619392313993 -2.639545672126154, 16.363681477720576 -5.080080154489572, 14.577405994666037 -5.34361672001944, 14.217343909259455 -2.9030822376560224))'
geom = wkt.loads(poly_str)
normalized = geom.normalize() # this does nothing
normalized == geom # TRUE
centroid = geom.centroid
moved_geom = geom - centroid # this seems logical, but does not achieve what I want
print(moved_geom)
>>>> 'POLYGON ((16.003619392313993 -2.639545672126154, 16.363681477720576 -5.080080154489572, 14.577405994666037 -5.34361672001944, 14.217343909259455 -2.9030822376560224, 16.003619392313993 -2.639545672126154))'
</code></pre>
<p>Why is that last polygon not moved by the amount of the centroid and how would I obtain a shifted polygon from my original whose centroid would be at (0,0)?</p>
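<p>For reference, <code>geom - centroid</code> is a geometric <em>set difference</em> (subtracting a zero-area point leaves the polygon where it was), not a translation, and <code>normalize()</code> only canonicalizes coordinate order. The translation itself lives in <code>shapely.affinity</code> (a sketch):</p>

```python
from shapely import wkt
from shapely.affinity import translate

poly_str = ('POLYGON ((14.217343909259455 -2.9030822376560224, '
            '16.003619392313993 -2.639545672126154, '
            '16.363681477720576 -5.080080154489572, '
            '14.577405994666037 -5.34361672001944, '
            '14.217343909259455 -2.9030822376560224))')
geom = wkt.loads(poly_str)
c = geom.centroid
# Shift every vertex by minus the centroid; the centroid lands at (0, 0).
moved_geom = translate(geom, xoff=-c.x, yoff=-c.y)
print(moved_geom.centroid)
```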
| <python><shapely><wkt> | 2023-08-25 17:44:54 | 1 | 2,172 | emilaz |
76,979,488 | 378,622 | Solving a simple matrix differential equation | <p>I want to use Sympy to solve the differential matrix equation: <code>du/dt = [[0,1],[1,0]]u</code> with initial value <code>u(0) = [[4],[2]]</code>.</p>
<p>The answer is</p>
<p><a href="https://i.sstatic.net/HQaI0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HQaI0.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/QqxAW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QqxAW.png" alt="enter image description here" /></a></p>
<p>So the complete final answer is: <code>3e^t[[1],[1]] + e^{-t}[[1],[-1]]</code></p>
<p>How could I solve this with SymPy?</p>
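<p>With SymPy the matrix exponential does the work directly, via <code>u(t) = e^{At} u(0)</code> (a sketch; <code>dsolve</code> on the component ODEs would also work):</p>

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[0, 1], [1, 0]])
u0 = sp.Matrix([4, 2])

# u(t) = exp(A*t) * u(0)
u = sp.simplify((A * t).exp() * u0)
print(u)
```

<p>Expanding the result reproduces <code>3e^t[[1],[1]] + e^{-t}[[1],[-1]]</code>, matching the hand calculation.</p>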
| <python><matrix><sympy><linear-algebra><differential-equations> | 2023-08-25 17:41:46 | 3 | 26,851 | Ben G |
76,979,365 | 9,588,300 | pandas dataframe copy not working, modifying the copied dataframe affects the original dataframe | <p>I have seen questions about the copy() method not working for nested data columns, since modifying something on the copy also altered the original dataframe. However, all I could find was about renaming a nested field of the dataframe on <a href="https://stackoverflow.com/questions/50562606/renaming-pandas-dataframe-column-on-copy-affects-the-original-dataframe">this question</a>.</p>
<p>Nonetheless, I am not renaming anything; I am altering a field of the nested column. So I just wanted to confirm whether that also alters the original dataframe even though a copy was made. If that is the case, how can I make a copied dataframe that doesn't affect the original for nested columns?</p>
<p>For example in this code, I have a dataframe with a column of dictionaries. Each dictionary just has one field that is an array, it was expected it was all integers but some floats slipped in, so I want to convert them all to integers without altering the original dataframe.</p>
<p>However, if I apply a user-defined function to a copied dataframe, it affects the original as well:</p>
<pre><code>df=pd.DataFrame({'a':[{'field':[1,2,3.0]},{'field':[1,2,4.0]},{'field':[1,2,5.0]}]})
print('printing the original dataframe: \n', df['a'])
def integer_converter(x):
x['a']['field']=[int(i) for i in x['a']['field']]
df2=df.copy(deep=True)
df2.apply(integer_converter,axis=1)
print('printing df2 after function: \n',df2['a'])
print('printing the original dataframe again: \n',df['a'])
</code></pre>
<p>The outputs were:</p>
<pre><code>printing the original dataframe:
0 {'field': [1, 2, 3.0]}
1 {'field': [1, 2, 4.0]}
2 {'field': [1, 2, 5.0]}
Name: a, dtype: object
printing df2 after function:
0 {'field': [1, 2, 3]}
1 {'field': [1, 2, 4]}
2 {'field': [1, 2, 5]}
Name: a, dtype: object
printing the original dataframe again:
0 {'field': [1, 2, 3]}
1 {'field': [1, 2, 4]}
2 {'field': [1, 2, 5]}
Name: a, dtype: object
</code></pre>
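<p>For what it's worth, <code>DataFrame.copy(deep=True)</code> copies the column of object <em>references</em> but, by documented design, does not recurse into the Python dicts/lists those references point to — and as far as I can tell pandas' <code>__deepcopy__</code> just delegates to <code>copy(deep=True)</code>, so <code>copy.deepcopy(df)</code> doesn't help either. Deep-copying each element explicitly does (a sketch):</p>

```python
import copy

import pandas as pd

df = pd.DataFrame({'a': [{'field': [1, 2, 3.0]}, {'field': [1, 2, 4.0]}]})

# Replace the column with element-wise deep copies of the nested objects.
df2 = df.copy()
df2['a'] = df2['a'].map(copy.deepcopy)

df2.loc[0, 'a']['field'] = [int(i) for i in df2.loc[0, 'a']['field']]
print(df.loc[0, 'a'])  # {'field': [1, 2, 3.0]} -- original untouched
```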
| <python><pandas><dataframe><numpy> | 2023-08-25 17:21:54 | 1 | 462 | Eugenio.Gastelum96 |
76,979,129 | 5,905,678 | aws websockets with cdk | <p>I would like to use the AWS WebSockets API with the Python CDK.
I can see from the documentation <a href="https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk.aws_apigatewayv2_alpha/README.html" rel="nofollow noreferrer">here</a> that this is possible, though experimental. I want to use these constructs, but I cannot import them, and my IDE does not recognize them either.
I explicitly installed the experimental module after researching it (<a href="https://pypi.org/project/aws-cdk.aws-apigatewayv2-alpha/#websocket-api" rel="nofollow noreferrer">see</a>).
However, I still cannot import it or anything related to this alpha module.
Has anyone else experienced this?</p>
<pre><code>from aws_cdk.aws_apigatewayv2_authorizers_alpha import WebSocketLambdaAuthorizer
</code></pre>
| <python><amazon-web-services><websocket><aws-api-gateway><aws-cdk> | 2023-08-25 16:47:23 | 1 | 1,518 | Khan |
76,979,116 | 4,721,937 | Change global state before the fixture | <p>I'm writing a set of tests for methods of the object whose initialization may be altered by environment variables. I started by putting object creation into a fixture and making a bunch of tests</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture()
def my_object():
# Initialise and return my object
return new_object
def test_method_1(my_object): ...
def test_method_2(my_object): ... # etc.
</code></pre>
<p>One of the tests needs to check the object's behaviour with an environment variable set, so I naively wrote:</p>
<pre class="lang-py prettyprint-override"><code>def test_method_with_env(my_object):
os.environ['VARIABLE'] = 'SPECIAL_VALUE'
assert my_object.method_under_test()
</code></pre>
<p>which obviously fails because the variable must be set during object initialization, not after it.</p>
<p>I also tried</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture
def patch_env(request):
    marker = request.node.get_closest_marker('env')
    if marker and marker.kwargs:
        os.environ.update(marker.kwargs)


@pytest.mark.env(VARIABLE='SPECIAL_VALUE')
def test_method_with_env(patch_env, my_object):
    assert my_object.method_under_test()
</code></pre>
<p>but that also fails, because there is no guarantee that <code>patch_env</code> is applied before <code>my_object</code> fixture.</p>
<p>In the end, I could make <code>my_object</code> depend on <code>patch_env</code>, but I only need to modify the env variable for this one test, not all of them.
Is there any pytest-recommended way to go about this, or am I misusing fixtures here?</p>
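<p>For reference, the dependency version I'd like to avoid looks roughly like this (using <code>monkeypatch</code> so the variables are restored after each test; <code>env_from_marker</code> is a helper I factored out):</p>

```python
import pytest


def env_from_marker(marker):
    """Extract the env mapping from an optional @pytest.mark.env(...) marker."""
    return dict(marker.kwargs) if marker and marker.kwargs else {}


@pytest.fixture
def patch_env(request, monkeypatch):
    for key, value in env_from_marker(request.node.get_closest_marker('env')).items():
        monkeypatch.setenv(key, value)


@pytest.fixture
def my_object(patch_env):
    # Depending on patch_env forces it to run first -- but now every
    # test that uses my_object pays for the marker lookup.
    return object()  # stand-in for the real initialisation
```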
| <python><pytest><pytest-fixtures> | 2023-08-25 16:45:47 | 1 | 2,965 | warownia1 |
76,979,034 | 5,274,291 | Cognito: Using admin_user_global_sign_out in pre_authentication trigger sometimes fails due to apparent async behavior | <p>I'm trying to sign out all old Cognito sessions for a specific set of user credentials (i.e. revoke their refresh tokens) whenever a new session is started with the same credentials (in this case e-mail and password). This way the old sessions won't be able to renew their ID and access tokens with a revoked refresh token. The goal is to prevent two users from being signed in at the same time with the same credentials.</p>
<blockquote>
<p>Note: I know ID and Access tokens are still usable until they expire with the old sessions</p>
</blockquote>
<p>The best way I found to do that is via the <code>pre-authentication</code> lambda triggered by Cognito. My tests so far basically run the following piece of code:</p>
<pre class="lang-py prettyprint-override"><code>from time import sleep

import boto3

client = boto3.client('cognito-idp')


def lambda_handler(event, context):
    user_pool_id = event['userPoolId']
    email = event['request']['userAttributes']['email']

    # Signs out all current active sessions so only this session
    # being created remains active
    client.admin_user_global_sign_out(
        Username=email,
        UserPoolId=user_pool_id)
    sleep(1)
    return event
</code></pre>
<p>In my first tests I didn't have the <code>sleep(1)</code> as part of the source code, and I noticed that authorization failed from time to time. My first guess was that <code>admin_user_global_sign_out</code> might be an async operation: between signing out all old sessions and finishing the creation of the new session, the <code>admin_user_global_sign_out</code> could be delayed and executed after the new session was created, causing all sessions (including the new one) to be revoked.</p>
<p>After adding this one-second sleep, the number of failed new sign-ins dropped significantly (I have run this flow ten or so times and haven't seen any errors since). Maybe one second is enough to complete the sign-out of all currently active sessions before the creation of the new one proceeds.</p>
<p>However, as we know, things can go wrong in an async distributed system when some part of it becomes slower than the others, and I'm afraid that in the event of a sign-out delay longer than 1 second I could face the same problems again in the future.</p>
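<p>The only client-side signal I know of for checking whether the revocation has actually taken effect is that <code>initiate_auth</code> with the old refresh token starts failing with <code>NotAuthorizedException</code>. A sketch of that check (run outside the lambda; the client is a boto3 <code>cognito-idp</code> client and the ids/token are placeholders):</p>

```python
def refresh_token_still_valid(client, client_id, refresh_token):
    """Try a REFRESH_TOKEN_AUTH flow; returns False once revocation propagated.

    `client` is a boto3 cognito-idp client, e.g. boto3.client('cognito-idp').
    """
    try:
        client.initiate_auth(
            ClientId=client_id,
            AuthFlow='REFRESH_TOKEN_AUTH',
            AuthParameters={'REFRESH_TOKEN': refresh_token},
        )
        return True
    except client.exceptions.NotAuthorizedException:
        return False
```

<p>Polling this in a loop after calling <code>admin_user_global_sign_out</code> would at least tell me how long propagation really takes, instead of guessing with a fixed sleep.</p>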
<p>So my questions are:</p>
<ul>
<li>Is <code>admin_user_global_sign_out</code> an async operation, and are my assumptions right?</li>
<li>Is there a way to make this global sign-out a synchronous operation? Maybe by waiting on some state change via Cognito APIs?</li>
</ul>
| <python><boto3><amazon-cognito> | 2023-08-25 16:32:56 | 1 | 1,578 | João Pedro Schmitt |