QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
78,121,685
| 12,886,610
|
Airflow branching: A task that only sometimes depends on an upstream task
|
<p>I have two tasks: <code>task_a</code> and <code>task_b</code>. There are DAG parameters <code>run_task_a</code> and <code>run_task_b</code> that determine whether each task should be run. There is a further parameter that is an input for <code>task_a</code>. Here's the important part:</p>
<p><strong>If <code>task_a</code> is run, then <code>task_b</code> should start only after <code>task_a</code> has finished. However, if <code>task_a</code> is not run, then <code>task_b</code> can start whenever.</strong></p>
<p>(Motivation: <code>task_a</code> is the main task. A new run of <code>task_a</code> can result in defunct artifacts, which <code>task_b</code> cleans up. However, one may wish to trigger <code>task_b</code> independently.)</p>
<p>This is what I have written so far:</p>
<pre class="lang-py prettyprint-override"><code>from airflow.decorators import dag, task
from airflow.models.param import Param
from datetime import datetime

default_args = {
    'owner': 'xyz',
    'email_on_retry': False,
    'email_on_failure': False,
    'retries': 0,
    'provide_context': True,
    'depends_on_past': False
}

@dag(
    default_args=default_args,
    start_date=datetime(2024, 3, 7),
    schedule_interval=None,
    params={
        'run_task_a': Param(True, type='boolean'),
        'run_task_b': Param(True, type='boolean'),
        'param_for_task_a': Param('foo', enum=['foo', 'bar'], type='string')
    }
)
def my_dag():

    @task
    def get_context_values(**context):
        context_values = dict()
        context_values['params'] = context['params']
        return context_values

    @task.branch
    def branching(context_values):
        tasks_to_run = []
        if context_values['params']['run_task_a']:
            tasks_to_run.append('task_a')
        if context_values['params']['run_task_b']:
            tasks_to_run.append('task_b')
        return tasks_to_run

    @task
    def task_a(context_values):
        param_for_task_a = context_values['params']['param_for_task_a']
        if param_for_task_a == 'foo':
            # Do some stuff
            pass
        if param_for_task_a == 'bar':
            # Do some different stuff
            pass
        return None

    @task
    def task_b():
        # Do some more stuff
        return None

    # Taskflow
    context_values = get_context_values()
    branching(context_values) >> [task_a(context_values), task_b()]

my_dag()
</code></pre>
<p><a href="https://i.sstatic.net/d5hOs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d5hOs.png" alt="enter image description here" /></a></p>
<p>The problem is when <code>run_task_a == True</code> and <code>run_task_b == True</code>: Both tasks run, but of course <code>task_b</code> does not wait for <code>task_a</code> to finish before starting because there is no dependency. I've tried to add this dependency by making <code>task_b</code> a downstream task of <code>task_a</code>, but then <code>task_b</code> does not run if <code>run_task_a == False</code> and <code>run_task_b == True</code>. Trigger rules also don't seem to be the solution, since <code>task_b</code> should not be run if <code>run_task_b == False</code>.</p>
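<p>One direction that is often suggested for this shape of problem (a hedged sketch only; the interaction between <code>@task.branch</code> and trigger rules should be verified on your Airflow version) is to keep the unconditional edge <code>task_a &gt;&gt; task_b</code> but relax <code>task_b</code>'s trigger rule so that a <em>skipped</em> <code>task_a</code> does not cascade the skip:</p>

```
# Pseudocode sketch -- the trigger-rule name is from the Airflow docs,
# but behaviour under branching should be tested on your version.
a = task_a(context_values)
b = task_b.override(trigger_rule="none_failed_min_one_success")()
branching(context_values) >> [a, b]
a >> b  # waits for task_a when it runs; a skipped task_a no longer skips task_b
```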
|
<python><airflow><airflow-taskflow>
|
2024-03-07 13:17:30
| 3
| 1,263
|
dwolfeu
|
78,121,643
| 842,622
|
How to create alias for first active item in reverse relation in Django models?
|
<p>I have a model called Item:</p>
<pre class="lang-py prettyprint-override"><code>class Item(models.Model):
    ...
</code></pre>
<p>I also have another model called Content. It has a relation to Item and a BooleanField that indicates whether the content is active:</p>
<pre class="lang-py prettyprint-override"><code>class Content(models.Model):
    item = ForeignKey(Item, related_name=)
    is_active = BooleanField(default=False)
    content = TextField()

    def save(self, *args, **kwargs):
        """There's a logic to make sure every item has only one content"""
</code></pre>
<p>I have two questions.</p>
<ol>
<li>How can I filter out the Items that have no content, or no active content?</li>
<li>Can I create something like an alias, so that <code>item.content</code> returns the item's active content without causing DB performance issues?</li>
</ol>
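<p>As a hedged sketch (field and related names are assumptions based on the question, and this is not runnable outside a configured Django project), both parts are commonly expressed with <code>Exists</code> for the filter and <code>Prefetch(..., to_attr=...)</code> for a cheap alias:</p>

```
# Django ORM sketch -- assumes Content.item has related_name="contents".
from django.db.models import Exists, OuterRef, Prefetch

active = Content.objects.filter(item=OuterRef("pk"), is_active=True)

# 1. Items that have at least one active content.
items = Item.objects.filter(Exists(active))

# 2. Attach the active content per item in one extra query, instead of
#    one query per item later.
items = items.prefetch_related(
    Prefetch("contents",
             queryset=Content.objects.filter(is_active=True),
             to_attr="active_contents")
)
# item.active_contents[0] is then the item's single active content.
```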
|
<python><django><django-models><django-queryset>
|
2024-03-07 13:09:23
| 1
| 651
|
Mirat Can Bayrak
|
78,121,558
| 7,422,392
|
Translated Fields not Rendering Correctly in Django CMS Plugin
|
<p>I am encountering an issue with rendering translated fields in a <code>Django CMS</code> <a href="https://github.com/django-cms/django-cms/blob/release/4.1.x/cms/plugin_base.py" rel="nofollow noreferrer">Plugin</a> using <code>django-modeltranslation</code>. The plugin is supposed to display translated content based on the selected language, but it only renders in the default language.</p>
<blockquote>
<p>All other non-CMS plugin parts on the page that also make use of <code>django-modeltranslation</code> translate and are displayed correctly.</p>
</blockquote>
<p>Here's a brief overview of the setup:</p>
<ul>
<li>I'm using <code>django==4.2.10</code>; <code>django-cms==4.1.0</code>; along with <code>django-modeltranslation==0.18.11</code> for model field
translations.</li>
<li>The plugin in question (HeaderPlugin) has a single field that is
translated using django-modeltranslation.</li>
<li>The template (cms_plugins/header/list_items.html) only includes <code>{{ instance }}</code>,<br />
but no value is displayed when non-default languages are selected. That is <code>{{ instance }}</code> only displays the instance when
the default language is selected.</li>
<li>The <a href="https://github.com/django-cms/django-cms/blob/release/4.1.x/cms/plugin_base.py" rel="nofollow noreferrer">Plugin</a> is placed within a <code>{% static_alias 'header' %}</code>. A noteworthy observation is that, when I want to set the plugin for a non-default language page via the menu dropdown <code>Aliases...</code>, it redirects to the default language (since the page does not exist). Hence I am only able to set the plugin from the default language.</li>
</ul>
<p>The plugin and associated model:</p>
<pre><code># cms_plugins.py
@plugin_pool.register_plugin
class HeaderPlugin(CMSPluginBase):
    model = HeaderPlugin
    module = _("Header")
    name = _("Header Plugin")
    render_template = "cms_plugins/header/list_items.html"

    def render(self, context, instance, placeholder):
        print("Not displayed when rendering in a non-default language")
        context.update({
            'instance': instance,
            'placeholder': placeholder,
            'lan_nl': instance.html_nl,
            'lan_en': instance.html_en,
        })
        return context

# models.py
class HeaderPlugin(CMSPlugin):
    header = models.ForeignKey(Header,
                               on_delete=models.PROTECT,
                               blank=False, null=False,
                               help_text=_("Select a header"))
    html = HTMLField(blank=True, default="",
                     help_text=_("Automatically generated field to reduce overhead"))

    def __str__(self):
        return f"Header plugin | {self.header.name}"
</code></pre>
<p>The template cms_plugins/header/list_items.html only contains {{ instance }}. The model (HeaderPlugin) is correctly configured for translation using django-modeltranslation, and the fields are registered in the model's translation options. Translations exist for the HeaderPlugin instance; I've verified this using the Django admin and the template variables {{ lan_nl }} and {{ lan_en }} (when the default language is selected).</p>
<p>Any suggestions or insights on how to troubleshoot and resolve this issue would be greatly appreciated.</p>
|
<python><django><django-cms><django-modeltranslation>
|
2024-03-07 12:57:12
| 0
| 1,006
|
sitWolf
|
78,121,507
| 4,867,977
|
python subprocess calling the local interpreter instead of the linked one
|
<p>I am attempting to execute a function within a Python script as a subprocess, using that script's own interpreter as specified in the subprocess.run() call. That interpreter's environment may include packages not present in my local Python environment. When I debug the file, errors occur due to these missing libraries, as if my local Python were used instead of the one linked in the command. Am I misunderstanding something?</p>
<pre><code>import subprocess

try:
    proc = subprocess.run(
        [
            r'C:\Users\user\AppData\Local\Programs\Python\Python311\python.exe',
            r'C:\subprocess_python\my_file_subprocess.py',
            str(self.x), str(self.y), str(self.z)
        ],
        capture_output=True,
        check=True
    )
except subprocess.CalledProcessError as proc_err:
    print("An exception occurred in the subprocess: \n ", proc_err)
    print("stdout : \n", proc_err.stdout.decode())
    print("stderr : \n", proc_err.stderr.decode())
    exit(1)
</code></pre>
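<p>A minimal, self-contained sanity check (with <code>sys.executable</code> standing in for the hard-coded Python 3.11 path) that the child really runs under the interpreter named in the argument list:</p>

```python
# Sanity check: the child process reports which interpreter it runs under.
# sys.executable stands in here for a hard-coded path such as
# C:\...\Python311\python.exe -- swap it in to test a specific interpreter.
import os
import subprocess
import sys

proc = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.executable)"],
    capture_output=True,
    check=True,
    text=True,
)
child_interpreter = proc.stdout.strip()
print(child_interpreter)  # the interpreter from the command, not the debugger's
```

<p>If the reported path is the expected one but imports still fail, the packages are likely missing from that interpreter's own site-packages; debugger configurations can also launch the script directly under the local interpreter instead of going through <code>subprocess.run()</code>.</p>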
|
<python><python-3.x><subprocess><interpreter>
|
2024-03-07 12:49:39
| 1
| 1,494
|
Novice_Developer
|
78,121,376
| 12,858,691
|
Mocking Flask's request.get_json raises "RuntimeError: Working outside of request context" in unit test
|
<p>I am developing unit tests for the backend of a large Flask app. I am testing whether the helper function <code>get_post_args()</code> handles empty requests correctly:</p>
<pre><code>import unittest
from typing import Any, Dict
from unittest.mock import patch

from flask import request

from errors import NoPostArguments

def get_post_args() -> Dict[str, Any]:
    args = request.get_json()
    if not isinstance(args, dict):
        raise NoPostArguments(
            "no arguments given for this POST request; request not served"
        )
    return args

def test_get_post_args_returns_none():
    with patch(
        "request.get_json",
        return_value=None,
    ):
        with unittest.TestCase().assertRaises(NoPostArguments):
            get_post_args()
</code></pre>
<p>When I run this with pytest, I get the error:</p>
<pre><code>
test_get_post_args_returns_none failed: def test_get_post_args_returns_none():
> with patch(
"flask.request.get_json",
return_value=None,
):
tests\unit_tests\test_arg_validation.py:243:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
..\..\..\AppData\Local\Programs\Python\Python310\lib\unittest\mock.py:1438: in __enter__
original, local = self.get_original()
..\..\..\AppData\Local\Programs\Python\Python310\lib\unittest\mock.py:1401: in get_original
original = target.__dict__[name]
venv02\lib\site-packages\werkzeug\local.py:311: in __get__
obj = instance._get_current_object()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def _get_current_object() -> T:
try:
obj = local.get()
except LookupError:
> raise RuntimeError(unbound_message) from None
E RuntimeError: Working outside of request context.
E
E This typically means that you attempted to use functionality that needed
E an active HTTP request. Consult the documentation on testing for
E information about how to avoid this problem.
venv02\lib\site-packages\werkzeug\local.py:508: RuntimeError
</code></pre>
<p>Any ideas on how to correctly mock request.get_json()?</p>
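<p>A hedged alternative sketch to patching the proxy (with <code>ValueError</code> standing in for the project's <code>NoPostArguments</code>, and <code>silent=True</code> added so a non-JSON body yields <code>None</code> instead of raising): run the helper inside a test request context whose body is not JSON at all.</p>

```python
# Sketch: push a test request context instead of patching flask.request.
# The request object is a context-local proxy, so it cannot even be
# looked up (let alone patched) outside an active request context --
# which is exactly the RuntimeError seen above.
from typing import Any, Dict

from flask import Flask, request

app = Flask(__name__)

def get_post_args() -> Dict[str, Any]:
    args = request.get_json(silent=True)  # silent=True: None for non-JSON bodies
    if not isinstance(args, dict):
        raise ValueError("no arguments given for this POST request; request not served")
    return args

with app.test_request_context("/", method="POST", data="not json"):
    try:
        get_post_args()
        raised = False
    except ValueError:
        raised = True

print(raised)  # True: the helper rejected the non-JSON body
```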
|
<python><unit-testing><flask>
|
2024-03-07 12:28:11
| 1
| 611
|
Viktor
|
78,120,924
| 10,551,444
|
An error occurred while initializing Chrome with profile and logging: Message: unknown error: cannot parse internal JSON template: Line: 1
|
<p>I think there is an incompatibility issue:</p>
<p><strong>Environment:</strong></p>
<p>Windows 10</p>
<p>Selenium 4.10</p>
<p>Python 3.10.7</p>
<p>webdriver-manager 4.0.1</p>
<p>Chrome Version 122.0.6261.112 (Build officiel) (64 bits)</p>
<p><strong>I am making a python script. I want to open the Chrome browser with my Chrome profile.</strong></p>
<p>I tried everything ChatGPT suggested to me. I always get the same issue:</p>
<p><code>An error occurred while initializing Chrome with profile and logging: Message: unknown error: cannot parse internal JSON template: Line: 1, column: 1, Unexpected token.</code></p>
<p>So I asked chatGPT to ignore my code and to make a new script from scratch. The prompt:</p>
<p><code>ok. Well, nothing works. So let's do it another way. Ignore my code and make yourself from scratch a function that opens Chrome driver with selenium 4.18.1.</code></p>
<p>After correcting some minor issues, it gave me this code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager
import os

def initialize_chrome_with_profile_and_logging():
    try:
        # Specify the path to the user's Chrome profile
        profile_path = os.path.join(os.environ['LOCALAPPDATA'], 'Google', 'Chrome', 'User Data')
        options = webdriver.ChromeOptions()
        options.add_argument(f'user-data-dir={profile_path}')  # Path to your chrome profile

        # Enable verbose logging and specify log file path
        service = ChromeService(ChromeDriverManager().install())
        log_path = os.path.join(os.getcwd(), 'chromedriver.log')
        service.start(args=['--verbose', '--log-path=' + log_path])

        # Initialize the Chrome driver with options and service
        driver = webdriver.Chrome(service=service, options=options)
        return driver
    except Exception as e:
        print(f"An error occurred while initializing Chrome with profile and logging: {e}")
        return None
</code></pre>
<p>Does anyone have time to reproduce this issue?</p>
<p>I don't know whether the problem is my PC or a bug.</p>
|
<python><selenium-chromedriver><webdriver-manager>
|
2024-03-07 11:18:31
| 1
| 1,223
|
Gauthier Buttez
|
78,120,824
| 15,991,297
|
Unknown IMAP4 command: 'idle' When Accessing Inbox
|
<p>I am trying to check for new emails in real time. I believe the code below should work but I get an "AttributeError: Unknown IMAP4 command: 'idle'" error. Can anyone see what the issue is?</p>
<pre><code>import imaplib
import email
username = "test@xxxxx.com"
password = "xxxxx"
# Connect to the IMAP server
mail = imaplib.IMAP4_SSL('mail.xxxxx.com')
# Login to the server
mail.login(username, password)
# Select the INBOX folder
mail.select("INBOX")
# Start listening for new emails
mail.idle()
</code></pre>
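<p>The error is consistent with the standard library itself: <code>imaplib</code> implements no IDLE command, and unknown attributes on <code>IMAP4</code> fall through to exactly this "Unknown IMAP4 command" handler. A quick offline check:</p>

```python
# imaplib maps attribute names onto its table of known IMAP commands;
# IDLE (RFC 2177) is simply not in that table, so mail.idle() cannot work.
import imaplib

print(hasattr(imaplib.IMAP4, "idle"))  # False
print("IDLE" in imaplib.Commands)      # False
```

<p>For real-time notifications an IDLE-capable third-party client (e.g. <code>imaplib2</code> or <code>imap-tools</code>) is needed; otherwise fall back to periodically re-selecting the mailbox or calling <code>noop()</code>.</p>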
|
<python><imaplib>
|
2024-03-07 11:03:50
| 1
| 500
|
James
|
78,120,471
| 2,508,672
|
Read s3 binary csv file python
|
<p>A lambda function uploads a csv file in 'wb' mode.</p>
<p>Code for uploading csv file which has read from email:</p>
<pre><code>open('/tmp/' + filename, 'wb').write(part.get_payload(decode=True))
s3r.meta.client.upload_file('/tmp/' + filename)
</code></pre>
<p>Now I want to read the file, and I have the code below:</p>
<pre><code>import csv
import io

import boto3

client = boto3.client('s3')
data = client.get_object(Bucket=bucket_name, Key=key)
with io.TextIOWrapper(data["Body"], encoding='latin1') as text_file:
    reader = csv.DictReader(text_file, delimiter=',')
    for row in reader:
        print(row)
</code></pre>
<p>But I am getting output as a byte string, like below:</p>
<p><code>\x1bdº\x9e\x8f\x90dN\x18"m': '\x00\x80\x02\x00\x000\x8f\x18\x00\x00\x00', "à¥Ç\x9eDO97*\x82~§Èɸ8ÀOíc\x1c|n¦Ñ\x07ä\x04Eøÿ\x14ö\x11éºóÀB\x10ÉÀ!$}\x87íàÈé;{ìÐå[\x83îñ\x96é\x7f2þ\x06\x00\x00ÿÿ\x03\x00PK\x03\x04\x14\x00\x06\x00\x08\x00\x00\x00!\x00µU0#ô\x00\x00\x00L\x02\x00\x00\x0b\x00\x08\x02_rels/.rels ¢\x04\x02(\xa0\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00¬\x92MOÃ0\x0c\x86ïHü\x87È÷ÕÝ\x90\x10BKwAH»!T~\x80IÜ\x0fµ\x8d£$\x1bÝ¿'\x1c\x10T\x1a\x83\x03G\x7f½~üÊÛÝ<\x8dê</code></p>
<p>How can I get the rows? I need to change some data.</p>
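<p>A hedged observation from the dump itself: the bytes contain the ZIP signature <code>PK\x03\x04</code> and the OOXML member name <code>_rels/.rels</code>, which strongly suggests the uploaded "csv" is really a zip-based workbook (.xlsx). No text decoding will ever produce rows from it. A cheap self-contained check:</p>

```python
# Detect a ZIP container (xlsx/docx/pptx are all ZIPs) before trying to
# parse the payload as CSV text.
import io
import zipfile

def looks_like_xlsx(payload: bytes) -> bool:
    """True when the payload starts with the ZIP local-file signature."""
    return payload[:4] == b"PK\x03\x04"

# Illustration: an in-memory ZIP stands in for the S3 object body.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("_rels/.rels", "<xml/>")
payload = buf.getvalue()

print(looks_like_xlsx(payload))        # True
print(looks_like_xlsx(b"a,b\n1,2\n"))  # False: real CSV text
```

<p>If the check is positive, read the object as a workbook instead, e.g. <code>pd.read_excel(io.BytesIO(data["Body"].read()))</code> or openpyxl.</p>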
|
<python><amazon-web-services><amazon-s3>
|
2024-03-07 10:16:12
| 0
| 4,608
|
Md. Parvez Alam
|
78,120,376
| 3,723,306
|
ThreadPoolExecutor too fast for CPU bound task
|
<p>I'm trying to understand how ThreadPoolExecutor and ProcessPoolExecutor work. My assumption for this test was that a CPU-bound task such as increasing a counter wouldn't benefit from running on a ThreadPoolExecutor, because the work never releases the GIL, so only one thread can execute Python bytecode at a time.</p>
<pre><code>import time
from concurrent.futures import ThreadPoolExecutor
from random import random

@measure_execution_time
def cpu_slow_function(item):
    start = time.time()
    duration = random()
    counter = 1
    while time.time() - start < duration:
        counter += 1
    return item, counter

def test_thread_pool__cpu_bound():
    """
    100 tasks of average .5 seconds each, would take 50 seconds to complete sequentially.
    """
    items = list(range(100))
    with ThreadPoolExecutor(max_workers=100) as executor:
        results = list(executor.map(cpu_slow_function, items))
    for index, (result, counter) in enumerate(results):
        assert result == index
        assert counter >= 0.0
</code></pre>
<p>To my surprise, this test takes about ~5s to finish. Based on my assumptions, it should be taking ~50s, 100 tasks of an average of 0.5s each.</p>
<p>What am I missing?</p>
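<p>A hedged sketch of why the test finishes early: each task stops after a <em>wall-clock</em> deadline, not after a fixed amount of computation. Wall time elapses for all threads at once, so they all finish together; the GIL only limits how large each thread's counter gets.</p>

```python
# Each worker spins until `duration` wall-clock seconds elapse.  Ten such
# tasks on ten threads finish in roughly max(duration), not sum(duration),
# even though only one thread holds the GIL at any instant.
import time
from concurrent.futures import ThreadPoolExecutor

def wall_clock_bound(duration: float) -> int:
    """Spin until `duration` wall seconds elapse; the counter size (not
    the finishing time) reflects the CPU share this thread received."""
    start = time.time()
    counter = 0
    while time.time() - start < duration:
        counter += 1
    return counter

start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    counters = list(pool.map(wall_clock_bound, [0.2] * 10))
elapsed = time.time() - start

print(round(elapsed, 2))  # far below the 10 * 0.2 = 2s sequential estimate
```

<p>A genuinely CPU-bound task with a fixed amount of work (e.g. counting to a fixed large number) would show no such speed-up under the GIL.</p>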
|
<python><concurrency><multiprocessing><threadpool><python-multithreading>
|
2024-03-07 10:00:12
| 2
| 1,480
|
JaviOverflow
|
78,120,278
| 5,640,161
|
Why is the lexsort warning thrown for some levels of pandas DataFrames but not others?
|
<p><strong>BACKGROUND</strong></p>
<p>Consider the MultiIndexed pandas DataFrame from the following code</p>
<pre><code>import numpy as np
import pandas as pd

N = 3
rangeN = list(range(1, N + 1))
index = pd.MultiIndex.from_product(
    [rangeN, rangeN], names=["level1", "level2"]
)
columns = [
    ("col_B", "col_B.1",),
    ("col_B", "col_B.2",),
]
components = range(1, 3)
columns += [("col_A", "col_A.1", f"col_A.1.{c}") for c in components]
columns += [("col_A", "col_A.2", f"col_A.2.{c}") for c in components]
columns = pd.MultiIndex.from_tuples(columns)
df = pd.DataFrame(np.random.randint(0, 9, size=(9, 6)), columns=columns, index=index)

# df.loc[:, ("col_B", "col_B.2",)] = 7              # Warning
# df.loc[:, ("col_B", "col_B.2", slice(None))] = 7  # No warning
print("The whole df:\n", df)                                        # No warning
print("A subset of the df:\n", df.loc[:, ("col_A")])                # No warning
print("A subsubset of the df:\n", df.loc[:, ("col_A", "col_A.1")])  # Warning
</code></pre>
<p>which returns</p>
<pre><code>The whole df:
col_B col_A
col_B.1 col_B.2 col_A.1 col_A.2
NaN NaN col_A.1.1 col_A.1.2 col_A.2.1 col_A.2.2
level1 level2
1 1 8 3 4 7 2 5
2 3 4 1 5 7 7
3 7 1 0 8 1 0
2 1 2 3 8 2 7 3
2 4 4 2 5 1 5
3 1 3 5 4 0 1
3 1 5 5 0 4 4 2
2 4 6 8 4 6 8
3 5 7 7 5 2 3
A subset of the df:
col_A.1 col_A.2
col_A.1.1 col_A.1.2 col_A.2.1 col_A.2.2
level1 level2
1 1 4 7 2 5
2 1 5 7 7
3 0 8 1 0
2 1 8 2 7 3
2 2 5 1 5
3 5 4 0 1
3 1 0 4 4 2
2 8 4 6 8
3 7 5 2 3
A subsubset of the df:
col_A.1.1 col_A.1.2
level1 level2
1 1 4 7
2 1 5
3 0 8
2 1 8 2
2 2 5
3 5 4
3 1 0 4
2 8 4
3 7 5
c:\users\tfovid\draft.py:30: PerformanceWarning: indexing past lexsort depth may impact performance.
print("A subsubset of the df:\n", df.loc[:, ("col_A", "col_A.1")]) # Warning
</code></pre>
<p><strong>QUESTION</strong></p>
<p>Why does indexing the higher level <code>df.loc[:, ("col_A")]</code> work fine while <code>df.loc[:, ("col_A", "col_A.1")]</code> throws a warning? I don't want to sort the array because the current order of the columns is a requirement from the end user. Is there any way to get rid of this warning without sorting? If not, is there at least a hack (e.g., temporarily sorting and then reverting to the original column order) so that I can get rid of this pesky warning?</p>
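<p>A hedged workaround sketch (column names mirror the question, on a reduced frame): keep the user-facing frame in its required order, but either do lookups on a lexsorted copy, or silence just this one warning around the slice:</p>

```python
# Two ways to avoid the PerformanceWarning without changing the
# user-visible column order.
import warnings

import numpy as np
import pandas as pd

cols = pd.MultiIndex.from_tuples(
    [("col_B", "col_B.1", ""), ("col_B", "col_B.2", ""),
     ("col_A", "col_A.1", "col_A.1.1"), ("col_A", "col_A.1", "col_A.1.2")]
)
df = pd.DataFrame(np.zeros((2, 4)), columns=cols)  # col_B before col_A: not lexsorted

# Option 1: lexsort a lookup copy; the original frame keeps its order.
lookup = df.sort_index(axis=1)
subset = lookup.loc[:, ("col_A", "col_A.1")]       # no warning

# Option 2: suppress only the PerformanceWarning around the slice.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", pd.errors.PerformanceWarning)
    subset2 = df.loc[:, ("col_A", "col_A.1")]

print(subset.shape, subset2.shape)
```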
|
<python><pandas>
|
2024-03-07 09:46:16
| 1
| 863
|
Tfovid
|
78,120,062
| 1,473,517
|
How to draw a circle which should be clipped only within the diagonal line in Python?
|
<p>I am trying to draw a circle which is clipped by a diagonal line. Here is my non-working code:</p>
<pre><code>import matplotlib.pyplot as plt
# Create the circle with radius 6
circle = plt.Circle((0, 0), 6, color='r', fill=False)
# Set up the plot (reuse the previous grid settings)
plt.figure(figsize=(8, 8))
plt.xlim(0, 10)
plt.ylim(0, 10)
plt.grid()
# Add the circle to the plot
ax = plt.gca()
ax.add_patch(circle)
# Draw a diagonal line
plt.plot([0, 7], [7, 0], color='b', linestyle='--')
# Set aspect ratio to ensure square grid cells
ax.set_aspect("equal")
# Clip the circle using the diagonal line.
# This doesn't work
ax.set_clip_path(plt.Polygon([[0, 0], [7, 0], [0, 7]]))
# Show the plot
plt.title("Circle Centered at (0,0) (not) Clipped by Diagonal Line")
plt.show()
</code></pre>
<p>Here is what it shows currently.</p>
<p><a href="https://i.sstatic.net/wBS5d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wBS5d.png" alt="enter image description here" /></a></p>
<p>I don't want to show any of the circle that goes past the diagonal line.</p>
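<p>A hedged fix sketch: attach the clip path to the circle itself rather than the Axes, and give the clip polygon the data-coordinate transform (the Agg backend below is only so the sketch runs headless):</p>

```python
# Clip the circle patch to the triangle under the diagonal line by
# calling set_clip_path on the circle, not on the Axes.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 8))
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.grid()
ax.set_aspect("equal")

circle = plt.Circle((0, 0), 6, color="r", fill=False)
ax.add_patch(circle)
ax.plot([0, 7], [7, 0], color="b", linestyle="--")

# The polygon needs the data transform so its coordinates match the plot.
clip = plt.Polygon([[0, 0], [7, 0], [0, 7]], transform=ax.transData)
circle.set_clip_path(clip)

fig.canvas.draw()  # render once; only the arc inside the triangle is drawn
print(circle.get_clip_path() is not None)  # True
```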
|
<python><matplotlib>
|
2024-03-07 09:14:11
| 2
| 21,513
|
Simd
|
78,120,042
| 14,953,535
|
Pytest IndexError: tuple index out of range
|
<p>Currently I'm trying to configure my Django REST API with pytest. When I try to use models with <code>@pytest.mark.django_db</code> I get a <code>tuple index out of range</code> error, even though I do not reference any tuples in the test case. Everything works fine when I do not do any database queries or do not use <code>@pytest.mark.django_db</code>.</p>
<pre class="lang-py prettyprint-override"><code>import pytest

from report_tenant.models import VisualModelTableColumn

@pytest.mark.django_db
def test_visual_model_table_column_table():
    col = VisualModelTableColumn.objects.create(key_1=1, key_2=1, key_5=1, key_3=1, key_4=1)
    assert True == True
</code></pre>
|
<python><django-models><django-rest-framework><pytest><pytest-django>
|
2024-03-07 09:09:33
| 1
| 622
|
Shakya Peiris
|
78,120,007
| 3,383,722
|
FastAPI - Python - overrides dependencies
|
<p>Is it possible to override dependencies like this in FastAPI?</p>
<p><a href="https://i.sstatic.net/uBojS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uBojS.png" alt="override dependencies " /></a></p>
<p>I need to override it from an APIRouter.</p>
|
<python><fastapi>
|
2024-03-07 09:03:06
| 0
| 1,965
|
Piotr
|
78,119,926
| 12,314,521
|
How to get index of different top-k at each row in a 2D tensor in Pytorch?
|
<p>Given:</p>
<ul>
<li>a positive integer tensor A: (batch_size, N) in which zero is the smallest value. For example:</li>
</ul>
<pre><code>tensor([[4, 3, 1, 4, 2],
[0, 0, 2, 3, 4],
[4, 4, 3, 0, 3]])
</code></pre>
<p>For each row, I want to get the index selected by that row's k: either the largest value or the smallest non-zero value.</p>
<ul>
<li>k is a list of <code>batch_size</code> elements chosen randomly, whose values express only 2 cases: either take the largest value (k = 1, with probability 0.7), or take the smallest value while ignoring zeros (k = 0, with probability 0.3). E.g. if the row is [2,3,4,0], the smallest non-zero value is 2, at index 0.</li>
</ul>
<p>With the example above, <code>if k = [1,0,0]</code> (1 means get largest, 0 mean smallest) then the output indices will be</p>
<p><code>output = [0, 2, 2]</code>; the corresponding values are <code>[4, 2, 3]</code></p>
<p>Notes: please vectorize these calculations.</p>
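<p>A hedged sketch of the vectorized per-row pick, written in NumPy for brevity; <code>torch.argmax</code>, <code>torch.where</code> and masking mirror these operations one-to-one on tensors:</p>

```python
# k == 1 -> argmax of the row; k == 0 -> index of the smallest non-zero
# entry (zeros are pushed above the maximum so argmin skips them).
import numpy as np

A = np.array([[4, 3, 1, 4, 2],
              [0, 0, 2, 3, 4],
              [4, 4, 3, 0, 3]])
k = np.array([1, 0, 0])

largest_idx = A.argmax(axis=1)

masked = np.where(A == 0, A.max() + 1, A)  # zeros can no longer win argmin
smallest_nz_idx = masked.argmin(axis=1)

out = np.where(k == 1, largest_idx, smallest_nz_idx)
print(out.tolist())  # [0, 2, 2]
```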
|
<python><pytorch>
|
2024-03-07 08:49:20
| 1
| 351
|
jupyter
|
78,119,798
| 14,739,428
|
install python3.11.7 get no module named '_ssl'
|
<p>Here is the server OS version</p>
<pre><code>[root@hdp1 bin]# lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.9.2009 (Core)
Release: 7.9.2009
Codename: Core
</code></pre>
<p>I tried to install python 3.11.7 but failed with <code>no module named '_ssl'</code></p>
<p>So I installed the openssl-1.1.1w</p>
<pre><code>[root@hdp1 bin]# openssl version
OpenSSL 1.1.1w 11 Sep 2023 (Library: OpenSSL 1.1.1k FIPS 25 Mar 2021)
[root@hdp1 bin]# whereis openssl
openssl: /usr/bin/openssl /usr/lib64/openssl /usr/local/bin/openssl /usr/include/openssl /usr/share/man/man1/openssl.1ssl.gz
[root@hdp1 bin]# which openssl
/usr/local/bin/openssl
</code></pre>
<p>I installed OpenSSL and then installed Python as usual, but it failed.
I usually do this and it always works:</p>
<pre><code>tar -zxvf openssl-1.1.1w.tar.gz
cd openssl-1.1.1w
./config –prefix=/usr/local/openssl shared zlib
make && make install
tar -zxvf Python-3.11.7.tgz
cd Python-3.11.7
./configure --with-openssl=/usr/local/openssl
make && make altinstall
</code></pre>
<p>This time, when installing on a new server, something very strange happened: OpenSSL was not installed in the default directory /usr/local/openssl, but was instead scattered across various locations under /usr, similar to Python.</p>
<p>For example, typically OpenSSL is installed by default in /usr/local/openssl, and includes the following five folders: bin, include, lib, share, ssl. However, it appears to be scattered to the following locations: /usr/bin, /usr/include, /usr/lib64, as you can also see from the result of the whereis command above.</p>
<p>Even though running <code>openssl version</code> directly returns the correct version number, installing Python 3.11.7 always tells me that the SSL installation is incorrect, and configuring it with <code>--with-openssl</code> does not seem to help.</p>
<pre><code>The necessary bits to build these optional modules were not found:
_hashlib _ssl _tkinter
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
Could not build the ssl module!
Python requires a OpenSSL 1.1.1 or newer
</code></pre>
<p>I found a folder named 'openssl11' in <code>/usr/lib64/openssl</code>. I attempted to specify the library files for Python using <code>./configure --with-openssl=/usr/lib64/openssl/openssl11 --enable-optimizations</code>, but it failed. SSL still cannot be used after installing.</p>
<p>This has caused me significant inconvenience. Now I am unable to delete the installed files and unable to configure my SSL using the usual methods. Can anyone help me?</p>
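<p>For reference, a hedged build-recipe sketch (paths assumed, not verified on this server). One detail worth double-checking in the transcript above: the configure flag must be two ASCII hyphens, <code>--prefix</code>; a pasted en-dash (<code>–prefix</code>) is not recognized as the option, in which case OpenSSL ignores the prefix and installs into its default scattered locations, which would match the symptoms described:</p>

```
# Recipe sketch only -- adjust paths to your layout.
cd openssl-1.1.1w
./config --prefix=/usr/local/openssl shared zlib
make && make install

cd ../Python-3.11.7
./configure --with-openssl=/usr/local/openssl --with-openssl-rpath=auto
make && make altinstall
```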
|
<python><linux><ssl><openssl><centos>
|
2024-03-07 08:26:28
| 1
| 301
|
william
|
78,119,454
| 2,056,878
|
How to configure `gr.ChatInterface` to return multiple outputs (response & source documents)?
|
<p>I have this <code>gr.ChatInterface</code> that I want to adjust so it also shows the user the source documents that were used during retrieval (meaning, adding another output).</p>
<pre class="lang-py prettyprint-override"><code>import gradio as gr

def generate_response(message, history):
    print(f"\n\n[message] {message}")
    # call LLM & generate response
    return response.answer

demo = gr.ChatInterface(
    fn=generate_response,
    title="RAG app for Q&A",
    description="Ask any question about Stuff",
).queue(default_concurrency_limit=2, max_size=10)

demo.launch(share=True)
</code></pre>
<p>I already tried <code>outputs</code> but it's not supported by <code>gr.ChatInterface</code>:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/workspaces/aider_repos/app.py", line 20, in <module>
demo = gr.ChatInterface(
^^^^^^^^^^^^^^^^^
TypeError: ChatInterface.__init__() got an unexpected keyword argument 'outputs'
</code></pre>
<p>How to configure <code>gr.ChatInterface</code> to return multiple outputs (response & source documents)?</p>
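<p>One hedged workaround sketch while staying on <code>gr.ChatInterface</code> (attribute names such as <code>source_documents</code> are assumptions about the RAG response object): fold the sources into the single returned string. Genuinely separate output components would instead require building the UI from <code>gr.Blocks</code> with a <code>gr.Chatbot</code>:</p>

```
# Sketch only -- response/source_documents are placeholders for your objects.
def generate_response(message, history):
    response = ...  # call LLM & retriever as before
    sources = "\n".join(f"- {doc}" for doc in response.source_documents)
    return f"{response.answer}\n\nSources:\n{sources}"
```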
|
<python><user-interface><huggingface><gradio>
|
2024-03-07 07:25:54
| 0
| 1,150
|
devio
|
78,119,437
| 14,114,654
|
Keep selected pages from PDF
|
<p>I have a pandas dataframe, pdf_summary, which is sorted and has 50 unique rows. Each row is a particular combination of file_pages_to_keep. How could I create a folder and PDF for each file_name?</p>
<pre><code>pdf_path = "Documents/menu.pdf"
</code></pre>
<pre><code>pdf_summary
file_name file_pages_to_keep
1 - Monday, Wednesday 1,3
2 - Monday 1
3 - Monday, Tuesday, Wednesday 1,2,3
...
50 - Friday 5
</code></pre>
<p>The expected output would be 50 folders, with one PDF inside each folder with only those file_pages taken from the <code>menu.pdf</code>.</p>
<pre><code>"Documents/1 - Monday, Wednesday/1 - Monday, Wednesday.pdf" (PDF only has pages 1 and 3 from menu.pdf)
...
</code></pre>
|
<python><pandas><pdf><pypdf>
|
2024-03-07 07:21:58
| 1
| 1,309
|
asd
|
78,119,374
| 3,467,698
|
How do I dynamically import a function by its pythonic path?
|
<p>I have a function in a submodule that normally can be imported like this:</p>
<pre><code>from core.somepack import my_func
</code></pre>
<p>Instead I would like to import it lazily by a given pythonic string <code>core.somepack.my_func</code>. What is the best way to do it?</p>
<pre><code>my_func = some_function_i_am_asking_for('core.somepack.my_func')
my_func()
</code></pre>
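<p>A minimal sketch with the standard library: split the dotted path into module and attribute, import the module, then fetch the attribute (<code>os.path.join</code> below stands in for <code>core.somepack.my_func</code>):</p>

```python
# importlib resolves the module part; getattr resolves the final name.
import importlib

def import_by_path(dotted_path: str):
    """Import 'pkg.module.attr' lazily and return the attribute."""
    module_path, _, attr_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, attr_name)

join = import_by_path("os.path.join")  # stands in for 'core.somepack.my_func'
print(join("a", "b"))
```

<p>If Django is already a dependency, <code>django.utils.module_loading.import_string</code> implements the same idea.</p>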
|
<python>
|
2024-03-07 07:09:56
| 1
| 9,971
|
Fomalhaut
|
78,119,223
| 12,379,095
|
Neural Language Model: Getting error - ValueError: cannot reshape array of size 380 into shape (1,1,10)
|
<p>I am trying to follow a tutorial on character based Neural Language Model, which attempts to predict "words in a sequence based on the specific words".</p>
<p>As instructed, I have generated the sequence of texts to a file, defined the language model and saved the model as well as the mapping characters (as <code>*.pkl</code>).</p>
<p>Next, in order to generate text using the saved mapping, I encoded the text to integers and ran the code to predict characters in sequence using <code>predict_classes()</code>. I am using the following function to predict a <em>sequence of characters</em> from a <em>seed text</em>.</p>
<p>However I am running into the following error when I run the script:</p>
<blockquote>
<p>ValueError: cannot reshape array of size 380 into shape (1,1,10)</p>
</blockquote>
<p>This is the part where I am getting the error:</p>
<pre><code>def generate_seq(model, mapping, seq_length, seed_text, n_chars):
    in_text = seed_text
    # generate a fixed number of characters
    for _ in range(n_chars):
        # encode the characters as integers
        encoded = [mapping[char] for char in in_text]
        # truncate sequences to a fixed length
        encoded = pad_sequences([encoded], maxlen=seq_length, truncating='pre')
        # one hot encode
        encoded = to_categorical(encoded, num_classes=len(mapping))
        encoded = encoded.reshape(1, encoded.shape[0], encoded.shape[1])  # <--- Error line
        # predict character
        yhat = model.predict_classes(encoded, verbose=0)
        # reverse map integer to character
        out_char = ''
        for char, index in mapping.items():
            if index == yhat:
                out_char = char
                break
        # append to input
        in_text += out_char
    return in_text
</code></pre>
<p>The function is being called with:</p>
<pre><code>print(generate_seq(model, mapping, 10, 'Sing a son', 20))
</code></pre>
<p>Why am I getting this error?</p>
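<p>A hedged reading of the traceback, reproduced with plain NumPy (the 38 below is deduced from the error message, since 380 = 10 × 38): after <code>to_categorical</code> the array is already 3-D, so the extra reshape tries to squeeze 380 values into 1 × 1 × 10 slots:</p>

```python
import numpy as np

# pad_sequences([...], maxlen=10) yields shape (1, 10); one-hot encoding with
# a 38-character mapping turns that into (1, 10, 38) -- already 3-D.
seq_length, num_classes = 10, 38
encoded = np.zeros((1, seq_length, num_classes))
print(encoded.size)  # 380

# The reshape then asks for 1 * 1 * 10 = 10 slots for those 380 values:
try:
    encoded.reshape(1, 1, seq_length)
except ValueError as err:
    print(err)  # cannot reshape array of size 380 into shape (1,1,10)
```

<p>If that reading is right, the reshape line is redundant (the array already has the <code>(batch, timesteps, features)</code> layout the model expects), or it should target <code>(1, seq_length, num_classes)</code> instead.</p>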
|
<python><machine-learning><keras><nlm>
|
2024-03-07 06:41:41
| 0
| 574
|
Stop War
|
78,119,131
| 4,429,265
|
CERTIFICATE_VERIFY_FAILED when trying to use qdrant with docker-compose and https
|
<p>I have two containers, qdrant and searchai. qdrant is my qdrant container with this docker-compose setup:</p>
<pre><code>version: '3'
services:
qdrant:
image: qdrant/qdrant:latest
restart: always
container_name: qdrant
ports:
- "6333:6333"
- "6334:6334"
volumes:
- ./qdrant_data:/qdrant_data
configs:
- source: qdrant_config
target: /qdrant/config/production.yaml
configs:
qdrant_config:
file: ./qdrant_data/qdrant_custom_config.yaml
volumes:
qdrant_data:
</code></pre>
<p>And this is my qdrant_custom_config.yaml:</p>
<pre><code>service:
api_key: ${QDRANT_API_KEY}
enable_tls: true
tls:
# Server certificate chain file
cert: /qdrant_data/tls/qdrant_3.pem
# Server private key file
key: /qdrant_data/tls/qdrant_key.pem
</code></pre>
<p>I generated the <code>.pem</code> files using mkcert, passing the qdrant container name (<code>qdrant</code>) along with localhost for the certificate generation:</p>
<pre><code>mkcert qdrant localhost 127.0.0.1 ::1
</code></pre>
<p>Then I have a function inside my django backend which is in the searchai container to connect to qdrant using:</p>
<pre><code>qdrant_client = QdrantClient(
url=kwargs.get("url", "https://qdrant"),
port=kwargs.get("port", 6333),
api_key=kwargs.get(
"apikey"
),
timeout=kwargs.get("timeout", 1000),
)
</code></pre>
<p>So far there are a lot of places to make a mistake, but I do not know what I have done wrong. When I call the backend API endpoint that uses this function with curl:</p>
<pre><code>curl "172.31.0.3:80/products/search/?q=kadin+ayakkabi&amp;language=en"
</code></pre>
<p>I get this error:</p>
<pre><code>searchai | qdrant_client.http.exceptions.ResponseHandlingException: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)
</code></pre>
<p>I also checked the qdrant connection both from my host machine and from inside the searchai container:
When I ran this curl command from my host machine, I got:</p>
<pre><code>curl -X GET https://localhost:6333
{"title":"qdrant - vector search engine","version":"1.7.4"}
</code></pre>
<p>But when I went into the searchai container:</p>
<pre><code>docker exec -it searchai sh
curl -X GET https://qdrant:6333
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
</code></pre>
<p>I did check that the <code>.pem</code> files exist in the specified directory <code>/qdrant_data/tls</code>. Other than this, I have no clue how to solve this problem.</p>
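<p>A hedged observation, in case it helps others reading this: mkcert-issued leaf certificates verify only against the mkcert root CA (<code>mkcert -CAROOT</code> prints where <code>rootCA.pem</code> lives on the host). The searchai container does not trust that root, which is consistent with both the curl and the qdrant-client failures. A sketch of the client-side idea (the mount path below is hypothetical):</p>

```python
import ssl

# Default contexts verify the peer against the system trust store, which does
# not contain the mkcert root CA inside the container -- hence the failure.
ctx = ssl.create_default_context()
# One fix: mount rootCA.pem into the container and trust it explicitly:
# ctx.load_verify_locations(cafile="/certs/rootCA.pem")  # hypothetical path
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # default contexts verify peers
```

<p>The same idea applies to curl inside the container: <code>curl --cacert rootCA.pem https://qdrant:6333</code> should verify once the root CA file is supplied.</p>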
|
<python><docker><ssl><qdrant><qdrantclient>
|
2024-03-07 06:19:07
| 0
| 417
|
Vahid
|
78,118,909
| 3,223,818
|
Drawing the outermost contour of a set of data points without losing resolution
|
<p>I have a set of data points (as scattered data points in black) and I want to draw their outermost contour. I have tried to calculate the convex hull of my points (code below) but I lose too much resolution and the shape loses its nuances.</p>
<pre><code># load usual stuff
from __future__ import print_function
import sys, os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import colors
from scipy.spatial import ConvexHull
# read input file
cbm_contour = sys.argv[1]
def parse_pdb_coords(file):
    coords = np.empty(shape=[0, 3])
    with open(file, "r") as f:
        for line in f:
            if line.split()[0] == "ATOM" or line.split()[0] == "HETATM":
                Xcoord = float(line[30:38])
                Ycoord = float(line[38:46])
                Zcoord = float(line[46:54])
                coords = np.append(coords, [[Xcoord, Ycoord, Zcoord]], axis=0)
    return coords
##########################################################################
plt.figure(figsize=(11, 10))
# parse input file
cbm_coords = parse_pdb_coords(cbm_contour)
# consider only x- and y-axis coordinates
flattened_points = cbm_coords[:, :2]
x = cbm_coords[:,0]
y = cbm_coords[:,1]
# Find the convex hull of the flattened points
hull = ConvexHull(flattened_points)
for simplex in hull.simplices:
    plt.plot(flattened_points[simplex, 0], flattened_points[simplex, 1], color='red', lw=2)
plt.scatter(cbm_coords[:,0], cbm_coords[:,1], s=1, c='black')
plt.xlabel(r'X-axis coordinate ($\mathrm{\AA}$)', size=16)
plt.ylabel(r'Y-axis coordinate ($\mathrm{\AA}$)', size=16)
plt.yticks(np.arange(-20, 24, 4), size=16)
plt.xticks(np.arange(-20, 24, 4), size=16)
plt.savefig("example.png", dpi=300, transparent=False)
plt.show()
</code></pre>
<p>Note that this can't be transformed to a 'minimal working example' due to the complexity of the data points, but my dataset can be downloaded <a href="https://pastebin.com/5GWdqHQc" rel="nofollow noreferrer">here</a>. The idea is to have a generalized solution for other datasets too.</p>
<p>Does anyone have a suggestion?</p>
<p><a href="https://i.sstatic.net/BGzl7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BGzl7.png" alt="enter image description here" /></a></p>
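<p>One direction worth considering is an "alpha shape" (concave hull): keep only the Delaunay triangles whose circumradius is below a threshold, then take the edges that belong to exactly one kept triangle as the outline. A minimal sketch (the <code>alpha</code> value is a tuning knob you would adjust per dataset, not something derived automatically):</p>

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Boundary edges of the alpha shape (concave hull) of 2-D `points`.

    Keeps Delaunay triangles whose circumradius is below 1/alpha; edges that
    belong to exactly one kept triangle form the outline.
    """
    tri = Delaunay(points)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        pa, pb, pc = points[ia], points[ib], points[ic]
        a = np.linalg.norm(pb - pa)
        b = np.linalg.norm(pc - pb)
        c = np.linalg.norm(pa - pc)
        s = (a + b + c) / 2.0
        area = max(s * (s - a) * (s - b) * (s - c), 1e-16) ** 0.5  # Heron's formula
        if a * b * c / (4.0 * area) < 1.0 / alpha:                 # circumradius test
            for e in ((ia, ib), (ib, ic), (ic, ia)):
                e = tuple(sorted(e))
                edge_count[e] = edge_count.get(e, 0) + 1
    return [e for e, n in edge_count.items() if n == 1]

# Toy example: unit square corners plus the centre; the outline is the square.
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], dtype=float)
outline = alpha_shape_edges(pts, alpha=1.0)
print(sorted(outline))  # the four sides of the square
```

<p>For the real dataset, each edge <code>(i, j)</code> would be drawn with <code>plt.plot(points[[i, j], 0], points[[i, j], 1], 'r-')</code>; larger <code>alpha</code> hugs the points more tightly, at the risk of opening holes in the outline.</p>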
|
<python><numpy><matplotlib><convex-hull>
|
2024-03-07 05:12:52
| 1
| 813
|
mdpoleto
|
78,118,772
| 5,567,893
|
How to get the values from the list of tensors by matching indices in pytorch?
|
<p>I have a question about selecting values from a list of tensors using multiple indices.<br />
Although I think there are similar questions, such as <a href="https://stackoverflow.com/questions/75504084/select-multiple-indices-in-an-axis-of-pytorch-tensor/75505948#75505948">here</a>, I couldn't make them work for my case.</p>
<p>I have a dataset comprising the 4-dimensional features for about 108,000 nodes and their links.</p>
<pre class="lang-py prettyprint-override"><code>tmp = []
for _ in range(4):
tmp.append(torch.rand((107940, 4), dtype=torch.float).to(device))
tmp
# [tensor([[0.9249, 0.5367, 0.5161, 0.6898],
# [0.2189, 0.5593, 0.8087, 0.9893],
# [0.4344, 0.1507, 0.4631, 0.7680],
# ...,
# [0.7262, 0.0339, 0.9483, 0.2802],
# [0.8652, 0.3117, 0.8613, 0.6062],
# [0.5434, 0.9583, 0.3032, 0.3919]], device='cuda:0'),
# tensor([...], device='cuda:0'),
# tensor([...], device='cuda:0'),
# tensor([...], device='cuda:0')]
</code></pre>
<pre class="lang-py prettyprint-override"><code># batch.xxx: factors in the batch from the graph
# Note that batch.edge_index[0] is the target node and batch.edge_index[1] is the source node.
# If you need more information, please see the Pytorch Geometric data format.
print(batch.n_id[batch.edge_index])
print(batch.edge_index_class)
#tensor([[10231, 3059, 32075, 10184, 1187, 6029, 10134, 10173, 6521, 9400,
# 14942, 31065, 10087, 10156, 10158, 26377, 85009, 918, 4542, 10176,
# 10180, 6334, 10245, 10228, 2339, 7891, 10214, 10240, 10041, 10020,
# 7610, 10324, 4320, 5951, 9078, 9709],
# [ 1624, 1624, 6466, 6466, 6779, 6779, 7691, 7691, 8655, 8655,
# 30347, 30347, 32962, 32962, 34435, 34435, 3059, 3059, 32075, 32075,
# 1187, 1187, 6029, 6029, 10173, 10173, 6521, 6521, 9400, 9400,
# 31065, 31065, 10087, 10087, 10158, 10158]], device='cuda:0')
#tensor([3., 3., 2., 2., 0., 0., 3., 3., 2., 2., 0., 0., 2., 2., 2., 2., 3., 3.,
# 2., 2., 0., 0., 0., 0., 3., 3., 2., 2., 2., 2., 0., 0., 2., 2., 2., 2.],
# device='cuda:0')
</code></pre>
<p>In this case, I want the new tensor that contains the feature values matched to the edge_index_class.<br />
For example, <code>tmp_filled</code> should take its 1624th, 10231st, and 3059th rows from the fourth tensor in <code>tmp</code>, because those entries are labelled 3 in <code>edge_index_class</code>.
Similarly, the 6466th, 32075th, and 10184th rows of the third tensor in <code>tmp</code> should go into the same indices of <code>tmp_filled</code>.</p>
<p>To do this, I tried the code as below:</p>
<pre class="lang-py prettyprint-override"><code>for k in range(len(batch.edge_index_class)):
tmp_filled[batch.n_id[torch.unique(batch.edge_index)]] = tmp[int(batch.edge_index_class[k].item())][batch.n_id[torch.unique(batch.edge_index)]]
tmp_filled
# tensor([[0., 0., 0., 0.],
# [0., 0., 0., 0.],
# [0., 0., 0., 0.],
# ...,
# [0., 0., 0., 0.],
# [0., 0., 0., 0.],
# [0., 0., 0., 0.]], device='cuda:0')
</code></pre>
<p>But it returned the wrong result.</p>
<pre class="lang-py prettyprint-override"><code>tmp_filled[1624]
# tensor([0.3438, 0.5555, 0.6229, 0.7983], device='cuda:0')
tmp[3][1624]
# tensor([0.6895, 0.3241, 0.1909, 0.1635], device='cuda:0')
</code></pre>
<p>Given that <code>tmp_filled</code> needs to have shape (107940 x 4), how should I correct my code?</p>
<p>Thank you for reading my question!</p>
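<p>As a point of comparison, a vectorized sketch of the selection described above, written with NumPy stand-ins (the arrays below are made up; the identical advanced-indexing step works on torch tensors via <code>torch.stack(tmp)[class_ids, node_ids]</code>):</p>

```python
import numpy as np

# Stand-ins for the four (num_nodes, 4) feature tensors in `tmp`.
rng = np.random.RandomState(0)
tmp = [rng.rand(100, 4) for _ in range(4)]

# Stand-ins for edge_index_class and the node ids selected by n_id[edge_index].
class_ids = np.array([3, 3, 2, 0])
node_ids = np.array([5, 17, 42, 9])

# One advanced-indexing step replaces the Python loop: row k of the result is
# tmp[class_ids[k]][node_ids[k]].
stacked = np.stack(tmp)                  # shape (4, 100, 4)
selected = stacked[class_ids, node_ids]  # shape (4, 4): one row per edge
print(np.array_equal(selected[0], tmp[3][5]))  # True
```

<p>The loop in the question overwrites the same rows on every iteration (the index expression does not depend on <code>k</code>), which is consistent with the wrong values observed; a single indexing step like the above avoids that.</p>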
|
<python><pytorch><pytorch-geometric>
|
2024-03-07 04:26:08
| 1
| 466
|
Ssong
|
78,118,754
| 5,635,892
|
Fit for a parameter when the function is obtained by numerical integration in Python
|
<p>I have the code below in Python. It numerically integrates the function <code>func</code> between two time values and saves the last value in <code>counts_list</code>. One of the parameters of <code>func</code> is <code>omega_Rabi</code>. After obtaining <code>counts_list</code>, I would like to fit <code>counts_list</code> vs <code>delta_list</code> (knowing that <code>counts_list</code> was generated by <code>func</code>) and recover <code>omega_Rabi</code>. Basically, I assume I know <code>counts_list</code> and I want to find the <code>omega_Rabi</code> that generated it (pretending I don't know it). How can I do this? I usually use <code>curve_fit</code>, but what I have tried so far didn't work. Thank you!</p>
<pre><code>import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit
pi = np.pi
omega = 2*pi*50000
omega_Rabi = 2*pi*170*6
def func(t, y, omega, delta, omega_Rabi):
c_up, c_down = y
dydt = [-1j*(omega_Rabi*np.sin(omega*t))*c_down,-1j*(omega_Rabi*np.sin(omega*t))*c_up-1j*delta*c_down]
return dydt
t_init = 0
t_fin = 0.00005
t_eval =np.arange(t_init,t_fin,t_fin/10000)
delta_list = 2*pi*np.arange(-10000,10001,4000)
delta_list = delta_list[np.where(delta_list != 0)]
counts_list = np.zeros(len(delta_list))
y0 = [0+0j,1+0j]
for i in range(len(delta_list)):
delta = delta_list[i]
sol = solve_ivp(func, [t_init,t_fin], y0,rtol=1e-9, atol=1e-11, t_eval=t_eval, args=(omega,delta,omega_Rabi))
y_up = abs(sol["y"][0])**2
counts_list[i] = y_up[-1]
</code></pre>
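<p>For context, the usual pattern is to wrap the whole numerical integration inside the model function handed to <code>curve_fit</code>, so each call re-integrates with the trial parameter. A stripped-down sketch with a simple ODE (dy/dt = −k·y) standing in for the Rabi equations:</p>

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def model(t, k):
    # Re-integrate the ODE for every trial value of k that curve_fit proposes.
    sol = solve_ivp(lambda t_, y: -k * y, [0.0, t.max()], [1.0],
                    t_eval=t, rtol=1e-8, atol=1e-10)
    return sol.y[0]

t_data = np.linspace(0.0, 2.0, 25)
y_data = np.exp(-1.7 * t_data)            # synthetic "measured" data, k = 1.7

popt, _ = curve_fit(model, t_data, y_data, p0=[1.0])
print(popt[0])  # recovers k close to 1.7
```

<p>For the case above, the model function would take <code>(delta_array, omega_Rabi)</code>, loop over the delta values, run <code>solve_ivp</code> for each, and return <code>abs(sol["y"][0][-1])**2</code> per delta. A reasonable initial guess <code>p0</code> matters, since the signal oscillates in <code>omega_Rabi</code> and the least-squares surface has local minima.</p>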
|
<python><scipy><curve-fitting><numerical-integration>
|
2024-03-07 04:17:28
| 1
| 719
|
Silviu
|
78,118,612
| 1,765,397
|
python __init_subclass__ with multiple classes: why doesn't __init_subclass__ get called twice?
|
<p>I was searching for information on using <code>__init_subclass__</code> with multiple base classes and I came across this bug report: <a href="https://bugs.python.org/issue42674" rel="nofollow noreferrer">https://bugs.python.org/issue42674</a></p>
<p>The submitter stated that <code>__init_subclass__</code> was only called once instead of twice.</p>
<p>The issue was closed as "not a bug", with this comment:</p>
<blockquote>
<p>The two subclasses in the test script are not calling super(). When they do, both <code>__init_subclass__</code> methods are called.</p>
</blockquote>
<p>The report came with the following code:</p>
<pre><code>class ClassOne:
@classmethod
def __init_subclass__(cls):
print(f"ClassOne.__init_subclass__( cls = {cls} )")
class ClassTwo:
@classmethod
def __init_subclass__(cls):
print(f"ClassTwo.__init_subclass__( cls = {cls} )")
class MyClass(ClassOne, ClassTwo):
def __init__(self):
super().__init__()
super(ClassOne, self).__init__()
</code></pre>
<p>I can't figure out how to trigger both <code>__init_subclass__</code> methods. Here's my attempt:</p>
<pre><code>class ClassOne:
@classmethod
def __init_subclass__(cls):
print(f"ClassOne.__init_subclass__( cls = {cls} )")
def __init__(self):
super().__init__()
print(f"ClassOne.__init__( self = {self} )")
class ClassTwo:
@classmethod
def __init_subclass__(cls):
print(f"ClassTwo.__init_subclass__( cls = {cls} )")
def __init__(self):
super().__init__()
print(f"ClassTwo.__init__( self = {self} )")
class MyClass(ClassOne, ClassTwo):
def __init__(self):
super(ClassOne, self).__init__()
super(ClassTwo, self).__init__()
</code></pre>
<p>and the results:</p>
<pre><code>> python3 init_subclass.py
ClassOne.__init_subclass__( cls = <class '__main__.MyClass'> )
</code></pre>
<p>I'm running Python 3.7.</p>
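<p>For comparison, a sketch of the cooperative version the bug-report comment describes: each <code>__init_subclass__</code> chains to <code>super().__init_subclass__(**kwargs)</code>, so both hooks fire when the subclass is defined (no <code>@classmethod</code> decorator is needed; it is implicit for this method). The <code>super()</code> calls in <code>__init__</code> are unrelated, since <code>__init_subclass__</code> runs at class-definition time, not instantiation time:</p>

```python
calls = []

class ClassOne:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)   # chain to the next class in the MRO
        calls.append("One")

class ClassTwo:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        calls.append("Two")

class MyClass(ClassOne, ClassTwo):            # defining this triggers both hooks
    pass

print(calls)  # ['Two', 'One'] -- ClassTwo's hook runs first via the super() chain
```

<p>The MRO of <code>MyClass</code> is <code>MyClass → ClassOne → ClassTwo → object</code>, so <code>ClassOne</code>'s hook runs, delegates to <code>ClassTwo</code>'s, which delegates to <code>object</code>'s no-op version.</p>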
|
<python>
|
2024-03-07 03:29:29
| 2
| 1,730
|
kdubs
|
78,118,542
| 759,991
|
How can I pass a Django url parameter to template's url method?
|
<p>I have this urls.py file:</p>
<pre><code>...
urlpatterns = [
path('region_service_cost/<str:region>/', views.region_service_cost, name='region_service_cost'),
path('monthly-region-service-cost/<str:region>/', views.monthly_region_service_cost, name='monthly-region-service-cost')
]
</code></pre>
<p>And I have this views.py file:</p>
<pre><code># Create your views here.
from django.shortcuts import render
from django.http import JsonResponse
from .models import MonthlyRegionServiceCost
def region_service_cost(request, region):
return render(request, 'region-service-cost.html')
def monthly_region_service_cost(request, region='global'):
colors = ['#DFFF00', '#FFBF00', '#FF7F50', '#DE3163', '#9FE2BF', '#40E0D0', '#6495ED', '#CCCCFF', '#9CC2BF',
'#40E011', '#641111', '#CCCC00']
labels = [ym['year_month'] for ym in MonthlyRegionServiceCost.objects.values('year_month').distinct()][:12]
datasets = []
for ym in labels:
for i, obj in enumerate(MonthlyRegionServiceCost.objects.filter(region=region, year_month=ym)):
dataset = {
'label': obj.service,
'backgroundColor': colors[i % len(colors)],
'data': [c.cost for c in MonthlyRegionServiceCost.objects.filter(service=obj.service, region=region)]
}
datasets.append(dataset)
return JsonResponse(data={
'labels': labels,
'datasets': datasets
})
</code></pre>
<p>and here is my region-service-cost.html file:</p>
<pre><code>{% extends 'base.html' %}
{% block content %}
<div id="container" style="width: 75%;">
<canvas id="monthly-region-service-cost" data-url="{% url 'monthly-region-service-cost' region=region %}/{{ region | urlencode }}/"></canvas>
</div>
<script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
$(function () {
var $productCostChart = $("#monthly-region-service-cost");
$.ajax({
url: $productCostChart.data("url"),
success: function (data) {
console.log(data);
var ctx = $productCostChart[0].getContext("2d");
new Chart(ctx, {
type: 'bar',
data: { labels: data.labels, datasets: data.datasets, },
options: {
plugins: { title: { display: true, text: 'Stacked Bar chart for pollution status' }, },
scales: { x: { stacked: true, }, y: { stacked: true } }
}
});
}
});
});
</script>
{% endblock %}
</code></pre>
<p>When I point my browser to <a href="http://127.0.0.1:8000/region_service_cost/global/" rel="nofollow noreferrer">http://127.0.0.1:8000/region_service_cost/global/</a> I get this output:</p>
<pre><code>NoReverseMatch at /region_service_cost/global/
Reverse for 'monthly-region-service-cost' with keyword arguments '{'region': ''}' not found. 1 pattern(s) tried: ['monthly\\-region\\-service\\-cost/(?P<region>[^/]+)/\\Z']
Request Method: GET
Request URL: http://127.0.0.1:8000/region_service_cost/global/
Django Version: 4.2.10
Exception Type: NoReverseMatch
Exception Value:
Reverse for 'monthly-region-service-cost' with keyword arguments '{'region': ''}' not found. 1 pattern(s) tried: ['monthly\\-region\\-service\\-cost/(?P<region>[^/]+)/\\Z']
Exception Location: /Users/russell.cecala/COST_REPORTS/django/cost_explorer/venv/lib/python3.9/site-packages/django/urls/resolvers.py, line 828, in _reverse_with_prefix
Raised during: monthly_cost.views.region_service_cost
Python Executable: /Users/russell.cecala/COST_REPORTS/django/cost_explorer/venv/bin/python
Python Version: 3.9.6
</code></pre>
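<p>For reference, a hedged guess at the cause (assumption: the template's <code>region</code> should come from the URL parameter): the view renders the template without a context, so <code>region</code> is the empty string when <code>{% url %}</code> runs, which matches the <code>{'region': ''}</code> in the error. Passing it through would look roughly like:</p>

```
# views.py -- sketch: pass the URL parameter into the template context
def region_service_cost(request, region):
    return render(request, 'region-service-cost.html', {'region': region})
```

<p>The <code>{% url %}</code> tag then already produces the full path, so the trailing <code>/{{ region | urlencode }}/</code> appended to <code>data-url</code> in the template would be redundant.</p>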
|
<python><python-3.x><django>
|
2024-03-07 03:03:52
| 2
| 10,590
|
Red Cricket
|
78,118,524
| 2,562,927
|
Better way to check dictionary for alternative keys
|
<p>I'm parsing a dictionary which may have the value I want under 4 possible keys (I have no control over the dictionary).</p>
<p>The key could be <code>"value"</code> <code>"_value"</code> <code>"amount"</code> or <code>"_amount"</code></p>
<p>Currently my only idea is</p>
<pre><code>try:
val = myDict["value"]
except KeyError:
try:
val = myDict["_value"]
except KeyError:
try:
val = myDict["amount"]
except KeyError:
val = myDict["_amount"]
</code></pre>
<p>But this is making me feel sick just looking at it. Any better ideas?</p>
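<p>One compact alternative is a generator over the candidate keys in priority order; it mirrors the try/except chain, raising <code>StopIteration</code> (or returning a default) when nothing matches. A sketch with a made-up example dictionary:</p>

```python
# Hypothetical input -- only one of the candidate keys is present.
myDict = {"_amount": 42}

KEYS = ("value", "_value", "amount", "_amount")

# The generator stops at the first key that exists, so priority order is kept.
val = next(myDict[k] for k in KEYS if k in myDict)
print(val)  # 42

# Variant with a default instead of raising StopIteration when nothing matches:
val_or_default = next((myDict[k] for k in KEYS if k in myDict), None)
```

<p>If a missing key should still be an error, the first form already raises; wrapping it to re-raise <code>KeyError</code> keeps the original semantics.</p>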
|
<python><dictionary>
|
2024-03-07 02:57:12
| 3
| 1,133
|
desired login
|
78,118,164
| 1,717,931
|
PyG dataset showing more than 1 graph
|
<p>I am a newbie to PyG and am attempting to build a PyG dataset from a small JSON file (with 5 records: 5 nodes, 8 edges). After building the dataset, when I print out the properties of the graph, I see that the number of graphs is 3 and the number of nodes is 20. I expect only 5 nodes and only 1 graph. The number of edges (8) is right. Perhaps this is because, even though there are 5 nodes, there are 4 types of nodes (org, event, player and rated).</p>
<p>I am not sure where I am making a mistake in creating this dataset. <strong>Please note</strong> that at this point I am trying to learn to create the correct PyG dataset for a given input. I have not thought about any node-classification, link-prediction or anomaly-detection scenario yet.</p>
<p>The 'rated' field is assumed to be the label (y).</p>
<pre><code>Graph Dataset:
HeteroData(
org={ x=[5, 2] },
player={ x=[5, 3] },
event={ x=[5, 1] },
rated={ x=[5, 1] },
(event, is_related_to, event)={ edge_index=[2, 8] },
(player, is_rated, rated)={ y=[5] }
)
Number of graphs: 3
Number of nodes: 20
Number of edges: 8
Number of node-features: {'org': 2, 'player': 3, 'event': 1, 'rated': 1}
Number of edge-features: {('event', 'is_related_to', 'event'): 0, ('player', 'is_rated', 'rated'): 0}
Edges are directed: True
Graph has isolated nodes: True
Graph has loops: False
Node Types: ['org', 'player', 'event', 'rated']
Edge Attributes: 20
</code></pre>
<p>The code for building the dataset looks like this:</p>
<pre><code>def build_dataset(self, edge_index, org_X, player_X, event_X, rated_X, labels_y):
data = HeteroData()
data['org'].x = org_X
data['player'].x = player_X
data['event'].x = event_X
data['rated'].x = rated_X
data['event', 'is_related_to', 'event'].edge_index = edge_index
data['player', 'is_rated', 'rated'].y = labels_y
return data
</code></pre>
<p>I convert a node to its features is like this (for player-node):</p>
<pre><code>def extract_player_node_features(self, df):
sorted_player_df = df.sort_values(by='player_id').set_index('player_id')
sorted_player_df = sorted_player_df.reset_index(drop=False)
player_id_mapping = sorted_player_df['player_id']
#print(f'\nPlayer ID mapping:\n{player_id_mapping}')
# select player node features
player_node_features_df = df[['player_name', 'age', 'school']]
player_name_features_df = pd.DataFrame(player_node_features_df.player_name.values.tolist(), player_node_features_df.index).add_prefix('player_name_')
player_name_features_ohe = pd.get_dummies(player_name_features_df)
player_age_features_df = pd.DataFrame(player_node_features_df.age.values.tolist(), player_node_features_df.index).add_prefix('age_')
player_age_features_ohe = pd.get_dummies(player_age_features_df)
player_school_features_df = pd.DataFrame(player_node_features_df.school.values.tolist(), player_node_features_df.index).add_prefix('school_')
player_school_features_ohe = pd.get_dummies(player_school_features_df)
player_node_features = pd.concat([player_node_features_df, player_name_features_ohe], axis=1)
player_node_features = pd.concat([player_node_features, player_age_features_ohe], axis=1)
player_node_features = pd.concat([player_node_features, player_school_features_ohe], axis=1)
player_node_features.drop(columns=['player_name', 'age', 'school'], axis=1, inplace=True)
player_node_X = player_node_features.to_numpy(dtype='int32')
player_node_X = torch.from_numpy(player_node_X)
return player_node_X
</code></pre>
<p>Finally, the df (created from json input file) and the converted-df (to numeric, startng from 0 to make it compact) are below:</p>
<p>Input df:</p>
<pre><code> event_id event_type org_id org_name org_location player_id player_name age school related_event_id rated
0 1-ab3 qualifiers 305 milan tennis club Milan 1-b7a3-52d2 Alex 20 BCE [4-ab3, 3-ab3] no
1 2-ab3 under 18 finals 76 Nadal tennis academy madrid 2-b7a3-52d2 Bob 20 BCMS [5-ab3, 1-ab3] yes
2 3-ab3 womens tennis qualifiers 185 Griz tennis club budapest 3-b7a3-52d2 Mary 21 BCE [4-ab3] no
3 4-ab3 US professional tennis club 285 Nick Bolletieri Tennis Academy tampa 4-b7a3-52d2 Joe 21 BCMS [1-ab3, 3-ab3] yes
4 5-ab3 womens tennis circuit 305 milan tennis club Milan 5-b7a3-52d2 Bolt 22 LTHS [4-ab3] no
</code></pre>
<p>Sorted input df:</p>
<pre><code> related_event_id org_id org_name org_location player_id player_name event_id event_type age school rated
1 [4, 0] 0 1 2 1 1 1 2 0 1 1
2 [3] 1 0 1 2 4 2 4 1 0 0
3 [0, 2] 2 2 3 3 3 3 0 1 1 1
0 [3, 2] 3 3 0 0 0 0 1 0 0 0
4 [3] 3 3 0 4 2 4 3 2 2 0
</code></pre>
<p>When I ignore 'rated' as a node type, the <code>validate()</code> function complains, so I half-heartedly added 'rated' to the node types.</p>
<pre><code>ValueError: The node types {'rated'} are referenced in edge types but do not exist as node types
</code></pre>
<p>Any suggestions as to why I am seeing this? I will be happy to provide the code and input file so you can reproduce it.</p>
|
<python><pandas><dataframe><pytorch-geometric>
|
2024-03-07 00:41:23
| 0
| 2,501
|
user1717931
|
78,118,100
| 9,422,114
|
Pandas: Filter dataframe by difference between adjacent rows
|
<p>I have the following data in a dataframe.</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Timestamp</th>
<th>MeasureA</th>
<th>MeasureB</th>
<th>MeasureC</th>
<th>MeasureD</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.00</td>
<td>26.46</td>
<td>63.60</td>
<td>3.90</td>
<td>0.67</td>
</tr>
<tr>
<td>0.94</td>
<td>26.52</td>
<td>78.87</td>
<td>1.58</td>
<td>0.42</td>
</tr>
<tr>
<td>1.94</td>
<td>30.01</td>
<td>82.04</td>
<td>1.13</td>
<td>0.46</td>
</tr>
<tr>
<td>3.00</td>
<td>30.19</td>
<td>82.00</td>
<td>1.17</td>
<td>0.36</td>
</tr>
<tr>
<td>4.00</td>
<td>30.07</td>
<td>81.43</td>
<td>1.13</td>
<td>0.42</td>
</tr>
<tr>
<td>5.94</td>
<td>30.02</td>
<td>82.46</td>
<td>1.05</td>
<td>0.34</td>
</tr>
<tr>
<td>8.00</td>
<td>30.22</td>
<td>82.48</td>
<td>0.98</td>
<td>0.35</td>
</tr>
<tr>
<td>9.00</td>
<td>30.00</td>
<td>82.21</td>
<td>1.13</td>
<td>0.33</td>
</tr>
<tr>
<td>10.00</td>
<td>30.00</td>
<td>82.34</td>
<td>1.12</td>
<td>0.34</td>
</tr>
</tbody>
</table></div>
<p>And I'd like to filter the entries using some non-uniform intervals. Let say that my intervals are <code>[1.0, 1.5]</code></p>
<p>What I'm trying to achieve is this: we take the first row (<code>row0</code>), and to get the next valid row, we look for the next row whose <code>Timestamp</code> value is greater than or equal to <code>row0 + 1.0</code>.</p>
<p>In this scenario, the next valid row will be the one with the <code>1.94</code> timestamp. Then, for the next valid row, we use the next item in the intervals array, which is <code>1.5</code>. That makes the next row the one with a <code>Timestamp</code> value of <code>4.00</code>, since <code>1.94 + 1.5</code> equals <code>3.44</code>.</p>
<p>For the row after that, we wrap around and start again from the beginning of the intervals array.</p>
<p>After going through all the data, the resulting dataframe should be:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Timestamp</th>
<th>MeasureA</th>
<th>MeasureB</th>
<th>MeasureC</th>
<th>MeasureD</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.00</td>
<td>26.46</td>
<td>63.60</td>
<td>3.90</td>
<td>0.67</td>
</tr>
<tr>
<td>1.94</td>
<td>30.01</td>
<td>82.04</td>
<td>1.13</td>
<td>0.46</td>
</tr>
<tr>
<td>4.00</td>
<td>30.07</td>
<td>81.43</td>
<td>1.13</td>
<td>0.42</td>
</tr>
<tr>
<td>5.94</td>
<td>30.02</td>
<td>82.33</td>
<td>1.11</td>
<td>0.35</td>
</tr>
<tr>
<td>8.00</td>
<td>30.22</td>
<td>82.48</td>
<td>0.98</td>
<td>0.35</td>
</tr>
<tr>
<td>9.00</td>
<td>30.00</td>
<td>82.21</td>
<td>1.13</td>
<td>0.33</td>
</tr>
</tbody>
</table></div>
<p>Is there a way to achieve this with the existing filtering methods in pandas?</p>
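<p>There is no single built-in filter for this sequential rule (each kept row depends on the previously kept row), but a short loop with <code>numpy.searchsorted</code> and <code>itertools.cycle</code> stays close to pandas idioms. A sketch, checked against the sample data above (it assumes the timestamp column is sorted ascending):</p>

```python
from itertools import cycle

import numpy as np
import pandas as pd

def filter_by_intervals(df, intervals, col="Timestamp"):
    # Walk the rows: each step jumps to the first timestamp >= last kept
    # timestamp + the next interval, cycling through the intervals list.
    ts = df[col].to_numpy()
    gaps = cycle(intervals)
    keep, i = [0], 0
    while True:
        j = int(np.searchsorted(ts, ts[i] + next(gaps), side="left"))
        if j >= len(ts):
            break
        keep.append(j)
        i = j
    return df.iloc[keep]

df = pd.DataFrame({"Timestamp": [0.00, 0.94, 1.94, 3.00, 4.00,
                                 5.94, 8.00, 9.00, 10.00]})
out = filter_by_intervals(df, [1.0, 1.5])
print(out["Timestamp"].tolist())  # [0.0, 1.94, 4.0, 5.94, 8.0, 9.0]
```

<p><code>side="left"</code> makes <code>searchsorted</code> return the first index whose timestamp is greater than or equal to the target, matching the rule described above.</p>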
|
<python><python-3.x><pandas><dataframe>
|
2024-03-07 00:15:11
| 1
| 1,401
|
Jacobo
|
78,118,020
| 1,879,366
|
Qdrant client scroll filter does not work
|
<p>I'm using Qdrant database through its Python client. I need to find entries in the database that have some metadata field set to a certain value (without using vector similarity). I'm trying to do it this way:</p>
<pre><code>from qdrant_client.http import models
condition = models.FieldCondition(key="field_name", match=models.MatchValue(value="some_value"))
scroll_filter = models.Filter(must=[condition])
results = client.scroll(collection_name="my_collection", scroll_filter=scroll_filter)
</code></pre>
<p>I know that there are points that have metadata "field_name=some_value", but the response is <code>([], None)</code> no matter what I try. Is there a way to query the database based on metadata only?</p>
<p>(Note: <code>client.scroll(collection_name="my_collection")</code> indeed returns results, some of them having the required key/value pair in the metadata. It's the filter that doesn't seem to work.)</p>
<p>EDIT: The problem was that the data was originally inserted using LangChain, which puts the metadata object within the payload. That further means that instead of using <code>condition=FieldCondition(key="field_name", ...</code>, one must use <code>condition=FieldCondition(key="metadata.field_name", ...</code>.</p>
|
<python><langchain><qdrant><qdrantclient>
|
2024-03-06 23:48:46
| 1
| 872
|
Lovro
|
78,117,968
| 4,107,537
|
`buildozer` does not place Python package in `site-packages` when compiling apk for android
|
<p>I have made a basic project skeleton <code>myapp</code> that replicates the general structure of my usual Python development flow, but adds Buildozer in for android targets. It looks like this.</p>
<pre><code>myapp-proj
├── buildozer.spec
├── main.py
├── myapp
│ ├── cli
│ │ ├── cli.py
│ │ └── __init__.py
│ ├── gui
│ │ ├── gui.py
│ │ └── __init__.py
│ ├── __init__.py
│ ├── myapp.egg-info
│ │ ├── dependency_links.txt
│ │ ├── entry_points.txt
│ │ ├── PKG-INFO
│ │ ├── requires.txt
│ │ ├── SOURCES.txt
│ │ └── top_level.txt
│ └── utils
│ ├── __init__.py
│ └── logging.py
└── setup.py
</code></pre>
<p>It's pretty simple. I have a <code>main.py</code> that just calls a function in <code>gui/gui.py</code> (imported correctly up into <code>gui/__init__.py</code>). The <code>main.py</code> looks like this:</p>
<pre><code>import os
import sys
import traceback
from myapp.gui import gui
def main():
print(f"sys.path: {sys.path}")
try:
gui()
except Exception as err:
print(f"Critical exception. Error:\n{err}")
traceback.print_exc()
if __name__=='__main__':
main()
</code></pre>
<p>And <code>gui.py</code>:</p>
<pre><code>import kivy
from kivy.app import App
from kivy.uix.label import Label
from myapp.utils.logging import log_generator
log = log_generator(__name__)
class AppGui(App):
def build(self):
return Label(text='myapp: hello world')
def gui():
log.info(f"hello world")
AppGui().run()
</code></pre>
<p>Logging is a boilerplate logger.</p>
<p>I am trying to compile this into an <code>.apk</code>, and that succeeds, but when run on a device or emulator (I tried both), I get a <code>ModuleNotFoundError: No module named 'myapp'</code> error. Inspecting the generated <code>.apk</code>, the <code>assets/private.tar</code> archive contains only <code>main.py</code> (and no subfolders).</p>
<h3>buildozer.spec</h3>
<p>Command:</p>
<pre class="lang-bash prettyprint-override"><code>buildozer android debug deploy run ; buildozer android logcat | grep myapp
</code></pre>
<p>Spec file:</p>
<pre><code>source.dir = .
source.include_exts = py,png,jpg,kv,atlas
source.include_patterns = main.py, myapp/__init__.py, myapp/gui/__init__.py, myapp/gui/gui.py, myapp/utils/__init__.py, myapp/utils/logging.py
version = 0.1
requirements = python3,kivy
orientation = portrait
#
# Android specific
#
fullscreen = 0
android.permissions = android.permission.INTERNET, (name=android.permission.WRITE_EXTERNAL_STORAGE;maxSdkVersion=18), android.permission.READ_EXTERNAL_STORAGE
android.archs = arm64-v8a, armeabi-v7a
android.allow_backup = True
p4a.branch = develop
android.no-byte-compile-python = True
[buildozer]
log_level = 2
warn_on_root = 1
</code></pre>
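<p>One hedged observation (an assumption, not verified against the p4a <code>develop</code> branch used here): <code>source.include_exts = py,...</code> already packages every matching file under <code>source.dir</code> recursively, so the <code>source.include_patterns</code> line should be unnecessary for plain <code>.py</code> sources; and when that option is used, entries are comma-separated glob patterns, where the spaces after the commas in the spec above have been reported to stop files from matching. A spec fragment reflecting both points:</p>

```
source.dir = .
source.include_exts = py,png,jpg,kv,atlas
# Either omit include_patterns entirely, or use globs without spaces:
# source.include_patterns = myapp/*,myapp/**/*
```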
<h3>Logs</h3>
<p>The build output does not show any problems: the build succeeds, it just doesn't put anything except <code>main.py</code> into <code>assets/private.tar</code>. I then get this output at run time:</p>
<pre><code>2024-03-06 13:51:17.390 5227-5299 python org.test.myapp I Initialized python
2024-03-06 13:51:17.390 5227-5299 python org.test.myapp I AND: Init threads
2024-03-06 13:51:17.400 5227-5299 python org.test.myapp I testing python print redirection
2024-03-06 13:51:17.415 5227-5299 python org.test.myapp I Android path ['.', '/data/user/0/org.test.myapp/files/app/_python_bundle/stdlib.zip', '/data/user/0/org.test.myapp/files/app/_python_bundle/modules', '/data/user/0/org.test.myapp/files/app/_python_bundle/site-packages']
2024-03-06 13:51:17.417 5227-5299 python org.test.myapp I os.environ is environ({'PATH': '/product/bin:/apex/com.android.runtime/bin:/apex/com.android.art/bin:/system_ext/bin:/system/bin:/system/xbin:/odm/bin:/vendor/bin:/vendor/xbin', 'ANDROID_BOOTLOGO': '1', 'ANDROID_ROOT': '/system', 'ANDROID_ASSETS': '/system/app', 'ANDROID_DATA': '/data', 'ANDROID_STORAGE': '/storage', 'ANDROID_ART_ROOT': '/apex/com.android.art', 'ANDROID_I18N_ROOT': '/apex/com.android.i18n', 'ANDROID_TZDATA_ROOT': '/apex/com.android.tzdata', 'EXTERNAL_STORAGE': '/sdcard', 'ASEC_MOUNTPOINT': '/mnt/asec', 'DOWNLOAD_CACHE': '/data/cache', 'BOOTCLASSPATH': '/apex/com.android.art/javalib/core-oj.jar:/apex/com.android.art/javalib/core-libart.jar:/apex/com.android.art/javalib/okhttp.jar:/apex/com.android.art/javalib/bouncycastle.jar:/apex/com.android.art/javalib/apache-xml.jar:/system/framework/framework.jar:/system/framework/framework-graphics.jar:/system/framework/ext.jar:/system/framework/telephony-common.jar:/system/framework/voip-common.jar:/system/framework/ims-common.jar:/apex/com.android.i18n/javalib/core-icu4j.jar:/apex/com.android.adservices/javalib/framework-adservices.jar:/apex/com.android.adservices/javalib/framework-sdksandbox.jar:/apex/com.android.appsearch/javalib/framework-appsearch.jar:/apex/com.android.btservices/javalib/framework-bluetooth.jar:/apex/com.android.configinfrastructure/javalib/framework-configinfrastructure.jar:/apex/com.android.conscrypt/javalib/conscrypt.jar:/apex/com.android.devicelock/javalib/framework-devicelock.jar:/apex/com.android.healthfitness/javalib/framework-healthfitness.jar:/apex/com.android.ipsec/javalib/android.net.ipsec.ike.jar:/apex/com.android.media/javalib/updatable-media.jar:/apex/com.android.mediaprovider/javalib/framework-mediaprovider.jar:/apex/com.android.ondevicepersonalization/javalib/framework-ondevicepersonalization.jar:/apex/com.android.os.statsd/javalib/framework-statsd.jar:/apex/com.android.permission/javalib/framework-permission.jar:/a
pex/com.android.permission/javalib/framework-permission-s.jar:/apex/com.android.scheduling/javalib/framework-scheduling.jar:/apex/com.android.sdkext/javalib/framework-sdkextensions.jar:/apex/com.android.tethering/javalib/framework-connectivity.jar:/apex/com.android.tethering/javalib/framework-connectivity-t.jar:/apex/com.android.tethering/javalib/framework-tethering.jar:/apex/com.android.uwb/javalib/framework-uwb.jar:/apex/com.android.virt/javalib/framework-virtualization.jar:/apex/com.android.wifi/javalib/framework-wifi.jar', 'DEX2OATBOOTCLASSPATH': '/apex/com.android.art/javalib/core-oj.jar:/apex/com.android.art/javalib/core-libart.jar:/apex/com.android.art/javalib/okhttp.jar:/apex/com.android.art/javalib/bouncycastle.jar:/apex/com.android.art/javalib/apache-xml.jar:/system/framework/framework.jar:/system/framework/framework-graphics.jar:/system/framework/ext.jar:/system/framework/telephony-common.jar:/system/framework/voip-common.jar:/system/framework/ims-common.jar:/apex/com.android.i18n/javalib/core-icu4j.jar', 'SYSTEMSERVERCLASSPATH': '/system/framework/com.android.location.provider.jar:/system/framework/services.jar:/apex/com.android.adservices/javalib/service-adservices.jar:/apex/com.android.adservices/javalib/service-sdksandbox.jar:/apex/com.android.appsearch/javalib/service-appsearch.jar:/apex/com.android.art/javalib/service-art.jar:/apex/com.android.configinfrastructure/javalib/service-configinfrastructure.jar:/apex/com.android.healthfitness/javalib/service-healthfitness.jar:/apex/com.android.media/javalib/service-media-s.jar:/apex/com.android.ondevicepersonalization/javalib/service-ondevicepersonalization.jar:/apex/com.android.permission/javalib/service-permission.jar:/apex/com.android.rkpd/javalib/service-rkp.jar', 'STANDALONE_SYSTEMSERVER_JARS': 
'/apex/com.android.btservices/javalib/service-bluetooth.jar:/apex/com.android.devicelock/javalib/service-devicelock.jar:/apex/com.android.os.statsd/javalib/service-statsd.jar:/apex/com.android.scheduling/javalib/service-scheduling.jar:/apex/com.android.tethering/javalib/service-connectivity.jar:/apex/com.android.uwb/javalib/s
2024-03-06 13:51:17.417 5227-5299 python org.test.myapp I Android kivy bootstrap done. __name__ is __main__
2024-03-06 13:51:17.417 5227-5299 python org.test.myapp I AND: Ran string
2024-03-06 13:51:17.417 5227-5299 python org.test.myapp I Run user program, change dir and execute entrypoint
2024-03-06 13:51:17.564 5227-5299 python org.test.myapp I Traceback (most recent call last):
2024-03-06 13:51:17.564 5227-5299 python org.test.myapp I File "main.py", line 5, in <module>
2024-03-06 13:51:17.567 5227-5299 python org.test.myapp I from myapp.gui import gui
2024-03-06 13:51:17.568 5227-5299 python org.test.myapp I ModuleNotFoundError: No module named 'myapp'
</code></pre>
<p>Here's some (not after <code>clean</code>) build output, though.</p>
<pre><code># Check configuration tokens
# Ensure build layout
# Check configuration tokens
# Preparing build
# Check requirements for android
# Search for Git (git)
# -> found at /usr/bin/git
# Search for Cython (cython)
# -> found at /home/bevans/.local/bin/cython
# Search for Java compiler (javac)
# -> found at /usr/lib/jvm/java-21-openjdk-21.0.2.0.13-1.rolling.el8.x86_64/bin/javac
# Search for Java keytool (keytool)
# -> found at /usr/lib/jvm/java-17-openjdk-17.0.10.0.7-2.el8.x86_64/bin/keytool
# Install platform
# Run ['git', 'config', '--get', 'remote.origin.url']
# Cwd /workspace/minexample/.buildozer/android/platform/python-for-android
https://github.com/kivy/python-for-android.git
# Run ['git', 'branch', '-vv']
# Cwd /workspace/minexample/.buildozer/android/platform/python-for-android
* develop b3cc0343 [origin/develop] Merge pull request #2978 from rivian/arch-ref-fix
# Run ['/usr/local/bin/python3.10', '-m', 'pip', 'install', '-q', '--user', 'appdirs', 'colorama>=0.3.3', 'jinja2', 'sh>=1.10, <2.0; sys_platform!="win32"', 'build', 'toml', 'packaging', 'setuptools']
# Cwd None
# Apache ANT found at /home/bevans/.buildozer/android/platform/apache-ant-1.9.4
# Android SDK found at /home/bevans/.buildozer/android/platform/android-sdk
# Recommended android's NDK version by p4a is: 25b
# Android NDK found at /home/bevans/.buildozer/android/platform/android-ndk-r25b
# Run ['/usr/local/bin/python3.10', '-m', 'pythonforandroid.toolchain', 'aab', '-h', '--color=always', '--storage-dir=/workspace/minexample/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a', '--ndk-api=21', '--ignore-setup-py', '--debug']
# Cwd /workspace/minexample/.buildozer/android/platform/python-for-android
usage: toolchain.py aab [-h] [--debug] [--color {always,never,auto}]
[--sdk-dir SDK_DIR] [--ndk-dir NDK_DIR]
[--android-api ANDROID_API]
[--ndk-version NDK_VERSION] [--ndk-api NDK_API]
[--symlink-bootstrap-files]
[--storage-dir STORAGE_DIR] [--arch ARCH]
[--dist-name DIST_NAME] [--requirements REQUIREMENTS]
[--recipe-blacklist RECIPE_BLACKLIST]
[--blacklist-requirements BLACKLIST_REQUIREMENTS]
[--bootstrap BOOTSTRAP] [--hook HOOK] [--force-build]
[--no-force-build] [--require-perfect-match]
[--no-require-perfect-match] [--allow-replace-dist]
[--no-allow-replace-dist]
[--local-recipes LOCAL_RECIPES]
[--activity-class-name ACTIVITY_CLASS_NAME]
[--service-class-name SERVICE_CLASS_NAME]
[--java-build-tool {auto,ant,gradle}] [--copy-libs]
[--no-copy-libs] [--add-asset ASSETS]
[--add-resource RESOURCES] [--private PRIVATE]
[--use-setup-py] [--ignore-setup-py] [--release]
[--with-debug-symbols] [--keystore KEYSTORE]
[--signkey SIGNKEY] [--keystorepw KEYSTOREPW]
[--signkeypw SIGNKEYPW]
options:
-h, --help show this help message and exit
--debug Display debug output and all build info
--color {always,never,auto}
Enable or disable color output (default enabled on
tty)
--sdk-dir SDK_DIR, --sdk_dir SDK_DIR
The filepath where the Android SDK is installed
--ndk-dir NDK_DIR, --ndk_dir NDK_DIR
The filepath where the Android NDK is installed
--android-api ANDROID_API, --android_api ANDROID_API
The Android API level to build against defaults to 33
if not specified.
--ndk-version NDK_VERSION, --ndk_version NDK_VERSION
DEPRECATED: the NDK version is now found automatically
or not at all.
--ndk-api NDK_API The Android API level to compile against. This should
be your *minimal supported* API, not normally the same
as your --android-api. Defaults to min(ANDROID_API,
21) if not specified.
--symlink-bootstrap-files, --ssymlink_bootstrap_files
If True, symlinks the bootstrap files creation. This
is useful for development only, it could also cause
weird problems.
--storage-dir STORAGE_DIR
Primary storage directory for downloads and builds
(default: /home/bevans/.local/share/python-for-
android)
--arch ARCH The archs to build for.
--dist-name DIST_NAME, --dist_name DIST_NAME
The name of the distribution to use or create
--requirements REQUIREMENTS
Dependencies of your app, should be recipe names or
Python modules. NOT NECESSARY if you are using Python
3 with --use-setup-py
--recipe-blacklist RECIPE_BLACKLIST
Blacklist an internal recipe from use. Allows
disabling Python 3 core modules to save size
--blacklist-requirements BLACKLIST_REQUIREMENTS
Blacklist an internal recipe from use. Allows
disabling Python 3 core modules to save size
--bootstrap BOOTSTRAP
The bootstrap to build with. Leave unset to choose
automatically.
--hook HOOK Filename to a module that contains python-for-android
hooks
--local-recipes LOCAL_RECIPES, --local_recipes LOCAL_RECIPES
Directory to look for local recipes
--activity-class-name ACTIVITY_CLASS_NAME
The full java class name of the main activity
--service-class-name SERVICE_CLASS_NAME
Full java package name of the PythonService class
--java-build-tool {auto,ant,gradle}
The java build tool to use when packaging the APK,
defaults to automatically selecting an appropriate
tool.
--add-asset ASSETS Put this in the assets folder in the apk.
--add-resource RESOURCES
Put this in the res folder in the apk.
--private PRIVATE the directory with the app source code files
(containing your main.py entrypoint)
--use-setup-py Process the setup.py of a project if present.
(Experimental!
--ignore-setup-py Don't run the setup.py of a project if present. This
may be required if the setup.py is not designed to
work inside p4a (e.g. by installing dependencies that
won't work or aren't desired on Android
--release Build your app as a non-debug release build. (Disables
gdb debugging among other things)
--with-debug-symbols Will keep debug symbols from `.so` files.
--keystore KEYSTORE Keystore for JAR signing key, will use jarsigner
default if not specified (release build only)
--signkey SIGNKEY Key alias to sign PARSER_APK. with (release build
only)
--keystorepw KEYSTOREPW
Password for keystore
--signkeypw SIGNKEYPW
Password for key alias
Whether to force compilation of a new distribution
--force-build
--no-force-build (this is the default)
--require-perfect-match
--no-require-perfect-match
(this is the default)
--allow-replace-dist (this is the default)
--no-allow-replace-dist
--copy-libs
--no-copy-libs (this is the default)
# Check application requirements
# Compile platform
# Run ['/usr/local/bin/python3.10', '-m', 'pythonforandroid.toolchain', 'create', '--dist_name=myapp', '--bootstrap=sdl2', '--requirements=python3,kivy', '--arch=arm64-v8a', '--arch=armeabi-v7a', '--copy-libs', '--color=always', '--storage-dir=/workspace/minexample/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a', '--ndk-api=21', '--ignore-setup-py', '--debug']
# Cwd /workspace/minexample/.buildozer/android/platform/python-for-android
# Build the application #52
# Copy application source from /workspace/minexample
# Create directory /workspace/minexample/.buildozer/android/app
# Copy /workspace/minexample/setup.py
# Copy /workspace/minexample/main.py
# Create directory /workspace/minexample/.buildozer/android/app/myapp
# Copy /workspace/minexample/myapp/__init__.py
# Create directory /workspace/minexample/.buildozer/android/app/myapp/utils
# Copy /workspace/minexample/myapp/utils/__init__.py
# Copy /workspace/minexample/myapp/utils/logging.py
# Create directory /workspace/minexample/.buildozer/android/app/myapp/cli
# Copy /workspace/minexample/myapp/cli/__init__.py
# Copy /workspace/minexample/myapp/cli/cli.py
# Create directory /workspace/minexample/.buildozer/android/app/myapp/gui
# Copy /workspace/minexample/myapp/gui/gui.py
# Copy /workspace/minexample/myapp/gui/__init__.py
# Create directory /workspace/minexample/.buildozer/android/app/myapp/myapp.egg-info
# Copy /workspace/minexample/myapp/myapp.egg-info/PKG-INFO
# Create directory /workspace/minexample/.buildozer/android/app/bin/inspect/assets
# Copy /workspace/minexample/bin/inspect/assets/main.py
# Create directory /workspace/minexample/.buildozer/android/app/bin/inspect/res/drawable-hdpi-v4
# Copy /workspace/minexample/bin/inspect/res/drawable-hdpi-v4/ic_launcher.png
# Create directory /workspace/minexample/.buildozer/android/app/bin/inspect/res/drawable-mdpi-v4
# Copy /workspace/minexample/bin/inspect/res/drawable-mdpi-v4/ic_launcher.png
# Create directory /workspace/minexample/.buildozer/android/app/bin/inspect/res/drawable-xhdpi-v4
# Copy /workspace/minexample/bin/inspect/res/drawable-xhdpi-v4/ic_launcher.png
# Create directory /workspace/minexample/.buildozer/android/app/bin/inspect/res/drawable-xxhdpi-v4
# Copy /workspace/minexample/bin/inspect/res/drawable-xxhdpi-v4/ic_launcher.png
# Create directory /workspace/minexample/.buildozer/android/app/bin/inspect/res/drawable
# Copy /workspace/minexample/bin/inspect/res/drawable/presplash.jpg
# Create directory /workspace/minexample/.buildozer/android/app/bin/inspect/res/mipmap
# Copy /workspace/minexample/bin/inspect/res/mipmap/icon.png
# Create directory /workspace/minexample/.buildozer/android/app/build/lib/utils
# Copy /workspace/minexample/build/lib/utils/__init__.py
# Copy /workspace/minexample/build/lib/utils/logging.py
# Create directory /workspace/minexample/.buildozer/android/app/build/lib/cli
# Copy /workspace/minexample/build/lib/cli/__init__.py
# Copy /workspace/minexample/build/lib/cli/cli.py
# Create directory /workspace/minexample/.buildozer/android/app/build/lib/gui
# Copy /workspace/minexample/build/lib/gui/gui.py
# Copy /workspace/minexample/build/lib/gui/__init__.py
# Package the application
# project.properties updated
# Run ['/usr/local/bin/python3.10', '-m', 'pythonforandroid.toolchain', 'apk', '--bootstrap', 'sdl2', '--dist_name', 'myapp', '--name', 'My Application', '--version', '0.1', '--package', 'org.test.myapp', '--minsdk', '21', '--ndk-api', '21', '--private', '/workspace/minexample/.buildozer/android/app', '--permission', 'android.permission.INTERNET', '--permission', '(name=android.permission.WRITE_EXTERNAL_STORAGE;maxSdkVersion=18)', '--permission', 'android.permission.READ_EXTERNAL_STORAGE', '--android-entrypoint', 'org.kivy.android.PythonActivity', '--android-apptheme', '@android:style/Theme.NoTitleBar', '--orientation', 'portrait', '--window', '--enable-androidx', '--copy-libs', '--no-byte-compile-python', '--arch', 'arm64-v8a', '--arch', 'armeabi-v7a', '--color=always', '--storage-dir=/workspace/minexample/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a', '--ndk-api=21', '--ignore-setup-py', '--debug']
# Cwd /workspace/minexample/.buildozer/android/platform/python-for-android
Copying main.py's ONLY, since other app data is expected in site-packages.
Applying Java source code patches...
Applying patch: src/patches/SDLActivity.java.patch
Warning: failed to apply patch (exit code 1), assuming it is already applied: src/patches/SDLActivity.java.patch
# Android packaging done!
# APK myapp-0.1-arm64-v8a_armeabi-v7a-debug.apk available in the bin directory
# Run ['/home/bevans/.buildozer/android/platform/android-sdk/platform-tools/adb', 'devices']
# Cwd None
List of devices attached
emulator-5554 device
# Deploy on emulator-5554
# Run ['/home/bevans/.buildozer/android/platform/android-sdk/platform-tools/adb', 'install', '-r', '/workspace/minexample/bin/myapp-0.1-arm64-v8a_armeabi-v7a-debug.apk']
# Cwd /home/bevans/.buildozer/android/platform
Performing Streamed Install
Success
# Application pushed.
# Run on emulator-5554
# Run ['/home/bevans/.buildozer/android/platform/android-sdk/platform-tools/adb', 'shell', 'am', 'start', '-n', 'org.test.myapp/org.kivy.android.PythonActivity', '-a', 'org.kivy.android.PythonActivity']
# Cwd /home/bevans/.buildozer/android/platform
Starting: Intent { act=org.kivy.android.PythonActivity cmp=org.test.myapp/org.kivy.android.PythonActivity }
# Waiting for application to start.
# Waiting for application to start.
# Application started.
</code></pre>
<p>I'm particularly drawn to the line</p>
<pre><code>Copying main.py's ONLY, since other app data is expected in site-packages.
</code></pre>
<p>I'm not sure that my package is actually making it into <code>site-packages</code>.</p>
<p>When I check <code>myapp</code>'s <code>site-packages</code> on the phone/emulator (the issue reproduces on both), I find that <code>myapp</code> is not listed.</p>
<p>Setting <code>p4a.setup_py = True</code> also does not solve the issue.</p>
|
<python><android><kivy><buildozer>
|
2024-03-06 23:27:57
| 1
| 419
|
Bradley Evans
|
78,117,884
| 1,689,987
|
dtreeviz python package is showing a split as being at 0 instead of the actual number
|
<p>The dtreeviz Python package is showing the split at 0.216 as "0.000", as seen in this screenshot:
<a href="https://i.sstatic.net/qN2xc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qN2xc.png" alt="enter image description here" /></a></p>
<p>The code is:</p>
<pre><code>self.clf = DecisionTreeClassifier(random_state=1234, max_depth=self.treeDepth)
viz_model = dtreeviz.model(self.clf,
X_train=self.features, y_train=self.target,
feature_names=self.featureColumns,
target_name=self.targetColumn, class_names=["short", "long"])
v = viz_model.view(scale=2, precision=4)
v.save(path)
</code></pre>
<p>The textual representation shows the splits correctly:</p>
<p>Code:</p>
<pre><code> text_representation = tree.export_text(self.clf, decimals=3)
</code></pre>
<p>Output:</p>
<pre><code>|--- feature_1 <= 0.216
| |--- feature_1 <= 0.212
| | |--- feature_1 <= 0.208
| | | |--- class: 0
| | |--- feature_1 > 0.208
| | | |--- class: 1
| |--- feature_1 > 0.212
| | |--- class: 0
|--- feature_1 > 0.216
| |--- feature_1 <= 0.401
| | |--- feature_1 <= 0.379
| | | |--- class: 1
| | |--- feature_1 > 0.379
| | | |--- class: 1
| |--- feature_1 > 0.401
| | |--- feature_1 <= 0.430
| | | |--- class: 0
| | |--- feature_1 > 0.430
| | | |--- class: 1
</code></pre>
|
<python><dtreeviz>
|
2024-03-06 23:00:03
| 1
| 1,666
|
user1689987
|
78,117,535
| 1,082,438
|
Attaching timeseries to a dataframe
|
<p>I have a dataframe which looks like this:</p>
<pre><code>"2023-09-07 13:22" type1 12.7
"2023-09-07 14:07" type2 101.1
</code></pre>
<p>And, separately, a dataframe with a regularly spaced timeseries for each type:</p>
<pre><code> type1 type2
2023-09-07 08:00 1 2
2023-09-07 08:15 3 4
2023-09-07 08:30 5 6
...
2023-09-07 13:15 7 8
2023-09-07 13:30 9 10
2023-09-07 13:45 11 12
2023-09-07 14:00 13 14
2023-09-07 14:15 15 16
2023-09-07 14:30 17 18
...
</code></pre>
<p>I'd like to attach, to each row in the first dataframe, 2 (or N) values from the second dataframe, starting from the first timestamp after the given one.</p>
<p>So in this case, the answer would be</p>
<pre><code>"2023-09-07 13:22" type1 12.7 9 11
"2023-09-07 14:07" type2 101.1 16 18
</code></pre>
<p>I could loop over the rows in the first dataframe and each time find a slice in the second dataframe, but that's pretty slow. I was wondering if there's a better solution; it seems like a pretty common task.</p>
<p>Thank you.</p>
<p>Code to generate input dataframes:</p>
<pre><code>df1 = pd.DataFrame(columns = ["date", "type", "val"])
df1.loc[0] = [pd.to_datetime("2023-09-07 13:22:00"), "type1", 12.1]
df1.loc[1] = [pd.to_datetime("2023-09-07 14:07:00"), "type2", 101.1]
df1 = df1.set_index("date")
</code></pre>
<pre><code>df2 = pd.DataFrame()
df2["date"] = pd.to_datetime(["2023-09-07 08:00", "2023-09-07 08:15","2023-09-07 08:30", "2023-09-07 13:15","2023-09-07 13:30", "2023-09-07 13:45","2023-09-07 14:00", "2023-09-07 14:15","2023-09-07 14:30"])
df2["type1"] = [1,3,5,7,9,11,13,15,17]
df2["type2"] = [2,4,6,8,10,12,14,16,18]
</code></pre>
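<p>For reference, the slow loop-based version looks roughly like this (frames as generated above; <code>N</code> is the number of values to attach):</p>

```python
import pandas as pd

# build the sample frames from the question
df1 = pd.DataFrame(columns=["date", "type", "val"])
df1.loc[0] = [pd.to_datetime("2023-09-07 13:22:00"), "type1", 12.1]
df1.loc[1] = [pd.to_datetime("2023-09-07 14:07:00"), "type2", 101.1]
df1 = df1.set_index("date")

df2 = pd.DataFrame()
df2["date"] = pd.to_datetime(["2023-09-07 08:00", "2023-09-07 08:15", "2023-09-07 08:30",
                              "2023-09-07 13:15", "2023-09-07 13:30", "2023-09-07 13:45",
                              "2023-09-07 14:00", "2023-09-07 14:15", "2023-09-07 14:30"])
df2["type1"] = [1, 3, 5, 7, 9, 11, 13, 15, 17]
df2["type2"] = [2, 4, 6, 8, 10, 12, 14, 16, 18]

N = 2
attached = []
for ts, row in df1.iterrows():
    # slice df2 to timestamps strictly after ts, take the first N values
    after = df2.loc[df2["date"] > ts, row["type"]]
    attached.append(list(row) + after.head(N).tolist())
# attached == [['type1', 12.1, 9, 11], ['type2', 101.1, 16, 18]]
```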
|
<python><pandas><merge>
|
2024-03-06 21:22:38
| 2
| 506
|
LazyCat
|
78,117,522
| 7,480,820
|
How do you hide the terminals that spawn from subprocess.run?
|
<p>I have packaged my Python application into a standalone executable with pyinstaller, and every time <code>subprocess.run</code> is called a terminal pops up, then disappears when the command finishes executing. This can be quite distracting when dozens of terminals spawn in quick succession. How do I keep these terminals from showing?</p>
<h2>Example</h2>
<p><em>script.py</em></p>
<pre class="lang-py prettyprint-override"><code>import subprocess
command = ["powershell", "-Command", "Get-ChildItem"]
subprocess.run(command)
</code></pre>
<pre><code>PS> pyinstaller script.py
</code></pre>
<p>double click on <em>dist\script\script.exe</em></p>
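<p>For what it's worth, the flag I've seen suggested for this is <code>CREATE_NO_WINDOW</code> (Windows-only, per the <code>subprocess</code> docs), though I haven't confirmed whether it is enough inside the pyinstaller build:</p>

```python
import subprocess
import sys

# CREATE_NO_WINDOW only exists on Windows builds of Python; fall back to 0 elsewhere
flags = getattr(subprocess, "CREATE_NO_WINDOW", 0)

command = ["powershell", "-Command", "Get-ChildItem"]
# guarded so the sketch also runs on non-Windows machines
if sys.platform == "win32":
    subprocess.run(command, creationflags=flags)
```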
|
<python><subprocess>
|
2024-03-06 21:20:32
| 1
| 1,282
|
Philip Nelson
|
78,117,414
| 2,593,383
|
precedence of python exception raised from within finally block
|
<p>I have some (Python 3.12) code which really belongs in a finally block, but unfortunately that code can (in rare cases) raise an exception. Are exceptions raised in finally blocks guaranteed to take precedence over exceptions explicitly re-raised from except blocks:</p>
<pre><code>try:
raise Exception('1')
except Exception as e:
raise
finally:
raise Exception('2')
</code></pre>
<p>This works as expected (this code effectively raises exception 2). It'd be nice to know if it's guaranteed to work this way, since I didn't see this case listed in the "complex cases" described in the <a href="https://docs.python.org/3/tutorial/errors.html" rel="nofollow noreferrer">tutorial</a>.</p>
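<p>For concreteness, here is how I checked what actually propagates (the original exception is still reachable via <code>__context__</code>):</p>

```python
caught = None
try:
    try:
        raise Exception('1')
    except Exception:
        raise
    finally:
        raise Exception('2')
except Exception as e:
    caught = e

print(caught)              # 2 -- the exception from the finally block wins
print(caught.__context__)  # 1 -- the original exception is chained, not lost
```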
|
<python><exception>
|
2024-03-06 20:58:06
| 1
| 3,593
|
nonagon
|
78,117,347
| 1,188,878
|
Improve performance on Networkx graphviz_layout for large volume of nodes and edges
|
<p>I have a network graph dataset with around 12.5k root nodes and 70k edges, which obviously ends up creating a huge graph. However, the end user would not be consuming the graph in its entirety but would be filtering on certain root nodes to see the network chart accordingly. The network is basically a lineage for objects, hence the different levels and the preference for the "dot" layout, since it gives a top-to-bottom hierarchical representation in an org-chart format.</p>
<p>I am using the code below to create the position mapping using <strong>networkx</strong>, <strong>graphviz_layout</strong>, and the <strong>dot</strong> program. However, the program crashes the Python kernel due to memory issues.</p>
<pre><code>from networkx.drawing.nx_agraph import graphviz_layout
# Visualize the subgraph
pos = graphviz_layout(subgraph, prog='dot') # You can use different layout algorithms
</code></pre>
<p>I also tried processing each node in a loop with joblib's <code>Parallel</code> for faster processing, using 3 cores (I have 4 CPU cores and 16 GB RAM on Windows)</p>
<pre><code>from joblib import Parallel, delayed
# Define a function to calculate layout for a single node
def calculate_layout(node, subgraph):
return node, graphviz_layout(subgraph.subgraph([node]), prog='dot')
max_workers = 3
results = Parallel(n_jobs=max_workers, verbose=3)(delayed(calculate_layout)(node, subgraph) for node in tqdm(subgraph.nodes(), total=len(subgraph.nodes())))
pos = dict(results)
</code></pre>
<p>Here too the process terminates due to memory usage.</p>
<pre><code>[Parallel(n_jobs=3)]: Using backend LokyBackend with 3 concurrent workers.
[Parallel(n_jobs=3)]: Done 26 tasks | elapsed: 14.2s
[Parallel(n_jobs=3)]: Done 122 tasks | elapsed: 1.1min
[Parallel(n_jobs=3)]: Done 282 tasks | elapsed: 2.6min
[Parallel(n_jobs=3)]: Done 506 tasks | elapsed: 4.7min
[Parallel(n_jobs=3)]: Done 794 tasks | elapsed: 7.8min
[Parallel(n_jobs=3)]: Done 1146 tasks | elapsed: 12.4min
TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker.
</code></pre>
<p>Is there any better or more efficient way to accomplish this?</p>
|
<python><networkx><graphviz><pygraphviz>
|
2024-03-06 20:47:40
| 0
| 859
|
Kausty
|
78,117,318
| 7,233,155
|
Do wheels for a target have to be constructed on a machine with that architecture?
|
<p>I have used <code>maturin build</code> to build a distribution for a Python package on PyPI with a Rust extension using pyo3 bindings.</p>
<p>I am trying to build for the following common architectures:</p>
<pre><code>win_amd64 : aarch64-pc-windows-msvc
win_x86_64 : x86_64-pc-windows-gnu / msvc
macosx_11_0_arm64 : aarch64-apple-darwin
macosx_10_12_x86_64 : x86_64-apple-darwin
manylinux_2_24_aarch64 : aarch64-unknown-linux-gnu ?
manylinux_2_17_x86_64.manylinux2014_x86_64 : x86_64-unknown-linux-gnu
</code></pre>
<p>On a Windows machine I can produce the <code>win_amd64</code> version.
On a Mac I can produce the two macOS versions.</p>
<p>On each machine I have installed the alternative <code>rust</code> components such as:</p>
<pre><code>rustup target add aarch64-pc-windows-msvc
</code></pre>
<p>However, whenever I try to build a distribution for one of these other <code>targets</code>, I get an error:</p>
<p>E.g. When running on Mac:</p>
<pre><code>maturin build --release --target x86_64-pc-windows-gnu
Found pyo3 bindings with abi3 support for Python ≥ 3.9
💥 maturin failed
Caused by: Failed to find a python interpreter
</code></pre>
<p>Or, similarly, when trying:</p>
<pre><code>maturin build --release --target aarch64-unknown-linux-gnu
error: linking with `cc` failed: exit status: 1
error: could not compile `rateslibrs` (lib) due to 1 previous error; 52 warnings emitted
💥 maturin failed
Caused by: Failed to build a native library through cargo
</code></pre>
<p>On Windows I get similar errors when trying to package for Mac.</p>
<p><strong>Is this something I just can't fundamentally achieve, or are my settings and configurations not correct?</strong></p>
|
<python><rust><build><packaging><maturin>
|
2024-03-06 20:41:01
| 0
| 4,801
|
Attack68
|
78,117,197
| 1,543,042
|
Azure DevOps Pipeline - intermittently unable to load python
|
<p>I created an ADO pipeline to deploy some code to a PyPI repo; however, the pipeline intermittently fails with the error</p>
<pre><code>python3: error while loading shared libraries: libpython3.9.so.1.0: cannot open shared object file: No such file or directory
</code></pre>
<p>The structure of the pipeline is a <code>Use Python Version</code> block followed by a <code>Command Line Script</code> block; the error is thrown as soon as <code>python</code> is called in the <code>Command Line Script</code> block.</p>
|
<python><azure-devops>
|
2024-03-06 20:19:08
| 1
| 3,432
|
user1543042
|
78,117,090
| 12,390,973
|
how to model if else condition in the objective function in PYOMO?
|
<p>I am trying to understand how I can model if-else conditions in the objective function. I know how to do it in a constraint using a binary variable, but I am not sure how to do that in the objective function itself. For example, I have created a very simple energy supply model; here is its configuration:</p>
<ol>
<li>Two generators <strong>gen1 and gen2</strong> which will get revenue if they supply continuous energy according to <strong>PPA</strong>.</li>
<li>If there is some <strong>shortfall</strong> then there is some penalty associated with it.</li>
<li>If there is excess supply, then there is revenue, but at only <strong>50% of the PPA</strong> rate.</li>
<li>All these calculations happen on a monthly basis for every hour (for example, if the model runs for 2 months, there will be a total of <strong>60 Days * 24 Hours = 1440 Hours</strong>)</li>
</ol>
<p>Here is my approach:</p>
<pre><code>import numpy as np
from pyomo.environ import *
no_of_days = 60
hours = 24
total_instance = no_of_days * hours
np.random.seed(total_instance)
load_profile = np.array([100] * total_instance)
DFR = 0.9
interval_freq = 60
gen1_capacity = 200
gen2_capacity = 310
PPA = 10
shortfall_penalty_rate = 15
excess_revenue_rate = 0.5 * PPA
model = ConcreteModel()
model.month_index = Set(initialize=list(range(1, 3)))
model.m_index = Set(initialize=list(range(len(load_profile))))
# variable
model.grid = Var(model.month_index, model.m_index, domain=NonNegativeReals)
# gen variable
model.gen1_use = Var(model.month_index, model.m_index, domain=NonNegativeReals)
model.gen2_use = Var(model.month_index, model.m_index, domain=NonNegativeReals)
# Load profile
model.load_profile = Param(model.month_index, model.m_index, initialize=lambda model, month, m: load_profile[month] * interval_freq / 60.0)
model.lost_load = Var(model.month_index, model.m_index, domain=NonNegativeReals)
# Objective function
def revenue(model):
monthly_demand = sum(
model.load_profile[month, m] for month in model.month_index for m in model.m_index
)
monthly_energy = sum(
model.gen1_use[month, m] + model.gen2_use[month, m]
for month in model.month_index for m in model.m_index
)
# Revenue Calculations
monthly_revenue = min(monthly_energy, monthly_demand) * PPA
# Excess Revenue Calculations
if monthly_energy > monthly_demand:
excess_revenue = (monthly_energy - monthly_demand) * excess_revenue_rate
else:
excess_revenue = 0
# Shortfall Penalty Calculations
actual_DFR = monthly_energy / monthly_demand
if actual_DFR < DFR:
shortfall_penalty = (DFR * monthly_demand - monthly_energy) * -shortfall_penalty_rate
else:
shortfall_penalty = 0
total_cost = monthly_revenue + excess_revenue + shortfall_penalty
return total_cost
model.obj = Objective(rule=revenue, sense=maximize)
def energy_balance(model, month, m):
return model.grid[month, m] == model.gen1_use[month, m] + model.gen2_use[month, m] + model.lost_load[month, m]
model.energy_balance = Constraint(model.month_index, model.m_index, rule=energy_balance)
def grid_limit(model, month, m):
return model.grid[month, m] >= model.load_profile[m]
model.grid_limit = Constraint(model.month_index, model.m_index, rule=grid_limit)
def max_gen1(model, month, m):
eq = model.gen1_use[month, m] <= gen1_capacity
return eq
model.max_gen1 = Constraint(model.month_index, model.m_index, rule=max_gen1)
def max_gen2(model, month, m):
eq = model.gen2_use[month, m] <= gen2_capacity
return eq
model.max_gen2 = Constraint(model.month_index, model.m_index, rule=max_gen2)
Solver = SolverFactory('gurobi')
Solver.options['LogFile'] = "gurobiLog"
# Solver.options['MIPGap'] = 0.0
print('\nConnecting to Gurobi Server...')
results = Solver.solve(model)
if (results.solver.status == SolverStatus.ok):
if (results.solver.termination_condition == TerminationCondition.optimal):
print("\n\n***Optimal solution found***")
print('obj returned:', round(value(model.obj), 2))
else:
print("\n\n***No optimal solution found***")
if (results.solver.termination_condition == TerminationCondition.infeasible):
print("Infeasible solution")
exit()
else:
print("\n\n***Solver terminated abnormally***")
exit()
grid_use = []
gen1 = []
gen2 = []
lost_load = []
load = []
for month in range(1, 13):
for i in range(len(load_profile)):
grid_use.append(value(model.grid[month, i]) * (60 / interval_freq))
gen1.append(value(model.gen1_use[month, i]) * (60 / interval_freq))
gen2.append(value(model.gen2_use[month, i]) * (60 / interval_freq))
lost_load.append(value(model.lost_load[month, i]) * (60 / interval_freq))
load.append(value(model.load_profile[month, i]) * (60 / interval_freq))
</code></pre>
<p>I am getting an error in the objective function itself:</p>
<pre><code>if monthly_energy > monthly_demand:
File "C:\Users\nvats\AppData\Local\Continuum\anaconda3\envs\microgrid\lib\site-packages\pyomo\core\expr\numvalue.py", line 730, in __gt__
return _generate_relational_expression(_lt, other, self)
File "C:\Users\nvats\AppData\Local\Continuum\anaconda3\envs\microgrid\lib\site-packages\pyomo\core\expr\logical_expr.py", line 380, in _generate_relational_expression
"%s < %s" % ( prevExpr.to_string(), lhs, rhs ))
TypeError: Cannot create a compound inequality with identical upper and lower
bounds using strict inequalities: constraint infeasible:
288000.0 < gen1_use[1,0] + gen2_use[1,0] + gen1_use[1,1] + gen2_use[1,1] + gen1_use[1,2] + gen2_use[1,2] + gen1_use[1,3] + gen2_use[1,3] + gen1_use[1,4] + gen2_use[1,4] + .... so on
</code></pre>
<p>Can someone please help and explain what's wrong with the objective function and how I can correct it?</p>
|
<python><pyomo>
|
2024-03-06 19:54:31
| 0
| 845
|
Vesper
|
78,117,058
| 5,640,161
|
Is there anyway to avoid the lexsort warning without sorting the columns of a MultiIndexed DataFrame?
|
<p>I understand that there is some <a href="https://stackoverflow.com/questions/54307300/what-causes-indexing-past-lexsort-depth-warning-in-pandas">performance rationale</a> for sorting the columns (or indices) in a MultiIndexed pandas DataFrame. However, I have my own "user-interface" reasons for choosing the particular order of the columns. Is there any way to avoid the following warning without having to change the order of the columns?</p>
<pre><code>PerformanceWarning: indexing past lexsort depth may impact performance.
</code></pre>
<p>If it's impossible to do in a clean way, is there at least some way to temporarily sort the columns and then re-arrange their order back to what it was before the operation?</p>
<p>Here is some self-contained code:</p>
<pre><code>import pandas as pd
N = 3
rangeN = list(range(1, N + 1))
index = pd.MultiIndex.from_product(
[rangeN, rangeN], names=["level1", "level2"]
)
columns = [
(
"col_B",
"col_B.1",
),
(
"col_B",
"col_B.2",
),
]
components = range(1, 3)
columns += [("col_A", "col_A.1", f"col_A.1.{c}") for c in components]
columns += [("col_A", "col_A.2", f"col_A.2.{c}") for c in components]
columns = pd.MultiIndex.from_tuples(columns)
df = pd.DataFrame(columns=columns, index=index)
df.loc[:, ("col_B", "col_B.2",)] = 7 # Warning thrown here
print(df)
</code></pre>
<p>which returns</p>
<pre><code> col_B col_A
col_B.1 col_B.2 col_A.1 col_A.2
NaN NaN col_A.1.1 col_A.1.2 col_A.2.1 col_A.2.2
level1 level2
1 1 NaN 7 NaN NaN NaN NaN
2 NaN 7 NaN NaN NaN NaN
3 NaN 7 NaN NaN NaN NaN
2 1 NaN 7 NaN NaN NaN NaN
2 NaN 7 NaN NaN NaN NaN
3 NaN 7 NaN NaN NaN NaN
3 1 NaN 7 NaN NaN NaN NaN
2 NaN 7 NaN NaN NaN NaN
3 NaN 7 NaN NaN NaN NaN
c:\users\tfovid\draft.py:31: PerformanceWarning: indexing past lexsort depth may impact performance.
df.loc[:, ("col_B", "col_B.2",)] = 7
</code></pre>
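<p>The "temporarily sort" idea I have in mind would look something like this reduced sketch (two column levels instead of three; I'm not sure this is the intended approach):</p>

```python
import pandas as pd

# reduced version of the construction above
columns = pd.MultiIndex.from_tuples(
    [("col_B", "col_B.1"), ("col_B", "col_B.2"),
     ("col_A", "col_A.1"), ("col_A", "col_A.2")]
)
df = pd.DataFrame(columns=columns, index=range(3))

orig_cols = df.columns               # remember the user-facing order
df = df.sort_index(axis=1)           # lexsort so indexing is warning-free
df.loc[:, ("col_B", "col_B.2")] = 7  # no PerformanceWarning now
df = df.reindex(columns=orig_cols)   # restore the original column order
```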
|
<python><pandas>
|
2024-03-06 19:47:15
| 1
| 863
|
Tfovid
|
78,117,038
| 22,437,734
|
Matplotlib shrinking Value from thousands to 1's
|
<p>I have created a DataFrame from a list of dictionaries called <code>cars_data</code>:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

class Car:
    def __init__(self, make, year, price, mileage, color, buy_rate):
        self.make = make
        self.year = year
        self.price = price
        self.mileage = mileage
        self.color = color
        self.buy_rate = buy_rate

cars_data = [
    {"make": "Toyota", "year": 2018, "price": 20000, "mileage": 50000, "color": "Blue", "buy_rate": 0.8},
    {"make": "Honda", "year": 2019, "price": 25000, "mileage": 40000, "color": "Red", "buy_rate": 0.7},
    {"make": "Ford", "year": 2020, "price": 28000, "mileage": 30000, "color": "Black", "buy_rate": 0.6},
    {"make": "Chevrolet", "year": 2017, "price": 18000, "mileage": 60000, "color": "White", "buy_rate": 0.75},
    {"make": "Nissan", "year": 2019, "price": 23000, "mileage": 35000, "color": "Silver", "buy_rate": 0.65},
    {"make": "BMW", "year": 2021, "price": 35000, "mileage": 20000, "color": "Gray", "buy_rate": 0.55},
    {"make": "Mercedes", "year": 2018, "price": 30000, "mileage": 45000, "color": "Black", "buy_rate": 0.8},
    {"make": "Audi", "year": 2020, "price": 32000, "mileage": 25000, "color": "White", "buy_rate": 0.7},
    {"make": "Subaru", "year": 2019, "price": 22000, "mileage": 35000, "color": "Blue", "buy_rate": 0.75},
    {"make": "Hyundai", "year": 2020, "price": 26000, "mileage": 30000, "color": "Red", "buy_rate": 0.65},
    {"make": "Kia", "year": 2017, "price": 20000, "mileage": 55000, "color": "Green", "buy_rate": 0.6},
    {"make": "Volkswagen", "year": 2018, "price": 24000, "mileage": 40000, "color": "Black", "buy_rate": 0.8},
    {"make": "Tesla", "year": 2022, "price": 60000, "mileage": 15000, "color": "Blue", "buy_rate": 0.85},
    {"make": "Lexus", "year": 2019, "price": 35000, "mileage": 25000, "color": "Silver", "buy_rate": 0.75},
    {"make": "Mazda", "year": 2018, "price": 21000, "mileage": 45000, "color": "Red", "buy_rate": 0.7},
    {"make": "Jeep", "year": 2020, "price": 29000, "mileage": 20000, "color": "White", "buy_rate": 0.65},
    {"make": "Volvo", "year": 2021, "price": 38000, "mileage": 30000, "color": "Gray", "buy_rate": 0.6},
    {"make": "Chrysler", "year": 2019, "price": 27000, "mileage": 35000, "color": "Black", "buy_rate": 0.8},
    {"make": "Buick", "year": 2017, "price": 22000, "mileage": 40000, "color": "Blue", "buy_rate": 0.7},
    {"make": "Ferrari", "year": 2022, "price": 150000, "mileage": 10000, "color": "Red", "buy_rate": 0.9},
    {"make": "Acura", "year": 2020, "price": 33000, "mileage": 22000, "color": "White", "buy_rate": 0.75},
    {"make": "Porsche", "year": 2021, "price": 45000, "mileage": 18000, "color": "Black", "buy_rate": 0.85},
    {"make": "Infiniti", "year": 2018, "price": 32000, "mileage": 28000, "color": "Gray", "buy_rate": 0.7},
    {"make": "Land Rover", "year": 2019, "price": 55000, "mileage": 25000, "color": "Green", "buy_rate": 0.65},
    {"make": "Jaguar", "year": 2020, "price": 60000, "mileage": 20000, "color": "Blue", "buy_rate": 0.6},
    {"make": "Maserati", "year": 2021, "price": 70000, "mileage": 15000, "color": "Red", "buy_rate": 0.8},
    {"make": "Bentley", "year": 2019, "price": 80000, "mileage": 12000, "color": "White", "buy_rate": 0.75},
    {"make": "Rolls Royce", "year": 2020, "price": 100000, "mileage": 10000, "color": "Silver", "buy_rate": 0.9},
    {"make": "Lincoln", "year": 2018, "price": 45000, "mileage": 20000, "color": "Black", "buy_rate": 0.8},
    {"make": "Cadillac", "year": 2017, "price": 40000, "mileage": 30000, "color": "Blue", "buy_rate": 0.75},
    {"make": "Aston Martin", "year": 2021, "price": 150000, "mileage": 8000, "color": "Red", "buy_rate": 0.85},
    {"make": "Alfa Romeo", "year": 2019, "price": 60000, "mileage": 20000, "color": "White", "buy_rate": 0.7},
    {"make": "Bugatti", "year": 2020, "price": 3000000, "mileage": 500, "color": "Blue", "buy_rate": 0.95},
]

cars = []
for car_data in cars_data:
    car = Car(car_data["make"], car_data["year"], car_data["price"], car_data["mileage"], car_data["color"], car_data["buy_rate"])
    cars.append(car)

car_data_dict = {
    "Make": [car.make for car in cars],
    "Year": [car.year for car in cars],
    "Price": [car.price for car in cars],
    "Mileage": [car.mileage for car in cars],
    "Color": [car.color for car in cars],
    "Buy Rate": [car.buy_rate for car in cars]
}

car_df = pd.DataFrame(car_data_dict)
print(car_df)
</code></pre>
<p>After this I tried to plot it with <code>plt.subplots</code>:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(9,5))
scatter = ax.scatter(x=car_df['Price'],
                     y=car_df['Year'],
                     c=car_df['Year'])
ax.set(title="Car data >=2024 ",
       xlabel='Price',
       ylabel='Year')
</code></pre>
<p>And this is what I got:</p>
<p><a href="https://i.sstatic.net/6NNs1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6NNs1.png" alt="Matplotlib plot" /></a></p>
<p>This is weird behavior, for, as you can see in the <code>cars_data</code> list above, it contains prices in the thousands. The plot shows the prices as if they were decimals!</p>
<p>On the contrary, if I print this:</p>
<pre><code>car_df['Price'][0]
</code></pre>
<p>It returns <code>20000</code> as expected.</p>
<p><em>Note: This didn't happen before I added Mileage.</em></p>
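<p>This is most likely matplotlib's <code>ScalarFormatter</code> switching the x axis to an offset/scientific scale once the 3,000,000 price stretches the range, so the ticks read 0.0 ... 3.0 with a small "1e6" label in the corner. A sketch (hypothetical data, assuming matplotlib) of forcing plain tick labels:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so this sketch runs without a display
import matplotlib.pyplot as plt

# Hypothetical prices with the same large spread as the data above; the
# 3,000,000 value is what makes matplotlib switch to the "1e6" offset scale
prices = [20000, 25000, 28000, 3000000]
years = [2018, 2019, 2020, 2020]

fig, ax = plt.subplots(figsize=(9, 5))
ax.scatter(x=prices, y=years, c=years)

# Force plain tick labels on the x axis (no offset, no scientific notation)
ax.ticklabel_format(style="plain", axis="x", useOffset=False)
```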
|
<python><pandas><dataframe><matplotlib><plot>
|
2024-03-06 19:43:48
| 1
| 473
|
Gleb
|
78,117,019
| 4,706,711
|
Should I expect the http data to be out of order in my http server for connections coming from a single client socket?
|
<p>I am implementing my own http server:</p>
<pre><code>import socket
import threading
import queue
import ssl

from manipulator.parser import LineBuffer, LoggableHttpRequest

class SocketServer:
    """
    Basic Socket Server in python
    """

    def __init__(self, host, port, max_threads, ssl_context: ssl.SSLContext = None):
        print("Create Server For Http")
        self.host = host
        self.port = port
        self.server_socket = self.initSocket()
        self.max_threads = max_threads
        self.request_queue = queue.Queue()

        self.ssl_context = None
        if(ssl_context != None):
            print("Initialise SSL context")
            self.ssl_context = ssl_context

    def initSocket(self):
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def __accept(self):
        self.server_socket.listen(5)
        while True:
            try:
                client_socket, client_address = self.server_socket.accept()
                if self.ssl_context is not None:
                    print(self.ssl_context)
                    client_socket = self.ssl_context.wrap_socket(client_socket, server_side=True)
                self.request_queue.put((client_socket, client_address))
            except:
                print("Error Occured")

    def __handle(self):
        while True:
            client_socket, address = self.request_queue.get()
            print("Address", address)
            try:
                # Read HTTP Request
                # Log Http Request
                # Manipulate Http Request
                # Forward or respond
                buffer = LineBuffer()
                request = HttpRequest(self.db)
                buffer.pushData(client_socket.recv(2048))
                line = buffer.getLine()
                if(line is not None):
                    request.parse(line)

                content = '<html><body>Hello World</body></html>\r\n'.encode()
                headers = f'HTTP/1.1 200 OK\r\nContent-Length: {len(content)}\r\nContent-Type: text/html\r\n\r\n'.encode()
                client_socket.sendall(headers + content)
            finally:
                client_socket.shutdown(socket.SHUT_RDWR)
                client_socket.close()
                self.request_queue.task_done()

    def __initThreads(self):
        for _ in range(self.max_threads):
            threading.Thread(target=self.__handle, daemon=True).start()

    def start(self):
        self.server_socket.bind((self.host, self.port))
        self.__initThreads()
        self.__accept()
</code></pre>
<p>The reason I am doing this is that I want to log and analyze incoming http requests as fast as possible. Also, many 3rd-party libs require C bindings that I want to avoid.</p>
<p>So far, I made a line chunker that splits the request into lines on \r\n:</p>
<pre><code>class LineBuffer:
    def __init__(self):
        self.buffer = b''

    def pushData(self, line):
        # recv() already returns bytes, so append directly (str.encode would fail here)
        self.buffer += line

    def getLine(self):
        if b'\r\n' in self.buffer:
            line, sep, self.buffer = self.buffer.partition(b'\r\n')
            return line + sep
        return None
</code></pre>
<p>And I want to parse each line and serialize it into an object representing an http request, so I can pipe it further in a streaming manner:</p>
<pre><code>class HttpRequest:
    def __init__(self, db):
        self.headers = {}  # Parsed headers
        self.body = ""     # Http body
        self.version = None
        self.method = None
        self.id = None
        self.raw = ""

class HttpParser:
    def __init__(self, db):
        self.db = db
        self.currentRequest = None

    def parse(self, line):
        # do parsing here
        return
</code></pre>
<p>What it worries me most is the scenario that a client will send 2 requests:</p>
<p>Request 1:</p>
<pre><code>GET / HTTP/1.1\r\n
HOST lala1.com \r\n
</code></pre>
<p>Request 2:</p>
<pre><code>POST /file HTTP/1.1\r\n
HOST lala2.com \r\n
\r\n
Qm9QUVM5NDMuLnEvXVN7O2E=
fDMpQjcpOlFodClgOGUzYQ==
NVgvNipmU1d3YFgtLFUhQiM=
MiZwSk0zKno9TkVxNyZFL3s=
NEhGJXZ7OGciOE8mYF5JNA==
dVlJLzpdKlUjXl4tcEpufQ==
XVgiXCdjQyckMjY/Ikt6Rw==
alksJlZ+XHFzQSYqaHlHIztt
YiRnPjdye0gvanV3ZGxaZkI=
MjgwTX0uYHw6M295RS52UDM=
YU0yQ2dQLmJUQVpCNS89PWJB
Ti10MHJBTjAqUFUlIU0sMyRN
</code></pre>
<p>But the sequence in which my server receives it is:</p>
<pre><code>GET / HTTP/1.1\r\n
POST /file HTTP/1.1\r\n
HOST lala1.com \r\n
\r\n\r\nQm9QUVM5ND
HOST lala2.com \r\n
MuLnEvXVN7O2E=
fDMpQjcpOlFodClgOGUzYQ==
NVgvNipmU1d3YFgtLFUhQiM=
MiZwSk0zKno9TkVxNyZFL3s=
NEhGJXZ7OGciOE8mYF5JNA==
dVlJLzpdKlUjXl4tcEpufQ==
XVgiXCdjQyckMjY/Ikt6Rw==
alksJlZ+XHFzQSYqaHlHIztt
YiRnPjdye0gvanV3ZGxaZkI=
MjgwTX0uYHw6M295RS52UDM=
YU0yQ2dQLmJUQVpCNS89PWJB
Ti10MHJBTjAqUFUlIU0sMyRN
\r\n
</code></pre>
<p>Is this a feasible scenario in my case? Or does the TCP socket handle the data order by itself?</p>
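<p>For context (an addition, not part of the original post): within a single TCP connection the byte stream is delivered in order, so interleaving like the one above could only come from two different connections being mixed together in the handler, never from TCP itself. A stdlib-only sketch showing that chunk boundaries are arbitrary but byte order is preserved:</p>

```python
import socket
import threading

received = bytearray()

def serve(listener):
    # Accept one connection and drain it in deliberately tiny chunks
    conn, _ = listener.accept()
    with conn:
        while chunk := conn.recv(16):  # recv() boundaries are arbitrary
            received.extend(chunk)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
t = threading.Thread(target=serve, args=(listener,))
t.start()

payload = b"GET / HTTP/1.1\r\nHOST lala1.com \r\n" * 50
with socket.create_connection(listener.getsockname()) as client:
    client.sendall(payload)
t.join()
listener.close()

# The bytes arrive exactly in the order they were sent
assert bytes(received) == payload
```

<p>Mixed-up lines across two requests would therefore point at shared per-connection state in the server (e.g. one <code>LineBuffer</code> reused across sockets), not at TCP reordering.</p>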
|
<python><http><sockets><server><tcp>
|
2024-03-06 19:40:01
| 1
| 10,444
|
Dimitrios Desyllas
|
78,116,950
| 1,231,450
|
Group reversed pandas dataframe
|
<p>I have the following code</p>
<pre><code>df = pd.read_csv("some_data.csv")

candles = [Candle(candle["close"].iloc[0], candle["close"].iloc[-1],
                  max(candle["close"]), min(candle["close"]))
           for _, candle in df.groupby(df.index // ticks)]
candles.reverse()
</code></pre>
<p>With a dataframe full of tick data. It works but feels a bit clumsy - so my question: isn't it possible to group the reversed dataframe in the first place?</p>
<hr>
<p>This is a snippet of the actual data:</p>
<pre><code>timestamp,close,security_code,volume,bid_volume,ask_volume
2024-02-28 01:00:00.358537+00:00,18002.5,NQ,1,0,1
2024-02-28 01:00:00.890809+00:00,18002.75,NQ,1,1,0
2024-02-28 01:00:00.890809+00:00,18002.75,NQ,1,1,0
2024-02-28 01:00:01.696411+00:00,18002.5,NQ,1,0,1
2024-02-28 01:00:02.268716+00:00,18002.25,NQ,1,0,1
2024-02-28 01:00:02.513397+00:00,18002.5,NQ,1,1,0
2024-02-28 01:00:03.716795+00:00,18002.5,NQ,1,0,1
2024-02-28 01:00:03.892441+00:00,18002.75,NQ,1,1,0
2024-02-28 01:00:03.893664+00:00,18002.25,NQ,1,0,1
2024-02-28 01:00:06.956017+00:00,18002.25,NQ,1,0,1
2024-02-28 01:00:08.144158+00:00,18002.25,NQ,1,1,0
2024-02-28 01:00:08.144158+00:00,18002.25,NQ,1,1,0
2024-02-28 01:00:08.772717+00:00,18002.0,NQ,1,0,1
2024-02-28 01:00:08.772717+00:00,18002.0,NQ,3,0,3
2024-02-28 01:00:09.966515+00:00,18002.25,NQ,1,1,0
2024-02-28 01:00:10.051715+00:00,18002.0,NQ,1,0,1
2024-02-28 01:00:11.053980+00:00,18001.75,NQ,1,0,1
2024-02-28 01:00:11.053980+00:00,18001.75,NQ,1,0,1
2024-02-28 01:00:11.296008+00:00,18002.0,NQ,1,1,0
2024-02-28 01:00:12.050765+00:00,18001.75,NQ,1,0,1
2024-02-28 01:00:12.050765+00:00,18001.5,NQ,1,0,1
2024-02-28 01:00:12.050765+00:00,18001.5,NQ,1,0,1
2024-02-28 01:00:12.050765+00:00,18001.5,NQ,1,0,1
2024-02-28 01:00:12.050765+00:00,18001.5,NQ,1,0,1
2024-02-28 01:00:12.050765+00:00,18001.5,NQ,1,0,1
2024-02-28 01:00:12.050765+00:00,18001.25,NQ,1,0,1
2024-02-28 01:00:12.050765+00:00,18001.25,NQ,1,0,1
2024-02-28 01:00:12.050765+00:00,18001.25,NQ,1,0,1
2024-02-28 01:00:12.050765+00:00,18001.25,NQ,2,0,2
</code></pre>
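<p>One possibility (a sketch; <code>Candle</code> is replaced by a plain tuple because its definition isn't shown, and <code>ticks</code> is assumed to be the chunk size): negate the group key. <code>groupby</code> sorts its keys ascending, so negated chunk numbers come out last-to-first while each chunk keeps its original row order, which makes the trailing <code>candles.reverse()</code> unnecessary:</p>

```python
import pandas as pd

ticks = 3
df = pd.DataFrame({"close": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]})

# Negated chunk numbers sort in reverse chunk order; rows inside each
# chunk stay in their original (forward) order.
candles = [
    (c["close"].iloc[0], c["close"].iloc[-1], max(c["close"]), min(c["close"]))
    for _, c in df.groupby(-(df.index // ticks))
]
print(candles)  # [(7.0, 7.0, 7.0, 7.0), (4.0, 6.0, 6.0, 4.0), (1.0, 3.0, 3.0, 1.0)]
```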
|
<python><pandas>
|
2024-03-06 19:29:43
| 1
| 43,253
|
Jan
|
78,116,908
| 6,769,082
|
pandas slice 3-level multiindex based on a list with 2 levels
|
<p>Here is a minimal example:</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(0)
idx = pd.MultiIndex.from_product([[1,2,3], ['a', 'b', 'c'], [6, 7]])
df = pd.DataFrame(np.random.randn(18), index=idx)
selection = [(1, 'a'), (2, 'b')]
</code></pre>
<p>I would like to select all the rows in <code>df</code> whose index starts with any of the items in <code>selection</code>. So I would like to get the sub-dataframe of <code>df</code> with the indices:</p>
<pre><code>(1, 'a', 6), (1, 'a', 7), (2, 'b', 6), (2, 'b', 7)
</code></pre>
<p>What is the most straightforward/pythonian/pandasian way of doing this? What I found:</p>
<pre><code>sel = [id[:2] in selection for id in df.index]
df.loc[sel]
</code></pre>
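<p>For comparison (an addition, not from the original post), the same selection can be done without a Python-level loop by dropping the last index level and testing tuple membership directly:</p>

```python
import numpy as np
import pandas as pd

np.random.seed(0)
idx = pd.MultiIndex.from_product([[1, 2, 3], ["a", "b", "c"], [6, 7]])
df = pd.DataFrame(np.random.randn(18), index=idx)
selection = [(1, "a"), (2, "b")]

# droplevel(-1) yields a 2-level MultiIndex; .isin accepts a list of tuples,
# so the whole membership test stays vectorised
sub = df[df.index.droplevel(-1).isin(selection)]
print(sub.index.tolist())  # [(1, 'a', 6), (1, 'a', 7), (2, 'b', 6), (2, 'b', 7)]
```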
|
<python><pandas><dataframe><slice><multi-index>
|
2024-03-06 19:20:52
| 2
| 481
|
Chachni
|
78,116,727
| 1,299,669
|
How to overcome a precision error in Python when summing a list of floating point numbers?
|
<p>In Python3, <code>0.35 * 10</code> does not show the same result as summing a list of 10 numbers <code>0.35</code>.</p>
<pre><code>Python 3.8.1 (v3.8.1:1b293b6006, Dec 18 2019, 14:08:53)
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 0.35 * 10
3.5
>>> a = [0.35 for i in range(0, 10)]
>>> sum(a)
3.5000000000000004
</code></pre>
<p>Is it possible to overcome this precision error with Python3 alone? By that I mean without using libraries like numpy.</p>
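<p>For reference (an addition), the standard library's <code>math.fsum</code> addresses exactly this: <code>sum()</code> rounds after every addition so tiny representation errors pile up, while <code>fsum()</code> tracks exact partial sums and rounds only once at the end:</p>

```python
import math

a = [0.35 for _ in range(10)]

# sum() accumulates a rounding error on every addition
print(sum(a))        # 3.5000000000000004

# math.fsum() computes the exact sum and rounds once
print(math.fsum(a))  # 3.5
```

<p><code>decimal.Decimal('0.35')</code> and <code>fractions.Fraction</code> are the exact-arithmetic alternatives, also in the standard library.</p>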
|
<python><math><precision>
|
2024-03-06 18:44:40
| 1
| 1,687
|
Raiyan
|
78,116,518
| 19,198,552
|
How can I switch the title of a tkinter window when it gets minimized?
|
<p>I have a tkinter application which shows in the window title the full path-name of the file loaded into the application. As the path-name is usually long, it can only be displayed when the window is not minimized. When it is minimized (to an icon), only the start of the path-name is visible. So I want to switch the title to just the file-name at the moment the window gets minimized. I found a solution for this <a href="https://stackoverflow.com/questions/3836489/running-a-command-on-window-minimization-in-tkinter">here</a> and adapted it to my problem:</p>
<pre><code>import tkinter as tk

title_saved = ""

def __made_to_window(event):
    print("__made_to_window")
    global title_saved
    if title_saved != "":
        root.title(title_saved)
        title_saved = ""

def __made_to_icon(event):
    print("__made_to_icon")
    global title_saved
    if title_saved == "":
        title_saved = root.title()
        root.title("filename.py")

root = tk.Tk()
root.title("C/folder1/folder2/folder3/folder4/folder5/filename.py")
canvas = tk.Canvas(root, height=100, width=400)
canvas.grid()

root.bind("<Unmap>", __made_to_icon)
root.bind("<Map>", __made_to_window)

root.mainloop()
</code></pre>
<p>As you can see from my example code, the solution works. But I don't like it, as the binding is activated not just once but twice when the window is minimized (in my big application it is activated 10 times). Because of this, the variable <code>title_saved</code> must be checked to see whether it has already been set.</p>
<p>So I am looking for a more elegant solution, especially because I believe changing the title at minimizing must be a common problem.</p>
<p>Do you have any ideas?</p>
|
<python><tkinter>
|
2024-03-06 18:03:45
| 1
| 729
|
Matthias Schweikart
|
78,116,450
| 8,615,884
|
Rasa Install ERROR: Could not install packages due to an OSError:
|
<p>So I am trying to install rasa with</p>
<pre><code>pip install rasa
</code></pre>
<p>I have python version 3.9.0</p>
<p>I get this error:</p>
<pre><code>ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: <directory>
</code></pre>
<p>Please help me, I have no idea what's going on!</p>
|
<python><rasa>
|
2024-03-06 17:49:41
| 0
| 1,665
|
randomUser786
|
78,116,432
| 3,446,051
|
Use Python.exe located in a shared folder
|
<p>Sounds like a simple question, but I was not able to find an answer about that.<br />
I have my python and the environment installed on a different machine but in a shared folder which is accessible via UNC path.<br />
I wanted to use this python to run a python script from my machine (which is a different machine compared to the machine where the python.exe is installed).<br />
So I tried the following in the command window:<br />
<code>> \\servername\shared\environment\path\python.exe \\servername\path_to_script\myscript.py</code></p>
<p>But it does not work and I get the following error message:</p>
<pre><code>No Python at 'C:\ProgramData\conda\python.exe'
</code></pre>
<p>Seems to me that it tries to find the <code>python.exe</code> on my machine instead of using the <code>python.exe</code> in the share folder.<br />
Is it not possible to run the <code>python.exe</code> which is located in a shared folder from a different machine?</p>
|
<python><windows>
|
2024-03-06 17:46:20
| 1
| 5,459
|
Code Pope
|
78,116,222
| 4,427,777
|
Mouseover annotation/highlight of seaborn `pairplot`
|
<p>For the sake of mcve, I build the following <code>pairplot</code>:</p>
<pre><code>from sklearn.datasets import make_blobs
import pandas as pd
from sklearn.cluster import HDBSCAN
import seaborn as sns
import numpy as np ; np.random.seed(0)

centers = 4
data, c = make_blobs(n_samples = 20,
                     centers = centers,
                     n_features = 3,
                     cluster_std = np.random.rand(centers) * 2.5,
                     random_state = 0)

df = pd.DataFrame(data)

alg = HDBSCAN()
alg.fit(df)
df['Label'] = alg.labels_.astype(str)

g = sns.pairplot(df, hue = 'Label')
</code></pre>
<p>Simple <code>pairplot</code>, shows a few outliers, has an underlying <code>DataFrame</code> <code>df</code>.</p>
<p>What I want is for the functionality to show an annotation of <code>df.index</code> for a point on hovering over it, and to somehow highlight that point in all of the other plots.</p>
<p>I have found the hover-over annotation methodology <a href="https://stackoverflow.com/questions/7908636/how-to-add-hovering-annotations-to-a-plot">in this question</a> for the underlying <code>matplotlib.pyplot</code> objects, but the code there doesn't seem very extensible to a multi-<code>ax</code> <code>figure</code> like the <code>pairplot</code> above.</p>
<p>I have done this with <code>mplcursors</code> which gives me the labels (but only by including an additional package)</p>
<pre><code>def show_hover_panel(get_text_func=None):
    cursor = mplcursors.cursor(hover=2)

    if get_text_func:
        cursor.connect(
            event = "add",
            func = lambda sel: sel.annotation.set_text(get_text_func(sel.index)),
        )

    return cursor

def on_add(index):
    print(index)
    ix = df.index[index]
    #size = np.zeros(df.shape[0])
    #size[index] = 1
    #g.map_upper(sns.scatterplot, size = size)
    #g.map_lower(sns.scatterplot, size = size)
    return "{}".format(ix)

show_hover_panel(on_add)
</code></pre>
<p>The commented out part of the code is my (very) unsuccessful attempt to make it highlight all the related points. I leave the fairly comical output as an exercise to the reader.</p>
<p><a href="https://mplcursors.readthedocs.io/en/stable/examples/paired_highlight.html" rel="nofollow noreferrer">This example</a> shows how to link highlights via <code>mplcursors</code>, but requires every point be its own artist, which is incompatible with <code>seaborn</code>.</p>
<p>Is there any smarter way to do a multi-axis highlight, preferably doing it and the multi-axis annotation natively in <code>matplotlib</code> and <code>seaborn</code>?</p>
|
<python><pandas><matplotlib><seaborn><mplcursors>
|
2024-03-06 17:10:17
| 1
| 14,469
|
Daniel F
|
78,116,178
| 6,694,814
|
Python folium - only last record is shown on the map
|
<p>I am fetching data from the .csv file, but I don't know why only the last record is visible on my map.</p>
<p>The code is like here:</p>
<pre><code>df = pd.read_csv("sur_geo.csv")

su = MarkerCluster(name="Surveyors").add_to(m)

su1 = plugins.FeatureGroupSubGroup(su, "Build Manager PAYE")
m.add_child(su1)

su2 = plugins.FeatureGroupSubGroup(su, "Supervisor PAYE")
m.add_child(su2)

su3 = plugins.FeatureGroupSubGroup(su, "Build Manager - Contractor")
m.add_child(su3)

su4 = plugins.FeatureGroupSubGroup(su, "Supervisor - Contractor")
m.add_child(su4)

su5 = plugins.FeatureGroupSubGroup(su, "Engineer")
m.add_child(su5)

for i, row in df.iterrows():
    lat = df.at[i, 'lat']
    lng = df.at[i, 'lng']
    nm = df.at[i, 'name']
    sp = df.at[i, 'title']
    sk = df.at[i, 'skillset']
    popup = str(sp)
    if sp == "Build Manager PAYE":
        folium.Marker(location=[lat, lng], popup=popup, icon=folium.Icon(color='red',
                      icon='glyphicon-calendar', icon_color='blue')).add_to(su1)
    elif sp == "Supervisor PAYE":
        folium.Marker(location=[lat, lng], popup=popup, icon=folium.Icon(color='red',
                      icon='glyphicon-calendar', icon_color='red')).add_to(su2)
    elif sp == "Build Manager - Contractor":
        folium.Marker(location=[lat, lng], popup=popup, icon=folium.Icon(color='red',
                      icon='glyphicon-calendar', icon_color='green')).add_to(su3)
    elif sp == "Supervisor - Contractor":
        folium.Marker(location=[lat, lng], popup=popup, icon=folium.Icon(color='red',
                      icon='glyphicon-calendar', icon_color='black')).add_to(su4)
    else:
        folium.Marker(location=[lat, lng], popup=popup, icon=folium.Icon(color='white',
                      icon='house', prefix='fa', icon_color='green')).add_to(su5)
</code></pre>
<p>The csv file is correct, so what is wrong then?</p>
|
<python><folium>
|
2024-03-06 17:02:34
| 0
| 1,556
|
Geographos
|
78,116,013
| 1,214,800
|
Can I perform a Mypy assertion inside of a function that affects a primitive arg?
|
<p>Let's say I have a simple validation function:</p>
<pre><code>def is_valid_build_target(target: Any, throw=False) -> bool:
    target = str(target)
    ALLOWED_TARGETS = ["dev", "prod"]
    is_allowed = target.lower() in ALLOWED_TARGETS
    if not is_allowed and throw:
        raise ValueError(
            f"Invalid target '{target}'. Must be one of: {ALLOWED_TARGETS}"
        )
    assert target is not None
    return is_allowed
</code></pre>
<p>But if I want to use this function, Mypy doesn't pass that assertion back up the stack (presumably because <code>target</code> is a primitive, and so Python localizes the variable to the function):</p>
<pre><code>Target = Literal["dev", "prod"]

target: Target | None = cast(Target | None, os.getenv("APP_TARGET", None))
if not is_valid_build_target(target):
    raise ValueError(f"Invalid target, I could have used throw=True, but I wanted a custom error message")
# Mypy still thinks `target` could be None
</code></pre>
<p>If I do the function logic inline, or use the <code>assert</code> after, it then works, but now I'm not keeping the runtime validation logic in a self-contained function:</p>
<pre><code>if not is_valid_build_target(target):
    raise ValueError(f"Invalid target...")
assert target is not None
# happy Mypy
</code></pre>
<p>Is there any way inside the validation function to validate this and keep Mypy happy?</p>
|
<python><python-3.x><mypy><python-typing>
|
2024-03-06 16:36:50
| 1
| 73,674
|
brandonscript
|
78,115,786
| 10,037,034
|
Kernel Dying when importing unstructured.partition.pdf
|
<p>I tried the following import, but my kernel dies all the time. How can I solve this problem?</p>
<pre><code>from unstructured.partition.pdf import partition_pdf

path = 'data/llama.pdf'

raw_pdf_elements = partition_pdf(
    filename=path,
    extract_images_in_pdf=True,
    infer_table_structure=True,
    chunking_strategy="by_title",
    max_characters=4000,
    new_after_n_chars=3800,
    combine_text_under_n_chars=2000,
    image_output_dir_path='images/'
)
</code></pre>
<p>The issue is in the first line, but I needed the <code>raw_pdf_elements</code> call as well. Then an issue occurred because of the tesseract path, so I installed the following:</p>
<pre><code>pip install tesseract
pip install tesseract-ocr
</code></pre>
<p>After that my kernel started dying. Exit:</p>
<pre><code>> 00:01:01.922 [error] Disposing session as kernel process died
> ExitCode: undefined, Reason: 00:01:01.922 [info] Dispose Kernel
> process 35807. 00:01:01.945 [info] End cell 98 execution after
> -1709672459.206s, completed @ undefined, started @ 1709672459206
</code></pre>
|
<python><kernel>
|
2024-03-06 16:00:27
| 1
| 1,311
|
Sevval Kahraman
|
78,115,726
| 11,001,493
|
How to adjust parameter from equation based on desirable output?
|
<p>I am trying to adjust an specific value (di) based on equation (Arps - for decline curve analysis) so the sum of my original + predicted values (new_sum) match my reference value (sum_reference).</p>
<p>The original values:</p>
<pre><code>df = pd.DataFrame({"YEAR": [2019, 2020, 2021, 2022, 2023],
                   "DATA": [0.5, 1, 2, 3, 4]})
df
Out[40]:
   YEAR  DATA
0  2019   0.5
1  2020   1.0
2  2021   2.0
3  2022   3.0
4  2023   4.0
</code></pre>
<p>Some parameters I should use and then the decline equation:</p>
<pre><code>sum_reference = 15  # the reference number I should have after the sum of original and predicted values
n_periods = 3  # number of years I should do my prediction for
qi = 4  # last data
di = 0.5  # temporary decline rate

# Equation
eq = qi / (1 + di * np.arange(1, n_periods + 1))

eq_df = pd.DataFrame({"YEAR": np.nan,
                      "DATA": eq},
                     index=np.arange(1, n_periods + 1))

# Adding new data to dataframe
df = pd.concat([df, eq_df])
df
Out[43]:
     YEAR      DATA
0  2019.0  0.500000
1  2020.0  1.000000
2  2021.0  2.000000
3  2022.0  3.000000
4  2023.0  4.000000
1     NaN  2.666667
2     NaN  2.000000
3     NaN  1.600000

# New sum
new_sum = df["DATA"].sum()
</code></pre>
<p>When I look at the difference between the sums (sum_reference - new_sum), I get -1.766. Now I need to adjust my temporary decline rate "di" (automatically find a new number) so that this sum difference becomes 0.</p>
<p>I was working on this problem in Excel and could find this value using the Goal Seek tool. But how can I do the same in Python? Or maybe another method?</p>
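<p>One way to replicate a Goal Seek here (a sketch, not the only option; <code>scipy.optimize.brentq</code> would do the same with less code if SciPy is available) is a plain bisection on the sum difference:</p>

```python
# Known values from the question: five historical DATA points plus the
# prediction qi / (1 + di * k) for k = 1..n_periods.
sum_reference = 15.0
n_periods = 3
qi = 4.0
original_sum = 0.5 + 1 + 2 + 3 + 4

def gap(di):
    """sum_reference minus (original + predicted) for a candidate decline rate."""
    predicted = sum(qi / (1 + di * k) for k in range(1, n_periods + 1))
    return sum_reference - (original_sum + predicted)

# gap(0.5) is about -1.766 (as in the question) and gap(10) is positive,
# so a root lies between the two; bisect until the bracket collapses.
lo, hi = 0.5, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if gap(mid) < 0:
        lo = mid  # di still too small: predicted values too large
    else:
        hi = mid
di = (lo + hi) / 2
print(round(di, 4))
```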
|
<python><prediction><hypothesis-test>
|
2024-03-06 15:49:04
| 0
| 702
|
user026
|
78,115,451
| 10,480,181
|
How to set type hints for a function that can return multiple values?
|
<p>I have a function that runs a MySQL select query and returns a list of values. However I am struggling with type hints.</p>
<p>Function:</p>
<pre><code>def my_function(
    self,
    param1: List[str],
    param2: date,
    param3: int,
    param4: List[str],
    param5: List[int],
    param6: List[str],
    param7: int,
) -&gt; List[Optional[str]]:
    """#TODO: Write docstring."""
    cnx = self.mysql
    cursor = cnx.cursor()
    query = files(sql).joinpath("ready_batches.sql").read_text()
    param1_str = ",".join([f"'{lob}'" for lob in param1])
    param4_str = ",".join([f"'{clm_type}'" for clm_type in param4])
    param5_str = ",".join([f"{client_id}" for client_id in param5])
    param6_str = ",".join([f"{module_id}" for module_id in param6])
    with cnx, cursor:
        cursor.execute(
            query,
            {
                "lob": param1_str,
                "clm_type": param4_str,
                "client_id": param5_str,
                "param7": param7,
                "module_id": param6_str,
                "param3": param3,
                "param2": param2,
            },
        )
        result = cursor.fetchall()
    return [value for value, _ in result]
</code></pre>
<p>But mypy gives me the following problem in return list value:</p>
<blockquote>
<p>List comprehension has incompatible type List[Union[float, int,
Decimal, str, bytes, date, timedelta, Set[str], None, Any]]; expected
List[str]</p>
</blockquote>
<p>I am certain the <code>value</code> will be a <code>str</code>. Hence I want to force the return type to <code>List[Optional[str]]</code>.</p>
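<p>If the runtime type really is known, <code>typing.cast</code> is the usual escape hatch: it does nothing at runtime and only tells the checker what to assume. A sketch with hypothetical rows standing in for <code>cursor.fetchall()</code>:</p>

```python
from typing import List, Optional, cast

# Hypothetical stand-in for cursor.fetchall(): (value, something_else) rows
result = [("batch_1", 10), (None, 11)]

# cast() is a no-op at runtime; it only informs the type checker,
# so this comprehension satisfies List[Optional[str]]
values: List[Optional[str]] = [cast(Optional[str], value) for value, _ in result]
print(values)  # ['batch_1', None]
```

<p>Casting the whole expression once, <code>cast(List[Optional[str]], [...])</code>, works the same way.</p>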
|
<python><mypy><python-typing>
|
2024-03-06 15:09:43
| 1
| 883
|
Vandit Goel
|
78,115,382
| 4,329,853
|
Second celery task in chain executing before database updates from first task are completed
|
<p>I have a chain of celery tasks and the second task needs to run, not just after the first task is complete, but after the database updates from the first task are complete. I've managed to get this working in my test by using a <code>while</code> loop to wait for the first chord in the chain to end, but that seems wrong. I've tried other things like <code>transaction.on_commit</code> but that didn't work either. What's the right way to do this so that there isn't a race condition?</p>
<pre><code># My simple model
class TestModel(BaseModel):
    # The datetime the item occurred
    occurred_at = models.DateTimeField(null=False, blank=False, db_index=True)

    # A fake quantity
    quantity = models.DecimalField(
        max_digits=20,
        decimal_places=10,
        null=True,
        blank=True
    )

@shared_task
def race_condition_tester() -> None:
    zulu = dateTime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')
    workflow = chain(
        # This chord should asynchronously write a bunch of things to the database
        chord(
            generate_list_of_test_items.s(
                zulu=zulu,
            ),
            spawn_tasks_from_list_of_test_items_and_write_each_item_to_database.s(),
        ),
        # This should run after the chord, but also needs to run after the database
        # updates are complete
        test_if_items_are_in_database.s(
            zulu=zulu,
        ),
    )
    workflow.apply_async()
    return None

@shared_task
def generate_list_of_test_items(
    results=None,
    zulu: str = None,
) -> List:
    logger.info(f"generate_list_of_test_items: {zulu=}")
    task_param_lists = []
    for i in range(10):
        task_param_lists.append([i, zulu])
    return task_param_lists

@shared_task
def spawn_tasks_from_list_of_test_items_and_write_each_item_to_database(
    results=None,
) -> None:
    task_param_list = results[0]
    # I use a chord here to asynchronously write all the items to the db
    chord_task = chord(
        [write_item_to_db.si(
            i=task_params[0],
            zulu=task_params[1],
        ) for task_params in task_param_list],
        chord_finisher.s()
    )

    # Using this we get a resulting_sum of 36 which is incorrect
    # chord_task()

    # Using this we get a resulting_sum of 36 which is incorrect
    # chord_task.apply_async()

    # Using this we get a resulting_sum of 36 which is incorrect
    # chord_task.apply_async()
    # time.sleep(5)

    # Using this we get a resulting_sum of 36 which is incorrect
    # from django.db import transaction
    # transaction.on_commit(lambda: chord_task.apply_async())

    # This works... we get 45
    result = chord_task.apply_async()
    while not result.ready():
        time.sleep(1)

    return None

@shared_task
def chord_finisher(*args, **kwargs):  # noqa
    """
    Just a simple task for the chord
    """
    return "OK"

@shared_task
def write_item_to_db(i, zulu):
    import time
    from apps.pw import logics
    logger.info(f"write_item_to_db: {i=}, {zulu=}")
    time.sleep(i)
    models.TestModel.objects.create(
        occurred_at=dateTime.strptime(
            zulu, '%Y-%m-%dT%H:%M:%SZ'
        ).replace(tzinfo=TimeZone.utc),
        quantity=Decimal(i),
    )
    return None

@shared_task
def test_if_items_are_in_database(
    results=None,
    zulu: str = None,
):
    from apps.pw import models
    from apps.pw import logics
    logger.info(f"test_if_items_are_in_database")
    activities = models.TestModel.objects.filter(
        occurred_at=dateTime.strptime(
            zulu, '%Y-%m-%dT%H:%M:%SZ'
        ).replace(tzinfo=TimeZone.utc)
    )
    resulting_sum = Decimal(0)
    for activity in activities:
        resulting_sum += activity.quantity

    # resulting_sum should be 45
    if resulting_sum != Decimal(45):
        logger.error(f"test_if_items_are_in_database: {resulting_sum=} but should be 45")
    else:
        logger.info(f"test_if_items_are_in_database: {resulting_sum=}")
    return None
</code></pre>
|
<python><django><celery>
|
2024-03-06 14:57:50
| 0
| 962
|
Brett Elliot
|
78,115,234
| 6,435,921
|
Checking derivative tensor in Pytorch
|
<p>In <a href="https://math.stackexchange.com/questions/4561173/derivative-tensor-of-fracaxx-top-ax-top-aa-x-with-a-symmetric-positiv">this</a> question on Math StackExchange people are discussing the derivative of a function <code>f(x) = Axx'A / (x'AAx)</code> where <code>x</code> is a vector and <code>A</code> is a symmetric, positive semi-definite square matrix.</p>
<p>The derivative of this function at a point <code>x</code> is a tensor. And when "applied" to another vector <code>h</code> it is a matrix. The answers under that post differ in terms of expressions for this matrix, so I would like to check them numerically using <code>Pytorch</code> or <code>Autograd</code>.</p>
<p>Here is my attempt with Pytorch</p>
<pre><code>import torch

def P(x, A):
    x = x.unsqueeze(1)  # Convert to column vector
    vector = torch.matmul(A, x)
    denom = (vector.transpose(0, 1) @ vector).squeeze()
    P_matrix = (vector @ vector.transpose(0, 1)) / denom
    return P_matrix.squeeze()

A = torch.tensor([[1.0, 0.5], [0.5, 1.3]], dtype=torch.float32)
x = torch.tensor([1.0, 2.0], dtype=torch.float32, requires_grad=True)
h = torch.tensor([2.0, -1.0], dtype=torch.float32)

Pxh = torch.matmul(P(x, A), h)

# compute gradient
Pxh.backward()
</code></pre>
<p>But this doesn't work. What am I doing wrong?</p>
<h1>JAX</h1>
<p>I am also happy with a JAX solution. I tried <code>jax.grad</code> but it does not work.</p>
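<p>(As a side note, <code>Pxh.backward()</code> fails because <code>Pxh</code> is a vector; <code>backward()</code> needs a scalar output or an explicit gradient argument.) Independently of any autograd framework, the candidate formulas from the linked answers can be checked with plain finite differences; the sketch below assumes numpy and compares against the hand-derived Jacobian for the special case <code>A = I</code>:</p>

```python
import numpy as np

def f(x, A, h):
    # P(x) h  with  P = A x x' A / (x' A A x); writing v = A x gives P = v v' / (v'v)
    v = A @ x
    return v * (v @ h) / (v @ v)

def jacobian_fd(x, A, h, eps=1e-6):
    """Central finite-difference Jacobian of the map x -> P(x) h."""
    n = x.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (f(x + e, A, h) - f(x - e, A, h)) / (2 * eps)
    return J

x = np.array([1.0, 2.0])
h = np.array([2.0, 1.0])  # arbitrary direction (the question's h is orthogonal to x)

# Hand-derived Jacobian for A = I, where f(x) = x (x.h) / (x.x):
#   J = ((x.h) I + x h') / (x.x) - 2 (x.h) x x' / (x.x)^2
I2 = np.eye(2)
xh, xx = x @ h, x @ x
J_exact = (xh * I2 + np.outer(x, h)) / xx - 2 * xh * np.outer(x, x) / xx**2
J_num = jacobian_fd(x, I2, h)
print(np.max(np.abs(J_num - J_exact)))
```

<p>The same <code>jacobian_fd</code> can then be run with a general symmetric <code>A</code> to check each proposed closed form from the linked thread.</p>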
|
<python><pytorch><derivative><autograd>
|
2024-03-06 14:37:08
| 1
| 3,601
|
Euler_Salter
|
78,114,974
| 2,335,020
|
Trying to use Python to upload a blob to Azure using a SAS_TOKEN
|
<p>I'm trying to follow the official documentation here (<a href="https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-upload-python" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-upload-python</a>) to upload a file to Azure.</p>
<p>I got the URL and SAS_TOKEN, but no login to Azure myself to check things.</p>
<p>I'd prefer uploading in a sync way, but the installed libraries want to await something. The official docs seem to be wrong in places, for example expecting a <code>self</code> parameter in functions that are not part of a class.</p>
<p>I'm keeping my secrets in a settings.py file at the moment.</p>
<pre><code>import asyncio
import os
from azure.identity.aio import DefaultAzureCredential
from azure.storage.blob.aio import BlobServiceClient, BlobClient, ContainerClient
from settings import ACCOUNT_NAME, SAS_TOKEN, AZURE_CONTAINER_NAME
from pathlib import Path
def get_blob_service_client_sas(sas_token: str = SAS_TOKEN):
account_url = f"https://{ACCOUNT_NAME}.blob.core.windows.net"
blob_service_client = BlobServiceClient(account_url, credential=sas_token)
return blob_service_client
def upload_blob_file(blob_service_client: BlobServiceClient, container_name: str = AZURE_CONTAINER_NAME):
container_client = blob_service_client.get_container_client(container=container_name)
data = Path('demo.png').read_bytes()
container_client.upload_blob(name="demo.png", data=data, overwrite=True)
# Define another async function that calls the first one
async def main():
bsc = get_blob_service_client_sas()
print(bsc)
await upload_blob_file(blob_service_client=bsc)
# Run the async function from a synchronous context
asyncio.run(main())
</code></pre>
<p>This code crashes with</p>
<pre><code>TypeError: object NoneType can't be used in 'await' expression
</code></pre>
<p>I'd appreciate any working example to how to upload a file - even if it's curl-code, requests or httpx based.</p>
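<p>Since a <code>requests</code>-based example is explicitly welcome: a SAS-authorized upload is a single REST <code>Put Blob</code> call, a PUT to <code>https://{account}.blob.core.windows.net/{container}/{blob}?{sas}</code> with the <code>x-ms-blob-type</code> header. The account, container, and token values below are placeholders:</p>

```python
import requests
from pathlib import Path

ACCOUNT_NAME = "myaccount"      # placeholder
CONTAINER = "mycontainer"       # placeholder
SAS_TOKEN = "sv=...&sig=..."    # query-string form, without a leading '?'

def blob_url(account, container, blob_name, sas):
    # URL shape used by the Put Blob REST operation with SAS auth
    return f"https://{account}.blob.core.windows.net/{container}/{blob_name}?{sas}"

def upload_blob_file(path, account=ACCOUNT_NAME, container=CONTAINER, sas=SAS_TOKEN):
    url = blob_url(account, container, Path(path).name, sas)
    resp = requests.put(
        url,
        data=Path(path).read_bytes(),
        headers={"x-ms-blob-type": "BlockBlob"},  # required by Put Blob
    )
    resp.raise_for_status()     # Azure returns 201 Created on success
    return resp.status_code
```

<p>Alternatively, the synchronous <code>azure.storage.blob</code> package (importing without the <code>.aio</code> suffix) avoids the <code>await</code> problem entirely.</p>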
|
<python><azure><azure-blob-storage>
|
2024-03-06 13:58:11
| 1
| 8,442
|
576i
|
78,114,957
| 5,576,938
|
Is it possible to find the source code where mpmath.siegelz(t) is implemented?
|
<p>I want to do some work with Riemann zeta functions in Python similar to <a href="https://github.com/azeynel/jupyter-riemann/blob/main/horton-zeros.ipynb" rel="nofollow noreferrer">this code</a> (The code is broken the way it is now)</p>
<p>But I would like to dig a little deeper and understand how <code>siegelz(t)</code> is implemented. I've looked at mathematical papers where the Riemann-Siegel Z-function is treated, but I want to see how it is implemented in practice. I think that will help me understand it better.</p>
<p>Is it possible to find the source code?</p>
<p><a href="https://mpmath.org/doc/current/functions/zeta.html#siegelz" rel="nofollow noreferrer">Documentation page for <code>siegelz()</code></a></p>
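<p>Yes: mpmath is pure Python, so <code>inspect</code> can point straight at the implementation on your own machine. The snippet below uses a stdlib function so it is self-contained; the same two calls work on <code>mpmath.siegelz</code> (and if the name turns out to be a decorated wrapper, <code>func.__wrapped__</code> or a text search for <code>def siegelz</code> in the mpmath source tree gets you the rest of the way):</p>

```python
import inspect
import json

# Works for any pure-Python callable; substitute mpmath.siegelz
# to locate its defining file and read its source.
print(inspect.getsourcefile(json.dumps))   # path of the defining module
print(inspect.getsource(json.dumps)[:80])  # first lines of the source
```
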
|
<python>
|
2024-03-06 13:56:21
| 0
| 353
|
zeynel
|
78,114,944
| 5,197,329
|
Not seeing any speedup when multiprocessing c++ code in python?
|
<p>I have some Python code which relies on a piece of C++ code that essentially does a large tree search. I need to run this tree search n times in a for loop, and since each iteration was taking several seconds I figured this would be an obvious place to add some multiprocessing in Python to speed things up. My multiprocessing is fairly standard:</p>
<pre><code> with Pool() as pool:
probs = pool.starmap(self._probs_multi, zip(self.features, self.scales, repeat(labels), repeat(nr_classes)))
</code></pre>
<p>I can see that when I run it, I get n cores that operate at 100% load (when n is smaller than my number of cores), as opposed to the sequential case where I had 1 core operating at 100% load. However the total runtime of the for loop is still taking the same amount of time!</p>
<p>The c++ library is right now loaded using something like this:</p>
<pre><code>import ctypes
import numpy.ctypeslib as ctl
libfile = os.path.dirname(__file__) + '/km_dict_lib.so'
lib = ctypes.cdll.LoadLibrary(libfile)
py_km_tree_multi = lib.build_km_tree_multi
# say which inputs the function expects
py_km_tree_multi.argtypes = [ctl.ndpointer(ctypes.c_double, flags="C_CONTIGUOUS"),
ctypes.c_int, ctypes.c_int, ctypes.c_int, ctypes.c_int,
ctypes.c_int, ctypes.c_int, ctypes.c_int, ctypes.c_int,
ctypes.c_bool,
ctl.ndpointer(ctypes.c_double, flags="C_CONTIGUOUS")]
py_km_tree_multi(image, rows, cols, channels, n_im,
self.patch_size, self.number_layers, self.branching_factor,
number_training_patches, self.normalization, self.tree)
</code></pre>
<p>I can think of two reasons why the sequential run is as fast as the multiprocessing run. The first is that the job is not actually CPU bound but memory bound instead. I guess this could be the case for a tree search (I am not really that experienced with them), but if so it seems strange to me that all n cores sit at 100% load, which suggests the job is CPU bound.
The other option is that the multiprocessing somehow isn't actually working when the Python script calls an external C++ function, or that my naive approach is insufficient.</p>
<p>Does anyone have any insight into how I can move forward with this and figure out what is wrong?</p>
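<p>One way to narrow this down is to benchmark the same <code>Pool</code> setup on a known CPU-bound pure-Python function: if that scales and the real workload does not, the suspect is the per-task payload (pickling <code>self.features</code> etc. to each worker) or memory bandwidth, not multiprocessing itself. A minimal sketch:</p>

```python
import time
from multiprocessing import Pool

def busy(n):
    # purely CPU-bound, no shared data
    s = 0
    for i in range(n):
        s += i * i
    return s

if __name__ == "__main__":
    args = [300_000] * 8

    t0 = time.perf_counter()
    seq = [busy(a) for a in args]
    t_seq = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool() as pool:
        par = pool.map(busy, args)
    t_par = time.perf_counter() - t0

    assert seq == par
    print(f"sequential: {t_seq:.2f}s, pool: {t_par:.2f}s")
```
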
|
<python><c++><performance><multiprocessing>
|
2024-03-06 13:54:23
| 1
| 546
|
Tue
|
78,114,871
| 11,064,604
|
Turning list of indices into numpy array
|
<p>I have an <code>nxd</code> numpy array of zeros. For every row in this array, I am tasked with converting a specified column to be a 1. To this end, I have been given a list of size <code>n</code> such that the <em>ith</em> value of this list is the index to be turned into a 1.</p>
<p>This task can be accomplished via a <code>for</code> loop as below:</p>
<pre><code>import numpy as np
N=5; D =3
array = np.zeros(shape=(N,D))
ones_index = [0,2,1,0,1]
for row, column in enumerate(ones_index):
array[row,column] = 1
</code></pre>
<p>While this works just fine, I imagine that numpy has some function to achieve this much more cleanly. <strong>Does there exist a numpy function that converts a list of indices into certain values in an array?</strong></p>
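<p>Yes: this is exactly what integer (fancy) indexing does. Passing one index array per axis pairs them element-wise, so each row gets its own column:</p>

```python
import numpy as np

N, D = 5, 3
arr = np.zeros((N, D))
ones_index = [0, 2, 1, 0, 1]

# row i gets a 1 in column ones_index[i]
arr[np.arange(N), ones_index] = 1
print(arr)
```

<p>An equivalent one-liner is <code>np.eye(D)[ones_index]</code>, which builds the one-hot rows directly.</p>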
|
<python><arrays><numpy>
|
2024-03-06 13:42:05
| 2
| 353
|
Ottpocket
|
78,114,862
| 1,662,268
|
What is the "underlying code" of a Python `with` statement?
|
<p>Suppose you have:</p>
<pre><code>with with_target_expression() as with_variable:
with_block_contents(with_variable)
</code></pre>
<p>I understand the basic high-level intent here - that the target / <code>with_variable</code> will be "gotten rid of" sensibly after the <code>with_block_contents</code> completes.</p>
<p>But what is the full "raw" / "basic" Python being called/implied by this?</p>
|
<python>
|
2024-03-06 13:41:28
| 1
| 8,742
|
Brondahl
|
78,114,392
| 661,589
|
Wrong Hungarian (hu_HU) sort order
|
<pre><code>import locale
locale.setlocale(locale.LC_COLLATE,'hu_HU.ISO8859-2')
print(sorted(['c', 'á', 'b', 'z', 'é', 'a', 'd', 'e', 'f', 'Ő', 'Ű', 'ő', 'ű'], key=locale.strxfrm))
</code></pre>
<p>Expected: ['a', 'á', 'b', 'c', 'd', 'e', 'é', 'f', 'Ő', 'ő', 'Ű', 'ű', 'z']</p>
<p>Actual: ['a', 'á', 'b', 'c', 'd', 'e', 'é', 'f', 'z', 'Ő', 'ő', 'Ű', 'ű']</p>
<p>Note that 'z' is supposed to be the last letter.</p>
<p>I know my code "works" because it does put the 'á' and 'é' in the right place (and the "regular" sorted puts them after 'z'), so the bug is in the locale definition.</p>
<p>I have macOS Ventura 13.6.4, Python 3.11.7</p>
<p>How could I "update" the locale definitions? Is it something in Python or does it use the system locales?</p>
<p>Note: I tried to install PyICU and zope.ucol, but both failed during the installation, so don't tell me to use them.</p>
|
<python><locale>
|
2024-03-06 12:28:34
| 1
| 19,251
|
Gavriel
|
78,114,361
| 7,233,155
|
Building package for noarch with Maturin for Python >= 3.9
|
<p><strong>Current Version</strong></p>
<p>I have published a package previously to PyPi and Conda. It was written in pure Python and used some of the following settings in <code>pyproject.toml</code> and was built with the native <code>python -m build</code>:</p>
<pre class="lang-py prettyprint-override"><code>[build-system]
requires = ["setuptools>=61.0.0", "wheel"]
build-backend = "setuptools.build_meta"
</code></pre>
<p>This produces two distribution files: a source and wheels, which are uploaded:</p>
<pre><code>myproject-1.0.0.tar.gz
myproject-1.0.0-py3-none-any.whl
</code></pre>
<p><strong>New Version</strong></p>
<p>To improve performance I have now written some extensions in <strong>Rust</strong> and used pyo3 bindings, with build system maturin. Everything works locally.</p>
<p>I want to produce these generic distribution files, but obviously I don't fully understand the Python package distribution system.</p>
<p>If I change my <code>pyproject.toml</code> to read:</p>
<pre><code>[build-system]
requires = ["maturin>=1.0,<2.0"]
build-backend = "maturin"
</code></pre>
<p>Then running <code>maturin build --release --sdist --bindings pyo3</code> (on windows with Py3.11) produces the files</p>
<pre><code>myproject-1.1.0.tar.gz
myproject-1.1.0-cp311-none-win_amd64.whl
</code></pre>
<p><strong>Question</strong></p>
<p>I want to understand how to cover all bases for distribution and what are the risks. For example,</p>
<p>If I upload these files to PyPi will it cause problems for users?</p>
<p>Can users who don't have Rust installed install my package from the source distribution?</p>
<p>Does the source distribution also need to be architecture dependent?</p>
<p>How can maturin produce wheels for all Python versions >= 3.9?</p>
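<p>For the last point, one route worth checking against the maturin and pyo3 docs (stated here as an assumption, not a definitive answer): pyo3's stable-ABI feature makes maturin emit a single <code>abi3</code> wheel covering every CPython from the stated minimum upward, e.g. <code>myproject-1.1.0-cp39-abi3-win_amd64.whl</code>, instead of one wheel per interpreter version. You still need one wheel per OS/architecture, and users installing from the sdist still need a Rust toolchain.</p>

```toml
# Cargo.toml (hypothetical sketch): one cp39-abi3 wheel usable on CPython >= 3.9
[dependencies]
pyo3 = { version = "0.20", features = ["extension-module", "abi3-py39"] }
```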
|
<python><rust><packaging><pyo3><maturin>
|
2024-03-06 12:23:27
| 1
| 4,801
|
Attack68
|
78,113,717
| 7,447,867
|
Why Python is running as 32 bit on 64 bit Windows 10 with 64-bit Python installed?
|
<p>I have a Python script that runs on Windows 10 Pro x64.</p>
<p>When I open Task Manager, it shows that Python is running as 32-bit application.</p>
<p><a href="https://i.sstatic.net/Uyf9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Uyf9F.png" alt="enter image description here" /></a></p>
<p>This is weird because:</p>
<ul>
<li>Python 3.12.2 is installed as 64-bit variant (double-checked)</li>
<li>Installer got from <a href="https://www.python.org/ftp/python/3.12.2/python-3.12.2-amd64.exe" rel="nofollow noreferrer">here</a></li>
<li>This is the only Python instance installed on this PC</li>
</ul>
<p><a href="https://i.sstatic.net/Y43Mg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y43Mg.png" alt="enter image description here" /></a></p>
<p>Script was launched via double-click on it (if this is important info)</p>
<p>Please, explain me:</p>
<ul>
<li>Why it is still shown as 32-bit in task manager? Or is it error of Task Manager?</li>
<li>Is it possible to force running all python scripts as 64-bit? If yes: how?</li>
</ul>
<p><strong>UPDATE / ANSWER</strong></p>
<ul>
<li>According to <a href="https://peps.python.org/pep-0397/#:%7E:text=The%20launcher%20that%20is%20installed,and%2064%2Dbit%20Windows%20installations." rel="nofollow noreferrer">this documentation</a> it is the 32-bit launcher, but the Python will be still 64-bit version</li>
<li>to bypass this, right click on the python script and select "Open with", then pick the 64-bit Python executable. Typically something like this: <code>C:\Program Files\Python312\python.exe</code>. Then when double-clicking the python script it will appear in task manager like this: <br />
<a href="https://i.sstatic.net/AVbZN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AVbZN.png" alt="enter image description here" /></a></li>
<li>do this only if this is the only instance of Python installed on PC</li>
</ul>
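<p>A quick in-script check of which interpreter is actually running (the launcher shown in Task Manager can be 32-bit while the interpreter it starts is 64-bit):</p>

```python
import platform
import struct
import sys

print(sys.executable)              # which python.exe is running this script
print(platform.architecture()[0])  # '64bit' on a 64-bit interpreter
print(struct.calcsize("P") * 8)    # pointer width in bits: 64 on a 64-bit build
print(sys.maxsize > 2**32)         # True on a 64-bit build
```
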
|
<python><python-3.x>
|
2024-03-06 10:46:55
| 1
| 722
|
Araneus0390
|
78,113,572
| 13,520,498
|
can't load trained keras model with custom regularization class
|
<p>I'm training the PointNet3D object classification model with my own dataset following the Tutorial here in Keras: <a href="https://keras.io/examples/vision/pointnet/#point-cloud-classification-with-pointnet" rel="nofollow noreferrer">https://keras.io/examples/vision/pointnet/#point-cloud-classification-with-pointnet</a></p>
<p>Now for the training part, I've been able to do everything just fine but after training I'm facing issues loading the trained model.
The main issue, I think, is with the part below: the <code>OrthogonalRegularizer</code> class object might not be registered properly when I'm saving the model:</p>
<pre><code>
@keras.saving.register_keras_serializable('OrthogonalRegularizer')
class OrthogonalRegularizer(keras.regularizers.Regularizer):
def __init__(self, num_features, **kwargs):
super(OrthogonalRegularizer, self).__init__(**kwargs)
self.num_features = num_features
self.l2reg = 0.001
self.eye = tf.eye(num_features)
def __call__(self, x):
x = tf.reshape(x, (-1, self.num_features, self.num_features))
xxt = tf.tensordot(x, x, axes=(2, 2))
xxt = tf.reshape(xxt, (-1, self.num_features, self.num_features))
return tf.math.reduce_sum(self.l2reg * tf.square(xxt - self.eye))
def get_config(self):
config = {}
config.update({"num_features": self.num_features, "l2reg": self.l2reg, "eye": self.eye})
return config
def tnet(inputs, num_features):
# Initialise bias as the identity matrix
bias = keras.initializers.Constant(np.eye(num_features).flatten())
reg = OrthogonalRegularizer(num_features)
x = conv_bn(inputs, 32)
x = conv_bn(x, 64)
x = conv_bn(x, 512)
x = layers.GlobalMaxPooling1D()(x)
x = dense_bn(x, 256)
x = dense_bn(x, 128)
x = layers.Dense(
num_features * num_features,
kernel_initializer="zeros",
bias_initializer=bias,
activity_regularizer=reg,
)(x)
feat_T = layers.Reshape((num_features, num_features))(x)
# Apply affine transformation to input features
return layers.Dot(axes=(2, 1))([inputs, feat_T])
</code></pre>
<p>After training when I try to load the model by the following, I see the following error:</p>
<pre><code>model.save('my_model.h5')
model = keras.models.load_model('my_model.h5', custom_objects={'OrthogonalRegularizer': OrthogonalRegularizer})
</code></pre>
<p>The error message:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-18-05f700f433a8> in <cell line: 2>()
1 model.save('my_model.h5')
----> 2 model = keras.models.load_model('my_model.h5', custom_objects={'OrthogonalRegularizer': OrthogonalRegularizer})
2 frames
/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_api.py in load_model(filepath, custom_objects, compile, safe_mode, **kwargs)
260
261 # Legacy case.
--> 262 return legacy_sm_saving_lib.load_model(
263 filepath, custom_objects=custom_objects, compile=compile, **kwargs
264 )
/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
/usr/local/lib/python3.10/dist-packages/keras/src/engine/base_layer.py in from_config(cls, config)
868 return cls(**config)
869 except Exception as e:
--> 870 raise TypeError(
871 f"Error when deserializing class '{cls.__name__}' using "
872 f"config={config}.\n\nException encountered: {e}"
TypeError: Error when deserializing class 'Dense' using config={'name': 'dense_2',
'trainable': True, 'dtype': 'float32', 'units': 9, 'activation': 'linear', 'use_bias':
True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros',
'config': {}, 'registered_name': None}, 'bias_initializer': {'module':
'keras.initializers', 'class_name': 'Constant', 'config': {'value': {'class_name':
'__numpy__', 'config': {'value': [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0], 'dtype':
'float64'}}}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer':
None, 'activity_regularizer': {'module': None, 'class_name': 'OrthogonalRegularizer',
'config': {'num_features': 3, 'l2reg': 0.001, 'eye': {'class_name': '__tensor__',
'config': {'value': [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], 'dtype':
'float32'}}}, 'registered_name': 'OrthogonalRegularizer>OrthogonalRegularizer'},
'kernel_constraint': None, 'bias_constraint': None}.
Exception encountered: object.__init__() takes exactly one argument (the instance to initialize)
</code></pre>
<p>What I understand so far is that while saving I'm not able to save the <code>OrthogonalRegularizer</code> class object properly. Please let me know what I'm doing wrong.</p>
<p>The minimal version of the code is uploaded here in this collab notebook:
<a href="https://colab.research.google.com/drive/1akpfoOBVAWThsZl7moYywuZIuXt_vWCU?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1akpfoOBVAWThsZl7moYywuZIuXt_vWCU?usp=sharing</a></p>
<p>One possible similar question is this: <a href="https://stackoverflow.com/questions/57140272/load-customized-regularizer-in-keras">Load customized regularizer in Keras</a></p>
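<p>A likely culprit, worth verifying: <code>get_config()</code> above serializes the derived tensor <code>eye</code> and the fixed <code>l2reg</code> alongside <code>num_features</code>, and on reload those extra keys travel through <code>**kwargs</code> into the base class and finally into <code>object.__init__()</code>, which matches the error message. The usual pattern is to serialize only the constructor's own arguments and rebuild derived state inside <code>__init__</code>. A framework-agnostic sketch of that round trip:</p>

```python
# Sketch of the serialization contract Keras expects of a regularizer:
# get_config() returns exactly the constructor's arguments, and
# derived state (the identity matrix) is rebuilt in __init__.
class OrthogonalRegularizerDemo:
    def __init__(self, num_features, l2reg=0.001):
        self.num_features = num_features
        self.l2reg = l2reg
        # derived, NOT serialized
        self.eye = [[1.0 if i == j else 0.0 for j in range(num_features)]
                    for i in range(num_features)]

    def get_config(self):
        return {"num_features": self.num_features, "l2reg": self.l2reg}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

# config -> object -> config survives unchanged
reg = OrthogonalRegularizerDemo.from_config(
    OrthogonalRegularizerDemo(3).get_config())
```
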
|
<python><tensorflow><keras><deep-learning><classification>
|
2024-03-06 10:24:55
| 1
| 1,991
|
Musabbir Arrafi
|
78,113,411
| 20,075,659
|
Partitioning Parquet AWS Wrangler with LakeFs
|
<p>I was trying to partition the parquet on S3 and it worked with AWS Wrangler.</p>
<pre><code>basename_template = 'part.'
partitioning = ['cust_id', 'file_name', 'added_year', 'added_month', 'added_date']
loop = asyncio.get_event_loop()
s3_path = "s3://customer-data-lake/main/parquet_data"
await loop.run_in_executor(None, lambda: wr.s3.to_parquet(
df=batch.to_pandas() ,
path=s3_path,
dataset=True,
max_rows_by_file=MAX_ROWS_PER_FILE,
use_threads=True,
partition_cols = partitioning,
mode='append',
boto3_session=s3_session,
filename_prefix=basename_template
))
</code></pre>
<p>Then I tried to convert it to lakeFS, so I changed the endpoint to lakeFS:</p>
<pre><code>wr.config.s3_endpoint_url = lakefsEndPoint
</code></pre>
<p>Then suddenly partitioning was not working anymore. It just appends to the same partition.</p>
<p>This image is the original S3 one
<a href="https://i.sstatic.net/OQLlq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OQLlq.png" alt="enter image description here" /></a></p>
<p>Then this is after I changed to lakeFs
<a href="https://i.sstatic.net/Pb0TL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pb0TL.png" alt="enter image description here" /></a></p>
<p>It just appends to the csv_1. What am I doing wrong here?</p>
|
<python><dataframe><aws-data-wrangler><lakefs>
|
2024-03-06 10:01:24
| 0
| 396
|
Anon
|
78,113,333
| 4,269,851
|
Python list of lists to one dimensional using comprehension
|
<p>What would be the syntax to convert this loop into one line comprehension operator?</p>
<pre><code>lst = [
[1,2,3],
[4,5,6],
[7,8,9],
[10,11,12]
]
all_records = []
for entry in lst:
all_records.extend(entry)
#[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
</code></pre>
<p>When i am doing</p>
<pre><code>all_records = [entry for entry in lst]
</code></pre>
<p>it gives</p>
<pre><code>[[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
</code></pre>
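<p>The comprehension needs two <code>for</code> clauses, written in the same order as the nested loops they replace:</p>

```python
from itertools import chain

lst = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]

# outer loop first, inner loop second
all_records = [item for entry in lst for item in entry]

# equivalent stdlib alternative
all_records2 = list(chain.from_iterable(lst))
```
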
|
<python><list-comprehension>
|
2024-03-06 09:51:02
| 1
| 829
|
Roman Toasov
|
78,112,900
| 2,794,152
|
How do I set no "ticks" at all in the color bar?
|
<p>I want to set the colorbar with no ticks in any form. I've searched and set the tick parameters in two places, but there are still some ticks at the right border.</p>
<pre><code>cb = mpl.colorbar.ColorbarBase(ax2, cmap=cmap, norm=norm, spacing='proportional', boundaries=bounds, format='%.1f')
ax2.set_xlabel('t', size=12)
cb.ax.tick_params(size=0) # these two lines are my searching effort
cb.set_ticks([])
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/3qnvL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3qnvL.png" alt="enter image description here" /></a></p>
<h2>Edit</h2>
<p>Here is a self contained code, still has tiny ticks.</p>
<pre><code>fig, ax = plt.subplots(1, 1, figsize=(0.05, 6))
tmax=5
tmin=-5
color_num=6
bounds = np.linspace(tmin, tmax, color_num+1)
norm = mpl.colors.BoundaryNorm(bounds, color_num)
cmap = plt.cm.tab20
cmaplist = [cmap(i) for i in range(color_num)]
cmap = mpl.colors.LinearSegmentedColormap.from_list('custom map', cmaplist, color_num)
cb = mpl.colorbar.ColorbarBase(ax, cmap=cmap, norm=norm, spacing='proportional')
cb.ax.tick_params(size=0) # these two lines are my searching effort
cb.set_ticks([])
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/Se0pP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Se0pP.png" alt="enter image description here" /></a></p>
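<p>The leftover marks are plausibly <em>minor</em> ticks, which <code>set_ticks([])</code> does not touch; turning those off as well removes everything. A sketch (using <code>fig.colorbar</code> with a <code>ScalarMappable</code> in place of <code>ColorbarBase</code>, which newer Matplotlib versions deprecate):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(figsize=(0.5, 6))
bounds = np.linspace(-5, 5, 7)
norm = mpl.colors.BoundaryNorm(bounds, 6)
sm = plt.cm.ScalarMappable(norm=norm, cmap=plt.cm.tab20)

cb = fig.colorbar(sm, cax=ax, spacing='proportional')
cb.set_ticks([])                            # clears major ticks
cb.minorticks_off()                         # clears minor ticks
cb.ax.tick_params(which='both', length=0)   # belt and braces: zero-length ticks
```
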
|
<python><python-3.x><matplotlib><colorbar>
|
2024-03-06 08:40:46
| 2
| 4,904
|
an offer can't refuse
|
78,112,700
| 3,727,079
|
Why am I getting "Mean of empty slice" warnings without NaNs?
|
<p>I've got a dataframe <code>keptdata</code>. I have a for loop to search through the rows of <code>keptdata</code>, and at one point need to average the previous values.</p>
<p>This is the relevant line in my code, that produces the warning:</p>
<pre><code>avg = np.average(keptdata.iloc[i-500:i].price[keptdata.price != 0])
</code></pre>
<p>Here <code>i</code> is the variable that's being looped over in the for loop, <code>price</code> is a column in the dataframe, and I'm including the row in the average only if <code>price</code> is not zero.</p>
<p>Why am I getting a warning? Googling for the warning it seems to happen if there are <code>NaNs</code> in the dataframe, but I checked for that using <code>keptdata.isnull().any().any()</code> which returns <code>False</code>.</p>
<p>Another possibility is that if <code>i-500</code> is negative, that might be causing problems, but I rewrote the for loop to start with <code>i = 700</code> and I still get the warning.<br />
Removing <code>keptdata.price != 0</code> seems to resolve the issue, but that changes the line fundamentally.</p>
<p>Can the cause of the warning be fixed? Do I have any options other than suppressing the warning?</p>
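<p>A plausible trigger: the filter <code>keptdata.price != 0</code> can leave an <em>empty</em> selection for some 500-row window even when the frame has no NaNs, and taking the mean of an empty array is exactly what emits this warning (and yields NaN). A minimal reproduction plus a guard:</p>

```python
import warnings
import numpy as np

price = np.array([0.0, 0.0, 0.0])     # a window where every price is zero
window = price[price != 0]            # -> empty array, even with no NaNs

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    avg = np.mean(window)             # warns "Mean of empty slice" and yields nan

assert np.isnan(avg)
assert any("Mean of empty slice" in str(w.message) for w in caught)

# guard: only average when the filtered window is non-empty
avg = np.mean(window) if window.size else 0.0
```
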
|
<python><pandas>
|
2024-03-06 08:06:01
| 2
| 399
|
Allure
|
78,112,570
| 12,494,839
|
Saving data to JSON file in Python - Issue with appending multiple keys
|
<p>I have a large dataset from which I want to build a script that saves data like this:</p>
<p>pAccountIds1 should hold the first 99 ids as a comma-separated string, then pAccountIds2 the next 99 ids, and so on, all under the Parameters key. Currently every id ends up in pAccountIds1.</p>
<p>This is my expected output:</p>
<pre><code>{"Parameters": [
{"pAccountIds1": "886180295749,575789942587,331377892512"},
{"pAccountIds2": "886180295749,575789942587,331377892512"}
]}
</code></pre>
<p>This is the actual behaviour:</p>
<pre><code>{
"Parameters": [
{
"pAccountIds1": "886180295749,169278231308,888561797329,316900773169,452451531881,263111390741,774531687947,307175455232,160582862483,503763934565,628239060389,732071894519,851207678364,176876819377,852942366732,697301814574,463173411868,813366789735,434423952232,104239629908,850131272446,173873129414,758190182387,917707497382,813660687632,295585687189,946660130177,531405577506,803054876607,150802796093,231981811420,288035531821,187585725025,381266788059,913104880535,109470036896,843076529994,554635727446,384741278002,179697366565,115248717328,834696924337,137711249429,241488429314,574589139538"
}
]
}
</code></pre>
<p>this is my sample data:</p>
<pre><code>{
"deployment_map_source": "S3",
"deployment_map_name": "deployment_maps.yaml",
"pipeline_definition": {
"name": "logs",
"default_providers": {
"source": {
"provider": "codecommit",
"properties": {
"account_id": 715151534,
"branch": "main"
}
},
"deploy": {
"provider": "cloudformation",
"properties": {
"action": "replace_on_failure",
"stack_name": "subscription"
}
}
},
"params": {
"restart_execution_on_update": true
},
"targets": [
{
"target": 1716335251,
"properties": {
"template_filename": "management.yml",
"param_filename": "gen_parameter.json"
},
"regions": "us-east-1",
"path": [
82446615151
]
},
{
"target": [
96342414163,
99926626625,
362514193959
],
"regions": "us-west-1",
"path": [
96342414163,
99926626625,
362514193959
]
}
]
},
"pipeline_input": {
"environments": {
"targets": [
[
[
{
"id": "715151515151",
"name": "logs-pro",
"path": 715151515151,
"step_name": ""
},
{
"id": "286261515151",
"name": "logs-dev",
"path": 286261515151,
"step_name": ""
}
]
],
[
[
{
"id": "7363514399199001",
"name": "logs-pro-dada",
"path": 7363514399199001,
"step_name": ""
},
{
"id": "u2716166633444",
"name": "logs-dev",
"path": 2716166633444,
"step_name": ""
}
]
]
]
}
}
}
</code></pre>
<p>here is my script:</p>
<pre><code>import json
IGNORE_ACCOUNTID = '981813074321'
OUTPUT_FILE = 'params/gen_parameter.json'
def chunk_list(lst, chunk_size):
"""Helper function to chunk a list into smaller lists."""
for i in range(0, len(lst), chunk_size):
yield lst[i:i + chunk_size]
def extract_ids_from_targets(targets):
extracted_ids = []
for target_group in targets:
for target_list in target_group:
for account in target_list:
if 'id' in account and account['id'] != IGNORE_ACCOUNTID:
extracted_ids.append(str(account['id']))
return extracted_ids
def main():
with open("display.json") as f:
data = json.load(f)
targets = data.get("pipeline_input", {}).get("environments", {}).get("targets", [])
print(f"Total targets: {sum(map(len, targets))}")
# Split the targets into groups of 99
grouped_targets = list(chunk_list(targets, 99))
print(f"Total groups: {len(grouped_targets)}")
# Create the final JSON structure
result = []
for i, group in enumerate(grouped_targets, start=1):
extracted_ids = extract_ids_from_targets(group)
result.append({f"pAccountIds{i}": ','.join(extracted_ids)})
final_data = {"Parameters": result}
json_str = json.dumps(final_data, indent=4)
# Save the result to gen_parameter.json
with open(OUTPUT_FILE, 'w') as f:
f.write(json_str)
if __name__ == '__main__':
main()
</code></pre>
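<p>The likely bug: <code>chunk_list</code> is applied to <code>targets</code> (a handful of nested groups), not to the flat list of extracted ids, so one group swallows everything and only <code>pAccountIds1</code> is ever produced. Extracting all ids first and chunking the flat list afterwards gives the expected shape; a sketch with stand-in ids:</p>

```python
def chunk_list(lst, chunk_size):
    """Yield successive chunk_size-long slices of lst."""
    for i in range(0, len(lst), chunk_size):
        yield lst[i:i + chunk_size]

# stand-in for extract_ids_from_targets(targets) run over ALL targets first
extracted_ids = [str(100_000_000_000 + n) for n in range(250)]

# chunk the flat id list, not the nested targets structure
result = [{f"pAccountIds{i}": ",".join(group)}
          for i, group in enumerate(chunk_list(extracted_ids, 99), start=1)]

final_data = {"Parameters": result}
```
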
|
<python><json><list>
|
2024-03-06 07:40:34
| 1
| 3,533
|
Krisna
|
78,112,526
| 513,140
|
Adding 1KM grids to Folium map
|
<p>Could someone suggest a way to add 1 km wide grids/graticules to a folium map? The folium example here uses latitude/longitude intervals but doesn't render the grid.</p>
<pre class="lang-py prettyprint-override"><code>
import folium
# Bangkok coordinates
center_lat = 13.7563
center_lon = 100.4911
m = folium.Map(location=[center_lat, center_lon], zoom_start=11)
# Interval in degrees (1 kilometer ≈ 0.00694444 degrees)
interval = 0.00694444
# Create grid lines
grid_lines = []
# East-west lines (from -90 to 90 latitude)
for lat in range(-90, 91, int(1 / interval)):
west_point = (-180, lat)
east_point = (180, lat)
grid_lines.append(folium.PolyLine([west_point, east_point], color="black", weight=0.5, opacity=0.5))
# North-south lines (from -180 to 180 longitude)
for lon in range(-180, 181, int(1 / interval)):
south_point = (lon, -90)
north_point = (lon, 90)
grid_lines.append(folium.PolyLine([south_point, north_point], color="black", weight=0.5, opacity=0.5))
# Add lines to the map
for line in grid_lines:
line.add_to(m)
# Display the map
m
</code></pre>
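<p>Two things worth checking in the snippet above: <code>folium.PolyLine</code> expects <code>(lat, lon)</code> pairs (the points above are built as <code>(lon, lat)</code>), and 1 km is a fixed number of degrees only in latitude; in longitude it shrinks with <code>cos(lat)</code>. A sketch of the spacing computation (111.32 km per degree is an approximation):</p>

```python
import math

KM_PER_DEG_LAT = 111.32  # approximate mean value

def one_km_steps(center_lat):
    """Degree steps that span roughly 1 km at the given latitude."""
    dlat = 1.0 / KM_PER_DEG_LAT
    dlon = 1.0 / (KM_PER_DEG_LAT * math.cos(math.radians(center_lat)))
    return dlat, dlon

dlat, dlon = one_km_steps(13.7563)   # Bangkok
# grid lines would then be drawn at center +/- k*dlat and +/- k*dlon,
# as (lat, lon) pairs, over a small bounding box rather than the whole globe
```
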
|
<python><maps><folium>
|
2024-03-06 07:31:38
| 1
| 390
|
Arky
|
78,112,459
| 2,316,068
|
How can you access the patch register in a python test?
|
<p>I have python test which is part of a larger test suite. Inside that test there is a <code>patch</code> expression similar to this:</p>
<pre><code>def test_my_feature():
[...]
with patch("my.module") as mocked_method:
[...]
mocked_method.assert_called
</code></pre>
<p>The problem is that when running this test in isolation the assert_called passes, while running it with the entire suite the test fails. There are a myriad of reasons the test could fail, but one investigation I wanted to conduct is to ensure that no other test patches something and fails to tear the patching down correctly. For that I would like access to some kind of "patch register" to check, at a given point, which modules have an active patch.</p>
<p>Is there some kind of way to access such patching registry?</p>
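<p>There is an internal one (an implementation detail, so verify it against your Python version): <code>unittest.mock._patch._active_patches</code> is the list that <code>patch.stopall()</code> walks. It only records patches started with <code>.start()</code>, not ones used as context managers, which is exactly the leak pattern to look for:</p>

```python
from unittest import mock

p = mock.patch("json.dumps")
p.start()
# _active_patches is internal but long-standing: start() registers,
# stop() removes; context-manager usage never appears here
assert p in mock._patch._active_patches
p.stop()
assert p not in mock._patch._active_patches

# mock.patch.stopall() tears down anything a sloppy test left behind
```
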
|
<python><testing>
|
2024-03-06 07:17:21
| 0
| 3,121
|
David Jiménez Martínez
|
78,112,288
| 10,715,700
|
How do I get the dot products of corresponding rows in two arrays?
|
<p>How do I perform this without using for loops and just numpy functions?</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
arr_x = np.array([[1,2,3], [4,5,6]])
arr_y = np.array([[1,2,3], [4,5,6]])
res = []
for x, y in zip(arr_x, arr_y):
res.append(np.dot(x, y))
np.array(res) # array([14, 77])
</code></pre>
<p>I tried <code>tensordot</code> with axes=1, but that throws a shape mismatch error.</p>
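<p>Row-wise dot products are an element-wise multiply followed by a sum over the last axis, which <code>einsum</code> expresses directly. (<code>tensordot</code> with <code>axes=1</code> fails here because it contracts the last axis of <code>arr_x</code> with the <em>first</em> axis of <code>arr_y</code>, and those have lengths 3 and 2.)</p>

```python
import numpy as np

arr_x = np.array([[1, 2, 3], [4, 5, 6]])
arr_y = np.array([[1, 2, 3], [4, 5, 6]])

res = np.einsum('ij,ij->i', arr_x, arr_y)   # row-wise dot products
# equivalent: (arr_x * arr_y).sum(axis=1)
print(res)  # [14 77]
```
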
|
<python><numpy>
|
2024-03-06 06:42:28
| 2
| 430
|
BBloggsbott
|
78,112,247
| 11,720,066
|
truth value of empty user-defined data objects
|
<p>This is more of a philosophical question.</p>
<p>In python, <code>bool([])</code> evaluates to <code>False</code>.
On the other hand, consider the following:</p>
<pre><code>from dataclasses import dataclass
@dataclass
class Lists:
items: list[str]
other_items: list[str]
assert bool(Lists(items=[], other_items=[]))
</code></pre>
<p>The above snippet doesn't raise <code>AssertionError</code>.</p>
<p>Why is that?</p>
<p>I encountered a use-case where it seems to make sense to deduce an object's truth value by aggregating the truth values of its attributes.</p>
<p>Are there any caveats to keep in mind before doing so?</p>
<p>(I'm talking about data classes, like <code>dataclass</code> or <code>pydantic.BaseModel</code>)</p>
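<p>By default any object that defines neither <code>__bool__</code> nor <code>__len__</code> is truthy, which is why the assertion passes; aggregating the attributes' truth values has to be opted into explicitly. One caveat to keep in mind: it changes the meaning of idioms like <code>if obj:</code>, which callers often use to mean "is not None". A sketch of the opt-in:</p>

```python
from dataclasses import dataclass, fields

@dataclass
class Lists:
    items: list
    other_items: list

    def __bool__(self):
        # truthy iff ANY field is truthy (one possible convention;
        # all(...) is an equally defensible choice)
        return any(getattr(self, f.name) for f in fields(self))

assert not Lists(items=[], other_items=[])
assert Lists(items=["a"], other_items=[])
```
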
|
<python><boolean><pydantic><python-dataclasses>
|
2024-03-06 06:34:19
| 1
| 613
|
localhost
|
78,112,229
| 4,633,735
|
Can someone explain the difference between the two and why one produces a wrong result; Python instance variables
|
<p>Maybe a very noob question, but the code below gives wrong results when <code>vecDict</code> is initialized outside of <code>__init__</code>. Can someone please explain why?</p>
<pre><code>class Vector:
#vecDict = {} ## vecDict declared here gives wrong results vs in the init method.
def __init__(self, nums):
"""
:type nums: List[int]
"""
self.vecDict = {}
for i in range(len(nums)):
self.vecDict[i]=nums[i]
def getDict(self):
return self.vecDict
# Return the dotProduct of two sparse vectors
def dotProduct(self, vec):
print(vec.getDict())
print(vec.vecDict)
"""
:type vec: 'SparseVector'
:rtype: int
"""
# print(vec.getDict())
dotProd = 0
secDict = vec.getDict()
for k in secDict:
if k in self.vecDict:
dotProd += (self.vecDict[k] * secDict[k])
# print(self.vecDict)
# print(secDict)
return dotProd
</code></pre>
<p>Vector object will be instantiated and called as:</p>
<pre><code>v1 = Vector(numsA)
v2 = Vector(numsB)
ans = v1.dotProduct(v2)
</code></pre>
<p>Ex input:
numsA = [1,0,0,2,3]</p>
<p>numsB = [0,3,0,4,0]</p>
<p>ans = 8</p>
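<p>The two placements differ because a class-level <code>vecDict</code> is created once and shared by every instance, so <code>Vector(numsB)</code> writes into the same dict that <code>Vector(numsA)</code> already populated (overwriting overlapping keys), while <code>self.vecDict = {}</code> inside <code>__init__</code> makes a fresh dict per instance. A minimal demonstration:</p>

```python
class SharedDict:
    vecDict = {}                 # one dict, bound to the class

    def __init__(self, nums):
        for i, n in enumerate(nums):
            self.vecDict[i] = n  # mutates the shared class dict

class OwnDict:
    def __init__(self, nums):
        self.vecDict = {}        # fresh dict per instance
        for i, n in enumerate(nums):
            self.vecDict[i] = n

a = SharedDict([1, 0, 0, 2, 3])
b = SharedDict([0, 3, 0, 4, 0])
assert a.vecDict is b.vecDict                        # same object
assert a.vecDict == {0: 0, 1: 3, 2: 0, 3: 4, 4: 0}   # b overwrote a's entries

c = OwnDict([1, 0, 0, 2, 3])
d = OwnDict([0, 3, 0, 4, 0])
assert c.vecDict == {0: 1, 1: 0, 2: 0, 3: 2, 4: 3}   # untouched by d
```
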
|
<python><constructor>
|
2024-03-06 06:31:10
| 1
| 640
|
Hemanth Gowda
|
78,111,859
| 3,904,031
|
Gmsh example t4.py unable to find t4_image.png, other examples run fine
|
<p>Goal is to solve my previous question <a href="https://stackoverflow.com/q/78055800/3904031">Generate a mesh from my polygon geometry to iterate FEM for geometry optimization?</a> by myself using Gmsh</p>
<p>I've installed <a href="https://gitlab.onelab.info/gmsh/gmsh/-/tree/master" rel="nofollow noreferrer">Gmsh</a> (<a href="https://gmsh.info/" rel="nofollow noreferrer">also</a>) 4.12.2 using pip on macOS 10.15.7 Intel core i5 with Anaconda Python 3.7.</p>
<p>I ran the first five examples t1 through t5.py and all work nicely except t4.</p>
<p><a href="https://gmsh.info/dev/doc/texinfo/gmsh.html#t4" rel="nofollow noreferrer">https://gmsh.info/dev/doc/texinfo/gmsh.html#t4</a></p>
<p>I get</p>
<pre><code>Exception: Could not open file `/Users/davido/Documents/Gmsn/../t4_image.png'
</code></pre>
<p><strong>note:</strong> A friend ran t4.py under Windows and Python 3.11.5 and it worked fine. Right now I don't have the option to move to a new computer/OS, nor upgrade my Anaconda "just to see what happens".</p>
<p>I've added t4.py at the bottom of the question, it uses path specifications in a way I'm not familiar with (I'm more of a numerical person) and I don't know what <code>/../</code> means.</p>
<p>I can't find any instance of <code>t4_image.png</code> on my computer except one from a few weeks ago within Anaconda packages when I first installed Gmsh.</p>
<p>Since making holes in a mesh is EXACTLY what I need to do to answer the linked question, I would like to get this example working.</p>
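<p>On the <code>/../</code> part: it simply means "parent of the preceding directory", so the script is looking for <code>t4_image.png</code> one level above the folder containing <code>t4.py</code> (presumably because in the Gmsh source tree the image sits one directory above the Python tutorials). Copying <code>t4_image.png</code> to that location, or editing the path in <code>t4.py</code>, should satisfy it. A quick demonstration of how the path resolves:</p>

```python
import posixpath

# normpath collapses the "go up one directory" component
p = "/Users/davido/Documents/Gmsn/../t4_image.png"
print(posixpath.normpath(p))   # /Users/davido/Documents/t4_image.png
```
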
<p>Error:</p>
<pre><code>(base) davido@Davidos-MacBook-Air Gmsn % python -i t4.py
Info : Meshing 1D...
Info : [ 0%] Meshing curve 1 (Line)
Info : [ 10%] Meshing curve 2 (Line)
[lines removed]
Info : [ 90%] Meshing curve 19 (Circle)
Info : [100%] Meshing curve 20 (Line)
Info : Done meshing 1D (Wall 0.00703093s, CPU 0.024232s)
Info : Meshing 2D...
Info : [ 0%] Meshing surface 22 (Plane, Frontal-Delaunay)
Info : [ 50%] Meshing surface 24 (Plane, Frontal-Delaunay)
Info : Done meshing 2D (Wall 0.0233768s, CPU 0.084051s)
Info : 785 nodes 1629 elements
Info : Writing 't4.msh'...
Info : Done writing 't4.msh'
Error : Could not open file `/Users/davido/Documents/Gmsn/../t4_image.png'
Traceback (most recent call last):
File "t4.py", line 180, in <module>
gmsh.fltk.run()
File "/Users/davido/anaconda3/lib/python3.7/site-packages/gmsh.py", line 10032, in run
raise Exception(logger.getLastError())
Exception: Could not open file `/Users/davido/Documents/Gmsn/../t4_image.png'
</code></pre>
<p>Original script t4.py</p>
<pre><code># ------------------------------------------------------------------------------
#
# Gmsh Python tutorial 4
#
# Holes in surfaces, annotations, entity colors
#
# ------------------------------------------------------------------------------
import gmsh
import math
import sys
import os
gmsh.initialize()
gmsh.model.add("t4")
cm = 1e-02
e1 = 4.5 * cm
e2 = 6 * cm / 2
e3 = 5 * cm / 2
h1 = 5 * cm
h2 = 10 * cm
h3 = 5 * cm
h4 = 2 * cm
h5 = 4.5 * cm
R1 = 1 * cm
R2 = 1.5 * cm
r = 1 * cm
Lc1 = 0.01
Lc2 = 0.003
def hypot(a, b):
return math.sqrt(a * a + b * b)
ccos = (-h5 * R1 + e2 * hypot(h5, hypot(e2, R1))) / (h5 * h5 + e2 * e2)
ssin = math.sqrt(1 - ccos * ccos)
# We start by defining some points and some lines. To make the code shorter we
# can redefine a namespace:
factory = gmsh.model.geo
factory.addPoint(-e1 - e2, 0, 0, Lc1, 1)
factory.addPoint(-e1 - e2, h1, 0, Lc1, 2)
factory.addPoint(-e3 - r, h1, 0, Lc2, 3)
factory.addPoint(-e3 - r, h1 + r, 0, Lc2, 4)
factory.addPoint(-e3, h1 + r, 0, Lc2, 5)
factory.addPoint(-e3, h1 + h2, 0, Lc1, 6)
factory.addPoint(e3, h1 + h2, 0, Lc1, 7)
factory.addPoint(e3, h1 + r, 0, Lc2, 8)
factory.addPoint(e3 + r, h1 + r, 0, Lc2, 9)
factory.addPoint(e3 + r, h1, 0, Lc2, 10)
factory.addPoint(e1 + e2, h1, 0, Lc1, 11)
factory.addPoint(e1 + e2, 0, 0, Lc1, 12)
factory.addPoint(e2, 0, 0, Lc1, 13)
factory.addPoint(R1 / ssin, h5 + R1 * ccos, 0, Lc2, 14)
factory.addPoint(0, h5, 0, Lc2, 15)
factory.addPoint(-R1 / ssin, h5 + R1 * ccos, 0, Lc2, 16)
factory.addPoint(-e2, 0.0, 0, Lc1, 17)
factory.addPoint(-R2, h1 + h3, 0, Lc2, 18)
factory.addPoint(-R2, h1 + h3 + h4, 0, Lc2, 19)
factory.addPoint(0, h1 + h3 + h4, 0, Lc2, 20)
factory.addPoint(R2, h1 + h3 + h4, 0, Lc2, 21)
factory.addPoint(R2, h1 + h3, 0, Lc2, 22)
factory.addPoint(0, h1 + h3, 0, Lc2, 23)
factory.addPoint(0, h1 + h3 + h4 + R2, 0, Lc2, 24)
factory.addPoint(0, h1 + h3 - R2, 0, Lc2, 25)
factory.addLine(1, 17, 1)
factory.addLine(17, 16, 2)
# Gmsh provides other curve primitives than straight lines: splines, B-splines,
# circle arcs, ellipse arcs, etc. Here we define a new circle arc, starting at
# point 14 and ending at point 16, with the circle's center being the point 15:
factory.addCircleArc(14, 15, 16, 3)
# Note that, in Gmsh, circle arcs should always be smaller than Pi. The
# OpenCASCADE geometry kernel does not have this limitation.
# We can then define additional lines and circles, as well as a new surface:
factory.addLine(14, 13, 4)
factory.addLine(13, 12, 5)
factory.addLine(12, 11, 6)
factory.addLine(11, 10, 7)
factory.addCircleArc(8, 9, 10, 8)
factory.addLine(8, 7, 9)
factory.addLine(7, 6, 10)
factory.addLine(6, 5, 11)
factory.addCircleArc(3, 4, 5, 12)
factory.addLine(3, 2, 13)
factory.addLine(2, 1, 14)
factory.addLine(18, 19, 15)
factory.addCircleArc(21, 20, 24, 16)
factory.addCircleArc(24, 20, 19, 17)
factory.addCircleArc(18, 23, 25, 18)
factory.addCircleArc(25, 23, 22, 19)
factory.addLine(21, 22, 20)
factory.addCurveLoop([17, -15, 18, 19, -20, 16], 21)
factory.addPlaneSurface([21], 22)
# But we still need to define the exterior surface. Since this surface has a
# hole, its definition now requires two curves loops:
factory.addCurveLoop([11, -12, 13, 14, 1, 2, -3, 4, 5, 6, 7, -8, 9, 10], 23)
factory.addPlaneSurface([23, 21], 24)
# As a general rule, if a surface has N holes, it is defined by N+1 curve loops:
# the first loop defines the exterior boundary; the other loops define the
# boundaries of the holes.
factory.synchronize()
# Finally, we can add some comments by creating a post-processing view
# containing some strings:
v = gmsh.view.add("comments")
# Add a text string in window coordinates, 10 pixels from the left and 10 pixels
# from the bottom:
gmsh.view.addListDataString(v, [10, -10], ["Created with Gmsh"])
# Add a text string in model coordinates centered at (X,Y,Z) = (0, 0.11, 0),
# with some style attributes:
gmsh.view.addListDataString(v, [0, 0.11, 0], ["Hole"],
["Align", "Center", "Font", "Helvetica"])
# If a string starts with `file://', the rest is interpreted as an image
# file. For 3D annotations, the size in model coordinates can be specified after
# a `@' symbol in the form `widthxheight' (if one of `width' or `height' is
# zero, natural scaling is used; if both are zero, original image dimensions in
# pixels are used):
png = os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir,
't4_image.png')
print('Hey! png = ', png)
gmsh.view.addListDataString(v, [0, 0.09, 0], ["file://" + png + "@0.01x0"],
["Align", "Center"])
# The 3D orientation of the image can be specified by providing the direction
# of the bottom and left edge of the image in model space:
gmsh.view.addListDataString(v, [-0.01, 0.09, 0],
["file://" + png + "@0.01x0,0,0,1,0,1,0"])
# The image can also be drawn in "billboard" mode, i.e. always parallel to
# the camera, by using the `#' symbol:
gmsh.view.addListDataString(v, [0, 0.12, 0], ["file://" + png + "@0.01x0#"],
["Align", "Center"])
# The size of 2D annotations is given directly in pixels:
gmsh.view.addListDataString(v, [150, -7], ["file://" + png + "@20x0"])
# These annotations are handled by a list-based post-processing view. For
# large post-processing datasets, that contain actual field values defined on
# a mesh, you should use model-based post-processing views instead, which
# allow to efficiently store continuous or discontinuous scalar, vector and
# tensor fields, or arbitrary polynomial order.
# Views and geometrical entities can be made to respond to double-click
# events, here to print some messages to the console:
gmsh.view.option.setString(v, "DoubleClickedCommand",
"Printf('View[0] has been double-clicked!');")
gmsh.option.setString(
"Geometry.DoubleClickedLineCommand",
"Printf('Curve %g has been double-clicked!', "
"Geometry.DoubleClickedEntityTag);")
# We can also change the color of some entities:
gmsh.model.setColor([(2, 22)], 127, 127, 127) # Gray50
gmsh.model.setColor([(2, 24)], 160, 32, 240) # Purple
gmsh.model.setColor([(1, i) for i in range(1, 15)], 255, 0, 0) # Red
gmsh.model.setColor([(1, i) for i in range(15, 21)], 255, 255, 0) # Yellow
gmsh.model.mesh.generate(2)
gmsh.write("t4.msh")
# Launch the GUI to see the results:
if '-nopopup' not in sys.argv:
gmsh.fltk.run()
gmsh.finalize()
</code></pre>
|
<python><python-3.x><macos><finite-element-analysis><gmsh>
|
2024-03-06 04:47:05
| 1
| 3,835
|
uhoh
|
78,111,803
| 9,795,817
|
How is scikit-learn's RFECV `cv_results_` attribute ordered?
|
<p>I fit an <code>RFECV</code> instance on my training data using a binary classifier <code>clf</code>.</p>
<p>My training data has 154 features and I used 10-fold cross-validation to drop five features per iteration.</p>
<pre class="lang-py prettyprint-override"><code>rfecv = RFECV(
estimator=clf,
step=5,
min_features_to_select=10,
cv=10,
scoring='precision',
verbose=10,
n_jobs=1,
importance_getter='auto'
)
</code></pre>
<p>I do not understand how the resulting <code>rfecv.cv_results_</code> dictionary is ordered.
After turning it into a pandas dataframe, I noticed that the number of rows corresponds to the number of features tested at each step (i.e., 154, 149, 145, ..., 24, 19, 14).</p>
<p>However, I'd like to know which row number corresponds to which number of features. For example, does the first row represent the 10 models that used 154 features?</p>
<p>My mean test scores (<code>rfecv.cv_results_.get('mean_test_score')</code>), in their current order, look as follows:</p>
<p><a href="https://i.sstatic.net/wutn0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wutn0.png" alt="RFECV mean test scores" /></a></p>
<p>It seems to me that the results are ranked in ascending order (first the 10 models with 14 features, then 19, then 24, etc.). However, this seems counter-intuitive to me because the elimination process is recursive, which is why I'm asking for help.</p>
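<p>Your reading matches how recent scikit-learn versions store it: rows of <code>cv_results_</code> are in <em>ascending</em> order of feature count, so the first row is the smallest subset. A hedged sketch of reconstructing the count for each row by replaying RFE's elimination schedule (function name is illustrative; newer scikit-learn versions may expose an <code>n_features</code> key in <code>cv_results_</code> directly, which is worth checking first):</p>

```python
def feature_counts(n_features_total, step, min_features_to_select):
    # RFE removes `step` features per iteration, never going below the minimum
    counts = [n_features_total]
    while counts[-1] > min_features_to_select:
        counts.append(max(counts[-1] - step, min_features_to_select))
    # cv_results_ rows run from fewest features to most
    return sorted(counts)

rows = feature_counts(154, 5, 10)
print(rows[0], rows[-1])  # smallest subset first, full feature set last
```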
|
<python><scikit-learn><rfe>
|
2024-03-06 04:25:45
| 0
| 6,421
|
Arturo Sbr
|
78,111,656
| 272,920
|
How to validate a copy of the pydantic model created with `model_copy`?
|
<p>Consider the following code</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, ConfigDict, ValidationError
class MyModel(BaseModel):
model_config = ConfigDict(
frozen=True,
extra='forbid',
strict=True,
)
a: int
# will fail with a ValidationError
try:
MyModel(a="a")
except ValidationError:
print("The right exception was thrown")
# works but not as I would expect
m = MyModel(a=1)
assert m.a == 1
m_copy = m.model_copy(update={"a": "a-string"})
assert m_copy.a == "a-string"
# works
m_copy.model_validate(m_copy, strict=True)
print("model is valid ... but why?")
</code></pre>
<p>If you run the code, the output would be:</p>
<pre><code>The right exception was thrown
model is valid ... but why?
</code></pre>
<p>The question is, how can I validate a model after creating a copy of it?</p>
<p>Thanks.</p>
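<p>One approach, shown as a hedged sketch (it assumes pydantic v2, where <code>model_validate</code> short-circuits when handed an existing model instance, which would explain why the last call passes): round-trip the copy's raw field data through <code>model_dump()</code> so full validation runs again.</p>

```python
from pydantic import BaseModel, ConfigDict, ValidationError

class MyModel(BaseModel):
    model_config = ConfigDict(frozen=True, extra='forbid', strict=True)
    a: int

m_copy = MyModel(a=1).model_copy(update={"a": "a-string"})

# re-parse the copy's raw field data to force full validation
try:
    MyModel.model_validate(m_copy.model_dump())
    copy_is_valid = True
except ValidationError:
    copy_is_valid = False
print(copy_is_valid)
```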
|
<python><pydantic>
|
2024-03-06 03:18:46
| 1
| 5,093
|
Anton Koval'
|
78,111,355
| 11,628,437
|
Unable to import functions from modules
|
<p>I tried to create a module using <code>setup.py</code> to practise my understanding of packages and <code>__init__.py</code> files. But I get the following error -</p>
<pre><code>Traceback (most recent call last):
File "/home/thoma/PycharmProjects/test_gymnasium/test.py", line 1, in <module>
from test_final_1 import my_add
ImportError: cannot import name 'my_add' from 'test_final_1' (/home/thoma/anaconda3/envs/test_gymnasium/lib/python3.8/site-packages/test_final_1/__init__.py)
</code></pre>
<p>Here's the structure of my Python folder <code>test-final-1</code> -</p>
<pre><code>├── build
│ ├── bdist.linux-x86_64
│ └── lib
│ └── test_final_1
│ ├── calc.py
│ └── __init__.py
├── setup.py
├── test_final_1
│ ├── calc.py
│ └── __init__.py
└── test_final_1.egg-info
├── dependency_links.txt
├── PKG-INFO
├── requires.txt
├── SOURCES.txt
└── top_level.txt
</code></pre>
<p>The <code>__init__</code> file contains the following -</p>
<pre><code>from test_final_1.calc import *
</code></pre>
<p>The <code>calc</code> file contains the following -</p>
<pre><code>def my_add(a,b):
return a+b
def my_sub(a,b):
return a-b
</code></pre>
<p>Here is my <code>test</code> file which throws an error -</p>
<pre><code>from test_final_1 import my_add
a = 2
b = 4
my_add(2,4)
</code></pre>
<p>Here is the <code>setup</code> file -</p>
<pre><code>from setuptools import setup
setup(
name="test_final_1",
version="0.0.1",
install_requires=[],
)
</code></pre>
<p>This is how I run the <code>setup</code> file from the parent directory - <code>pip install e .</code></p>
<p><strong>Edit 1</strong></p>
<p>Based on one of the comments I received, I went to <code>site-packages</code>. There I observed that the <code>__init__</code> file is the same, but the <code>calc</code> file is strangely empty. Why would that have happened?</p>
|
<python><python-3.x><setup.py>
|
2024-03-06 01:26:02
| 0
| 1,851
|
desert_ranger
|
78,111,214
| 2,774,885
|
how can I detect if a file is open by windows app in WSL and either overwrite or rename?
|
<p>I have a Python script that dumps data into an Excel file. My most common user error is that I have the Excel file open and then run the script again, which attempts to write the same file. This fails silently within the Python program and does NOT update the Excel file with the new data, leaving me to facepalm, remember that I have to explicitly close the Excel file, and then re-run the script, which takes quite a while. This makes me feel dumb and I would like to feel less dumb. So, some questions...</p>
<p>Is there a way in this environment (Python 3.10 under WSL on a Windows 11 machine) to identify whether a given target file is already open? If so, is there a way to force-overwrite it anyway? (I think not...)</p>
<p>Alternatively, if I can identify that the file is open, I could throw some warning or a "press to continue after closing file" prompt, or I could write some logic that just chooses a derivative filename from what's given and writes to that with a warning...</p>
<p>But I think the main part is figuring out whether the file is open by some other process and whether that behavior is always the same (is there a way to open a file that does NOT block other writes?).</p>
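<p>One common heuristic, shown as a sketch rather than a guarantee (Windows file locks surface differently across WSL versions, so this needs verifying on your setup): try opening the target for read-write and treat a <code>PermissionError</code> as "probably open in Excel", then fall back to a derivative name. The function and naming scheme below are illustrative:</p>

```python
import os
import tempfile

def can_write(path):
    """Heuristic: True if `path` can currently be opened for writing."""
    if not os.path.exists(path):
        return True                  # nothing to clash with yet
    try:
        with open(path, "r+b"):
            return True
    except PermissionError:          # typically raised while Excel holds the file
        return False

def safe_target(path):
    """Fall back to a derivative filename when the target is locked."""
    if can_write(path):
        return path
    root, ext = os.path.splitext(path)
    return f"{root}_new{ext}"        # hypothetical naming scheme

# demo on a throwaway file
with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as f:
    tmp = f.name
print(can_write(tmp), safe_target(tmp) == tmp)
os.remove(tmp)
```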
|
<python><windows-subsystem-for-linux><file-handling>
|
2024-03-06 00:30:32
| 0
| 1,028
|
ljwobker
|
78,111,191
| 3,261,292
|
Invalid predicate error due to double quote inside an html attribute
|
<p>I have the following html script:</p>
<pre><code><body class="item"> <a title="spre_|_"Marketing_|_Specialist""> Marketing Specialist </a> </body>
</code></pre>
<p>As you can see in the <code><a></code> tag, the value of the title attribute contains double quotes inside the outer quotes. When I use <code>beautifulsoup</code> to get the element using XPath, I keep getting this error:</p>
<pre><code> File "src/lxml/etree.pyx", line 2314, in lxml.etree._ElementTree.xpath
File "src/lxml/xpath.pxi", line 357, in lxml.etree.XPathDocumentEvaluator.__call__
File "src/lxml/xpath.pxi", line 225, in lxml.etree._XPathEvaluatorBase._handle_result
lxml.etree.XPathEvalError: Invalid predicate
</code></pre>
<p>This is my code:</p>
<pre><code>from lxml import etree
from io import StringIO
html_ = """<body class="loop"> <a title="spre_|_"Marketing_|_Specialist""> Marketing Specialist </a> </body>"""
xpath = """//a[@title="spre_|_"Marketing_|_Specialist""][1]"""
print(etree.parse(StringIO(html_), etree.HTMLParser()).xpath(xpath))
</code></pre>
<p>I tried to escape the double quotes with <code>\</code>, but nothing changed.</p>
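<p>Backslash escaping fails because XPath 1.0 string literals have no escape character; the usual workaround is to pick the other quote style, or <code>concat()</code> when both kinds appear. A sketch of building such a predicate as a plain string (whether lxml's HTML parser preserves your title value at all is a separate problem, since the nested quotes make the attribute itself malformed):</p>

```python
def xpath_literal(s):
    """Build an XPath 1.0 string literal for s, even if it contains quotes."""
    if "'" not in s:
        return f"'{s}'"
    if '"' not in s:
        return f'"{s}"'
    # both quote kinds present: split on single quotes, stitch with concat()
    parts = s.split("'")
    return "concat(" + ", \"'\", ".join(f"'{p}'" for p in parts) + ")"

title = 'spre_|_"Marketing_|_Specialist"'
xpath = f"//a[@title={xpath_literal(title)}][1]"
print(xpath)
```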
|
<python><html><beautifulsoup><double-quotes>
|
2024-03-06 00:23:42
| 0
| 5,527
|
Minions
|
78,111,001
| 3,606,192
|
Periodic/sinusoid MSE loss in the custom implementation of linear regression
|
<p>I was implementing PyTorch-like modules (for educational purposes), and ran a simple training routine to check. However, my loss is oscillating, and I am not sure why.</p>
<p>Below is the code. I put the loop first, but the implementation of the layers is below (might need to rearrange if running locally).</p>
<h1>Data Generation</h1>
<pre class="lang-py prettyprint-override"><code># These are the parameters that we want to learn
parameters = np.array([1.3, 0.0])
def make_data(N, a, b, *, noise=0.1, x_min=0.0, x_max=1.0):
X = np.random.rand(N) * (x_max - x_min) + x_min
X = X.reshape(-1, 1)
y = X * a + b + np.random.randn(N, 1) * noise
X_line = np.array([x_min, x_max])
y_line = X_line * a + b
return (X, y), (X_line, y_line)
(X, y), (Xline, yline) = make_data(50, *parameters, noise=0.05)
(X_validation, y_validation), _ = make_data(50, *parameters, noise=0.05)
plt.scatter(X, y)
plt.scatter(X_validation, y_validation, alpha=0.5)
plt.plot(Xline, yline)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
</code></pre>
<p><a href="https://i.sstatic.net/FPju4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FPju4.png" alt="enter image description here" /></a></p>
<h1>Training loop (see below for module implementations)</h1>
<pre class="lang-py prettyprint-override"><code>criterion = MSELoss()
model = Sequential(
Linear(1, 5, bias=True),
ReLU(),
Linear(5, 1, bias=True)
)
num_epochs = 1000
sgd_params = {
'learning_rate': 1e-3,
'weight_decay': 0.0,
'schedule_scale': 1.0,
}
history = {
'train': {
'loss': [],
'epoch': [],
},
'validation': {
'loss': [],
'epoch': [],
}
}
for epoch in range(num_epochs):
with TrainingContext(model, criterion) as tc:
# Forward pass
y_hat = model(X)
loss = criterion(y_hat, y)
# Backward pass in reverse order
dL = criterion.backward(loss)
model.backward(dL)
# Update gradients
model.update(**sgd_params)
criterion.update(**sgd_params)
# Scheduler
sgd_params['learning_rate'] = sgd_params['learning_rate'] * sgd_params['schedule_scale']
history['train']['epoch'].append(epoch)
history['train']['loss'].append(loss)
# Validation
y_hat = model(X_validation)
loss = criterion(y_hat, y_validation)
history['validation']['epoch'].append(epoch)
history['validation']['loss'].append(loss)
# Tracking
if (epoch+1) % 100 == 0:
print(f'{epoch+1} / {num_epochs}: Training: {history["train"]["loss"][-1]:.2e} Validation: {history["validation"]["loss"][-1]:.2e}')
plt.plot(history['train']['epoch'], history['train']['loss'], label='Training')
plt.plot(history['validation']['epoch'], history['validation']['loss'], label='Validation')
plt.legend()
</code></pre>
<p><a href="https://i.sstatic.net/I22Qs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I22Qs.png" alt="enter image description here" /></a></p>
<p>If I change the model to</p>
<pre class="lang-py prettyprint-override"><code>model = Sequential(
Linear(1, 1, bias=True),
)
</code></pre>
<p>I get
<a href="https://i.sstatic.net/gqX1Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gqX1Z.png" alt="enter image description here" /></a></p>
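<p>When a hand-rolled backward pass misbehaves like this, a finite-difference gradient check is the standard debugging tool. Here is a generic sketch, independent of the modules below, that compares an analytic gradient against a central-difference estimate:</p>

```python
import numpy as np

def numerical_grad(f, w, eps=1e-6):
    """Central-difference gradient of a scalar function f at array w."""
    g = np.zeros_like(w)
    for idx in np.ndindex(*w.shape):
        old = w[idx]
        w[idx] = old + eps
        fp = f(w)
        w[idx] = old - eps
        fm = f(w)
        w[idx] = old           # restore before moving on
        g[idx] = (fp - fm) / (2 * eps)
    return g

# sanity check on f(w) = 0.5 * ||w||^2, whose exact gradient is w itself
np.random.seed(0)
w = np.random.randn(3, 2)
num = numerical_grad(lambda v: 0.5 * (v ** 2).sum(), w)
print(np.allclose(num, w, atol=1e-4))  # → True
```

Running the same comparison against each module's `backward` output (and against the accumulated `_grad` entries across epochs) should reveal whether a gradient is being accumulated without ever being zeroed.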
<h1>Modules definition</h1>
<h2>Base module, Sequential wrapper, and Training context manager</h2>
<pre class="lang-py prettyprint-override"><code>class Module:
def __init__(self):
self._save_for_backward = {}
self._grad = {}
self.is_training = False # Don't set this manually
def __call__(self, *args, **kwargs):
return self.forward(*args, **kwargs)
def zero_grad(self):
# print(f'===> DEBUG: zero_grad')
if not self.is_training:
raise RuntimeError('Please run zero_grad inside the training context')
self._grad = {}
def reset(self):
self.zero_grad()
self._save_for_backward = {}
def save_for_backward(self, *args, **kwargs):
r'''Saves or retrieves anything needed for training
- If called with a positional argument ==> Returns saved value
- If called with a keyword argument (with value assignment) ==> Saves the value
'''
if len(args) > 0 and len(kwargs) > 0:
raise ValueError(f'Cannot save for backward and retrieve at the same time')
elif len(args) == 0 and len(kwargs) == 0:
return self._save_for_backward
elif len(args) > 0:
result = []
for arg in args:
result.append(self._save_for_backward[arg])
if len(result) == 1:
return result[0]
else:
return result
elif self.is_training:
for key, value in kwargs.items():
self._save_for_backward[key] = value
return None
def update(self, *args, **kwargs):
pass
class Sequential(Module):
def __init__(self, *modules):
self.modules = modules
super().__init__()
def forward(self, X, *args, **kwargs):
for mod in self.modules:
X = mod(X)
return X
def backward(self, dLdy):
grad = dLdy
# print(grad.shape)
for mod in self.modules[::-1]:
grad = mod.backward(grad)
# print(grad.shape)
return grad
def update(self, *args, **kwargs):
for mod in self.modules:
mod.update(*args, **kwargs)
@property
def is_training(self):
is_training = []
for mod in self.modules:
is_training.append(mod.is_training)
return is_training
@is_training.setter
def is_training(self, value):
if not isinstance(value, (list, tuple)):
value = [value] * len(self.modules)
for idx, mod in enumerate(self.modules):
mod.is_training = value[idx]
class TrainingContext:
r'''Makes sure the modules are in the training mode
Usage:
with TrainingContext(layer1, layer2, loss) as tc:
...
'''
def __init__(self, *modules, reset_on_exit=False):
self.modules = modules
self.old_states = []
self.reset_on_exit = reset_on_exit
def __enter__(self):
for mod in self.modules:
self.old_states.append(mod.is_training)
mod.is_training = True
def __exit__(self, *args, **kwargs):
for idx, mod in enumerate(self.modules):
mod.is_training = self.old_states[idx]
if self.reset_on_exit:
mod.reset()
</code></pre>
<h2>MSE Loss and ReLU</h2>
<pre class="lang-py prettyprint-override"><code>class MSELoss(Module):
def forward(self, y_hat, y):
diff = y_hat - y
self.save_for_backward(diff=diff, k=len(y))
diff_sq = diff * diff
return 0.5 * diff_sq.mean()
def backward(self, loss):
diff = self.save_for_backward('diff')
k = self.save_for_backward('k')
self._grad['loss'] = self._grad.get('loss', np.zeros_like(diff))
self._grad['loss'] += diff / k
return self._grad['loss']
class ReLU(Module):
def forward(self, X):
zeromask = X <= 0.0
self.save_for_backward(zeromask=zeromask)
y = X.copy()
y[zeromask] = 0.0
return y
def backward(self, dLdy):
dLdX = dLdy.copy()
zeromask = self.save_for_backward('zeromask')
dLdX[zeromask] = 0.0
return dLdX
</code></pre>
<h2>Linear Layer</h2>
<pre class="lang-py prettyprint-override"><code>class Linear(Module):
def __init__(self, Cin, Cout, bias=True):
super().__init__()
self.Cin = Cin
self.Cout = Cout
self.weight = np.random.randn(Cin, Cout)
self.bias = np.zeros(Cout) if bias else None
def forward(self, X):
# print(f'===> DEBUG: forward')
if X.ndim == 1:
X = X.reshape(-1, 1)
if self.is_training:
self.save_for_backward(X=X.copy())
y = X @ self.weight
return y
def backward(self, dLdy):
# dLdy.shape = N x Cout
# dydw.shape = N x Cin
# print(f'===> DEBUG: backward')
if not self.is_training:
raise RuntimeError('Please run backward inside the training context')
dydX = self.weight.T
dLdX = dLdy @ dydX
dydw = self.save_for_backward('X')
self._grad['weight'] = self._grad.get('weight', np.zeros_like(self.weight))
self._grad['weight'] += dydw.T @ dLdy
if self.bias is not None:
self._grad['bias'] = self._grad.get('bias', np.zeros_like(self.bias))
self._grad['bias'] += dLdy.sum(0)
return dLdX
def update(self, learning_rate=1e-3, weight_decay=1e-4, zero_grad=True, *args, **kwargs):
# print(f'===> DEBUG: update')
if not self.is_training:
raise RuntimeError('Please run update inside the training context')
self.weight -= learning_rate * (self._grad['weight'] + weight_decay * self.weight)
if self.bias is not None:
self.bias -= learning_rate * (self._grad['bias'] + weight_decay * self.bias)
if zero_grad:
self.zero_grad()
</code></pre>
|
<python><pytorch><linear-regression><mse>
|
2024-03-05 23:06:33
| 0
| 4,642
|
RafazZ
|
78,110,934
| 310,370
|
How to draw a moving line as slider for the image comparison video generator script in Python with FFmpeg
|
<p>I have the below script to generate a video slider with FFmpeg</p>
<p>You can see example here (first 10 seconds) : <a href="https://youtu.be/F_wf1uHqZRA" rel="nofollow noreferrer">https://youtu.be/F_wf1uHqZRA</a></p>
<p>I am trying to replicate the effect of imgsli, as if a person were moving the slider smoothly.</p>
<p>Here below the code I used to generate first 10 seconds of above video</p>
<pre><code>import subprocess
def create_comparison_video(image_a, image_b, output_video, duration=5, frame_rate=30, video_width=1920, video_height=1080):
ffmpeg_cmd = [
'ffmpeg',
'-y', # Overwrite output file if it exists
'-loop', '1', # Loop input images
'-i', image_a,
'-loop', '1',
'-i', image_b,
'-filter_complex',
f"[0]scale={video_width}:{video_height}[img1];" # Scale image A
f"[1]scale={video_width}:{video_height}[img2];" # Scale image B
f"[img1][img2]blend=all_expr='if(gte(X,W*T/{duration}),A,B)':shortest=1," # Slide comparison
f"format=yuv420p,scale={video_width}:{video_height}", # Format and scale output
'-t', str(duration), # Duration of the video
'-r', str(frame_rate), # Frame rate
'-c:v', 'libx264', # Video codec
'-preset', 'slow', # Encoding preset
'-crf', '12', # Constant rate factor (quality)
output_video
]
subprocess.run(ffmpeg_cmd, check=True)
</code></pre>
<p>So I want a slider like the one in the image below, which will move across the video:</p>
<p><a href="https://i.sstatic.net/QOaLZ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QOaLZ.jpg" alt="enter image description here" /></a></p>
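<p>One way to draw the moving divider (a sketch: it assumes your ffmpeg build's <code>drawbox</code> filter accepts a time-based <code>x</code> expression, which recent builds do; the helper name and defaults are illustrative) is to append a <code>drawbox</code> stage to the existing <code>-filter_complex</code> chain, using the same <code>T/duration</code> ratio as the blend expression:</p>

```python
def slider_line_filter(duration, line_width=4, color="white"):
    """Build a drawbox filter string whose x position tracks the wipe."""
    return (
        f"drawbox=x='(iw-{line_width})*t/{duration}':y=0:"
        f"w={line_width}:h=ih:color={color}:t=fill"
    )

# appended after the blend/format stages, e.g.:
# f"...,format=yuv420p,scale={video_width}:{video_height},{slider_line_filter(duration)}"
print(slider_line_filter(10))
```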
|
<python><ffmpeg><python-3.10><image-slider>
|
2024-03-05 22:43:12
| 1
| 23,982
|
Furkan Gözükara
|
78,110,879
| 5,790,653
|
python how to concatenate dynamic parts of html template together and then print them as one html
|
<p>This is my <code>template.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>first = '''<html><head></head>
<body>
<h1>This is header H1</h1>
<table>
<tr>
<th>ID</th>
<th>Name</th>
<th>Phone</th>
<th>Email</th>
</tr>
<tr>
<td>'''
uid='{uid}'
second='''</td>
<td>'''
name='{name}'
third='''</td>
<td>'''
phone='{phone}'
fourth='''</td>
<td>'''
email='{email}'
fifth='''</td>
</tr>
</table>
</body>
</html>'''
</code></pre>
<p>I'm going to email this, but I think that if printing it gives the expected output, emailing should work as well.</p>
<p>I have a list like this:</p>
<pre class="lang-py prettyprint-override"><code>list1 = [
{'uid': 1, 'name': 'saeed1', 'phone': '+989100000000', 'email': 'sample1@gmail.com'},
{'uid': 2, 'name': 'saeed2', 'phone': '+989200000000', 'email': 'sample2@gmail.com'},
{'uid': 3, 'name': 'saeed3', 'phone': '+989300000000', 'email': 'sample3@gmail.com'},
{'uid': 4, 'name': 'saeed4', 'phone': '+989400000000', 'email': 'sample4@gmail.com'},
]
</code></pre>
<p>Expected output is:</p>
<pre class="lang-html prettyprint-override"><code><html><head></head>
<body>
<h1>This is header H1</h1>
<table>
<tr>
<th>ID</th>
<th>Name</th>
<th>Phone</th>
<th>Email</th>
</tr>
<tr>
<td>1</td>
<td>saeed1</td>
<td>+989100000000</td>
<td>sample1@gmail.com</td>
</tr>
<tr>
<td>2</td>
<td>saeed2</td>
<td>+989200000000</td>
<td>sample2@gmail.com</td>
</tr>
<tr>
<td>3</td>
<td>saeed3</td>
<td>+989300000000</td>
<td>sample3@gmail.com</td>
</tr>
<tr>
<td>4</td>
<td>saeed4</td>
<td>+989400000000</td>
<td>sample4@gmail.com</td>
</tr>
</table>
</body>
</html>
</code></pre>
<p>This is my attempt:</p>
<pre class="lang-py prettyprint-override"><code>import template
for n in list1:
uid = template.uid.format(uid=n['uid'])
name = template.name.format(name=n['name'])
phone = template.phone.format(phone=n['phone'])
email = template.email.format(email=n['email'])
combined = template.first + uid + template.second + name + template.third + phone + template.fourth + email + template.fifth
print(combined)
</code></pre>
<p>I know this is not correct, and I think objects like <code>template.first</code>, <code>template.second</code>, etc., which are static, should be outside the <code>for</code> loop, but I have no idea what to do next or how to combine them.</p>
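<p>One common simplification, shown as a sketch (it restructures your <code>template.py</code> into one header, one per-row template, and one footer, so the names here differ from the original fragments): format one <code>&lt;tr&gt;</code> per record inside the loop and join everything once at the end.</p>

```python
header = """<html><head></head>
<body>
<h1>This is header H1</h1>
<table>
<tr><th>ID</th><th>Name</th><th>Phone</th><th>Email</th></tr>"""

row_tpl = ("<tr><td>{uid}</td><td>{name}</td>"
           "<td>{phone}</td><td>{email}</td></tr>")

footer = "</table>\n</body>\n</html>"

list1 = [
    {'uid': 1, 'name': 'saeed1', 'phone': '+989100000000', 'email': 'sample1@gmail.com'},
    {'uid': 2, 'name': 'saeed2', 'phone': '+989200000000', 'email': 'sample2@gmail.com'},
]

# one row per record, static parts stay outside the loop
rows = "\n".join(row_tpl.format(**n) for n in list1)
html = "\n".join([header, rows, footer])
print(html)
```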
|
<python>
|
2024-03-05 22:28:28
| 1
| 4,175
|
Saeed
|
78,110,735
| 19,130,803
|
dash library error library is not registered
|
<p>I am developing a Dash app using Docker. On running, I get this error for <code>dash_bootstrap_components</code> and <code>dash_ag_grid</code>. The error occurs randomly, with no fixed pattern or timing, so I cannot pinpoint a specific code block. But on clicking the browser's refresh button the error goes away and the app works normally.</p>
<p>I am using below version:</p>
<pre><code>plotly = "^5.19.0"
dash = {extras = ["celery"], version = "^2.16.0"}
dash-bootstrap-components = "^1.5.0"
dash-bootstrap-templates = "^1.1.2"
dash-ag-grid = "^31.0.1"
</code></pre>
<p><strong>For dash_bootstrap_components</strong></p>
<pre><code>web | created_at=2024-03-05 20:11:57, level_name=ERROR, message=Exception on /_dash-component-suites/dash_bootstrap_components/_components/dash_bootstrap_components.v1_5_0m1709669432.min.js [GET], line_no=828, logger_name=web.flask_app, module_name=app, func_name=log_exception, exc_info=(<class 'dash.exceptions.DependencyException'>, DependencyException('Error loading dependency. "dash_bootstrap_components" is not a registered library.\nRegistered libraries are:\n[\'dash\', \'plotly\']'), <traceback object at 0x7f6e1d6bd140>)
web | Traceback (most recent call last):
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 1463, in wsgi_app
web | response = self.full_dispatch_request()
web | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 872, in full_dispatch_request
web | rv = self.handle_user_exception(e)
web | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 870, in full_dispatch_request
web | rv = self.dispatch_request()
web | ^^^^^^^^^^^^^^^^^^^^^^^
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 855, in dispatch_request
web | return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
web | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
web | File "/proj/.venv/lib/python3.12/site-packages/dash/dash.py", line 1005, in serve_component_suites
web | _validate.validate_js_path(self.registered_paths, package_name, path_in_pkg)
web | File "/proj/.venv/lib/python3.12/site-packages/dash/_validate.py", line 365, in validate_js_path
web | raise exceptions.DependencyException(
web | dash.exceptions.DependencyException: Error loading dependency. "dash_bootstrap_components" is not a registered library.
web | Registered libraries are:
web | ['dash', 'plotly']
web | --- Logging error ---
web | Traceback (most recent call last):
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 1463, in wsgi_app
web | response = self.full_dispatch_request()
web | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 872, in full_dispatch_request
web | rv = self.handle_user_exception(e)
web | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 870, in full_dispatch_request
web | rv = self.dispatch_request()
web | ^^^^^^^^^^^^^^^^^^^^^^^
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 855, in dispatch_request
web | return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
web | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
web | File "/proj/.venv/lib/python3.12/site-packages/dash/dash.py", line 1005, in serve_component_suites
web | _validate.validate_js_path(self.registered_paths, package_name, path_in_pkg)
web | File "/proj/.venv/lib/python3.12/site-packages/dash/_validate.py", line 365, in validate_js_path
web | raise exceptions.DependencyException(
web | dash.exceptions.DependencyException: Error loading dependency. "dash_bootstrap_components" is not a registered library.
web | Registered libraries are:
web | ['dash', 'plotly']
web |
web | During handling of the above exception, another exception occurred:
web |
web | Traceback (most recent call last):
web | File "/usr/local/lib/python3.12/logging/__init__.py", line 1160, in emit
web | msg = self.format(record)
web | ^^^^^^^^^^^^^^^^^^^
web | File "/usr/local/lib/python3.12/logging/__init__.py", line 999, in format
web | return fmt.format(record)
web | ^^^^^^^^^^^^^^^^^^
web | File "/proj/work/utils/log/std_logger.py", line 46, in format
web | raise e
web | File "/proj/work/utils/log/std_logger.py", line 44, in format
web | json_string = json.dumps(log_message)
web | ^^^^^^^^^^^^^^^^^^^^^^^
web | File "/usr/local/lib/python3.12/json/__init__.py", line 231, in dumps
web | return _default_encoder.encode(obj)
web | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
web | File "/usr/local/lib/python3.12/json/encoder.py", line 200, in encode
web | chunks = self.iterencode(o, _one_shot=True)
web | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
web | File "/usr/local/lib/python3.12/json/encoder.py", line 258, in iterencode
web | return _iterencode(o, 0)
web | ^^^^^^^^^^^^^^^^^
web | File "/usr/local/lib/python3.12/json/encoder.py", line 180, in default
web | raise TypeError(f'Object of type {o.__class__.__name__} '
web | TypeError: Object of type type is not JSON serializable
web | Call stack:
web | File "/usr/local/lib/python3.12/threading.py", line 1030, in _bootstrap
web | self._bootstrap_inner()
web | File "/usr/local/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
web | self.run()
web | File "/usr/local/lib/python3.12/threading.py", line 1010, in run
web | self._target(*self._args, **self._kwargs)
web | File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 92, in _worker
web | work_item.run()
web | File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
web | result = self.fn(*self.args, **self.kwargs)
proxy | 172.21.0.1 - - [05/Mar/2024:20:11:57 +0000] "GET /_dash-component-suites/dash_bootstrap_components/_components/dash_bootstrap_components.v1_5_0m1709669432.min.js HTTP/1.1" 500 265 "http://localhost/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:123.0) Gecko/20100101 Firefox/123.0" "-"
web | File "/proj/.venv/lib/python3.12/site-packages/gunicorn/workers/gthread.py", line 282, in handle
web | keepalive = self.handle_request(req, conn)
web | File "/proj/.venv/lib/python3.12/site-packages/gunicorn/workers/gthread.py", line 334, in handle_request
web | respiter = self.wsgi(environ, resp.start_response)
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 1488, in __call__
web | return self.wsgi_app(environ, start_response)
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 1466, in wsgi_app
web | response = self.handle_exception(e)
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 807, in handle_exception
web | self.log_exception(exc_info)
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 828, in log_exception
web | self.logger.error(
web | Message: 'Exception on /_dash-component-suites/dash_bootstrap_components/_components/dash_bootstrap_components.v1_5_0m1709669432.min.js [GET]'
web | Arguments: ()
proxy | 172.21.0.1 - - [05/Mar/2024:20:11:57 +0000] "GET /assets/css/bootstrap/theme/light/bootstrap.min.css.map HTTP/1.1" 404 207 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:123.0) Gecko/20100101 Firefox/123.0" "-"
</code></pre>
<p><strong>In Browser(F12)</strong></p>
<pre><code>GET
http://localhost/_dash-component-suites/dash_bootstrap_components/_components/dash_bootstrap_components.v1_5_0m1709669432.min.js
[HTTP/1.1 500 INTERNAL SERVER ERROR 14ms]
Uncaught (in promise)
error { target: script, isTrusted: true, srcElement: script, eventPhase: 0, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, composed: false, timeStamp: 1216, … }
</code></pre>
<p><strong>For dash_ag_grid</strong></p>
<pre><code>| --- Logging error ---
web | Traceback (most recent call last):
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 1463, in wsgi_app
web | response = self.full_dispatch_request()
web | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 872, in full_dispatch_request
web | rv = self.handle_user_exception(e)
web | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 870, in full_dispatch_request
web | rv = self.dispatch_request()
web | ^^^^^^^^^^^^^^^^^^^^^^^
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 855, in dispatch_request
web | return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
web | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
web | File "/proj/.venv/lib/python3.12/site-packages/dash/dash.py", line 1005, in serve_component_suites
web | _validate.validate_js_path(self.registered_paths, package_name, path_in_pkg)
web | File "/proj/.venv/lib/python3.12/site-packages/dash/_validate.py", line 365, in validate_js_path
web | raise exceptions.DependencyException(
web | dash.exceptions.DependencyException: Error loading dependency. "dash_ag_grid" is not a registered library.
web | Registered libraries are:
web | ['dash', 'plotly', 'dash_bootstrap_components']
web |
web | During handling of the above exception, another exception occurred:
web |
web | Traceback (most recent call last):
web | File "/usr/local/lib/python3.12/logging/__init__.py", line 1160, in emit
web | msg = self.format(record)
web | ^^^^^^^^^^^^^^^^^^^
web | File "/usr/local/lib/python3.12/logging/__init__.py", line 999, in format
web | return fmt.format(record)
web | ^^^^^^^^^^^^^^^^^^
web | File "/proj/work/utils/log/std_logger.py", line 46, in format
web | raise e
web | File "/proj/work/utils/log/std_logger.py", line 44, in format
web | json_string = json.dumps(log_message)
web | ^^^^^^^^^^^^^^^^^^^^^^^
web | File "/usr/local/lib/python3.12/json/__init__.py", line 231, in dumps
web | return _default_encoder.encode(obj)
web | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
web | File "/usr/local/lib/python3.12/json/encoder.py", line 200, in encode
web | chunks = self.iterencode(o, _one_shot=True)
web | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
web | File "/usr/local/lib/python3.12/json/encoder.py", line 258, in iterencode
web | return _iterencode(o, 0)
web | ^^^^^^^^^^^^^^^^^
web | File "/usr/local/lib/python3.12/json/encoder.py", line 180, in default
web | raise TypeError(f'Object of type {o.__class__.__name__} '
web | TypeError: Object of type type is not JSON serializable
web | Call stack:
web | File "/usr/local/lib/python3.12/threading.py", line 1030, in _bootstrap
web | self._bootstrap_inner()
web | File "/usr/local/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
web | self.run()
web | File "/usr/local/lib/python3.12/threading.py", line 1010, in run
web | self._target(*self._args, **self._kwargs)
web | File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 92, in _worker
web | work_item.run()
web | File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
web | result = self.fn(*self.args, **self.kwargs)
web | File "/proj/.venv/lib/python3.12/site-packages/gunicorn/workers/gthread.py", line 282, in handle
web | keepalive = self.handle_request(req, conn)
web | File "/proj/.venv/lib/python3.12/site-packages/gunicorn/workers/gthread.py", line 334, in handle_request
web | respiter = self.wsgi(environ, resp.start_response)
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 1488, in __call__
web | return self.wsgi_app(environ, start_response)
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 1466, in wsgi_app
web | response = self.handle_exception(e)
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 807, in handle_exception
web | self.log_exception(exc_info)
web | File "/proj/.venv/lib/python3.12/site-packages/flask/app.py", line 828, in log_exception
web | self.logger.error(
web | Message: 'Exception on /_dash-component-suites/dash_ag_grid/dash_ag_grid.v31_0_1m1709565075.min.js [GET]'
</code></pre>
<p>Any suggestions on how to avoid this error?</p>
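<p>Two separate things are failing here. The registration error itself commonly means <code>dash_bootstrap_components</code> was not imported (and used in the layout) in the worker process that created the Dash app. Separately, the cascading "Logging error" comes from the custom formatter in <code>std_logger.py</code> calling <code>json.dumps()</code> on a log record that contains a class object. The formatter side can be made robust with <code>default=str</code>; a minimal sketch (the record layout below is an assumption, not the actual <code>std_logger.py</code> internals):</p>

```python
import json

# a record shaped like the one the formatter serializes (layout is an assumption);
# the offending value is a class object, e.g. coming from exc_info
log_message = {
    "msg": "Exception on /_dash-component-suites/... [GET]",
    "exc_type": TimeoutError,
}

# json.dumps(log_message) would raise:
#   TypeError: Object of type type is not JSON serializable
safe = json.dumps(log_message, default=str)  # stringify unserializable values
```

<p>With <code>default=str</code>, the formatter degrades gracefully instead of raising inside <code>emit()</code>, so the real Dash exception stays visible in the logs.</p>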
|
<python><plotly><plotly-dash>
|
2024-03-05 21:49:04
| 0
| 962
|
winter
|
78,110,698
| 11,918,054
|
Concatenate dictionary of lists into a single list
|
<p>I have the following dictionary:</p>
<pre><code>myDict = dict({'red': [1, 2],
'blue': [3, 4]})
</code></pre>
<p>And I would like to concatenate the key and value pairs into a single list:</p>
<pre><code>['red_1', 'red_2', 'blue_3', 'blue_4']
</code></pre>
<p>What is the most efficient way to accomplish this?</p>
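<p>A nested list comprehension over <code>items()</code> does this in one pass; since dicts preserve insertion order on Python 3.7+, the output order matches the literal:</p>

```python
myDict = {'red': [1, 2], 'blue': [3, 4]}

# one f-string per (key, value) pair, flattened across all lists
result = [f'{key}_{value}' for key, values in myDict.items() for value in values]
print(result)  # → ['red_1', 'red_2', 'blue_3', 'blue_4']
```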
|
<python>
|
2024-03-05 21:39:14
| 2
| 555
|
djc55
|
78,110,697
| 16,332,690
|
converting numpy datetime64 in a numba jitclass to unix timestamp
|
<p>For the sake of readability I want to be able to supply a numpy.datetime64 object to a numba jitclass which is converted to a unix epoch timestamp in float format within the class itself.</p>
<p>I currently have to resort to calculating the unix timestamp prior to creating the jitclass object and supply this as a parameter, e.g.:</p>
<pre><code>>>> import numpy as np
>>> (np.datetime64('2024-01-01T00:00:00') - np.datetime64('1970-01-01T00:00:00')) / np.timedelta64(1, 's')
1704067200.0
</code></pre>
<p>Suppose that I create the following jitclass that takes a numpy datetime as parameter, how can I create a method within the class that converts the datetime into a unix timestamp? The ultimate goal would be to supply a start and end date when creating the jitclass object, which are then converted to unix timestamps in order to create an array of timestamps using <code>np.arange()</code>.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numba.experimental import jitclass
from numba import types
spec=[
('start', types.NPDatetime('s'))
]
@jitclass(spec)
class Foo():
def __init__(self, start):
self.start = start
obj = Foo(np.datetime64('2024-01-01T00:00:00'))
</code></pre>
<pre><code>>>> obj.start
numpy.datetime64('2024-01-01T00:00:00')
</code></pre>
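<p>The epoch arithmetic the question performs manually can be factored into a plain-NumPy helper. Whether the identical expression compiles inside a jitclass method depends on numba's (limited) <code>datetime64</code> support, so treat the in-class use as something to verify against your numba version; outside numba the sketch is:</p>

```python
import numpy as np

EPOCH = np.datetime64('1970-01-01T00:00:00')

def to_unix(dt64):
    # subtract the epoch and divide by one second, exactly as in the question
    return (dt64 - EPOCH) / np.timedelta64(1, 's')

print(to_unix(np.datetime64('2024-01-01T00:00:00')))  # → 1704067200.0
```

<p>With start and end converted this way, <code>np.arange(start_ts, end_ts, step)</code> yields the desired timestamp array.</p>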
|
<python><numba>
|
2024-03-05 21:38:42
| 1
| 308
|
brokkoo
|
78,110,543
| 3,388,962
|
Save Jupyter notebook as a functional custom-format / single-page PDF
|
<p>The Jupyter ecosystem offers different ways to create a PDF; however, none suits me well so far – either because the process is buggy, because not all features are supported, or because it looks cluttered.</p>
<p>To narrow down the question: I generally like the output of <code>pdfviahtml</code>:</p>
<pre class="lang-bash prettyprint-override"><code>jupyter nbconvert --to pdfviahtml notebook.ipynb
</code></pre>
<p>Specifically, I like the math typesetting, the typesetting in general, the support of vector graphics (SVGs), the aesthetics of code blocks and also the custom format, (almost) single-page PDF.</p>
<p>The last function in particular contributes to readability, as longer code segments are displayed as a single block.</p>
<p>However, unfortunately, external images that are added to a Markdown cell are lost (see <a href="https://github.com/jupyter/nbconvert/issues/1938" rel="nofollow noreferrer">this issue</a>). Also, links to sections within the same notebook are displayed, but they are not operational.</p>
<p>For me, the next best alternative is to convert the notebook to HTML (which also looks great), and to create a PDF by using the browser's printing / save as PDF feature.</p>
<pre class="lang-bash prettyprint-override"><code>jupyter nbconvert --to html notebook.ipynb
# + save HTML as PDF in browser
</code></pre>
<p>However, the result is a multi-page PDF that is not easy to read. Therefore the...</p>
<p><strong>Question</strong>: Does anyone know how to create a custom-format / single-page PDF from an HTML page that maintains links and generally preserves the appearance? A solution directly using Jupyter is preferred, but browser extensions or other tricks are also welcome.</p>
<hr />
<p><strong>Details</strong>: I explored the following methods with Jupyter and nbconvert</p>
<pre class="lang-bash prettyprint-override"><code># 1: --to pdf:
# Pro: Generated using LaTeX
# Con: Creates a multi-page PDF, and therefore looks cluttered
jupyter nbconvert --to pdf 01-signals-solutions.ipynb
# 2: --to pdfviahtml:
# Pro: Nice code blocks, nice typesetting, also of equations
# Pro: Custom-format / single-page PDF
# Con: Main problem: external images in Markdown cells are not included
#
jupyter nbconvert --to pdfviahtml 01-signals-solutions.ipynb
# 3: --to latex + separate call to pdflatex or xelatex
# Pro: Looks like a LaTeX document
# Con: Looks similar to option 1
jupyter nbconvert --to latex 01-signals-solutions.ipynb
# 4: --to html + save as PDF
# Pro: Looks great in HTML
# Con: Features are not preserved when saved as PDF
# 5: --to webpdf
# Similar as 2: --to pdfviahtml
# Pro: Similar output as --to pdfviahtml,
# Pro: Within-document links are operational
# Con: Code-generated plots are often cropped
# Con: External images in Markdown cells are lost
jupyter nbconvert --to webpdf 01-signals-solutions.ipynb
# 6: --to PDFviaHTML
# Similar as 2, if not same
jupyter nbconvert --to PDFviaHTML 01-signals-solutions.ipynb
</code></pre>
<p>I tested with Jupyter 1.0.0 and nbconvert 7.16.2, with Python 3.10 and 3.11.</p>
|
<python><pdf><jupyter-notebook><file-conversion>
|
2024-03-05 21:04:52
| 0
| 9,959
|
normanius
|
78,110,521
| 20,075,659
|
boto3 change the endpoint of session
|
<p>I want to use LakeFS configuration for my boto3 session. I tried the following, but it still gives me the default endpoint:</p>
<pre><code>s3 = boto3.resource('s3',
endpoint_url=lakefsEndPoint,
aws_access_key_id=lakefsAccessKey,
aws_secret_access_key=lakefsSecretKey
)
s3_session = boto3.Session(
aws_access_key_id=lakefsAccessKey,
aws_secret_access_key=lakefsSecretKey
)
</code></pre>
<p>How can I get a session that uses the LakeFS endpoint? I want to write the data using AWS Wrangler, which is why I need the boto3 session.</p>
|
<python><lakefs>
|
2024-03-05 20:58:59
| 1
| 396
|
Anon
|
78,110,407
| 22,407,544
|
'ProgrammingError: column does not exist' in Django
|
<p>I've been moving development of my website over to Docker. I replaced SQLite as my database with PostgreSQL, then ran <code>docker-compose exec web python manage.py migrate</code> in my Docker environment. I also updated the <code>MEDIA_ROOT</code> and <code>MEDIA_URL</code> settings in my settings.py file and updated mysite/urls.py. When I go to <code>127.0.0.1:8000/admin</code> to look at stored/uploaded files I get an error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The above exception (column human_requirementschat.alias does not exist
LINE 1: SELECT "human_requirementschat"."id", "human_requirementscha...
^
) was the direct cause of the following exception:
File "/usr/local/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/contrib/admin/options.py", line 688, in wrapper
return self.admin_site.admin_view(view)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/utils/decorators.py", line 134, in _wrapper_view
response = view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/views/decorators/cache.py", line 62, in _wrapper_view_func
response = view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/contrib/admin/sites.py", line 242, in inner
return view(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/utils/decorators.py", line 46, in _wrapper
return bound_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/utils/decorators.py", line 134, in _wrapper_view
response = view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/contrib/admin/options.py", line 2065, in changelist_view
"selection_note": _("0 of %(cnt)s selected") % {"cnt": len(cl.result_list)},
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 380, in __len__
self._fetch_all()
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 1881, in _fetch_all
self._result_cache = list(self._iterable_class(self))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 91, in __iter__
results = compiler.execute_sql(
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1560, in execute_sql
cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 102, in execute
return super().execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Exception Type: ProgrammingError at /admin/human/requirementschat/
Exception Value: column human_requirementschat.alias does not exist
LINE 1: SELECT "human_requirementschat"."id", "human_requirementscha...
^
</code></pre>
<p>Here are the relevant files:</p>
<p>urls.py:</p>
<pre><code>from django.conf import settings # new
from django.conf.urls.static import static # new
from django.contrib import admin
from django.urls import include, path
#from translator import views
urlpatterns = [
path('', include('homepage.urls')),#redirects to transcribe/,
path('transcribe/', include('transcribe.urls')),
path('human/', include('human.urls')),
path('admin/', admin.site.urls),
]+ static(
settings.MEDIA_URL, document_root=settings.MEDIA_ROOT
) # new
</code></pre>
<p>settings.py:</p>
<pre><code>DATABASES = {
"default": env.dj_db_url("DATABASE_URL",
default="postgres://postgres@db/postgres")
}#new
MEDIA_URL='/media/' #new
MEDIA_ROOT = BASE_DIR/'media' #new
</code></pre>
<p>models.py:</p>
<pre><code>class RequirementsChat(models.Model):
id = models.CharField(primary_key=True, max_length=38)
#id = models.UUIDField(default=uuid.uuid4, unique=True, primary_key=True, max_length=37)
#id_temp = models.UUIDField(default=uuid.uuid4, unique=True)
alias = models.CharField(max_length=20, blank=True, null=True)
email = models.CharField(max_length=80, blank=True, null=True)
language = models.CharField(max_length=10, blank=True, null=True)
due_date = models.CharField(max_length=10, blank=True, null=True)
subtitle_type = models.CharField(max_length=10, blank=True, null=True)
transcript_file_type = models.CharField(max_length=10, blank=True, null=True)
additional_requirements = models.TextField(max_length=500, blank=True, null=True)
date = models.DateTimeField(auto_now_add=True, blank=True, null=True)
url = models.CharField(max_length=250, blank=True, null=True)
task_completed = models.BooleanField(default=False)
class UploadedFile(models.Model):
input_file = models.FileField(upload_to='human_upload/')#new
chat_id = models.CharField(max_length=33, null= True)
requirements_chat = models.ForeignKey(RequirementsChat, on_delete=models.CASCADE, related_name='human_upload', null=True)
</code></pre>
<p>I deleted all my migration files and ran <code>makemigrations</code>, <code>makemigrations human</code> and <code>migrate</code> several times but the same error always occurs. Grateful for any help.</p>
|
<python><sql><django><postgresql><docker-compose>
|
2024-03-05 20:29:53
| 1
| 359
|
tthheemmaannii
|
78,110,341
| 8,547,986
|
Rectangle appearing before replacement transform
|
<p>I am trying to re-create Google's logo using Manim.
The part I am struggling with is that the blue slab appears before the transformation. My understanding of <code>ReplacementTransform</code> is that the target does not appear before the transformation, and only starts to appear as the animation proceeds. In my case, however, the blue slab is somehow visible before the transformation.</p>
<p>Is my understanding of <code>ReplacementTransform</code> correct, or am I using it wrong?</p>
<pre class="lang-py prettyprint-override"><code>from manim import *
class Props:
blue = "#4285F4"
red = "#DB4437"
yellow = "#F4B400"
green = "#0F9D58"
dot_radius = 0.5
arc_inner_radius = 1
arc_outer_radius = 1.5
arc_angle = PI / 2
blue_start_angle = -PI / 4
red_start_angle = -7 * PI / 4
yellow_start_angle = -5 * PI / 4
green_start_angle = -3 * PI / 4
class GoogleLogo(Scene):
def create_dot(self, color, radius, **kwargs):
return Dot(color=color, radius=radius, **kwargs)
def arrange_dots(self, blue, red, yellow, green):
blue.move_to((-3, 0, 0))
red.move_to((-1, 0, 0))
yellow.move_to((1, 0, 0))
green.move_to((3, 0, 0))
def annulus_sector(
self,
color,
start_angle,
angle=PI / 2,
inner_radius=Props.arc_inner_radius,
outer_radius=Props.arc_outer_radius,
):
return AnnularSector(
inner_radius=inner_radius,
outer_radius=outer_radius,
color=color,
fill_opacity=1.0,
start_angle=start_angle,
angle=angle,
stroke_width=0,
)
def construct(self):
blue, red, green, yellow = [
self.create_dot(c, Props.dot_radius)
for c in (Props.blue, Props.red, Props.green, Props.yellow)
]
self.arrange_dots(blue, red, yellow, green)
dots = VGroup(blue, red, yellow, green)
bounce_animation = [
AnimationGroup(
x.animate(rate_func=there_and_back).shift(UP * 0.5),
x.animate(rate_func=there_and_back).shift(DOWN * 0.5),
)
for x in dots
]
for _ in range(1):
self.play(LaggedStart(*bounce_animation, lag_ratio=0.1))
# transform dots to letter G
blue_annulus, red_annulus, yellow_annulus, green_annulus = [
self.annulus_sector(c, a, ang)
for c, a, ang in zip(
[Props.blue, Props.red, Props.yellow, Props.green],
[
Props.blue_start_angle,
Props.red_start_angle,
Props.yellow_start_angle,
Props.green_start_angle,
],
[PI / 4] + [PI / 2] * 3,
)
]
dots_minus_blue = VGroup(red, yellow, green)
sectors_minus_blue = VGroup(red_annulus, yellow_annulus, green_annulus)
transformations = [
ReplacementTransform(dot, sect, path_func=utils.paths.path_along_arc(-PI))
for dot, sect in zip(dots_minus_blue, sectors_minus_blue)
]
blue_slab = Rectangle(height=0.5, width=1.5, color=Props.blue, fill_opacity=1)
blue_slab.next_to(blue_annulus, UP, aligned_edge=UR, buff=0)
blue_slab.set_stroke(width=0)
blue_slab.shift(UP * 0.1)
blue_annulus.set_stroke(width=0)
dot_to_slab = ReplacementTransform(blue, blue_slab)
slab_to_annulus = ReplacementTransform(blue_slab.copy(), blue_annulus)
self.play(
LaggedStart(
*transformations,
Succession(dot_to_slab, slab_to_annulus),
lag_ratio=0.9
)
)
self.wait(3)
</code></pre>
|
<python><manim>
|
2024-03-05 20:15:46
| 1
| 1,923
|
monte
|
78,110,149
| 53,491
|
Can I get a copy of the raw request from python's requests library?
|
<p>The following code works correctly with <code>requests</code>, using <code>requests-aws4auth</code> to connect to AWS:</p>
<pre><code> from requests_aws4auth import AWS4Auth
aws4auth = AWS4Auth(access_key, secret_key, 'us-east-1', 's3')
response = requests.put(
url,
auth=aws4auth,
data=content)
</code></pre>
<p>I'm trying to get this to work in aiohttp and asyncio. aiohttp only natively deals with user/password authorization, so I need to pass the actual headers.</p>
<p>I'm trying to get the headers out of the response object, but obviously, they're not in there... those are the headers of the response. I want the headers of the request!</p>
<p>if I do:</p>
<pre><code> request = requests.Request("put",
url,
auth= aws4auth,
data=content)
</code></pre>
<p>it just puts the aws4auth object in the request object, and has nothing in the headers field.</p>
<p>I'd like to know exactly what is being sent over the wire. Is that possible?</p>
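<p>Yes: <code>requests</code> can build and finalize a request without sending it. <code>Request(...).prepare()</code> returns a <code>PreparedRequest</code>, and preparing is what runs auth hooks such as <code>AWS4Auth</code>, encodes the body, and fills in the headers that actually go on the wire. A sketch with a placeholder URL and the auth line commented out (substitute your real <code>aws4auth</code> object):</p>

```python
import requests

# build the request without sending it; prepare() runs auth hooks, encodes
# the body, and finalizes the headers
req = requests.Request(
    "PUT",
    "https://example.com/bucket/key",  # placeholder URL
    data=b"content",
    # auth=aws4auth,                   # the AWS4Auth object would go here
)
prepared = req.prepare()

print(prepared.method)                 # the finalized method
print(dict(prepared.headers))          # the on-the-wire request headers
```

<p>These headers can then be passed to aiohttp. For a request that has already been sent, the same object is available afterwards as <code>response.request</code>.</p>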
|
<python><python-requests>
|
2024-03-05 19:31:28
| 1
| 12,317
|
Brian Postow
|
78,109,993
| 1,047,788
|
Prepend a marker string in front of each line of another program's output, while preserving aligned tab-formatted tables
|
<p>I have a program that prints tab-formatted output. (SO renders the tabs below as spaces; in question edit mode they are tabs.)</p>
<pre><code>Some Key: Value
Another: Different value
</code></pre>
<p>When I run this program as subprocess, I intentionally prepend <code>>:</code> in front of every line, to distinguish what is my output and what is coming from the subprocess. Problem is, prepending these two chars sometimes breaks the column alignment.</p>
<pre><code>>: Some Key: Value
>: Another: Different value
</code></pre>
<p>Is there a way to preserve the alignment while adding few characters in front of the lines? Ideally, I would want not to introduce tabs to outputs that originally did not contain tabs. Also, I am printing the output line by line as the subprocess is running, so in the beginning I might not yet know if it will produce tabs or not.</p>
<p>I looked at Unicode, if there is some magic hiding there, and found nothing likely.</p>
<p>I also investigated the TBC control sequence <a href="https://terminalguide.namepad.de/seq/csi_sg/" rel="nofollow noreferrer">https://terminalguide.namepad.de/seq/csi_sg/</a>. My outputs go into Jenkins build console, so I can plausibly use ANSI control sequences.</p>
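<p>One stdlib-only approach: expand tabs to spaces <em>first</em>, so the column positions are resolved before the marker changes them, then prepend the marker. Every column shifts uniformly by <code>len(marker)</code>, so relative alignment survives, and no tabs are introduced into output that had none once the line is expanded. A sketch assuming the common tab stop of 8:</p>

```python
def prefix_line(line: str, marker: str = '>: ', tabsize: int = 8) -> str:
    # resolve tab stops before the marker shifts column positions,
    # then shift the whole line uniformly by len(marker)
    return marker + line.expandtabs(tabsize)

output = 'Some Key:\tValue\nAnother:\tDifferent value'
for line in output.splitlines():
    print(prefix_line(line))
```

<p>This works while streaming line by line, since each line is handled independently of whether later lines contain tabs.</p>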
|
<python><unicode><vertical-alignment><ansi-escape><tabstop>
|
2024-03-05 19:00:12
| 0
| 29,820
|
user7610
|
78,109,990
| 4,852,094
|
Ignore a Field in a type annotation
|
<p>If I have a pydantic class like:</p>
<pre><code>from typing import Annotated, Any, ClassVar, get_args, get_origin
from pydantic import BaseModel
class IgnorableBaseModel(BaseModel):
_ignored: ClassVar[dict[str, Any]] = {}
def __getattr__(self, attr):
"""do something with _ignored, else fallback to default"""
def __init_subclass__(cls, **kwargs) -> None:
del_keys = []
for key, annotation in cls.__annotations__.items():
if key.startswith("_"): # exclude protected/private attributes
continue
if get_origin(annotation) is Annotated:
if get_args(annotation)[1] == "ignore me":
cls._ignored[key] = cls.__annotations__[key]
del_keys.append(key)
for key in del_keys:
del cls.__annotations__[key]
class MyClass(IgnorableBaseModel):
name: Annotated[str, "ignore me"] # ignore this
x: int
</code></pre>
<p>I am using this name for a variable that is defined during the <code>__init__</code> and accessed via <code>__getattr__</code> so I can't set a value for it. Is there a way to tell mypy to ignore this, or do I need to always override the init args like:</p>
<pre><code>class MyClass(IgnorableBaseModel):
name: Annotated[str, "ignore me"]
x: int
def __init__(self, x: int):
self.x = x
</code></pre>
<p>It'd be great if there was an annotation I could add that would inform mypy that this variable was not needed during the init.</p>
<p>"ignore me" is used here to indicate that I don't want to see the error:</p>
<pre><code>Missing named argument "name" for "MyClass" [call-arg]
</code></pre>
<p>To provide some background, I'm trying to make a python DSL and so it helps to be able to have some attributes type hinted, but not actually require values.</p>
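<p>The <code>Annotated</code>-unwrapping step in <code>__init_subclass__</code> can be exercised on its own: <code>get_origin</code> returns <code>Annotated</code> only for annotated types, and the marker lives in the metadata tuple returned by <code>get_args</code>. A standalone sketch of that part:</p>

```python
from typing import Annotated, get_args, get_origin

def ignored_fields(annotations):
    # collect fields whose Annotated metadata carries the "ignore me" marker
    out = {}
    for name, ann in annotations.items():
        if get_origin(ann) is Annotated and "ignore me" in get_args(ann)[1:]:
            out[name] = ann
    return out

fields = {"name": Annotated[str, "ignore me"], "x": int}
print(list(ignored_fields(fields)))  # → ['name']
```

<p>Whether mypy then stops demanding the argument is a separate question: deleting the annotation at runtime does not change what mypy sees statically, so a plugin or an explicit <code>__init__</code> override is likely still needed on the type-checking side.</p>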
|
<python><python-typing><pydantic>
|
2024-03-05 18:58:55
| 1
| 3,507
|
Rob
|
78,109,742
| 12,309,386
|
Polars scan_ndjson does not work with streaming?
|
<p>I am attempting to read data from a large (300GB) newline-delimited JSON file, extract specific fields of interest and write them to a parquet file. This question is a follow-up on <a href="https://stackoverflow.com/questions/78104587/polars-efficiently-extract-subset-of-fields-from-array-of-json-objects-list-of">my previous SO question</a> which has more background and the structure of the data I'm working with.</p>
<p>Each line/JSON object is independent of the others, so I would have imagined this could be handled in a streaming fashion, processing the file (which is too large to fit in memory) in chunks.</p>
<p>The code that does the actual scan, collect and write is very simple:</p>
<pre class="lang-py prettyprint-override"><code># define the schema...
pl.scan_ndjson(
'data/input/myjson.jsonl',
schema=prschema)\
.collect(streaming=True)\
.write_parquet('data/output/myparquet.parquet',
compression='snappy',
use_pyarrow=True
)
</code></pre>
<p>However, as I work with increasingly larger subsets of my final file, I see that memory consumption increases linearly with input file size.</p>
<p>If I check the explain plan using <code>explain(streaming=True)</code> I see that streaming is NOT being used:</p>
<pre><code>Anonymous SCAN
PROJECT */6 COLUMNS
</code></pre>
<p>So my question is, why does streaming not appear to work for this seemingly straightforward read/write use case?</p>
<hr />
<p><strong>UPDATED</strong></p>
<p>Using <code>sink_parquet</code> instead of <code>write_parquet</code> does not work (in fact it's what I had originally tried). To be sure that it wasn't due to the complex nature of my JSON files, I even tried a very simplified version that just attempts to write two scalar (no nested objects) fields:</p>
<pre class="lang-py prettyprint-override"><code>pl.scan_ndjson('data/input/myjson.jsonl')\
.select('id', 'standing')\
.sink_parquet(
'data/output/myparquet.parquet',
compression='snappy'
)
</code></pre>
<p>This throws <code>InvalidOperationError: sink_Parquet(ParquetWriteOptions { compression: Snappy, statistics: false, row_group_size: None, data_pagesize_limit: None, maintain_order: true }) not yet supported in standard engine. Use 'collect().write_parquet()'</code></p>
|
<python><json><dataframe><python-polars>
|
2024-03-05 18:11:42
| 1
| 927
|
teejay
|
78,109,592
| 1,217,178
|
Pytest/Mock keeping around extra object references in case of caught exceptions
|
<p>I am running into a strange issue using pytest and mock: I am trying to create a call to <code>__del__</code> by deleting an object using <code>del ...</code>. According to the documentation, <code>del</code> only reduces the reference counter on the object that is being "deleted" and only actually deletes the object if nobody else is still holding a reference to it. It seems like if Mock throws an exception, that somehow leads to someone grabbing and keeping an extra reference to the object.</p>
<p>I put together a quick demo test to show the issue: The only difference between <code>test_del_passes</code> (which completes just fine) and <code>test_del_fails</code> (which fails with the last assertion, i.e. <code>del del_test</code> does not cause a call to <code>del_test.__del__()</code>) is that in the first one, <code>test_fn</code> returns a value, whereas in the second one, <code>test_fn</code> throws a <code>TimeoutError</code>. I tried deleting the <code>test_fn</code> object, or assigning <code>TimeoutError</code> to a variable instead and deleting that, but I simply can't find a way to get the second test to pass. So somewhere in the testing infra, someone is keeping an extra reference to <code>del_test</code> and I don't know who or why or how to get rid of it.</p>
<pre><code>import unittest.mock as mock
class DelTest:
def __init__(self, flags, test_fn):
self.flags = flags
self.test_fn = test_fn
def __del__(self):
self.flags[0] = 1
def run(self):
try:
self.test_fn()
except TimeoutError:
pass
def test_del_passes():
flags = [0]
test_fn = mock.Mock(side_effect=[True])
del_test = DelTest(flags, test_fn)
del_test.run()
assert flags[0] == 0
del del_test
assert flags[0] == 1
def test_del_fails():
flags = [0]
test_fn = mock.Mock(side_effect=[TimeoutError()])
del_test = DelTest(flags, test_fn)
del_test.run()
assert flags[0] == 0
del del_test
assert flags[0] == 1
</code></pre>
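<p>A plausible culprit: the <code>Mock</code> keeps the <code>TimeoutError</code> instance it raised (items given to <code>side_effect</code> stay referenced), and once an exception has been raised, its <code>__traceback__</code> references the frame of <code>run()</code>, whose locals include <code>self</code>, i.e. <code>del_test</code>. So the chain <code>test_fn → exception → traceback → frame → del_test</code> survives the <code>del</code>. The mechanism can be reproduced without mock; this sketch assumes CPython's immediate refcounting:</p>

```python
class Tracked:
    alive = True
    def __del__(self):
        Tracked.alive = False

def fail(obj):
    # the exception raised here carries a traceback referencing this frame,
    # whose locals include obj
    raise TimeoutError("boom")

def run(obj):
    try:
        fail(obj)
    except TimeoutError as exc:
        return exc  # holding the exception holds __traceback__, and the frames

states = []
tracked = Tracked()
exc = run(tracked)
del tracked
states.append(Tracked.alive)   # still alive: exc.__traceback__ -> frame -> obj
exc.__traceback__ = None       # drop the traceback chain
del exc
states.append(Tracked.alive)   # collected now (CPython refcounting)
```

<p>In the failing test, clearing the reference chain on the mock side (e.g. <code>test_fn.side_effect = None</code> and dropping <code>test_fn</code>) before the <code>del</code> should have the same effect.</p>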
|
<python><mocking><pytest><reference-counting><del>
|
2024-03-05 17:43:23
| 1
| 12,842
|
Markus A.
|
78,109,543
| 10,020,283
|
ValueError when calling inspect.signature on hashlib.md5
|
<p>I am encountering an exception when attempting to retrieve the signature of the <code>hashlib.md5</code> function:</p>
<pre><code>inspect.signature(hashlib.md5)
ValueError: 'usedforsecurity=?' is not a valid parameter name
</code></pre>
<p>on Python 3.10.8. What could be the reason and how can I avoid this?</p>
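<p>This typically happens when a C-implemented callable exposes a <code>__text_signature__</code> that <code>inspect</code> cannot parse — here presumably the OpenSSL-backed <code>_hashlib</code> build (e.g. from conda) advertising <code>usedforsecurity=?</code>. Since the signature string is baked into the extension module, the pragmatic workaround is to catch the failure; a sketch:</p>

```python
import hashlib
import inspect

def safe_signature(fn):
    # inspect.signature raises ValueError for missing/unparseable signatures
    # and TypeError for unsupported objects; treat both as "no signature"
    try:
        return inspect.signature(fn)
    except (ValueError, TypeError):
        return None

sig = safe_signature(hashlib.md5)  # a Signature on most builds, None on affected ones
```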
|
<python><gcc><conda><hashlib>
|
2024-03-05 17:35:06
| 1
| 6,792
|
mcsoini
|
78,109,262
| 893,254
|
How do I fix this circular dependency in Python?
|
<p>I have a circular dependency between two classes in Python. I am not sure how to resolve this circular dependency.</p>
<p>Both classes should be in their own module. At the moment, I am constrained to have both classes in the same module because of the dependency between them.</p>
<p>Example:</p>
<ul>
<li>There are two classes which depend on each other. <code>Scheduler</code> and <code>SystemCall</code>.</li>
</ul>
<pre><code>class Scheduler:
    def run(self):
        while self.task_map:
            task = self.ready_queue.get()
            result = task.run()
            if isinstance(result, SystemCall):
                result.set_task_and_scheduler(task, self)

class SystemCall:
    def set_task_and_scheduler(self, task: Task, scheduler: Scheduler) -> None:
        if not isinstance(scheduler, Scheduler):
            print('scheduler must be of type Scheduler')
</code></pre>
<p>The <code>ready_queue</code> can hold different tasks, some of which are of type <code>SystemCall</code>. If a system call is seen, then a function needs to be called to set information about the originating task, and scheduler, which is used as part of a callback mechanism.</p>
<p>In other OOP languages, it is possible to insert a forward declaration of the class to resolve the problem. Python doesn't have a direct equivalent.</p>
<p>Another way to resolve the problem might be to refactor the code to remove the inter-dependency. But I can't see an easy way to do this.</p>
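<p>Python's closest equivalent to a forward declaration is a lazily evaluated annotation: with <code>from __future__ import annotations</code>, type hints are stored as strings and only resolved on demand, so the annotated name does not need to exist at class-definition time. Across two modules you would combine this with an import guarded by <code>typing.TYPE_CHECKING</code> (for the annotation) plus a local import inside the method (for the runtime <code>isinstance</code> check). A single-file sketch of the idea:</p>

```python
from __future__ import annotations  # all annotations become lazy strings

class SystemCall:
    # 'Scheduler' is not defined yet, but the lazy annotation is fine;
    # across modules, import it under `if TYPE_CHECKING:` for the hint
    # and locally inside the method for the runtime check.
    def set_task_and_scheduler(self, task, scheduler: Scheduler) -> None:
        if not isinstance(scheduler, Scheduler):  # name looked up at call time
            raise TypeError('scheduler must be of type Scheduler')
        self.task = task
        self.scheduler = scheduler

class Scheduler:
    pass

call = SystemCall()
call.set_task_and_scheduler('task-1', Scheduler())
print(type(call.scheduler).__name__)
```

<p>This is a sketch of the pattern, not a drop-in refactor of your scheduler; the point is that only the annotation needs the name early, and it resolves lazily.</p>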
|
<python>
|
2024-03-05 16:44:33
| 1
| 18,579
|
user2138149
|
78,109,250
| 1,557,060
|
Running a python file that imports from Airflow package, requires airflow instance?
|
<p>I am running into a weird import issue with Airflow. I want to create a module from which others can import. I also want to run unit tests on this module. However, I noticed that as soon as you import anything from the airflow package, it will try and run Airflow.</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code># myfile.py
from airflow import DAG
print("Hello world")
</code></pre>
<p>Then run it with <code>python myfile.py</code>, results in:</p>
<pre><code>(.venv) c:\Users\Jarro\Development\airflow-tryout-import>python myfile.py
WARNING:root:OSError while attempting to symlink the latest log directory
Traceback (most recent call last):
  File "c:\Users\Jarro\Development\airflow-tryout-import\myfile.py", line 1, in <module>
    from airflow import DAG
  File "C:\Users\Jarro\Development\airflow-tryout-import\.venv\Lib\site-packages\airflow\__init__.py", line 68, in <module>
    settings.initialize()
  File "C:\Users\Jarro\Development\airflow-tryout-import\.venv\Lib\site-packages\airflow\settings.py", line 559, in initialize
    configure_orm()
  File "C:\Users\Jarro\Development\airflow-tryout-import\.venv\Lib\site-packages\airflow\settings.py", line 237, in configure_orm
    raise AirflowConfigException(
airflow.exceptions.AirflowConfigException: Cannot use relative path: `sqlite:///C:\Users\Jarro/airflow/airflow.db` to connect to sqlite. Please use absolute path such as `sqlite:////tmp/airflow.db`.
</code></pre>
<p>Aside from the error itself, I am actually way more concerned that it seems I am not able to import things from Airflow, without there being side-effects (such as database initializations). Am I going all wrong about this? Is there another way I can import things from Airflow without these side effects, for example for typing purposes?</p>
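<p>For typing-only uses there is a standard pattern that avoids importing <code>airflow</code> at runtime entirely: guard the import with <code>typing.TYPE_CHECKING</code>, a constant that is <code>True</code> only while a static type checker analyzes the code and <code>False</code> when the module actually runs. A sketch (<code>describe_dag</code> is a hypothetical helper, not Airflow API):</p>

```python
from __future__ import annotations  # keep annotations as strings at runtime

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only evaluated by type checkers (mypy, pyright), so Airflow's
    # import-time settings initialization is never triggered when
    # this module is imported or run.
    from airflow import DAG

def describe_dag(dag: DAG) -> str:
    # dag_id is assumed to be the usual DAG attribute
    return f"DAG: {dag.dag_id}"
```

<p>With this pattern the module (and its unit tests) can be imported on a machine where Airflow's side effects would otherwise fail; only code paths that genuinely construct Airflow objects need the real package.</p>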
|
<python><airflow><python-import>
|
2024-03-05 16:42:46
| 2
| 5,604
|
JarroVGIT
|
78,109,019
| 22,371,917
|
how to use subdomains in flask?
|
<pre class="lang-py prettyprint-override"><code>from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
return 'This is the main domain.'
@app.route('/', subdomain='<subdomain>')
def subdomain(subdomain):
return f'This is the subdomain: {subdomain}'
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>This doesn't work. When I run it locally, <code>localhost:5000</code> shows the main page, but <code>sub.localhost:5000</code> also shows the main page instead of the subdomain one. So I added this to my hosts file:</p>
<p>127.0.0.1 sub.localhost</p>
<p>but it still shows the main page for <code>sub.localhost:5000</code>. I also tried hosting services like Render, and an ngrok tunnel from localhost, and both give errors. How do I use subdomains and have them actually work?</p>
<p>Specifically: how do I make this work locally, through tunnel services like ngrok, and on hosting services like Render?</p>
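<p>A likely culprit (worth verifying against your Flask version): since Flask 1.0, subdomain matching is opt-in, so you must both pass <code>subdomain_matching=True</code> and set <code>SERVER_NAME</code>, because Flask needs to know where the host ends and the subdomain begins. A sketch under those assumptions (the host and port values are illustrative):</p>

```python
from flask import Flask

# subdomain matching is opt-in in Flask >= 1.0
app = Flask(__name__, subdomain_matching=True)
# SERVER_NAME tells Flask which suffix marks the end of the subdomain
app.config['SERVER_NAME'] = 'localhost:5000'

@app.route('/')
def index():
    return 'This is the main domain.'

@app.route('/', subdomain='<subdomain>')
def subdomain_view(subdomain):
    return f'This is the subdomain: {subdomain}'

# exercise both routes without a browser or hosts-file changes
client = app.test_client()
print(client.get('/', base_url='http://localhost:5000/').data)
print(client.get('/', base_url='http://sub.localhost:5000/').data)
```

<p>You still need the hosts-file entry (or a wildcard DNS record in production) so the browser can resolve <code>sub.localhost</code>; ngrok and Render additionally need a plan or configuration that forwards wildcard subdomains to your app, which is a separate hosting concern from the Flask routing above.</p>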
|
<python><flask>
|
2024-03-05 16:01:52
| 1
| 347
|
Caiden
|
78,108,984
| 4,269,851
|
Find all overlapping records in multiple lists
|
<p>How to find overlapping records in multiple lists (about 100)?</p>
<pre><code>dct = {'One': [1, 2, 3],
       'Two': [3],
       'Three': [0, 1, 5],
       'Four': [2, 5, 10, 11]}
</code></pre>
<p>I am having difficulty at the planning stage: if I were to draw a block diagram, what steps should the sequence include?</p>
<p>My solution is comparing every list against all the other lists, but this requires many loops; I assume there is a better approach. I know about <code>set.intersection()</code>, but how can I apply it here with more than two lists?</p>
<p>The output should contain the numbers that appear in more than one list, i.e.</p>
<pre><code>[1, 2, 3, 5]
</code></pre>
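<p>Since the expected output contains values that appear in at least two lists (a full <code>set.intersection()</code> of all four lists would be empty here), plain intersection alone won't produce it. One sketch is to count, per value, how many lists contain it, which stays linear in the total number of elements:</p>

```python
from collections import Counter

dct = {'One': [1, 2, 3],
       'Two': [3],
       'Three': [0, 1, 5],
       'Four': [2, 5, 10, 11]}

# For each value, count how many *different* lists it appears in;
# set() deduplicates repeats within a single list first.
counts = Counter(x for values in dct.values() for x in set(values))
overlapping = sorted(x for x, n in counts.items() if n > 1)
print(overlapping)  # [1, 2, 3, 5]
```

<p>If you did want elements common to <em>every</em> list instead, <code>set.intersection(*map(set, dct.values()))</code> does that in one call.</p>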
|
<python>
|
2024-03-05 15:54:01
| 2
| 829
|
Roman Toasov
|