| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,719,585
| 10,633,596
|
How to read headers that are sent in the webhook URL request
|
<p>I'm trying to read the additional custom headers that were sent when I made the curl request using the webhook URL of a Slack channel but I cannot find a way to read the custom header or query parameters. May I know if there is any way to determine them?</p>
<pre><code>from typing import Optional
import slack_sdk
import os
import logging
from pathlib import Path
from dotenv import load_dotenv
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler
logging.basicConfig(level=logging.DEBUG)
env_path = Path('.') / '.env'
load_dotenv(dotenv_path=env_path)
SLACK_VERIFICATION_TOKEN = os.environ['SLACK_VERIFICATION_TOKEN']
SLACK_SIGNING_SECRET = os.environ['SLACK_SIGNING_SECRET']
SLACK_BOT_TOKEN = os.environ['SLACK_BOT_TOKEN']
SLACK_APP_TOKEN = os.environ['SLACK_APP_TOKEN']
# Install the Slack app and get xoxb- token in advance
app = App(token=SLACK_BOT_TOKEN, signing_secret=SLACK_SIGNING_SECRET)
@app.event("message")
def handle_message(event, say, context):
    text = event["text"]
    channel = event["channel"]
    target_channel = '#slave2_public'

    # Access query parameters
    # query_params = context.request.query
    # print("Query Parameters:", query_params)

    # Access headers from the event
    headers = event.get("headers")
    print("Headers:", headers)

    # Access headers
    # headers = context.request.headers
    # print("Headers:", headers)


if __name__ == "__main__":
    # export SLACK_APP_TOKEN=xapp-***
    # export SLACK_BOT_TOKEN=xoxb-***
    handler = SocketModeHandler(app, SLACK_APP_TOKEN)
    handler.start()
</code></pre>
<p>My curl request to webhook URL:-</p>
<pre><code>curl -X POST -H 'Content-type: application/json' --data '{"text":"Hello, World!"}' -H "Channel_Name:slave1_private" <webhook URL>
</code></pre>
|
<python><slack-bolt>
|
2023-12-27 00:31:44
| 1
| 1,574
|
vinod827
|
77,719,512
| 12,719,086
|
How to send audio stream to an Icecast server in Python
|
<p>I am having some issues trying to properly send audio data from a file to an Icecast server in Python.</p>
<p>Here is my class :</p>
<pre><code>import requests
from base64 import b64encode
class IcecastClient:
    def __init__(self, host, port, mount, user, password, audio_info):
        self.host = host
        self.port = port
        self.mount = mount
        self.user = user
        self.password = password
        self.audio_info = audio_info  # Additional audio information
        self.stream_url = f"http://{host}:{port}{mount}"

    def connect(self):
        # Basic Auth Header
        auth_header = b64encode(f"{self.user}:{self.password}".encode()).decode("ascii")
        self.headers = {
            'Authorization': f'Basic {auth_header}',
            'Content-Type': 'audio/mpeg',
            'Ice-Public': '1',
            'Ice-Name': 'Auralyra Stream',
            'Ice-Description': 'Streaming with Auralyra',
            'Ice-Genre': 'Various',
            'Ice-Audio-Info': self.audio_info
        }

    def stream_audio_file(self, file_path, chunk_size=4096):
        with requests.Session() as session:
            session.headers = self.headers
            with open(file_path, 'rb') as audio_file:
                while True:
                    chunk = audio_file.read(chunk_size)
                    if not chunk:
                        break  # End of file
                    try:
                        response = session.put(self.stream_url, data=chunk)
                        if response.status_code != 200:
                            print(f"Streaming failed: {response.status_code} - {response.reason}")
                            break
                    except requests.RequestException as e:
                        print(f"Error while sending audio chunk: {e}")
                        break
            if response.status_code == 200:
                print("Streaming successful")

    def send_audio(self, audio_chunk):
        try:
            # Send the chunk using the session with predefined headers
            response = self.session.put(self.stream_url, data=audio_chunk)
            if response.status_code != 200:
                print(f"Streaming failed: {response.status_code} - {response.reason}")
        except Exception as e:
            print(f"Error while sending audio chunk: {e}")
</code></pre>
<p>The problem is that although Icecast recognizes the stream and the status of the mountpoint looks good, trying to listen to the stream doesn't work at all. My guess is that this has to do with how I send the data to the server.</p>
<p>PS: I am trying to avoid using libraries like 'shout' to do that</p>
|
<python><http><python-requests><icecast><icecast2>
|
2023-12-26 23:50:04
| 1
| 471
|
Polymood
|
77,719,408
| 15,494,335
|
How to make a python package's __init__.py file execute when debugging the __main__.py file?
|
<p>If I execute <code>python -m some_package</code> on the command line, with <code>some_package/__init__.py</code> containing <code>print("hello __init__.py")</code> and <code>some_package/__main__.py</code> containing <code>print("hello __main__.py")</code>, then both lines get printed, each once. But if I'm debugging <code>__main__.py</code> in an IDE (VSCode), only <code>__main__.py</code> executes, which would break behavior for anything other than a simple example. If I attempt to debug <code>some_package/__init__.py</code> instead, <code>__main__.py</code> never gets executed. How to fix this?</p>
<p>Update:
I just realized, if I import some_package from another script, only <code>__init__.py</code> is executed. So I guess I'm missing some important statements in <code>__init__.py</code> and that I should initiate debugging on that file instead?</p>
<p>Update 2:
From <a href="https://docs.python.org/3/tutorial/modules.html#packages" rel="nofollow noreferrer">Python Packages</a> I'm starting to question the purpose of <code>__main__.py</code> altogether?</p>
|
<python><vscode-debugger>
|
2023-12-26 23:02:35
| 2
| 707
|
mo FEAR
|
77,719,249
| 1,171,746
|
How to create a TypeGuard that mimics isinstance
|
<p>I have to check an object that may have been created by an API.
When I try using <code>isinstance(obj, MyClass)</code> I get a <code>TypeError</code> if <code>obj</code> was created by the API.</p>
<p>I wrote a custom function to handle this.</p>
<pre class="lang-py prettyprint-override"><code>def is_instance(obj: Any, class_or_tuple: Any) -> bool:
try:
return isinstance(obj, class_or_tuple)
except TypeError:
return False
</code></pre>
<p>The issue I am having is using <code>is_instance()</code> instead of the builtin <code>isinstance()</code> does not have any TypeGuard support, so the type checker complains.</p>
<pre class="lang-py prettyprint-override"><code>def my_process(api_obj: int | str) -> None:
if is_instance(api_obj, int):
process_int(api_obj)
else:
process_str(api_obj)
</code></pre>
<p>"Type int | str cannot be assigned to parameter ..."</p>
<p>How could I create a TypeGuard for this function?</p>
|
<python><python-typing><typeguards>
|
2023-12-26 22:01:51
| 1
| 327
|
Amour Spirit
|
77,719,065
| 1,874,170
|
Different repr for REPL?
|
<p>I want to present an alternative <code>repr</code> for an object <strong>iff</strong> it's being dumped into the REPL directly, not as part of a <code>Collection</code> and not as part of a different function call; this alternate representation contains a lot of very useful debugging information, but inherently breaks its use as a <em>composable</em> “repr” result.</p>
<p>Is this achievable in Python?</p>
<p>I tried this, but it didn't work as the "alternate" representation got printed unconditionally (but the stack isn't any deeper):</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
    ...
    def __str__(self):
        # Return a string which can be passed to __new__ to reconstruct this object
        ...
    def __repr__(self):
        if inspect.stack()[1].filename != '<stdin>':
            # Use a "proper" representation which is composeable
            return f'{self.__class__.__name__}({str(self)!r})'
        # Object is being dumped as a singleton into the REPL, break out the fireworks
        return f'{self.__pretty_print_extra_debug_repr()}\n{self.__class__.__name__}({str(self)!r})'
</code></pre>
|
<python><debugging>
|
2023-12-26 20:55:46
| 1
| 1,117
|
JamesTheAwesomeDude
|
77,718,512
| 893,254
|
Is there a way to prevent Visual Studio Code automatically hiding the "Add Code" and "Add Markdown" buttons when working with Jupyter Notebooks?
|
<p>When working with Jupyter Notebook files in VS Code, two buttons "Add Code" and "Add Markdown" appear below a Cell, when hovering over the location of the buttons.</p>
<p>Is there any way to prevent them from being automatically hidden when not being hovered-over by the mouse?</p>
<p>It seems like a bit of a strange UI design to hide them. I would find them easier to click if they were always shown by default.</p>
|
<python><visual-studio-code><jupyter-notebook>
|
2023-12-26 17:53:14
| 1
| 18,579
|
user2138149
|
77,718,293
| 12,696,223
|
Why BLAS cblas_sgemm in C is slower than np.dot?
|
<p>I made a simple benchmark between Python NumPy and C OpenBLAS to multiply two 500x500 matrices. It seems that <code>np.dot</code> performs almost 9 times faster than <code>cblas_sgemm</code>. Is there anything I'm doing wrong?</p>
<p>Results:</p>
<ul>
<li>Python: 0.001830 seconds</li>
<li>C: 0.016374 seconds</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import time
np.random.seed(42)
matrix_a = np.random.rand(500, 500)
matrix_b = np.random.rand(500, 500)
start_time = time.time()
result = np.dot(matrix_a, matrix_b)
end_time = time.time()
elapsed_time = end_time - start_time
print(f"{elapsed_time:.6f} seconds")
</code></pre>
<pre class="lang-c prettyprint-override"><code>#include <stdio.h>
#include <stdlib.h>
#include <cblas.h>
#include <time.h>
#define N 500
int main(void) {
    srand(42);

    float *matrix_a = (float *)malloc(N * N * sizeof(float));
    float *matrix_b = (float *)malloc(N * N * sizeof(float));
    float *result = (float *)malloc(N * N * sizeof(float));

    for (int i = 0; i < N * N; ++i) {
        matrix_a[i] = (float)rand() / RAND_MAX;
        matrix_b[i] = (float)rand() / RAND_MAX;
    }

    clock_t start_time = clock();
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                N, N, N,
                1.0, matrix_a, N, matrix_b, N,
                0.0, result, N);
    clock_t end_time = clock();

    float elapsed_time = (float)(end_time - start_time) / CLOCKS_PER_SEC;
    printf("%f seconds\n", elapsed_time);

    free(matrix_a);
    free(matrix_b);
    free(result);
    return 0;
}
</code></pre>
<p>Compiling with</p>
<pre><code>cc -lopenblas main.c
</code></pre>
<p>Also <code>gcc -O3 -march=native</code>: 0.016420 seconds</p>
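<p>One detail worth noting when comparing the two programs: <code>np.random.rand</code> returns <code>float64</code>, so the <code>np.dot</code> benchmark above runs in double precision (dgemm), while the C code calls the single-precision <code>cblas_sgemm</code> — the two are not timing the same BLAS kernel. A quick check:</p>

```python
import numpy as np

a = np.random.rand(500, 500)
print(a.dtype)  # float64 -- double precision, so np.dot dispatches to dgemm, not sgemm
```
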
<p>I installed openblas with <code>brew install openblas</code> on MacOS 10.15.7. Also the output of <code>np.show_config()</code>:</p>
<pre><code>openblas64__info:
libraries = ['openblas64_', 'openblas64_']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)]
runtime_library_dirs = ['/usr/local/lib']
blas_ilp64_opt_info:
libraries = ['openblas64_', 'openblas64_']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)]
runtime_library_dirs = ['/usr/local/lib']
openblas64__lapack_info:
libraries = ['openblas64_', 'openblas64_']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)]
runtime_library_dirs = ['/usr/local/lib']
lapack_ilp64_opt_info:
libraries = ['openblas64_', 'openblas64_']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)]
runtime_library_dirs = ['/usr/local/lib']
Supported SIMD extensions in this NumPy install:
baseline = SSE,SSE2,SSE3
found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2
not found = AVX512F,AVX512CD,AVX512_KNL,AVX512_SKX,AVX512_CLX,AVX512_CNL,AVX512_ICL
</code></pre>
|
<python><c><numpy><blas>
|
2023-12-26 16:54:52
| 0
| 990
|
Momo
|
77,718,249
| 13,146,544
|
Deploying a Flask app on Vercel is showing 404 Not Found
|
<p>My project structure:</p>
<pre><code>api
|
|--fsk.py (file)
|
|--static (folder)
| |
| |--css (folder containing css files)
| |
| |--scripts (folder containing js files)
|
|--templates (folder containing html files)
|
|--vercel.json (file)
|
|--requirements.txt (file)
</code></pre>
<p>The contents of the files are as follows:-</p>
<p>vercel.json:</p>
<pre><code>{
"version": 2,
"builds": [
{
"src": "./fsk.py",
"use": "@vercel/python"
},
{
"src": "./static/**",
"use": "@vercel/static"
},
{
"src": "./templates/**",
"use": "@vercel/templates"
}
],
"routes": [
{
"src": "/(.*)",
"dest": "./templates/index.html"
}
],
"outputdirectory": "templates"
}
</code></pre>
<p>requirements.txt:</p>
<pre><code>Flask==2.2.2
</code></pre>
<p>The file fsk.py contains render_templates. For example:</p>
<pre><code>@app.route('/')
def index():
return render_template('index.html')
</code></pre>
<p>I tried to search on Google, but did not find any tutorial/example that shows how to deploy a flask app using render_template on Vercel.</p>
<p><strong>Note</strong>: I am not using <code>app.run()</code> in fsk.py. Should I use it?</p>
|
<python><html><flask><vercel>
|
2023-12-26 16:44:00
| 0
| 866
|
Samudra Ganguly
|
77,717,714
| 14,533,334
|
How to Set Event Reminders in Google Calendar for Multiple Users using Python API without Explicit Consent?
|
<p>I'm developing an application where I need to set event reminders in multiple users' Google Calendars without explicitly asking for their permission. When exploring the Google Calendar API in Python, I encountered limitations due to privacy and security restrictions.</p>
<p>I'm seeking insights into whether it's possible to set reminders for users' events without direct access or explicit permission for each calendar. Any guidance, code examples, or alternative methods using the Google Calendar API or other approaches that might help achieve this functionality would be highly appreciated.</p>
|
<python><django><google-api><google-calendar-api>
|
2023-12-26 14:24:57
| 1
| 493
|
Fasil K
|
77,717,687
| 16,525,263
|
Create a new row based on other column values in pyspark dataframe
|
<p>I have a pyspark dataframe as below:</p>
<pre><code>c1 c2
111 null
null 222
333 444
null null
</code></pre>
<p>I need to have a final dataframe with an additional column like below</p>
<pre><code>c1 c2 new_col
111 null 111
null 222 222
333 444 333
333 444 444
null null null
</code></pre>
<p>If both cols have values, then I need to create a new row with values from both col1 and col2.
This is my code below as of now.</p>
<pre><code>df = df.withColumn('new_col', when(col('c1').isNull(), col('c2')) \
        .otherwise(when(col('c2').isNull(), col('c1')).otherwise(col('c2'))))
</code></pre>
<p>I'm stuck on creating a new row when both columns c1 and c2 have values.
Can anyone suggest a solution?</p>
|
<python><apache-spark><pyspark>
|
2023-12-26 14:15:53
| 4
| 434
|
user175025
|
77,717,398
| 3,372,061
|
Getting the path to the file located by FileSensor
|
<p>I am building a DAG that waits for filesystem changes then runs some analysis on newly appearing or modified files, for which I'm using a <code>FileSensor</code>. The path definition I'm monitoring contains both a jinja template and either a wildcard or a glob. When a file is found, I would like to provide its absolute path to the subsequent callbacks and tasks. Then, the file's metadata will be compared against some data structure to determine whether it needs to be processed.</p>
<p><strong>The problem</strong>: how to "exfiltrate" the found file's path from the sensor? I looked at the <a href="https://github.com/apache/airflow/blob/main/airflow/sensors/filesystem.py" rel="nofollow noreferrer">source code of <code>FileSensor</code></a>, but it only logs the found file, without storing the path anywhere. Is there any way to use the path template and/or the context to reconstruct the path without doing additional filesystem queries?</p>
<p>I thought of a couple of workarounds, but before I go down either path, I wanted to make sure that there's a good reason for it. My ideas are:</p>
<ol>
<li>Pass the data path template as-is to the subsequent task(s) and hope it works automagically if used in a <code>PythonOperator</code>.</li>
<li>Force the rendering of the jinja template through its <a href="https://stackoverflow.com/a/51718259/3372061">context/environment</a> or by using <a href="https://stackoverflow.com/a/65873023/3372061"><code>BashOperator</code> + echo</a>, then re-query the filesystem.</li>
</ol>
<hr />
<p>Here's a simplified outline of my configuration:</p>
<pre class="lang-py prettyprint-override"><code># <DAG initialization code>
...
path_template: str = os.path.join(
    "/basepath/",
    "{{ data_interval_start.format('YYYYMMDD') }}",
    f"{source}.{{{{ data_interval_start.format('YYYYMMDD') }}}}*.csv.gz"
)

fs: FileSensor = FileSensor(
    task_id=f"{source}_data_sensor",
    filepath=path_template,
    poke_interval=int(duration(minutes=5).total_seconds()),
    timeout=int(duration(hours=1, minutes=30).total_seconds()),
    mode="reschedule",
    pre_execute=log_execution,
    on_success_callback=partial(log_found_file, path_template),
)

fs >> convert(source) >> analyze_data(source)
</code></pre>
<p>Where <code>log_found_file</code> is given below:</p>
<pre class="lang-py prettyprint-override"><code>def log_found_file(data_path_template: str, ctx: Context) -> None:
    """Logs the discovery of a data file."""
    data_path = f(data_path_template, ctx)  # <<<<<<<<<<<<<<< Need help with this
    stats: os.stat_result = os.stat(data_path)
    logger.success(
        f"Detected data file {data_path} "
        f"of size {stats.st_size}; "
        f"created on {stats.st_ctime}; "
        f"and last modified on {stats.st_mtime}."
    )
</code></pre>
<p>I'm working with Airflow 2.8.0 in case it matters.</p>
|
<python><airflow><parameter-passing><glob><file-watcher>
|
2023-12-26 13:01:01
| 1
| 24,239
|
Dev-iL
|
77,717,390
| 2,148,416
|
Inconsistent declaration of `ndarray.__array_ufunc__`
|
<p>While writing an ndarray subclass, Mypy flags a problem related to the declaration of the <code>method</code> argument in <code>__array_ufunc__</code>.</p>
<p>The <a href="https://numpy.org/doc/stable/user/basics.subclassing.html#array-ufunc-for-ufuncs" rel="nofollow noreferrer"><code>__array_ufunc__</code> docs</a> say:</p>
<ul>
<li><em>method</em> is a string indicating how the Ufunc was called, either
<code>"__call__"</code> to indicate it was called directly, or one of its methods:
<code>"reduce"</code>, <code>"accumulate"</code>, <code>"reduceat"</code>, <code>"outer"</code>, or <code>"at"</code>.</li>
</ul>
<p>Which is consistent with the <a href="https://numpy.org/doc/stable/reference/ufuncs.html#methods" rel="nofollow noreferrer">ufunc methods</a> documentation.</p>
<p>However, Numpy declares <a href="https://github.com/numpy/numpy/blob/284a2f09bf304ea6a98cc1ad2f3dfc4c203b3c93/numpy/__init__.pyi#L1468" rel="nofollow noreferrer"><code>__array_ufunc__</code></a> as:</p>
<pre><code>def __array_ufunc__(
self,
ufunc: ufunc,
method: L["__call__", "reduce", "reduceat", "accumulate", "outer", "inner"],
*inputs: Any,
**kwargs: Any,
) -> Any: ...
</code></pre>
<p>Method <code>"at"</code> is missing, replaced by a mysterious <code>"inner"</code>, which is not an ufunc method.</p>
<p>Is this a bug? Am I missing something?</p>
|
<python><numpy><numpy-ufunc>
|
2023-12-26 12:59:21
| 0
| 3,437
|
aerobiomat
|
77,717,255
| 893,254
|
Python iterate between two datetime objects
|
<p>How can I iterate between two <code>datetime.date</code> objects in Python?</p>
<p>For example, if I have two dates:</p>
<pre><code>date1 = datetime.date(2024, 1, 10)
date2 = datetime.date(2024, 1, 20)
</code></pre>
<p>and I want to do something like this:</p>
<pre><code>while date1 < date2:
# do something
date1 += 1 # days
</code></pre>
<p>How can I increment a <code>datetime.date</code> object by <code>1</code> day? There doesn't appear to be a way to manipulate a <code>date</code> object like that in Python?</p>
<p><a href="https://docs.python.org/3/library/datetime.html#date-objects" rel="nofollow noreferrer">https://docs.python.org/3/library/datetime.html#date-objects</a></p>
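<p>For reference, the increment in question can be expressed with <code>datetime.timedelta</code>; a minimal sketch:</p>

```python
import datetime

date1 = datetime.date(2024, 1, 10)
date2 = datetime.date(2024, 1, 20)

days = []
while date1 < date2:
    days.append(date1)
    date1 += datetime.timedelta(days=1)  # date + timedelta -> date

print(len(days))  # 10
```
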
|
<python><date><datetime>
|
2023-12-26 12:34:05
| 2
| 18,579
|
user2138149
|
77,717,231
| 815,443
|
Why is __radd__ not called for the same class in Python?
|
<p>I thought returning <code>NotImplemented</code> from <code>__add__</code> would fall through to <code>__radd__</code> under all circumstances, but apparently this is not true if it's the same class?</p>
<pre><code>class A:
def __add__(self, other):
print("add")
return NotImplemented
def __radd__(self, other):
print("radd")
return 100
A()+A()
# TypeError: unsupported operand type(s) for +: 'A' and 'A'
</code></pre>
<p>Am I missing something?</p>
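<p>For contrast, a small sketch showing the reflected method <em>is</em> consulted once the right-hand operand has a different type (here a subclass that merely inherits <code>__radd__</code>); the skip only applies when both operands have exactly the same type:</p>

```python
class A:
    def __add__(self, other):
        return NotImplemented
    def __radd__(self, other):
        return 100

class B(A):
    pass

# Same type on both sides: __radd__ is never consulted.
try:
    A() + A()
except TypeError as e:
    print("TypeError:", e)

# Different type on the right: B.__radd__ (inherited from A) is tried and wins.
print(A() + B())  # 100
```
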
|
<python>
|
2023-12-26 12:26:39
| 1
| 12,817
|
Gere
|
77,717,029
| 1,469,954
|
Redisearch full text index not working with Python client
|
<p>I am trying to follow <a href="https://redis.io/docs/connect/clients/python/" rel="noreferrer">this</a> Redis documentation link to create a small DB of notable people searchable in real time (using Python client).</p>
<p>I tried similar code, but the final line, which queries by "s", should return two documents; instead, it returns an empty result. Can anybody help me find the mistake I am making?</p>
<pre><code>import redis
from redis.commands.json.path import Path
import redis.commands.search.aggregation as aggregations
import redis.commands.search.reducers as reducers
from redis.commands.search.field import TextField, NumericField, TagField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import NumericFilter, Query
d1 = {"key": "shahrukh khan", "pl": '{"d": "mvtv", "id": "1234-a", "img": "foo.jpg", "t": "act", "tme": "1965-"}', "org": "1", "p": 100}
d2 = {"key": "salman khan", "pl": '{"d": "mvtv", "id": "1236-a", "img": "fool.jpg", "t": "act", "tme": "1965-"}', "org": "1", "p": 100}
d3 = {"key": "aamir khan", "pl": '{"d": "mvtv", "id": "1237-a", "img": "fooler.jpg", "t": "act", "tme": "1965-"}', "org": "1", "p": 100}

schema = (
    TextField("$.key", as_name="key"),
    NumericField("$.p", as_name="p"),
)

r = redis.Redis(host='localhost', port=6379)
rs = r.ft("idx:au")
rs.create_index(
    schema,
    definition=IndexDefinition(
        prefix=["au:"], index_type=IndexType.JSON
    )
)

r.json().set("au:mvtv-1234-a", Path.root_path(), d1)
r.json().set("au:mvtv-1236-a", Path.root_path(), d2)
r.json().set("au:mvtv-1237-a", Path.root_path(), d3)
rs.search(Query("s"))
</code></pre>
|
<python><redis><redis-py><redisearch>
|
2023-12-26 11:33:29
| 3
| 5,353
|
NedStarkOfWinterfell
|
77,716,952
| 1,473,517
|
How to stop dots being cut off at edge of grid
|
<p>I have a simple grid of dots which I make with:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# Add blue dots
for x in np.arange(0, 10, 1):
    for y in np.arange(0, 10, 1):
        plt.scatter(x, y, color='blue', s=20)  # Adjust the size (s) as needed
# Customize plot
ax = plt.gca()
ax.set(xlim=(0, 9), ylim=(0, 9), xlabel='', ylabel='')
ax.set_xticks(np.arange(0, 9.01, 1))
ax.set_yticks(np.arange(0, 9.01, 1))
ax.invert_yaxis()
ax.set_aspect('equal', adjustable='box')
ax.tick_params(left=False, bottom=False)
ax.grid()
</code></pre>
<p>This looks like:</p>
<p><a href="https://i.sstatic.net/jGjpH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jGjpH.png" alt="enter image description here" /></a></p>
<p>The dots on the edge of the grid are cut off. How can I show them complete without changing anything else about the picture? I don't want to extend the grid lines for example.</p>
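<p>One approach that seems to fit the constraint (keep the limits and grid unchanged, just stop clipping the edge markers) is <code>scatter</code>'s <code>clip_on=False</code> keyword — a sketch:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

for x in np.arange(0, 10, 1):
    for y in np.arange(0, 10, 1):
        # clip_on=False lets markers sitting on the axis limits draw past the spines
        pc = plt.scatter(x, y, color='blue', s=20, clip_on=False)

ax = plt.gca()
ax.set(xlim=(0, 9), ylim=(0, 9))
print(pc.get_clip_on())  # False
```
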
|
<python><matplotlib>
|
2023-12-26 11:13:27
| 1
| 21,513
|
Simd
|
77,716,847
| 3,563,894
|
Calculating power of (1-1/x)^y for very large x,y
|
<p>I am thinking about calculating $(1-1/x)^y$ for very large x, y in Python.
One way to do so is to use <code>exp(y*log1p(-1/x))</code>, but I am not sure about its accuracy.</p>
<p>Are there any results on the accuracy of this approach, e.g., a bound on the error of the result?</p>
<p>I guess it is better than using the approximation <code>exp(-y/x)</code>.</p>
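<p>A quick numeric comparison of the two forms: expanding $\ln(1-1/x) = -1/x - 1/(2x^2) - \dots$, the naive <code>exp(-y/x)</code> drops the $y/(2x^2)$ term, which the <code>log1p</code> form keeps:</p>

```python
import math

x = y = 10**6
accurate = math.exp(y * math.log1p(-1/x))  # exp(y*ln(1 - 1/x))
naive = math.exp(-y / x)                   # first-order approximation

# relative error of the naive form is about y/(2*x**2) = 5e-7 here
print(naive / accurate - 1)
```
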
|
<python><numpy><floating-accuracy>
|
2023-12-26 10:42:16
| 1
| 367
|
user3563894
|
77,716,829
| 7,254,247
|
How do I deserialize a protobuf string in js sent from a python backend?
|
<p>I have a Flask server that sends a serialized protocol buffer message. I've de-serialized this message in a few environments successfully, including a C++ client. However, when I attempt to de-serialize my message in js, nothing works. I am using <a href="https://github.com/protobufjs" rel="nofollow noreferrer">protobuf.js</a>. According to the documentation, I think I am doing it correctly.</p>
<p>Here is the protobuff file <code>vtkMessage.proto</code>:</p>
<pre><code>syntax = "proto3";
package vtkMessage;
message hash_element {
  string name = 1;
  repeated float v = 2;
}

message hash {
  repeated hash_element elem = 1;
}

message vtkMsg {
  repeated int32 tris = 1;
  repeated float verts = 2;
  hash vals = 3;
}
</code></pre>
<p>Here is my Flask server <code>test.py</code>:</p>
<pre><code>from flask import Flask
from flask import render_template, request
import vtkMessage_pb2

app = Flask(__name__, static_folder='')

@app.route('/test', methods=['POST'])
def test():
    data = request.get_json()
    ret = vtkMessage_pb2.vtkMsg()
    ret.tris.extend([1, 2, 3])
    ret.verts.extend([0.45, 0.35, 0.11, 0.66, 0.78, 0.23, 0.11, 0.01, 0.14])
    a = ret.vals.elem.add()
    a.name = 'test1'
    a.v.extend([0.1, 0.2])
    a = ret.vals.elem.add()
    a.name = 'test2'
    a.v.extend([0.3, 0.5])
    print(ret)
    return ret.SerializeToString()  # this decodes successfully when I use C++

@app.route('/protoTest')
def protoTest():
    return render_template('protoTest.html')
<p>Here is <code>templates/protoTest.html</code>:</p>
<pre><code><html>
<meta charset="UTF-8">
<script src="https://code.jquery.com/jquery-3.7.1.min.js" integrity="sha256-/JqT3SQfawRcv/BIHPThkBvs0OEvtFFmqPF/lYI/Cxo=" crossorigin="anonymous"></script>
<script src="//cdn.jsdelivr.net/npm/protobufjs@7.2.5/dist/protobuf.js"></script>
<script type="text/javascript">
window.onload = main;
function main() {
    var dat = JSON.stringify({
        step: 0,
        frames: 1,
        fname: "openfoam.vtk"
    });
    protobuf.load('vtkMessage.proto', function(err, root) {
        window.vtkMsg = root.lookupType("vtkMessage.vtkMsg");
        var proto_obj = {};
        $.ajax({
            url: '/test',
            data: dat,
            type: "POST",
            contentType: "application/json;charset=utf-8",
            async: false,
            success: function(res) {
                console.log(res);
                var buffer = new TextEncoder("utf-8").encode(res);
                proto_obj = vtkMsg.decode(buffer); // WHY IS THIS FAILING???
                console.log(proto_obj);
            },
            error: function(res) {
                console.error(res); // sometimes this also happens
            }
        });
        console.log(proto_obj);
    });
}
</script>
<body>
<h1>Protobuf test, view in console</h1>
</body>
</html>
</code></pre>
<p>To get the <code>vtkMessage_pb2.py</code> you will need to run:
<code>protoc -I=. --python_out=. vtkMessage.proto</code></p>
<p>I suspect that <code>var buffer = new TextEncoder("utf-8").encode(res);</code> is incorrect, but I don't see any documentation on how this should be done. Also, my fear is that flask and/or jquery does something funky in handling the <code>POST</code> request.</p>
|
<javascript><python><protocol-buffers>
|
2023-12-26 10:38:19
| 2
| 772
|
lee
|
77,716,718
| 1,668,622
|
Bug or wrong usage? Scrollbar broken for textual.Log when printing styled (coloured) lines
|
<p>I'm trying to render colored text inside a <code>textual.Log</code> widget using <code>rich.Style</code>, but somehow the scrollbar gets scrambled when applying a <code>Style</code> to the individual lines.</p>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>from textual import work
from textual.app import App
from textual.widgets import Log
from rich.style import Style
class BrokenLog(App[None]):
    CSS = "Log { border: solid $accent; }"

    def compose(self):
        yield Log()

    async def on_mount(self):
        self.query_one(Log).write_lines(
            (Style(color="red") if i % 2 else Style(color="green")).render(f"line {i}")
            for i in range(100)
        )


BrokenLog().run()
</code></pre>
<p>And here is what the result looks like:</p>
<p><a href="https://i.sstatic.net/sYgnI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sYgnI.png" alt="enter image description here" /></a></p>
<p>I suspect the width of the lines to be computed without the control sequences, but I don't know how to tell the <code>Log</code> widget to take them into account.</p>
<p>Am I doing something wrong? Is there another way to draw lines in a <code>Log</code> window with different color each?</p>
<p>I'm using <code>textual=0.46.0</code> and <code>rich=13.7.0</code> with Python <code>3.11.4</code></p>
|
<python><rich><python-textual>
|
2023-12-26 10:12:43
| 1
| 9,958
|
frans
|
77,716,670
| 972,733
|
Trouble Sending Messages to Numbers Outside Twilio Sandbox via WhatsApp API
|
<p>I'm integrating the Twilio WhatsApp API into a Flask application to send automated messages. While I can send messages to my own WhatsApp number that's subscribed to the sandbox with the <code>join <given code></code> command, I'm unable to send messages to other numbers. The peculiar part is that there are no errors shown in the console or Twilio's debugger to indicate what might be going wrong.</p>
<p>Here's a snippet from the <code>/bot</code> endpoint in my Flask app which is supposed to handle incoming messages and send replies:</p>
<pre class="lang-py prettyprint-override"><code>@app.route('/bot', methods=['POST'])
def bot():
    # ... [code to handle incoming messages] ...

    # Prepare the Twilio response
    twilio_response = MessagingResponse()
    twilio_response.message(latest_response)

    # ... [more code] ...

    # Attempt to send a message
    try:
        message = twilio_client.messages.create(
            body="Your appointment is coming up on July 21 at 3PM",
            from_="whatsapp:14155238886",  # This is the Twilio phone number
            to='whatsapp:+44074XXX'  # Trying to send to this number outside sandbox
        )
        print(f"Message sent to whatsapp:+44074XXX. Message SID: {message.sid}")
    except Exception as e:
        print(f"Error occurred: {e}")

    return str(twilio_response)
</code></pre>
<p>This <code>/bot</code> endpoint is triggered after receiving a webhook from WhatsApp. It's supposed to send a reply based on the message processed by the assistant. This is working perfectly for my personal number, which is registered with the sandbox:</p>
<pre class="lang-py prettyprint-override"><code>twilio_response = MessagingResponse()
twilio_response.message(latest_response)
</code></pre>
<p>However, when using <code>twilio_client.messages.create</code> to send a message outside of the sandbox environment, I don't receive any errors, and yet the message does not get delivered.</p>
<p>The console prints the following, suggesting the message is being created:</p>
<pre><code>Message sent to whatsapp:+440746XXXX. Message SID: SM17c2671921f3d77d68640a904c687223
</code></pre>
<p>Is there an additional step I'm missing, or is there something else I need to configure to send messages to numbers outside of the Twilio sandbox environment?</p>
|
<python><twilio><whatsapi>
|
2023-12-26 10:02:37
| 1
| 318
|
Wardruna
|
77,716,293
| 6,494,707
|
There is no current wx.App object - creating one now
|
<p>I am trying to visualize my hypercube data using the <code>spectral</code> package. I followed the commands on the <a href="https://www.spectralpython.net/graphics.html" rel="nofollow noreferrer">main page</a> of spectral, starting with <code>ipython --pylab=wx</code>, and then loaded the data to view:</p>
<pre><code>import spectral
from pylab import *
from spectral import *
spectral.settings.WX_GL_DEPTH_SIZE = 16
hsi_root = "mypath_to_hsi/"
hypercubes = os.listdir(hsi_root)
hsi_list = os.listdir(hsi_root)
idx = 3
filename = "cube_envi32"
data_path = os.path.join(hsi_root, hsi_list[idx], filename)
# Load hypercube data
data = SpyFile.load(envi.open(data_path + '.hdr', data_path + '.dat')) # (512, 640, 92)
</code></pre>
<p>When I try to view the cube, I get the following message:</p>
<pre><code>view_cube(data,bands=[29, 19, 9])
There is no current wx.App object - creating one now.
warnings.warn('\nThere is no current wx.App object - creating one now.',
Out[17]: <spectral.graphics.graphics.WindowProxy at 0x7f9dd694ef10>
</code></pre>
<p>and the output is a blank window. How can I solve this?</p>
<p><strong>UPDATED:</strong>
I added the following code to initiate <code>wx.App()</code>. I followed the <a href="https://discuss.wxpython.org/t/instance-of-wx-app/36663/2" rel="nofollow noreferrer">post here</a>:</p>
<p><a href="https://discuss.wxpython.org/t/instance-of-wx-app/36663/2" rel="nofollow noreferrer">https://discuss.wxpython.org/t/instance-of-wx-app/36663/2</a></p>
<pre><code>import wx

app = wx.App()
print(app)
print(wx.GetApp())
print(wx.App.Get())
print(wx.App.GetInstance())
print(wx.AppConsole.GetInstance())
</code></pre>
<p>It opens a PyCharm window showing nothing, and then it forces me to close the window:
<a href="https://i.sstatic.net/Eh7Vc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Eh7Vc.png" alt="enter image description here" /></a></p>
|
<python><user-interface><wxpython><wxwidgets><spectral-python>
|
2023-12-26 08:16:19
| 0
| 2,236
|
S.EB
|
77,716,028
| 1,330,810
|
Inserting many rows in psycopg3
|
<p>I used to use <code>execute_values</code> in psycopg2 but it's gone in psycopg3. I tried following the advice in <a href="https://stackoverflow.com/questions/76713315/passing-nested-tuple-to-values-in-psycopg3">this</a> answer or this <a href="https://github.com/psycopg/psycopg/issues/114" rel="noreferrer">github post</a>, but it just doesn't seem to work for my use case. I'm trying to insert multiple values, my SQL is like so:</p>
<pre><code>sql = """
INSERT INTO activities (type_, key_, a, b, c, d, e)
VALUES %s
ON CONFLICT (key_) DO UPDATE
SET
    a = EXCLUDED.a,
    b = EXCLUDED.b,
    c = EXCLUDED.c,
    d = EXCLUDED.d,
    e = EXCLUDED.e
"""

values = [['type', 'key', None, None, None, None, None]]
</code></pre>
<p>But doing <code>cursor.executemany(sql, values)</code> results in <code>{ProgrammingError}the query has 1 placeholder but 7 parameters were passed</code>. I tried many variations with extra parentheses etc. but always it results in some error. For example doing <code>self.cursor.executemany(sql, [values])</code> results in <code>syntax error near or at "$1": Line 3: VALUES $1</code>.</p>
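<p>For reference, a minimal sketch of how such a statement can be built for <code>executemany()</code> in psycopg3 (names mirror the question; the connection itself is omitted): each row needs one explicit <code>%s</code> placeholder per column, rather than a single <code>%s</code> standing in for the whole row.</p>

```python
# Sketch: psycopg3's executemany() expects one placeholder per column,
# so build the VALUES clause explicitly instead of a single "%s" per row.
columns = ["type_", "key_", "a", "b", "c", "d", "e"]
placeholders = ", ".join(["%s"] * len(columns))
sql = (
    f"INSERT INTO activities ({', '.join(columns)}) "
    f"VALUES ({placeholders}) "
    "ON CONFLICT (key_) DO UPDATE SET "
    + ", ".join(f"{c} = EXCLUDED.{c}" for c in columns[2:])
)
values = [["type", "key", None, None, None, None, None]]

# Each row must supply exactly as many items as there are placeholders.
assert sql.count("%s") == len(values[0])
# With a live connection this would be: cur.executemany(sql, values)
```

<p>This only covers the query-construction side; whether <code>executemany()</code> is the fastest option for large batches is a separate question (psycopg3 also offers <code>COPY</code>).</p>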
|
<python><postgresql><psycopg2><psycopg3>
|
2023-12-26 06:57:06
| 2
| 5,825
|
Idan
|
77,715,955
| 4,309,170
|
Best approach to find next True value from an index in a boolean array
|
<p>I have a boolean list like this:</p>
<pre><code>test_dict = [False, False, True, False, False, True]
</code></pre>
<p>I need help with an algorithm that can find the next True value from a given position. Please note that the position is also obtained by iterating a list. Hence, please consider that loop as well when calculating time complexity.</p>
<pre><code>> Input
> test_dict = [False, False, True, False, False, True]
Position = 3
Output:
Next true value is at 5th position
</code></pre>
<p>I have tried this using a nested for loop; however, I am not sure that is optimal.
I appreciate your suggestions.</p>
<p>Edit: Sample Code:</p>
<pre><code>test_dict = [False, False, True, False, False, True]
dict_sample = {"1": "2", "11": "3"}

for position, val in dict_sample.items():
    # here position is the number from where we have to start
    # searching the test_dict list
    pass
</code></pre>
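<p>For what it's worth, a sketch of one O(n) approach: precompute, in a single reverse pass, the index of the next <code>True</code> at or after every position, so each later lookup is O(1) no matter how many starting positions the outer loop produces.</p>

```python
# Sketch: one reverse pass records, for every index, where the next True
# at or after that index sits; -1 means "no True from here on".
test_dict = [False, False, True, False, False, True]

next_true = [-1] * len(test_dict)
nxt = -1
for i in range(len(test_dict) - 1, -1, -1):
    if test_dict[i]:
        nxt = i
    next_true[i] = nxt

print(next_true)  # [2, 2, 2, 5, 5, 5]
```

<p>With this table built once, looking up any starting position (e.g. position 3, answer 5) is a constant-time index access.</p>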
|
<python><algorithm>
|
2023-12-26 06:33:22
| 2
| 628
|
Han
|
77,715,786
| 1,965,449
|
Numpy shape function
|
<pre><code>import numpy as np

A = np.array([
    [-1, 3],
    [3, 2]
], dtype=np.dtype(float))
b = np.array([7, 1], dtype=np.dtype(float))

print(f"Shape of A: {A.shape}")
print(f"Shape of b: {b.shape}")
</code></pre>
<p>Gives the following output :</p>
<pre><code>Shape of A: (2, 2)
Shape of b: (2,)
</code></pre>
<p>I was expecting the shape of <code>b</code> to be <code>(1, 2)</code>, which is one row and two columns. Why is it <code>(2,)</code>?</p>
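<p>A short illustration of the distinction (standard NumPy behavior): a 1-D array has a single axis, so its shape is a one-element tuple, while a row vector is a genuinely 2-D array with one row.</p>

```python
import numpy as np

# A 1-D array has one axis: shape (2,). A row vector with one row and two
# columns is a 2-D array: shape (1, 2). reshape() converts between them.
b = np.array([7, 1], dtype=float)
row = b.reshape(1, 2)   # explicit 2-D row vector
col = b.reshape(2, 1)   # explicit 2-D column vector

assert b.shape == (2,)
assert row.shape == (1, 2)
assert col.shape == (2, 1)
print(b.ndim, row.ndim)  # 1 2
```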
|
<python><numpy><numpy-slicing>
|
2023-12-26 05:28:28
| 1
| 2,931
|
user1965449
|
77,715,073
| 22,466,650
|
How to select some values of a dataframe based on a list of coordinates?
|
<p>My inputs are a dataframe and a list of coordinates:</p>
<pre><code>df = pd.DataFrame({
'col1': ['A', 'B', 'C', 'A', 'G'],
'col2': ['B', 'E', 'F', 'F', 'H'],
'col3': ['C', 'D', 'E', 'A', 'I']})
coords = [(2, 0), (3, 2)]
</code></pre>
<pre><code>print(df)
  col1 col2 col3
0    A    B    C
1    B    E    D
2    C    F    E
3    A    F    A
4    G    H    I
</code></pre>
<p>Basically I'm trying to select some cells in <code>df</code> based on the coordinates <code>X</code> and <code>Y</code> that we take from the list <code>coords</code>. And since I may need to invert the selection, I created a function with a boolean so it has two workflows. Until now I was able to produce one of the two selections (the easy one):</p>
<pre><code>def select(df, coords, inverted=False):
    if inverted is True:
        for c in coords:
            df.iat[c[0], c[1]] = ''
    return df
</code></pre>
<p>Can you guys show me how to make the other one?</p>
<p>My expected output is one of these two :</p>
<pre><code># inverted = False
  col1 col2 col3
0
1
2    C
3              A
4

# inverted = True
  col1 col2 col3
0    A    B    C
1    B    E    D
2         F    E
3    A    F
4    G    H    I
</code></pre>
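<p>A possible sketch of both branches (assuming, as in the question's own function, that blanked cells become empty strings): for the non-inverted case, start from an all-blank frame and copy over only the listed cells.</p>

```python
# Sketch: inverted=True blanks exactly the listed cells; inverted=False
# keeps only the listed cells and blanks everything else.
import pandas as pd

def select(df, coords, inverted=False):
    out = df.copy()
    if inverted:
        for r, c in coords:
            out.iat[r, c] = ''
    else:
        out = pd.DataFrame('', index=df.index, columns=df.columns)
        for r, c in coords:
            out.iat[r, c] = df.iat[r, c]
    return out

df = pd.DataFrame({
    'col1': ['A', 'B', 'C', 'A', 'G'],
    'col2': ['B', 'E', 'F', 'F', 'H'],
    'col3': ['C', 'D', 'E', 'A', 'I']})
coords = [(2, 0), (3, 2)]

print(select(df, coords))        # only 'C' and 'A' remain
print(select(df, coords, True))  # 'C' and 'A' blanked out
```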
|
<python><pandas>
|
2023-12-25 22:03:06
| 2
| 1,085
|
VERBOSE
|
77,715,017
| 9,669,142
|
multiprocessing uses all cores for the same code
|
<p>I have a list of tuples which are used inside a loop to perform some calculations. The calculations can take some time, hence I want to try to divide the tasks over multiple processing cores (to speed things up). Something goes wrong here.</p>
<p>First of all: I checked the performance of each of the cores when I run the code and all cores are used, but it seems that (based on the results) the same calculation is performed for each of the cores. For example: if I want to do 1+1, 2+2 and 3+3, I want to make sure one core does 1+1, one core does 2+2 and another core does 3+3. Now all three cores do 1+1, 2+2 and 3+3 (so I get the results three times then).</p>
<p>The second thing is that I checked the calculation time, and with multiprocessing it takes more time than without multiprocessing, which I think shouldn't happen.</p>
<p>I made a small code example below. All calculations that take some time are performed under the for-loop in the function 'func' and this is the part where I want to use more CPU cores. The example code can be tested directly and the list is reduced to the first 20 iterations so that it does not take too long (adjust this if needed).</p>
<pre><code>from multiprocessing import Process
import numpy as np
import itertools
import time
import math


def func(list_products):
    # for i in range(len(list_total_product)):
    for i in range(len(list_products)):
        # Perform calculations
        i


""" Creating the list """
# Input
item1_deviation = 1 #%
item1_steps = 10
item2_deviation = 10 #%
item2_N = 3 # minimum of 2

# Create first list
item1_base = 360
item1_deviation_rounded = item1_base * (item1_deviation / 100)
list_item1s = np.arange(item1_base - item1_deviation_rounded,
                        item1_base + item1_deviation_rounded + item1_deviation,
                        item1_steps).tolist()
list_item1s = [int(math.ceil(item / item1_steps)) * item1_steps for item in list_item1s]

# Create second list
item2_base = 0.77
list_item2s = np.linspace(item2_base - (item2_base * (item2_deviation / 100)),
                          item2_base + (item2_base * (item2_deviation / 100)),
                          item2_N).tolist()
list_item2s = [round(item, 3) for item in list_item2s]

# Create the number of combinations
list_total_sequence = []
for i in range(6):
    list_sequence = [list_item1s, list_item2s]
    list_product = list(itertools.product(*list_sequence))
    list_total_sequence.append(list_product)
list_total_product = list(itertools.product(*list_total_sequence))

""" Without Multiprocessing """
t1 = time.perf_counter()
# for i in range(len(list_total_product)):
for i in range(len(list_total_product[0:20])):
    # Perform calculations
    i
t2 = time.perf_counter()
print(t2-t1)

""" With Multiprocessing """
if __name__ == '__main__':
    t1 = time.perf_counter()
    p = Process(target=func, args=(list_total_product[0:20],))
    p.start()
    p.join()
    t2 = time.perf_counter()
    print(t2-t1)
</code></pre>
<p>One can enable/disable the code under "Without Multiprocessing" and "With Multiprocessing"</p>
<p>What am I doing wrong?</p>
<p>UPDATE: suggested by @diedthreetimes</p>
<p>I think I understand it, but this also seems not to work properly.</p>
<p>Thus I have the following code, thanks to diedthreetimes:</p>
<pre><code>if __name__ == '__main__':
    processes = []
    num_cores = 1
    num_items = 700
    num_items_per_core = int(num_items / num_cores) # = 5; watch for rounding errors
    ipc = num_items_per_core
    t1 = time.perf_counter()
    for i in range(0, num_cores):
        p = Process(target=func, args=(list_total_product[ipc*i:ipc*i+ipc],))
        processes.append(p)
        p.start()
    for p in processes:
        p.join()
    t2 = time.perf_counter()
    print(t2-t1)
</code></pre>
<p>You can see I changed num_items to 700 and used one core only. This takes around 0.183 seconds. Running the same code in a for-loop without multiprocessing takes 3.68e-05 seconds. Right now it obviously doesn't matter, but the actual code is more time-consuming, and there this will matter quite a lot.</p>
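<p>One likely culprit in the slicing scheme above: <code>int(num_items / num_cores)</code> drops the remainder, so some items are never dispatched when the division isn't exact. A small helper (a sketch, independent of the rest of the code) splits the work into nearly equal chunks without losing anything:</p>

```python
# Sketch: split a work list into n nearly equal chunks; the first `extra`
# chunks take one additional item so no remainder is ever dropped.
def split_chunks(items, n):
    size, extra = divmod(len(items), n)
    chunks, start = [], 0
    for i in range(n):
        end = start + size + (1 if i < extra else 0)
        chunks.append(items[start:end])
        start = end
    return chunks

chunks = split_chunks(list(range(7)), 3)
print(chunks)  # [[0, 1, 2], [3, 4], [5, 6]]
# Each chunk would then be handed to one Process(target=func, args=(chunk,)).
```

<p>Also note that spawning processes has a fixed startup and pickling cost, so for tiny workloads the serial loop will always win; the split only pays off once each chunk does substantial work.</p>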
|
<python><multiprocessing>
|
2023-12-25 21:29:41
| 1
| 567
|
Fish1996
|
77,715,012
| 7,961,594
|
Multiple CSS Providers in Python GTK3
|
<p>I've read lots of posts about using CSS in GTK3 in Python, but I never really seemed to find anything about changing styles programmatically. Usually there is one big triple-quoted string that has all of the CSS definitions in it. But what if I want to change a single style programmatically in the code? If I modify the provider (load_from_data) with that single style, all of the other styles are lost (i.e. they revert to default).</p>
<p>I found one way that works, but I'd like to get some advice on whether that's the best way or not. I found that I can create and add <strong>another</strong> provider, and I can give it a higher priority, so any styles in it would override the styles in the big triple-quoted CSS string. The priorities are discussed here: <a href="https://lazka.github.io/pgi-docs/Gtk-3.0/classes/StyleContext.html#Gtk.StyleContext.add_provider_for_screen" rel="nofollow noreferrer">https://lazka.github.io/pgi-docs/Gtk-3.0/classes/StyleContext.html#Gtk.StyleContext.add_provider_for_screen</a>, and the different priorities are explained here: <a href="https://lazka.github.io/pgi-docs/Gtk-3.0/constants.html#Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION" rel="nofollow noreferrer">https://lazka.github.io/pgi-docs/Gtk-3.0/constants.html#Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION</a>.</p>
<p>Here's some sample code to demonstrate. Yes, I realize it's pretty strange code... create a window with a couple of entries with weird background colors. If you change the text in the <strong>second</strong> entry field, then the background of the two entries changes. But the point is that there is some point in the script where we want to change some styles on the fly (e.g. a font chooser has been executed to choose a new text font), and this seems to give us a mechanism for doing that. You will need to use the .glade file that is found in this post to run this code: <a href="https://stackoverflow.com/questions/77517627/issues-with-some-css-selectors-when-using-python-gtk3">Issues with some CSS Selectors when using Python GTK3</a>.</p>
<pre><code>#!/usr/bin/env python3

from gi.repository import Gtk, Gdk

CSS = b"""
window {
    background-color: yellow;
}

entry {
    background-color: green;
}
"""

def text_changed(self, provider):
    text = self.get_text()
    print(text)
    newcss = "entry {background-color: red}"
    provider.load_from_data(bytes(newcss.encode()))

myBuilder = Gtk.Builder()
myBuilder.add_from_file("pygtk3test.glade")
window = myBuilder.get_object("mainWindow")
#objects = myBuilder.get_objects()
label2 = myBuilder.get_object("label2")
entry2 = myBuilder.get_object("entry2")

# set up the global CSS provider
style_provider = Gtk.CssProvider()
style_provider.load_from_data(CSS)
Gtk.StyleContext.add_provider_for_screen(Gdk.Screen.get_default(), style_provider,
                                         Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION)

user_provider = Gtk.CssProvider()
user_provider.load_from_data(bytes("".encode()))
Gtk.StyleContext.add_provider_for_screen(Gdk.Screen.get_default(), user_provider,
                                         Gtk.STYLE_PROVIDER_PRIORITY_USER)

entry2.connect("changed", text_changed, user_provider)
window.connect("delete-event", Gtk.main_quit)
window.show_all()
Gtk.main()
</code></pre>
<p>I'm open to any suggestions as to how this could be done better. Thanks.</p>
|
<python><css><gtk3>
|
2023-12-25 21:26:53
| 1
| 1,095
|
Jeff H
|
77,714,986
| 4,706,711
|
Why running a container via docker sdk in python I get different output compared to the actual command line?
|
<p>In a shell session I have run:</p>
<pre><code> docker run --rm --net host busybox /bin/sh -c "ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p'"
</code></pre>
<p>The output of the command is:</p>
<pre><code>10.0.2.15
</code></pre>
<p>But for a utility script that interfaces docker directly using python and <a href="https://docs.docker.com/engine/api/sdk/" rel="nofollow noreferrer">docker-sdk</a> I run the same command as well:</p>
<pre><code>import docker
client = docker.from_env()
print(client.containers.run('busybox',["/bin/sh","-c","ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p'"],remove=True,network="host"))
</code></pre>
<p>But I get the following output:</p>
<pre><code>b'\x01\n'
</code></pre>
<p>I tried to convert into ascii like this:</p>
<pre><code>print(client.containers.run('busybox',["/bin/sh","-c","ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p'"],remove=True,network="host").decode('ascii'))
</code></pre>
<p>But it seems not to print any printable characters apart from whitespace. Any idea how I can retrieve the same output using Python?</p>
<h1>Edit 1</h1>
<p>I also tried to capture the stdout:</p>
<pre><code>client.containers.run('busybox',["/bin/sh","-c","ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p'"],remove=True,network="host",stdout=True)
</code></pre>
<p>But same result occur.</p>
<p>Though if I omit the sed part, it seems to work fine (but the result is not filtered):</p>
<pre><code>print(client.containers.run('busybox',["/bin/sh","-c","ip route get 1"],remove=True,network="host"))
</code></pre>
<p>Prints:</p>
<pre><code>b'1.0.0.0 via 10.0.2.2 dev enp0s3 src 10.0.2.15 \n'
</code></pre>
<p>Therefore, for some reason, the sed part seems to break the output.</p>
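<p>The <code>b'\x01\n'</code> result is consistent with Python's own string escaping rather than anything Docker-specific: in a regular string literal, <code>\1</code> is an octal escape for the byte 0x01, so sed never receives a literal backslash-one. A raw string keeps the backslash intact:</p>

```python
# In a normal string literal "\1" is the octal escape for chr(1);
# a raw string r"\1" keeps a literal backslash followed by the digit 1.
plain = "s/x/\1/p"
raw = r"s/x/\1/p"

assert "\x01" in plain                     # sed would receive a control byte
assert "\\1" in raw and "\x01" not in raw  # sed receives the backreference
print(repr(plain))
```

<p>So passing the command built with <code>r"..."</code> (or doubling the backslash as <code>\\1</code>) should make the SDK call behave like the shell invocation.</p>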
|
<python><docker>
|
2023-12-25 21:11:36
| 2
| 10,444
|
Dimitrios Desyllas
|
77,714,963
| 3,381,280
|
Python access property from parent class
|
<p>I am following <a href="https://edube.org/learn/pe-2/queue-aka-fifo-part-2-1" rel="nofollow noreferrer">this tutorial</a> on Python Institute.</p>
<p>I have three classes as shown below. The parent class Queue has put and get functions. The <em>get</em> function raises a QueueError if the queue is empty. The child class SuperQueue has to add the isempty function which returns true if the queue is empty. My first approach was to call the parent class <em>get</em> function inside a <em>try</em> block and return its returned value if present. Otherwise, catch the error in the except block and return true.</p>
<p>My second approach shown in code was to create a <em>get</em> function for the child class. In that function, I grab the returned value in a variable. If the returned value is not empty, I put it back into the queue and return it to the caller. In this way, when I call the <em>get</em> function inside the <em>isempty</em> function, the queue is intact.</p>
<p>The problem here is that the driver program, after entering some items into queue, calls the <em>isempty</em> function and if the queue is not empty, it calls the <em>get</em> function to print the items. So the <em>get</em> function is called twice and my code does not print the entered items in the order they were entered.</p>
<p>How can I implement the <em>isempty</em> function?</p>
<pre><code>class QueueError(Exception):
    pass


class Queue:
    def __init__(self):
        self.queue = []

    def put(self, elem):
        self.queue.insert(0, elem)

    def get(self):
        if len(self.queue) > 0:
            elem = self.queue[-1]
            del self.queue[-1]
            return elem
        else:
            raise QueueError


class SuperQueue(Queue):
    def __init__(self):
        Queue.__init__(self)

    def get(self):
        try:
            v = Queue.get(self)
            return v
        except:
            print('exception')

    def isempty(self):
        v = self.get()
        if v:
            self.put(v)
            return False
        return True


que = SuperQueue()
que.put(1)
que.put('dog')
que.put(False)
for i in range(4):
    if not que.isempty():
        print(que.get())
    else:
        print("Queue empty")
# I cannot get queue items in order. I get the second item first and cannot get the third (False)
</code></pre>
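<p>A sketch of an alternative <code>isempty</code> that avoids the get/put round trip entirely: since the subclass can see the underlying list, it can simply test its length. This also sidesteps the reordering caused by re-inserting with <code>put()</code>, which adds at the front while <code>get()</code> removes from the back.</p>

```python
# Sketch: isempty() checks the underlying list length instead of calling
# get()/put(), which would rotate a FIFO whose put() inserts at the front.
class Queue:
    def __init__(self):
        self.queue = []

    def put(self, elem):
        self.queue.insert(0, elem)

    def get(self):
        return self.queue.pop()  # pop() raises IndexError when empty

class SuperQueue(Queue):
    def isempty(self):
        return len(self.queue) == 0

que = SuperQueue()
for item in (1, 'dog', False):
    que.put(item)

out = []
while not que.isempty():
    out.append(que.get())
print(out)  # [1, 'dog', False] -- FIFO order preserved
```

<p>Note this works even for falsy items like <code>False</code>, which the truthiness test <code>if v:</code> in the original misclassifies as "empty".</p>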
|
<python>
|
2023-12-25 21:00:49
| 1
| 467
|
kobosh
|
77,714,962
| 11,038,635
|
What is the purpose of END_FINALLY bytecode in older Python versions?
|
<p>Consider the following code:</p>
<pre class="lang-py prettyprint-override"><code>try:
helloworld()
except:
failure()
</code></pre>
<p>and its disassembly (this is in Python 2.7):</p>
<pre><code>  1           0 SETUP_EXCEPT            11 (to 14)

  2           3 LOAD_NAME                0 (helloworld)
              6 CALL_FUNCTION            0
              9 POP_TOP
             10 POP_BLOCK
             11 JUMP_FORWARD            14 (to 28)

  3     >>   14 POP_TOP
             15 POP_TOP
             16 POP_TOP

  4          17 LOAD_NAME                1 (failure)
             20 CALL_FUNCTION            0
             23 POP_TOP
             24 JUMP_FORWARD             1 (to 28)
             27 END_FINALLY
        >>   28 LOAD_CONST               0 (None)
             31 RETURN_VALUE
</code></pre>
<p>Assuming that <code>helloworld()</code> raises an exception, the code follows from address 14 onwards. Because this except handler is generic, it makes sense that three POP_TOPs follow, and the <code>failure()</code> function call. However, afterwards, there is a <code>24 JUMP_FORWARD</code> which "jumps over" <code>27 END_FINALLY</code>, so that it doesn't get executed. What's its purpose here?</p>
<p>I noticed similar behaviour in versions 3.5, 3.6, 3.7 and 3.8. In 3.9 it seems like it's renamed to RERAISE: <a href="https://godbolt.org/z/YbeqPf3nx" rel="nofollow noreferrer">https://godbolt.org/z/YbeqPf3nx</a></p>
<p>Some context: after simplifying an obfuscated pyc and a lot of debugging, I've found that such a structure breaks <code>uncompyle6</code>.</p>
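<p>A small, version-agnostic way to inspect this locally (a sketch; the exact opcodes vary by interpreter version, and on 3.9+ the handler epilogue shows <code>RERAISE</code> where older versions emitted <code>END_FINALLY</code>):</p>

```python
# Compile the snippet from the question and capture its disassembly as text,
# so the generated opcode sequence can be examined on whatever version runs it.
import dis
import io

src = "try:\n    helloworld()\nexcept:\n    failure()\n"
buf = io.StringIO()
dis.dis(compile(src, "<demo>", "exec"), file=buf)
listing = buf.getvalue()
print(listing)
```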
|
<python><python-2.7><bytecode><cpython>
|
2023-12-25 21:00:16
| 1
| 482
|
musava_ribica
|
77,714,951
| 5,287,011
|
Create a dictionary inside of a FOR loop and then create a data frame
|
<p>I have a list of lists</p>
<pre><code>tours = [[0, 4], [0, 5], [0, 6], [1, 13], [2, 0], [3, 8], [4, 9], [5, 10],
[6, 7], [7, 1], [8, 2], [9, 3], [10, 11], [11, 14], [12, 0], [13, 12], [14, 0]]
</code></pre>
<p>I also have a dataframe <code>df</code>:</p>
<pre><code>    Node       X      Y  Demand  Profit
1      2  5.7735   0.00    40.0    16.0
2      3  2.8867   5.00    40.0    16.0
3      4 -2.8868   5.00    40.0    16.0
4      5 -5.7735   0.00    40.0    16.0
5      6 -2.8867  -5.00    40.0    16.0
6      7  2.8868  -5.00    40.0    16.0
7      8  8.6603   5.00    40.0    24.0
8      9  0.0000  10.00    40.0    24.0
9     10 -8.6603   5.00    40.0    24.0
10    11 -8.6603  -5.00    40.0    24.0
11    12  0.0000 -10.00    40.0    24.0
12    13  8.6603  -5.00    40.0    24.0
13    14  5.3405   0.75    10.0    10.0
14    15  3.3198   4.25    10.0    10.0
15    16  6.4952  -1.25    10.0    11.0
</code></pre>
<p>In the list of lists <code>tours</code>, the first element of a list member defines the element in column <code>X</code> and the second element defines the element in column <code>Y</code>.
For example, <code>[0, 4]</code> points to <code>X=5.7735</code>, <code>Y=-5.00</code>.</p>
<p>I need to create a new dataframe <code>coord</code> with two columns <code>X</code> and <code>Y</code> where each row represents an <code>X</code>,<code>Y</code> pair defined by an element in <code>tours</code>.</p>
<p>My plan is to create a dictionary and then create a new dataframe but I am struggling with getting column names to a new dataframe or having a duplicated index column and splitting the elements of the list.</p>
<p>Here is my code</p>
<pre><code>d = {}
for t, tour in enumerate(tours):
    xi = tour[0]
    yi = tour[1]
    key = t
    d[key] = df["X"].iloc[xi], df["Y"].iloc[yi]
print(d)

coord = pd.DataFrame(d.items(), columns=['X', 'Y'])
print(coord)
</code></pre>
<p>Output:</p>
<pre><code>{0: (5.7735, -5.0), 1: (5.7735, -5.0), 2: (5.7735, 5.0), 3: (2.8867, 4.25), 4: (-2.8868, 0.0), 5: (-5.7735, 5.0), 6: (-2.8867, -5.0), 7: (2.8868, -10.0), 8: (8.6603, 10.0), 9: (0.0, 5.0), 10: (-8.6603, 5.0), 11: (-8.6603, 0.0), 12: (0.0, -5.0), 13: (8.6603, -1.25), 14: (5.3405, 0.0), 15: (3.3198, 0.75), 16: (6.4952, 0.0)}
     X                Y
0    0   (5.7735, -5.0)
1    1   (5.7735, -5.0)
2    2    (5.7735, 5.0)
3    3   (2.8867, 4.25)
4    4   (-2.8868, 0.0)
5    5   (-5.7735, 5.0)
6    6  (-2.8867, -5.0)
7    7  (2.8868, -10.0)
8    8   (8.6603, 10.0)
9    9       (0.0, 5.0)
10  10   (-8.6603, 5.0)
11  11   (-8.6603, 0.0)
12  12      (0.0, -5.0)
13  13  (8.6603, -1.25)
14  14    (5.3405, 0.0)
15  15   (3.3198, 0.75)
16  16    (6.4952, 0.0)
</code></pre>
<p>I greatly appreciate your help!</p>
<p>My ultimate goal is to plot a full rout using <code>X</code> and <code>Y</code> as coordinates. Any advise will be greatly appreciated.</p>
<p>PS.
Here is what I came up with. It works but I will appreciate any improvements:</p>
<pre><code>d = {}
for t, tour in enumerate(tours):
    xi = tour[0]
    yi = tour[1]
    key = t
    d[key] = df["X"].iloc[xi], df["Y"].iloc[yi]

coord = pd.DataFrame.from_dict(d, orient='index', columns=['X', 'Y'])
</code></pre>
<p>OUTPUT:</p>
<pre><code>         X      Y
0   5.7735  -5.00
1   5.7735  -5.00
2   5.7735   5.00
3   2.8867   4.25
4  -2.8868   0.00
5  -5.7735   5.00
6  -2.8867  -5.00
7   2.8868 -10.00
8   8.6603  10.00
9   0.0000   5.00
10 -8.6603   5.00
11 -8.6603   0.00
12  0.0000  -5.00
13  8.6603  -1.25
14  5.3405   0.00
15  3.3198   0.75
16  6.4952   0.00
</code></pre>
<p>Is there an easier way than going from a list of lists to a dict to a dataframe?</p>
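<p>A sketch of a more direct route (using a shortened sample frame here rather than the full 15-row one): unzip the coordinate pairs once and index the two columns positionally, skipping the intermediate dict entirely.</p>

```python
# Sketch: zip(*tours) splits the pairs into the X-indices and Y-indices,
# then positional .iloc indexing builds both columns in one shot.
import pandas as pd

df = pd.DataFrame({'X': [5.7735, 2.8867, -2.8868],
                   'Y': [0.00, 5.00, 5.00]})   # shortened sample frame
tours = [(0, 2), (1, 0), (2, 1)]

xs, ys = zip(*tours)
coord = pd.DataFrame({'X': df['X'].iloc[list(xs)].to_numpy(),
                      'Y': df['Y'].iloc[list(ys)].to_numpy()})
print(coord)
```

<p>The resulting frame has a clean 0..n-1 index and is ready to pass straight to a plotting call.</p>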
|
<python><pandas><dataframe><list>
|
2023-12-25 20:53:32
| 2
| 3,209
|
Toly
|
77,714,801
| 12,349,101
|
Tkinter - Scrolling notebook tabs without expanding the window width
|
<p>I'm trying to make a tkinter window with tabs, and a text widget for each tabs. I ended up using this <a href="https://stackoverflow.com/a/59803188/12349101">answer</a> as basis. Started with this:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
from tkinter import ttk

tab_count = 1

def create_tab():
    global tab_count
    tab = ttk.Frame(notebook)
    notebook.add(tab, text=f"<Untitled> {tab_count}")
    text_widget = tk.Text(tab)
    text_widget.pack(fill="both", expand=True)
    notebook.select(tab)
    text_widget.focus_set()
    tab_count += 1  # Increment the tab_count

def switch_tab(event=None):
    current_tab = notebook.select()
    notebook.select(current_tab)
    current_tab_widget = notebook.nametowidget(current_tab)
    text_widget = current_tab_widget.winfo_children()[0]
    text_widget.focus_set()

root = tk.Tk()
root.title("Text Editor")
frame = ttk.Frame(root)
frame.pack(fill="both", expand=True)
notebook = ttk.Notebook(frame)
notebook.pack(fill="both", expand=True)
create_tab()  # Create the initial tab
add_button = ttk.Button(frame, text="New Tab", command=create_tab)
add_button.pack(side="bottom")
notebook.bind("<ButtonRelease-1>", switch_tab)
root.mainloop()
</code></pre>
<p>to this:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
from tkinter import ttk
from tkinter import NE, LEFT, RIGHT

class ScrollableNotebook(ttk.Frame):
    def __init__(self, parent, *args, **kwargs):
        ttk.Frame.__init__(self, parent, *args)
        self.xLocation = 0
        self.notebookContent = ttk.Notebook(self, **kwargs)
        self.notebookContent.pack(fill="both", expand=True)
        self.notebookTab = ttk.Notebook(self, **kwargs)
        self.notebookTab.bind("<<NotebookTabChanged>>", self._tabChanger)
        slideFrame = ttk.Frame(self)
        slideFrame.place(relx=1.0, x=0, y=1, anchor=NE)
        leftArrow = ttk.Label(slideFrame, text="\u25c0")
        leftArrow.bind("<1>", self._leftSlide)
        leftArrow.pack(side=LEFT)
        rightArrow = ttk.Label(slideFrame, text=" \u25b6")
        rightArrow.bind("<1>", self._rightSlide)
        rightArrow.pack(side=RIGHT)
        self.notebookContent.bind("<Configure>", self._resetSlide)

    def _tabChanger(self, event):
        self.notebookContent.select(self.notebookTab.index("current"))

    def _rightSlide(self, event):
        if self.notebookTab.winfo_width() > self.notebookContent.winfo_width() - 30:
            if (self.notebookContent.winfo_width() - (self.notebookTab.winfo_width() + self.notebookTab.winfo_x())) <= 35:
                self.xLocation -= 20
                self.notebookTab.place(x=self.xLocation, y=0)

    def _leftSlide(self, event):
        if not self.notebookTab.winfo_x() == 0:
            self.xLocation += 20
            self.notebookTab.place(x=self.xLocation, y=0)

    def _resetSlide(self, event):
        self.notebookTab.place(x=0, y=0)
        self.xLocation = 0

    def add(self, frame, **kwargs):
        if len(self.notebookTab.winfo_children()) != 0:
            self.notebookContent.add(frame, text="", state="hidden")
        else:
            self.notebookContent.add(frame, text="")
        self.notebookTab.add(ttk.Frame(self.notebookTab), **kwargs)

    def forget(self, tab_id):
        self.notebookContent.forget(tab_id)
        self.notebookTab.forget(tab_id)

    def hide(self, tab_id):
        self.notebookContent.hide(tab_id)
        self.notebookTab.hide(tab_id)

    def identify(self, x, y):
        return self.notebookTab.identify(x, y)

    def index(self, tab_id):
        return self.notebookTab.index(tab_id)

    def insert(self, pos, frame, **kwargs):
        self.notebookContent.insert(pos, frame, **kwargs)
        self.notebookTab.insert(pos, frame, **kwargs)

    def select(self, tab_id):
        self.notebookContent.select(tab_id)
        self.notebookTab.select(tab_id)

    def tab(self, tab_id, option=None, **kwargs):
        return self.notebookTab.tab(tab_id, option=None, **kwargs)

    def tabs(self):
        return self.notebookContent.tabs()

    def enable_traversal(self):
        self.notebookContent.enable_traversal()
        self.notebookTab.enable_traversal()

tab_count = 1

def create_tab():
    global tab_count
    tab = ttk.Frame(notebook.notebookContent)
    notebook.add(tab, text=f"<Untitled> {tab_count}")
    text_widget = tk.Text(tab)
    text_widget.pack(fill="both", expand=True)
    notebook.notebookContent.select(tab)
    text_widget.focus_set()
    tab_count += 1

def switch_tab(event=None):
    current_tab = notebook.notebookContent.select()
    current_tab_widget = notebook.notebookContent.nametowidget(current_tab)
    if current_tab_widget.winfo_children():  # Check if there are any children widgets
        text_widget = current_tab_widget.winfo_children()[0]
        text_widget.focus_set()

root = tk.Tk()
root.title("Text Editor")
frame = ttk.Frame(root)
frame.pack(fill="both", expand=True)
notebook = ScrollableNotebook(frame)
notebook.pack(fill="both", expand=True)
create_tab()
add_button = ttk.Button(frame, text="New Tab", command=create_tab)
add_button.pack(side="bottom")
notebook.notebookContent.bind("<<NotebookTabChanged>>", switch_tab)
root.mainloop()
</code></pre>
<p>Now this works at first glance, but I noticed that whenever tabs are added and they exceed the window's width, they end up increasing the current width of the window (unless it is maximized/fullscreen).</p>
<p>How do I prevent the window's width from increasing when windowed and adding tabs?</p>
|
<python><tkinter><text><tabs>
|
2023-12-25 19:44:41
| 0
| 553
|
secemp9
|
77,714,644
| 4,601,931
|
Sudden issue with Python installation via Homebrew
|
<p>I have Python 3.12 and virtualenv 20.25.0, both installed via Homebrew. The last time I used my computer, virtualenv was functioning properly. The very next time I try to use <code>virtualenv</code>, I am greeted with this error:</p>
<pre><code>> $ virtualenv --version
Traceback (most recent call last):
  File "/usr/local/bin/virtualenv", line 5, in <module>
    from virtualenv.__main__ import run_with_catch
  File "/usr/local/lib/python3.12/site-packages/virtualenv/__init__.py", line 3, in <module>
    from .run import cli_run, session_via_cli
  File "/usr/local/lib/python3.12/site-packages/virtualenv/run/__init__.py", line 7, in <module>
    from virtualenv.app_data import make_app_data
  File "/usr/local/lib/python3.12/site-packages/virtualenv/app_data/__init__.py", line 11, in <module>
    from .read_only import ReadOnlyAppData
  File "/usr/local/lib/python3.12/site-packages/virtualenv/app_data/read_only.py", line 5, in <module>
    from virtualenv.util.lock import NoOpFileLock
  File "/usr/local/lib/python3.12/site-packages/virtualenv/util/lock.py", line 12, in <module>
    from filelock import FileLock, Timeout
ModuleNotFoundError: No module named 'filelock'
</code></pre>
<p>The internet happens to not be very helpful and I have done nothing myself in the time between using virtualenv successfully and now. What could've happened here?</p>
|
<python><virtualenv><homebrew>
|
2023-12-25 18:38:51
| 1
| 5,324
|
user4601931
|
77,714,538
| 19,366,064
|
How to reuse AsyncClient fixture
|
<p>Here is a simple code snippet to demonstrate the problem, it seems that the fixture client is not properly injected into the testing functions.</p>
<pre><code>from contextlib import asynccontextmanager
from fastapi import FastAPI
import asyncio
from httpx import AsyncClient
import pytest


@asynccontextmanager
async def lifespan(app: FastAPI):
    print("starting up")
    yield
    print("shutting down")


app = FastAPI(lifespan=lifespan)


@app.get("/hello")
async def hello():
    await asyncio.sleep(2)
    return {"hello": "world"}


@pytest.fixture(scope="session")
async def client():
    async with lifespan(app) as test_app:
        async with AsyncClient(app=test_app, base_url="http://localhost") as client:
            yield client


@pytest.mark.asyncio
async def test_hello(client):
    response = await client.get("/hello")
    assert response.json() == {"hello": "world"}
</code></pre>
<p>Running pytest will produce the following error message</p>
<pre><code>client = <async_generator object client at 0x00000175B3EA1990>

    @pytest.mark.asyncio
    async def test_hello(client):
>       response = await client.get("/hello")
E       AttributeError: 'async_generator' object has no attribute 'get'
</code></pre>
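<p>The message indicates the fixture reached the test as a raw async generator, i.e. nothing ever drove it to its <code>yield</code>. A minimal reproduction of that mechanism (plain stdlib, no pytest involved):</p>

```python
# Calling an async generator function just returns an async_generator
# object; until something drives it to its yield, there is no yielded
# client, and certainly no .get() method on the object itself.
import inspect

async def client():
    yield "would-be AsyncClient"

obj = client()
assert inspect.isasyncgen(obj)
assert not hasattr(obj, "get")
print(type(obj).__name__)  # async_generator
```

<p>With pytest-asyncio, decorating the fixture with <code>@pytest_asyncio.fixture</code> instead of <code>@pytest.fixture</code> (or enabling <code>asyncio_mode = auto</code>) is the usual way to have the plugin consume the generator and inject the yielded client; check the plugin docs for the exact spelling in your version.</p>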
|
<python><fastapi><httpx>
|
2023-12-25 17:50:03
| 0
| 544
|
Michael Xia
|
77,714,536
| 7,136,640
|
JSON parsing error when loading Firebase service account from .env file in Python
|
<p>I'm encountering an issue with parsing a Firebase service account stored in a .env file using Python. Here's my approach:</p>
<pre><code>import firebase_admin
from firebase_admin import credentials
from dotenv import load_dotenv
import os, json
load_dotenv()
firebase_config = os.getenv("FIREBASE_CONFIG")
firebase_config = json.loads(firebase_config) # Error occurs here
cred = credentials.Certificate(firebase_config)
firebase_admin.initialize_app(cred)
</code></pre>
<p>The <code>json.loads()</code> function raises a parsing error, likely due to newline characters (\n) or other special characters within the service account string. How can I effectively load the service account from the .env file and use it with the Firebase Admin SDK in Python?</p>
<h2>Specific questions:</h2>
<ul>
<li>What's the recommended way to handle special characters in service account strings when storing them in .env files?</li>
<li>Are there alternative approaches to loading service accounts in Python that might be more robust?</li>
</ul>
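<p>One common workaround (a sketch of the general idea, not Firebase-specific advice): base64-encode the whole service-account JSON before putting it in <code>.env</code>, so the embedded newlines in <code>private_key</code> can no longer confuse either the dotenv parser or <code>json.loads()</code>.</p>

```python
# Sketch: a base64 round trip turns the multi-line service-account JSON
# into a single opaque token that survives the .env file unchanged.
import base64
import json

service_account = {
    "type": "service_account",
    "private_key": "-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----\n",
}

# What you would store as FIREBASE_CONFIG in .env:
encoded = base64.b64encode(json.dumps(service_account).encode()).decode()

# What the app does after os.getenv("FIREBASE_CONFIG"):
decoded = json.loads(base64.b64decode(encoded))
assert decoded == service_account
print(decoded["type"])
```

<p>Alternatively, <code>credentials.Certificate()</code> also accepts a path to the JSON file, so storing only the file path in <code>.env</code> avoids the quoting problem entirely.</p>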
|
<python><python-3.x><firebase><firebase-authentication><python-dotenv>
|
2023-12-25 17:49:21
| 1
| 345
|
Huzaifa Zahoor
|
77,714,534
| 3,679,377
|
kbar keyboard shortcuts troubleshooting
|
<p>I am trying to create a component for plotly dash using react-kbar.</p>
<p>I am able to run it without any issues for the most part. However, for some reason, the keyboard shortcuts defined for the actions are not working. The keyboard shortcut for kbar itself is working fine (ctrl + k, escape), but the action shortcuts are not. The actions and the shortcuts are registered fine. I can see the shortcut key combinations appearing against each entry in the list.</p>
<p>Below is snippet of code where I use <code>KBar</code>. What am I doing wrong? Any pointer on debugging will also be of great help!</p>
<pre class="lang-js prettyprint-override"><code> ...
return (
<KBarProvider id={id} options={{disableScrollbarManagement: true}}>
<KBarPortal>
<KBarPositioner>
<KBarAnimator
style={{
maxWidth: mergedStyle.maxWidth,
width: mergedStyle.width,
borderRadius: '8px',
overflow: 'hidden',
boxShadow: '0 0 20px rgba(0, 0, 0, 0.1)',
background: mergedStyle.background,
color: 'grey',
fontFamily: mergedStyle.fontFamily,
}}
>
<KBarSearch
style={{
padding: '12px 16px',
fontSize: '16px',
width: '100%',
boxSizing: 'border-box',
outline: 'none',
border: 'none',
background: mergedStyle.searchBackground,
color: mergedStyle.searchTextColor,
}}
/>
<RenderResults {...props} mergedStyle={mergedStyle} />
<ActionRegistration
actions={actions}
setProps={setProps}
debug={debug}
/>
</KBarAnimator>
</KBarPositioner>
</KBarPortal>
{children}
</KBarProvider>
);
};
function ActionRegistration(props) {
const action_objects = props.actions.map((action) => {
if (action.noAction) return createAction(action);
action.perform = () => {
if (props.debug) {
console.log('Performing action', action);
}
props.setProps({selected: action.id});
};
return createAction(action);
});
useRegisterActions(action_objects);
return null;
}
</code></pre>
|
<python><reactjs><plotly>
|
2023-12-25 17:48:34
| 1
| 1,921
|
najeem
|
77,714,443
| 12,058,407
|
calculate date and time difference between 2 dates (& time) using robot framework
|
<p>I need to calculate the date and time difference between two dates (with time). One will be input in the format 'm/d/yy h:m:s' and the other is the current date and time. I see keywords like "Subtract Date From Date" and "Subtract Time From Date" in the Robot Framework documentation, but neither works for me, as I need to calculate both the date and time difference, with the result in minutes. The script below is not working and gives an error.
Any help/suggestion to get it right would be appreciated.</p>
<pre><code>*** Test Cases ***
GetDateAndTimeDifference
${date} = Get Current Date result_format=%m/%d/%y
Log To Console date : ${date}
${time} = Get Current Date result_format=%I:%M:%S %p
Log To Console time : ${time}
${datetime} = Catenate ${date} ${time}
Log To Console datetime : ${datetime}
${diff} = Subtract Date From Date ${datetime} 11/10/19 11:11:11 PM
Log To Console diff : ${diff}
${minutes} = Convert Time To Seconds ${diff}
Log To Console minutes : ${minutes}
</code></pre>
<p>Below is the error :</p>
<pre><code>ValueError: time data '1225-23-10 36:36:00.000000' does not match format '%Y-%m-%d %H:%M:%S.%f'
</code></pre>
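<p>For reference, I reproduced the computation I'm after in plain Python; the crux seems to be the date formats, since Robot's default is <code>%Y-%m-%d %H:%M:%S.%f</code> (I believe <code>Subtract Date From Date</code> also accepts <code>date1_format</code>/<code>date2_format</code> arguments, which would avoid this mismatch, though I haven't confirmed that):</p>

```python
from datetime import datetime

FMT = "%m/%d/%y %I:%M:%S %p"  # matches e.g. '11/10/19 11:11:11 PM'

def minutes_between(date1: str, date2: str) -> float:
    """Return (date1 - date2) in minutes; both strings use FMT."""
    delta = datetime.strptime(date1, FMT) - datetime.strptime(date2, FMT)
    return delta.total_seconds() / 60
```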
|
<javascript><python><selenium-webdriver><robotframework>
|
2023-12-25 17:14:29
| 4
| 307
|
Balaji211
|
77,714,338
| 15,893,581
|
why my Linear Least-Squares does not fit right the data-points
|
<p>I'm getting a poorly fitted 2D plane with the following code:</p>
<pre><code>import numpy as np
from scipy import linalg as la
from scipy.linalg import solve
# data
f1 = np.array([1., 1.5, 3.5, 4.])
f2 = np.array([3., 4., 7., 7.25])
# z = np.array([6., 6.5, 8., 9.])
A= X= np.array([f1, f2]).T
b= y= np.array([0.5, 1., 1.5, 2.]).T
##################### la.lstsq
res= la.lstsq(A,b)[0]
print(res)
##################### custom lu
#custom OLS
def ord_ls(X, y):
A = X.T @ X
b = X.T @ y
beta = solve(A, b, overwrite_a=True, overwrite_b=True,
check_finite=True)
return beta
res = ord_ls(X, y)
print(res)
##################### plot
# use the optimized parameters to plot the fitted curve in 3D space.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# Create 3D plot of the data points and the fitted curve
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(f1, f2, y, color='blue')
x_range = np.linspace(0, 7, 100)
y_range = np.linspace(0, 7,100)
X, Y = np.meshgrid(x_range, y_range)
Z = res[0]*X + res[1]
ax.plot_surface(X, Y, Z, color='red', alpha=0.5)
ax.set_xlabel('feat.1')
ax.set_ylabel('feat.2')
ax.set_zlabel('target')
plt.show()
# [0.2961165 0.09475728]
# [0.2961165 0.09475728]
</code></pre>
<p>Though the coefficients seem to be the same, the plot is still distorted. Is there an explanation or correction? Is some regularization needed, as in least squares? Or are the two features collinear, and is that the reason? (I'm not very familiar with linear algebra yet.)</p>
<p>p.s. <a href="https://docs.scipy.org/doc/scipy-0.18.0/scipy-ref-0.18.0.pdf" rel="nofollow noreferrer">scipy-0.18.0 docs</a>, - thought of LU-factorization at p.184 <code>p, l, u = la.lu(A)</code></p>
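<p>While experimenting I noticed the plotted surface only uses one slope: <code>Z = res[0]*X + res[1]</code> treats the second coefficient as an intercept instead of the feat.2 slope. A minimal sketch of a plane fit with an explicit intercept column (my own variable names), where the surface would be <code>Z = b1*X + b2*Y + b0</code>:</p>

```python
import numpy as np

f1 = np.array([1., 1.5, 3.5, 4.])
f2 = np.array([3., 4., 7., 7.25])
y = np.array([0.5, 1., 1.5, 2.])

def fit_plane(x1, x2, target):
    """Least-squares fit of target = b1*x1 + b2*x2 + b0."""
    A = np.column_stack([x1, x2, np.ones_like(x1)])  # intercept column
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    return beta  # (b1, b2, b0)

b1, b2, b0 = fit_plane(f1, f2, y)
# When plotting, the surface must use BOTH slopes: Z = b1*X + b2*Y + b0
```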
|
<python><scipy><least-squares><matrix-factorization>
|
2023-12-25 16:41:49
| 1
| 645
|
JeeyCi
|
77,714,037
| 7,862,953
|
python sort groupby data by key/function
|
<p>I would like to sort the data by month, not alphabetically but in calendar order, i.e. first the sales for January, then February, etc.
See the illustrative data I created:</p>
<pre><code>month=['January','February','March','April','January','February','March','April']
sales=[10,100,130,145,13409,670,560,40]
dict = {'month': month, 'sales': sales}
df = pd.DataFrame(dict)
df.groupby('month')['sales'].mean().sort_values()
</code></pre>
<p>In this case I get the data sorted by the sales average; however, I would like to sort the values by month order.</p>
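<p>One way I found (a sketch): make <code>month</code> an ordered Categorical so that groupby and sorting follow calendar order instead of alphabetical order:</p>

```python
import pandas as pd

month = ['January', 'February', 'March', 'April',
         'January', 'February', 'March', 'April']
sales = [10, 100, 130, 145, 13409, 670, 560, 40]
df = pd.DataFrame({'month': month, 'sales': sales})

order = ['January', 'February', 'March', 'April']
df['month'] = pd.Categorical(df['month'], categories=order, ordered=True)

# groups of an ordered Categorical come back in category (calendar) order
result = df.groupby('month', observed=True)['sales'].mean()
```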
|
<python><pandas><sorting>
|
2023-12-25 14:50:36
| 3
| 537
|
zachi
|
77,713,818
| 2,952,838
|
pip/pipx installation and adding path
|
<p>I run <a href="https://en.wikipedia.org/wiki/Debian#Forks_and_derivatives" rel="nofollow noreferrer">Debian Stable</a> on my box, and I want to install the <a href="https://github.com/binance/binance-connector-python" rel="nofollow noreferrer">Binance connector</a> library to query <a href="https://en.wikipedia.org/wiki/Binance" rel="nofollow noreferrer">Binance</a>.</p>
<p>I use pipx and everything seems to be OK (see below).</p>
<pre class="lang-none prettyprint-override"><code>sudo pipx install binance-connector --include-deps
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>⚠️ Note: normalizer was already on your PATH at /usr/bin/normalizer
installed package binance-connector 3.5.1, installed using Python 3.11.2
These apps are now globally available
- normalizer
- wsdump
⚠️ Note: '/root/.local/bin' is not on your PATH environment variable. These
apps will not be globally accessible until your PATH is updated. Run `pipx
ensurepath` to automatically add it, or manually modify your PATH in your
shell's config file (i.e. ~/.bashrc).
done! ✨ 🌟 ✨
</code></pre>
<p>And:</p>
<pre class="lang-none prettyprint-override"><code>pipx ensurepath
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>/home/lorenzo/.local/bin is already in PATH.
⚠️ All pipx binary directories have been added to PATH. If you are sure you
want to proceed, try again with the '--force' flag.
</code></pre>
<p>And:</p>
<pre class="lang-none prettyprint-override"><code>sudo pipx ensurepath
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>[sudo] password for lorenzo:
/root/.local/bin has been been added to PATH, but you need to open a new
terminal or re-login for this PATH change to take effect.
You will need to open a new terminal or re-login for the PATH changes to take
effect.
Otherwise pipx is ready to go! ✨ 🌟 ✨
</code></pre>
<p>At this point, I open a new shell, and I type:</p>
<pre class="lang-none prettyprint-override"><code>python3.11
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Python 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from binance.spot import Spot
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'binance'
</code></pre>
<p>What is going wrong and why is the Binance library not found?</p>
|
<python><pip><debian><binance><pipx>
|
2023-12-25 13:27:08
| 1
| 1,543
|
larry77
|
77,713,438
| 5,594,008
|
Wagtail, can't add settings
|
<p>I'm trying to add custom settings to wagtail (version 4.0.4).</p>
<p>Following the guide from the <a href="https://docs.wagtail.org/en/v4.0.4/reference/contrib/settings.html" rel="nofollow noreferrer">docs</a>, I added <code>wagtail.contrib.settings</code> to INSTALLED_APPS and added this to <em>models.py</em>:</p>
<pre><code>from wagtail.contrib.settings.models import register_setting, BaseGenericSetting
from django.db import models
@register_setting
class Authors(BaseGenericSetting):
facebook = models.URLField()
</code></pre>
<p>Then I ran makemigrations and migrate, but nothing appeared in the Wagtail settings.</p>
<p>Then I tried to register the model in the admin, in <em>wagtail_admin.py</em>:</p>
<pre><code>from wagtail.contrib.modeladmin.options import (
ModelAdmin,
modeladmin_register,
)
from .models import Authors
@modeladmin_register
class AuthorsAdmin(ModelAdmin):
model = Authors
list_display = ("facebook",)
add_to_settings_menu = True
menu_label = "Authors"
</code></pre>
<p>But still the same result. What am I missing? Some extra config or ...?</p>
<p>Only after removing <code>add_to_settings_menu = True</code> does Authors appear in the side menu (but, of course, not in the settings). With this line, nothing appears.</p>
|
<python><django><wagtail><django-cms>
|
2023-12-25 11:07:27
| 1
| 2,352
|
Headmaster
|
77,713,384
| 12,285,101
|
nbdev_export ; pip install doesn't work on nbdev with windows
|
<p>I have downloaded nbdev based on [this tutorial][1] and [the help I got in this post][2].
Previously I used nbdev in bash, but now, unfortunately, I have to work on Windows.
Now I have two notebooks, and I want to import a function from notebook1 into notebook2.
To do this, when I used bash, I ran this command in the CLI:</p>
<pre><code>nbdev_export && pip install ./
</code></pre>
<p>However, now ,as I use windows, I tried to run the following command:</p>
<pre><code>nbdev_export ; pip install
</code></pre>
<p>But I receive this error:</p>
<pre><code>(.venv) PS C:\git\my_repo> nbdev_export ; pip install
ERROR: You must give at least one requirement to install (see "pip help install")
[notice] A new release of pip is available: 23.2.1 -> 23.3.2 ##this version allowed me to install quarto
[notice] To update, run: python.exe -m pip install --upgrade pip
</code></pre>
<p>When I run nbdev_export alone it works, so it seems I am only missing the right way to use pip install.</p>
<p><strong>My question is: how can I apply the "pip install" part on Windows?</strong>
[1]: <a href="https://nbdev.fast.ai/tutorials/tutorial.html" rel="nofollow noreferrer">https://nbdev.fast.ai/tutorials/tutorial.html</a>
[2]: <a href="https://stackoverflow.com/questions/77675755/cannot-install-quarto-with-nbdev-install-quarto-importerror-cannot-import-nam/77689819#77689819">Cannot install quarto with nbdev_install_quarto - ImportError: cannot import name 'uname' from 'os'</a></p>
|
<python><pip><windows-subsystem-for-linux><nbdev>
|
2023-12-25 10:43:39
| 1
| 1,592
|
Reut
|
77,713,103
| 20,176,161
|
Dataframe: Keep unique values in a list for each row
|
<p>I have a dataframe as follows (the <code>match</code> column holds comma-separated strings):</p>
<pre><code>Ville match
Paris Talborjt,Talborjt,Ville Nouvelle
Rome Hay Najah,Hay Najah,Najah
</code></pre>
<p>I would like to keep the unique values per row of the column match. The desired output is as follows:</p>
<pre><code>Ville match contains
Paris Talborjt,Talborjt,Ville Nouvelle Talborjt,Ville Nouvelle
Rome Hay Najah,Hay Najah,Najah Hay Najah, Najah
</code></pre>
<p>I have tried the following but did not manage to get the desired output.</p>
<pre><code>all_quartiers['contains']=all_quartiers['match'].apply(set).apply(list)
all_quartiers['contains']=all_quartiers['match'].apply(lambda x: list(set(x)))
all_quartiers['contains']=all_quartiers.explode('match')['match'].unique()
</code></pre>
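<p>The attempts above fail because <code>match</code> holds comma-separated strings, not lists, so <code>apply(set)</code> deduplicates individual characters rather than names. Splitting first and deduplicating with <code>dict.fromkeys</code> keeps the original order (a plain <code>set</code> would scramble it); a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Ville': ['Paris', 'Rome'],
    'match': ['Talborjt,Talborjt,Ville Nouvelle', 'Hay Najah,Hay Najah,Najah'],
})

def dedupe(s: str) -> str:
    # dicts preserve insertion order, so first occurrences keep their place
    return ','.join(dict.fromkeys(s.split(',')))

df['contains'] = df['match'].apply(dedupe)
```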
|
<python><pandas><dataframe>
|
2023-12-25 09:07:06
| 1
| 419
|
bravopapa
|
77,713,072
| 1,985,409
|
How to accelerate assignment problem with ortools.linear_solver
|
<p>I am new to ortools.linear_solver. I built the following code, which assigns workers to tasks under the following requirements:</p>
<ol>
<li>I wish to find an assignment that minimizes the difference between the worker with the highest cost and the one with the lowest cost.</li>
<li>Each worker has one of the following IDs ("A", "B", "C", "D"), and some constraints apply:</li>
</ol>
<ul>
<li>Some tasks may only have workers with a specific ID</li>
<li>Some sets of tasks must have workers with the same ID</li>
<li>Some sets of tasks must have worker IDs whose sum is limited (where "A" has value 0, "B" has value 1, and so on).</li>
</ul>
<p>The following code does the job perfectly, but when the number of workers/tasks goes above 40, the time it takes to solve the problem becomes impractical; above 50 I stopped the run after 10 minutes.</p>
<p>How can I accelerate it to a realistic time for N up to, say, 70?</p>
<ul>
<li>Are there parameters I can set to tune the algorithm? For example, I saw there is a PRESOLVE_ON flag, but I didn't manage to find a function in the Python API for setting it, and I don't even know whether it helps.</li>
<li>Is it possible to give an initial assignment solution which may accelerate the algorithm for reaching the optimal solution?</li>
</ul>
<pre><code>from ortools.linear_solver import pywraplp
import numpy as np
# number of workers and tasks
N = 40
# cost table for each worker-task pairs
np.random.seed(0)
costs = np.random.rand(N,N)*100
# workers IDs
workers_id = (np.random.rand(N)*4).astype(np.uint32)
id_2_idsrt_dict = {0: 'A', 1: 'B', 2: 'C', 3: 'D'}
workers_id_str = [id_2_idsrt_dict[val] for val in workers_id]
print(workers_id_str)
idsrt_2_id_dict = {}
for id, idstr in id_2_idsrt_dict.items():
idsrt_2_id_dict[idstr] = id
print(idsrt_2_id_dict)
num_workers = len(costs)
num_tasks = len(costs[0])
max_cost_limit = np.max(costs)
min_cost_limit = np.min(costs)
# Solver
# Create the mip solver with the SCIP backend.
solver = pywraplp.Solver.CreateSolver("SCIP") # better for non integer values
# Variables (updated with algorithm iterations)
# x[i, j] is an array of 0-1 variables, which will be 1
# if worker i is assigned to task j.
x = {}
for i in range(num_workers):
for j in range(num_tasks):
x[i, j] = solver.IntVar(0, 1, "")
# Variables (updated with algorithm iterations)
# tasks_ids[j] is a list of integers variables contains each task's assigned worker id.
tasks_ids = []
for j in range(num_tasks):
tasks_ids.append( solver.Sum([workers_id[i]*x[i, j] for i in range(num_workers)]) )
# Constraint
# Each worker is assigned to exactly one task.
for i in range(num_workers):
solver.Add(solver.Sum([x[i, j] for j in range(num_tasks)]) == 1)
# Constraint
# Each task is assigned to exactly one worker.
for j in range(num_tasks):
solver.Add(solver.Sum([x[i, j] for i in range(num_workers)]) == 1)
# Constraint
# Task 1 can be assigned only with workers that have the id "A"
solver.Add(tasks_ids[1] == idsrt_2_id_dict["A"])
# Constraint
# Tasks 2,4,6 must be assigned workers with the same id
solver.Add(tasks_ids[2] == tasks_ids[4])
solver.Add(tasks_ids[2] == tasks_ids[6])
# Constraint
# Tasks 10,11,12 must be assigned workers with the same id
solver.Add(tasks_ids[10] == tasks_ids[11])
solver.Add(tasks_ids[11] == tasks_ids[12])
# Constraint
# Tasks 1,2,3 sum of ids <= 4
solver.Add((tasks_ids[1] + tasks_ids[2] + tasks_ids[3]) <= 4)
# Constraint
# Tasks 4,5,6 sum of ids <= 4
solver.Add((tasks_ids[4] + tasks_ids[5] + tasks_ids[6]) <= 4)
# Constraint
# Tasks 7,8,9 sum of ids <= 3
solver.Add((tasks_ids[7] + tasks_ids[8] + tasks_ids[9]) <= 3)
# Objective
# minimize the difference of assignment higher cost worker and lower cost worker
# list of workers costs for an assignment
assignment_workers_costs_list = []
for i in range(num_workers):
assignment_workers_costs_list.append( sum([costs[i][j] * x[i, j] for j in range(num_tasks)]) )
# Additional variables for max and min costs
max_cost = solver.Var(lb = min_cost_limit, ub = max_cost_limit, integer =False, name = 'max_cost')
min_cost = solver.Var(lb = min_cost_limit, ub = max_cost_limit, integer =False, name = 'min_cost')
# Constraints to update max and min costs
for i in range(num_workers):
solver.Add(assignment_workers_costs_list[i] <= max_cost)
solver.Add(assignment_workers_costs_list[i] >= min_cost)
# Minimize the difference between max and min costs
solver.Minimize(max_cost - min_cost)
# Solve
print(f"Solving with {solver.SolverVersion()}")
status = solver.Solve()
# Print solution.
if status == pywraplp.Solver.OPTIMAL or status == pywraplp.Solver.FEASIBLE:
    print(f"Difference = {solver.Objective().Value()}\n")
for i in range(num_workers):
for j in range(num_tasks):
if x[i, j].solution_value() > 0.5:
print(f"Worker {i} ({workers_id_str[i]})assigned to task {j}." + f" Cost: {costs[i][j]}")
else:
print("No solution found.")
</code></pre>
|
<python><or-tools><cp-sat>
|
2023-12-25 08:57:30
| 1
| 671
|
audi02
|
77,712,635
| 8,509,235
|
How to disable border in subplot for a 3D plot in matplotlib
|
<p>I have the following code which produces two plots in matplotlib.</p>
<p>However, I'm unable to remove the border in the second subplot despite passing all these arguments.</p>
<p>How can I remove the black border around the second subplot (see the attached image)?</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
beta, gamma = np.linspace(-np.pi / 2, np.pi / 2, 500), np.linspace(-np.pi / 2, np.pi / 2, 500)
B, G = np.meshgrid(beta, gamma)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))
# 2D Contour plot
ax1.imshow(obj_vals.T, origin='lower', cmap='hot', extent=(-np.pi/2, np.pi/2, -np.pi/2, np.pi/2))
ax1.set_xlabel(r'$\gamma$')
ax1.set_ylabel(r'$\beta$')
ax1.set_xticks([])
ax1.set_yticks([])
ax2 = fig.add_subplot(122, projection='3d')
# Make panes transparent
ax2.xaxis.pane.fill = False # Left pane
ax2.yaxis.pane.fill = False # Right pane
ax2.zaxis.pane.fill = False # Right pane
# Remove grid lines
ax2.grid(False)
# Remove tick labels
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_zticklabels([])
# Transparent spines
ax2.xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax2.yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax2.zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax2.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax2.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
# No ticks
ax2.set_xticks([])
ax2.set_yticks([])
ax2.set_zticks([])
# Surface plot
surf = ax2.plot_surface(B, G, obj_vals.T, cmap='hot')
plt.axis('off')
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/62ZyJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/62ZyJ.png" alt="enter image description here" /></a></p>
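<p>One thing I'm now suspecting: <code>plt.subplots</code> already created a framed 2-D axes in slot 2, and <code>fig.add_subplot(122, projection='3d')</code> draws a new 3-D axes on top of it while the old frame stays behind. Removing the placeholder before adding the 3-D axes might be the fix; a sketch:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))
ax2.remove()  # drop the framed 2-D placeholder entirely
ax2 = fig.add_subplot(1, 2, 2, projection="3d")  # fresh 3-D axes in slot 2
```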
|
<python><numpy><matplotlib>
|
2023-12-25 05:59:48
| 1
| 666
|
Vivek Katial
|
77,712,413
| 90,506
|
Remove element from list if list of sub strings in list
|
<p>I'm fairly new to Python 3. I was wondering what the most efficient way is to accomplish this: if a directory listing contains a sub directory, then remove that item from the directory listing.</p>
<pre><code>dirs = [ "/mnt/user/dir1", "/mnt/user/dir1/filea", "/mnt/user/dir2", "/mnt/user/dir3", "/mnt/user/dir4" ]
exclude_dirs = [ "/mnt/user/dir1", "/mnt/user/dir3" ]
</code></pre>
<p>If any element of exclude_dirs occurs in dirs, remove that item from dirs. In the above example the matching would be a substring (prefix) match, i.e. I'd expect these elements to be removed from dirs:</p>
<pre><code>"/mnt/user/dir1"
"/mnt/user/dir1/filea"
"/mnt/user/dir3"
</code></pre>
<p>Help! :)</p>
<p>Thanks!</p>
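<p>To clarify the behaviour I'm after (a sketch): a naive substring check would also drop e.g. <code>/mnt/user/dir10</code>, so the match should be "equal to an excluded dir, or underneath it" (note the trailing separator):</p>

```python
def filter_dirs(dirs, exclude_dirs):
    """Drop every path that is an excluded dir or lives underneath one."""
    def excluded(path):
        return any(path == ex or path.startswith(ex.rstrip('/') + '/')
                   for ex in exclude_dirs)
    return [d for d in dirs if not excluded(d)]

dirs = ["/mnt/user/dir1", "/mnt/user/dir1/filea", "/mnt/user/dir2",
        "/mnt/user/dir3", "/mnt/user/dir4"]
exclude_dirs = ["/mnt/user/dir1", "/mnt/user/dir3"]
```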
|
<python><python-3.x><python-3.8>
|
2023-12-25 03:45:07
| 1
| 1,581
|
farhany
|
77,712,392
| 21,305,238
|
Type alias for a union of two @runtime_checkable Protocols triggers mypy's "Cannot use parameterized generics in instance checks"
|
<p>Here's my code, simplified a lot (<a href="https://mypy-play.net/?flags=strict&mypy=master&python=3.12&gist=02326c887487ca6c05e31867ddf66412" rel="nofollow noreferrer">playground</a>, <a href="https://mypy-play.net/?flags=strict&mypy=master&python=3.12&gist=0b719a8f72ee21851b84492a81a78629" rel="nofollow noreferrer">using <code>_typeshed.SupportsTrunc</code></a>):</p>
<pre class="lang-py prettyprint-override"><code>from typing import Protocol, runtime_checkable, SupportsIndex, SupportsInt
@runtime_checkable
class SupportsTrunc(Protocol):
def __trunc__(self) -> int:
...
_ConvertibleToInt = SupportsInt | SupportsIndex | SupportsTrunc
</code></pre>
<pre class="lang-py prettyprint-override"><code>def f(o: object) -> None:
if isinstance(o, _ConvertibleToInt):
# error: Parameterized generics cannot be used with class or instance checks
# error: Argument 2 to "isinstance" has incompatible type "<typing special form>"; expected "_ClassInfo"
...
</code></pre>
<p>All of the three <code>Protocol</code>s <a href="https://docs.python.org/3/library/typing.html#protocols" rel="nofollow noreferrer">are <code>@runtime_checkable</code></a>. As far as I'm aware, these are clearly not parameterized as mypy claimed. What am I doing wrong? Or is this a mypy bug?</p>
<hr />
<p>Thanks to @wjandrea, I have found a more minimal example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import SupportsIndex, SupportsInt
_ConvertibleToInt = SupportsInt | SupportsIndex
def f(o: object) -> None:
if isinstance(o, _ConvertibleToInt): # error
...
</code></pre>
<p>The same error does not happen if there is no alias (<a href="https://mypy-play.net/?flags=strict&mypy=master&python=3.12&gist=a8c09fe38ce5c66773a4ff6583a0aa4d" rel="nofollow noreferrer">playground</a>):</p>
<pre class="lang-py prettyprint-override"><code>def f(o: object) -> None:
if isinstance(o, SupportsInt | SupportsIndex): # fine
...
</code></pre>
<p>...or if the alias is for one <code>Protocol</code> and not a union (<a href="https://mypy-play.net/?flags=strict&mypy=master&python=3.12&gist=4ccd4accb20a6b36784103408d999071" rel="nofollow noreferrer">playground</a>):</p>
<pre class="lang-py prettyprint-override"><code>_ConvertibleToInt = SupportsInt
def f(o: object) -> None:
if isinstance(o, _ConvertibleToInt): # fine
...
</code></pre>
<p>A union of the same <code>Protocol</code> repeated twice still triggers this error (<a href="https://mypy-play.net/?flags=strict&mypy=master&python=3.12&gist=dedf6e8e884fb14a1fbe0fc0c799a99d" rel="nofollow noreferrer">playground</a>):</p>
<pre class="lang-py prettyprint-override"><code>_ConvertibleToInt = SupportsInt | SupportsInt
def f(o: object) -> None:
if isinstance(o, _ConvertibleToInt): # error
...
</code></pre>
|
<python><mypy><python-typing>
|
2023-12-25 03:34:26
| 1
| 12,143
|
InSync
|
77,712,334
| 8,497,844
|
Real size of directory with subdirectories and files on disk
|
<p>I need to create an <code>img</code> container to archive some directories and files.</p>
<p>For that I need to create it with <code>dd</code>. That is no problem in itself, but I can't create it because I don't know the real size of the directories and files. As I understand it, an <code>img</code> container doesn't support a dynamic size.</p>
<p>As we can see in the image at this <a href="https://www.geeksforgeeks.org/how-to-get-size-of-folder-using-python/" rel="nofollow noreferrer">link</a>, there are two values, <code>Size</code> and <code>Size on disk</code>.
As I understand it, I need <code>Size on disk</code>, but there is no Python example for <code>Size on disk</code>.</p>
<p>If I use <code>Size</code>, I may get a <code>Not enough space</code> error when using the container.</p>
<p>Of course I can get the real <code>Size on disk</code> from the <code>Block size</code> of my partition and the <code>Size</code>, and calculate <code>Size on disk</code> by hand (like <a href="https://stackoverflow.com/a/62602765/8497844">here</a>), but I think it could be easier.</p>
<p>Q: How do I get the real <code>Size on disk</code> with Python?</p>
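<p>What I have found so far (a sketch, POSIX-only): <code>os.stat</code> exposes <code>st_blocks</code>, the number of 512-byte units actually allocated for a file, which corresponds to the "Size on disk" figure (it differs from <code>st_size</code> for sparse files and because of block rounding):</p>

```python
import os

def size_on_disk(path: str) -> int:
    """Total allocated bytes for a directory tree (POSIX: st_blocks * 512)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                st = os.lstat(os.path.join(root, name))
            except OSError:
                continue  # file vanished or is unreadable; skip it
            # st_blocks is always in 512-byte units, regardless of FS block size
            total += st.st_blocks * 512
    return total
```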
|
<python>
|
2023-12-25 02:52:14
| 1
| 727
|
Pro
|
77,712,302
| 1,487,336
|
How to speed up `pandas.where`?
|
<p>I have two dataframes: <code>df1</code> with shape <code>(34, 1151649)</code> and <code>df2</code> with shape <code>(76, 3467)</code>. I would like to perform a <code>pandas.where</code> followed by a <code>groupby</code> on them, but it is quite slow. I'm wondering if there's any way to speed up the code. A sample is as follows.</p>
<pre><code>df1 = pd.DataFrame(np.arange(6).reshape(2, 3), index=[1, 2], columns=pd.MultiIndex.from_tuples((('a', 1), ('a', 2), ('b', 3)), names=['n1', 'n2']))
df2 = pd.DataFrame(np.arange(6).reshape(3, 2), index=[0, 1, 2], columns=pd.Index(['a', 'c'], name='n1'))
df1
df2
df1.where(df2 == 2).groupby(level=1, axis=1).sum()
</code></pre>
<p>The output is as follows:</p>
<p><a href="https://i.sstatic.net/WMPzQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WMPzQ.png" alt="enter image description here" /></a></p>
|
<python><pandas>
|
2023-12-25 02:31:42
| 2
| 809
|
Lei Hao
|
77,712,087
| 6,467,736
|
Use python script to schedule several tasks in Windows Task Scheduler, then exit. Each task should generate Win toast notification. How to do?
|
<p>There is an online calendar of events. I will fetch the names, dates and times of these events. I then want to use python to pass these events to Windows Task Scheduler (running Win 10), then have the script exit. For each task, at the appropriate time, I want Task Scheduler to generate a Windows toast notification. Once the toast notification is generated, the task is finished, and I'd want to remove the completed task from Task Scheduler.</p>
<p>Is this the most efficient way, using python (and not keeping a script running all day, as these events are hours apart), to fetch events from an online calendar and generate toast notifications?</p>
<p>If so, can you give me a general outline of how to do this?</p>
<p>I've played around with ChatGPT and Bard, and they've sent me down endless rabbit holes, with many hallucinations about properties that don't exist for various objects. So now I'm wondering if the basic concept of using python to schedule several tasks in one shot, then exit, and have Task Scheduler generate toast notifications, and then delete the completed tasks from Task Scheduler, is even feasible. Or if there are superior methods to accomplish my goal?</p>
|
<python><toast><windows-task-scheduler>
|
2023-12-24 23:16:16
| 1
| 427
|
jub
|
77,711,996
| 453,673
|
How to display graphical information infinitely downward in Python?
|
<p>Example:<br />
<a href="https://i.sstatic.net/K4tUn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K4tUn.png" alt="enter image description here" /></a></p>
<p>I need to display more rows like this (in the direction of the red arrow), with more of those squares. Along with the squares I could also use other shapes of various colours, and the objects need to be placed at very specific coordinates. When I hover over any of those objects, it has to show a tooltip relevant to that object. The text shown would be rendered as a graphical object, not as text that can be selected with a mouse pointer. Basically, everything is fully graphically rendered.<br />
As more data is generated, more rows will be added and it goes on infinitely. Obviously a scrollbar is needed.<br />
So far, from libraries like Matplotlib, I've known only options of creating a graphical screen of fixed size. I considered <a href="https://stackoverflow.com/questions/30216927/pyqt-embedding-graphics-object-in-widget">PyQt widgets</a>, but it doesn't seem to have the desired functionality.<br />
I considered HTML (since adding new rows infinitely is easy) and JavaScript, but it's too cumbersome to export the data from Python and load and parse it in JavaScript.</p>
<p>Is there any way to do such a display in Python? Demonstrating one way of achieving this objective effectively would suffice.<br />
Purpose: I'm creating this to visualize how certain foods and sleep loss lead to health issues. Displaying it like this allows me to see patterns across weeks or months. It's not just about plotting points and text, it's also about being able to dynamically update their color and size on clicking any of the graphical elements.</p>
|
<python><matplotlib><pyqt>
|
2023-12-24 22:12:13
| 2
| 20,826
|
Nav
|
77,711,955
| 360,826
|
Pytorch parameters are not updating
|
<p>This seems to be a common question for folks who start using pytorch and I guess this one is my version of it.</p>
<p>Is it clear to a more experienced pytorch user why my params are not updating as my code loops?</p>
<pre><code>import torch
import numpy as np
np.random.seed(10)
def optimize(final_shares: torch.Tensor, target_weight, prices, loss_func=None):
final_shares = final_shares.clamp(0.)
mv = torch.multiply(final_shares, prices)
w = torch.div(mv, torch.sum(mv))
print(w)
return loss_func(w, target_weight)
def main():
position_count = 16
cash_buffer = .001
starting_shares = torch.tensor(np.random.uniform(low=1, high=50, size=position_count), dtype=torch.float64)
prices = torch.tensor(np.random.uniform(low=1, high=100, size=position_count), dtype=torch.float64)
prices[-1] = 1.
x_param = torch.nn.Parameter(starting_shares, requires_grad=True)
target_weights = ((1 - cash_buffer) / (position_count - 1))
target_weights_vec = [target_weights] * (position_count - 1)
target_weights_vec.append(cash_buffer)
target_weights_vec = torch.tensor(target_weights_vec, dtype=torch.float64)
loss_func = torch.nn.MSELoss()
eta = 0.01
optimizer = torch.optim.SGD([x_param], lr=eta)
for epoch in range(10000):
optimizer.zero_grad()
loss_incurred = optimize(final_shares=x_param, target_weight=target_weights_vec,
prices=prices, loss_func=loss_func)
loss_incurred.backward()
optimizer.step()
optimize(final_shares=x_param.data, target_weight=target_weights_vec,
prices=prices, loss_func=loss_func)
if __name__ == '__main__':
main()
</code></pre>
|
<python><pytorch>
|
2023-12-24 21:49:41
| 1
| 6,863
|
jason m
|
77,711,951
| 13,520,498
|
script using mediapipe breaks after making it executable with pyinstaller - FileNotFoundError: The path does not exist
|
<p>I made a desktop application using the <code>PyQT5</code> library. The main libraries I am using are <code>opencv-python-headless</code>, <code>mediapipe</code> and <code>pyserial</code>. My Python script does exactly what it should and works perfectly, but after I make it executable with <code>pyinstaller</code> it breaks. Mainly, the <code>mediapipe</code> library throws this error:</p>
<pre><code>Exception in thread Thread-3 (getReady_screen):
Traceback (most recent call last):
File "threading.py", line 1016, in _bootstrap_inner
File "threading.py", line 953, in run
File "app.py", line 344, in getReady_screen
File "app.py", line 193, in detect_squat
File "mediapipe/python/solutions/pose.py", line 146, in __init__
File "mediapipe/python/solution_base.py", line 271, in __init__
FileNotFoundError: The path does not exist.
</code></pre>
<p>I tried to make my script executable with these two commands; both of them throw the same error:</p>
<pre><code>pyinstaller --onedir app.py
</code></pre>
<pre><code>pyinstaller --onefile app.py
</code></pre>
<p>This is the user-name and path to my directory <code>squat@squat-OptiPlex-3050:~/squat-counting-application$ </code></p>
<p>These are the contents in my <code>requirements.txt</code> file:</p>
<pre><code>absl-py==2.0.0
attrs==23.1.0
cffi==1.16.0
contourpy==1.1.1
cycler==0.12.1
docopt==0.6.2
flatbuffers==23.5.26
fonttools==4.46.0
importlib-resources==6.1.1
kiwisolver==1.4.5
matplotlib==3.7.4
mediapipe==0.10.8
num2words==0.5.13
numpy==1.24.4
opencv-python-headless==4.8.1.78
packaging==23.2
Pillow==10.1.0
protobuf==3.20.3
pycparser==2.21
pyparsing==3.1.1
PyQt5==5.15.10
PyQt5-Qt5==5.15.2
PyQt5-sip==12.13.0
pyqtspinner==2.0.0
pyserial==3.5
python-dateutil==2.8.2
PyYAML==6.0.1
six==1.16.0
sounddevice==0.4.6
zipp==3.17.0
</code></pre>
<h5>My system specs are:</h5>
<ul>
<li>OS Ubuntu 22.04 LTS x86_64</li>
<li>CPU Intel i5-6500T (4) @ 3.100GHz</li>
<li>GPU Intel HD Graphics 530</li>
<li>python 3.10.12</li>
<li>pip 22.0.2</li>
</ul>
|
<python><pyinstaller><mediapipe>
|
2023-12-24 21:44:02
| 1
| 1,991
|
Musabbir Arrafi
|
77,711,768
| 5,675,229
|
No latest price for CCXT's fetch_ohlcv API
|
<p>I am using CCXT's fetch_ohlcv method to get the historical prices of coins. For a trading strategy I need to get the last N price records for a given timeframe. Say the timeframe is '15m' and N is 99: if the current time is 00:00, then I need to extract records starting from around 23:15 the previous night. The code would be something like this:</p>
<pre><code>from_ts = exchange.parse8601('2023-12-24 00:00:00')
exchange.fetch_ohlcv('SOLUSDT', '15m', since=from_ts, limit=1000)
</code></pre>
<p>But somehow CCXT can only provide around 80 records, in which the last couple of hours are missing. I tried Bybit and it is just the same. Is a certain restriction applied by CCXT? What is the correct way to extract historic OHLCV records up to the latest moment?</p>
|
<python><cryptocurrency><ccxt>
|
2023-12-24 20:04:52
| 1
| 509
|
Wonderjimmy
|
77,711,708
| 11,009,696
|
Extracting text from a PDF - python
|
<p>I am new to Python and I am developing a program that takes a PDF file as input and converts it to text. I am using Python 3 and have tried the PyPDF2 and PDFMiner.six packages.</p>
<p>For the first PDF file it converts well and prints the text on the console, but when I use another PDF file which contains some empty pages, it prints out the text, and when it reaches the empty page, this error appears.</p>
<p>Here is the code:</p>
<h2>PyPDF2</h2>
<pre><code>import PyPDF2
def extract_txt_pdf(pdf_file: str) -> list[str]:
    # open the file and read it as bytes
    with open(pdf_file, 'rb') as pdf:
        reader = PyPDF2.PdfReader(pdf, strict=False)
        pdf_text = []
        for page in reader.pages:
            content = page.extract_text()
            pdf_text.append(content)
        return pdf_text
</code></pre>
<h2>PDFMiner.six</h2>
<pre><code>from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfpage import PDFPage
from io import StringIO
def convert_pdf_to_txt(path):
    rsrcmgr = PDFResourceManager()
    retstr = StringIO()
    codec = 'utf-8'
    laparams = LAParams()
    device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)
    fp = open(path, 'rb')
    interpreter = PDFPageInterpreter(rsrcmgr, device)
    password = ""
    maxpages = 0
    caching = True
    pagenos = set()
    for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages,
                                  password=password, caching=caching,
                                  check_extractable=True):
        interpreter.process_page(page)
    text = retstr.getvalue()
    fp.close()
    device.close()
    retstr.close()
    return text
</code></pre>
<p>And here is the error for pdfminer:</p>
<pre><code>    print(t)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2032.0_x64__qbz5n2kfra8p0\Lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode character '\u25b6' in position 0:
character maps to <undefined>
</code></pre>
|
<python><pdf><pypdf><text-extraction><pdfminersix>
|
2023-12-24 19:41:30
| 0
| 1,800
|
Sana'a Al-ahdal
|
77,711,555
| 5,962,981
|
FastAPI doesn't handle Pydantic validation error
|
<p>In a FastAPI application I have a <code>model.py</code>:</p>
<pre><code>from typing import Optional

from pydantic import BaseModel, root_validator

class Testing(BaseModel):
    a: Optional[str]
    b: Optional[str]

    @root_validator(pre=True)
    def check_all_values(cls, values):
        if len(values) == 0:
            raise ValueError('Error...')
        return values
</code></pre>
<p>In <code>main.py</code> I have imported the <code>Testing</code> class from <code>model.py</code>:</p>
<pre><code>from fastapi import FastAPI, HTTPException

app = FastAPI()

@app.post('/', response_model=Testing)
async def postSomething(values: Testing):
    try:
        return values
    except ValueError as e:
        raise HTTPException(status_code=422, detail=f'{e}')
<p>Now if you run this code and pass None to both arguments via the API, FastAPI returns 200, even though it should raise an error. Why is this happening, and what is the correct way or best practice to handle Pydantic validation errors in FastAPI? I would really appreciate your reply.</p>
|
<python><fastapi><pydantic>
|
2023-12-24 18:35:51
| 1
| 923
|
filtertips
|
77,711,531
| 2,063,900
|
In Odoo, how to add a search to the Product Variants screen using a field on the product template model
|
<p>I want to add a search to the Product Variants list screen using a custom field that I have already added to the product.template model.</p>
<p>The field name is model_number and I added it with this code:</p>
<pre><code>class ProductCoNew(models.Model):
    _inherit = 'product.template'

    model_number = fields.Char(string='Model', required=True)
</code></pre>
<p>and this is my code to add the search</p>
<pre><code><record id="res_product_product_tree_new" model="ir.ui.view">
<field name="name">product.product_search_form_view</field>
<field name="model">product.product</field>
<field name="inherit_id" ref="product.product_search_form_view"/>
<field name="arch" type="xml">
<xpath expr="//field[@name='name']" position="after">
<field name="model_number_search" string="Model Number"
domain="[('product_tmpl_id.model_number', 'ilike', self)]"/>
</xpath>
</field>
</record>
</code></pre>
<p>and</p>
<pre><code>class ProductProduct(models.Model):
    _inherit = 'product.product'

    model_number_search = fields.Char(related='product_tmpl_id.model_number',
                                      string='DES 1', readonly=True, store=True)
</code></pre>
<p>But this code is not working and gives me an error:</p>
<pre><code>Domain on non-relational field "model_number_search" makes no sense (domain:[('product_tmpl_id.model_number', 'ilike', self)])
</code></pre>
<p>I do not know what the problem is; can anyone help me?</p>
|
<python><odoo><odoo-16>
|
2023-12-24 18:28:19
| 1
| 361
|
ahmed mohamady
|
77,711,480
| 4,398,966
|
Python readline() bug, writes at eof
|
<p>In Python 3 it seems that there is a bug with the readline() method.</p>
<p>I have a file txt.txt that contains two lines:</p>
<pre><code>1234567890
abcdefghij
</code></pre>
<p>I then run the following code:</p>
<pre><code>g = open("txt.txt","r+")
g.write("xxx")
g.flush()
g.close()
</code></pre>
<p>It modifies the file as expected:</p>
<pre><code>xxx4567890
abcdefghij
</code></pre>
<p>I then run the following code:</p>
<pre><code>g = open("txt.txt","r+")
g.readline()
Out[99]: 'xxx4567890\n'
g.tell()
Out[100] 12
g.write("XXX")
g.flush()
g.close()
</code></pre>
<p>I get the following:</p>
<pre><code>xxx4567890
abcdefghij
XXX
</code></pre>
<p>Why is "XXX" being written to the end of the file instead of just after the first line?</p>
<p>If I run the following:</p>
<pre><code>g = open("txt.txt","r+")
g.readline()
Out[99]: 'xxx4567890\n'
g.tell()
Out[100] 12
g.seek(12)
g.tell()
g.write("XXX")
g.flush()
g.close()
</code></pre>
<p>I get:</p>
<pre><code>xxx4567890
XXXdefghij
XXX
</code></pre>
<p>It seems like this is a bug in readline(): it says the cursor is at 12, but the write lands at EOF unless I use seek().</p>
<p>I'm running all of this on Windows 11 with Spyder as the IDE. Someone suggested to stop caching, but I'm not sure how to do that. I do remove all variables before starting in Spyder.</p>
|
<python><python-3.x><readline>
|
2023-12-24 18:10:39
| 1
| 15,782
|
DCR
|
77,711,443
| 1,473,517
|
How can I plot a ring shape instead of the whole circular part?
|
<p>This code plots a grid and then draws two circles of different sizes on it.</p>
<pre><code>from math import sqrt

import matplotlib.pyplot as plt
import numpy as np

# Calculate the center coordinates and add blue dots
for x in np.arange(0.5, 10, 1):
    for y in np.arange(0.5, 10, 1):
        plt.scatter(x, y, color='blue', s=10)  # Adjust the size (s) as needed
# Draw a circle with center in the top left and radius to touch one of the blue dots
d = 3
circle1 = plt.Circle((0, 0), radius=sqrt(2) * (d + 0.5), color='red', alpha=0.5)
plt.gca().add_patch(circle1)
# Draw the second circle with a different radius
circle2 = plt.Circle((0, 0), radius=sqrt(2) * (d + 1 + 0.5), color='blue', alpha=0.5)
plt.gca().add_patch(circle2)
print(f"radius of smaller circle is {sqrt(2) * (d + 0.5)}")
print(f"radius of larger circle is {sqrt(2) * (d + 1 + 0.5)}")
# Draw a square at (4, 4) with a really thick edge
#square = patches.Rectangle((4, 4), 1, 1, fill=None, edgecolor='green', linewidth=5)
#plt.gca().add_patch(square)
plt.yticks(np.arange(0, 10.01, 1))
plt.xticks(np.arange(0, 10.01, 1))
plt.xlim(0,10)
plt.ylim(0,10)
# Manually set tick positions and labels
plt.gca().invert_yaxis()
# Set aspect ratio to be equal
plt.gca().set_aspect('equal', adjustable='box')
plt.grid()
</code></pre>
<p>This gives:</p>
<p><a href="https://i.sstatic.net/3INqI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3INqI.png" alt="enter image description here" /></a></p>
<p>I would like to only show the part that is in the larger circle and not the smaller one. That is, the part in blue. How can I do that?</p>
|
<python><matplotlib>
|
2023-12-24 17:58:38
| 1
| 21,513
|
Simd
|
77,711,415
| 10,922,372
|
FastAPI - Unable to test error responses with custom error handler
|
<p>In FastAPI 0.104.1, I have this custom error handler:</p>
<p><em>main.py:</em></p>
<pre><code>from http import HTTPStatus

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

def create_app() -> FastAPI:
    app = FastAPI()

    @app.exception_handler(Exception)
    async def api_exception_handler(request: Request, exc: Exception):
        if isinstance(exc, APIException):  # APIException instances are raised by me for client errors.
            return JSONResponse(
                status_code=exc.status_code,
                content={'message': exc.message}
            )
        return JSONResponse(  # Unhandled errors are returned as a generic 500.
            status_code=HTTPStatus.INTERNAL_SERVER_ERROR,
            content={'message': 'Internal server error.'}
        )

    include_routers(app)  # this is a separate function where I do all the app.include_router(...)s
    return app

api = create_app()
</code></pre>
<p>My test class, which is being run with pytest:</p>
<pre><code>from fastapi.testclient import TestClient

from app.main import api  # FastAPI instance that I showed.

class TestBrands:
    @property
    def client(self) -> TestClient:
        return TestClient(api, base_url='http://localhost:8000/brands')

    def test_create_brand_already_exists_exception(self):
        self.client.post('', json={'name': 'schecter', 'founded_in': 1976})
        response = self.client.post('', json={'name': 'schecter'})  # name is already taken and it will raise a subclass of APIException, which the custom error handler is indeed capturing.
        assert response.status_code == 422  # not possible to assert because exception is raised and I don't have any response.
</code></pre>
<p>The problem is that when I run this test, since this is an API test making an HTTP request through the test client, I want to get the response itself and assert its status code and error message. Instead, FastAPI propagates the exception even though it is being captured by the error handler. Take into account that if I make the same request with Postman, I can see that the error handler is working and returning my custom status code and message.</p>
|
<python><pytest><fastapi>
|
2023-12-24 17:47:07
| 1
| 1,010
|
Gonzalo Dambra
|
77,711,328
| 15,835,974
|
How can I resolve circular references between two classes in Python?
|
<p>I have 2 Python classes that depend on each other (a circular dependency):</p>
<ol>
<li>The class <code>FontFile</code> can contain multiple <code>FontFace</code> (1 to n).</li>
<li>The class <code>FontFace</code> contain 1 <code>FontFile</code>.</li>
</ol>
<p>Should I refactor the code so there isn't a circular reference between my 2 classes?
If so, how? I don't see how I could do it.</p>
<p>Note: The <code>FontFaceList</code> ensures that the attribute <code>font_file</code> of the class <code>FontFace</code> is always correctly set.</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from .factory_font_face import FactoryFontFace
from os import PathLike
from os.path import realpath
from time import time
from typing import Iterable, TYPE_CHECKING
if TYPE_CHECKING:
    from .font_face import FontFace

class FontFaceList(list):
    def __init__(self: FontFaceList, font_file: FontFile, font_faces: Iterable[FontFace]):
        self.font_file = font_file
        for font_face in font_faces:
            if not isinstance(font_face, FontFace):
                raise TypeError(f"The value is not of type {FontFace}")
            font_face.font_file = self.font_file
        super().__init__(font_faces)

    def append(self: FontFaceList, value: FontFace):
        if not isinstance(value, FontFace):
            raise TypeError(f"The value is not of type {FontFace}")
        value.font_file = self.font_file
        super().append(value)

    def extend(self: FontFaceList, iterable):
        for font_face in iterable:
            if not isinstance(font_face, FontFace):
                raise TypeError(f"The value is not of type {FontFace}")
            font_face.font_file = self.font_file
        super().extend(iterable)

    def insert(self: FontFaceList, i, value: FontFace):
        if not isinstance(value, FontFace):
            raise TypeError(f"The value is not of type {FontFace}")
        value.font_file = self.font_file
        super().insert(i, value)


class FontFile:
    def __init__(
        self: FontFile,
        filename: PathLike[str],
        font_faces: Iterable[FontFace],
        last_loaded_time: float = time()
    ) -> FontFile:
        self.filename = realpath(filename)
        self.font_faces = FontFaceList(self, font_faces)
        self.last_loaded_time = last_loaded_time

    @classmethod
    def from_font_path(cls: FontFile, filename: PathLike[str]) -> FontFile:
        font_faces = FactoryFontFace.from_font_path(filename)
        return cls(filename, font_faces)
</code></pre>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from .name import Name
from typing import List, Optional, TYPE_CHECKING
if TYPE_CHECKING:
    from .font_file import FontFile

class FontFace():
    def __init__(
        self: FontFace,
        font_index: int,
        family_names: List[Name],
        exact_names: List[Name],
        weight: int,
        is_italic: bool,
        is_glyph_emboldened: bool,
        font_type: FontType,
        font_file: Optional[FontFile] = None,
    ) -> FontFace:
        self.font_index = font_index
        self.family_names = family_names
        self.exact_names = exact_names
        self.weight = weight
        self.is_italic = is_italic
        self.is_glyph_emboldened = is_glyph_emboldened
        self.font_type = font_type
        self.font_file = font_file
</code></pre>
|
<python><circular-reference>
|
2023-12-24 17:13:09
| 1
| 597
|
jeremie bergeron
|
77,711,315
| 7,290,715
|
Pandas: get the difference from the previous row for a given column value
|
<p>I have a dataframe like below:</p>
<pre><code>countryname yr US_Election_Year id_score Dem_Score
Albania 1992 1990 0.688809 0.366570
Albania 1997 1996 0.024751 0.247750
Argentina 1995 1992 0.081818 0.398908
Argentina 1999 1996 -0.521796 0.247759
Argentina 2003 2000 -0.293386 -0.102298
</code></pre>
<p>What I am trying to do is take the difference between <code>id_score</code> and the next row's value of <code>Dem_Score</code>, i.e.</p>
<pre><code>id_score in ith row - Dem_Score in i+1 row for a given countryname
</code></pre>
<p>Eg. for the <code>countryname</code> Argentina and for the <code>yr</code> 1999, the desired difference would be</p>
<pre><code>0.081818 - 0.247759 = -0.165941
</code></pre>
<p>So the final dataframe should look like below:</p>
<pre><code>countryname yr US_Election_Year id_score Dem_Score Delta_Dem_Dist
Albania 1992 1990 0.688809 0.366570
Albania 1997 1996 0.024751 0.247750 0.441130
Argentina 1995 1992 0.081818 0.398908
Argentina 1999 1996 -0.521796 0.247759 -0.165941
Argentina 2003 2000 -0.293386 -0.102298 -0.624094
</code></pre>
<p>So a simple <code>df['id_score'] - df['Dem_Score'].shift(-1)</code> may not work.</p>
<p>Any clue on how to get the above approach for a given country?</p>
|
<python><pandas>
|
2023-12-24 17:07:33
| 3
| 1,259
|
pythondumb
|
77,711,295
| 12,035,877
|
Getting real time data for a set of symbols from BINANCE API
|
<p>I'm trying to retrieve the latest information about a set of Binance symbols every, say, 15 seconds, using the <code>binance-connector-python</code> library.</p>
<p>For example: <code>pairs=['BTCUSDT','ETHUSDT',...]</code></p>
<p>I can do it by using:</p>
<pre><code>info=[spot_client.klines(pair, "1s", limit=1)['data'] for pair in pairs]
</code></pre>
<p>but this will make multiple calls, which take a fair amount of time and therefore fail to get me real-time data. As far as I can tell, this method doesn't let me ask for information about more than one symbol at a time.</p>
<p>I also tried with:</p>
<pre><code>spot_client.ticker_price(symbols=pairs)
</code></pre>
<p>but the information retrieved by this method is updated every minute, which, again, fails to get real time data.</p>
<p>Is there a method in the <code>binance-connector-python</code> library that I can use to get real time information about a set of symbols?</p>
|
<python><binance><cryptocurrency>
|
2023-12-24 17:00:32
| 0
| 547
|
José Chamorro
|
77,711,250
| 20,830,264
|
Deploying Django project in an Amazon EC2 Ubuntu instance
|
<p>I have developed a Django website and hosted the application in an Amazon EC2 instance.<br/>
AMI name: ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20231207<br/>
Instance type: t2.micro</p>
<p>I run the project following these steps:</p>
<ol>
<li>SSH into the EC2 instance from my terminal.</li>
<li>I locate into the proper django project folder.</li>
<li>I run this command: python3 manage.py runserver 0.0.0.0:8000</li>
</ol>
<p>Now, following these steps, I'm able to run the project and it works fine. But when I close the terminal on my local PC that I used to SSH into the EC2 instance and run the application, the application stops working.
My aim is simply to run the Django project from my terminal once (after SSHing into the EC2 instance), then close my laptop and keep the application alive.</p>
<p>Do you know how can I do this?</p>
|
<python><django><amazon-web-services><amazon-ec2>
|
2023-12-24 16:42:15
| 3
| 315
|
Gregory
|
77,711,209
| 4,458,718
|
TextIOWrapper gzip buffer dropping records at the end even with flush
|
<p>I'm having an issue where writing to the gzip buffer drops records at the end.</p>
<pre><code>print(f'Data has {len(data)} rows')
try:
    with gzip.GzipFile(mode="w", fileobj=gz_buffer) as gz_file:
        writer = csv.DictWriter(TextIOWrapper(gz_file, "utf8"), fieldnames=data[0].keys())
        writer.writeheader()
        for i, row in enumerate(data):
            writer.writerow(row)
        print(f"{len(data)} records written")  # Log each record
        gz_file.flush()
except Exception as e:
    print(f"Error writing record {e}")
    raise e

gz_buffer.seek(0)  # Important: seek back to the start of the buffer
with gzip.GzipFile(mode="r", fileobj=gz_buffer) as gz_file:
    reader = csv.DictReader(TextIOWrapper(gz_file, "utf8"))
    record_count = sum(1 for row in reader)  # Count records
    print(f"Number of records in the buffer: {record_count}")

import tempfile

try:
    # Create a temporary file
    with tempfile.NamedTemporaryFile(mode="w+b", suffix=".gz", delete=False) as temp_file:
        with gzip.GzipFile(mode="w", fileobj=temp_file) as gz_file:
            writer = csv.DictWriter(TextIOWrapper(gz_file, "utf8"), fieldnames=data[0].keys())
            writer.writeheader()
            for i, row in enumerate(data):
                writer.writerow(row)
                #print(f"Record {i} written")
            gz_file.close()
        temp_file_path = temp_file.name
        print(f"Temporary file created at: {temp_file_path}")

    # Read from the temporary file
    with gzip.open(temp_file_path, 'rt') as gz_file:
        reader = csv.DictReader(gz_file)
        record_count = sum(1 for row in reader)
        print(f"Number of records in the temporary file: {record_count}")
</code></pre>
<p>Output:</p>
<pre><code>Data has 4598 rows
4598 records written
Number of records in the buffer: 4587
Temporary file created at: /var/folders/v8/hs0gswcx4459zbsz8zskbkz40000gr/T/tmpxf9nlmt6.gz
Number of records in the temporary file: 4587
</code></pre>
<p>If I write to S3, there are only 4587 records. What am I missing? Do I need to do something else to make sure the data currently in the buffer gets written? It seems to 'offload' every 10-15 records, and those are currently being lost. What can I do?</p>
|
<python><io><gzip>
|
2023-12-24 16:27:03
| 1
| 1,931
|
L Xandor
|
77,711,076
| 12,858,691
|
Print/log everything that pandas.testing.assert_frame_equal() finds
|
<p>Consider the following test. Its purpose is to detect - given a tolerance - any differences between two dataframes.</p>
<pre><code>import pandas as pd

def test_two_cubes_linewise() -> None:
    df1 = pd.DataFrame({"A": [1, 2, 3, 4, 5.0], "B": [4, 5, 6, 7, 8.0001]})
    df2 = pd.DataFrame({"A": [97, 98, 3, 4, 5], "B": [99, 5, 6, 7, 8.0002]})
    pd.testing.assert_frame_equal(df1, df2, rtol=1e-3, check_dtype=False)
</code></pre>
<p><a href="https://pandas.pydata.org/docs/reference/api/pandas.testing.assert_frame_equal.html" rel="nofollow noreferrer">pandas.testing.assert_frame_equal()</a> works perfectly for that. There are several differences between df1 and df2:</p>
<ul>
<li><code>5</code> and <code>5.0</code> are different dtypes.</li>
<li><code>1</code> is not equal to <code>97</code>.</li>
<li><code>2</code> is not equal to <code>98</code>.</li>
<li><code>4</code> is not equal to <code>99</code>.</li>
<li><code>8.0001</code> is not equal to <code>8.0002</code>.</li>
</ul>
<p>As the last difference is below the tolerance, the assert statement only detects the other differences, as intended. However, when I run the test, the assertion error message only shows the first difference:</p>
<p><a href="https://i.sstatic.net/AifsL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AifsL.png" alt="pytest screenshot" /></a></p>
<p>Any suggestions on how to access all of the detected differences?</p>
|
<python><pandas>
|
2023-12-24 15:29:56
| 1
| 611
|
Viktor
|
77,711,045
| 353,497
|
LLM RAG with Agent
|
<p>I was trying the application code in the <a href="https://python.langchain.com/docs/use_cases/question_answering/conversational_retrieval_agents" rel="nofollow noreferrer">link</a>.</p>
<p>I am using the following LangChain versions:</p>
<pre><code>langchain            0.0.327
langchain-community  0.0.2
langchain-core       0.1.0
</code></pre>
<p>Getting the following error:</p>
<pre class="lang-none prettyprint-override"><code>Entering new AgentExecutor chain...
Traceback (most recent call last):
File "RAGWithAgent.py", line 54, in <module>
result = agent_executor({"input": "hi, im bob"})
File "\lib\site-packages\langchain\chains\base.py", line 310, in __call__
raise e
File "\lib\site-packages\langchain\chains\base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "\lib\site-packages\langchain\agents\agent.py", line 1146, in _call
next_step_output = self._take_next_step(
File "\lib\site-packages\langchain\agents\agent.py", line 933, in _take_next_step
output = self.agent.plan(
File "\lib\site-packages\langchain\agents\openai_functions_agent\base.py", line 104, in plan
predicted_message = self.llm.predict_messages(
File "\lib\site-packages\langchain\chat_models\base.py", line 650, in predict_messages
return self(messages, stop=_stop, **kwargs)
File "\lib\site-packages\langchain\chat_models\base.py", line 600, in __call__
generation = self.generate(
File "\lib\site-packages\langchain\chat_models\base.py", line 349, in generate
raise e
File "\lib\site-packages\langchain\chat_models\base.py", line 339, in generate
self._generate_with_cache(
File "\lib\site-packages\langchain\chat_models\base.py", line 492, in _generate_with_cache
return self._generate(
File "\lib\site-packages\langchain\chat_models\openai.py", line 357, in _generate
return _generate_from_stream(stream_iter)
File "\lib\site-packages\langchain\chat_models\base.py", line 57, in _generate_from_stream
for chunk in stream:
File "\lib\site-packages\langchain\chat_models\openai.py", line 326, in _stream
for chunk in self.completion_with_retry(
File "\lib\site-packages\langchain\chat_models\openai.py", line 299, in completion_with_retry
return _completion_with_retry(**kwargs)
File "\lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "\lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "\lib\site-packages\tenacity\__init__.py", line 314, in iter
return fut.result()
File "D:\Program Files\Python38\lib\concurrent\futures\_base.py", line 432, in result
return self.__get_result()
File "D:\Program Files\Python38\lib\concurrent\futures\_base.py", line 388, in __get_result
raise self._exception
File "\lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "\lib\site-packages\langchain\chat_models\openai.py", line 297, in _completion_with_retry
return self.client.create(**kwargs)
File "\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 155, in create
response, _, api_key = requestor.request(
File "\lib\site-packages\openai\api_requestor.py", line 299, in request
resp, got_stream = self._interpret_response(result, stream)
File "\lib\site-packages\openai\api_requestor.py", line 710, in _interpret_response
self._interpret_response_line(
File "\lib\site-packages\openai\api_requestor.py", line 775, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: Unrecognized request argument supplied: functions
Process finished with exit code 1
</code></pre>
<p>I used Azure OpenAI instead of OpenAI. FAISS was not working for me, so I used the Chroma vector store.</p>
<p>Following is my code:</p>
<pre><code>from langchain.text_splitter import CharacterTextSplitter
from langchain.document_loaders import TextLoader
from langchain.agents.agent_toolkits import create_retriever_tool
from langchain.agents.agent_toolkits import create_conversational_retrieval_agent
from langchain.chat_models import AzureChatOpenAI
from langchain.vectorstores import Chroma
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
import os
AZURE_OPENAI_API_KEY = ""
os.environ["OPENAI_API_KEY"] = AZURE_OPENAI_API_KEY
loader = TextLoader(r"Toward a Knowledge Graph of Cybersecurity Countermeasures.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
chunks = text_splitter.split_documents(documents)
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-mpnet-base-v2")
current_directory = os.path.dirname("__file__")
# load it into Chroma and save it to disk
db = Chroma.from_documents(chunks, embedding_function, collection_name="groups_collection",
                           persist_directory=r"\rag_with_agent_chroma_db")
retriever = db.as_retriever(search_kwargs={"k": 5})
tool = create_retriever_tool(
    retriever,
    "search_state_of_union",
    "Searches and returns documents regarding the state-of-the-union.",
)
tools = [tool]
llm = AzureChatOpenAI(
    deployment_name='gtp35turbo',
    model_name='gpt-35-turbo',
    openai_api_key=AZURE_OPENAI_API_KEY,
    openai_api_version='2023-03-15-preview',
    openai_api_base='https://azureft.openai.azure.com/',
    openai_api_type='azure',
    streaming=True,
    verbose=True
)
agent_executor = create_conversational_retrieval_agent(llm, tools, verbose=True, remember_intermediate_steps=True,
                                                       memory_key="chat_history")
result = agent_executor({"input": "hi, im bob"})
print(result["output"])
</code></pre>
|
<python><large-language-model>
|
2023-12-24 15:20:03
| 3
| 1,990
|
Ameya
|
77,710,890
| 1,064,197
|
Parse a table of divs on a website that doesn't follow the standard th &amp; tr format
|
<p>I am trying to parse the table of predictions here: <a href="https://theanalyst.com/na/2023/08/opta-football-predictions/" rel="nofollow noreferrer">https://theanalyst.com/na/2023/08/opta-football-predictions/</a></p>
<p>However, I am struggling to parse the div-based table. Using find_all("tr") and getting the row data doesn't work as I expect, even though each table looks sensible in the parse_table function below:</p>
<pre><code>from selenium import webdriver
from pyvirtualdisplay import Display
from bs4 import BeautifulSoup
def parse_table(table):
    # struggling here <- extract the row data for each div-based table
    pass

def parse_tables(soup):
    try:
        # Find all tables on the page
        tables = soup.find_all('table')
        # Loop through each table and extract data
        data = []
        for table in tables:
            data.append(parse_table(table))
    except Exception as e:
        print(f"An error occurred during table parsing: {e}")
def parse_website():
    # Start a virtual display
    display = Display(visible=0, size=(800, 600))
    display.start()
    try:
        # Set Chrome options with the binary location
        chrome_options = webdriver.ChromeOptions()
        chrome_options.binary_location = '/usr/bin/google-chrome'
        # Initialize Chrome driver
        driver = webdriver.Chrome(options=chrome_options)
        # Open the desired URL
        url = 'https://theanalyst.com/na/2023/08/opta-football-predictions/'
        driver.get(url)
        # Wait for the page to load completely (adjust the time as needed)
        driver.implicitly_wait(10)
        # Get the page source after waiting for it to load
        page_source = driver.page_source
        # Parse the page source using BeautifulSoup
        soup = BeautifulSoup(page_source, 'html.parser')
        # Call the function to parse tables
        parse_tables(soup)
    except Exception as e:
        print(f"An error occurred: {e}")
    finally:
        # Quit the driver and stop the virtual display
        if 'driver' in locals():
            driver.quit()
        display.stop()

# Call the function to parse the website
parse_website()
</code></pre>
<p>So I tried searching with find_all("tr") and iterating through the results, but I am not correctly identifying the fields from here <a href="https://i.sstatic.net/n0srz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n0srz.png" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver><web-scraping><beautifulsoup>
|
2023-12-24 14:16:37
| 1
| 2,625
|
Michael WS
|
77,710,813
| 2,482,149
|
Psycopg3 - no pq wrapper available
|
<p>I am facing this error when I try to run psycopg3 on AWS Lambda.</p>
<pre><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'function': no pq wrapper available.
Attempts made:
- couldn't import psycopg 'c' implementation: No module named 'psycopg_c'
- couldn't import psycopg 'binary' implementation: No module named 'psycopg_binary'
- couldn't import psycopg 'python' implementation: libpq library not found
</code></pre>
<p>I install it as a dependency into my <code>.venv</code> based on this in my <code>setup.py</code>:</p>
<pre><code>install_requires=[
    'boto3',
    'pydantic',
    'python-dotenv',
    'psycopg',
    'twine',
    'wheel',
    'xlsxwriter'
],
extras_require={
    'binary': ['psycopg-binary'],
},
</code></pre>
<p>I have followed the installation process from the <a href="https://www.psycopg.org/psycopg3/docs/basic/install.html#binary-installation" rel="nofollow noreferrer">official site</a>.</p>
|
<python><postgresql><aws-lambda><psycopg2><psycopg3>
|
2023-12-24 13:40:38
| 2
| 1,226
|
clattenburg cake
|
77,710,768
| 1,682,699
|
Getting constrained layout to work with nested subplots in Matplotlib
|
<p>I'm trying to create nested subplots in Matplotlib, where each plot is produced by a function that takes an <code>axis</code> object as an argument. Some of the plot functions can create subplots of their own.</p>
<p>This simple example works:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from matplotlib import gridspec

def myplot(ax):
    gs = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=ax)
    fig = ax.figure
    ax1 = fig.add_subplot(gs[0, 0])
    ax1.plot([1, 3, 2])
    ax2 = fig.add_subplot(gs[1, 0])
    ax2.plot([1, 3, 2])

f, a = plt.subplot_mosaic("ab")
myplot(a['a'])
</code></pre>
<p>However, with complex figures, the label ticks and legends overlap.</p>
<p><a href="https://i.sstatic.net/3FEa8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3FEa8.png" alt="nested subplots" /></a></p>
<p>Constrained layout is typically the way to go, but adding it throws an error.</p>
<pre class="lang-py prettyprint-override"><code>def myplot(ax):
    gs = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=ax)
    fig = ax.figure
    ax1 = fig.add_subplot(gs[0, 0])
    ax1.plot([1, 3, 2])
    ax2 = fig.add_subplot(gs[1, 0])
    ax2.plot([1, 3, 2])

f, a = plt.subplot_mosaic("ab")
f.set_constrained_layout(True)
myplot(a['a'])
</code></pre>
<pre><code>AttributeError: 'Axes' object has no attribute 'rowspan'
</code></pre>
<p>How can I fix this code?</p>
|
<python><matplotlib>
|
2023-12-24 13:24:03
| 0
| 507
|
roberto
|
77,710,609
| 85,837
|
Creating an SQLDatabase using schema file instead of URI
|
<p>I'm currently experimenting with the SQLDatabaseChain, where the end result would be to generate the SQL statement, not run the query.</p>
<p>Is there a way to create an SQLDatabase from a schema file, hence avoiding giving the SQLDatabase an actual connection to the database?</p>
<p>Is it possible to not use the SQLDatabaseChain at all, and provide the schema to the LLM for processing in another way?</p>
<p>BR</p>
|
<python><langchain><py-langchain>
|
2023-12-24 12:23:42
| 1
| 321
|
eLAN
|
77,710,362
| 1,028,679
|
Streaming results of a large db query in FastAPI takes too long
|
<p>We have a tiny FastAPI app deployed in a Fargate cluster, and the task is set to use 16 vCPUs and 32 GB RAM.</p>
<p>It queries data from Aurora, on a single table which has millions of records.</p>
<p>This is the code:</p>
<pre class="lang-py prettyprint-override"><code>@trade_router.get('/trades')
def read_item(symbol: str, db: Session = Depends(get_db)):
with db as session:
trades = session.execute(text("SELECT * FROM trades WHERE symbol = :symbol"), {"symbol": symbol.upper()})
def iter_trades():
for trade in trades:
# send as csv
yield f"{trade.trade_date}, {trade.symbol}, {trade.price}, {trade.quantity}\n"
return StreamingResponse(iter_trades(), media_type="application/json")
</code></pre>
<p>Whilst the response is almost instant, some queries have to return 700k records, which can take up to 2 mins for all data to be returned.</p>
<p>If I run the same query directly to the database it only takes about 7 seconds to return 700k.</p>
<p>Maybe Python and FastAPI are just not the right solution. But is there anyway to increase the speed?</p>
<p><a href="https://stackoverflow.com/a/72577231/1028679">This answer</a> to a similar question says "You should not make a 700k row database request from FastAPI or any other web server."</p>
<p>Ok, so then how do I have the database do the work and return the entire result set? Do I need a materialized view, or should I partition the table for each symbol (as that is the only query param)?</p>
<p><strong>UPDATE</strong></p>
<p>It is a desktop client (C# .Net). We see no difference in performance whether the .Net code calls the endpoint, or the browser, or Postman or curl. Streaming the full 700k takes around 2mins in every case.</p>
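Not from the original post: one database-independent lever is to stop yielding one row per chunk, since ~700k tiny yields means ~700k separate response writes (and, incidentally, the body is CSV but is declared as <code>application/json</code>). A hedged sketch of batching rows into larger CSV chunks, with hypothetical names:

```python
def iter_csv(rows, batch_size=5000):
    # Yield CSV in multi-row chunks: far fewer response writes than one
    # yield per row, which dominates cost at ~700k rows.
    buf = []
    for row in rows:
        buf.append(f"{row[0]},{row[1]}")
        if len(buf) >= batch_size:
            yield "\n".join(buf) + "\n"
            buf.clear()
    if buf:
        yield "\n".join(buf) + "\n"

chunks = list(iter_csv([(i, i * 2) for i in range(10)], batch_size=4))
print(len(chunks))  # 3 chunks: 4 + 4 + 2 rows
```

With a real query, <code>rows</code> would be the result iterator, ideally fetched with a server-side cursor (e.g. SQLAlchemy's <code>stream_results</code> execution option) so the 700k rows are never all held in driver memory at once.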
|
<python><postgresql><amazon-web-services><fastapi><aws-fargate>
|
2023-12-24 10:48:40
| 0
| 5,571
|
rmcsharry
|
77,710,202
| 1,959,753
|
Does Python optimize key-value pairs with None values?
|
<p>Consider a dict with an int key and dict value, where the value features string keys and further dict values.</p>
<p>Now consider these 2 representations of this dict:</p>
<pre><code>>>> a_it_1
{1: {"it": None, "ndar": {1: 1}}, 2: {"it": {2: 8}, "ndar": None}}
>>> a_it_2
{1: {"ndar": {1: 1}}, 2: {"it": {2: 8}}}
</code></pre>
<p>As you can see, the inner dicts are sparse; not all keys appear in every inner dict.</p>
<p><code>a_it_2</code> is excluding key-value pairs that include <code>None</code> values in an attempt to save memory.</p>
<p>This is a simple example, the actual dict contains more than 10,000 outer key-value pairs.</p>
<p>I am doing some debugging and can see that both these use the same amount of memory.</p>
<p>I am using:</p>
<pre><code>round(asizeof.asizeof({...}) / (1024 * 1024), 2)
</code></pre>
<p>where the <code>...</code> represents the dict comprehension I am using to build the 2 dicts displayed above.</p>
<p>The <code>asizeof</code> method is from the <a href="https://pympler.readthedocs.io/en/latest/library/asizeof.html" rel="nofollow noreferrer"><code>asizeof</code></a> module of Pympler.</p>
<p>I am including my dict comprehensions below.</p>
<p>Retaining the inner sparse key-value pairs but setting None values:</p>
<pre><code>a_it_1 = {k: {p: vi if type(vi) == int or type(vi) == bool or len(vi) > 0 else None for p, vi in v.items() if vi is not None} for k, v in a_it.items() if k < 10000}
</code></pre>
<p>Removing the inner sparse key-value pairs completely:</p>
<pre><code>a_it_2 = {k: {p: vi for p, vi in v.items() if vi is not None and (type(vi) == int or type(vi) == bool or len(vi) > 0)} for k, v in a_it.items() if k < 10000}
</code></pre>
<p>Taking the size of both these dicts using <code>asizeof</code> returns the same value: 12.3 (megabytes) each.</p>
<p>I would have expected the lack of the key-value pairs in the first place, to use less memory.</p>
<p>Does Python internally optimize key-value pairs with <code>None</code> values?</p>
<p><strong>Updated</strong></p>
<p>My question may not have been clear. And the question I posed was perhaps the wrong one.</p>
<p>My current a_it is using a lot of memory. I would like to optimize it.</p>
<p>I created 2 new versions of the dict based on the comprehensions above.</p>
<p>One includes the key-value pairs with None values; and the other ignores the key-value pairs completely.</p>
<p>I would like to know why they still use the same amount of memory. I accept that Python does not optimise key-value pairs with None values, because <strong>it can't</strong>.</p>
<p>But is there benefit to excluding the key value pairs for non-yet set values? My experiment did not prove this and I turned to the SO community for help.</p>
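Not from the original post — two CPython details may explain the identical measurements: <code>None</code> is a shared singleton, so a <code>"key": None</code> pair costs only a table entry (key pointer, value pointer, hash), never a new object; and dict storage grows in fixed power-of-two steps, so removing one entry from a small inner dict often leaves its hash table the same size. A hedged sketch measuring just the containers:

```python
import sys

# Inner dicts shaped like the post: one keeps the None placeholder, one drops it.
with_none = {"it": None, "ndar": {1: 1}}
without = {"ndar": {1: 1}}

# sys.getsizeof measures only the container itself, not referenced objects.
# On CPython both small dicts typically report the same size, because a
# dict's table is allocated in fixed steps (1 or 2 entries can share a bucket).
print(sys.getsizeof(with_none), sys.getsizeof(without))
```

So dropping the <code>None</code> pairs does save the per-entry slot cost, but only once enough entries are removed for a dict to fall into a smaller size class will a tool like Pympler's <code>asizeof</code> show a difference.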
|
<python><dictionary-comprehension>
|
2023-12-24 09:49:45
| 1
| 736
|
Jurgen Cuschieri
|
77,710,146
| 7,695,845
|
How to request the latest core OpenGL profile with GLFW?
|
<p>I am new to OpenGL and GLFW, and I wanted to make a little graphics engine in Python using the GLFW Python bindings and the <a href="https://moderngl.readthedocs.io/en/5.8.2/" rel="nofollow noreferrer">moderngl</a> library to get an OpenGL context. From the <a href="https://www.glfw.org/docs/3.3/window_guide.html#window_hints_ctx" rel="nofollow noreferrer">documentation</a> of GLFW, if I leave the <code>glfw.CONTEXT_VERSION_MAJOR</code> and <code>glfw.CONTEXT_VERSION_MINOR</code> at their default values I should get the latest OpenGL context available on my system. In addition, I also want to enforce the use of a core profile so I don't get deprecated functionality by using <code>glfw.window_hint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE)</code>. However, doing that seems to force me to specify explicitly the context version, otherwise, it fails to create the window. Because of that, I am no longer able to request the latest version available. Here's my sample code:</p>
<pre class="lang-py prettyprint-override"><code>import platform
import sys
import glfw
import moderngl as mgl
def glfw_error_callback(error: int, description: bytes) -> None:
print(f"GLFW error [{error}]: {description.decode()}", file=sys.stderr)
glfw.set_error_callback(glfw_error_callback)
if not glfw.init():
sys.exit("Failed to initialize GLFW")
glfw.window_hint(glfw.RESIZABLE, False)
glfw.window_hint(glfw.DOUBLEBUFFER, True)
# I want to use the core profile to avoid deprecated functions,
# but doing so forces me to specify the context version,
# which means I can't get the latest version.
glfw.window_hint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE)
glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 4)
glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 6)
if platform.system() == "Darwin":
glfw.window_hint(glfw.OPENGL_FORWARD_COMPAT, True)
window = glfw.create_window(800, 600, "Demo window", None, None)
if not window:
sys.exit("Failed to create GLFW window")
glfw.make_context_current(window)
glfw.swap_interval(1)
ctx = mgl.create_context() # Create ModernGL context to see the context version
print("OpenGL info:")
print(f" Vendor: {ctx.info['GL_VENDOR']}")
print(f" Renderer: {ctx.info['GL_RENDERER']}")
print(f" Version: {ctx.info['GL_VERSION']}")
while not glfw.window_should_close(window):
glfw.swap_buffers(window)
glfw.poll_events()
glfw.destroy_window(window)
ctx.release()
glfw.terminate()
</code></pre>
<p>I have no problem specifying a minimum version (Maybe I'll want to use features of 4.0+ in the future), but specifying these window hints gives me a context with <strong>exactly</strong> the version I specified and not a higher version. To me, it seems logical to request a minimum version, but allow and even support the use of higher versions if available. If I say I need a minimum of OpenGL 3.3, then I shouldn't have a problem with a context of 4.6, but it doesn't seem to be possible with my current setup.</p>
<p>My question is, how can I request the latest OpenGL version available and force the core profile with GLFW? Since it seems harder than it needs to be, maybe it's not how I should work? Is there a reason to request a specific version like 4.6 instead of specifying a minimum version and getting the latest version compatible with my specifications? In other words, what is the "right" way to request an OpenGL context with GLFW?</p>
|
<python><opengl><glfw>
|
2023-12-24 09:21:22
| 1
| 1,420
|
Shai Avr
|
77,710,143
| 2,417,709
|
What is the best way to retrieve data each second from sqlite table (at the same time inserting the rows to the same table in other thread)
|
<p>I'm using an sqlite database to store tick data in a table, and at the same time I have a requirement to fetch the data from the table each second.</p>
<p>I have written following sample code in python to test this.</p>
<pre><code>def get_ltp():
query = """select * from test order by rowid desc LIMIT 1"""
results = c.execute(query).fetchall()
for i in results:
print(i)
import time
for i in range(20):
get_ltp()
time.sleep(1)
</code></pre>
<p>When I execute this, after fetching the data for a few seconds I get a "<strong>database is locked</strong>" error. Can anyone please help me understand what is going wrong here? Inserting and retrieving data at the same time: is sqlite the right choice for this requirement?</p>
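Not part of the question — SQLite can serve one writer plus concurrent readers, but only if the database uses WAL journal mode; in the default rollback-journal mode a writer blocks readers, and a connection with no busy timeout raises "database is locked" immediately. A hedged sketch (paths and table are hypothetical):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "ticks.db")

def open_conn(path):
    # timeout: wait up to 5 s for a lock instead of raising immediately
    conn = sqlite3.connect(path, timeout=5.0)
    # WAL lets readers run concurrently with a single writer
    conn.execute("PRAGMA journal_mode=WAL")
    return conn

writer = open_conn(db_path)
writer.execute("CREATE TABLE test (price REAL)")
writer.execute("INSERT INTO test VALUES (1.5)")
writer.commit()

reader = open_conn(db_path)
row = reader.execute("SELECT * FROM test ORDER BY rowid DESC LIMIT 1").fetchone()
print(row)  # (1.5,)
```

The writer and reader here are separate connections, as they would be in separate threads or processes; with <code>timeout=5.0</code> a briefly blocked operation waits instead of raising.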
|
<python><sqlite>
|
2023-12-24 09:20:51
| 0
| 311
|
Naga
|
77,710,089
| 17,596,179
|
Login with superuser not working in Django admin panel
|
<p>So I built this custom auth model because I want to use the email instead of the username field which is default for django.</p>
<pre><code>AUTH_USER_MODEL = 'customers.Customers'
</code></pre>
<p>This I have in my <code>settings.py</code></p>
<pre><code>from django.utils import timezone
from django.db import models
from django.contrib.auth.models import AbstractBaseUser, PermissionsMixin, UserManager
from django.contrib.auth.models import UserManager
# Create your models here.
class CustomerManager(UserManager):
def _create_user(self, email, password, **extra_fields):
if not email:
raise ValueError('Customers must have an email address')
user = self.model(
email=email,
**extra_fields
)
user.set_password(password)
user.save(using=self._db)
return user
def create_user(self, email=None, password=None, **extra_fields):
extra_fields.setdefault('is_superuser', False)
extra_fields.setdefault('is_staff', False)
return self._create_user(email, password, **extra_fields)
def create_superuser(self, name, last_name, email, phone, password, **kwargs):
kwargs.setdefault('is_superuser', True)
kwargs.setdefault('is_staff', True)
return self._create_user(email, password, **kwargs)
class Customers (AbstractBaseUser, PermissionsMixin):
name = models.CharField(max_length=20)
last_name = models.CharField(max_length=20)
email = models.EmailField(blank=False, unique=True)
phone = models.CharField(max_length=15)
password = models.CharField(max_length=20)
is_active = models.BooleanField(default=True)
is_staff = models.BooleanField(default=False)
is_superuser = models.BooleanField(default=False)
date_joined = models.DateTimeField(default=timezone.now)
last_login = models.DateTimeField(blank=True, null=True)
objects = CustomerManager()
USERNAME_FIELD = 'email'
EMAIL_FIELD = 'email'
REQUIRED_FIELDS = ['name', 'last_name', 'phone']
class Meta:
verbose_name = 'Customer'
verbose_name_plural = 'Customers'
def get_full_name(self):
return self.name + ' ' + self.last_name
def get_short_name(self):
return self.name
def check_password(self, password):
return self.password == password
</code></pre>
<p>This is my custom auth model. I have successfully created the superuser, but when trying to log in I always get the error:</p>
<pre><code>Please enter the correct email and password for a staff account. Note that both fields may be case-sensitive.
</code></pre>
<p>I'm 100% sure I'm using the correct login credentials. Does anybody know what the problem could be? Looking forward to everyone's thoughts!</p>
<h4>EDIT</h4>
<p>I went into the sqlite3 db, copied the password stored there, used it for authentication, and it worked. I noticed it's a hash (probably produced by Django's user model). But is there something I'm missing? I used the correct password to log in, but maybe by default it doesn't hash when logging in? Or do I have to create a custom login function or something?</p>
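Not from the question, but background on the hash the asker found: Django's <code>set_password</code> stores a salted hash, and the stock <code>check_password</code> hashes the candidate and compares hashes; the override above (<code>return self.password == password</code>) compares plaintext against that stored hash, so admin login can never succeed. (Redeclaring <code>password = models.CharField(max_length=20)</code> is also suspect, as the inherited field is 128 chars to fit the hash.) A toy PBKDF2 sketch of the hash-then-compare pattern, with hypothetical helper names:

```python
import binascii
import hashlib
import os

def set_password(raw: str) -> str:
    # Store salt + PBKDF2 digest, hex-encoded (toy analogue of Django's scheme).
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", raw.encode(), salt, 100_000)
    return binascii.hexlify(salt + digest).decode()

def check_password(raw: str, stored: str) -> bool:
    # Re-hash the candidate with the stored salt and compare digests;
    # a plain string comparison against `stored` would always fail.
    blob = binascii.unhexlify(stored)
    salt, digest = blob[:16], blob[16:]
    return hashlib.pbkdf2_hmac("sha256", raw.encode(), salt, 100_000) == digest

stored = set_password("s3cret")
print(check_password("s3cret", stored), check_password("wrong", stored))  # True False
```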
|
<python><django><django-models><django-authentication><django-auth-models>
|
2023-12-24 08:49:47
| 1
| 437
|
david backx
|
77,710,075
| 23,106,915
|
libcairo-2.dll or libcairo library not found
|
<p>I created a tkinter project which shows user-data analysis using pygal graph plotter. The graph is using a library to render the png file called cairosvg. I am facing the following issue when running the code:</p>
<pre><code>OSError: no library called "cairo-2" was found
no library called "cairo" was found
no library called "libcairo-2" was found
cannot load library 'libcairo.so.2': error 0x7e
cannot load library 'libcairo.2.dylib': error 0x7e
cannot load library 'libcairo-2.dll': error 0x7e
Exception in Tkinter callback
</code></pre>
<p>I also have the same project in another folder; in that folder cairocffi version 1.3.0 is installed and the project shows the graph, but when I copy the project into another environment and install the same version, it doesn't work. The only solution I found: if I copy the cairocffi folder from the environment in which the project is working, the other project works fine as well.</p>
<p>This is the file tree in my current project of cairocffi:</p>
<pre><code>D:.
│ constants.py
│ context.py
│ ffi_build.py
│ fonts.py
│ matrix.py
│ patterns.py
│ pixbuf.py
│ surfaces.py
│ test_cairo.py
│ test_numpy.py
│ test_pixbuf.py
│ test_xcb.py
│ VERSION
│ xcb.py
│ __init__.py
│
├───_generated
│ │ ffi.py
│ │ ffi_pixbuf.py
│ │
│ └───__pycache__
│ ffi.cpython-311.pyc
│ ffi_pixbuf.cpython-311.pyc
│
└───__pycache__
constants.cpython-311.pyc
context.cpython-311.pyc
ffi_build.cpython-311.pyc
fonts.cpython-311.pyc
matrix.cpython-311.pyc
patterns.cpython-311.pyc
pixbuf.cpython-311.pyc
surfaces.cpython-311.pyc
test_cairo.cpython-311.pyc
test_numpy.cpython-311.pyc
test_pixbuf.cpython-311.pyc
test_xcb.cpython-311.pyc
xcb.cpython-311.pyc
__init__.cpython-311.pyc
</code></pre>
<p>This is the file tree in my working project of cairocffi:</p>
<pre><code>D:.
│ cairo.dll
│ constants.py
│ context.py
│ ffi_build.py
│ fonts.py
│ matrix.py
│ patterns.py
│ pixbuf.py
│ surfaces.py
│ test_cairo.py
│ test_numpy.py
│ test_pixbuf.py
│ test_xcb.py
│ VERSION
│ xcb.py
│ __init__.py
│
├───_generated
│ │ ffi.py
│ │ ffi_pixbuf.py
│ │
│ └───__pycache__
│ ffi.cpython-311.pyc
│ ffi_pixbuf.cpython-311.pyc
│
└───__pycache__
constants.cpython-311.pyc
context.cpython-311.pyc
ffi_build.cpython-311.pyc
fonts.cpython-311.pyc
matrix.cpython-311.pyc
patterns.cpython-311.pyc
pixbuf.cpython-311.pyc
surfaces.cpython-311.pyc
test_cairo.cpython-311.pyc
test_numpy.cpython-311.pyc
test_pixbuf.cpython-311.pyc
test_xcb.cpython-311.pyc
xcb.cpython-311.pyc
__init__.cpython-311.pyc
</code></pre>
|
<python><python-3.x><cairo><pycairo><pygal>
|
2023-12-24 08:44:24
| 0
| 546
|
AshhadDevLab
|
77,709,826
| 2,568,647
|
Keras model suddenly started outputting Tensors. How to revert that?
|
<p>So I was learning DQNs, trying to solve the Cart Pole env:</p>
<pre><code>import gymnasium as gym
import numpy as np
from rl.agents import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import BoltzmannQPolicy
from tensorflow.python.keras.layers import InputLayer, Dense
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.optimizer_v2.adam import Adam
if __name__ == '__main__':
env = gym.make("CartPole-v1")
# tensorflow.compat.v1.experimental.output_all_intermediates(True)
model = Sequential()
model.add(InputLayer(input_shape=(1, 4)))
model.add(Dense(24, activation="relu"))
# model.add(GRU(24))
model.add(Dense(24, activation="relu"))
model.add(Dense(env.action_space.n, activation="linear"))
model.build()
print(model.summary())
agent = DQNAgent(
model=model,
memory=SequentialMemory(limit=50000, window_length=1),
policy=BoltzmannQPolicy(),
nb_actions=env.action_space.n,
nb_steps_warmup=100,
target_model_update=0.01
)
agent.compile(Adam(learning_rate=0.001), metrics=["mae"])
agent.fit(env, nb_steps=100000, visualize=False, verbose=1)
results = agent.test(env, nb_episodes=10, visualize=True)
print(np.mean(results.history["episode_reward"]))
env.close()
</code></pre>
<p>Everything was fine and I was able to solve this env, but at some point I wanted to try adding a GRU layer to see how this would affect learning. To make it work I used <code>tensorflow.compat.v1.experimental.output_all_intermediates(True)</code>. And now, even without the GRU layer, I get the following error:</p>
<pre><code>Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 1, 24) 120
_________________________________________________________________
dense_1 (Dense) (None, 1, 24) 600
_________________________________________________________________
dense_2 (Dense) (None, 1, 2) 50
=================================================================
Total params: 770
Trainable params: 770
Non-trainable params: 0
_________________________________________________________________
None
Traceback (most recent call last):
File "cart_pole.py", line 25, in <module>
agent = DQNAgent(
File "lib\site-packages\rl\agents\dqn.py", line 107, in __init__
raise ValueError(f'Model output "{model.output}" has invalid shape. DQN expects a model that has one dimension for each action, in this case {self.nb_actions}.')
ValueError: Model output "Tensor("dense_2/BiasAdd:0", shape=(None, 1, 2), dtype=float32)" has invalid shape. DQN expects a model that has one dimension for each action, in this case 2.
</code></pre>
<p>What I'm assuming is happening is that adding <code>tensorflow.compat.v1.experimental.output_all_intermediates(True)</code> made my model output a Tensor instead of what it was outputting before. Passing <code>False</code> or <code>None</code> to <code>output_all_intermediates</code> has no effect at all. How do I revert my model to work with the DQN agent again?</p>
|
<python><tensorflow><keras><reinforcement-learning><dqn>
|
2023-12-24 06:10:49
| 1
| 1,117
|
sleexed
|
77,709,752
| 12,104,604
|
When using string shared memory in Python's multiprocessing alongside Kivy, it causes an error
|
<p>I have tried using <strong>shared memory for strings with the following simple Python code, and it works well</strong>.</p>
<pre><code>from multiprocessing import Value, Array, Process, Lock
def process1(array, lock):
with lock:
buffer_data = str("z1111").encode("utf-8")
array[0:len(buffer_data)] = buffer_data[:]
if __name__ == '__main__':
array = Array('c', 35)
lock = Lock()
process_test1 = Process(target=process1, args=[array, lock], daemon=True)
process_test1.start()
process_test1.join()
print(array[:].decode("utf-8"))
print("process ended")
</code></pre>
<p>Next, I tried implementing <strong>shared memory for numbers using a GUI library called Kivy, and that worked well</strong> too.</p>
<pre><code>from multiprocessing import Process, Value, set_start_method, freeze_support, Lock
kvLoadString=("""
<TextWidget>:
BoxLayout:
orientation: 'vertical'
size: root.size
Button:
id: button1
text: "start"
font_size: 48
on_press: root.buttonClicked()
Label:
id: lab
text: 'result'
""")
def start_process(count,lock):
process_test = Process(target=p_test, args=[count,lock], daemon=True)
process_test.start()
return process_test
def p_test(count,lock):
i = 0
while True:
print(i)
i = i + 1
with lock:
count.value = i
if __name__ == '__main__':
freeze_support()
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.lang import Builder
from kivy.uix.checkbox import CheckBox
from kivy.uix.spinner import Spinner
from kivy.properties import StringProperty,NumericProperty,ObjectProperty, BooleanProperty
from kivy.core.window import Window
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.popup import Popup
from kivy.uix.treeview import TreeView, TreeViewLabel, TreeViewNode
from kivy.uix.label import Label
from kivy.uix.textinput import TextInput
from kivy.uix.progressbar import ProgressBar
from kivy.uix.button import Button
set_start_method('spawn')
class TestApp(App):
def __init__(self, **kwargs):
super(TestApp, self).__init__(**kwargs)
self.title = 'testApp'
def build(self):
return TextWidget()
class TextWidget(Widget):
def __init__(self, **kwargs):
super(TextWidget, self).__init__(**kwargs)
self.process_test = None
self.proc = None
# shared memory
self.count = Value('i', 0)
self.lock = Lock()
def buttonClicked(self):
if self.ids.button1.text == "start":
self.proc = start_process(self.count,self.lock)
self.ids.button1.text = "stop"
else:
if self.proc:
self.proc.kill()
self.ids.button1.text = "start"
self.ids.lab.text = str(self.count.value)
Builder.load_string(kvLoadString)
TestApp().run()
</code></pre>
<p>However, I did <strong>not succeed in using Kivy with shared memory for strings</strong> as follows.</p>
<pre><code>from multiprocessing import Array,Process, Value, set_start_method, freeze_support, Lock
import math
kvLoadString=("""
<TextWidget>:
BoxLayout:
orientation: 'vertical'
size: root.size
Button:
id: button1
text: "start"
font_size: 48
on_press: root.buttonClicked()
Label:
id: lab
text: 'result'
""")
def start_process(array,lock):
process_test = Process(target=p_test, args=[array,lock], daemon=True)
process_test.start()
return process_test
def p_test(array,lock):
with lock:
buffer_data = str("z1111").encode("utf-8")
array[0:len(buffer_data)] = buffer_data[:]
print(array.decode("utf-8"))
if __name__ == '__main__':
freeze_support()
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.lang import Builder
from kivy.uix.checkbox import CheckBox
from kivy.uix.spinner import Spinner
from kivy.properties import StringProperty,NumericProperty,ObjectProperty, BooleanProperty
from kivy.core.window import Window
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.popup import Popup
from kivy.uix.treeview import TreeView, TreeViewLabel, TreeViewNode
from kivy.uix.label import Label
from kivy.uix.textinput import TextInput
from kivy.uix.progressbar import ProgressBar
from kivy.uix.button import Button
set_start_method('spawn')
class TestApp(App):
def __init__(self, **kwargs):
super(TestApp, self).__init__(**kwargs)
self.title = 'testApp'
def build(self):
return TextWidget()
class TextWidget(Widget):
def __init__(self, **kwargs):
super(TextWidget, self).__init__(**kwargs)
self.process_test = None
self.proc = None
# shared memory
self.array = Array('c', 35)
self.lock = Lock()
def buttonClicked(self):
if self.ids.button1.text == "start":
self.proc = start_process(self.array,self.lock)
self.ids.button1.text = "stop"
else:
if self.proc:
self.proc.kill()
self.ids.button1.text = "start"
self.ids.lab.text = self.array.decode("utf-8")
Builder.load_string(kvLoadString)
TestApp().run()
</code></pre>
<p>The error message is as follows. How can I avoid such an error?</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\taichi\Documents\Winpython64-3.9.10.0\WPy64-39100\python-3.9.10.amd64\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\taichi\Documents\Winpython64-3.9.10.0\WPy64-39100\python-3.9.10.amd64\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\taichi\Desktop\test.py", line 29, in p_test
print(array.decode("utf-8"))
AttributeError: 'SynchronizedString' object has no attribute 'decode'
</code></pre>
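Not part of the question — the traceback already narrows it down: <code>Array('c', n)</code> returns a <code>SynchronizedString</code> wrapper, which has no <code>.decode()</code> of its own; the bytes must be read out first, exactly as the working standalone example does with a slice. A minimal sketch of the two access patterns:

```python
from multiprocessing import Array

arr = Array('c', 35)  # SynchronizedString wrapping a ctypes char array
data = "z1111".encode("utf-8")
arr[0:len(data)] = data

full = arr[:].decode("utf-8")         # slicing yields bytes (whole buffer, NUL-padded)
upto_nul = arr.value.decode("utf-8")  # .value yields bytes up to the first NUL
print(upto_nul, len(full))
```

In the Kivy code, replacing <code>array.decode("utf-8")</code> with <code>array[:].decode("utf-8")</code> (or <code>array.value.decode("utf-8")</code>) should therefore avoid this particular AttributeError.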
|
<python><multiprocessing><kivy><shared-memory>
|
2023-12-24 05:26:55
| 1
| 683
|
taichi
|
77,709,546
| 12,461,032
|
Pyspark RAM leakage
|
<p>My Spark code recently causes RAM leakage. For instance, before running any script, when I run <code>top</code> I can see 251 GB total memory and 230 GB free + used memory.</p>
<p>When I run my spark job through <code>spark-submit</code>, regardless of whether the job is completed or not (ending with exception) the free + used memory is much lower than the start. This is one sample of my code:</p>
<pre class="lang-py prettyprint-override"><code> from pyspark.sql import SparkSession
def read_df(spark, jdbc_url, table_name, jdbc_properties ):
df = spark.read.jdbc(url=jdbc_url, table=table_name, properties=jdbc_properties)
return df
def write_df(result, table_name, jdbc_properties):
result = result.repartition(50)
result.write.format('jdbc').options(
url=jdbc_properties['jdbc_url'],
driver="org.postgresql.Driver",
user=jdbc_properties["user"],
password=jdbc_properties["password"],
dbtable=table_name,
mode="overwrite"
).save()
if __name__ == '__main__':
spark = SparkSession \
.builder \
.appName("Python Spark SQL basic example") \
.config("spark.driver.extraClassPath", "postgresql-42.5.2.jar").config("spark.executor.extraClassPath","postgresql-42.5.2.jar") \
.config("spark.local.dir", "/shared/hm31") \
.config("spark.master", "local[*]") \
.getOrCreate()
spark.sparkContext.setLogLevel("WARN")
parquet_path = '/shared/hossein_hm31/embeddings_parquets'
try:
unique_nodes = read_df(spark, jdbc_url, 'hm31.unique_nodes_cert', jdbc_properties)
df = spark.read.parquet(parquet_path)
unique_nodes.createOrReplaceTempView("unique_nodes")
df.createOrReplaceTempView("all_embeddings")
sql_query = """
select u.node_id, a.embedding from unique_nodes u inner join all_embeddings a on u.pmid = a.pmid
"""
result = spark.sql(sql_query)
print("num", result.count())
result.repartition(10).write.parquet('/shared/parquets_embeddings/')
write_df(result, 'hm31.uncleaned_embeddings_cert', jdbc_properties)
spark.catalog.clearCache()
unique_nodes.unpersist()
df.unpersist()
result.unpersist()
spark.stop()
exit(0)
except:
print('Error')
spark.catalog.clearCache()
unique_nodes.unpersist()
df.unpersist()
spark.stop()
exit(0)
print('Error')
spark.catalog.clearCache()
unique_nodes.unpersist()
df.unpersist()
spark.stop()
exit(0)
</code></pre>
<p>In the code above I tried to remove the cached data frames. This RAM leakage requires a server restart to recover, which is inconvenient.</p>
<p>This is the command I run:</p>
<pre class="lang-bash prettyprint-override"><code>spark-submit --master local[50] --driver-class-path ./postgresql-42.5.2.jar --jars ./postgresql-42.5.2.jar --driver-memory 200g --conf "spark.local.dir=./logs" calculate_similarities.py
</code></pre>
<p>And this is the top output, that you can see free + used memory is much less than the total, and used to be around 230 before I ran my spark job. The jobs are sorted by memory usage, and you can see there is no memory-intensive job running after the spark ended with an exception.</p>
<p>I shall add that the machine does not have Pyspark itself. It has Java 11, and I just run Pyspark by importing its package.</p>
<p><a href="https://i.sstatic.net/01TL2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/01TL2.png" alt="enter image description here" /></a></p>
<p>Thanks</p>
<p>P.S.: The <code>unique_nodes</code> table is around 0.5 GB on Postgres. The <code>df = spark.read.parquet(parquet_path)</code> line reads 38 parquet files, each around 3 GB. After joining, the <code>result</code> is around 8 GB.</p>
|
<python><apache-spark><pyspark>
|
2023-12-24 02:43:10
| 1
| 472
|
m0ss
|
77,709,426
| 899,573
|
Media player error Exception occurred during processing of request
|
<p>I have a simple HTML media player like the following:</p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Sample Media Player</title>
<style>
video,
audio {
width: 100%;
max-width: 600px;
margin-bottom: 20px;
}
</style>
</head>
<body>
<!-- Video Player -->
<video preload="none" controls>
<source src="sample-5s.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</body>
</html>
</code></pre>
<p>I'm serving the HTML file with <code>python3 -m http.server</code>, and the video source is downloaded from <a href="https://download.samplelib.com/mp4/sample-5s.mp4" rel="nofollow noreferrer">https://download.samplelib.com/mp4/sample-5s.mp4</a>.
This works seamlessly on various browsers including Mac Safari, Mobile Safari, and Chrome. However, when attempting to load the page on a Samsung Smart TV browser(TizenBrowser - 4.4.11030), I encounter the following exception:</p>
<pre><code>::ffff:192.168.2.107 - - [24/Dec/2023 01:34:59] "GET / HTTP/1.1" 200 -
::ffff:192.168.2.107 - - [24/Dec/2023 01:35:12] "GET /sample-5s.mp4 HTTP/1.1" 200 -
----------------------------------------
Exception occurred during processing of request from ('::ffff:192.168.2.107', 43371, 0, 0)
Traceback (most recent call last):
File "/opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/socketserver.py", line 691, in process_request_thread
self.finish_request(request, client_address)
File "/opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/server.py", line 1310, in finish_request
self.RequestHandlerClass(request, client_address, self,
File "/opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/server.py", line 671, in __init__
super().__init__(*args, **kwargs)
File "/opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/socketserver.py", line 755, in __init__
self.handle()
File "/opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/server.py", line 436, in handle
self.handle_one_request()
File "/opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/server.py", line 424, in handle_one_request
method()
File "/opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/server.py", line 678, in do_GET
self.copyfile(f, self.wfile)
File "/opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/server.py", line 877, in copyfile
shutil.copyfileobj(source, outputfile)
File "/opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/shutil.py", line 200, in copyfileobj
fdst_write(buf)
File "/opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/socketserver.py", line 834, in write
self._sock.sendall(b)
BrokenPipeError: [Errno 32] Broken pipe
----------------------------------------
::ffff:192.168.2.107 - - [24/Dec/2023 01:35:12] "GET /sample-5s.mp4 HTTP/1.1" 200 -
</code></pre>
<p>Has anyone encountered a similar issue with Smart TV browsers, and are there any specific considerations or adjustments that need to be made for compatibility? Any insights or suggestions would be greatly appreciated.</p>
|
<python><html><html5-video><media-player><video-codecs>
|
2023-12-24 00:51:09
| 0
| 12,959
|
Johnykutty
|
77,709,402
| 1,126,944
|
Does the "object" here refer to the object class
|
<p>I saw in the Python documentation for defining an emulating container type <a href="https://docs.python.org/3/reference/datamodel.html#emulating-container-types" rel="nofollow noreferrer">https://docs.python.org/3/reference/datamodel.html#emulating-container-types</a>, it states:</p>
<blockquote>
<p><strong>object.</strong>__len__(self)<br />
Called to implement the built-in function len()...<br />
...</p>
<p><strong>object.</strong>__getitem__(self, key)<br />
Called to implement evaluation of self[key]. For sequence types ...<br />
...<br />
......</p>
</blockquote>
<p>I wonder whether this is instructing me to add the <code>__len__</code> function as an attribute of the <code>object</code> class. But if you make an emulating class, e.g. EmuList:</p>
<pre><code>class EmuList:
def __len__(self):
pass
...
</code></pre>
<p>the <code>__len__</code> function is attached to <code>EmuList</code>, not to <code>object</code>, although <code>EmuList</code> inherits from the <code>object</code> class. What does the <code>object</code> prefix in <code>object.__len__</code> really mean in this context?</p>
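<p>To make my confusion concrete, the method clearly lives on the subclass, not on <code>object</code>, and the built-in still works:</p>

```python
class EmuList:
    def __len__(self):
        return 3

# len() finds __len__ on the subclass; object itself defines no __len__
print(len(EmuList()))                 # 3
print('__len__' in EmuList.__dict__)  # True
print(hasattr(object, '__len__'))     # False
```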
|
<python><python-3.x><language-lawyer>
|
2023-12-24 00:34:03
| 2
| 1,330
|
IcyBrk
|
77,709,363
| 9,315,690
|
Why does having a `packages` property in pyproject.toml affect mypy's behaviour?
|
<p>I've been having the issue that Mypy suddenly no longer type checks the contents of my imports, even though they are in the same source tree. After bisecting the issue, I figured out that the cause was the addition of the <code>packages</code> line to my pyproject.toml file by a co-worker. He claims it must be there for use with another program which uses the aforementioned source as a git submodule, which would make sense given that I presume he's adding it as <a href="https://python-poetry.org/docs/dependency-specification/#path-dependencies" rel="nofollow noreferrer">a <code>path</code> dependency with <code>develop</code> set</a> (we use Poetry for dependency management). With that established, simply removing the <code>packages</code> line is clearly not an option. As such, I made a minimal reproducer to study the problem better:</p>
<p>pyproject.toml:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "mypy-issue-reproducer"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]
readme = "readme.md"
packages = [
{ include = "**/*.py" },
]
[tool.poetry.dependencies]
python = "^3.10"
[tool.poetry.group.dev.dependencies]
mypy = "^1.8.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>reproducer-main.py:</p>
<pre class="lang-py prettyprint-override"><code>from some_module.other_module.reproducer_import import main
main()
</code></pre>
<p>some_module/other_module/reproducer_import.py:</p>
<pre class="lang-py prettyprint-override"><code>def main() -> None:
# blatant violation
variable: str = 1
</code></pre>
<p>readme.md also needs to be created and can contain whatever.</p>
<p>Creating these files and installing the project with <code>$ poetry install --with dev</code>, then running <code>$ poetry run mypy reproducer-main.py</code> reproduces the issue (note how mypy does not complain about the blatant type violation in reproducer_import.py). Removing the <code>packages</code> property, then removing the virtualenv and repeating the install and Mypy steps now has Mypy actually report the aforementioned type violation. But why? And what can I do to make Mypy type check this import while keeping the package installable as <a href="https://python-poetry.org/docs/dependency-specification/#path-dependencies" rel="nofollow noreferrer">a <code>path</code> dependency with <code>develop</code> set</a>?</p>
|
<python><mypy><python-poetry><pyproject.toml>
|
2023-12-24 00:03:25
| 0
| 3,887
|
Newbyte
|
77,709,350
| 3,761,310
|
How can I convert this Python code into a SQLAlchemy query?
|
<p>I'm building a data dashboard for a project where players field a certain position within a game. Players can also field multiple positions in a single game. The dashboard needs the following data:</p>
<ul>
<li>Each player in the Player table</li>
<li>The number of games that player has appeared in, regardless of whether they appeared in multiple positions in that game. (i.e. if Fred appears at 3 positions each in 2 games, the query should return 2 not 6)</li>
<li>The winning percentage of games in which that player appeared. (i.e. if John appears in 20 games and 11 of them have <code>outcome="Win"</code>, the query should return 0.55)</li>
</ul>
<p>I wrote some Python code that accomplishes this. It's very slow and requires a number of database queries in a loop. Is it possible to achieve the same output from a single query?</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import func, select
def get_player_winning_percentages():
"""
Returns a list of all players who have appeared in a game, along with the
number of games they've appeared in (regardless of how many times they
appeared in each game) and the winning percentage of games they've appeared in.
Output looks like:
[("John", 20, 0.55), ("Alejandro", 15, 0.75), ("Fred", 13, 0.5), ...]
"""
# Get all players who have appeared in a game
players_query = (
Player.query.join(GameToPlayerPosition)
.group_by(Player.id)
.order_by(func.count(Player.id).desc())
)
# For each player with an appearance, find the number of games they've appeared in,
# and the number of winning games they've appeared in.
data = []
for player in players_query.all():
games = Game.query.filter(
Game.players.any(Player.name == player.name)
).count()
wins = Game.query.filter(
Game._outcome == "Win",
Game.players.any(Player.name == player.name)
).count()
data.append({
"name": player.name, "count": games, "wins": wins
})
# Find players who haven't appeared in a game and add them to the data
no_appearances = Player.query.filter(
~Player.id.in_(select(players_query.subquery().c.id))
).all()
data += [{"name": player.name, "count": 0, "wins": 0} for player in no_appearances]
return data
</code></pre>
<p>My database tables look like this:</p>
<pre class="lang-py prettyprint-override"><code># I'm using sqlalchemy and Flask_SQLAlchemy
from my_flask_app import db
from sqlalchemy.ext.associationproxy import association_proxy
class Game(db.Model):
"""
Represents a game. Each game can have several players at different positions.
A single player can appear in a game multiple times at different positions
(i.e. have multiple GameToPlayerPosition associations).
"""
id = db.Column(db.Integer, primary_key=True)
# "Win" or "Loss"
outcome = db.Column(db.String(10), nullable=False)
... # Other fields
positions = db.relationship(
"GameToPlayerPosition", back_populates="game", cascade="all, delete"
)
players = association_proxy(
"positions",
"player",
creator=lambda player, position: GameToPlayerPosition(player=player, position=position),
)
class GameToPlayerPosition(db.Model):
"""
Association table connecting Game to Player. Adds some additional information,
namely the `position` column shown here.
"""
game_id = db.Column(db.ForeignKey("game.id"), primary_key=True)
player_id = db.Column(db.ForeignKey(f"player.id"), primary_key=True)
position = db.Column(db.String(10), nullable=False)
... # Other fields
game = db.relationship("Game", back_populates="positions")
player = db.relationship("Player", back_populates="game_positions")
class Player(db.Model):
"""Players in a game."""
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(60), nullable=False, unique=True)
... # Other fields
game_positions = db.relationship(
"GameToPlayerPosition", back_populates="player", cascade="all, delete"
)
games = association_proxy("game_positions", "game")
</code></pre>
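<p>For reference, here is the single SQL statement I believe is equivalent, sketched with plain <code>sqlite3</code> and toy data so it is easy to check (the table and column names mirror my models but are otherwise illustrative); translating it back into a SQLAlchemy query is the part I'm stuck on:</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE player (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE game (id INTEGER PRIMARY KEY, outcome TEXT);
CREATE TABLE game_to_player_position (
    game_id INTEGER, player_id INTEGER, position TEXT);
INSERT INTO player VALUES (1, 'John'), (2, 'Fred');
INSERT INTO game VALUES (1, 'Win'), (2, 'Loss');
-- Fred fields two positions in game 1, so game 1 must only count once for him
INSERT INTO game_to_player_position VALUES
    (1, 1, 'GK'), (1, 2, 'GK'), (1, 2, 'DF'), (2, 2, 'GK');
""")

rows = con.execute("""
SELECT p.name,
       COUNT(DISTINCT gpp.game_id) AS games,
       CAST(COUNT(DISTINCT CASE WHEN g.outcome = 'Win' THEN gpp.game_id END) AS REAL)
           / MAX(COUNT(DISTINCT gpp.game_id), 1) AS win_pct
FROM player p
LEFT JOIN game_to_player_position gpp ON gpp.player_id = p.id
LEFT JOIN game g ON g.id = gpp.game_id
GROUP BY p.id
ORDER BY games DESC
""").fetchall()
print(rows)  # [('Fred', 2, 0.5), ('John', 1, 1.0)]
```

<p>The <code>COUNT(DISTINCT ...)</code> is what collapses multiple positions in the same game into one appearance, and the <code>LEFT JOIN</code> keeps players with zero appearances in the result.</p>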
|
<python><sqlalchemy><subquery><querying>
|
2023-12-23 23:53:38
| 0
| 718
|
DukeSilver
|
77,709,215
| 2,671,049
|
ezdxf -- transform undefined EPSG 3395 latitude longitude decimal degrees coordinates to WCS coordinates to save revised DXF file
|
<p><strong>UPDATE 26 December 2023</strong></p>
<p>The <code>DXF</code> file was created via the Export Project to DXF function in QGIS 3.34. Although the exported <code>DXF</code> was set to EPSG 3395, that designation is not set in the <code>dxf.coordinate_type</code>. The answer by mozman and the previously displayed code have been combined as the following:</p>
<pre><code>import ezdxf
from ezdxf import transform
from ezdxf.math import Matrix44
CRS_TO_WCS = True
doc = ezdxf.readfile("tester.dxf")
msp = doc.modelspace()
geo_data = msp.get_geodata()
def wcs_to_crs(entities, m):
transform.inplace(entities, m)
def crs_to_wcs(entities, m):
m = m.copy()
m.inverse()
transform.inplace(entities, m)
if geo_data:
# Get the transformation matrix and epsg code:
m, epsg = geo_data.get_crs_transformation()
else:
# Identity matrix for DXF files without a geo location reference:
m = Matrix44()
epsg = 3395
if geo_data:
m, epsg = geo_data.get_crs_transformation()
    if CRS_TO_WCS:
        crs_to_wcs(msp, m)
    else:
        wcs_to_crs(msp, m)
else:
print("No geo reference data available.")
</code></pre>
<p>The result is "No geo reference data available."</p>
<p>I will ask a separate question (<a href="https://stackoverflow.com/questions/77716058/ezdxf-dxf-set-crs-for-dxf-file-internally-or-via-python">ezdxf - DXF - set CRS for DXF file internally or via python</a>) to set the CRS value.</p>
<hr />
<p>All entities in the tester <code>DXF</code> are either <code>TEXT</code> or <code>LWPOLYLINE</code>.</p>
<p>My goal is to transform all <code>LWPOLYLINE</code> and <code>TEXT</code> entities in all layers in the <code>DXF</code> recursively from EPSG 3395 (latitude longitude in decimal degrees) to <code>WCS</code> coordinates, and then save that revised <code>DXF</code> file as a new file.</p>
<p>In order to do so, should I use <code>crs_to_wcs</code> or <code>globe_to_map</code>? Is there a good example that shows the steps?</p>
<p>Below is the python code modified from the <code>ezdxf</code> documentation:</p>
<pre><code>import ezdxf
from ezdxf.math import Matrix44
from ezdxf.addons import geo
doc = ezdxf.readfile("tester.dxf")
msp = doc.modelspace()
# Get the geo location information from the DXF file:
geo_data = msp.get_geodata()
if geo_data:
# Get transformation matrix and epsg code:
m, epsg = geo_data.get_crs_transformation()
else:
# Identity matrix for DXF files without geo reference data:
m = Matrix44()
epsg = 3395
</code></pre>
|
<python><latitude-longitude><dxf><ezdxf>
|
2023-12-23 22:29:44
| 1
| 944
|
iembry
|
77,709,156
| 5,938,276
|
Check if float result is close to one of the expected values
|
<p>I have a function in Python that makes a number of calculations and returns a float.</p>
<p>I need to validate the float result is close (i.e. +/- 1) to one of 4 possible integers.</p>
<p>For example the expected integers are 20, 50, 80, 100.</p>
<p>If I have <code>result = 19.808954</code>, it would pass the validation test as it is within +/- 1 of 20, which is one of the expected results.</p>
<p>Is there a concise way to do this test? My Python math skills are basic.</p>
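<p>For clarity, this is the shape of the check I'm after (sketched with the tolerance hard-coded to 1; the names are mine, and there may well be something more idiomatic, e.g. based on <code>math.isclose</code>):</p>

```python
TARGETS = (20, 50, 80, 100)

def is_valid(result, targets=TARGETS, tol=1.0):
    # True if result is within +/- tol of any of the expected integers
    return min(abs(result - t) for t in targets) <= tol

print(is_valid(19.808954))  # True: within 1 of 20
print(is_valid(47.3))       # False: nearest target is 50, off by 2.7
```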
|
<python><math>
|
2023-12-23 21:52:29
| 1
| 2,456
|
Al Grant
|
77,708,882
| 7,391,480
|
Change first and last elements of strings or lists inside a dataframe
|
<p>I have a dataframe like this:</p>
<pre><code>data = {
'name': ['101 blueberry 2023', '102 big cat 2023', '103 small white dog 2023'],
'number': [116, 118, 119]}
df = pd.DataFrame(data)
df
</code></pre>
<p>output:</p>
<pre><code> name number
0 101 blueberry 2023 116
1 102 big cat 2023 118
2 103 small white dog 2023 119
</code></pre>
<p>I would like to change the first and last numbers in the <code>name</code> column. For example, the first number in <code>name</code> to the number in the <code>number</code> column, and the last number in <code>name</code> to '2024'. So finally it would look like:</p>
<pre><code> name number
0 116 blueberry 2024 116
1 118 big cat 2024 118
2 119 small white dog 2024 119
</code></pre>
<p>I have tried splitting <code>name</code> into a list and changing the first and last elements of the list.</p>
<pre><code>df['name_pieces'] = df['name'].str.split(' ')
df
</code></pre>
<p>output:</p>
<pre><code> name number name_pieces
0 101 blueberry 2023 116 [101, blueberry, 2023]
1 102 big cat 2023 118 [102, big, cat, 2023]
2 103 small white dog 2023 119 [103, small, white, dog, 2023]
</code></pre>
<p>I can access the first item of the lists using <code>.str</code>, but I cannot change the item.</p>
<pre><code>df['name_pieces'].str[0]
</code></pre>
<p>output:</p>
<pre><code>0 101
1 102
2 103
</code></pre>
<p>but trying to assign the first value of the list gives an error</p>
<pre><code>df['name_pieces'].str[0] = df['number']
</code></pre>
<p>output:</p>
<pre class="lang-none prettyprint-override"><code>TypeError: 'StringMethods' object does not support item assignment
</code></pre>
<p>How can I replace the first and last value of <code>name</code> inside this dataframe?</p>
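<p>For reference, this is roughly how I imagined the split-based idea working in vectorized form, though I don't know if it is idiomatic (the middle words are rejoined with <code>.str.join</code>):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'name': ['101 blueberry 2023', '102 big cat 2023', '103 small white dog 2023'],
    'number': [116, 118, 119],
})

parts = df['name'].str.split(' ')
# swap the first token for `number` and the last token for '2024'
df['name'] = df['number'].astype(str) + ' ' + parts.str[1:-1].str.join(' ') + ' 2024'
print(df['name'].tolist())
# ['116 blueberry 2024', '118 big cat 2024', '119 small white dog 2024']
```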
|
<python><pandas><dataframe><list>
|
2023-12-23 20:02:42
| 5
| 1,364
|
edge-case
|
77,708,843
| 6,357,916
|
Reading opencv yaml in python gives error "Input file is invalid in function 'open'"
|
<p>I have the following test.yaml:</p>
<pre><code>Camera.ColourOrder: "RGB"
Camera.ColourDepth: "UCHAR_8"
Camera.Spectrum: "Visible_NearIR"
IMU.T_b_c1: !!opencv-matrix
rows: 4
cols: 4
dt: f
data: [0.999903, -0.0138036, -0.00208099, -0.0202141,
0.0137985, 0.999902, -0.00243498, 0.00505961,
0.0021144, 0.00240603, 0.999995, 0.0114047,
0.0, 0.0, 0.0, 1.0]
</code></pre>
<p>I am trying to read it using Python's OpenCV <code>cv2</code> module (because it contains OpenCV-related objects like <code>opencv-matrix</code> embedded in the YAML, which would give an error if read with Python's built-in yaml module). But it gives me the error shown below:</p>
<pre><code>>>> import os
>>> 'test.yaml' in os.listdir()
True
>>> import cv2
>>> fs = cv2.FileStorage("test.yaml", cv2.FILE_STORAGE_READ)
cv2.error: OpenCV(4.8.1) /io/opencv/modules/core/src/persistence.cpp:699: error: (-5:Bad argument) Input file is invalid in function 'open'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
SystemError: <class 'cv2.FileStorage'> returned a result with an error set
</code></pre>
<p>I tried removing <code>IMU.T_b_c1</code> and keeping just the <code>Camera.xyz</code> values; still the same error. What am I missing here?</p>
<p>PS: Is there any other way/module to read such a YAML file with embedded OpenCV objects? I am OK with not using cv2 and reading it with some other package/module.</p>
|
<python><opencv><yaml>
|
2023-12-23 19:44:35
| 1
| 3,029
|
MsA
|
77,708,621
| 3,697,344
|
Unable to connect to postgres instance when flask app Dockerfile in different directory
|
<p>I have a project structure like this</p>
<pre><code>project-root/
back/
Dockerfile
docker-compose.yaml
.env
</code></pre>
<p>Where <code>docker-compose.yaml</code> has</p>
<pre><code>services:
postgres:
image: postgres:latest
restart: always
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
ports:
- ${POSTGRES_PORT}:${POSTGRES_PORT}
volumes:
      - ./data:/var/lib/postgresql/data
flask_app:
network_mode: "host"
build:
context: ./back
dockerfile: Dockerfile
ports:
- ${FLASK_PORT}:${FLASK_PORT}
environment:
FLASK_ENV: "development"
depends_on:
- postgres
volumes:
- ./back:/app
</code></pre>
<p>And my <code>Dockerfile</code></p>
<pre><code>FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
ENV FLASK_APP=main.py
CMD if [ "$FLASK_ENV" = "development" ]; then flask --debug run; else flask run; fi
</code></pre>
<p>and .env</p>
<pre><code>POSTGRES_USER=postgres
POSTGRES_PASSWORD=password
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
FLASK_PORT=5000
</code></pre>
<p>But my flask app is unable to connect to postgres with the following error:</p>
<pre><code>psycopg2.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
</code></pre>
<p>Regular Python scripts run from my host, like seed scripts and integration tests, are able to connect to postgres with no problem. And when I move everything under <code>/back</code>, the Flask app within Docker can connect as well. But I would like <code>docker-compose.yaml</code>
to live in the project root so I can build out a frontend and connect all the pieces together.</p>
<p>There must be some docker compose networking piece that I haven't been able to figure out.</p>
<p>Any input is appreciated. Thanks!</p>
|
<python><postgresql><docker><flask><docker-compose>
|
2023-12-23 18:28:35
| 1
| 301
|
user3697344
|
77,708,557
| 2,205,916
|
Python Tensorflow.keras LSTM: TypeError: `generator` yielded an element of shape (36, 36, 147) where an element of shape (36, 147) was expected
|
<p>I've been struggling with the following code for some time now.</p>
<p>Basically, I'm getting either of the following two errors, depending on my adjustments:</p>
<p><code>TypeError: `generator` yielded an element of shape (36, 36, 147) where an element of shape (36, 147) was expected.</code></p>
<p>and</p>
<p><code>ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 36, 147), found shape=(36, 147)</code></p>
<p>I realize this has to do with my <code>data_generator()</code> function, as the error messages suggest that the outputs of my <code>data_generator()</code>: <code>train_data</code> and <code>val_data</code> are of different shape/dimensions than what is expected.</p>
<p>When I look at them:</p>
<pre><code>print("Train Data element_spec:", train_data.element_spec)
print("Val Data element_spec:", val_data.element_spec)
Train Data element_spec: (TensorSpec(shape=(36, 36, 147), dtype=tf.float64, name=None), TensorSpec(shape=(36,), dtype=tf.float64, name=None))
Val Data element_spec: (TensorSpec(shape=(36, 36, 147), dtype=tf.float64, name=None), TensorSpec(shape=(36,), dtype=tf.float64, name=None))
</code></pre>
<p>Which seems close. I understand that the first 36 represents the batch size, which is expected and what I want it to be. The second 36 represents time steps and 147 represents features per time step.</p>
<p>Note that <code>X</code> and <code>y</code> in the code below, and from which <code>train_data</code> and <code>val_data</code> are made, are both numpy.ndarrays with shape: <code>(418238, 36, 147)</code> and <code>(418238,)</code>, respectively.</p>
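<p>To double-check where the extra dimension comes from, I reproduced the generator in plain NumPy (no TensorFlow needed): every element it yields is already a whole batch of shape (36, 36, 147), which matches the shape in the error message:</p>

```python
import numpy as np

def data_generator(features, labels, batch_size):
    # same slicing logic as in my MWE, minus the infinite loop
    for start in range(0, len(features), batch_size):
        end = start + batch_size
        yield features[start:end], labels[start:end]

X = np.zeros((108, 36, 147))  # (samples, time steps, features)
y = np.zeros(108)

fb, lb = next(data_generator(X, y, 36))
print(fb.shape, lb.shape)  # (36, 36, 147) (36,) -- each yield is already a batch
```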
<p>Here's my MWE:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.optimizers import Adam
from kerastuner.tuners import BayesianOptimization
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
def data_generator(features, labels, batch_size):
data_size = len(features)
while True: # Loop indefinitely
for start in range(0, data_size, batch_size):
end = min(start + batch_size, data_size)
yield features[start:end], labels[start:end] # Yielding a batch of features and labels
train_data = tf.data.Dataset.from_generator(
lambda: data_generator(features=X_train, labels=y_train, batch_size=36),
output_types=(X.dtype, y.dtype),
# output_shapes=([None, 36, 147], [None]) # Update this line
# output_shapes=([None, 147], [None]) # Each sample has shape (36, 147).
# output_shapes=([36, 147], []) # TypeError: `generator` yielded an element of shape (36, 36, 147) where an element of shape (36, 147) was expected.
# output_shapes=([147], []) # ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 36, 147), found shape=(36, 147)
# output_shapes=([147], [None]) # ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 36, 147), found shape=(36, 147)
output_shapes=([None, 147], []) # TypeError: `generator` yielded an element of shape (36, 36, 147) where an element of shape (None, 147) was expected.
).repeat().batch(batch_size, drop_remainder=True)
train_data
val_data = tf.data.Dataset.from_generator(
lambda: data_generator(X_val, y_val, batch_size),
output_types=(X.dtype, y.dtype),
# output_shapes=([None, 36, 147], [None]) # Update this line
# output_shapes=([None, 147], [None]) # Each sample has shape (36, 147).
# output_shapes=([36, 147], []) # TypeError: `generator` yielded an element of shape (36, 36, 147) where an element of shape (36, 147) was expected.
# output_shapes=([147], []) # ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 36, 147), found shape=(36, 147)
# output_shapes=([147], [None]) # ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 36, 147), found shape=(36, 147)
output_shapes=([None, 147], []) # TypeError: `generator` yielded an element of shape (36, 36, 147) where an element of shape (None, 147) was expected.
).batch(batch_size, drop_remainder=True) # Usually, you don't need to repeat validation data
# Define parameters
batch_size = 36
n_epochs = 10 # Define the number of epochs
steps_per_epoch = len(X_train) // batch_size
def build_model(hp):
model = Sequential()
model.add(LSTM( units=hp.Int('units1', min_value=50, max_value=750, step=50),
batch_input_shape=(n_batch_size, X_train.shape[1], X_train.shape[2]), #36, 36, 147# ValueError: If a RNN is stateful, it needs to know its batch size. Specify the batch size of your input tensors: - If using a Sequential model, specify the batch size by passing a `batch_input_shape` argument to your first layer.
return_sequences=True, # default: False
seed=1, # Random seed for dropout.
stateful=True, # Boolean (default: False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. https://datascience.stackexchange.com/questions/66031/tensorflow-keras-what-is-stateful-true-in-lstm-layers
)
)
# print(model.summary())
model.add(LSTM(units=hp.Int('units2',min_value=50,max_value=500,step=50),return_sequences=True)) # stateful=True here causes error
model.add(LSTM(units=hp.Int('units3',min_value=50,max_value=500,step=50),return_sequences=True)) # stateful=True here causes error
model.add(LSTM(units=hp.Int('units4',min_value=50,max_value=500,step=50)))
model.add(Dense(1))
model.compile(optimizer=Adam(hp.Choice('learning_rate', values=[1e-1, 1e-2, 1e-3, 1e-4])),
loss='mean_squared_error')
return model
# Bayesian Optimization Tuner setup
bayesian_opt_tuner = BayesianOptimization(
build_model,
seed=1,
objective='val_loss',
max_trials=25,
executions_per_trial=1,
directory='bayesian_optimization',
project_name='keras_lstm',
overwrite=True,
max_consecutive_failed_trials=1
)
# Now use the train_data and val_data in the search method
best_hps = bayesian_opt_tuner.search(train_data,
epochs=n_epochs,
steps_per_epoch=steps_per_epoch,
validation_data=val_data, # Using explicit validation data here
verbose=1)
# Get the best set of hyperparameters
best_hps = bayesian_opt_tuner.get_best_hyperparameters(num_trials=1)[0]
</code></pre>
<p>Does anyone have any ideas?</p>
|
<python><tensorflow><machine-learning><keras><deep-learning>
|
2023-12-23 18:05:26
| 0
| 3,476
|
user2205916
|
77,708,463
| 22,466,650
|
How to pick slices of groups and complete some if necessary?
|
<p>My input is a dataframe :</p>
<pre><code>df = pd.DataFrame({'mycol': ['A', 'B', 'A', 'B', 'B', 'C', 'A', 'C', 'A', 'A']})
print(df)
mycol
0 A
1 B
2 A
3 B
4 B
5 C
6 A
7 C
8 A
9 A
</code></pre>
<p><code>A</code> appears 5 times, <code>B</code> appears 3 times and <code>C</code> appears 2 times.</p>
<p>Let's say we need to slice the groups to take only 3 items from each group. Then indexes 8 and 9 should be removed because group <code>A</code> exceeds the limit by 2 values, and index 10 should be created to complete group <code>C</code> because it lacks 1 value.</p>
<p>PS: the order of the original rows must be preserved, and the index is important for me as I need to track the positions (start and end) of each slice.</p>
<p>For that I made the code below, but some indexes are messed up and there are also some undesired NaN rows.</p>
<pre><code>def func(group):
df2 = pd.DataFrame(None, index=[max(group.index)+1], columns=['mycol'])
result = pd.concat([group.iloc[:3], df2])
result['newcol'] = [group.name + str(i+1) for i in range(len(result))]
return result
final = df.groupby('mycol').apply(func).droplevel(0)
print(final)
mycol newcol
0 A A1
2 A A2
6 A A3
10 NaN A4
1 B B1
3 B B2
4 B B3
5 NaN B4
5 C C1
7 C C2
8 NaN C3
</code></pre>
<p>Do you know how to fix my code? Or do you have other suggestions?</p>
<p>My expected output is this :</p>
<pre><code> mycol newcol
0 A A1
1 B B1
2 A A2
3 B B2
4 B B3
5 C C1
6 A A3
7 C C2
10 NaN C3
</code></pre>
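<p>For reference, a sketch of the truncate-then-pad behaviour I'm describing (assuming the padded rows should get fresh indexes after the end of the original frame; the variable names are mine):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'mycol': ['A', 'B', 'A', 'B', 'B', 'C', 'A', 'C', 'A', 'A']})
k = 3

# keep only the first k occurrences of each value, preserving the original order
out = df[df.groupby('mycol').cumcount() < k].copy()
out['newcol'] = out['mycol'] + (out.groupby('mycol').cumcount() + 1).astype(str)

# pad groups that have fewer than k rows, indexing the new rows after the frame
pad = []
next_idx = df.index.max() + 1
for name, cnt in out['mycol'].value_counts().items():
    for i in range(cnt, k):
        pad.append(pd.DataFrame({'mycol': [np.nan], 'newcol': [f'{name}{i + 1}']},
                                index=[next_idx]))
        next_idx += 1
if pad:
    out = pd.concat([out, *pad])
print(out)
```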
|
<python><pandas>
|
2023-12-23 17:30:51
| 2
| 1,085
|
VERBOSE
|
77,708,267
| 1,874,170
|
Multiple byte "OUTPUT" arguments in SWIG?
|
<p>I've got a C library which has 3 functions (all arguments are fixed-length <code>uint8_t*</code>, with the length known at compile time):</p>
<ul>
<li>One which has 2 output parameters only</li>
<li>One which has 1 input parameter and 2 output parameters</li>
<li>One which has 2 input parameters and 1 output parameter</li>
</ul>
<p>I'm trying to create SWIG-based Python bindings for this C library, but they don't seem to work. Even after importing <code>typemaps.i</code>, SWIG still complains that <strong>a <code>bytearray</code> is inappropriate for a <code>uint8_t INOUT[]</code> argument when trying to call the function.</strong></p>
<p>I don't get it. What am I doing wrong?</p>
<p><a href="https://www.swig.org/Doc4.1/Python.html#Python_nn46" rel="nofollow noreferrer">https://www.swig.org/Doc4.1/Python.html#Python_nn46</a></p>
<blockquote>
<p>Notice how the <code>INPUT</code> parameters allow integer values to be passed instead of pointers and how the <code>OUTPUT</code> parameter creates a return result.</p>
<p>If you don't want to use the names <code>INPUT</code> or <code>OUTPUT</code>, use the %apply directive.</p>
</blockquote>
<p>It doesn't look like <code>%pybuffer_mutable_binary</code> is the right choice, since it <em>requires</em> a size parameter, and my interface doesn't <em>have</em> one.</p>
<hr />
<h2>Appendix: code</h2>
<h3>SWIG module</h3>
<pre><code>%module "libmceliece6960119f_clean"
%include "typemaps.i"
%{
// https://github.com/PQClean/PQClean/blob/fb003a2a625c49f3090eec546b2383dcfa2c75d8/crypto_kem/mceliece6960119f/clean/api.h
#include "api.h"
%}
int PQCLEAN_MCELIECE6960119F_CLEAN_crypto_kem_keypair(
uint8_t INOUT[PQCLEAN_MCELIECE6960119F_CLEAN_CRYPTO_PUBLICKEYBYTES],
uint8_t INOUT[PQCLEAN_MCELIECE6960119F_CLEAN_CRYPTO_SECRETKEYBYTES]);
int PQCLEAN_MCELIECE6960119F_CLEAN_crypto_kem_enc(
uint8_t INOUT[PQCLEAN_MCELIECE6960119F_CLEAN_CRYPTO_CIPHERTEXTBYTES],
uint8_t INOUT[PQCLEAN_MCELIECE6960119F_CLEAN_CRYPTO_BYTES],
const uint8_t INPUT[PQCLEAN_MCELIECE6960119F_CLEAN_CRYPTO_PUBLICKEYBYTES]);
int PQCLEAN_MCELIECE6960119F_CLEAN_crypto_kem_dec(
uint8_t INOUT[PQCLEAN_MCELIECE6960119F_CLEAN_CRYPTO_BYTES],
const uint8_t INPUT[PQCLEAN_MCELIECE6960119F_CLEAN_CRYPTO_CIPHERTEXTBYTES],
const uint8_t INPUT[PQCLEAN_MCELIECE6960119F_CLEAN_CRYPTO_SECRETKEYBYTES]);
</code></pre>
<h3>Usage (broken)</h3>
<pre class="lang-py prettyprint-override"><code>_libmceliece6960119f_clean.PQCLEAN_MCELIECE6960119F_CLEAN_crypto_kem_keypair()
# Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
# TypeError: PQCLEAN_MCELIECE6960119F_CLEAN_crypto_kem_keypair expected 2 arguments, got 0
pk, sk = bytearray(1047319), bytearray(13948); _libmceliece6960119f_clean.PQCLEAN_MCELIECE6960119F_CLEAN_crypto_kem_keypair(pk, sk)
# Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
# TypeError: in method 'PQCLEAN_MCELIECE6960119F_CLEAN_crypto_kem_keypair', argument 1 of type 'uint8_t [PQCLEAN_MCELIECE6960119F_CLEAN_CRYPTO_PUBLICKEYBYTES]'
</code></pre>
|
<python><swig><swig-typemap>
|
2023-12-23 16:15:31
| 1
| 1,117
|
JamesTheAwesomeDude
|
77,708,266
| 1,473,517
|
Speed-up for finding an optimal partition line
|
<p>This coding question derived from this <a href="https://codegolf.stackexchange.com/questions/268581/finding-the-optimal-straight-line-boundary">question</a>.</p>
<p>Consider an n by n grid of integers. The task is to draw a straight line across the grid so that the part that includes the top left corner sums to the largest number possible. Here is a picture of an optimal solution with score 45:</p>
<p><a href="https://i.sstatic.net/Oq6xh.png" rel="noreferrer"><img src="https://i.sstatic.net/Oq6xh.png" alt="enter image description here" /></a></p>
<p>We include a square in the part that is to be summed if its middle is above or on the line. Above means in the part including the top left corner of the grid. (To make this definition clear, no line can start exactly in the top left corner of the grid.)</p>
<p>The task is to choose the line that maximizes the sum of the part that includes the top left square. The line must go straight from one side to another. The line can start or end anywhere on a side and not just at integer points.</p>
<p>The Python code given is:</p>
<pre><code>import numpy as np
import fractions
def best_line(grid):
n, m = grid.shape
D = [(di, dj) for di in range(-(n - 1), n) for dj in range(-(n - 1), n)]
def slope(d):
di, dj = d
if dj == 0:
return float('inf') if di <= 0 else float('-inf'), -di
else:
return fractions.Fraction(di, dj), fractions.Fraction(-1, dj)
D.sort(key=slope)
D = np.array(D, dtype=np.int64)
s_max = grid.sum()
for grid in (grid, grid.T):
left_sum = 0
for j in range(grid.shape[1]):
left_sum += grid[:,j].sum()
for i in range(grid.shape[0]):
p = np.array([i, j], dtype=np.int64)
Q = p + D
Q = Q[np.all((0 <= Q) & (Q < np.array(grid.shape)), axis=1)]
s = left_sum
for q in Q:
if not np.any(q):
break
if q[1] <= j:
s -= grid[q[0],q[1]]
else:
s += grid[q[0],q[1]]
s_max = max(s_max, s)
return s_max
</code></pre>
<p>This code is already slow for n=30.</p>
<p>Is there any way to speed it up in practice?</p>
<h1>Test cases</h1>
<p>As the problem is quite complicated, I have given some example inputs and outputs.</p>
<p>The easiest test cases are when the input matrix is made of positive (or negative) integers only. In that case a line that makes the part to sum the whole matrix (or the empty matrix if all the integers are negative) wins.</p>
<p>Only slightly less simple is if there is a line that clearly separates the negative integers from the non negative integers in the matrix.</p>
<p>Here is a slightly more difficult example with an optimal line shown. The optimal value is 14.</p>
<p><a href="https://i.sstatic.net/XDsHL.png" rel="noreferrer"><img src="https://i.sstatic.net/XDsHL.png" alt="enter image description here" /></a></p>
<p>The grid in machine readable form is:</p>
<pre><code>[[ 3 -1 -2 -1]
[ 0 1 -1 1]
[ 1 1 3 0]
[ 3 3 -1 -1]]
</code></pre>
<p>Here is an example with optimal value 0.</p>
<p><a href="https://i.sstatic.net/kyQDM.png" rel="noreferrer"><img src="https://i.sstatic.net/kyQDM.png" alt="enter image description here" /></a></p>
<pre><code>[[-3 -3 2 -3]
[ 0 -2 -1 0]
[ 1 0 2 0]
[-1 -2 1 -1]]
</code></pre>
<p>This matrix has optimal score 31:</p>
<pre><code>[[ 3 0 1 3 -1 1 1 3 -2 -1]
[ 3 -1 -1 1 0 -1 2 1 -2 0]
[ 2 2 -2 0 1 -3 0 -2 2 1]
[ 0 -3 -3 -1 -1 3 -2 0 0 3]
[ 2 2 3 2 -1 0 3 0 -3 -1]
[ 1 -1 3 1 -3 3 -2 0 -3 0]
[ 2 -2 -2 -3 -2 1 -2 0 0 3]
[ 0 3 0 1 3 -1 2 -3 0 -2]
[ 0 -2 2 2 2 -2 0 2 1 3]
[-2 -2 0 -2 -2 2 0 2 3 3]]
</code></pre>
<p>In Python/numpy, an easy way to make more test matrices is:</p>
<pre><code>import numpy as np
N = 30
square = np.random.randint(-3, 4, size=(N, N))
</code></pre>
<h1>Timing</h1>
<pre><code>N = 30
np.random.seed(42)
big_square = np.random.randint(-3, 4, size=(N, N))
print(best_line(np.array(big_square)))
</code></pre>
<p>takes 1 minute 55 seconds and gives the output 57.</p>
<ul>
<li>Andrej Kesely's parallel code takes 1 min 5 seconds for n=250. This is a huge improvement.</li>
</ul>
<p>Can it be made faster still?</p>
|
<python><algorithm><performance><optimization>
|
2023-12-23 16:15:30
| 2
| 21,513
|
Simd
|
77,708,146
| 5,540,159
|
My autoencoder was not learning to predict value [Updated]
|
<p>I am trying to build a variational autoencoder in Keras, with an input shape of X = (1, 50) and Y = (1, 20).</p>
<p>I have uploaded the DataSet, <a href="https://www.dropbox.com/scl/fi/tp3dr1dxdml454zjuepin/DataSet.rar?rlkey=wazw1m29ck4400x6fg5aetl2z&dl=0" rel="nofollow noreferrer">you can download it from here</a>.</p>
<p>I have one input, and I want to learn the relation between the input and output (the data is one-dimensional binary data), but I always get these results:</p>
<p><a href="https://i.sstatic.net/8N5o5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8N5o5.png" alt="enter image description here" /></a></p>
<p>I tried changing the activation and loss functions, with no positive results.</p>
<pre><code>from keras.layers import Lambda, Input, Dense, Dropout
from keras.models import Model
from keras import backend as K, optimizers
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import keras.optimizers
# Function for reparameterization trick
def sampling(args):
z_mean, z_log_var = args
batch = K.shape(z_mean)[0]
dim = K.int_shape(z_mean)[1]
epsilon = K.random_normal(shape=(batch, dim))
return z_mean + K.exp(0.5 * z_log_var) * epsilon
# Load your data
# Note: Replace this with your actual data loading
# training_feature = X
# ground_truth_r = Y
original_dim = 32 # Adjust according to your data shape
latent_dim = 32
# Encoder network
inputs_x = Input(shape=(original_dim, ), name='encoder_input')
inputs_x_dropout = Dropout(0.25)(inputs_x)
inter_x1 = Dense(128, activation='tanh')(inputs_x_dropout)
inter_x2 = Dense(64, activation='tanh')(inter_x1)
z_mean = Dense(latent_dim, name='z_mean')(inter_x2)
z_log_var = Dense(latent_dim, name='z_log_var')(inter_x2)
z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])
encoder = Model(inputs_x, [z_mean, z_log_var, z], name='encoder')
# Decoder network for reconstruction
latent_inputs = Input(shape=(latent_dim,), name='z_sampling')
inter_y1 = Dense(64, activation='tanh')(latent_inputs)
inter_y2 = Dense(128, activation='tanh')(inter_y1)
outputs_reconstruction = Dense(original_dim)(inter_y2) # original_dim should be 32
decoder = Model(latent_inputs, outputs_reconstruction, name='decoder')
decoder.compile(optimizer='adam', loss='mean_squared_error')
from keras.models import Model, Sequential
from keras.layers import BatchNormalization
# Predictor network
# Start of the predictor model
latent_input_for_predictor = Input(shape=(latent_dim,))
# Building the predictor model using the functional API
x = Dense(1024, activation='relu')(latent_input_for_predictor)
x = Dense(512, activation='relu')(x)
x = Dense(64, activation='relu')(x)
x = Dense(32, activation='relu')(x)
x = BatchNormalization()(x)
predictor_output = Dense(Y.shape[1], activation='linear')(x) # Adjust the output dimension as per your requirement
if ( 1 == 1):
# Create the model
predictor = Model(inputs=latent_input_for_predictor, outputs=predictor_output)
# Compile the model
optimizer = optimizers.Adam(learning_rate=0.001)
predictor.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['accuracy'])
# Train the reconstruction model
history_reconstruction = decoder.fit(X, X, epochs=100, batch_size=100, shuffle=True, validation_data=(XX, XX))
latent_representations = encoder.predict(X)[2]
#vae.fit([training_feature_sk,training_score], epochs=epochs, batch_size=batch_size, verbose = 0)
# Train the prediction model
history_prediction = predictor.fit(latent_representations, Y, epochs=100, batch_size=100, shuffle=True, validation_data=(encoder.predict(XX)[2], YY))
# Save models and plot training/validation loss
encoder.save("BrmEnco_Updated.h5", overwrite=True)
decoder.save("BrmDeco_Updated.h5", overwrite=True)
predictor.save("BrmPred_Updated.h5", overwrite=True)
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(history_reconstruction.history['loss'], label='Decoder Training Loss')
plt.plot(history_reconstruction.history['val_loss'], label='Decoder Validation Loss')
plt.title('Decoder Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(history_prediction.history['loss'], label='Predictor Training Loss')
plt.plot(history_prediction.history['val_loss'], label='Predictor Validation Loss')
plt.title('Predictor Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend()
plt.show()
</code></pre>
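<p>The <code>sampling</code> layer above implements the reparameterization trick. A minimal NumPy-only sketch (outside Keras, purely for illustration) of what it computes:</p>

```python
import numpy as np

# NumPy illustration of the reparameterization trick used by the
# `sampling` function above: z = mean + exp(0.5 * log_var) * eps,
# with eps drawn from a standard normal.
rng = np.random.default_rng(0)
z_mean = np.zeros((10_000, 2))
z_log_var = np.full((10_000, 2), np.log(4.0))  # variance 4 -> std 2
eps = rng.standard_normal(z_mean.shape)
z = z_mean + np.exp(0.5 * z_log_var) * eps
print(z.std(axis=0))  # both entries should be close to 2
```

<p>Separately, it may be worth checking that, as written, <code>decoder.fit(X, X, ...)</code> trains the decoder directly on the raw inputs and never updates the encoder's weights; that mismatch is one likely reason the model appears not to learn.</p>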
|
<python><keras><deep-learning><neural-network>
|
2023-12-23 15:33:28
| 1
| 962
|
stevGates
|
77,707,957
| 519,422
|
How to preserve whitespace in the "name" part of a name/value pair when importing JSON-formatted data (Python)?
|
<p>I have been asking for help in importing JSON-formatted data from URLs (I am a newbie as far as dealing with JSON) and received a great answer in response to <a href="https://stackoverflow.com/questions/77681060/how-to-get-json-formatted-data-from-a-website-into-a-dataframe-i-e-extract-va/77681349#77681349">this</a> question.</p>
<p>However, I have encountered a complication. Some of my property names contain spaces. For example, "Property1" and several other property names from my previous question might actually be "Property1_word1 Property1_word2." The current solution preserves only the first word of a property name. I could get away with that at first but now need all words. If anyone could please point me to any tips, I would be grateful. I haven't managed to find any so far.</p>
<hr />
<p>Edit (providing all information here so that there's no need to refer to previous posts):</p>
<p>I want to import data from a website. First I save the contents (below) of the website as a file. In my previous question, each property name was made up of only one word. Now I'm dealing with property names that are made up of multiple words. I have provided an example below, where Property1, Property4, and Property8 have names with multiple words.</p>
<pre><code>{
"payload": {
"allShortcutsEnabled": false,
"fileTree": {
"": {
"items": [
{
"name": "thing",
"path": "thing",
"contentType": "directory"
},
{
"name": ".repurlignore",
"path": ".repurlignore",
"contentType": "file"
},
{
"name": "README.md",
"path": "README.md",
"contentType": "file"
},
{
"name": "thing2",
"path": "thing2",
"contentType": "file"
},
{
"name": "thing3",
"path": "thing3",
"contentType": "file"
},
{
"name": "thing4",
"path": "thing4",
"contentType": "file"
},
{
"name": "thing5",
"path": "thing5",
"contentType": "file"
},
{
"name": "thing6",
"path": "thing6",
"contentType": "file"
},
{
"name": "thing7",
"path": "thing7",
"contentType": "file"
},
{
"name": "thing8",
"path": "thing8",
"contentType": "file"
},
{
"name": "thing9",
"path": "thing9",
"contentType": "file"
},
{
"name": "thing10",
"path": "thing10",
"contentType": "file"
},
{
"name": "thing11",
"path": "thing11",
"contentType": "file"
}
],
"totalCount": 500
}
},
"fileTreeProcessingTime": 5.262188,
"foldersToFetch": [],
"reducedMotionEnabled": null,
"repo": {
"id": 1234567,
"defaultBranch": "main",
"name": "repository",
"ownerLogin": "contributor",
"currentUserCanPush": false,
"isFork": false,
"isEmpty": false,
"createdAt": "2023-10-31",
"ownerAvatar": "https://avatars.repurlusercontent.com/u/98765432?v=1",
"public": true,
"private": false,
"isOrgOwned": false
},
"symbolsExpanded": false,
"treeExpanded": true,
"refInfo": {
"name": "main",
"listCacheKey": "v0:13579",
"canEdit": false,
"refType": "branch",
"currentOid": "identifier"
},
"path": "thing2",
"currentUser": null,
"blob": {
"rawLines": [
" C_1H_4 Methane ",
" 5.00000 Property1_word1 Property1_word2 ",
" 20.00000 Property2 ",
" 500.66500 Property3 ",
" 100.00000 Property4_word1 Property4_word2 ",
" -4453.98887 Property5 ",
" 100.48200 Property6 ",
" 59.75258 Property7 ",
" 5.33645 Property8_word1 Property8_word2 ",
" 0.00000 Property9 ",
" 645.07777 Property10 ",
" 0.00000 Property11 ",
" 0.00000 Property12 ",
" 0.00000 Property13 ",
" 0.00000 Property14 ",
" 0.00000 Property15 ",
" 0.00000 Property16 ",
" 0.00000 Property17 ",
" 0.00000 Property18 ",
" 0.00000 Property19 ",
" 0.00000 Property20 ",
" 0.00000 Property21 ",
" 0.00000 Property22 ",
" 0.00000 Property23 ",
" 0.00000 Property24 ",
" 0.00000 Property25 ",
" 0.57876 Property26 ",
" 4.00000 Property27 ",
" 0.00000 Property28 ",
" 0.00000 Property29 ",
" 0.00000 Property30 ",
" 0.00000 Property31 ",
" 0.00000 Property32 ",
" 1.00000 Property33 ",
" 0.00000 Property34 ",
" 26.00000 Property35 ",
" 1.44571 Property36 ",
" 1.08756 Property37 ",
" 0.00000 Property38 ",
" 0.00000 Property39 ",
" 0.00000 Property40 ",
" 6.00000 Property41 ",
" 9.00000 Property42 ",
" 0.00000 Property43 "
],
"stylingDirectives": [
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[]
],
"csv": null,
"csvError": null,
"dependabotInfo": {
"showConfigurationBanner": false,
"configFilePath": null,
"networkDependabotPath": "/contributor/repository/network/updates",
"dismissConfigurationNoticePath": "/settings/dismiss-notice/dependabot_configuration_notice",
"configurationNoticeDismissed": null,
"repoAlertsPath": "/contributor/repository/security/dependabot",
"repoSecurityAndAnalysisPath": "/contributor/repository/settings/security_analysis",
"repoOwnerIsOrg": false,
"currentUserCanAdminRepo": false
},
"displayName": "thing2",
"displayUrl": "https://repurl.com/contributor/repository/blob/main/thing2?raw=true",
"headerInfo": {
"blobSize": "3.37 KB",
"deleteInfo": {
"deleteTooltip": "You must be signed in to make or propose changes"
},
"editInfo": {
"editTooltip": "XXX"
},
"ghDesktopPath": "https://desktop.repurl.com",
"repurlLfsPath": null,
"onBranch": true,
"shortPath": "5678",
"siteNavLoginPath": "/login?return_to=identifier",
"isCSV": false,
"isRichtext": false,
"toc": null,
"lineInfo": {
"truncatedLoc": "33",
"truncatedSloc": "33"
},
"mode": "executable file"
},
"image": false,
"isCodeownersFile": null,
"isPlain": false,
"isValidLegacyIssueTemplate": false,
"issueTemplateHelpUrl": "https://docs.repurl.com/articles/about-issue",
"issueTemplate": null,
"discussionTemplate": null,
"language": null,
"languageID": null,
"large": false,
"loggedIn": false,
"newDiscussionPath": "/contributor/repository/issues/new",
"newIssuePath": "/contributor/repository/issues/new",
"planSupportInfo": {
"repoOption1": null,
"repoOption2": null,
"requestFullPath": "/contributor/repository/blob/main/thing2",
"repoOption4": null,
"repoOption5": null,
"repoOption6": null,
"repoOption7": null
},
"repoOption8": {
"repoOption9": "/settings/dismiss-notice/repoOption10",
"releasePath": "/contributor/repository/releases/new=true",
"repoOption11": false,
"repoOption12": false
},
"rawBlobUrl": "https://repurl.com/contributor/repository/raw/main/thing2",
"repoOption13": false,
"richText": null,
"renderedFileInfo": null,
"shortPath": null,
"tabSize": 8,
"topBannersInfo": {
"overridingGlobalFundingFile": false,
"universalPath": null,
"repoOwner": "contributor",
"repoName": "repository",
"repoOption14": false,
"citationHelpUrl": "https://docs.repurl.com/en/repurl/archiving/about",
"repoOption15": false,
"repoOption16": null
},
"truncated": false,
"viewable": true,
"workflowRedirectUrl": null,
"symbols": {
"timedOut": false,
"notAnalyzed": true,
"symbols": []
}
},
"collabInfo": null,
"collabMod": false,
"wtsdf_signifier": {
"/contributor/repository/branches": {
"post": "identifier"
},
"/repos/preferences": {
"post": "identifier"
}
}
},
"title": "repository/thing2 at main \\u0000 contributor/repository"
}
</code></pre>
<p>Here is the code that deals with property names made up of one word (the command that strips whitespace means that only the first word of a multi-word name is imported):</p>
<pre><code>import json
import pandas as pd
f = open("yourJson.json", "r")
data = json.load(f)
f.close()
# Get what we want to extract from the json
to_extract = data["payload"]["blob"]["rawLines"]
# Remove useless whitespace
stripped = [e.strip() for e in to_extract]
trimmed = [" ".join(e.split()) for e in stripped]
# Transform the list of string to a dict
as_dict = {e.split(' ')[0]: e.split(' ')[1] for e in trimmed}
# Load the dict with pandas
df = pd.DataFrame(as_dict.items(), columns=['Value', 'Property'])
</code></pre>
<p>I have experimented with various solutions (e.g., not stripping whitespace, specifying the exact property names associated with the data I need) but am so lost as far as JSON that the errors are not meaningful.</p>
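<p>One way to keep multi-word names (a sketch, assuming each line is always a value followed by a name): split each stripped line only once, on the first run of whitespace, with <code>str.split(maxsplit=1)</code>, so everything after the value survives intact. Illustrated on two hypothetical entries shaped like the <code>rawLines</code> above:</p>

```python
import pandas as pd

# Hypothetical sample of "rawLines" entries like those shown above.
raw_lines = [
    "      5.00000  Property1_word1 Property1_word2                 ",
    "     20.00000  Property2                                       ",
]

rows = []
for line in raw_lines:
    # maxsplit=1 splits only at the first whitespace run, so the
    # remainder (the full property name) keeps its internal spaces.
    value, name = line.strip().split(maxsplit=1)
    rows.append((value, name))

df = pd.DataFrame(rows, columns=["Value", "Property"])
print(df)
```

<p>The first data line (<code>C_1H_4 Methane</code>) has no leading value and would need to be skipped or handled separately; using a list of tuples rather than a dict also avoids collisions between the many repeated <code>0.00000</code> values.</p>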
|
<python><json><python-3.x><pandas><file-io>
|
2023-12-23 14:29:14
| 2
| 897
|
Ant
|
77,707,880
| 16,748,945
|
How to groupby a multiindex dataframe with different rules on different levels?
|
<p>If I have a dataframe like</p>
<pre><code>In [58]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one1", "one2", "one1", "one2", "one1", "two", "one1", "two"],
....: ]
....:
In [59]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [60]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [61]: df
Out[61]:
A B
first second
bar one1 1 0
one2 1 1
baz one1 1 2
one2 1 3
foo one1 2 4
two 2 5
qux one1 3 6
two 3 7
</code></pre>
<p>Then I want a result</p>
<pre><code>Out[61]:
A B
first second
bar one 2 1
baz one 2 5
foo one 2 4
two 2 5
qux one 3 6
two 3 7
</code></pre>
<p>That is to say, at level 0 I want to group as is and keep its own index, while at level 1 I want to group by something like <code>lambda x: x[:3]</code>, so that the new index becomes <code>x[:3]</code>. Can this be achieved with a single <code>groupby</code> statement, or in some other way?</p>
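<p>A sketch of one way to do this with a single <code>groupby</code> call: pass a list of groupers, keeping level 0 by name and deriving the level-1 key from the first three characters of the index values (assuming the intent is truncation to three characters):</p>

```python
import numpy as np
import pandas as pd

arrays = [
    ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
    ["one1", "one2", "one1", "one2", "one1", "two", "one1", "two"],
]
index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)

# Group by the existing level 0 plus a key derived from level 1.
out = df.groupby(["first", df.index.get_level_values("second").str[:3]]).sum()
print(out)
```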
|
<python><dataframe><group-by>
|
2023-12-23 14:03:19
| 1
| 665
|
f1msch
|
77,707,742
| 893,254
|
How to perform insert statements with SQL Alchemy in a structured way?
|
<p>I do not know if the following is possible with SQL Alchemy and Python.</p>
<p>My objective is to perform an insert statement in a more structured way, using some kind of object as a row.</p>
<p>Some pseudocode might look like this:</p>
<pre><code># SQL:
#
# create table test_table (
# id bigserial not null,
# value varchar(256)
# );
row = # get some row object from somewhere
row.value = "some new value"
row.insert()
# or
# test_table.insert(row)
</code></pre>
<p>Is something like this possible?</p>
<p>The key point here is to be able to manipulate an object which represents a row in a structured way such that the values can be set using column names. In this case there is just the one column named <code>value</code>.</p>
<p>The <code>id</code> column, being a primary key would presumably not be manually settable. So it may be the case the syntax uses strings for column names rather than being actual field names. (Just a guess)</p>
<pre><code>row.set_column('value', 'some new text value')
</code></pre>
<p>I couldn't find anything in the SQL Alchemy documentation which explains how to do this. I also tried asking ChatGPT - and it didn't seem to think this is possible.</p>
<p>The reason why I think it should be possible is that select statements work in a similarly structured way.</p>
<pre><code>query = test_table.select()
result_proxy = connection.execute(query)
result_set = result_proxy.fetchall()
</code></pre>
<p>From the returned data it is possible to iterate over rows as tuples, to get a mapping of column names to tuple index, and do similarly structured things.</p>
<p>The closest I have been able to get so far for an <code>insert</code> statement is this:</p>
<pre><code>query = test_table.insert().values(value='a text value')
</code></pre>
<p>This isn't great because it doesn't make it obvious if a column is missing, or one which doesn't exist in the table has been provided as an argument.</p>
<p>It also doesn't work when trying to insert multiple rows. In that case, the only option I have found is to use a list of dictionaries. Since dictionaries are totally unstructured this seems like a worse alternative.</p>
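<p>What is being described sounds like the SQLAlchemy ORM, as opposed to the Core <code>table.insert()</code> API used above: a mapped class represents the table, and each instance is a row whose columns are attributes. A minimal sketch with an in-memory SQLite database and a plain <code>Integer</code> id standing in for <code>bigserial</code>:</p>

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class TestTable(Base):
    __tablename__ = "test_table"
    id = Column(Integer, primary_key=True)  # generated by the DB, not set manually
    value = Column(String(256))

engine = create_engine("sqlite://")  # in-memory DB for the sketch
Base.metadata.create_all(engine)

with Session(engine) as session:
    row = TestTable(value="some new value")  # columns set by attribute name
    session.add(row)
    # Multiple rows stay structured objects, not dicts:
    session.add_all([TestTable(value="a"), TestTable(value="b")])
    session.commit()
    # A misspelled column, e.g. TestTable(vlaue="x"), raises TypeError.

with Session(engine) as session:
    values = [r.value for r in session.query(TestTable).order_by(TestTable.id)]
    print(values)  # ['some new value', 'a', 'b']
```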
|
<python><sqlalchemy>
|
2023-12-23 13:11:25
| 0
| 18,579
|
user2138149
|
77,707,689
| 1,283,836
|
python-docx: matching an lxml element using xpath
|
<p>I'm trying to access the underlying <code>lxml</code> data of a cell of a table in a docx file using <a href="https://pypi.org/project/python-docx/" rel="nofollow noreferrer">python-docx</a> library in order to change the <code>val</code> attribute of an element.</p>
<pre><code>from docx import Document
doc = Document("test.docx")
table = doc.tables[0]
target_cell = table.cell(0, 0)
print(target_cell._tc.xml)
</code></pre>
<p>Up to this point, everything works and I can get the <code>lxml</code> of the cell which is this:</p>
<pre><code><w:tc xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:wpc="http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas" xmlns:cx="http://schemas.microsoft.com/office/drawing/2014/chartex" xmlns:cx1="http://schemas.microsoft.com/office/drawing/2015/9/8/chartex" xmlns:cx2="http://schemas.microsoft.com/office/drawing/2015/10/21/chartex" xmlns:cx3="http://schemas.microsoft.com/office/drawing/2016/5/9/chartex" xmlns:cx4="http://schemas.microsoft.com/office/drawing/2016/5/10/chartex" xmlns:cx5="http://schemas.microsoft.com/office/drawing/2016/5/11/chartex" xmlns:cx6="http://schemas.microsoft.com/office/drawing/2016/5/12/chartex" xmlns:cx7="http://schemas.microsoft.com/office/drawing/2016/5/13/chartex" xmlns:cx8="http://schemas.microsoft.com/office/drawing/2016/5/14/chartex" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:aink="http://schemas.microsoft.com/office/drawing/2016/ink" xmlns:am3d="http://schemas.microsoft.com/office/drawing/2017/model3d" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml" xmlns:w15="http://schemas.microsoft.com/office/word/2012/wordml" xmlns:w16cex="http://schemas.microsoft.com/office/word/2018/wordml/cex" xmlns:w16cid="http://schemas.microsoft.com/office/word/2016/wordml/cid" xmlns:w16="http://schemas.microsoft.com/office/word/2018/wordml" xmlns:w16sdtdh="http://schemas.microsoft.com/office/word/2020/wordml/sdtdatahash" xmlns:w16se="http://schemas.microsoft.com/office/word/2015/wordml/symex" 
xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:wpi="http://schemas.microsoft.com/office/word/2010/wordprocessingInk" xmlns:wne="http://schemas.microsoft.com/office/word/2006/wordml" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape">
<w:tcPr>
<w:tcW w:w="1024" w:type="pct"/>
<w:shd w:val="clear" w:color="auto" w:fill="auto"/>
</w:tcPr>
<w:p w14:paraId="33557991" w14:textId="77777777" w:rsidR="00B07513" w:rsidRPr="00F2422D" w:rsidRDefault="00B07513" w:rsidP="0081609A">
<w:pPr>
<w:rPr>
<w:rFonts w:cs="B Yekan"/>
<w:sz w:val="12"/>
<w:szCs w:val="12"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:cs="B Yekan"/>
<w:noProof/>
</w:rPr>
<mc:AlternateContent>
<mc:Choice Requires="wps">
<w:drawing>
<wp:anchor distT="0" distB="0" distL="114300" distR="114300" simplePos="0" relativeHeight="251704320" behindDoc="0" locked="0" layoutInCell="1" allowOverlap="1" wp14:anchorId="7C3C7F8B" wp14:editId="073786DF">
<wp:simplePos x="0" y="0"/>
<wp:positionH relativeFrom="column">
<wp:posOffset>-1270</wp:posOffset>
</wp:positionH>
<wp:positionV relativeFrom="paragraph">
<wp:posOffset>3175</wp:posOffset>
</wp:positionV>
<wp:extent cx="232426" cy="88265"/>
<wp:effectExtent l="0" t="0" r="15240" b="26035"/>
<wp:wrapNone/>
<wp:docPr id="271" name="Rectangle: Rounded Corners 271"/>
<wp:cNvGraphicFramePr/>
<a:graphic xmlns:a="http://schemas.openxmlformats.org/drawingml/2006/main">
<a:graphicData uri="http://schemas.microsoft.com/office/word/2010/wordprocessingShape">
<wps:wsp>
<wps:cNvSpPr/>
<wps:spPr>
<a:xfrm>
<a:off x="0" y="0"/>
<a:ext cx="232426" cy="88265"/>
</a:xfrm>
<a:prstGeom prst="roundRect">
<a:avLst>
<a:gd name="adj" fmla="val 39243"/>
</a:avLst>
</a:prstGeom>
<a:solidFill>
<a:srgbClr val="FFFF00"/>
</a:solidFill>
<a:ln>
<a:solidFill>
<a:schemeClr val="bg1">
<a:lumMod val="95000"/>
</a:schemeClr>
</a:solidFill>
</a:ln>
</wps:spPr>
<wps:style>
<a:lnRef idx="2">
<a:schemeClr val="accent1">
<a:shade val="50000"/>
</a:schemeClr>
</a:lnRef>
<a:fillRef idx="1">
<a:schemeClr val="accent1"/>
</a:fillRef>
<a:effectRef idx="0">
<a:schemeClr val="accent1"/>
</a:effectRef>
<a:fontRef idx="minor">
<a:schemeClr val="lt1"/>
</a:fontRef>
</wps:style>
<wps:bodyPr rot="0" spcFirstLastPara="0" vertOverflow="overflow" horzOverflow="overflow" vert="horz" wrap="square" lIns="91440" tIns="45720" rIns="91440" bIns="45720" numCol="1" spcCol="0" rtlCol="0" fromWordArt="0" anchor="ctr" anchorCtr="0" forceAA="0" compatLnSpc="1">
<a:prstTxWarp prst="textNoShape">
<a:avLst/>
</a:prstTxWarp>
<a:noAutofit/>
</wps:bodyPr>
</wps:wsp>
</a:graphicData>
</a:graphic>
<wp14:sizeRelH relativeFrom="margin">
<wp14:pctWidth>0</wp14:pctWidth>
</wp14:sizeRelH>
</wp:anchor>
</w:drawing>
</mc:Choice>
<mc:Fallback>
<w:pict>
<v:roundrect w14:anchorId="2887428C" id="Rectangle: Rounded Corners 271" o:spid="_x0000_s1026" style="position:absolute;margin-left:-.1pt;margin-top:.25pt;width:18.3pt;height:6.95pt;z-index:251704320;visibility:visible;mso-wrap-style:square;mso-width-percent:0;mso-wrap-distance-left:9pt;mso-wrap-distance-top:0;mso-wrap-distance-right:9pt;mso-wrap-distance-bottom:0;mso-position-horizontal:absolute;mso-position-horizontal-relative:text;mso-position-vertical:absolute;mso-position-vertical-relative:text;mso-width-percent:0;mso-width-relative:margin;v-text-anchor:middle" arcsize="25716f" o:gfxdata="UEsDBBQABgAIAAAAIQC2gziS/gAAAOEBAAATAAAAW0NvbnRlbnRfVHlwZXNdLnhtbJSRQU7DMBBF&#10;90jcwfIWJU67QAgl6YK0S0CoHGBkTxKLZGx5TGhvj5O2G0SRWNoz/78nu9wcxkFMGNg6quQqL6RA&#10;0s5Y6ir5vt9lD1JwBDIwOMJKHpHlpr69KfdHjyxSmriSfYz+USnWPY7AufNIadK6MEJMx9ApD/oD&#10;OlTrorhX2lFEilmcO2RdNtjC5xDF9pCuTyYBB5bi6bQ4syoJ3g9WQ0ymaiLzg5KdCXlKLjvcW893&#10;SUOqXwnz5DrgnHtJTxOsQfEKIT7DmDSUCaxw7Rqn8787ZsmRM9e2VmPeBN4uqYvTtW7jvijg9N/y&#10;JsXecLq0q+WD6m8AAAD//wMAUEsDBBQABgAIAAAAIQA4/SH/1gAAAJQBAAALAAAAX3JlbHMvLnJl&#10;bHOkkMFqwzAMhu+DvYPRfXGawxijTi+j0GvpHsDYimMaW0Yy2fr2M4PBMnrbUb/Q94l/f/hMi1qR&#10;JVI2sOt6UJgd+ZiDgffL8ekFlFSbvV0oo4EbChzGx4f9GRdb25HMsYhqlCwG5lrLq9biZkxWOiqY&#10;22YiTra2kYMu1l1tQD30/bPm3wwYN0x18gb45AdQl1tp5j/sFB2T0FQ7R0nTNEV3j6o9feQzro1i&#10;OWA14Fm+Q8a1a8+Bvu/d/dMb2JY5uiPbhG/ktn4cqGU/er3pcvwCAAD//wMAUEsDBBQABgAIAAAA&#10;IQA4fUuVzAIAABUGAAAOAAAAZHJzL2Uyb0RvYy54bWysVEtv2zAMvg/YfxB0X+24adcGdYogRYYB&#10;XVu0HXpWZCn2IImapLz260fJj6RrscOwHBTRJD+Sn0heXe+0IhvhfAOmpKOTnBJhOFSNWZX0+/Pi&#10;0wUlPjBTMQVGlHQvPL2efvxwtbUTUUANqhKOIIjxk60taR2CnWSZ57XQzJ+AFQaVEpxmAUW3yirH&#10;toiuVVbk+Xm2BVdZB1x4j19vWiWdJnwpBQ/3UnoRiCop5hbS6dK5jGc2vWKTlWO2bniXBvuHLDRr&#10;DAYdoG5YYGTtmjdQuuEOPMhwwkFnIGXDRaoBqxnlf1TzVDMrUi1IjrcDTf7/wfK7zYMjTVXS4vOI&#10;EsM0PtIj0sbMSokJeYS1qURF5uAMvjKJVsjZ1voJuj7ZB9dJHq+RgJ10Ov5jaWSXeN4PPItdIBw/&#10;FqfFuDinhKPq4qI4P4uQ2cHXOh++CNAkXkrqYg4xp8Qw29z6kKiuunRZ9YMSqRU+3IYpcnpZjE87&#10;xM4YsXvM6OlBNdWiUSoJbr
WcK0fQtaQL/OWpK9DllZkybz1jk4rBd7kapQTVWn+DqsW7PMsPcL15&#10;qvUIHENF9CyS2tKYbmGvRIypzKOQ+ESRuBQgDcchLuNcmNDG9jWrRBs6Rh4qeRU6AUZkiRQM2B1A&#10;b9mC9Njt+3T20VWk2Rqc878l1joPHikymDA468aAew9AYVVd5Na+J6mlJrK0hGqPDeygnWxv+aLB&#10;prllPjwwhx2BQ4/rKdzjIRVsSwrdjZIa3K/3vkd7nDDUUrLF1VBS/3PNnKBEfTU4e5ej8TjukiSM&#10;zz4XKLhjzfJYY9Z6DthaOF2YXbpG+6D6q3SgX3CLzWJUVDHDMXZJeXC9MA/tysI9yMVslsxwf1gW&#10;bs2T5RE8shp7/Hn3wpztJifgxN1Bv0bYJI1Dy+jBNnoamK0DyCZE5YHXTsDdg7dXy+1YTlaHbT79&#10;DQAA//8DAFBLAwQUAAYACAAAACEAxK9X+9kAAAAEAQAADwAAAGRycy9kb3ducmV2LnhtbEyOQU+D&#10;QBBG7yb+h82YeGuXVmgMsjTGxoMHE1uNXgd2BCI7i+zS4r93PNnj5Ht584rt7Hp1pDF0ng2slgko&#10;4trbjhsDb6+Pi1tQISJb7D2TgR8KsC0vLwrMrT/xno6H2CiRcMjRQBvjkGsd6pYchqUfiGX79KPD&#10;KOfYaDviSeSu1+sk2WiHHcuHFgd6aKn+OkxOLO/fqw+7n/jJheeueamGXbbLjLm+mu/vQEWa4z8M&#10;f/mSDqU0VX5iG1RvYLEW0EAGSsabTQqqEihNQZeFPo8vfwEAAP//AwBQSwECLQAUAAYACAAAACEA&#10;toM4kv4AAADhAQAAEwAAAAAAAAAAAAAAAAAAAAAAW0NvbnRlbnRfVHlwZXNdLnhtbFBLAQItABQA&#10;BgAIAAAAIQA4/SH/1gAAAJQBAAALAAAAAAAAAAAAAAAAAC8BAABfcmVscy8ucmVsc1BLAQItABQA&#10;BgAIAAAAIQA4fUuVzAIAABUGAAAOAAAAAAAAAAAAAAAAAC4CAABkcnMvZTJvRG9jLnhtbFBLAQIt&#10;ABQABgAIAAAAIQDEr1f72QAAAAQBAAAPAAAAAAAAAAAAAAAAACYFAABkcnMvZG93bnJldi54bWxQ&#10;SwUGAAAAAAQABADzAAAALAYAAAAA&#10;" fillcolor="yellow" strokecolor="#f2f2f2 [3052]" strokeweight="1pt">
<v:stroke joinstyle="miter"/>
</v:roundrect>
</w:pict>
</mc:Fallback>
</mc:AlternateContent>
</w:r>
</w:p>
</w:tc>
</code></pre>
<p>I'm looking for the <code>a:solidFill</code> element. As is obvious from the output above, there are only two of them, but when I try to get that element using <code>xpath</code>, I get back many more matches.</p>
<pre><code>solidFills = target_cell._tc.xpath('//a:solidFill')
print(len(solidFills)) #86
</code></pre>
<p>What is causing this?</p>
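<p>For context, in XPath a path beginning with <code>//</code> is absolute: it searches from the document root no matter which element <code>xpath()</code> is called on, so matches from the entire document come back. A relative search scoped to the cell, <code>target_cell._tc.xpath('.//a:solidFill')</code>, should return only the nested elements. A self-contained lxml illustration (with a made-up namespace):</p>

```python
from lxml import etree

# Two solidFill elements: one inside the "cell", one outside it.
root = etree.fromstring(
    '<r xmlns:a="urn:a"><c><a:solidFill/></c><a:solidFill/></r>'
)
cell = root.find("c")
ns = {"a": "urn:a"}

# '//' starts at the document root, so it sees both elements:
print(len(cell.xpath("//a:solidFill", namespaces=ns)))   # 2
# './/' is relative to the context node, so it sees only the nested one:
print(len(cell.xpath(".//a:solidFill", namespaces=ns)))  # 1
```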
|
<python><xpath><lxml><python-docx>
|
2023-12-23 12:52:16
| 0
| 2,093
|
wiki
|
77,707,610
| 102,315
|
Collapse edges in NetworkX
|
<p>Is this really the best way to collapse edges in NetworkX?</p>
<pre class="lang-py prettyprint-override"><code>n = list(G.neighbors(node))
new_weight = G[node][n[0]]['weight'] + G[node][n[1]]['weight']
G.add_edge(n[0], n[1], weight=new_weight)
G.remove_edges_from(G[node])
G.remove_node(node)
</code></pre>
<p>It's very awkward. Surely there's a better way?</p>
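<p>One slightly tidier sketch (assuming an undirected graph and that <code>node</code> has exactly two neighbors): <code>remove_node</code> already removes the incident edges, so the separate <code>remove_edges_from</code> call is unnecessary.</p>

```python
import networkx as nx

def collapse_degree2_node(G, node):
    # Replace a degree-2 node by a direct edge between its neighbors,
    # weighted by the sum of the two removed edge weights.
    u, v = G.neighbors(node)
    new_weight = G[node][u]["weight"] + G[node][v]["weight"]
    G.remove_node(node)  # also drops both incident edges
    G.add_edge(u, v, weight=new_weight)

G = nx.Graph()
G.add_edge("a", "b", weight=2)
G.add_edge("b", "c", weight=3)
collapse_degree2_node(G, "b")
print(G["a"]["c"]["weight"])  # 5
```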
|
<python><networkx>
|
2023-12-23 12:20:38
| 1
| 4,061
|
Alper
|
77,707,524
| 1,473,517
|
How to remove redundancy when computing sums for many rings
|
<p>I have this code to compute the sum of the values in a matrix that are closer than some distance but further away than another. Here is the code with some example data:</p>
<pre><code>square = [[ 3, 0, 1, 3, -1, 1, 1, 3, -2, -1],
[ 3, -1, -1, 1, 0, -1, 2, 1, -2, 0],
[ 2, 2, -2, 0, 1, -3, 0, -2, 2, 1],
[ 0, -3, -3, -1, -1, 3, -2, 0, 0, 3],
[ 2, 2, 3, 2, -1, 0, 3, 0, -3, -1],
[ 1, -1, 3, 1, -3, 3, -2, 0, -3, 0],
[ 2, -2, -2, -3, -2, 1, -2, 0, 0, 3],
[ 0, 3, 0, 1, 3, -1, 2, -3, 0, -2],
[ 0, -2, 2, 2, 2, -2, 0, 2, 1, 3],
[-2, -2, 0, -2, -2, 2, 0, 2, 3, 3]]
def enumerate_matrix(matrix):
"""
Enumerate the elements in the matrix.
"""
for x, row in enumerate(matrix):
for y, value in enumerate(row):
yield x, y, value
def sum_of_values(matrix, d):
"""
Calculate the sum of values based on specified conditions.
"""
total_sum = 0
for x, y, v in enumerate_matrix(matrix):
U = x * x + x + y * y + y + 1
if d * d * 2 < U < (d + 1) ** 2 * 2:
total_sum += v
return total_sum
</code></pre>
<p>For this case, I want to compute sum_of_values(square, x) for x in [0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5]. This is fast enough but I also want to do it for much larger matrices and the code is then doing a lot of redundant computation. How can I remove this redundancy?</p>
<p>For example:</p>
<pre><code>import numpy as np
square = np.random.randint(-3, 4, size=(1000, 1000))
for i in range(1000):
result = sum_of_values(square, i + 0.5)
print(f"Sum of values: {result} {i}")
</code></pre>
<p>This is too slow as I will need to perform this calculation for thousands of different matrices. How can the redundant calculations in my code be removed?</p>
<p>The key problem, I think, is that <code>enumerate_matrix</code> should only look at cells in the matrix that are likely to be at the right distance, instead of repeatedly rechecking all the cells in the matrix.</p>
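<p>One way to remove the redundancy (a sketch, assuming the same <code>U = x*x + x + y*y + y + 1</code> ring measure as above): compute each cell's ring index once, then accumulate every ring sum in a single pass with <code>np.bincount</code>, instead of rescanning the whole matrix per ring:</p>

```python
import numpy as np

def ring_sums(matrix):
    # ring_sums(m)[i] should equal sum_of_values(m, i + 0.5).
    m = np.asarray(matrix, dtype=float)
    x, y = np.indices(m.shape)
    U = x * x + x + y * y + y + 1
    # The condition d*d*2 < U < (d+1)**2*2 with d = i + 0.5 means
    # i < sqrt(U / 2) - 0.5 < i + 1, so each cell's ring index is:
    ring = np.floor(np.sqrt(U / 2.0) - 0.5).astype(int)
    return np.bincount(ring.ravel(), weights=m.ravel())

square = np.random.randint(-3, 4, size=(1000, 1000))
all_sums = ring_sums(square)  # one pass instead of 1000 full scans
```

<p>Since <code>U</code> is always an integer and the ring boundaries <code>2*(i+0.5)**2</code> and <code>2*(i+1.5)**2</code> never are, every cell falls in exactly one ring and the strict inequalities are never hit.</p>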
<hr />
<h1>Timings</h1>
<p>For a 400 by 400 matrix my code takes approx 26 seconds.</p>
<pre><code>from tqdm import tqdm

def calc_values(matrix, n):
    scores = []
    for i in tqdm(range(n)):
        result = sum_of_values(matrix, i + 0.5)
        scores.append(result)
    return scores
n = 400
square = np.random.randint(-3, 4, size=(n, n))
%timeit calc_values(square, n)
</code></pre>
<ul>
<li>RomanPerekhrest's code takes approx 119ms even including making the U_arrays matrix.</li>
<li>Reinderien's code takes approx 149ms.</li>
</ul>
|
<python><algorithm><optimization>
|
2023-12-23 11:50:58
| 3
| 21,513
|
Simd
|