| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,711,482
| 4,162,811
|
pyinstaller won't package python-docx
|
<p><strong>Goal</strong>
I want to package a simple Python app (let's call it <em>wordy</em>) that creates a basic Word document using the python-docx library, into a <em><strong>single</strong></em> .exe file.</p>
<p><strong>Setup</strong>
PyCharm project with Poetry as the interpreter. python-docx was installed after selecting it in Settings > Project > Python Interpreter.</p>
<p>using <code>pip show python-docx</code>, I found docx here:</p>
<pre><code>C:\Users\myusername\AppData\Local\pypoetry\Cache\virtualenvs\wordy-nOisIA6h-py3.10\Lib\site-packages\docx
</code></pre>
<p><strong>Problem</strong>
The program works fine when run in PyCharm.
But the executable created in the /dist folder throws an error when run from the command line:
<code>ModuleNotFoundError: No module named 'docx'</code></p>
<p><em><strong>What I tried</strong></em>
The commands below were run from the PyCharm terminal, inside the virtualenv:</p>
<pre><code>(docxpoc-nOisIA6h-py3.10) PS C:\wordy>
</code></pre>
<pre><code>pyinstaller --onefile main.py --name "wordy1.exe"
pyinstaller --onefile main.py --name "wordy2.exe" --hidden-import docx
pyinstaller --onefile main.py --name "wordy3.exe" --additional-hooks-dir myhooks.py
pyinstaller --onefile main.py --name "wordy4.exe" --collect-data "docx"
pyi-makespec --onefile main.py
pyinstaller .\main.spec
</code></pre>
<p>Only when I use the .spec file do I get something other than the above error msg. I get:</p>
<pre><code> File "main.py", line 2, in <module>
File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module
File "wordy.py", line 1, in <module>
File "C:\Users\myusername\AppData\Local\Temp\_MEI112442\docx\__init__.py", line 7, in <module>
from docx.image.bmp import Bmp
File "C:\Users\myusername\AppData\Local\Temp\_MEI112442\docx\image.py", line 3, in <module>
from __future__ import annotations
ModuleNotFoundError: No module named '__future__'
</code></pre>
<p>By the way, I don't see <code>C:\Users\myusername\AppData\Local\Temp\_MEI112442</code> at this location.</p>
<p>The only change I made to the generated spec file was to insert a tuple in the datas=[] line.</p>
<pre><code>datas=[("C:/Users/myusername/AppData/Local/pypoetry/Cache/virtualenvs/wordy-nOisIA6h-py3.10/Lib/site-packages/docx/*.*", "docx")]
</code></pre>
<p>I also tried <code>hiddenimports=["docx"]</code> but ended up with the original error msg.</p>
|
<python><pycharm><pyinstaller><docx><python-docx>
|
2024-07-05 12:31:11
| 1
| 349
|
woodduck
|
78,711,339
| 5,980,655
|
Set same scale in legend matplotlib
|
<p>I'm working with geospatial data and I have two pandas dataframes for two different regions; both have a <code>geometry</code> column with the (multi)polygons and a <code>SCORE</code> column with a value for each of the (multi)polygons I want to plot.</p>
<p>For example, this is the plot for the provinces of Spain excluding Canary Islands:</p>
<pre><code>shapefile_prov.plot(column="SCORE", legend=True)
plt.xticks([])
plt.yticks([])
</code></pre>
<p><a href="https://i.sstatic.net/9nIIIhrK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9nIIIhrK.png" alt="enter image description here" /></a></p>
<p>And this is the plot of the dataframe with Canary Island:</p>
<pre><code>shapefile_prov_can.plot(column="SCORE", legend=True)
plt.xticks([])
plt.yticks([])
</code></pre>
<p><a href="https://i.sstatic.net/pz6GL5gf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pz6GL5gf.png" alt="enter image description here" /></a></p>
<p>Is there a way to colour the Canary Islands plot with the same colour scale as the first graph? That is, yellow should correspond to <code>SCORE</code> values roughly above 0.5 and purple to values below roughly 0.325.</p>
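A common way to pin both maps to one colour scale is to compute shared limits from both frames first and pass them as <code>vmin</code>/<code>vmax</code> (real geopandas plot keywords, forwarded to matplotlib). A minimal sketch with made-up score lists standing in for the two `SCORE` columns:

```python
# Stand-ins for shapefile_prov["SCORE"] and shapefile_prov_can["SCORE"]:
scores_peninsula = [0.33, 0.47, 0.61]
scores_canary = [0.30, 0.52]

# Shared colour limits computed over BOTH regions:
vmin = min(scores_peninsula + scores_canary)
vmax = max(scores_peninsula + scores_canary)

# Both plots then use the same mapping from SCORE to colour (sketch):
# shapefile_prov.plot(column="SCORE", legend=True, vmin=vmin, vmax=vmax)
# shapefile_prov_can.plot(column="SCORE", legend=True, vmin=vmin, vmax=vmax)
print(vmin, vmax)  # 0.3 0.61
```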
|
<python><pandas><matplotlib><graphics><geopandas>
|
2024-07-05 11:57:18
| 2
| 1,035
|
Ale
|
78,711,242
| 11,082,866
|
merge columns with same name in pandas
|
<p>I have a dataframe which contains a couple of columns with the same name, i.e.:</p>
<pre><code>items_group quanity items_group quanity
KIT1259 0
KIT1260 0
KIT1261 0
KIT1151 1
KIT1198A 4
KIT1198D 5
KIT1243 29
KIT1249 8
</code></pre>
<p>How do I merge them and have only one column of each?
I tried</p>
<pre><code> df_expanded2 = df_expanded2.stack().dropna().unstack()
</code></pre>
<p>But this sometimes gives the following error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/views/generic/base.py", line 71, in view
return self.dispatch(request, *args, **kwargs)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/rest_framework/views.py", line 505, in dispatch
response = self.handle_exception(exc)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/rest_framework/views.py", line 465, in handle_exception
self.raise_uncaught_exception(exc)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/rest_framework/views.py", line 476, in raise_uncaught_exception
raise exc
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/rest_framework/views.py", line 502, in dispatch
response = handler(request, *args, **kwargs)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend copy/reports/views.py", line 1890, in get
df_expanded2 = df_expanded2.stack().dropna().unstack()
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/pandas/core/series.py", line 4157, in unstack
return unstack(self, level, fill_value)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/pandas/core/reshape/reshape.py", line 491, in unstack
unstacker = _Unstacker(
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/pandas/core/reshape/reshape.py", line 140, in __init__
self._make_selectors()
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/pandas/core/reshape/reshape.py", line 192, in _make_selectors
raise ValueError("Index contains duplicate entries, cannot reshape")
ValueError: Index contains duplicate entries, cannot reshape
</code></pre>
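For what it's worth, one way to end up with a single column of each name without the duplicate-index <code>unstack</code> (a sketch on toy data, assuming the duplicated pairs sit side by side as in the sample above) is to slice the pairs out positionally and stack them vertically:

```python
import pandas as pd

# Toy frame with duplicated column names (values are made up):
df = pd.DataFrame(
    [["KIT1259", 0, "KIT1151", 1],
     ["KIT1260", 0, "KIT1198A", 4]],
    columns=["items_group", "quanity", "items_group", "quanity"],
)

# Slice out each (items_group, quanity) pair and concatenate them vertically;
# this never reshapes on a duplicate index, so it cannot raise the ValueError.
pairs = [df.iloc[:, i:i + 2] for i in range(0, df.shape[1], 2)]
merged = pd.concat(pairs, ignore_index=True)
print(merged)
```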
|
<python><pandas>
|
2024-07-05 11:37:06
| 2
| 2,506
|
Rahul Sharma
|
78,711,237
| 11,049,863
|
django.db.utils.DatabaseError: DPY-4027: no configuration directory to search for tnsnames.ora in my dockerized django application
|
<p>I am trying, so far without success, to connect my Django application to an Oracle database.<br/>
I added the TNS_ADMIN environment variable but the problem persists.<br/>
Here is the content of my tnsnames.ora file:</p>
<pre><code># tnsnames.ora Network Configuration File: C:\app\HP\product\21c\homes\OraDB21Home1\NETWORK\ADMIN\tnsnames.ora
# Generated by Oracle configuration tools.
XE =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = xxx)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = XE)
)
)
LISTENER_XE =
(ADDRESS = (PROTOCOL = TCP)(HOST = xxx)(PORT = 1521))
ORACLR_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
)
(CONNECT_DATA =
(SID = CLRExtProc)
(PRESENTATION = RO)
)
)
</code></pre>
<p>Note that I use SQL Developer.<br/>
My database configuration in Django:</p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.oracle',
'NAME': os.environ.get("DB_NAME"),
'USER': os.environ.get("DB_USER"),
'HOST': os.environ.get("DB_HOST"),
'POSRT': os.environ.get("DB_PORT"),
'PASSWORD': os.environ.get("DB_PASS")
}
}
</code></pre>
<p>docker-compose.yml file:</p>
<pre><code>version: '3'
services:
backend:
container_name: bdr-backend
build:
context: .
command: >
sh -c "python manage.py makemigrations --noinput &&
python manage.py migrate --noinput &&
python manage.py runserver 0.0.0.0:8000"
ports:
- 8000:8000
volumes:
- ./backend:/backend
env_file:
- .env
environment:
- DEBUG=1
- DB_HOST=${DB_HOST}
- DB_NAME=${DB_NAME}
- DB_USER=${DB_USER}
- DB_PASS=${DB_PASS}
depends_on:
- redis
redis:
image: redis:7.0.5-alpine
container_name: redis2
expose:
- 6379
</code></pre>
<p>I use Docker Desktop on Windows 10.<br/>
How can I solve this problem?</p>
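A minimal sketch of one way to sidestep the tnsnames.ora lookup inside a container entirely: put an EZConnect-style DSN (<code>host:port/service</code>) in <code>NAME</code> instead of a TNS alias, so no configuration directory is needed. Host, port, and service values here are hypothetical; note also that the settings above spell the port key as <code>'POSRT'</code>:

```python
import os

# EZConnect DSN in NAME avoids resolving a TNS alias from tnsnames.ora.
# The default values are placeholders for illustration only.
host = os.environ.get("DB_HOST", "oracle-host")
port = os.environ.get("DB_PORT", "1521")
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.oracle",
        "NAME": f"{host}:{port}/XE",
        "USER": os.environ.get("DB_USER"),
        "PASSWORD": os.environ.get("DB_PASS"),
    }
}
print(DATABASES["default"]["NAME"])
```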
|
<python><django><oracle-database><docker>
|
2024-07-05 11:35:57
| 2
| 385
|
leauradmin
|
78,711,171
| 1,469,954
|
Get the most optimal combination of places from a list of places through graph algorithm
|
<p>We have a networking situation with four nodes - say <code>A</code>, <code>B</code>, <code>C</code> and <code>D</code>. The problem: <code>A</code>, <code>B</code>, <code>C</code> and <code>D</code> are not fixed, but each has a bunch of possibilities.</p>
<p>Say <code>A</code> can have values <code>A1</code>, <code>A2</code> up to <code>An</code>.
Same for <code>B</code>: <code>B1</code>, <code>B2</code>, up to <code>Bn</code>.</p>
<p>The same for <code>C</code> and <code>D</code> as well. Each of <code>An</code>, <code>Bn</code>, <code>Cn</code> and <code>Dn</code> has coordinates as properties (a tuple - <code>(21.2134, -32.1189)</code>).</p>
<p>What we are trying to achieve - find the best combination of <code>An</code>, <code>Bn</code>, <code>Cn</code> and <code>Dn</code> which are closest to each other when traversed in that order (<code>A -> B -> C -> D</code>). For example, it can be <code>A11 -> B1 -> C9 -> D4</code> - basically choose one option from each of <code>A</code>, <code>B</code>, <code>C</code> and <code>D</code> from all their respective possible values so that the combination of these values is closest to each other among all such combinations.</p>
<p>We are trying the brute force approach of course, but it is very expensive, since its time complexity is <code>N(A) * N(B) * N(C) * N(D)</code>.</p>
<p>So we are looking for some graph based solution where we can harness a graph by storing all points with coordinates in it and at runtime we can simply pass in all sets of points (all <code>n</code> points for <code>A</code>, all <code>k</code> points for <code>B</code>, all <code>j</code> points for <code>C</code> and all <code>l</code> points for <code>D</code>) to it and it will output the right point for each of those classes (<code>A</code>, <code>B</code>, <code>C</code> and <code>D</code>) for which the route is most optimal.</p>
<p>If the graph needs to be stored in memory, that's fine as well, the quicker the runtime response time the better, memory is not a constraint, and also, whether a route exists between <code>An</code> and <code>Bn</code> is irrelevant here, all we want to do is find out coordinate based proximity (assuming a linear route between each set of points).</p>
<p>Also, the runtime query will usually include only a few of the categories of places: the exhaustive list has A, B, C and D, but a query may ask for a combination of just B and C, or A and D, or A, B and D, etc.</p>
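Since the traversal order is fixed, this is a shortest path through a layered graph, and a Viterbi-style dynamic program brings the cost down from the product of all set sizes to the sum of adjacent-layer products. A sketch with made-up 2-D points, using Euclidean distance as a stand-in for the real metric; because it takes any list of layers, a query over just B and C is the same call with two layers:

```python
import math

def best_chain(layers):
    """Pick one point per layer minimising the total path length in order.

    Cost is sum(|L[i]| * |L[i+1]|) instead of the product of all layer sizes.
    """
    # best[i][j] = (cheapest cost of a path ending at layers[i][j], predecessor)
    best = [[(0.0, None)] * len(layers[0])]
    for i in range(1, len(layers)):
        row = []
        for p in layers[i]:
            cost, pred = min(
                (best[i - 1][k][0] + math.dist(q, p), k)
                for k, q in enumerate(layers[i - 1])
            )
            row.append((cost, pred))
        best.append(row)
    # Backtrack from the cheapest endpoint.
    j = min(range(len(layers[-1])), key=lambda j: best[-1][j][0])
    path = []
    for i in range(len(layers) - 1, -1, -1):
        path.append(layers[i][j])
        j = best[i][j][1]
    return path[::-1]

layers = [
    [(0, 0), (10, 10)],   # candidate A points
    [(1, 0), (9, 9)],     # candidate B points
    [(2, 1), (8, 8)],     # candidate C points
]
print(best_chain(layers))  # [(0, 0), (1, 0), (2, 1)]
```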
|
<python><network-programming><graph><graph-theory>
|
2024-07-05 11:23:45
| 3
| 5,353
|
NedStarkOfWinterfell
|
78,711,030
| 5,221,078
|
How to use Params to set weight of optional property
|
<p>I want to use FactoryBoy to create some fake data with an optional property. I want to be able to override the probability that the property is <code>None</code>.</p>
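A sketch of the underlying logic; the helper name and probabilities are made up, and the commented part only suggests how it might hang off <code>class Params</code> in factory_boy (untested against the library):

```python
import random

def maybe_none(value, none_probability, rng=random):
    # Return None with the given probability, otherwise the value.
    return None if rng.random() < none_probability else value

# Hypothetical factory_boy wiring (untested sketch):
# class PersonFactory(factory.Factory):
#     class Params:
#         none_probability = 0.5
#     nickname = factory.LazyAttribute(
#         lambda o: maybe_none("nick", o.none_probability))

print(maybe_none("nick", 0.0))  # nick
print(maybe_none("nick", 1.0))  # None
```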
|
<python><pytest><factory-boy>
|
2024-07-05 10:49:42
| 1
| 1,378
|
Greg Brown
|
78,710,901
| 10,855,529
|
Follow sort after a group_by in polars
|
<pre class="lang-py prettyprint-override"><code>import polars as pl
# Sample data
data = {
'Group': ['A', 'A', 'B', 'B', 'C', 'C'],
'Value': [10, 20, 15, 25, 5, 30],
'OtherColumn': [100, 200, 150, 250, 50, 300]
}
# Create DataFrame
df = pl.DataFrame(data)
# Group by 'Group' and sort within each group by 'Value'
sorted_df = df.group_by('Group').map_groups(lambda group_df: group_df.sort('Value'))
# Display the sorted DataFrame
print(sorted_df)
</code></pre>
<p>is there a native polars way to do a sort within the groups after a <code>group_by</code> without using <code>map_groups</code>?
the alternative approach I know is to sort specifying multiple columns, but I would like to first <code>group_by</code>, and then do the sort.</p>
|
<python><group-by><python-polars>
|
2024-07-05 10:18:04
| 1
| 3,833
|
apostofes
|
78,710,691
| 3,367,091
|
Python object instance attribute not same as class, but still?
|
<p>This question is about the built-in <code>id</code> function in Python and whether a class method is the same object for the class itself and for any instances created from that class.</p>
<p>Sample code:</p>
<pre class="lang-py prettyprint-override"><code>class Person:
def __init__(self, name: str) -> None:
self.name = name
def name_as_upper(self) -> str:
return self.name.upper()
p = Person("john")
p2 = Person("sarah")
print(
id(p.name_as_upper), id(p2.name_as_upper), id(Person.name_as_upper)
) # first two are the same, last one different
print(p.name_as_upper()) # JOHN
print(Person.name_as_upper(p)) # JOHN
Person.name_as_upper = (
lambda self: self.name.upper() + " - suffix"
) # "point" name_as_upper method object to different method
print(p.name_as_upper()) # JOHN - suffix
print(Person.name_as_upper(p)) # JOHN - suffix
</code></pre>
<p>In the above example, the ids printed by the calls <code>id(p.name_as_upper)</code> and <code>id(p2.name_as_upper)</code> are identical, but the call <code>id(Person.name_as_upper)</code> is different.</p>
<p>I was expecting all of the ids to be identical. Is the method object referenced by the name <code>Person.name_as_upper</code> really a different one?</p>
<p>I thought a call to a method on an object instance was translated into a call to the method object belonging to the class, i.e.:</p>
<pre class="lang-py prettyprint-override"><code>p.name_as_upper()
# Would be translated / equivalent to
Person.name_as_upper(p)
</code></pre>
<p>When re-assigning <code>Person.name_as_upper</code> to a new method (the lambda expression in the sample code above) the change is seen also by the instance <code>p</code>.
So <code>p.name_is_upper</code> still is "the same" as <code>Person.name_as_upper</code>.</p>
<p>Thankful for any clarifications to my (lack of) understanding.</p>
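What the matching ids actually show is CPython reusing a freed address: each access of <code>p.name_as_upper</code> builds a new bound-method object, and <code>id(...)</code> is taken on a temporary that is garbage-collected right away, so two temporaries can land at the same address. Holding live references makes the distinction visible (sketch):

```python
class Person:
    def __init__(self, name: str) -> None:
        self.name = name

    def name_as_upper(self) -> str:
        return self.name.upper()

p = Person("john")

m1 = p.name_as_upper   # attribute access creates a bound-method object...
m2 = p.name_as_upper   # ...and a second access creates another one

print(m1 is m2)                             # False: two distinct wrappers
print(m1.__func__ is Person.name_as_upper)  # True: both wrap one function
```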
|
<python>
|
2024-07-05 09:30:14
| 0
| 2,890
|
jensa
|
78,710,643
| 14,739,428
|
werkzeug Local got unexpected value
|
<p>I may have misunderstood werkzeug.local.</p>
<p>Here is my simplified test code:</p>
<pre><code>from flask import Flask, request
from werkzeug.local import Local
import threading
app = Flask(__name__)
local_data = Local()
@app.route('/')
def index():
if local_data.data:
# I don't know why new request without token will get token and return here.
return local_data.data
local_data.data = request.headers.get('token')
return local_data.data
if __name__ == "__main__":
app.run()
</code></pre>
<p>In fact, my logic is that when a request arrives, it first performs a get. If it doesn't retrieve a value from <code>local_data.data</code>, it sets a token to <code>local_data.data</code> and returns it. However, I noticed that under concurrent conditions, a new request coming in would get content it shouldn't have access to.</p>
<p>I expected that only the first request would carry a token, and that other requests would return None because there is nothing in their thread-local storage.</p>
<p>However, over repeated observations, I found that many requests successfully retrieved the value stored in <code>local_data.data</code>, which means <code>local_data</code> is not isolating them here.</p>
<p>I don't know why, and GPT says my code should have no mistakes...</p>
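A stdlib sketch of the likely cause: werkzeug's <code>Local</code> (like <code>threading.local</code> below) is scoped per worker thread, not per request, and servers reuse threads, so a value set while serving one request is still there when the same thread serves the next one:

```python
import threading

local_data = threading.local()

def fake_request(token):
    # The view's logic: return a stored value if present, else store the token.
    if getattr(local_data, "data", None) is not None:
        return local_data.data
    local_data.data = token
    return local_data.data

results = []

def worker():
    # The SAME thread serves two "requests": state leaks from one to the next.
    results.append(fake_request("secret-token"))
    results.append(fake_request(None))

t = threading.Thread(target=worker)
t.start()
t.join()
print(results)  # ['secret-token', 'secret-token']
```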
|
<python><gunicorn><werkzeug>
|
2024-07-05 09:16:16
| 0
| 301
|
william
|
78,710,603
| 10,548,486
|
CosmosDB with PyMongo replaces the array field in document with literal array filter subdocument
|
<p>I'm using CosmosDB on Azure with the MongoDB API. In my Flask service I use pymongo.
My collection consists of documents similar to this:</p>
<pre class="lang-json prettyprint-override"><code>{
"file_name": "foo",
"date": "2024-05-07",
"count": 1,
"deliveries": [
{
...,
"events": [
...
],
"type": "NG"
},
{
...,
"events": [
...
],
"type": "Legacy"
}
],
"snapshot_timestamp": 1234567
}
</code></pre>
<p>I have a set of functions (basically simple queries) that update the file. An example one would be something like this:</p>
<pre class="lang-python prettyprint-override"><code>collection.update_one(
{
"file_name": ...,
"snapshot_timestamp": ...,
},
{
"$set": {
"deliveries.$[outer].type": "NG",
"deliveries.$[outer].foo": "bar",
"count": 5
},
"$push": {
"deliveries.$[outer].some_array": "some_value"
}
},
upsert=True,
array_filters=[
{"outer.x": "y"}
]
)
</code></pre>
<p>Sometimes I see a corrupted document that looks like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"file_name": "foo",
"date": "2024-05-07",
"count": 1,
"deliveries": {
"$[outer]": {
...,
"events": [
...
],
"type": "NG"
},
},
"snapshot_timestamp": 1234567
}
</code></pre>
<p>I'm really trying to find the source of this, but there is no update operation where I use a bare <code>"$set": {"deliveries.$[outer]"}</code>. This happens extremely rarely, and unfortunately I can't find any trace or path that causes it. Is there any way the <code>update()</code> operation can cause this behavior?</p>
|
<python><mongodb><azure><azure-cosmosdb><pymongo>
|
2024-07-05 09:07:39
| 0
| 732
|
777moneymaker
|
78,710,457
| 9,032,335
|
List all available dataset names contained in a huggingface datasets dataset
|
<p>I want to know which datasets are included in e.g. this collection of huggingface datasets:
<a href="https://huggingface.co/datasets/autogluon/chronos_datasets" rel="nofollow noreferrer">https://huggingface.co/datasets/autogluon/chronos_datasets</a></p>
<p>"m4_daily" and "weatherbench_daily" are mentioned explicitly, but there should be more.</p>
<p>I am not interested in a list of all such collections.</p>
<p>I can get the list from the error message if I leave the <code>name</code> parameter unspecified:</p>
<pre><code>ds = datasets.load_dataset("autogluon/chronos_datasets", split="train") # error with list
# ds = datasets.load_dataset("autogluon/chronos_datasets", "m4_daily", split="train") # no error
</code></pre>
<p>How do I retrieve the list of names properly?</p>
|
<python><huggingface-datasets>
|
2024-07-05 08:32:44
| 1
| 723
|
ivegotaquestion
|
78,710,413
| 6,631,639
|
plotly: AttributeError: 'NoneType' object has no attribute 'constructor'
|
<p>Using Plotly in Python I ran into the following rather cryptic error message; below is a minimal reproducible example. The original plot was a lot more complex, so it took me longer than I'd like to admit to debug this.</p>
<pre class="lang-py prettyprint-override"><code>import plotly as px
px.scatter(x=[1,2,3], y=[2,3,4], marginal_x="density")
Traceback (most recent call last):
File "/home/wdecoster/repositories/seetea/scripts/calculate-zscores.py", line 44, in <module>
main()
File "/home/wdecoster/repositories/seetea/scripts/calculate-zscores.py", line 20, in main
fig = px.scatter(
^^^^^^^^^^^
File "/net/winky2/winky2/study252-P200_analysis/results/rr/study/hg38s/study252-P200_analysis/workflow_res>
return make_figure(args=locals(), constructor=go.Scatter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/net/winky2/winky2/study252-P200_analysis/results/rr/study/hg38s/study252-P200_analysis/workflow_res>
trace = trace_spec.constructor(name=trace_name)
^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'constructor'
</code></pre>
<p>What is going wrong?</p>
|
<python><plotly>
|
2024-07-05 08:23:59
| 1
| 527
|
Wouter De Coster
|
78,710,347
| 759,352
|
PySpark join fields in JSON to a dataframe
|
<p>I am trying to pull some fields out of a JSON string into a dataframe. I can achieve this by putting each field in a dataframe and then joining all the dataframes, as below. But is there an easier way to do this? This is a simplified example, and in my project I have many more fields to extract.</p>
<pre><code> from pyspark.sql import Row
s = '{"job_id":"123","settings":{"task":[{"taskname":"task1"},{"taskname":"task2"}]}}'
json_object = json.loads(s)
# json_object
job_id_l = [Row(job_id=json_object['job_id'])]
job_id_df = spark.createDataFrame(job_id_l)
# display(job_id_df)
tasknames = []
for t in json_object['settings']["task"]:
tasknames.append(Row(taskname=t["taskname"]))
tasknames_df = spark.createDataFrame(tasknames)
# display(tasknames_df)
job_id_df.crossJoin(tasknames_df).display()
</code></pre>
<p>Result:</p>
<pre><code> job_id taskname
123 task1
123 task2
</code></pre>
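A lighter pattern (plain-Python sketch; in Spark proper, parsing the column with <code>from_json</code> and using <code>explode</code> is the usual equivalent) is to build one row per task up front, carrying the shared fields along, so no cross join is needed:

```python
import json

s = '{"job_id":"123","settings":{"task":[{"taskname":"task1"},{"taskname":"task2"}]}}'
obj = json.loads(s)

# One dict per task, each carrying the shared job_id; feeding this list to
# spark.createDataFrame(rows) would give the same frame without any join.
rows = [
    {"job_id": obj["job_id"], "taskname": t["taskname"]}
    for t in obj["settings"]["task"]
]
print(rows)
```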
|
<python><json><pyspark>
|
2024-07-05 08:05:24
| 1
| 1,757
|
thotwielder
|
78,710,310
| 13,903,626
|
I cannot import monsoon after installing Monsoon
|
<p>I tried to install Monsoon and then use it in a script, but I cannot import the module after installing it.</p>
<p>The Python version is 3.8.1; I also tried it in Python 3.11 and got the same issue.</p>
<p>Install script:</p>
<pre><code>pip install monsoon
</code></pre>
<p>Then</p>
<pre><code>import monsoon.LVPM as LVPM
import monsoon.sampleEngine as sampleEngine
import monsoon.Operations as op
</code></pre>
<p>Running the script, I get the error:</p>
<pre><code>ModuleNotFoundError: No module named 'monsoon'
</code></pre>
<p>What am I missing here, or do I need to install it in another way?</p>
<p><a href="https://i.sstatic.net/eAYHfYnv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAYHfYnv.png" alt="enter image description here" /></a></p>
<p>Update 1:</p>
<p>A plain <code>import monsoon</code> gives the same issue.</p>
<p><a href="https://i.sstatic.net/TMLZQO5J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMLZQO5J.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><python-import>
|
2024-07-05 07:56:18
| 2
| 8,396
|
Vito Liu
|
78,710,295
| 569,711
|
Azure functions in Python - "Duplex option is required" error
|
<p>As part of my data engineering job, I inherited some Azure functions written in Python a while ago. I now have to update them to reflect changed credentials, but keep running into issues.</p>
<p>When deploying, it fires off the "duplex option is required when sending a body" error below.</p>
<p><a href="https://i.sstatic.net/82FgNsdT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82FgNsdT.png" alt="Duplex error message" /></a></p>
<p>I found numerous posts on SO and other sites that recommend adding "duplex": "half" or "duplex: true", hinting at it being related to Node.js somehow (which, to the best of my knowledge, we do not use). I also stumbled across a <a href="https://stackoverflow.com/questions/77462253/getting-error-requestinit-duplex-option-is-required-when-sending-a-body-when">SO thread</a> suggesting the error can be caused by a requested token not coming through in time, and making the call asynchronous.</p>
<p>I'm comfortable with C#, but extremely new to Python, so I'm hoping someone could have a look at the code and let me know what/where I could fix this, as so far my attempts did not yield any result.</p>
<p>Our code:</p>
<pre><code>import logging
import pandas as pd
import azure.functions as func
import time
import io
from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext
from office365.sharepoint.files.file import File
class Office():
'''
Uses the Office365 API and accesses the Growth team SharePoint to extract the content
of a specific file into a pandas dataframe. The extraction datetime is also added to the dataframe
for use in the table.
For now this class only reads in 1 specific file.
For future: Make it so it can take any xlsx file in any sharepoint.
'''
def __init__(self, Secret):
self.status_code = 200
self.site_url = '<%sharepoint site%>'
username = '<%service principal GUID%>'
password = str(Secret)
#Note Points to specific file now. TODO: Update to take any fname from Json.
self.relative_url = '<%link to file%>'
app_principal = {
'client_id': username,
'client_secret': password,
}
try: # setting up connection
self.context_auth = AuthenticationContext(url = self.site_url)
self.context_auth.acquire_token_for_app(client_id = app_principal['client_id'], client_secret = app_principal['client_secret'])
self.ctx = ClientContext(self.site_url, self.context_auth)
self.status_code = 200
except:
self.status_code = 403
def getData(self):
'''
Reads the file content in a byte stream format.
Uses openpyxl to reformat this to be readable for pandas.
'''
if self.status_code != 200:
return None, self.status_code
response = File.open_binary(self.ctx, self.relative_url) # gets a byte stream.
self.status_code = response.status_code
if self.status_code != 200:
return pd.DataFrame({'A' : []}), self.status_code #if error returns empty df.
bytes_file_obj = io.BytesIO()
bytes_file_obj.write(response.content)
bytes_file_obj.seek(0) #set file object to start
df = pd.read_excel(bytes_file_obj, engine = 'openpyxl')
return df, self.status_code
def main(req: func.HttpRequest) -> func.HttpResponse:
now = time.strftime("%Y-%m-%d-%H:%M:%S") # Get extract time.
logging.info('Python HTTP trigger function processed a request.')
ClientSecret = req.params.get('Sharepoint-ClientSecret')
if not ClientSecret:
try:
req_body = req.get_json()
except ValueError:
ClientSecret = '<None>'
pass
else:
ClientSecret = req_body.get('value')
if ClientSecret:
now = time.strftime("%Y-%m-%d-%H:%M:%S") # Get extract time.
Of365 = Office(Secret = ClientSecret)
df, statusCode = Of365.getData()
# Some error handling.
if statusCode == 403:
return func.HttpResponse(f"Authentication with secret key failed.\nStatus Code {statusCode}.", status_code = statusCode)
elif statusCode == 404:
return func.HttpResponse(f"File not found.\nStatus Code {statusCode}.", status_code = statusCode)
elif statusCode != 200:
return func.HttpResponse(f"Something failed.\nStatus Code {statusCode}", status_code = statusCode)
else:
logging.info(
'Success in connecting to Sharepoint API and generating {0} object class'.format(Office.__class__.__name__))
Date_Extracted = [now]*df.shape[0] # Created vector with time.
df ['Date_Extracted'] = Date_Extracted
return func.HttpResponse(df.to_json(orient="table"), status_code = 200)
else:
return func.HttpResponse(
"Bad Request: did not get any data.",
status_code=400
)
</code></pre>
|
<python><azure-functions>
|
2024-07-05 07:54:19
| 1
| 9,729
|
SchmitzIT
|
78,709,252
| 1,983,613
|
Recursive types in Python and difficulties inferring the type of `type(x)(...)`
|
<p>Trying to build recursive types to annotate a nested data structure, I hit the following.</p>
<p>This code is correct according to mypy:</p>
<pre><code>IntType = int | list["IntType"] | tuple["IntType", ...]
StrType = str | list["StrType"] | tuple["StrType", ...]
def int2str(x: IntType) -> StrType:
if isinstance(x, list):
return list(int2str(v) for v in x)
if isinstance(x, tuple):
return tuple(int2str(v) for v in x)
return str(x)
</code></pre>
<p>But not this one, which should be equivalent:</p>
<pre><code>IntType = int | list["IntType"] | tuple["IntType", ...]
StrType = str | list["StrType"] | tuple["StrType", ...]
def bad_int2str(x: IntType) -> StrType:
if isinstance(x, (list, tuple)):
return type(x)(bad_int2str(v) for v in x) # error here
return str(x)
</code></pre>
<p>The error message is</p>
<pre><code>line 6: error: Incompatible return value type (
got "list[int | list[IntType] | tuple[IntType, ...]] | tuple[int | list[IntType] | tuple[IntType, ...], ...]",
expected "str | list[StrType] | tuple[StrType, ...]"
) [return-value]
line 6: error: Generator has incompatible item type
"str | list[StrType] | tuple[StrType, ...]";
expected "int | list[IntType] | tuple[IntType, ...]" [misc]
</code></pre>
<p>I would assume mypy could infer that <code>type(x)</code> is either <code>list</code> or <code>tuple</code>.
Is this a limitation of mypy or is there something fishy with this code?
If it is a limitation, where does it come from?</p>
|
<python><mypy><python-typing>
|
2024-07-05 00:04:20
| 2
| 417
|
Winks
|
78,709,175
| 7,563,454
|
What is the most efficient way to detect when a moving 3D position entered a new area delimited by a fixed scale?
|
<p>Context: I have a CPU based raytracing engine written in Pygame, it works by having rays move 1 unit per loop in the direction of their velocity and detecting any voxel at that integer position. I recently added chunks to boost performance, voxels in each area are stored in a container of a fixed size which is read first. For example: If the chunk size is 16 the chunk at position <code>(0, -16, 32)</code> will store all data from it to position <code>(16, 0, 48)</code>. Valid chunks are stored in a dictionary indexed by their start corner tuple, the end corner can be obtained by adding the size to it. Here's an example of the data structure, in this case the chunks are None since their data and how it's used is irrelevant to my question.</p>
<pre><code>chunks = {
(0, 0, 0): None,
(64, 0, 32): None,
(-96, 48, 16): None,
(-128, -96, 0): None,
}
</code></pre>
<p>I noticed that scanning for chunks is more costly than it could be. Most lag appears to originate from checking the position and snapping it to the chunk size to get the appropriate index: Each time the ray moves I need to check if it's in the area of another chunk and fetch its data. For example if the ray moved from position <code>(-0.5, 0.25, 0)</code> to <code>(0.5, 1.25, 1)</code> given the velocity <code>(1, 1, 1)</code> I now need to get the data of chunk <code>(0, 0, 0)</code>. This is a representation of my ray loop so far:</p>
<pre><code>pos = (0, 0, 0)
chunk_min = (0, 0, 0)
chunk_max = (0, 0, 0)
chunk_size = 16
chunk = None
while True:
if pos[0] < chunk_min[0] or pos[1] < chunk_min[1] or pos[2] < chunk_min[2] or pos[0] > chunk_max[0] or pos[1] > chunk_max[1] or pos[2] > chunk_max[2]:
        chunk_min = ((pos[0] // chunk_size) * chunk_size, (pos[1] // chunk_size) * chunk_size, (pos[2] // chunk_size) * chunk_size)
        chunk_max = (chunk_min[0] + chunk_size, chunk_min[1] + chunk_size, chunk_min[2] + chunk_size)
        chunk = chunks[chunk_min] if chunk_min in chunks else None
# Do things with the data in chunk, advance pos by -1 or +1 on at least one axis, and break out of the loop when tracing is over
</code></pre>
<p>That loop runs dozens of times per pixel and thus thousands of times in total; every check must be very efficient or FPS drops drastically. I get a performance boost by caching the boundaries of the last chunk and checking when the ray's position leaves those bounds, so the chunk is only changed once, but this check is itself costly and snapping the position to the chunk size is even more expensive. How can this be done most efficiently? What are the cheapest operations for detecting that the position has entered a new cubic area?</p>
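One cheap formulation (a sketch with a hypothetical <code>chunks</code> dict): floor-divide each axis once to get the chunk key, cache the previous key, and only touch the dictionary when the key changes; <code>dict.get</code> also folds the membership test and the lookup into a single hash:

```python
chunk_size = 16
chunks = {(0, 0, 0): "data-A", (-16, 0, 0): "data-B"}  # hypothetical chunk data

def chunk_key(pos, size=chunk_size):
    # One floor-division per axis replaces the six-comparison bounds check.
    return (int(pos[0] // size) * size,
            int(pos[1] // size) * size,
            int(pos[2] // size) * size)

last_key, chunk = None, None
for pos in [(-0.5, 0.25, 0.0), (0.5, 1.25, 1.0), (1.5, 2.25, 2.0)]:
    key = chunk_key(pos)
    if key != last_key:              # only refetch when the key changes
        chunk = chunks.get(key)      # one hash: membership test + lookup
        last_key = key
    print(pos, key, chunk)
```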
|
<python><math><3d>
|
2024-07-04 23:08:12
| 0
| 1,161
|
MirceaKitsune
|
78,709,058
| 2,521,423
|
Subtracting pandas series from all elements of another pandas series with a common ID
|
<p>I have a pandas <code>SeriesGroupBy</code> object, call it <code>data</code>. If I print out the elements, it looks like this:</p>
<pre><code><pandas.core.groupby.generic.SeriesGroupBy object at ***>
(1, 0 397.44
1 12.72
2 422.40
Name: value, dtype: float64)
(2, 3 398.88
4 6.48
5 413.52
Name: value, dtype: float64)
(3, 6 398.40
7 68.40
8 18.96
9 56.64
10 406.56
Name: value, dtype: float64)
(4, 11 398.64
12 14.64
13 413.76
Name: value, dtype: float64)
...
</code></pre>
<p>I want to make an equivalent object, where the entries are the cumulative sum of each sublist in the series, minus the first entry of that list. So, for example, the first element would become:</p>
<pre><code>(1, 0 0 #(= 397.44 - 397.44)
1 12.72 #(= 397.44 + 12.72 - 397.44)
2 435.12 #(= 397.44 + 12.72 + 422.40 - 397.44)
</code></pre>
<p>I can get the cumulative sum easily enough using <code>apply</code>:</p>
<p><code>cumulative_sums = data.apply(lambda x: x.cumsum())</code></p>
<p>but when I try to subtract the first element of the list in what I would think of as the intuitive way (<code>lambda x: x.cumsum() - x[0]</code>), I get a <code>KeyError</code>. How can I achieve what I am trying to do?</p>
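<p>A minimal sketch (with made-up group data) of why the <code>KeyError</code> occurs: <code>x[0]</code> is a label-based lookup, and the index label <code>0</code> only exists in the first group; positional access with <code>x.iloc[0]</code> works for every group:</p>

```python
import pandas as pd

# Toy stand-in for the grouped data in the question
df = pd.DataFrame({'group': [1, 1, 1, 2, 2],
                   'value': [397.44, 12.72, 422.40, 398.88, 6.48]})
data = df.groupby('group')['value']

# x[0] looks up the *label* 0, which only the first group contains,
# hence the KeyError; x.iloc[0] is positional and works everywhere
result = data.apply(lambda x: x.cumsum() - x.iloc[0])
print(result)
```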
|
<python><pandas><dataframe><group-by>
|
2024-07-04 22:01:18
| 2
| 1,488
|
KBriggs
|
78,708,866
| 1,725,944
|
Is it possible to read metadata from a python model before creating an instance?
|
<p>I have python models which are created using a particular version of my module repository. As updates and changes are made to the repo, the models are not always backward compatible.</p>
<p>I'm wondering if it's possible to store some sort of header info in the model which can be read before creating the Class instance. This way I would be able to pull the correct repository branch that works with the previously saved model.</p>
<p>First, I have an ABC (Abstract Base Class), which I then build MyModel classes from. Over time, the MyModel Class changes enough that it will not load a previous saved version. I'd like to find an elegant solution for being able to still use that model.</p>
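<p>A hypothetical sketch of one way to do this: write a small JSON header in front of the pickled object, so the version info can be read without unpickling (and therefore without the model class even being importable). The file layout here (4-byte length prefix, JSON header, pickle body) is an assumption for illustration:</p>

```python
import json
import os
import pickle
import tempfile

# Hypothetical layout: 4-byte big-endian header length, JSON header, pickle body
def save_model(path, obj, metadata):
    header = json.dumps(metadata).encode('utf-8')
    with open(path, 'wb') as f:
        f.write(len(header).to_bytes(4, 'big'))
        f.write(header)
        f.write(pickle.dumps(obj))

def read_metadata(path):
    # Reads only the header -- no unpickling, so the class need not exist yet
    with open(path, 'rb') as f:
        n = int.from_bytes(f.read(4), 'big')
        return json.loads(f.read(n))

path = os.path.join(tempfile.gettempdir(), 'model_with_header.bin')
save_model(path, {'weights': [1, 2, 3]}, {'repo_version': '1.4.0'})
print(read_metadata(path))
```

<p>After reading the header you could check out the matching repository branch before unpickling the body.</p>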
|
<python><abc>
|
2024-07-04 20:42:42
| 0
| 481
|
P-Rod
|
78,708,817
| 1,072,352
|
Colab, Jax, and GPU: why does cell execution take 60 seconds when %%timeit says it only takes 70 ms?
|
<p>As the basis for a project on fractals, I'm trying to use GPU computation on Google Colab using the Jax library.</p>
<p>I'm using <a href="https://gist.github.com/jpivarski/da343abd8024834ee8c5aaba691aafc7" rel="nofollow noreferrer">Mandelbrot on all accelerators</a> as a model, and I'm encountering a problem.</p>
<p>When I use the <code>%%timeit</code> command to measure how long it takes to calculate my GPU function (same as in the model notebook), the times are entirely reasonable, and in line with expected results -- <strong>70 to 80 ms</strong>.</p>
<p>But actually <em>running</em> <code>%%timeit</code> takes something like <strong>a full minute</strong>. (By default, it runs the function 7 times in a row and reports the average -- but even that should take less than a second.)</p>
<p>Similarly, when I run the function in a cell and output the results (a 6 megapixel image), it takes around 60 seconds for the cell to finish -- to execute a function that supposedly only takes 70-80 ms.</p>
<p>It seems like something is producing a massive amount of overhead, that also seems to scale with the amount of computation -- e.g. when the function contains 1,000 iterative calculations <code>%%timeit</code> says it takes 71 ms while in reality it takes 60 seconds, but with just 20 iterations <code>%%timeit</code> says it takes 10 ms while in reality it takes about 10 seconds.</p>
<p>I am pasting the code below, but <a href="https://colab.research.google.com/drive/1sdRUYQzgGy_5USda7hxc8NFf4epa2c8R?usp=sharing" rel="nofollow noreferrer">here is a link to the Colab notebook itself</a> -- anyone can make a copy, connect to a "T4 GPU" instance, and run it themselves to see.</p>
<pre><code>import math
import numpy as np
import matplotlib.pyplot as plt
import jax
assert len(jax.devices("gpu")) == 1
def run_jax_kernel(c, fractal):
z = c
for i in range(1000):
z = z**2 + c
diverged = jax.numpy.absolute(z) > 2
diverging_now = diverged & (fractal == 1000)
fractal = jax.numpy.where(diverging_now, i, fractal)
return fractal
run_jax_gpu_kernel = jax.jit(run_jax_kernel, backend="gpu")
def run_jax_gpu(height, width):
mx = -0.69291874321833995150613818345974774914923989808007473759199
my = 0.36963080032727980808623018005116209090839988898368679237704
zw = 4 / 1e3
y, x = jax.numpy.ogrid[(my-zw/2):(my+zw/2):height*1j, (mx-zw/2):(mx+zw/2):width*1j]
c = x + y*1j
fractal = jax.numpy.full(c.shape, 1000, dtype=np.int32)
return np.asarray(run_jax_gpu_kernel(c, fractal).block_until_ready())
</code></pre>
<p>Takes about a minute to produce an image:</p>
<pre><code>fig, ax = plt.subplots(1, 1, figsize=(15, 10))
ax.imshow(run_jax_gpu(2000, 3000));
</code></pre>
<p>Takes about a minute to report that the function only takes 70-80 ms to execute:</p>
<pre><code>%%timeit -o
run_jax_gpu(2000, 3000)
</code></pre>
|
<python><jupyter-notebook><google-colaboratory><jax>
|
2024-07-04 20:25:01
| 1
| 1,375
|
crazygringo
|
78,708,730
| 498,584
|
Django Nested Serializers with Foreign Field
|
<p>I am trying to post this JSON request through Postman:</p>
<pre><code>{ "name":"Someones order",
"date_due": "2024-06-23T15:52:59Z",
"customer":3,
"orderItem":[{
"item":1,
"count":1
}]
}
</code></pre>
<p>I have implemented my models like so.</p>
<pre><code> class Item(models.Model):
name = models.CharField(max_length=255)
price = models.DecimalField(max_digits=6, decimal_places=2)
bakery = models.ForeignKey(Bakery,on_delete=models.CASCADE,related_name='items')
def __str__(self):
return f'{self.name}'
class Order(models.Model):
name = models.CharField(max_length=255)
bakery = models.ForeignKey(Bakery,on_delete=models.CASCADE,related_name='orders')
customer = models.ForeignKey(Customer,on_delete=models.CASCADE,related_name='orders')
date_created= models.DateTimeField(auto_now_add=True)
date_due = models.DateTimeField()
date_updated = models.DateTimeField(auto_now=True)
#order_item = models.OneToOneField(OrderItem, on_delete=models.CASCADE,related_name='item_orderitems')
def __str__(self):
return f'{self.name}'
class OrderItem(models.Model):
bakery = models.ForeignKey(Bakery,on_delete=models.CASCADE,related_name='order_items')
count = models.IntegerField()
item = models.ForeignKey(Item, on_delete=models.CASCADE,related_name='item_orderitems')
order = models.OneToOneField(Order, on_delete=models.CASCADE,related_name='orderitems')
def __str__(self):
return f'{self.name}'
</code></pre>
<p>Serializer is implemented as</p>
<pre><code>class OrderItemCreateSerializer(serializers.ModelSerializer):
class Meta:
model = OrderItem
fields = ['item','count']
class OrderCreateSerializer(serializers.ModelSerializer):
orderItem=OrderItemCreateSerializer(many=True)
class Meta:
model = Order
fields = ("name","date_due","customer","orderItem")
def create(self, validated_data):
print("create")
print(validated_data)
itemid=validated_data.pop("orderItem")
print(itemid)
bakery=self.context['request'].user.bakery
customer=validated_data.pop("customer")
print(customer)
print("validated_data")
print(validated_data)
order=Order.objects.create(bakery=bakery,customer=customer,**validated_data)
print("create 4")
for items in itemid:
print("create 5")
OrderItem.objects.create(bakery=bakery,order=order,**items)
print("create 6")
print("returning")
return order
</code></pre>
<p>When I make the POST request I get:</p>
<p>Got AttributeError when attempting to get a value for field <code>orderItem</code> on serializer <code>OrderCreateSerializer</code>.
The serializer field might be named incorrectly and not match any attribute or key on the <code>Order</code> instance.
Original exception text was: 'Order' object has no attribute 'orderItem'.</p>
<p>I understand that the <code>Order</code> model does not have a field pointing to <code>OrderItem</code>; the relationship is the reverse. But <code>OrderCreateSerializer</code> lists <code>orderItem</code> as a field.</p>
<p>I have implemented the same with a regular serializer instead of a ModelSerializer, thinking that maybe it is complaining because <code>orderItem</code> really is not part of the model.</p>
<p>Most importantly, if you look at my print statements, it prints "returning" and returns, so the error happens after <code>create</code> completes.</p>
<p>Should I implement the create method of the ModelViewSet and implement some of the logic there? Would that help?</p>
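<p>For what it's worth, one common DRF pattern (a sketch, not necessarily the only fix) is to declare the nested field as write-only, so it is consumed when parsing the request but skipped when serializing the created <code>Order</code> back out, which is exactly where the <code>AttributeError</code> is raised:</p>

```python
class OrderCreateSerializer(serializers.ModelSerializer):
    # write_only: consumed by create(), ignored when rendering the response,
    # so DRF never looks for an `orderItem` attribute on the Order instance
    orderItem = OrderItemCreateSerializer(many=True, write_only=True)

    class Meta:
        model = Order
        fields = ("name", "date_due", "customer", "orderItem")
```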
|
<python><django><django-serializer>
|
2024-07-04 19:53:28
| 1
| 1,723
|
Evren Bingøl
|
78,708,667
| 15,452,898
|
Create subset and calculate sums in Python based on a condition
|
<p>I am currently doing some data manipulation procedures and have run into a problem of how to make subsets based on special conditions.</p>
<p>My example (dataframe) is like this:</p>
<pre><code>Name ID Debt DurationOfDelay CD
A ID1 10 15 1
A ID1 15 30 1
A ID2 20 60 2
A ID2 40 60 3
A ID3 20 15 3
A ID3 20 60 3
B ID4 15 30 4
B ID4 30 60 4
B ID5 35 40 3
B ID6 35 0 2
B ID7 80 30 3
B ID7 35 60 2
</code></pre>
<p>My goal is to create one subset (one new dataframe) where, for each unique Name, sums of Debt are returned given a condition:</p>
<p>only IDs with at least two Debt rows having different DurationOfDelay values are taken into account.</p>
<p>Expected result:</p>
<pre><code>Name Debt_Delay_15to_30 Debt_Delay_15to_60 Debt_Delay_30to_60
A 10 20 0
B 0 0 95
</code></pre>
<p>Any help is highly appreciated!</p>
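<p>A sketch of my reading of the expected output (using pandas for illustration, since the example is small): keep only IDs having at least two rows with different delays, take each ID's Debt at its smallest delay, label it with a <code>min</code>-to-<code>max</code> delay bucket, and pivot per Name:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Name': ['A'] * 6 + ['B'] * 6,
    'ID': ['ID1', 'ID1', 'ID2', 'ID2', 'ID3', 'ID3',
           'ID4', 'ID4', 'ID5', 'ID6', 'ID7', 'ID7'],
    'Debt': [10, 15, 20, 40, 20, 20, 15, 30, 35, 35, 80, 35],
    'DurationOfDelay': [15, 30, 60, 60, 15, 60, 30, 60, 40, 0, 30, 60],
})

# condition: keep only IDs having at least two rows with *different* delays
eligible = df[df.groupby('ID')['DurationOfDelay'].transform('nunique') > 1]

# per ID: the delay range plus the Debt of the row with the smallest delay
per_id = (eligible.sort_values('DurationOfDelay')
          .groupby(['Name', 'ID'])
          .agg(lo=('DurationOfDelay', 'min'),
               hi=('DurationOfDelay', 'max'),
               Debt=('Debt', 'first'))
          .reset_index())
per_id['bucket'] = ('Debt_Delay_' + per_id['lo'].astype(str)
                    + 'to_' + per_id['hi'].astype(str))

out = per_id.pivot_table(index='Name', columns='bucket', values='Debt',
                         aggfunc='sum', fill_value=0)
print(out)
```

<p>This reproduces the table in the question; if the intended rule differs (e.g. which Debt of an eligible ID should be summed), only the <code>agg</code> step needs to change.</p>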
|
<python><pyspark><data-manipulation>
|
2024-07-04 19:25:38
| 0
| 333
|
lenpyspanacb
|
78,708,642
| 497,649
|
If multiple Python processes write data into InfluxDB, only parts are written
|
<p>I have multiple Python processes, running on a server (RPI5) via <code>cron</code>, which read data from web APIs and then write the data into a common InfluxDB database - all into the same bucket.</p>
<p>However, some of the data is lost. The code to write into Influx is:</p>
<pre><code>influxdb_client = InfluxDBClient(url=url, token=token, org=org)
...
def f(df):
write_api = influxdb_client.write_api()
...
record = []
for i in range(df.shape[0]):
point = Point(measurement).tag("location", ...).time(...)
for col in list(df.columns):
value = df.loc[i, col]
point = point.field(col, value)
record += [point]
write_api.write(bucket=bucket, org=org, record=record)
...
## Let df be a data.frame with 20-500 rows, and 10-20 columns.
f(df)
</code></pre>
<p>What could be the reason for this issue? Is it a problem with asynchronous vs. synchronous writes?</p>
<p>Thanks</p>
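<p>For context, a hedged sketch of what a synchronous write would look like: the no-argument <code>write_api()</code> defaults to a batching (background) writer, and a short-lived cron process can exit before the batch is flushed, silently dropping points. Forcing synchronous writes, or closing the write API before the process exits, avoids that:</p>

```python
from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

influxdb_client = InfluxDBClient(url=url, token=token, org=org)
# SYNCHRONOUS: each write() call blocks until the points are accepted,
# instead of being queued in a background batch that may never be flushed
write_api = influxdb_client.write_api(write_options=SYNCHRONOUS)
...
write_api.write(bucket=bucket, org=org, record=record)
write_api.close()  # flushes and releases the writer before the process exits
```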
|
<python><cron><synchronous><influxdb-2><data-loss>
|
2024-07-04 19:18:18
| 1
| 640
|
lambruscoAcido
|
78,708,628
| 8,231,936
|
Iterate over divs to get tables using Selenium and Python
|
<p>I'm trying to get info from the following site, using python and selenium:
<a href="https://conveniomarco.mercadopublico.cl/alimentos/marketplace/seller/profile/shop/797095-5800279" rel="nofollow noreferrer">https://conveniomarco.mercadopublico.cl/alimentos/marketplace/seller/profile/shop/797095-5800279</a></p>
<p>It has two sections:
"Condiciones Comerciales" and
"Condiciones Regionales"</p>
<p>I need to put all the tables of the section "Condiciones Regionales" in a dataframe, including the title that is outside of the table.</p>
<p>I can get all the table data using the following code:</p>
<pre><code>html=driver.find_element(By.CSS_SELECTOR, "body").get_attribute('outerHTML')
pd.read_html(html)[0]...
pd.read_html(html)[2]
</code></pre>
<p>But this way I lose the title, like "Region de Coquimbo", because it is not part of the table.</p>
<p>I've been trying to iterate over all the divs to get the title and then the inner table, with the following code:</p>
<pre><code>condiciones_regionales=driver.find_element(By.CSS_SELECTOR, "div.wk_mp_design div.wk-mp-custom-regional div.wk-mp-profile-block div.wk-mp-aboutus-data div.box-regional-info")
condiciones=condiciones_regionales.find_elements(By.CSS_SELECTOR, "div.item")
item2=[]
for item2 in condiciones_regionales:
print(item2.text)
</code></pre>
<p>but I get an error: "'WebElement' object is not iterable"</p>
<p>How can I solve it?</p>
<p>Thanks</p>
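<p>For reference, a sketch of the loop iterating the list returned by <code>find_elements</code> (which is iterable) rather than the single <code>WebElement</code>:</p>

```python
condiciones = condiciones_regionales.find_elements(By.CSS_SELECTOR, "div.item")
for item in condiciones:   # iterate the *list* from find_elements,
    print(item.text)       # not the WebElement from find_element
```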
|
<python><pandas><selenium-webdriver><web-scraping>
|
2024-07-04 19:13:19
| 1
| 517
|
Cristian Avendaño
|
78,708,358
| 9,744,061
|
How to animate the title using FuncAnimation in Python?
|
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
fig = plt.figure()
axis = plt.axes(xlim=(-4, 4), ylim=(-3, 3))
line, = axis.plot([], [],'r+', lw=2)
plt.title('omega = {}')
def init():
line.set_data([], [])
return line,
def animate(i):
y = omega[i]*np.sin(x)
line.set_data(x, y)
return line,
x = np.arange(-4, 4+0.1, 0.1)
domega = 0.25
omega = np.arange(-3, 3+domega, domega)
anim = FuncAnimation(fig, animate, init_func=init, frames=omega.size, interval=120, blit=False)
plt.show()
</code></pre>
<p>I want to plot y = omega*sin(x), where omega goes from -3 to 3 with step size 0.25, and x goes from -4 to 4 with step size 0.1. I can animate it with the above code.</p>
<p>Now, I want to add an animated title "omega = ..." (with the value of omega matching the moving curve), but I don't know how to do it. All I can add is the static <code>plt.title('omega = {}')</code>. So, how do I animate the title with FuncAnimation? Any help is very appreciated.</p>
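<p>For reference, a minimal sketch of one standard approach: with <code>blit=False</code> the whole figure is redrawn each frame, so the title can simply be set inside <code>animate</code> (the <code>Agg</code> backend here is only so the sketch runs headless):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, for the sketch only
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig = plt.figure()
axis = plt.axes(xlim=(-4, 4), ylim=(-3, 3))
line, = axis.plot([], [], 'r+', lw=2)

x = np.arange(-4, 4 + 0.1, 0.1)
omega = np.arange(-3, 3 + 0.25, 0.25)

def animate(i):
    line.set_data(x, omega[i] * np.sin(x))
    axis.set_title(f'omega = {omega[i]:.2f}')  # update the title each frame
    return line,

anim = FuncAnimation(fig, animate, frames=omega.size, interval=120, blit=False)
```

<p>With <code>blit=True</code> the title would additionally have to be returned as an animated artist; with <code>blit=False</code>, as in the question, setting it per frame is enough.</p>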
|
<python><animation><title>
|
2024-07-04 17:39:07
| 1
| 305
|
Ongky Denny Wijaya
|
78,708,244
| 1,115,833
|
pandas rolling mean and stacking previous values
|
<p>I have a pandas dataframe of shape (2000,1) and I would like to compute rolling means but also keep the previous values as a lagged variable.</p>
<p>Assuming the Series:</p>
<pre><code>1
2
3
4
5
6
7
8
9
10
</code></pre>
<p>with a rolling window of 3, I would like:</p>
<pre><code>1,2,3,mean(4,5,6)
4,5,6,mean(7,8,9)
</code></pre>
<p>I am able to use the rolling function:</p>
<pre><code>train_ds=train_ds.var1.rolling(3).mean()
</code></pre>
<p>but this does not produce the above structure for me since I am unable to stack the previous values.</p>
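<p>A minimal sketch of one way to build that structure (assuming, as in the example, non-overlapping windows of 3 values followed by the mean of the next window): slice the underlying array directly rather than using <code>rolling</code>:</p>

```python
import numpy as np
import pandas as pd

s = pd.Series(range(1, 11))  # stand-in for train_ds.var1
w = 3

vals = s.to_numpy()
n_rows = len(vals) // w - 1  # each row needs one full window plus the next one
rows = [list(vals[i * w:(i + 1) * w]) + [vals[(i + 1) * w:(i + 2) * w].mean()]
        for i in range(n_rows)]

out = pd.DataFrame(rows, columns=[f'lag_{j}' for j in range(w)] + ['next_mean'])
print(out)
```

<p>If overlapping (stride-1) windows are wanted instead, <code>numpy.lib.stride_tricks.sliding_window_view</code> is the usual tool; only the slicing step changes.</p>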
|
<python><python-3.x><pandas>
|
2024-07-04 16:57:47
| 2
| 7,096
|
JohnJ
|
78,708,011
| 1,522,066
|
How to fully integrate QSystemTrayIcon with MacOS Sonoma Menu Bar?
|
<p>In Sonoma, the MacOS Menu Bar changes its appearance automagically, depending on the Wallpaper, Accessibility Settings, and perhaps other events.</p>
<p>The MacOS Sonoma menu bar can be black, white, or semi-transparent with the wallpaper color showing through. The OS seems to then pick whether the menu bar icon is black or white, and the app can control if it's greyed out.</p>
<p><a href="https://i.sstatic.net/eAb0TzQv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAb0TzQv.png" alt="A Pyside6 QSystemTrayIcon and Menu" /></a></p>
<p>Developing in Pyside6 (python wrapped Qt), the QSystemTrayIcon allows implementation of an menu bar icon, menu, actions, etc. as traditionally done in Qt, using QAction for menu items and callbacks.</p>
<pre><code> import PySide6.QtWidgets as qtw
import PySide6.QtCore as qtc
import PySide6.QtGui as qtg
# Create the tray
self.tray = qtw.QSystemTrayIcon()
self.tray.setIcon(self.icon)
self.tray.setVisible(True)
# Create the menu
self.menu = qtw.QMenu()
self.action = qtg.QAction("About My App...")
self.menu.addAction(self.action)
# Add a Quit option to the menu.
self.quit = qtg.QAction("Quit")
self.quit.triggered.connect(self.app.quit)
self.menu.addAction(self.quit)
# Add the menu to the tray
self.tray.setContextMenu(self.menu)
</code></pre>
<p>When the OS decides (wallpaper changes, etc.), or the application decides, the MenuBar icon color can change. My simple implementation above doesn't follow icon color changes, as shown below.</p>
<p><a href="https://i.sstatic.net/yuCvLr0w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yuCvLr0w.png" alt="enter image description here" /></a></p>
<p>How can I properly handle menu bar icon color preferences and change events in my PySide6 application?</p>
<p>The <a href="https://doc.qt.io/qtforpython-6/PySide6/QtWidgets/QSystemTrayIcon.html#PySide6.QtWidgets.QSystemTrayIcon.setVisible" rel="nofollow noreferrer">PySide6.7 QSystemTrayIcon docs</a> state that use is possible on MacOS, but there don't seem to be QObject methods to handle icon color configuration by the OS, and color change events.</p>
<p>(The icon shown above is a Google Material Design Icon, which is Apache licensed. )</p>
<p><em>Edit:</em></p>
<p>Most of the answer is solved <a href="https://stackoverflow.com/questions/31837420/create-monochromatic-tray-icon-for-os-x-using-qsystemtrayicon">here</a> by setting the mask flag on the icon itself, before assigning it to the QSystemTrayIcon().</p>
<pre><code># Create the icon
self.icon = qtg.QIcon("img/widgets_96.png")
self.icon.setIsMask(True)
</code></pre>
<p>One part of the question still remains: how to set the icon to grey from my app? (This is shown on the Stage Manager and Focus icons in the images above).</p>
|
<python><pyside6><macos-sonoma>
|
2024-07-04 15:50:54
| 1
| 552
|
Marcus
|
78,707,895
| 7,431,523
|
Type of iterator is Any in zip
|
<p>The following script:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Iterable, Iterator
class A(Iterable):
_list: list[int]
def __init__(self, *args: int) -> None:
self._list = list(args)
def __iter__(self) -> Iterator[int]:
return iter(self._list)
a = A(1, 2, 3)
for i in a:
reveal_type(i)
for s, j in zip("abc", a):
reveal_type(j)
</code></pre>
<p>yields the following mypy output:</p>
<pre><code>$ mypy test.py
test.py:17: note: Revealed type is "builtins.int"
test.py:20: note: Revealed type is "Any"
Success: no issues found in 1 source file
</code></pre>
<p>Why is the type <code>Any</code> when iterating on <code>zip</code>, but not on the object directly?</p>
<p>Note, subclassing <code>class A(Iterable[int])</code> does allow for correct type resolution, but that's not the question here ;)</p>
|
<python><mypy><python-typing>
|
2024-07-04 15:20:24
| 2
| 650
|
hintze
|
78,707,674
| 2,249,312
|
More efficient iteration method
|
<p>I'm looking for a better way of doing the following. The code below works, but it's very slow because I'm working with a large dataset. I also tried to use itertools but couldn't make it work. So here is my very unpythonic starting point.</p>
<p>Helper function:</p>
<pre><code>def signalbin(x,y):
if x > y:
return 1
else:
return -1
</code></pre>
<p>Test Data:</p>
<pre><code>import numpy as np
import pandas as pd

n = 10_000  # sample size; any value illustrates the problem
np.random.seed(0)
df = pd.DataFrame(
    {
        'a': np.random.normal(0, 2.5, n),
        'b': np.random.normal(0, 2.5, n),
    }
)
</code></pre>
<p>My Current code:</p>
<pre><code>df["signal"] = [signalbin(x, y) for x, y in zip(df["a"], df["b"])]
df["signal2"] = df["signal"]
for i, row in df.iterrows():
if i == 0:
continue
if (row['signal2'] != df.at[i-1, "signal"]):
df.at[i, "signal2"] = df.at[i-1, "signal2"]
</code></pre>
<p>In this case the column <code>signal2</code> is the desired result.</p>
<p>So I'm looking for more efficient iteration logic that allows conditions on multiple columns and rows.</p>
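<p>For reference, a sketch of a vectorised equivalent: <code>signalbin</code> becomes <code>np.where</code>, and the carry-forward rule ("a sign change only sticks if the signal already equals its predecessor") becomes a mask plus forward-fill:</p>

```python
import numpy as np
import pandas as pd

np.random.seed(0)
n = 1000
df = pd.DataFrame({'a': np.random.normal(0, 2.5, n),
                   'b': np.random.normal(0, 2.5, n)})

# vectorised signalbin: 1 where a > b, else -1
df['signal'] = np.where(df['a'] > df['b'], 1, -1)

# keep the signal where it equals its predecessor; everywhere else the loop
# carries the previous signal2 forward, which ffill reproduces
keep = df['signal'].eq(df['signal'].shift())
keep.iloc[0] = True                 # the first row always keeps its own signal
df['signal2'] = df['signal'].where(keep).ffill().astype(int)
```

<p>This avoids <code>iterrows</code> entirely; the forward-fill does the sequential carry in one pass at C speed.</p>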
|
<python><pandas><python-itertools>
|
2024-07-04 14:31:29
| 1
| 1,816
|
nik
|
78,707,635
| 701,867
|
Connecting to Delta Lake hosted on MinIO from Dask
|
<p>I'm trying to connect to a DeltaLake table that is stored on MinIO rather than S3. I can do this directly with the <code>deltalake</code> Python package as follows:</p>
<pre class="lang-py prettyprint-override"><code>storage_options = {
"AWS_ENDPOINT_URL": "http://localhost:9000",
"AWS_REGION": "local",
"AWS_ACCESS_KEY_ID": access_key,
"AWS_SECRET_ACCESS_KEY": secret_key,
"AWS_S3_ALLOW_UNSAFE_RENAME": "true",
"AWS_ALLOW_HTTP": "true"
}
dt = DeltaTable("s3a://my_bucket/data", storage_options=storage_options)
df = dt.to_pandas()
</code></pre>
<p>However, I want to read into a Dask dataframe instead, so I'm trying to use <code>dask-deltatable</code>. As it uses <code>deltalake</code> under the hood, I assumed the following would work:</p>
<pre class="lang-py prettyprint-override"><code>ddf = dask_deltatable.read_deltalake("s3a://my_bucket/data", storage_options=storage_options)
</code></pre>
<p>However, it still seems to be trying to connect to AWS:</p>
<pre><code>OSError Traceback (most recent call last)
Cell In[3], line 1
----> 1 ddf = dask_deltatable.read_deltalake("s3a://my_bucket/data", storage_options=storage_options)
File ~/.local/lib/python3.10/site-packages/dask_deltatable/core.py:285, in read_deltalake(path, catalog, database_name, table_name, version, columns, storage_options, datetime, delta_storage_options, **kwargs)
282 raise ValueError("Please Provide Delta Table path")
284 delta_storage_options = utils.maybe_set_aws_credentials(path, delta_storage_options) # type: ignore
--> 285 resultdf = _read_from_filesystem(
286 path=path,
287 version=version,
288 columns=columns,
289 storage_options=storage_options,
290 datetime=datetime,
291 delta_storage_options=delta_storage_options,
292 **kwargs,
293 )
294 return resultdf
File ~/.local/lib/python3.10/site-packages/dask_deltatable/core.py:102, in _read_from_filesystem(path, version, columns, datetime, storage_options, delta_storage_options, **kwargs)
99 delta_storage_options = utils.maybe_set_aws_credentials(path, delta_storage_options) # type: ignore
101 fs, fs_token, _ = get_fs_token_paths(path, storage_options=storage_options)
--> 102 dt = DeltaTable(
103 table_uri=path, version=version, storage_options=delta_storage_options
104 )
105 if datetime is not None:
106 dt.load_as_version(datetime)
File ~/.local/lib/python3.10/site-packages/deltalake/table.py:297, in DeltaTable.__init__(self, table_uri, version, storage_options, without_files, log_buffer_size)
277 """
278 Create the Delta Table from a path with an optional version.
279 Multiple StorageBackends are currently supported: AWS S3, Azure Data Lake Storage Gen2, Google Cloud Storage (GCS) and local URI.
(...)
294
295 """
296 self._storage_options = storage_options
--> 297 self._table = RawDeltaTable(
298 str(table_uri),
299 version=version,
300 storage_options=storage_options,
301 without_files=without_files,
302 log_buffer_size=log_buffer_size,
303 )
OSError: Generic S3 error: Error after 10 retries in 13.6945151s, max_retries:10, retry_timeout:180s, source:error sending request for url (http://169.254.169.254/latest/api/token)
</code></pre>
<p>Has anyone successfully managed to read from Deltalake into a Dask dataframe from MinIO, and if so how?</p>
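<p>For context, a hedged sketch of the distinction suggested by the traceback's signature: <code>storage_options</code> is handed to fsspec/s3fs (which Dask uses to list files), while <code>delta_storage_options</code> is forwarded to the <code>deltalake</code> reader; the MinIO endpoint likely needs to be supplied to both, in each library's own key format (the fsspec key names below are an assumption):</p>

```python
import dask_deltatable

# fsspec/s3fs options (used by Dask to list the files)
storage_options = {
    "key": access_key,
    "secret": secret_key,
    "client_kwargs": {"endpoint_url": "http://localhost:9000",
                      "region_name": "local"},
}

# options forwarded to the deltalake reader (same keys as the working snippet)
delta_storage_options = {
    "AWS_ENDPOINT_URL": "http://localhost:9000",
    "AWS_REGION": "local",
    "AWS_ACCESS_KEY_ID": access_key,
    "AWS_SECRET_ACCESS_KEY": secret_key,
    "AWS_S3_ALLOW_UNSAFE_RENAME": "true",
    "AWS_ALLOW_HTTP": "true",
}

ddf = dask_deltatable.read_deltalake("s3a://my_bucket/data",
                                     storage_options=storage_options,
                                     delta_storage_options=delta_storage_options)
```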
|
<python><dask><delta-lake><minio>
|
2024-07-04 14:23:23
| 1
| 1,258
|
James Baker
|
78,707,455
| 4,701,426
|
Trouble using the right module version installed in pipenv environment
|
<p>I'm trying to run some Python code in VS Code in an environment.</p>
<ol>
<li>Step one. Open cmd from the folder containing the environment, activate the environment, and run VS code
<a href="https://i.sstatic.net/9nqdz6rK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9nqdz6rK.jpg" alt="enter image description here" /></a></li>
</ol>
<p>then:</p>
<p><a href="https://i.sstatic.net/LJVKULdr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LJVKULdr.png" alt="enter image description here" /></a></p>
<ol start="2">
<li>Seems like VS Code has opened in the environment
<a href="https://i.sstatic.net/xV4oLsDi.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xV4oLsDi.jpg" alt="enter image description here" /></a></li>
<li>But when I run some code, it complains that "Numba needs NumPy 1.21 or less" while pip freeze shows that that is indeed the case in that environment:
<a href="https://i.sstatic.net/jXgtW6Fd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jXgtW6Fd.png" alt="enter image description here" /></a></li>
</ol>
<p>So, what am I doing wrong here?</p>
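<p>For reference, a quick diagnostic sketch to confirm which interpreter the code is actually running under (if it is not the pipenv one, VS Code's selected interpreter is the usual suspect):</p>

```python
import sys

print(sys.executable)  # path of the interpreter executing this code
print(sys.prefix)      # for a pipenv environment this should be inside the virtualenv
```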
|
<python><python-3.x><numba><pipenv>
|
2024-07-04 13:46:04
| 1
| 2,151
|
Saeed
|
78,707,393
| 3,100,523
|
Customize populate_obj on Flask AppBuilder view
|
<p>Is there a way to override the population of an item from a form on edit and/or create on Flask AppBuilder?</p>
<p>On Flask-Admin there is a method called <a href="https://flask-admin.readthedocs.io/en/latest/api/mod_model/#flask_admin.model.BaseModelView.update_model" rel="nofollow noreferrer">update_model</a> that can be overridden, and I am looking for a behavior like that. So that I can have access to both the item that is being create (the model's instance) and the form with its data.</p>
<p>For the moment I've only found the method <a href="https://flask-appbuilder.readthedocs.io/en/latest/api.html#flask_appbuilder.baseviews.BaseCRUDView.pre_add" rel="nofollow noreferrer">pre_add</a> (and its equivalent <code>pre_update</code>) but they don't have access to the form and its data. What would be the leanest way to implement a behavior similar to <code>update_model</code> ?</p>
|
<python><flask-appbuilder>
|
2024-07-04 13:29:39
| 1
| 7,968
|
Jundiaius
|
78,707,294
| 2,359,027
|
Python: Logging to a single file when having a ProcessPoolExecutor within Process workers, using QueueHandler
|
<p>I want to add logs to a single file from different Processes, which contain a ProcessPoolExectuor.</p>
<p>My application has the following structure:</p>
<ul>
<li>Main Process</li>
<li>Secondary Process with a pool of processes: ProcessPoolExecutor</li>
</ul>
<p>Both the secondary process and the pool of processes are kept open all the time until the application closes.</p>
<p>The example below mirrors the architecture of my application. I am trying to set up a multiprocessing queue to handle all logs coming from the Process and the ProcessPoolExecutor so that access to the file is safe.</p>
<p>I tried to use the example from the <a href="https://docs.python.org/3/howto/logging-cookbook.html#logging-to-a-single-file-from-multiple-processes" rel="nofollow noreferrer">python documentation</a>, and added the ProcessPoolExecutor, but in the example below, the logs from the main process are not stored in the file. What am I missing?</p>
<pre><code>import logging
import logging.handlers
import multiprocessing
from concurrent.futures import ProcessPoolExecutor
import sys
import traceback
from random import choice
# Arrays used for random selections in this demo
LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL]
NUMBER_OF_WORKERS = 2
NUMBER_OF_POOL_WORKERS = 1
NUMBER_OF_MESSAGES = 1
LOG_SIZE = 1024 # 10 KB
BACKUP_COUNT = 10
LOG_FORMAT = '%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s'
LOG_NAME = 'mptest.log'
def configure_logging_format():
"""
Configure the listener process logging.
"""
root = logging.getLogger()
h = logging.handlers.RotatingFileHandler(LOG_NAME, 'a', LOG_SIZE, BACKUP_COUNT)
f = logging.Formatter(LOG_FORMAT)
h.setLevel(logging.DEBUG)
h.setFormatter(f)
root.addHandler(h)
def main_process_listener(queue: multiprocessing.Queue):
"""
This is the listener process top-level loop: wait for logging events
(LogRecords)on the queue and handle them, quit when you get a None for a
LogRecord.
Parameters
----------
queue: Queue
Queue to get the log records from.
"""
print('Trigger main listener.')
configure_logging_format()
while True:
try:
record = queue.get()
if record is None: # sentinel to tell the listener to quit.
break
logger = logging.getLogger(record.name)
logger.handle(record) # No level or filter logic applied - just do it!
except Exception:
traceback.print_exc(file=sys.stderr)
def broadcast_logs_from_pool_to_main_listener(pool_process_queue, main_process_queue):
"""
This is the listener process top-level loop: wait for logging events from the pool process
and broadcast them to the main listener process.
pool_process_queue: Queue
Pool process queue to get the log records from.
main_process_queue: Queue
Main process queue to put the log records to.
"""
print('Broadcasting logs from pool to main listener.')
# configure_logging_format()
while True:
try:
record = pool_process_queue.get()
if record is None: # sentinel to tell the listener to quit.
break
# TODO: apply level of filtering
main_process_queue.put(record)
except Exception:
traceback.print_exc(file=sys.stderr)
def configure_logging_for_multiprocessing(queue):
"""
The worker configuration is done at the start of the worker process run.
Note that on Windows you can't rely on fork semantics, so each process
will run the logging configuration code when it starts.
"""
print('Configuring logging for multiprocessing...')
h = logging.handlers.QueueHandler(queue) # Handler needed to send records to queue
root = logging.getLogger()
root.addHandler(h)
# send all messages, for demo; no other level or filter logic applied.
root.setLevel(logging.DEBUG)
def pool_process(queue):
configure_logging_for_multiprocessing(queue)
name = multiprocessing.current_process().name
print('Pool process started: %s' % name)
logging.getLogger(name).log(choice(LEVELS), 'message')
print('Pool process finished: %s' % name)
def worker_process(queue):
"""
Worker process that logs messages to the queue.
Parameters
----------
queue: Queue
Queue to log the messages to.
"""
configure_logging_for_multiprocessing(queue)
pool_queue = multiprocessing.Manager().Queue(-1)
lp = multiprocessing.Process(target=broadcast_logs_from_pool_to_main_listener, args=(pool_queue, queue))
lp.start()
# Create ProcessPoolExecutor
executor = ProcessPoolExecutor(max_workers=NUMBER_OF_POOL_WORKERS)
for i in range(NUMBER_OF_POOL_WORKERS):
executor.submit(pool_process, pool_queue)
# Send message
name = multiprocessing.current_process().name
print('Worker started: %s' % name)
logging.getLogger(name).log(choice(LEVELS), 'message')
print('Worker finished: %s' % name)
# Shutdown the executor and the listener
executor.shutdown()
pool_queue.put_nowait(None)
if __name__ == '__main__':
main_logging_queue = multiprocessing.Manager().Queue()
# Start the listener process
lp = multiprocessing.Process(target=main_process_listener, args=(main_logging_queue,))
lp.start()
logging.getLogger('main_1').log(choice(LEVELS), 'main process 1')
# Start the worker processes
workers = []
for i in range(NUMBER_OF_WORKERS):
worker = multiprocessing.Process(target=worker_process, args=(main_logging_queue,))
workers.append(worker)
worker.start()
# Log a message from the main process
logging.getLogger('main_2').log(choice(LEVELS), 'main process 1')
# Wait for all of the workers to finish
for w in workers:
w.join()
main_logging_queue.put_nowait(None)
lp.join()
</code></pre>
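<p>For reference, a minimal sketch of the likely gap: only the child processes call <code>configure_logging_for_multiprocessing</code>, so the main process root logger never gets a <code>QueueHandler</code> and its records never reach the listener. A plain <code>queue.Queue</code> stands in for the <code>Manager().Queue()</code> here so the sketch runs standalone:</p>

```python
import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)  # stand-in for Manager().Queue() in the real program

# In the original code the main process root logger has no QueueHandler,
# so records from 'main_1'/'main_2' never reach the listener. Attach one:
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(logging.handlers.QueueHandler(log_queue))

logging.getLogger('main_1').info('main process message')

record = log_queue.get(timeout=5)  # the record is now on the shared queue
print(record.getMessage())
```

<p>In the real program the same <code>addHandler</code> call would go right after creating <code>main_logging_queue</code> in the <code>__main__</code> block, before the first <code>logging.getLogger('main_1').log(...)</code>.</p>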
|
<python><multiprocessing><python-multiprocessing>
|
2024-07-04 13:04:32
| 1
| 1,133
|
laurapons
|
78,707,235
| 19,648,465
|
How to Implement a Guest Login in Django REST Framework
|
<p>I currently have a backend setup where users can register by providing an email, name, and password. These fields are required for registration. I want to implement a guest login feature where a guest account is deleted when the browser is closed or the guest logs out.</p>
<p>How should I proceed to create a guest account, and what information should I include for the guest account (email, name, password)?</p>
<p>Here is the current code:</p>
<p><strong>models.py</strong></p>
<pre><code>class UserManager(BaseUserManager):
"""Manager for user"""
def create_user(self, email, name, password=None, **extra_fields):
"""Create, save and return a new user."""
if not email:
raise ValueError('User must have an email address.')
user = self.model(email=self.normalize_email(email), name=name, **extra_fields)
user.set_password(password)
user.save(using=self._db)
return user
def create_superuser(self, email, name, password):
"""Create and save a new superuser with given details"""
user = self.create_user(email, name, password)
user.is_superuser = True
user.is_staff = True
user.save(using=self._db)
return user
class User(AbstractBaseUser, PermissionsMixin):
"""Database model for users in the system"""
id = models.AutoField(primary_key=True)
email = models.EmailField(unique=True)
name = models.CharField(max_length=50)
is_active = models.BooleanField(default=True)
is_staff = models.BooleanField(default=False)
is_guest = models.BooleanField(default=False)
phone_number = models.CharField(max_length=20, blank=True, null=True)
avatar_color = models.CharField(max_length=7)
objects = UserManager()
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['name']
def save(self, *args, **kwargs):
if not self.pk:
self.avatar_color = random.choice([
'#FF5733', '#C70039', '#900C3F', '#581845',
'#8E44AD', '#1F618D', '#008000', '#A52A2A', '#000080'
])
super().save(*args, **kwargs)
</code></pre>
<p><strong>serializers.py</strong></p>
<pre><code>class UserSerializer(serializers.ModelSerializer):
"""Serializer for the user object."""
class Meta:
model = get_user_model()
fields = ['id', 'email', 'password', 'name', 'phone_number', 'avatar_color']
extra_kwargs = {
'email': {'required': False},
'password': {'required': False, 'write_only': True, 'style': {'input_type': 'password'}, 'min_length': 6},
'name': {'required': False},
'phone_number': {'required': False},
'avatar_color': {'read_only': True}
}
def create(self, validated_data):
"""Create and return a user with encrypted password."""
return get_user_model().objects.create_user(**validated_data)
def update(self, instance, validated_data):
"""Update and return user."""
password = validated_data.pop('password', None)
user = super().update(instance, validated_data)
if password:
user.set_password(password)
user.save()
return user
class GuestSerializer(serializers.ModelSerializer):
"""Serializer for guests."""
class Meta:
model = get_user_model()
fields = ['id', 'email', 'name', 'is_guest', 'avatar_color']
extra_kwargs = {
'email': {'required': False},
'name': {'required': False},
'is_guest': {'default': True},
'avatar_color': {'read_only': True}
}
def create(self, validated_data):
# Create and return a guest user with default values.
validated_data['email'] = f'guest_{random.randint(100000, 999999)}@example.com'
validated_data['name'] = 'Guest'
validated_data['is_guest'] = True
return get_user_model().objects.create_user(**validated_data)
class AuthTokenSerializer(serializers.Serializer):
"""Serializer for the user auth token."""
email = serializers.EmailField()
password = serializers.CharField(
style={'input_type': 'password'},
trim_whitespace=False,
)
def validate(self, attrs):
"""Validate and authenticate the user."""
email = attrs.get('email')
password = attrs.get('password')
user = authenticate(
request=self.context.get('request'),
username=email,
password=password,
)
if not user:
msg = _('Unable to authenticate with provided credentials.')
raise serializers.ValidationError(msg, code='authorization')
attrs['user'] = user
return attrs
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>class CreateUserView(generics.CreateAPIView):
"""Create a new user in the system."""
serializer_class = UserSerializer
authentication_classes = [TokenAuthentication]
permission_classes = [permissions.UpdateOwnProfile]
filter_backends = (filters.SearchFilter,)
search_fields = ('name', 'email',)
class UserViewSet(viewsets.ModelViewSet):
"""List all users."""
queryset = User.objects.all()
serializer_class = UserSerializer
authentication_classes = [TokenAuthentication]
permission_classes = [IsAuthenticated]
class CreateGuestView(generics.CreateAPIView):
"""Create a new guest."""
serializer_class = GuestSerializer
def perform_create(self, serializer):
user = serializer.save()
token, created = Token.objects.get_or_create(user=user)
print('TOKEN: ', token.key)
return Response({'token': token}, status=status.HTTP_201_CREATED)
class GuestLogoutView(APIView):
"""Logout a guest and delete."""
authentication_classes = [TokenAuthentication]
permission_classes = [IsAuthenticated]
def post(self, request, *args, **kwargs):
if request.user.is_guest:
request.user.auth_token.delete()
request.user.delete()
return Response(status=status.HTTP_200_OK)
else:
return Response(status=status.HTTP_400_BAD_REQUEST)
class CreateTokenView(ObtainAuthToken):
"""Create a new auth token for user."""
serializer_class = AuthTokenSerializer
renderer_classes = api_settings.DEFAULT_RENDERER_CLASSES
class UserLoginApiView(ObtainAuthToken):
"""Handle creating user authentication tokens"""
renderer_classes = api_settings.DEFAULT_RENDERER_CLASSES
class LogoutView(APIView):
"""Logout the authenticated user."""
authentication_classes = [TokenAuthentication]
permission_classes = [IsAuthenticated]
def post(self, request, *args, **kwargs):
logout(request)
return Response(status=status.HTTP_200_OK)
class ManageUserView(generics.RetrieveUpdateAPIView):
"""Manage the authenticated user."""
serializer_class = UserSerializer
authentication_classes = [TokenAuthentication]
permission_classes = [IsAuthenticated]
def get_object(self):
"""Retrieve and return the authenticated user."""
return self.request.user
</code></pre>
<p><strong>urls.py</strong></p>
<pre><code>urlpatterns = [
path('register/', views.CreateUserView.as_view(), name='register'),
path('register/guest/', views.CreateGuestView.as_view(), name='register_guest'),
path('logout/', views.LogoutView.as_view(), name='logout'),
path('logout/guest', views.GuestLogoutView.as_view(), name='logout_guest'),
path('token/', views.CreateTokenView.as_view(), name='token'),
path('me/', views.ManageUserView.as_view(), name='me'),
path('', include(router.urls)),
]
</code></pre>
|
<python><django><authentication><django-rest-framework><user-accounts>
|
2024-07-04 12:51:05
| 1
| 705
|
coder
|
78,706,921
| 13,806,869
|
Why does plotting errors vs actual via PredictionErrorDisplay result in a value error?
|
<p>I have trained a random forest regression model using sklearn, and used it to make some predictions on a test dataset. Naturally there are errors where the values predicted by the model are not the same as the actual values; in this case, the model's Mean Absolute Error and Mean Squared Error are quite high.</p>
<p>I want to visualise the errors, so that I can understand whether the errors are consistently large, or whether there are just a few unusually large errors driving up the averages.</p>
<p>I'm trying to use sklearn's <code>PredictionErrorDisplay</code> class to do this, but the following code returns the error message <em>"ValueError: Unable to coerce to Series, length must be 1"</em>:</p>
<pre><code>errors = PredictionErrorDisplay(y_true = test_targets, y_pred = test_predictions)
errors.plot()
plt.savefig('Output.png')
plt.clf()
</code></pre>
<p>Does anyone know how I can resolve this please? My reading of the error is that I need to convert the object PredictionErrorDisplay creates into a different format, but I'm not sure how to do that, or what the format needs to be exactly.</p>
|
<python><matplotlib><scikit-learn>
|
2024-07-04 11:44:16
| 1
| 521
|
SRJCoding
|
78,706,895
| 3,737,484
|
Send coins/jettons on TON blockchain using Python
|
<p>How can I send coins/Jettons (for example, Notcoin) from one TON blockchain wallet to another using Python?</p>
<p>None of the SDKs has a working example. 😞</p>
|
<python><blockchain><ton>
|
2024-07-04 11:39:09
| 2
| 336
|
veaceslav.kunitki
|
78,706,741
| 9,627,166
|
How should I deal with multiple imports having the same name
|
<p>I have a repository that contains multiple programs:</p>
<pre><code>.
└── Programs
├── program1
│ └── Generic_named.py
└── program2
└── Generic_named.py
</code></pre>
<p>I would like to add testing to this repository.</p>
<p>I have attempted to do it like this:</p>
<pre><code>.
├── Programs
│ ├── program1
│ │ └── Generic_named.py
│ └── program2
│ └── Generic_named.py
└── Tests
├── mock
│ ├── 1
│ │ └── custom_module.py
│ └── 2
│ └── custom_module.py
├── temp
├── test1.py
└── test2.py
</code></pre>
<p>Where <code>temp</code> is a folder to store each program temporarily with mock versions of any required imports that can not be stored directly with the program.</p>
<p>Suppose we use a hello world example like this:</p>
<pre><code>cat Programs/program1/Generic_named.py
import custom_module
def main():
return custom_module.out()
cat Programs/program2/Generic_named.py
import custom_module
def main():
return custom_module.out("Goodbye, World!")
cat Tests/mock/1/custom_module.py
def out():return "Hello, World!"
cat Tests/mock/2/custom_module.py
def out(x):return x
</code></pre>
<p>And I were to use these scripts to test it:</p>
<pre><code>cat Tests/test1.py
import unittest
import os
import sys
import shutil
if os.path.exists('Tests/temp/1'):
shutil.rmtree('Tests/temp/1')
shutil.copytree('Tests/mock/1', 'Tests/temp/1/')
shutil.copyfile('Programs/program1/Generic_named.py', 'Tests/temp/1/Generic_named.py')
sys.path.append('Tests/temp/1')
import Generic_named
sys.path.remove('Tests/temp/1')
class Test(unittest.TestCase):
def test_case1(self):
self.assertEqual(Generic_named.main(), "Hello, World!")
if __name__ == '__main__':
unittest.main()
cat Tests/test2.py
import unittest
import os
import sys
import shutil
if os.path.exists('Tests/temp/2'):
shutil.rmtree('Tests/temp/2')
shutil.copytree('Tests/mock/2', 'Tests/temp/2')
shutil.copyfile('Programs/program2/Generic_named.py', 'Tests/temp/2/Generic_named.py')
sys.path.append('Tests/temp/2')
import Generic_named
sys.path.remove('Tests/temp/2')
class Test(unittest.TestCase):
def test_case1(self):
self.assertEqual(Generic_named.main(), "Goodbye, World!")
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>Both tests pass when run individually:</p>
<pre><code>python3 -m unittest Tests/test1.py
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
python3 -m unittest Tests/test2.py
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
</code></pre>
<p>However, they fail when being run together:</p>
<pre><code>python3 -m unittest discover -p test*.py -s Tests/
.F
======================================================================
FAIL: test_case1 (test2.Test)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/s/Documents/Coding practice/2024/Test Mess/1/Tests/test2.py", line 18, in test_case1
self.assertEqual(Generic_named.main(), "Goodbye, World!")
AssertionError: 'Hello, World!' != 'Goodbye, World!'
- Hello, World!
+ Goodbye, World!
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (failures=1)
</code></pre>
<p>If I try to use a different temporary name for one of the scripts I am trying to test,</p>
<pre><code>cat Tests/test2.py
import unittest
import os
import sys
import shutil
if os.path.exists('Tests/temp/2'):
shutil.rmtree('Tests/temp/2')
shutil.copytree('Tests/mock/2', 'Tests/temp/2')
shutil.copyfile('Programs/program2/Generic_named.py', 'Tests/temp/2/Generic_named1.py')
sys.path.append('Tests/temp/2')
import Generic_named1
sys.path.remove('Tests/temp/2')
class Test(unittest.TestCase):
def test_case1(self):
self.assertEqual(Generic_named1.main(), "Goodbye, World!")
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>Then I get a different error:</p>
<pre><code>python3 -m unittest discover -p test*.py -s Tests/
.E
======================================================================
ERROR: test_case1 (test2.Test)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/s/Documents/Coding practice/2024/Test Mess/2/Tests/test2.py", line 18, in test_case1
self.assertEqual(Generic_named1.main(), "Goodbye, World!")
File "/home/s/Documents/Coding practice/2024/Test Mess/2/Tests/temp/2/Generic_named1.py", line 4, in main
return custom_module.out("Goodbye, World!")
TypeError: out() takes 0 positional arguments but 1 was given
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (errors=1)
</code></pre>
<p>It seems to be importing the same file, despite my using a different file from a different path with the same name. This seems strange, as I've been making sure to undo any changes to the Python path (<code>sys.path</code>) after importing what I wish to test. Is there any way to mock the path? I can't change the name of <code>custom_module</code>, as that would require changing the programs I wish to test.</p>
<p>How should I write, approach, or setup these tests such that they can be tested with unittest discover the same as they can individually?</p>
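The underlying cause is most likely that Python caches imported modules by name in <code>sys.modules</code>: removing the directory from <code>sys.path</code> afterwards does not evict the cached <code>custom_module</code> (or <code>Generic_named</code>), so under discovery the second test silently reuses the first test's copy. A minimal runnable demonstration of the cache and of one possible workaround — evicting the name from <code>sys.modules</code> before re-importing — with the two mock directories created in a temp dir to mirror <code>Tests/mock/1</code> and <code>Tests/mock/2</code>:

```python
import importlib
import os
import sys
import tempfile

# Recreate the layout from the question: two dirs, each with its own custom_module.py.
root = tempfile.mkdtemp()
for sub, body in [("1", "def out(): return 'Hello, World!'"),
                  ("2", "def out(x): return x")]:
    os.makedirs(os.path.join(root, sub))
    with open(os.path.join(root, sub, "custom_module.py"), "w") as f:
        f.write(body + "\n")

def import_from(path, name):
    """Import `name` from `path`, evicting any cached copy first."""
    sys.modules.pop(name, None)   # the crucial step: drop the sys.modules cache
    importlib.invalidate_caches()
    sys.path.insert(0, path)
    try:
        return importlib.import_module(name)
    finally:
        sys.path.remove(path)

m1 = import_from(os.path.join(root, "1"), "custom_module")
m2 = import_from(os.path.join(root, "2"), "custom_module")  # fresh copy, not m1
```

In the real suite, the eviction would need to cover both <code>custom_module</code> and <code>Generic_named</code> — for instance in each test module's <code>setUpModule</code> — so that discovery re-executes the imports with the right directory on the path.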
|
<python><python-3.x><python-unittest>
|
2024-07-04 11:06:54
| 1
| 489
|
Programmer S
|
78,706,728
| 3,299,432
|
Flask app errors on all other endpoints except for root - what am I missing
|
<p>I've developed a Flask app that runs perfectly on my laptop - all endpoints work perfectly and I've also created several HTML pages that interact with the Flask API endpoints.</p>
<p>The next step is to deploy this (permanently) to AWS EC2 (Ubuntu). I've followed this tutorial about creating a Flask app <a href="https://plainenglish.io/community/deploying-a-flask-application-on-ec2-dab5d3" rel="nofollow noreferrer">https://plainenglish.io/community/deploying-a-flask-application-on-ec2-dab5d3</a> and the "hello world" works perfectly.</p>
<p>I replaced app.py with my own code and again the / endpoint works fine - but no other endpoint does, I keep getting 404 errors.</p>
<p>I've been trying to narrow down the cause, but even when I create a <code>/test</code> endpoint it doesn't work (via a browser and the public IP) and returns a 404. If I SSH into the EC2 instance and run</p>
<pre><code>curl HTTP://127.0.0.1:8000/test
</code></pre>
<p>it works - I can see the HTML being returned.</p>
<p>What am I missing?</p>
<p>this is the current app.py:</p>
<pre><code>from flask import Flask, render_template, request, make_response, jsonify
from flask_restful import Api, Resource, reqparse, abort, fields, marshal_with
from sqlalchemy import *
import pandas as pd
app = Flask(__name__)
api = Api(app)
@app.route("/")
def home_page():
return render_template("api_home.html")
@app.route('/test')
def testing_area():
return render_template('test.html')
@app.route('/custom1')
def custom1():
return render_template('custom_interface_1.html')
@app.route('/custom2')
def custom2():
return render_template('custom_interface_2.html')
@app.route('/custom3')
def custom3():
return render_template('custom_interface_3.html')
@app.route('/custom4')
def custom4():
return render_template('custom_interface_4.html')
if __name__ == "__main__":
app.run(debug=True)
</code></pre>
|
<python><flask>
|
2024-07-04 11:04:48
| 1
| 547
|
cmcau
|
78,706,701
| 11,749,309
|
FastAPI/Poetry Docker Deployment on Digital Ocean App Platform/Droplet
|
<p>I have a <a href="https://github.com/humblFINANCE/humblAPI" rel="nofollow noreferrer">FastAPI Project </a> that I am trying to deploy on Digital Ocean's App Platform. I am using a base CPU instance, which is not the issue because I have tested on larger images with more RAM and CPUs.</p>
<p>I have a Dockerfile for my FastAPI app that builds and runs successfully on my local machine (MacBook Pro M3 Pro), but when I attempt to deploy via GitHub/Dockerfile on the App Platform I keep getting two errors:</p>
<p><code>Deploy Error: Container Terminated</code> & <code>Deploy Error: Health Checks</code></p>
<p>There seems to be an issue when the Docker container spins up: my project is not installed properly or referenced correctly, because when I use any modules that import components from the package itself (i.e. <code>from humblapi.core.config import Config</code>) the container seems to hang. The container will deploy successfully if I just have one file as my FastAPI app, but not when importing modules from the package. I have seen an error message that the <code>humblapi</code> package was not installed (when I wasn't using <code>--no-root</code>) and also <code>/app/src/humblapi does not contain any element</code>.</p>
<p>How do I get poetry to install properly or get my container to run?</p>
<h1>Attempts</h1>
<p>I have tried using <code>poetry install --no-root</code> to avoid installing the package itself, which hasn't worked.
I have tried changing the <code>[tool.poetry]</code> to include <code>packages = [{ include = "humblapi", from = "src" },]</code>, this did not work.</p>
<p>I have tried doing the suggested <code>pip install</code> from FastAPI <a href="https://fastapi.tiangolo.com/deployment/docker/#docker-image-with-poetry" rel="nofollow noreferrer">docs</a>, that didn't work.</p>
<p>The deployment is only successful when I do not include any <code>humblapi</code> imports.</p>
<p>I have also deployed a droplet, with the <code>pip-install</code> branch from <code>humblAPI</code> and I am also met with a hanging container. It seems that it only runs on my local machine. What is causing this?</p>
<p>Do you know if I can only use relative imports for Digital Ocean? I am confused because my local Dockerfile container runs perfectly.</p>
<p>The latest attempt to deploy the <code>pip-install</code> branch produced this error. The container starts up, but then collapses and quits.</p>
<p><a href="https://i.sstatic.net/65iSMH0B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65iSMH0B.png" alt="pip-install deployment" /></a></p>
<h2>Resources</h2>
<p>I am using <code>docker build -t humblapi .</code> & <code>docker run --rm -p 8080:8080 humblapi</code></p>
<h3>poetry install --no-root (runs locally)</h3>
<pre><code>FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
g++ \
curl \
pkg-config \
libssl-dev \
libgtk-3-dev \
libsoup2.4-dev \
libjavascriptcoregtk-4.0-dev \
libwebkit2gtk-4.0-dev \
&& rm -rf /var/lib/apt/lists/*
# Install Rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
# Copy project files
COPY pyproject.toml poetry.lock* README.md /app/
# Install dependencies without pywry
RUN pip install poetry && \
poetry config virtualenvs.create false && \
poetry install --no-dev --no-interaction --no-ansi --no-root -v
# Install pywry separately
RUN pip install pywry==0.6.2
# Copy the rest of the application
COPY . /app
EXPOSE 8080
CMD ["fastapi", "run", "src/humblapi/main.py", "--host", "0.0.0.0", "--port", "8080"]
</code></pre>
<h3>pip install multi-stage Dockerfile (runs locally)</h3>
<pre><code># Stage 1: Generate requirements.txt
FROM python:3.11-slim as requirements-stage
WORKDIR /tmp
RUN pip install poetry
COPY pyproject.toml poetry.lock* /tmp/
RUN poetry export -f requirements.txt --output requirements.txt --without-hashes
# Stage 2: Final image
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
g++ \
curl \
pkg-config \
libssl-dev \
libgtk-3-dev \
libsoup2.4-dev \
libjavascriptcoregtk-4.0-dev \
libwebkit2gtk-4.0-dev \
&& rm -rf /var/lib/apt/lists/*
# Install Rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
# Copy requirements.txt from the previous stage
COPY --from=requirements-stage /tmp/requirements.txt /app/requirements.txt
# Install dependencies
RUN pip install --no-cache-dir -r /app/requirements.txt
# Copy the rest of the application
COPY . /app
EXPOSE 8080
CMD ["fastapi", "run", "src/humblapi/main.py", "--host", "0.0.0.0", "--port", "8080"]
</code></pre>
<h1>ARNE SOLUTION ERROR</h1>
<p>I have added</p>
<pre><code>RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
</code></pre>
<p>and I still get this error:</p>
<pre><code> docker build --no-cache -t humblapi:develop-new .
[+] Building 95.3s (18/22) docker:desktop-linux
=> [internal] load build definition from Dockerfile 0.0s
...
48.91 Saved ./wheels/wrapt-1.16.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
48.91 Saved ./wheels/xmltodict-0.13.0-py2.py3-none-any.whl
48.91 Saved ./wheels/yarl-1.9.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
48.91 Saved ./wheels/yfinance-0.2.40-py2.py3-none-any.whl
48.91 Saved ./wheels/zipp-3.19.2-py3-none-any.whl
48.91 Saved ./wheels/zope.interface-6.4.post2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
48.91 Building wheels for collected packages: linearmodels, pandas-ta, peewee, pywry
48.91 Building wheel for linearmodels (pyproject.toml): started
53.91 Building wheel for linearmodels (pyproject.toml): finished with status 'done'
53.91 Created wheel for linearmodels: filename=linearmodels-4.25-cp311-cp311-linux_aarch64.whl size=1921643 sha256=8d8b3807ebee1482ca197d7b04e7e90c11d99971bddf6a9174329f3d6ccec372
53.91 Stored in directory: /root/.cache/pip/wheels/bb/c1/c4/889e7eff2dbf726c41e207c98a23eb8f82e34a1315af53df98
53.91 Building wheel for pandas-ta (pyproject.toml): started
54.02 Building wheel for pandas-ta (pyproject.toml): finished with status 'done'
54.02 Created wheel for pandas-ta: filename=pandas_ta-0.3.14b0-py3-none-any.whl size=218910 sha256=6a9ea14430f4ec8e8dacd1711a16f3ca2d81e5dbbedaf65dfd36e67040a373cb
54.02 Stored in directory: /root/.cache/pip/wheels/7f/33/8b/50b245c5c65433cd8f5cb24ac15d97e5a3db2d41a8b6ae957d
54.02 Building wheel for peewee (pyproject.toml): started
58.27 Building wheel for peewee (pyproject.toml): finished with status 'done'
58.27 Created wheel for peewee: filename=peewee-3.17.6-cp311-cp311-linux_aarch64.whl size=876729 sha256=0c2ca67489e3a434a972ad3a06960aace503f88352ccdaf7b08ffd1c5042739e
58.27 Stored in directory: /root/.cache/pip/wheels/1c/09/7e/9f659fde248ecdc1722a142c1d744271aad3914a0afc191058
58.27 Building wheel for pywry (pyproject.toml): started
62.37 Building wheel for pywry (pyproject.toml): finished with status 'error'
62.37 error: subprocess-exited-with-error
62.37
62.37 × Building wheel for pywry (pyproject.toml) did not run successfully.
62.37 │ exit code: 1
62.37 ╰─> [119 lines of output]
62.37 Running `maturin pep517 build-wheel -i /root/.cache/pypoetry/virtualenvs/humblapi-XDF0t07c-py3.11/bin/python --compatibility off`
62.37 ⚠️ Warning: Please use maturin in pyproject.toml with a version constraint, e.g. `requires = ["maturin>=1.0,<2.0"]`. This will become an error.
62.37 📦 Including license file "/tmp/pip-wheel-r6hbeiv5/pywry_672648706bc1479f91b19d671ff14b04/LICENSE"
62.37 🍹 Building a mixed python/rust project
62.37 🔗 Found bin bindings
62.37 Compiling serde v1.0.189
62.37 Compiling pkg-config v0.3.27
62.37 Compiling smallvec v1.10.0
62.37 Compiling hashbrown v0.14.0
62.37 Compiling equivalent v1.0.1
62.37 Compiling winnow v0.5.15
62.37 Compiling target-lexicon v0.12.11
62.37 Compiling heck v0.4.1
62.37 Compiling version-compare v0.1.1
62.37 Compiling libc v0.2.149
62.37 Compiling autocfg v1.1.0
62.37 Compiling version_check v0.9.4
62.37 Compiling indexmap v2.0.0
62.37 Compiling proc-macro2 v1.0.67
62.37 Compiling unicode-ident v1.0.8
62.37 Compiling cfg-expr v0.15.5
62.37 Compiling cfg-if v1.0.0
62.37 Compiling proc-macro-error-attr v1.0.4
62.37 Compiling futures-core v0.3.28
62.37 Compiling syn v1.0.109
62.37 Compiling proc-macro-error v1.0.4
62.37 Compiling slab v0.4.9
62.37 Compiling bitflags v1.3.2
62.37 Compiling anyhow v1.0.75
62.37 Compiling once_cell v1.17.1
62.37 Compiling futures-task v0.3.28
62.37 Compiling quote v1.0.33
62.37 Compiling pin-project-lite v0.2.13
62.37 Compiling futures-util v0.3.28
62.37 Compiling syn v2.0.36
62.37 Compiling pin-utils v0.1.0
62.37 Compiling thiserror v1.0.48
62.37 Compiling unicode-segmentation v1.10.1
62.37 Compiling futures-channel v0.3.28
62.37 Compiling cfg-expr v0.9.1
62.37 Compiling version-compare v0.0.11
62.37 Compiling semver v1.0.18
62.37 Compiling heck v0.3.3
62.37 Compiling gio v0.15.12
62.37 Compiling futures-io v0.3.28
62.37 Compiling unicase v2.6.0
62.37 Compiling memoffset v0.9.0
62.37 Compiling simd-adler32 v0.3.5
62.37 Compiling rustc_version v0.4.0
62.37 Compiling crc32fast v1.3.2
62.37 Compiling getrandom v0.2.9
62.37 Compiling lock_api v0.4.9
62.37 Compiling field-offset v0.3.6
62.37 Compiling log v0.4.17
62.37 Compiling adler v1.0.2
62.37 Compiling parking_lot_core v0.9.8
62.37 Compiling miniz_oxide v0.7.1
62.37 Compiling gtk v0.15.5
62.37 Compiling x11 v2.21.0
62.37 Compiling num-traits v0.2.15
62.37 Compiling crossbeam-utils v0.8.16
62.37 Compiling scopeguard v1.1.0
62.37 Compiling serde_spanned v0.6.3
62.37 Compiling toml_datetime v0.6.3
62.37 Compiling toml v0.5.11
62.37 Compiling tinyvec_macros v0.1.1
62.37 Compiling tinyvec v1.6.0
62.37 Compiling toml_edit v0.19.15
62.37 Compiling cc v1.0.83
62.37 Compiling system-deps v5.0.0
62.37 Compiling futures-executor v0.3.28
62.37 Compiling flate2 v1.0.26
62.37 Compiling fdeflate v0.3.0
62.37 Compiling toml v0.7.8
62.37 Compiling proc-macro-crate v1.3.1
62.37 Compiling soup2-sys v0.2.0
62.37 Compiling javascriptcore-rs-sys v0.4.0
62.37 Compiling system-deps v6.1.1
62.37 The following warnings were emitted during compilation:
62.37
62.37 warning: soup2-sys@0.2.0: `PKG_CONFIG_ALLOW_SYSTEM_CFLAGS="1" "pkg-config" "--libs" "--cflags" "libsoup-2.4" "libsoup-2.4 >= 2.62"` did not exit successfully: exit status: 1
62.37
62.37 error: failed to run custom build command for `soup2-sys v0.2.0`
62.37
62.37 Caused by:
62.37 process didn't exit successfully: `/tmp/pip-wheel-r6hbeiv5/pywry_672648706bc1479f91b19d671ff14b04/target/release/build/soup2-sys-4953b3cc1053678a/build-script-build` (exit status: 1)
62.37 --- stdout
62.37 cargo:rerun-if-env-changed=LIBSOUP_2.4_NO_PKG_CONFIG
62.37 cargo:rerun-if-env-changed=PKG_CONFIG_aarch64-unknown-linux-gnu
62.37 cargo:rerun-if-env-changed=PKG_CONFIG_aarch64_unknown_linux_gnu
62.37 cargo:rerun-if-env-changed=HOST_PKG_CONFIG
62.37 cargo:rerun-if-env-changed=PKG_CONFIG
62.37 cargo:rerun-if-env-changed=PKG_CONFIG_PATH_aarch64-unknown-linux-gnu
62.37 cargo:rerun-if-env-changed=PKG_CONFIG_PATH_aarch64_unknown_linux_gnu
62.37 cargo:rerun-if-env-changed=HOST_PKG_CONFIG_PATH
62.37 cargo:rerun-if-env-changed=PKG_CONFIG_PATH
62.37 cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_aarch64-unknown-linux-gnu
62.37 cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_aarch64_unknown_linux_gnu
62.37 cargo:rerun-if-env-changed=HOST_PKG_CONFIG_LIBDIR
62.37 cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR
62.37 cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_aarch64-unknown-linux-gnu
62.37 cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_aarch64_unknown_linux_gnu
62.37 cargo:rerun-if-env-changed=HOST_PKG_CONFIG_SYSROOT_DIR
62.37 cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR
62.37 cargo:warning=`PKG_CONFIG_ALLOW_SYSTEM_CFLAGS="1" "pkg-config" "--libs" "--cflags" "libsoup-2.4" "libsoup-2.4 >= 2.62"` did not exit successfully: exit status: 1
62.37 error: could not find system library 'libsoup-2.4' required by the 'soup2-sys' crate
62.37
62.37 --- stderr
62.37 Package libsoup-2.4 was not found in the pkg-config search path.
62.37 Perhaps you should add the directory containing `libsoup-2.4.pc'
62.37 to the PKG_CONFIG_PATH environment variable
62.37 Package 'libsoup-2.4', required by 'virtual:world', not found
62.37 Package 'libsoup-2.4', required by 'virtual:world', not found
62.37
62.37 warning: build failed, waiting for other jobs to finish...
62.37 💥 maturin failed
62.37 Caused by: Failed to build a native library through cargo
62.37 Caused by: Cargo build finished with "exit status: 101": `env -u CARGO "cargo" "rustc" "--message-format" "json-render-diagnostics" "--manifest-path" "/tmp/pip-wheel-r6hbeiv5/pywry_672648706bc1479f91b19d671ff14b04/Cargo.toml" "--release" "--bin" "pywry"`
62.37 Error: command ['maturin', 'pep517', 'build-wheel', '-i', '/root/.cache/pypoetry/virtualenvs/humblapi-XDF0t07c-py3.11/bin/python', '--compatibility', 'off'] returned non-zero exit status 1
62.37 [end of output]
62.37
62.37 note: This error originates from a subprocess, and is likely not a problem with pip.
62.37 ERROR: Failed building wheel for pywry
62.37 Successfully built linearmodels pandas-ta peewee
62.37 Failed to build pywry
62.37 ERROR: Failed to build one or more wheels
62.38
62.38 [notice] A new release of pip is available: 24.1 -> 24.1.2
62.38 [notice] To update, run: pip install --upgrade pip
------
Dockerfile:21
--------------------
19 | RUN poetry build -f wheel
20 | RUN poetry export -f requirements.txt --output requirements.txt --without-hashes
21 | >>> RUN poetry run pip wheel -w wheels -r requirements.txt
22 | RUN mv dist/* wheels
23 |
--------------------
ERROR: failed to solve: process "/bin/sh -c poetry run pip wheel -w wheels -r requirements.txt" did not complete successfully: exit code: 1
View build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/nxmnfopdykkadaxass0212qo2
</code></pre>
<p>If i use this Dockerfile:</p>
<pre><code>FROM python:3.11 as build
WORKDIR /build
# Install build dependencies
RUN apt-get update && apt-get install -y \
curl \
gcc \
g++ \
pkg-config \
libssl-dev \
libgtk-3-dev \
libsoup2.4-dev \
libjavascriptcoregtk-4.0-dev \
libwebkit2gtk-4.0-dev \
&& rm -rf /var/lib/apt/lists/*
# Install poetry and add it to PATH
RUN curl -sSL https://install.python-poetry.org | python3 -
ENV PATH="/root/.local/bin:$PATH"
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
COPY src src
COPY pyproject.toml poetry.lock* README.md ./
RUN poetry build -f wheel
RUN poetry export -f requirements.txt --output requirements.txt --without-hashes
RUN poetry run pip wheel -w wheels -r requirements.txt
RUN mv dist/* wheels
FROM python:3.11-slim
WORKDIR /app
# Copy dependencies from the previous stage
COPY --from=build /build/wheels /app/wheels
# Set up a virtual environment
RUN python -m venv venv
# Install dependencies
# RUN venv/bin/pip install "uvicorn[standard]" # TODO: run `poetry add` on me instead
RUN venv/bin/pip install wheels/* --no-deps --no-index
EXPOSE 8080
ENTRYPOINT ["venv/bin/uvicorn", "humblapi.main:app"]
CMD ["--host", "0.0.0.0", "--port", "8080"]
</code></pre>
<p>then it builds successfully (locally)</p>
|
<python><docker><pip><digital-ocean><python-poetry>
|
2024-07-04 10:59:47
| 1
| 373
|
JJ Fantini
|
78,706,696
| 3,556,110
|
How do I add a prefetch (or similar) to a django queryset, using contents of a JSONField?
|
<h2>Background</h2>
<p>I have a complex analytics application using a conventional relational setup where different entities are updated using a CRUD model.</p>
<p>However, our system is quite event driven, and it's becoming a real misery to manage history, and apply data migrations respecting that history.</p>
<p>So we're moving toward a quasi-noSQL approach in which I'm using JSON to represent data with a linked list of patches representing the history.</p>
<p>As far as data structure goes it's a very elegant solution and will work well with other services. As far as django goes, I'm now struggling to use the ORM.</p>
<p>My database is PostgreSQL.</p>
<h2>Situation</h2>
<p>Previously I had something like (pseudocode):</p>
<pre class="lang-py prettyprint-override"><code>class Environment(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
class Analysis(models.Model):
created = models.DateTime(auto_now_add=True)
environment = models.ForeignKey(
"myapp.Environment",
blank=True,
null=True,
related_name="analyses",
)
everything_else = models.JSONField()
</code></pre>
<p>Whereas I now will have:</p>
<pre class="lang-py prettyprint-override"><code>class Environment(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
class Analysis(models.Model):
created = models.DateTime(auto_now_add=True)
everything = models.JSONField()
</code></pre>
<p>Where <code>everything</code> might be a JSON object looking like...</p>
<pre class="lang-js prettyprint-override"><code>{
"environment": "2bf94e55-7b47-41ad-b81a-6cce59762160",
"other_stuff": "lots of stuff"
}
</code></pre>
<p>Now, I can still totally get the <code>Environment</code> for a given <code>Analysis</code> because I have its ID, but if I do something like this (again pseudocode):</p>
<pre class="lang-py prettyprint-override"><code>for analysis in Analysis.objects.filter(created=today).all():
print(Environment.objects.get(id=analysis.everything['environment'])
</code></pre>
<p>I of course get an <code>N+1</code> error because I'm querying the database for the environment on every iteration.</p>
<p>In a simple script I <em>could</em> put all the ids in a list then do a single query for them all using <code>ids__in</code> filtering... but that doesn't work well with the django ecosystem, which typically relies on a <code>get_queryset</code> method to customise what gets fetched... In particular this will enable the solution to work with</p>
<ul>
<li>API serialisers</li>
<li>admin methods.</li>
</ul>
<h2>Question</h2>
<p>Given a queryset like <code>Analysis.objects.filter(created=today).all()</code>, how can I adjust this queryset to pre-fetch the related environment objects, now that their IDs are in a JSONField?</p>
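<p>For reference, a minimal framework-free sketch of one common workaround — collect the IDs in one pass, fetch all related objects in a single query, and attach them. <code>attach_related</code> and <code>id_of</code> are hypothetical names; in Django the <code>bulk_fetch</code> step would be <code>Environment.objects.in_bulk(ids)</code>:</p>

```python
def attach_related(rows, id_of, bulk_fetch, attr="environment"):
    """Attach related objects looked up by ID, with one bulk query instead of N."""
    rows = list(rows)                        # evaluate the queryset once
    ids = {id_of(r) for r in rows if id_of(r) is not None}
    lookup = bulk_fetch(ids)                 # one round trip: {id: object}
    for r in rows:
        setattr(r, attr, lookup.get(id_of(r)))
    return rows

# tiny in-memory demo standing in for the ORM
class Row:
    def __init__(self, everything):
        self.everything = everything

envs = {"2bf94e55": "env-object"}
rows = attach_related(
    [Row({"environment": "2bf94e55"}), Row({})],
    id_of=lambda r: r.everything.get("environment"),
    bulk_fetch=lambda ids: {i: envs[i] for i in ids},
)
```

<p>Since <code>get_queryset</code> must stay lazy, this kind of helper usually lives in a custom Manager method that serializers and admin methods call after filtering.</p>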
|
<python><json><django><postgresql><jsonfield>
|
2024-07-04 10:59:10
| 1
| 5,582
|
thclark
|
78,706,565
| 10,349,175
|
Pass system env variable to self-hosted github-runner
|
<p>I have a self-hosted runner based on Mac OS Ventura 13.6.5.
Mac has installed Python using Homebrew:</p>
<pre><code>brew install python
</code></pre>
<p>and a PATH entry for python in <code>.zshrc</code>:</p>
<pre><code>export PATH="$(brew --prefix python)/libexec/bin:$PATH"
</code></pre>
<p>In the workflow, I have a bash script that uses Python to do some work; however, when I start the flow it throws an error that <code>python</code> is not found.</p>
<p>I have tried the <code>setup-python</code> GitHub action, but the step just gets stuck for an unknown reason:</p>
<pre><code>- uses: actions/setup-python@v5
with:
python-version: '3.10'
</code></pre>
<p>I also tried to add a PATH variable to the runner's <code>.env</code> file, but without any effect:</p>
<pre><code>python="$(brew --prefix)/opt/python@3/libexec/bin
</code></pre>
<p>How can I tell the GitHub runner to use <code>python</code> from the system <code>PATH</code>?</p>
|
<python><github-actions><cicd><github-actions-self-hosted-runners>
|
2024-07-04 10:30:59
| 0
| 432
|
AShX
|
78,706,456
| 287,297
|
Opposite results between `bool(x)` and `x.__bool__()` after patching
|
<p>In this simple little test case, I monkey patch the <code>__bool__</code> method of an instance, but am surprised to see that calling <code>bool(x)</code> and <code>x.__bool__()</code> give opposite results afterwards. What is the explanation for this behavior? Is it a bug or a feature of the language? Thanks.</p>
<pre><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Define class #
class Person:
def __bool__(self):
return False
# Create instance #
bob = Person()
# Call test methods #
print("- Before monkey patching -")
print("Using __bool__(): ", bob.__bool__()) # False
print("Using bool(): ", bool(bob)) # False
# Patch #
def always_true(x): return True
bob.__bool__ = always_true.__get__(bob, bob.__class__)
# Call test methods #
print("- After monkey patching -")
print("Using __bool__(): ", bob.__bool__()) # True
print("Using bool(): ", bool(bob)) # False
</code></pre>
<p>Will print this with python 3.12:</p>
<pre><code>- Before monkey patching -
Using __bool__(): False
Using bool(): False
- After monkey patching -
Using __bool__(): True
Using bool(): False
</code></pre>
<p>The same result is obtained when going the <code>from types import MethodType</code> route.</p>
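<p>The behavior follows from implicit special-method lookup: for <code>bool(x)</code>, CPython looks <code>__bool__</code> up on the <em>type</em>, not on the instance, so the attribute set on <code>bob</code> is only seen by the explicit <code>bob.__bool__()</code> call. A minimal sketch of patching at the class level instead:</p>

```python
class Person:
    def __bool__(self):
        return False

bob = Person()

# bool() looks __bool__ up on type(bob), so patch the class, not the instance
Person.__bool__ = lambda self: True

print(bool(bob))        # True
print(bob.__bool__())   # True
```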
|
<python><boolean><monkeypatching>
|
2024-07-04 10:10:54
| 0
| 6,514
|
xApple
|
78,706,369
| 10,470,463
|
Insert values in a dataframe depending on the date
|
<p>I want to put the results of many pivot tables in a dataframe where the dates in the pivot table are the same as the dates in another dataframe.</p>
<p>I do not wish to simply replace all nan values. The nan values will be important later.</p>
<p>The code below does what I want, but I have read that for-loops are not recommended in Pandas.</p>
<p>How can I change this to get the same result, but just using Pandas functions, without the for loops?</p>
<pre><code>import pandas as pd
cols = ['date', 'Heavy', 'Light']
daysdf = pd.DataFrame(columns = cols)
# set a dates range
start = '2015-06-06'
end = '2015-07-06'
# daysdf has just all possible dates in the range to start with
# columns Heavy and Light just contain NaN
daysdf['date']= pd.date_range(start=start, end=end)
# fake pivot table
pf = pd.DataFrame({'date': ['2015-06-06', '2015-06-16', '2015-06-26', '2015-07-05'],
'Heavy': [450, 567, 612, 701],
'Light': [450, 567, 612, 701]})
# get index of row in daysdf where pf['date'] = daysdf['date']
indexes = []
for p in pf['date']:
my_index = daysdf.index[daysdf['date']==p]
indexes.append(my_index)
# this does what I want to do
count = 0
for i in indexes:
daysdf.iloc[i, 1] = pf.iloc[count, 1]
daysdf.iloc[i, 2] = pf.iloc[count, 2]
count +=1
</code></pre>
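<p>For reference, a vectorized sketch using the question's own data: a left merge on the date column fills <code>Heavy</code>/<code>Light</code> only on matching dates and leaves NaN elsewhere, with no loops:</p>

```python
import pandas as pd

daysdf = pd.DataFrame({"date": pd.date_range("2015-06-06", "2015-07-06")})
pf = pd.DataFrame({
    "date": pd.to_datetime(["2015-06-06", "2015-06-16", "2015-06-26", "2015-07-05"]),
    "Heavy": [450, 567, 612, 701],
    "Light": [450, 567, 612, 701],
})

# left merge keeps every day; non-matching days get NaN in Heavy/Light
daysdf = daysdf.merge(pf, on="date", how="left")
```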
|
<python><pandas>
|
2024-07-04 09:54:10
| 0
| 511
|
Pedroski
|
78,706,174
| 3,121,975
|
Non-iterable value used in iterating context in Pydantic validator
|
<p>I have a Pydantic DTO that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import AfterValidator, model_validator, BaseModel
class Bid(BaseModel):
start_block: Annotated[DateTime, AfterValidator(block_validator)]
end_block: Annotated[DateTime, AfterValidator(block_validator)]
threshold: int
pq_pairs: Annotated[List[PriceQuantityPair], AfterValidator(pq_pair_validator)] = Field(min_length=1)
@model_validator(mode="after")
def validate_bid(self) -> Self:
"""Validate an offer."""
# Check that the start block is before the end block
if self.start_block >= self.end_block:
raise ValueError("The start block must be before the end block.")
for pq_pair in self.pq_pairs:
if self.threshold > pq_pair.quantity:
raise ValueError("The threshold must be less than or equal to the quantity.")
# Return the object now that we've validated it
return self
</code></pre>
<p>This works fine but Pylint is raising a linting error:</p>
<blockquote>
<p>E1133: Non-iterable value self.pq_pairs is used in an iterating context (not-an-iterable)</p>
</blockquote>
<p>I would assume that this has something to do with the use of <code>Annotated</code> but I'm not sure what to do about it. Any help would be appreciated.</p>
|
<python><pydantic><pylint>
|
2024-07-04 09:18:29
| 1
| 8,192
|
Woody1193
|
78,706,156
| 21,348,174
|
Auto complete for Python files in VS Code does not work for sub directories
|
<p><strong>End goal</strong></p>
<p>Setting up autocomplete for my self-written library in VS Code, for Python files that will eventually run in Autodesk Revit.</p>
<p><strong>Project Structure</strong></p>
<pre><code>├── my_directory
│ ├── script1.py
│ └── subdirectory
│ └── script2.py
└── lib
├── __init__.py
├── example.py
└── converter
├── __init__.py
└── converter_script.py
</code></pre>
<p><strong>What I've done</strong></p>
<ol>
<li><p>creating .env file in the project's root directory</p>
<p><code>PYTHONPATH=${PYTHONPATH}:${workspaceFolder}/lib</code></p>
</li>
<li><p>Creating settings.json in the project's root directory</p>
<pre><code>{
    "python.envFile": "${workspaceFolder}/.env",
    "python.autoComplete.extraPaths": [
        "${workspaceFolder}/lib"
    ]
}
</code></pre>
</li>
<li><p>Adding the route of <code>lib</code> directory to Environment Variables globally in my pc</p>
</li>
</ol>
<p><strong>The problem</strong></p>
<p>I want to import and use the code from <code>lib</code> inside scripts that are located in <code>my_directory</code>. Right now I get autocomplete AND the code executes properly in Revit ONLY for <code>example.py</code> , but when trying to import something from <code>converter_script.py</code> - There is no autocomplete and I get an error when trying to run the code.</p>
<pre><code>ImportError: No module named converter_script
</code></pre>
<p>I added an import to the functions into <code>__init__.py</code> file inside the <code>converter</code> directory.</p>
<p>How can I solve it so that I will get autocomplete and the code will run, without having to change and reconfigure files every time I add new scripts and sub directories in <code>lib</code>?</p>
<p><strong>script1.py content</strong></p>
<pre><code>from example import test_print # Works
from converter_script import converter_print # Does not work
converter_print()
test_print()
print('hola mundo')
</code></pre>
|
<python><visual-studio-code><environment-variables>
|
2024-07-04 09:15:20
| 0
| 435
|
IdoBa
|
78,706,105
| 6,049,429
|
Python Set Context precision for Decimal field
|
<pre><code>from decimal import Decimal, setcontext, getcontext
class MyNewSerializer(serializers.Serializer):
total_value_base = serializers.SerializerMethodField()
total_outgoing_value_base = serializers.DecimalField(
max_digits=38,
decimal_places=8,
source="value_sent",
)
total_incoming_value = serializers.DecimalField(
max_digits=38,
decimal_places=4,
source="value_received",
)
def get_total_value_base(self, obj):
total = Decimal(obj.value_received) + Decimal(
obj.value_sent
)
# Values of above objects
# obj.value_received = 425933085766969760747388.45622168
# obj.value_sent = 0
# total = 425933085766969760747388.4562
dec = Decimal(str(total))
return round(dec, 8)
</code></pre>
<p>But this is throwing error:</p>
<pre><code>return round(dec, 8)
decimal.InvalidOperation: [<class 'decimal.InvalidOperation'>]
</code></pre>
<p>This gets fixed when I do the following:</p>
<pre><code>def get_total_value_base(self, obj):
# the below line fixes the issue
getcontext().prec = 100
total = Decimal(obj.value_received) + Decimal(
obj.value_sent
)
# Values of above objects
# obj.value_received = 425933085766969760747388.45622168
# obj.value_sent = 0
# total = 425933085766969760747388.4562
dec = Decimal(str(total))
return round(dec, 8)
</code></pre>
<p>I want to increase precision for all the values in the class as well as other similar classes.</p>
<p>How can I do that using a base class, or by overriding the <code>Decimal</code> class for the whole file or for various classes in the file?</p>
<p>I am expecting to increase the decimal precision for hundreds of variables spread across the various classes in a file.</p>
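<p>One way to avoid repeating <code>getcontext().prec</code> in every method is a small decorator built on <code>decimal.localcontext</code>, which raises precision only for the wrapped call and restores the context afterwards (a sketch; <code>high_precision</code> and <code>total_value</code> are hypothetical names):</p>

```python
import functools
from decimal import Decimal, localcontext

def high_precision(prec=100):
    """Run the wrapped function under a temporary high-precision context."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            with localcontext() as ctx:   # restored automatically on exit
                ctx.prec = prec
                return fn(*args, **kwargs)
            # (nothing leaks into the thread's default context)
        return inner
    return wrap

@high_precision()
def total_value(received, sent):
    return round(Decimal(received) + Decimal(sent), 8)
```

<p>Each <code>get_*</code> method — or a shared base-class helper — can then carry the decorator instead of mutating the global context.</p>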
|
<python><python-3.x><django><decimal>
|
2024-07-04 09:04:25
| 1
| 984
|
Cool Breeze
|
78,706,072
| 11,233,365
|
Python: Have multiprocessing pools start and run their own multiprocessing pools
|
<p>I am in the process of processing a list of arrays, and this is a workflow that could be parallelised at multiple points: One sub-process for each dataset in the list, and multiple sub-processes each to handle the different slices in the array.</p>
<p>I've read that daemonic processes are generally not allowed to spawn their own child processes, and would appreciate advice on whether it's possible to overcome that limitation and have sub-processes trigger their own sub-processes.</p>
<p>I've written a short example code for reference.</p>
<pre class="lang-py prettyprint-override"><code># Can a pool run their own pools?
import numpy as np
from multiprocessing import Pool
def do_something(
num: int,
data: list[int],
num_procs: int = 1,
):
args_list = []
for n in range(len(data)):
args_list.append([data[n], num])
with Pool(processes=num_procs) as pool:
results = pool.starmap(do_another_thing, args_list)
return results
def do_another_thing(
value1: int,
value2: int,
):
return value1 * value2
# Main
if __name__ == "__main__":
# Example dataset of a list of arrays
data = [np.random.randint(0, 255, (20)) for n in range(20)]
args_list = []
for i in range(len(data)):
args_list.append([i, data[i]])
with Pool(processes=4) as pool:
results = pool.starmap(do_something, args_list)
for result in results:
print(results)
</code></pre>
<p>Advice would be much appreciated. Thanks!</p>
|
<python><multiprocessing>
|
2024-07-04 08:56:57
| 0
| 301
|
TheEponymousProgrammer
|
78,706,027
| 9,974,205
|
Optimizing DataFrame Processing for User Login Intervals with Pandas
|
<p>I'm working on a function that processes the login times of users on a website. These times are stored in a Pandas DataFrame where the first column indicates a time interval, and the rest indicate if a user was logged in during that interval. My program acts on this DataFrame and groups users, creating as many columns as there are possible combinations of users. It then checks if those users were connected during the time interval defined by the row.</p>
<p>For example, in a case with 3 users, A, B, and C, if A and B are logged in at a specific row, the column A,B will have a 1 while the columns for A and B will be 0. If all three users are active at the same time, the column A,B,C will have a 1, and the rest will be 0.</p>
<p>In my real case, there are many columns, so the exponential cost of the function makes it prohibitive. I have been trying to generate code that looks for groups that never coincide to avoid constructing redundant columns. For instance, if B and C never have a row with a value of 1 in common, it doesn't make sense to generate the columns B,C or A,B,C.</p>
<p>I tried using GitHub Copilot, but it hasn't been able to provide a useful solution. Could someone help me optimize my code?</p>
<p>Here is the code I'm using:</p>
<pre><code>def process_dataframe_opt2(df):
# List of columns for combinations, excluding 'fecha_hora'
columns = [col for col in df.columns if col != 'fecha_hora']
# Generate all possible combinations of the columns
for r in range(1, len(columns) + 1):
for comb in combinations(columns, r):
col_name = ','.join(comb)
df[col_name] = df[list(comb)].all(axis=1).astype(int)
# Create a copy of the original DataFrame to modify it
df_copy = df.copy()
# Process combinations from largest to smallest
for r in range(len(columns), 1, -1):
for comb in combinations(columns, r):
col_name = ','.join(comb)
active_rows = df[col_name] == 1
if active_rows.any():
for sub_comb in combinations(comb, r-1):
sub_col_name = ','.join(sub_comb)
df_copy.loc[active_rows, sub_col_name] = 0
# Remove columns that only contain 0
df_copy = df_copy.loc[:, (df_copy != 0).any(axis=0)]
return df_copy
</code></pre>
<p>And to generate an example</p>
<pre><code>import pandas as pd
from itertools import combinations
# Create a range of time
date_rng = pd.date_range(start='2024-05-13 15:52:00', end='2024-05-13 16:04:00', freq='min')
# Create an empty DataFrame
df = pd.DataFrame(date_rng, columns=['fecha_hora'])
# Add the login columns with corresponding values
df['A'] = 1 # Always active
df['B'] = [1] * 6 + [0] * 5 + [1] * 2 # Active in the first 6 intervals
df['C'] = [1] * 5 + [0] * 6 + [1] * 2 # Active in the first 5 intervals
df['D'] = [1] * 4 + [0] * 7 + [1] * 2 # Active in the first 4 intervals
df['E'] = [1] * 3 + [0] * 3 + [1] * 2 + [0] * 3 + [1]*2 # Active in two blocks
df['F'] = [0] * 7 + [1] * 3 + [0] * 3 # Active in a single block towards the end
df['alfa'] = [0] * 10 + [1] * 1 + [0] * 2
# Adjust some rows to have more than one '1'
df.loc[1, ['A', 'B', 'C']] = 1 # Row with multiple '1's
df.loc[8, ['D', 'E', 'F']] = 1 # Another row with multiple '1's
df_copy = process_dataframe_opt2(df)
</code></pre>
<p>Could anyone provide insights or suggestions on how to optimize this function to avoid the exponential cost and improve performance?</p>
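<p>One observation that removes the exponential cost: in each row, exactly one combination column receives a 1 — the full set of users active in that row. So rather than enumerating all 2^n combinations, you can label each row by its exact active set and one-hot encode only the groups that actually occur (a sketch; <code>active_groups</code> is a hypothetical helper):</p>

```python
import pandas as pd

def active_groups(df, time_col="fecha_hora"):
    users = [c for c in df.columns if c != time_col]
    # label each row by the exact set of active users, e.g. "A,B,C"
    labels = df[users].apply(
        lambda row: ",".join(u for u in users if row[u] == 1), axis=1
    )
    # one column per *observed* group only — combinations that never
    # co-occur (like B,C in the question) are simply never created
    return pd.concat([df[[time_col]], pd.get_dummies(labels).astype(int)], axis=1)
```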
|
<python><pandas><dataframe><optimization><python-itertools>
|
2024-07-04 08:49:46
| 1
| 503
|
slow_learner
|
78,706,017
| 15,948,240
|
Change style of DataTable in Shiny for Python
|
<p>I would like to set a background color in a Shiny for Python DataTable</p>
<pre><code>from palmerpenguins import load_penguins
from shiny import render
from shiny.express import ui
penguins = load_penguins()
ui.h2("Palmer Penguins")
@render.data_frame
def penguins_df():
return render.DataTable(
penguins,
summary="Viendo filas {start} a {end} de {total}",
)
</code></pre>
<p><a href="https://i.sstatic.net/DdOWczp4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DdOWczp4.png" alt="example of data table" /></a></p>
<p>Using <a href="https://stackoverflow.com/questions/43695515/shiny-datatable-how-to-change-theme">a similar question for RShiny</a> I tried to add a <code>style</code> keyword argument to <code>render.DataTable</code> but got <code>unexpected keyword argument error</code>.</p>
<p>How can I specify styling arguments when rendering a DataTable ?</p>
|
<python><py-shiny>
|
2024-07-04 08:48:33
| 1
| 1,075
|
endive1783
|
78,705,533
| 3,798,964
|
Create overloaded functions ignoring irrelevant parameters in the signature
|
<p>How can I change the signature of the overloads to signal to type checkers that only a single parameter is responsible for the return type, ignoring all the other parameters?</p>
<p>Starting from this code (neither <code>mypy</code> nor <code>pyright</code> complains yet):</p>
<pre><code>from typing import Literal, overload
@overload
def test(switch: Literal[True]) -> int: ...
@overload
def test(switch: Literal[False]) -> None: ...
def test(switch: bool) -> int | None:
return 123 if switch is True else None
test_int: int = test(True)
test_none: None = test(False)
</code></pre>
<p>Now I want to add parameters (<strong>perspectively a lot!</strong>) that are irrelevant to the return type:</p>
<pre><code>def test(switch: bool, var1: str, var2: float, var3: int) -> int | None:
return 123 if switch is True else None
</code></pre>
<p>How can I change the overloads, so that I don't have to update them every time I add another parameter? I tried with <code>*args</code> and <code>**kwargs</code> combined with <code>Any</code> in the overloads, but haven't been successful.</p>
|
<python><mypy><python-typing><pyright>
|
2024-07-04 07:09:35
| 1
| 4,485
|
johnson
|
78,705,531
| 1,831,695
|
upload file to Sharepoint using python
|
<p>I am trying to upload a CSV file to SharePoint. This is how far I have come with my script:</p>
<pre><code>from office365.runtime.auth.client_credential import ClientCredential
from office365.sharepoint.client_context import ClientContext
import msal
import os
import requests
sharepoint_site_url = "https://companyname.sharepoint.com/sites/PowerBILookUp"
document_library_url = "Shared Documents" # Adjust as necessary
# Azure AD App Registration details
client_id = "client_id"
client_secret = "client_secret"
tenant_id = "tenant_id"
# Path to the CSV file you want to upload
local_csv_path = "C:\\Users\\Administrator\\Downloads\\orders.csv"
# Destination file name on SharePoint (including any folders within the document library)
destination_file_name = "SubFolder/orders2.csv" # Adjust the path as necessary
# Authentication authority URL
authority_url = f"https://login.microsoftonline.com/{tenant_id}"
# Initialize MSAL confidential client application
app = msal.ConfidentialClientApplication(
client_id,
authority=authority_url,
client_credential=client_secret,
)
# Acquire a token for SharePoint
token_response = app.acquire_token_for_client(scopes=["https://companyname.sharepoint.com/.default"])
# Check if the token was acquired successfully
if "access_token" in token_response:
access_token = token_response["access_token"]
print(f"Access token acquired: {access_token[:30]}...")
try:
# Create the SharePoint client context with the acquired token
ctx = ClientContext(sharepoint_site_url).with_access_token(access_token)
# Ensure the folder can be accessed
target_folder = ctx.web.get_folder_by_server_relative_url(document_library_url)
ctx.execute_query()
print(f"Folder accessed successfully: {document_library_url}")
# Prepare the file upload URL
upload_url = f"{sharepoint_site_url}/_api/web/GetFolderByServerRelativeUrl('{document_library_url}')/Files/add(url='{destination_file_name}', overwrite=true)"
print(f"Upload URL: {upload_url}")
# Read the CSV file content
with open(local_csv_path, 'rb') as file_content:
file_data = file_content.read()
# Perform the upload using REST API
headers = {
"Authorization": f"Bearer {access_token}",
"Accept": "application/json;odata=verbose",
"Content-Type": "application/octet-stream"
}
print(f"Headers: {headers}")
print(f"Uploading file {local_csv_path} to {upload_url}...")
response = requests.post(upload_url, headers=headers, data=file_data)
if response.status_code == 200:
print(f"File '{destination_file_name}' uploaded to SharePoint successfully.")
else:
print(f"File upload failed: {response.status_code} - {response.text}")
except Exception as e:
print(f"An error occurred during the SharePoint operation: {e}")
else:
print(f"Error acquiring token: {token_response.get('error_description')}")
</code></pre>
<p>but I get this error:</p>
<blockquote>
<p>Access token acquired: xxxx... Folder accessed successfully: Shared
Documents Upload URL:
https://companyname.sharepoint.com/sites/PowerBILookUp/_api/web/GetFolderByServerRelativeUrl('Shared
Documents')/Files/add(url='SubFolder/orders2.csv', overwrite=true)
Headers: {'Authorization': 'Bearer
eyJ0eXAiOiJKV1VcFphP_tgxJA-BwQ-qXCX7z_FC7jfJLNr4j03GEDw',
'Accept': 'application/json;odata=verbose', 'Content-Type':
'application/octet-stream'} Uploading file
C:\Users\Administrator\Downloads\orders.csv to
https://companyname.sharepoint.com/sites/PowerBILookUp/_api/web/GetFolderByServerRelativeUrl('Shared
Documents')/Files/add(url='SubFolder/orders2.csv', overwrite=true)...
File upload failed: 401 - {"error_description":"ID3035: The request
was not valid or is malformed."}</p>
</blockquote>
<p>Does anybody have an idea? I am trying to upload to <a href="https://companyname.sharepoint.com/sites/PowerBILookUp/subFolder/" rel="nofollow noreferrer">https://companyname.sharepoint.com/sites/PowerBILookUp/subFolder/</a>.</p>
<p>thanks</p>
|
<python><csv><sharepoint><upload>
|
2024-07-04 07:09:22
| 0
| 563
|
Ele
|
78,705,438
| 6,666,008
|
How to handle relative resource paths while using pyinstaller / auto py to exe?
|
<p>I have a Python script that saves a log file and another CSV to a relative resource path. PyInstaller throws a file-not-found error. The file structure of the code is:</p>
<pre><code>-ProjectFolder
|-common
| |- commom.py
|-src
| |- main.py
| |- anotherclass.py
|-resources
| |-output.logs
| |-result.csv
</code></pre>
<p>Output exe directory</p>
<pre><code>-ProjectFolder
|-output
| |-main
| | |-main.exe
| | |-_internal
| | | |-tons of packages and resources folder if I pass it while creating installer
</code></pre>
<p>How do I get this output into a folder beside the main folder of the executable?</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'D:\\Projects\\CapstoneProject-Outkits\\output\\resources\\outkitsapicall.log'
</code></pre>
<p>Code inside main.py</p>
<pre><code>file_handler = logging.FileHandler('../resources/outkitsapicall.log', mode='w', encoding='utf-8')
</code></pre>
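<p>Relative paths like <code>../resources</code> resolve against the current working directory, which changes once the app runs from the bundled exe. A common sketch is to derive the base folder from <code>sys.executable</code> when frozen (the PyInstaller bootloader sets <code>sys.frozen</code>) and from the source file otherwise:</p>

```python
import sys
from pathlib import Path

def resource_dir() -> Path:
    """Folder for logs/CSVs: next to the exe when frozen, else ../resources."""
    if getattr(sys, "frozen", False):        # set by the PyInstaller bootloader
        return Path(sys.executable).parent / "resources"
    return Path(__file__).resolve().parent.parent / "resources"

# usage in main.py (sketch):
# resource_dir().mkdir(exist_ok=True)
# file_handler = logging.FileHandler(resource_dir() / "outkitsapicall.log",
#                                    mode="w", encoding="utf-8")
```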
|
<python><pyinstaller><auto-py-to-exe>
|
2024-07-04 06:44:46
| 1
| 682
|
ss301
|
78,704,955
| 9,630,700
|
Avoiding additional null string using Pandas date_range
|
<p>I was trying to obtain a formatted list of dates using Pandas <code>date_range</code> and obtained an extra empty string. This is using Pandas v2.2.2.</p>
<pre><code>>>> import pandas as pd
>>> pd.date_range('2000-01-01','2000-01-03').format('%Y-%m-%d')
<stdin>:1: FutureWarning: DatetimeIndex.format is deprecated and will be removed in a future version. Convert using index.astype(str) or index.map(formatter) instead.
['', '2000-01-01', '2000-01-02', '2000-01-03']
</code></pre>
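<p>For reference, the non-deprecated route is <code>DatetimeIndex.strftime</code>, which also avoids the extra empty string (likely an index-name slot: the first positional argument of <code>Index.format</code> is <code>name</code>, not a date format, so passing a truthy string prepends the blank name):</p>

```python
import pandas as pd

dates = pd.date_range("2000-01-01", "2000-01-03").strftime("%Y-%m-%d").tolist()
print(dates)  # ['2000-01-01', '2000-01-02', '2000-01-03'] — no leading ''
```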
|
<python><pandas>
|
2024-07-04 03:44:45
| 1
| 328
|
chuaxr
|
78,704,879
| 4,778,587
|
Using requests I get a 401 but urllib3 and curl work
|
<p>The following urllib3 code works to connect to the endpoint:</p>
<pre class="lang-py prettyprint-override"><code>import urllib3
import json
# Initialize a PoolManager instance
http = urllib3.PoolManager()
# Define the URL, headers, and data
url = "https://<url>"
headers = {
"Authorization": "Bearer <token>",
"Content-Type": "application/json; charset=utf-8"
}
data = {
"name": "some_test"
}
# Encode data to JSON
encoded_data = json.dumps(data).encode('utf-8')
# Send the POST request
response = http.request(
'POST',
url,
body=encoded_data,
headers=headers
)
# Print the response status and data
print(response.status)
print(response.data.decode('utf-8'))
</code></pre>
<p>I also tried connecting via this CURL command which works:</p>
<pre><code>curl -X "POST" "https://<url>" \
-H 'Authorization: Bearer <token>' \
-H 'Content-Type: application/json; charset=utf-8' \
-d $'{
"name": "some_test"
}'
</code></pre>
<p>But using the requests module the code returns a 401 message -> Unauthorized:</p>
<pre class="lang-py prettyprint-override"><code>import requests
url = "<url>"
headers = {
"Authorization": "Bearer <token>",
"Content-Type": "application/json; charset=utf-8"
}
data = {
"name": "some_test"
}
response = requests.post(url, headers=headers, json=data)
print(response.status_code)
print(response.text)
</code></pre>
<p>I tried different versions of requests but they all behave the same. What am I missing? Could it be some other conflict?</p>
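<p>A debugging sketch (no request is actually sent) that shows exactly what <code>requests</code> would transmit, for comparison against the working urllib3/curl calls. One difference worth checking: by default <code>requests</code> trusts the environment (proxy variables, <code>~/.netrc</code>), which can silently replace an <code>Authorization</code> header; <code>trust_env = False</code> makes it behave more like bare urllib3. The URL below is a placeholder:</p>

```python
import requests

session = requests.Session()
session.trust_env = False   # ignore proxy env vars and ~/.netrc, like urllib3 does

req = requests.Request(
    "POST",
    "https://example.com/api",          # placeholder URL
    headers={
        "Authorization": "Bearer <token>",
        "Content-Type": "application/json; charset=utf-8",
    },
    json={"name": "some_test"},
)
prepared = session.prepare_request(req)
print(dict(prepared.headers))           # exactly what would go on the wire
print(prepared.body)
# to actually send: response = session.send(prepared)
```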
|
<python><curl><python-requests><urllib3>
|
2024-07-04 03:06:13
| 1
| 695
|
doom4
|
78,704,811
| 11,462,274
|
I need Selenium WebDriver Firefox return the same response as using requests, is there any way?
|
<p>I have code all assembled to work on this response:</p>
<pre class="lang-python prettyprint-override"><code>response = requests.get(url).text
soup = BeautifulSoup(response, 'html.parser')
</code></pre>
<p>But I will need to use <code>Selenium WebDriver</code> instead of <code>requests</code>, I tried to use it this way:</p>
<pre class="lang-python prettyprint-override"><code>driver = web_driver()
response = driver.page_source
soup = BeautifulSoup(response, 'html.parser')
</code></pre>
<p>But the responses are different and this generates several errors in the code. Is there a way to get <code>Selenium WebDriver</code> to collect exactly the same response as using <code>requests</code>?</p>
<p>Detail: if possible, I would like a way to always get the same result regardless of the <code>url</code> used.</p>
<p><strong>Example of Usage</strong></p>
<p>Consider a scenario where I want to create a pandas dataframe with the exact values of the table at this URL. <code>requests</code> delivers a different set of readable elements than <code>Selenium</code> does.</p>
<p><a href="https://int.soccerway.com/teams/egypt/pharco/38185/squad/" rel="nofollow noreferrer">https://int.soccerway.com/teams/egypt/pharco/38185/squad/</a></p>
<p><a href="https://i.sstatic.net/xVebwZOi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xVebwZOi.png" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver><python-requests><selenium-firefoxdriver>
|
2024-07-04 02:28:19
| 1
| 2,222
|
Digital Farmer
|
78,704,660
| 3,458,191
|
How to format the dataframe into a 2D table
|
<p>I have the following issue with formatting a pandas dataframe into a 2D format. My data is:</p>
<pre><code>+----+------+-----------+---------+
| | Jobs | Measure | Value |
|----+------+-----------+---------|
| 0 | Job1 | Temp | 43 |
| 1 | Job1 | Humidity | 65 |
| 2 | Job2 | Temp | 48 |
| 3 | Job2 | TempS | 97.4 |
| 4 | Job2 | Humidity | nan |
| 5 | Job3 | Humidity | 55 |
| 6 | Job1 | Temp | 41 |
| 7 | Job1 | Duration | 23 |
| 8 | Job3 | Temp | 39 |
| 9 | Job1 | Temp | nan |
| 10 | Job1 | Humidity | 55 |
| 11 | Job2 | Temp | 48 |
| 12 | Job2 | TempS | 97.4 |
| 13 | Job2 | Humidity | nan |
| 14 | Job3 | Humidity | 55 |
| 15 | Job1 | Temp | nan |
| 16 | Job1 | Duration | 25 |
| 17 | Job3 | Temp | nan |
| 18 | Job2 | Humidity | 61 |
+----+------+-----------+---------+
</code></pre>
<p>and my code for now is:</p>
<pre><code>from tabulate import tabulate
import pandas as pd
df = pd.read_csv('logs.csv')
#print(df)
print(tabulate(df, headers='keys', tablefmt='psql'))
grouped = df.groupby(['Jobs','Measure'], dropna=True)
average_temp = grouped.mean()
errors = df.groupby(['Jobs','Measure']).agg(lambda x: x.isna().sum())
frames = [average_temp, errors]
df_merged = pd.concat(frames, axis=1).set_axis(['Avg', 'Error'], axis='columns')
print(df_merged)
</code></pre>
<p>and the output of the print is:</p>
<pre><code>Table-1
Avg Error
Jobs Measure
Job1 Duration 24.0 0
Humidity 60.0 0
Temp 42.0 2
Job3 Humidity 55.0 0
Temp 39.0 1
Job2 Humidity 61.0 2
TempS 97.4 0
Temp 48.0 0
</code></pre>
<p>How can I format this table into something like this:</p>
<pre><code>Table-2
Jobs Avg.Temp Err.Temp Avg.Humidity Err.Humidity Avg.Duration ...
Job1 42.0 2 60.0 0 24.0
Job2 48.0 0 61.0 0 -
Job3 39.0 1 55.0 1 -
</code></pre>
<p>For example, Avg.Temp for Job1 in Table-2 is the average value of Job1->Temp in Table-1. Note also that not all Jobs need to have the same measure fields; they can differ, as with 'TempS' for Job2.</p>
<p>Update: using the answer from <a href="https://stackoverflow.com/users/24714692/user24714692">user24714682</a> the table looks like this.</p>
<pre><code>+--------------+----------------+----------------+--------------+------------+------------------+------------------+----------------+--------------+
| Jobs | Avg.Duration | Avg.Humidity | Avg.S.Temp | Avg.Temp | Error.Duration | Error.Humidity | Error.S.Temp | Error.Temp |
|--------------+----------------+----------------+--------------+------------+------------------+------------------+----------------+--------------|
| Job1 | 24 | 60 | nan | 42 | 0 | 0 | nan | 2 |
| Job3 | nan | 55 | nan | 39 | nan | 0 | nan | 1 |
| Job2 | nan | 61 | 97.4 | 48 | 1 | 2 | 0 | 0 |
+--------------+----------------+----------------+--------------+------------+------------------+------------------+----------------+--------------+
</code></pre>
<p>How can I now sort the columns so that the Measure with the highest total Error count is shown first, followed by the rest in descending order of total Error count? Example:</p>
<pre><code>+--------------+------------+--------------+----------------+------------------+...
| Jobs | Avg.Temp | Error.Temp | Avg.Humidity | Error.Humidity |
|--------------+------------+--------------|----------------+------------------+...
| Job1 | 42 | 2 | 60 | 0 |
| Job3 | 39 | 1 | 55 | 0 |
| Job2 | 48 | 0 | 61 | 2 |
+--------------+------------+--------------+----------------+------------------+...
</code></pre>
<p>In the above table the columns are sorted with Avg.Temp first, because it is the sensor with the highest total error count (3), followed by Avg.Humidity with the second-highest total error count, and so on.</p>
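<p>For reference, a sketch of the reshape plus the error-count ordering using <code>groupby</code> + <code>unstack</code> on data shaped like the question's (column names follow the question; ties between measures with equal error counts keep an unspecified order):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Jobs":    ["Job1", "Job1", "Job2", "Job1", "Job2"],
    "Measure": ["Temp", "Temp", "Temp", "Humidity", "Humidity"],
    "Value":   [41, np.nan, np.nan, 60, np.nan],
})

# Avg skips NaN; Error counts the NaN readings per (Job, Measure)
agg = df.groupby(["Jobs", "Measure"])["Value"].agg(
    Avg="mean", Error=lambda s: s.isna().sum()
)
wide = agg.unstack("Measure")                       # columns: (stat, Measure)
wide.columns = [f"{stat}.{m}" for stat, m in wide.columns]

# order measures by total error count, descending, keeping Avg/Error adjacent
by_errors = wide.filter(like="Error.").sum().sort_values(ascending=False)
cols = []
for err_col in by_errors.index:
    measure = err_col.split(".", 1)[1]
    cols += [f"Avg.{measure}", err_col]
wide = wide[cols]
```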
|
<python><pandas><dataframe><numpy>
|
2024-07-04 01:04:05
| 2
| 1,187
|
FotisK
|
78,704,600
| 11,793,491
|
Using lambda with conditional in pandas chain
|
<p>I have this dataset:</p>
<pre class="lang-py prettyprint-override"><code>thedf = pd.DataFrame({'a':[10,20,0],'b':[9,16,15]})
thedf
a b
0 10 9
1 20 16
2 0 15
</code></pre>
<p>And I want to create a new column using assign in a pandas chain. To avoid a division by zero, I put a conditional inside a lambda. I tried this code:</p>
<pre class="lang-py prettyprint-override"><code>thedf.assign(division = lambda x: x['b']/x['a'] if x['a'] != 0 else 0)
</code></pre>
<p>But it returns an error:
<code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</code></p>
<p>The expected result is:</p>
<pre class="lang-py prettyprint-override"><code> a b division
0 10 9 0.9
1 20 16 0.8
2 0 15 0
</code></pre>
<p>Please, this question is related to method chaining in pandas, and I expect the answer using <code>assign</code>, because I use method chaining for data cleaning of more complex datasets.</p>
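<p>The lambda inside <code>assign</code> receives whole Series, so a Python <code>if</code> on one is ambiguous; the element-wise equivalent keeps the chain intact via <code>Series.where</code> (or <code>np.where</code>). A sketch on the question's data:</p>

```python
import pandas as pd

thedf = pd.DataFrame({"a": [10, 20, 0], "b": [9, 16, 15]})
out = thedf.assign(
    # b/a yields inf where a == 0; .where keeps the ratio only where a != 0
    division=lambda x: (x["b"] / x["a"]).where(x["a"].ne(0), 0)
)
```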
|
<python><pandas>
|
2024-07-04 00:15:39
| 1
| 2,304
|
Alexis
|
78,704,554
| 17,800,932
|
Displaying QStandardItemModel from Python in a QML TableView
|
<p>I am trying to display very simple 2D data in Qt Quick using a QML <code>TableView</code> and Qt for Python / PySide6. Here is an example of what I am looking to create:</p>
<p><a href="https://i.sstatic.net/p4A58ofg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p4A58ofg.png" alt="enter image description here" /></a></p>
<p>In the following code, I have exposed a singleton object to QML which provides a <code>QStandardItemModel</code> as a <code>QObject</code> property, but have been unable to get anything to display. What is wrong with my attempt?</p>
<pre><code>// Main.qml
import QtQuick
import QtQuick.Window
import QtQuick.Layouts
import QtQuick.Controls
import Qt.labs.qmlmodels
import com.simplified
Window {
    width: 740
    height: 540
    visible: true
    title: "Python log viewer"

    TableView {
        id: log
        anchors.fill: parent
        columnSpacing: 1
        rowSpacing: 1
        clip: true
        model: Simplified.log

        delegate: Rectangle {
            border.width: 1
            clip: true

            Text {
                text: display
                anchors.centerIn: parent
            }
        }
    }
}
</code></pre>
<pre class="lang-py prettyprint-override"><code># main.py
QML_IMPORT_NAME = "com.simplified"
QML_IMPORT_MAJOR_VERSION = 1
# Core dependencies
from pathlib import Path
import sys
# Package dependencies
from PySide6.QtCore import QObject, Signal, Property, Qt
from PySide6.QtGui import QGuiApplication, QStandardItemModel, QStandardItem
from PySide6.QtQml import QQmlApplicationEngine, QmlElement, QmlSingleton
LOG_ENTRIES = [
    {
        "Timestamp": "2024-07-01 19:16:03.326",
        "Name": "root.child",
        "Level": "DEBUG",
        "Message": "This is a debug message",
    },
    {
        "Timestamp": "2024-07-01 19:16:03.326",
        "Name": "root.child",
        "Level": "INFO",
        "Message": "This is an info message",
    },
]

FIELD_NAMES = ["Timestamp", "Name", "Level", "Message"]


@QmlElement
@QmlSingleton
class Simplified(QObject):
    log_changed = Signal()

    def __init__(self) -> None:
        super().__init__()
        _ = self.log
        self.log_changed.emit()

    @Property(QStandardItemModel, notify=log_changed)
    def log(self):
        lines = LOG_ENTRIES
        table = QStandardItemModel(len(lines), len(FIELD_NAMES))
        table.setHorizontalHeaderLabels(FIELD_NAMES)
        for line in lines:
            row = [QStandardItem(str(line[key])) for key in FIELD_NAMES]
            table.appendRow(row)
        return table


if __name__ == "__main__":
    application = QGuiApplication(sys.argv)
    engine = QQmlApplicationEngine()
    qml_file = Path(__file__).resolve().parent / "qml" / "Main.qml"
    engine.load(qml_file)
    if not engine.rootObjects():
        sys.exit(-1)
    engine.singletonInstance("com.simplified", "Simplified")
    sys.exit(application.exec())
</code></pre>
<hr />
|
<python><qt><qml><pyside><qtquick2>
|
2024-07-03 23:43:16
| 1
| 908
|
bmitc
|
78,704,542
| 395,857
|
Exporting a Bert-based PyTorch model to CoreML. How can I make the CoreML model work for any input?
|
<p>I use the code below to export a Bert-based PyTorch model to CoreML.</p>
<p>Since I used</p>
<pre><code>dummy_input = tokenizer("A French fan", return_tensors="pt")
</code></pre>
<p>the CoreML model only works with that input when tested on macOS. How can I make the CoreML model work for any input (i.e., any text)?</p>
<hr />
<p>Export script:</p>
<pre><code># -*- coding: utf-8 -*-
"""Core ML Export
pip install transformers torch coremltools nltk
"""
import os
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch
import torch.nn as nn
import nltk
import coremltools as ct
nltk.download('punkt')
# Load the model and tokenizer
model_path = os.path.join('model')
model = AutoModelForTokenClassification.from_pretrained(model_path, local_files_only=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, local_files_only=True)
# Modify the model's forward method to return a tuple
class ModifiedModel(nn.Module):
    def __init__(self, model):
        super(ModifiedModel, self).__init__()
        self.model = model
        self.device = model.device  # Add the device attribute

    def forward(self, input_ids, attention_mask, token_type_ids=None):
        outputs = self.model(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
        return outputs.logits


modified_model = ModifiedModel(model)


# Export to Core ML
def convert_to_coreml(model, tokenizer):
    # Define a dummy input for tracing
    dummy_input = tokenizer("A French fan", return_tensors="pt")
    dummy_input = {k: v.to(model.device) for k, v in dummy_input.items()}

    # Trace the model with the dummy input
    traced_model = torch.jit.trace(model, (
        dummy_input['input_ids'], dummy_input['attention_mask'], dummy_input.get('token_type_ids')))

    # Convert to Core ML
    inputs = [
        ct.TensorType(name="input_ids", shape=dummy_input['input_ids'].shape),
        ct.TensorType(name="attention_mask", shape=dummy_input['attention_mask'].shape)
    ]
    if 'token_type_ids' in dummy_input:
        inputs.append(ct.TensorType(name="token_type_ids", shape=dummy_input['token_type_ids'].shape))

    mlmodel = ct.convert(traced_model, inputs=inputs)

    # Save the Core ML model
    mlmodel.save("model.mlmodel")
    print("Model exported to Core ML successfully")


convert_to_coreml(modified_model, tokenizer)
</code></pre>
<p>To use the exported model:</p>
<pre><code>import os
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch
import torch.nn as nn
import nltk
import coremltools as ct
from coremltools.models import MLModel
import numpy as np
from transformers import AutoTokenizer
import nltk
nltk.download('punkt')
# Load the Core ML model
model = MLModel('model.mlmodel')
# Load the tokenizer
model_path = 'model'
tokenizer = AutoTokenizer.from_pretrained(model_path, local_files_only=True)
def prepare_input(text, tokenizer):
    tokens = nltk.tokenize.word_tokenize(text)
    tokenized_inputs = tokenizer(tokens, is_split_into_words=True, return_tensors="np")

    input_ids = tokenized_inputs['input_ids'].astype(np.int32)
    attention_mask = tokenized_inputs['attention_mask'].astype(np.int32)

    input_data = {
        'input_ids': input_ids,
        'attention_mask': attention_mask
    }

    if 'token_type_ids' in tokenized_inputs:
        input_data['token_type_ids'] = tokenized_inputs['token_type_ids'].astype(np.int32)

    return input_data, tokens


def predict(text):
    # Prepare the input
    input_data, tokens = prepare_input(text, tokenizer)

    # Make the prediction
    prediction = model.predict(input_data)

    # Extract the predicted labels
    logits = prediction['output']  # Adjust this key according to your model's output
    predicted_label = np.argmax(logits, axis=-1)[0]

    # Display the results
    for word, label in zip(tokens, predicted_label):
        print(f"{word}: {model.model_description.outputDescriptions[0].dictionaryType.int64KeyType.stringDictionary[label]}")


# Test the model with a sentence
predict("A French fan")
</code></pre>
<p>The script only works with the example 'A French Fan'. When I tried another example <code>predict("A footbal fan is standing in the stadium.")</code>, it triggers an error:</p>
<pre><code>NSLocalizedDescription = "MultiArray shape (1 x 12) does not match the shape (1 x 5) specified in the model description";
</code></pre>
<hr />
<p>Environments:</p>
<ul>
<li>Export script: Tested Python 3.10 and torch 2.3.1 on Ubuntu 20.04 (does <a href="https://stackoverflow.com/q/78687580/395857">not</a> work on Windows 10).</li>
<li>Prediction script: must be run on macOS 10.13+, as CoreML model <a href="https://stackoverflow.com/q/78694076/395857">only</a> supports predictions on macOS 10.13+.</li>
</ul>
|
<python><machine-learning><bert-language-model><coreml><coremltools>
|
2024-07-03 23:39:36
| 1
| 84,585
|
Franck Dernoncourt
|
78,704,484
| 3,825,495
|
Pandas DataFrame - find if row matches all values in dictionary
|
<p>I'm doing an analysis along a pipeline, modeling points along segments, with some physical parameters. I need to keep track of the progress so, when the model hangs due to connectivity issues, etc., I can pick back up without starting from scratch.</p>
<p>I'm storing the completed parameters in a dictionary, building into a list and storing to a CSV. At the start of the script, I'm pulling in all of the completed runs so far.</p>
<p>my current implementation will not scale well, as I'm converting the completed runs dataframe values to a list of runs and then comparing that to a list of the header values to see if any rows match. I'm looking for a solution that will better scale.</p>
<pre><code># at the top of the script, pull in the runs completed to date
completed_runs_df = pd.read_csv('completed_runs.csv')
completed_runs_list_of_values = completed_runs_df.values
completed_runs_list_of_dicts = completed_runs_df.to_dict(orient='records')

for model in models:
    header = {
        'lat': lat,
        'long': long,
        'press_psig': press_psig,
        'diam_in': diam_in,
    }

    # need a better way to check this
    header_vals = list(header.values())
    if header_vals in completed_runs_list_of_values:
        continue

    # modeling goes here

    completed_runs_list_of_dicts.append(header)
    completed_runs_df = pd.DataFrame(completed_runs_list_of_dicts)
    completed_runs_df.to_csv('completed_runs.csv', index=False)
</code></pre>
<p>I'm looking for a direct method to take the header dictionary and immediately test if there is a matching row of values in completed_runs_df.</p>
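One vectorized membership test (a sketch, assuming the header keys match the CSV column names and dtypes survive the round trip) compares every row against the dict at once:

```python
import pandas as pd

# Stand-in for the completed-runs CSV contents.
completed_runs_df = pd.DataFrame([
    {'lat': 1.0, 'long': 2.0, 'press_psig': 100, 'diam_in': 12},
    {'lat': 3.0, 'long': 4.0, 'press_psig': 200, 'diam_in': 8},
])
header = {'lat': 3.0, 'long': 4.0, 'press_psig': 200, 'diam_in': 8}

# Row-wise equality against the dict-as-Series; a row matches when
# every selected column equals the corresponding header value.
mask = (completed_runs_df[list(header)] == pd.Series(header)).all(axis=1)
already_done = mask.any()
```

`already_done` can then replace the list lookup, skipping the model run when a matching row exists.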
|
<python><pandas><dataframe>
|
2024-07-03 23:00:28
| 1
| 616
|
Michael James
|
78,704,322
| 20,591,261
|
Summing Values Based on Date Ranges in a DataFrame using Polars
|
<p>I have a DataFrame (<code>df</code>) that contains columns: <code>ID</code>, <code>Initial Date</code>, <code>Final Date</code>, and <code>Value</code>, and another DataFrame (<code>dates</code>) that contains all the days for each ID from <code>df</code>.</p>
<p>For each date in the <code>dates</code> dataframe, I want to sum the values of all rows in <code>df</code> whose date range contains that date.</p>
<p>Here is my code</p>
<pre><code>import polars as pl
from datetime import datetime
data = {
    "ID": [1, 2, 3, 4, 5],
    "Initial Date": ["2022-01-01", "2022-01-02", "2022-01-03", "2022-01-04", "2022-01-05"],
    "Final Date": ["2022-01-03", "2022-01-06", "2022-01-07", "2022-01-09", "2022-01-07"],
    "Value": [10, 20, 30, 40, 50]
}

df = pl.DataFrame(data)

dates = pl.datetime_range(
    start=datetime(2022, 1, 1),
    end=datetime(2022, 1, 7),
    interval="1d",
    eager=True,
    closed="both"
).to_frame("date")
</code></pre>
<pre><code>shape: (5, 4)
┌─────┬──────────────┬────────────┬───────┐
│ ID ┆ Initial Date ┆ Final Date ┆ Value │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ str ┆ str ┆ i64 │
╞═════╪══════════════╪════════════╪═══════╡
│ 1 ┆ 2022-01-01 ┆ 2022-01-03 ┆ 10 │
│ 2 ┆ 2022-01-02 ┆ 2022-01-06 ┆ 20 │
│ 3 ┆ 2022-01-03 ┆ 2022-01-07 ┆ 30 │
│ 4 ┆ 2022-01-04 ┆ 2022-01-09 ┆ 40 │
│ 5 ┆ 2022-01-05 ┆ 2022-01-07 ┆ 50 │
└─────┴──────────────┴────────────┴───────┘
</code></pre>
<pre><code>shape: (7, 1)
┌─────────────────────┐
│ date │
│ --- │
│ datetime[μs] │
╞═════════════════════╡
│ 2022-01-01 00:00:00 │
│ 2022-01-02 00:00:00 │
│ 2022-01-03 00:00:00 │
│ 2022-01-04 00:00:00 │
│ 2022-01-05 00:00:00 │
│ 2022-01-06 00:00:00 │
│ 2022-01-07 00:00:00 │
└─────────────────────┘
</code></pre>
<p>In this case, on 2022-01-01 the value would be 10. On 2022-01-02, it would be 10 + 20, and on 2022-01-03, it would be 10 + 20 + 30, and so on. In other words, I want to check if the date exists within the range of each row in the DataFrame (<code>df</code>), and if it does, sum the values.</p>
<p>I think the approach is something like this:</p>
<pre><code>(
    dates.with_columns(
        pl.sum(
            pl.when(
                (df["Initial Date"] <= pl.col("date")) & (df["Final Date"] >= pl.col("date"))
            ).then(df["Value"]).otherwise(0)
        ).alias("Summed Value")
    )
)
</code></pre>
|
<python><dataframe><datetime><python-polars>
|
2024-07-03 21:50:47
| 2
| 1,195
|
Simon
|
78,704,099
| 21,152,416
|
How to make override decorator mandatory?
|
<p>I came across new <a href="https://docs.python.org/3/whatsnew/3.12.html#pep-698-override-decorator-for-static-typing" rel="nofollow noreferrer">override decorator</a> for Python 3.12 and it looks like an extremely good practice.</p>
<p>I'm wondering if there is a way to make it "mandatory" to use? Perhaps there is some way to configure <code>mypy</code> for it? Or at least configure linters to show warnings.</p>
<pre class="lang-py prettyprint-override"><code>from typing import override
class Base:
    def log_status(self) -> None:
        ...


class Sub(Base):
    @override  # I would like to make it mandatory while overriding a method
    def log_status(self) -> None:
        ...
</code></pre>
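For reference, a configuration sketch (hedged: the error-code name and minimum version are from my recollection; I believe mypy 1.5+ ships an opt-in <code>explicit-override</code> error code implementing PEP 698, which flags overriding methods that lack the decorator):

```ini
# mypy.ini -- opt in to requiring @override on overriding methods
[mypy]
enable_error_code = explicit-override
```

Verifying the exact code name against the mypy error-code documentation is advisable before relying on it.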
|
<python><mypy><python-typing>
|
2024-07-03 20:32:54
| 1
| 1,197
|
Victor Egiazarian
|
78,704,004
| 1,285,061
|
Django accounts custom AuthenticationForm failing as invalid
|
<p>Why is this giving <code>form invalid</code>?
My username/password input is correct.</p>
<p>forms.py</p>
<pre><code>class CustomAuthForm(AuthenticationForm):
username = forms.CharField(required=True, max_length = 50,
widget=forms.EmailInput(attrs={"placeholder": "Email", "class":"form-control"}))
password = forms.CharField(required=True, max_length = 50,
widget=forms.PasswordInput(attrs={"placeholder":"Password", "class":"form-control"}))
</code></pre>
<p>views.py</p>
<pre><code>@csrf_protect
def user_login(request):
if request.user.is_authenticated:
return redirect('/')
form=CustomAuthForm(request.POST or None)
print(request.POST)
if form.is_valid():
print(form.cleaned_data)
else:
print ('form invalid')
</code></pre>
<p>Console print</p>
<pre><code><QueryDict: {'csrfmiddlewaretoken': ['mt5a3e9KyCbVg4OokaDeCu97EDrHVInAwVJmmK3a2xn0Nn4KRi0gt7axWJyRDMmT'],
'username': ['myemail@gmail.com'], 'password': ['mypass']}>
form invalid
</code></pre>
|
<python><python-3.x><django><django-forms>
|
2024-07-03 19:56:45
| 1
| 3,201
|
Majoris
|
78,703,950
| 12,436,050
|
Explode multiple columns in pandas dataframe
|
<p>I am trying to explode below dataframe based on a delimiter '|'.</p>
<pre><code> preferred_title_symbol mim_number MDR_code MDR_term
0 17-BETA HYDROXYSTEROID DEHYDROGENASE III 264300.0 10037122 pseudohermaphroditism
1 2,4-DIENOYL-CoA REDUCTASE DEFICIENCY 616034.0 10027417 | 10081311 | 10059750 leukodystrophy | metabolic acidosis | muscle retention
</code></pre>
<p>The expected output is this:</p>
<pre><code> preferred_title_symbol mim_number MDR_code MDR_term
0 17-BETA HYDROXYSTEROID DEHYDROGENASE III 264300.0 10037122 pseudohermaphroditism
1 2,4-DIENOYL-CoA REDUCTASE DEFICIENCY 616034.0 10027417 leukodystrophy
1 2,4-DIENOYL-CoA REDUCTASE DEFICIENCY 616034.0 10081311 metabolic acidosis
1 2,4-DIENOYL-CoA REDUCTASE DEFICIENCY 616034.0 10059750 muscle retention
</code></pre>
<p>I am using below code to do that.</p>
<pre><code>out = (
    pd.concat(
        [df[col].str.split(r"\|").explode() for col in ["MDR_code", "MDR_term"]],
        axis=1,
    )
    .join(df["preferred_title_symbol"])
)
</code></pre>
<p>However, I get the error below:
<code>ValueError: cannot reindex on an axis with duplicate labels</code></p>
<p>How can I achieve the desired output?</p>
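One alternative sketch (using hypothetical two-row sample data, since the full frame isn't reproduced here): split both columns into lists first, then explode them together, which keeps the rows aligned and avoids the duplicate-index reindex entirely. Multi-column `explode` requires pandas 1.3+.

```python
import pandas as pd

df = pd.DataFrame({
    'preferred_title_symbol': ['GENE-A', 'GENE-B'],          # illustrative values
    'mim_number': [264300.0, 616034.0],
    'MDR_code': ['10037122', '10027417 | 10081311'],
    'MDR_term': ['pseudohermaphroditism', 'leukodystrophy | metabolic acidosis'],
})

cols = ['MDR_code', 'MDR_term']
out = (
    df.assign(**{c: df[c].str.split(r'\s*\|\s*', regex=True) for c in cols})
      .explode(cols)            # pandas >= 1.3: explode aligned list columns together
      .reset_index(drop=True)
)
```

The `\s*\|\s*` pattern also trims the whitespace around each delimiter during the split.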
|
<python><pandas><explode>
|
2024-07-03 19:37:29
| 1
| 1,495
|
rshar
|
78,703,910
| 8,057,872
|
How to Resolve ArgumentParser Conflict with -h/--help in Subcommands?
|
<p>I'm developing a Python command-line tool using argparse with multiple modules, each having its own set of arguments. I'm using separate classes to define each module's arguments and integrate them into a main script. However, I'm encountering an issue where the <code>-h/--help</code> argument is conflicting when running the help command for a subcommand.</p>
<p>Here's a simplified version of my setup:</p>
<p>fastp_arguments.py:</p>
<pre class="lang-py prettyprint-override"><code>import argparse
class FastpArguments:
    def __init__(self, args=None):
        self.parser = argparse.ArgumentParser(description="fastp arguments.", add_help=False)
        self.args = args if args is not None else []
        self._initialize_parser()

    def _initialize_parser(self):
        fastp_args = self.parser.add_argument_group("Fastp Options")
        fastp_args.add_argument("-i", "--in1", required=True, help="read1 input file name")
        fastp_args.add_argument("-o", "--out1", required=True, help="read1 output file name")
        fastp_args.add_argument("-I", "--in2", help="read2 input file name")
        fastp_args.add_argument("-O", "--out2", help="read2 output file name")

    def parse_arguments(self):
        self.args = self.parser.parse_args(self.args)
        return self.args
</code></pre>
<p>main.py:</p>
<pre class="lang-py prettyprint-override"><code>import argparse
from fastp_arguments import FastpArguments
def main():
    parser = argparse.ArgumentParser(description='My Tool', add_help=False)
    parser.add_argument('-h', '--help', action='help', help='Show this help message and exit')

    subparsers = parser.add_subparsers(dest='tool', help='Sub-command help')

    fastp_args_class = FastpArguments()
    fastp_parser = subparsers.add_parser('fastp', help='Fastp Help', parents=[fastp_args_class.parser], add_help=False)
    fastp_parser.add_argument('-h', '--help', action='help', help='Show this help message and exit')

    args = parser.parse_args()

    if args.tool == 'fastp':
        fastp_args_class.args = args
        fastp_args_class.parse_arguments()
        print("Fastp selected")
        print(fastp_args_class.args)
    else:
        parser.print_help()


if __name__ == '__main__':
    main()
</code></pre>
<p><strong>Issue</strong></p>
<p>When I run <code>seqsightqc fastp -h</code>, I get the following error:</p>
<pre class="lang-bash prettyprint-override"><code>argparse.ArgumentError: argument -h/--help: conflicting option strings: -h, --help
</code></pre>
<p><strong>What I've Tried</strong></p>
<ul>
<li>Setting <code>add_help=False</code> for the parent parsers.</li>
<li>Manually adding the <code>-h/--help</code> argument to both the top-level parser and the subparsers.</li>
</ul>
<p><strong>Desired Outcome</strong></p>
<p>I want to be able to run <code>seqsightqc fastp -h</code> and see the help message for the fastp subcommand without any conflicts or errors.</p>
<p><strong>Question</strong></p>
<p>How can I resolve the <code>argparse</code> conflict with the <code>-h/--help</code> argument when using subcommands in this setup?
Specifically, I want the final output of <code>-h</code> to print all the arguments of all the modules!</p>
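The traceback suggests <code>-h</code> is being registered twice on the subparser: once manually and once because <code>add_parser</code> defaults to <code>add_help=True</code>. A sketch that leans on the defaults instead (option names shortened for illustration), keeping <code>add_help=False</code> only on the parent parser that is shared via <code>parents=</code>:

```python
import argparse

# Shared options live in a helper parser with add_help=False, so its
# (absent) -h never collides with the -h argparse adds automatically.
fastp_options = argparse.ArgumentParser(add_help=False)
group = fastp_options.add_argument_group("Fastp Options")
group.add_argument("-i", "--in1", required=True, help="read1 input file name")
group.add_argument("-o", "--out1", required=True, help="read1 output file name")

parser = argparse.ArgumentParser(description="My Tool")   # default add_help=True
subparsers = parser.add_subparsers(dest="tool")
# add_parser also defaults to add_help=True, so 'fastp -h' prints the
# inherited group's options without any manual -h registration.
subparsers.add_parser("fastp", help="Fastp Help", parents=[fastp_options])

args = parser.parse_args(["fastp", "-i", "r1.fq", "-o", "out.fq"])
```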
|
<python><argparse>
|
2024-07-03 19:25:51
| 1
| 548
|
Mahdi Baghbanzadeh
|
78,703,837
| 13,834,264
|
Unable to parse JSON string obtained from attribute in a HTML tag in Python
|
<p>I am making an AJAX call to an endpoint (I didn't create this API) where the response is in JSON form. Within the JSON there is a key called <code>content</code> of type <code>string</code>. This content appears to be HTML data which contains some JSON inside. I want to parse the JSON contained within the HTML data, but I keep getting the following error when I attempt a <code>json.loads()</code> of the string:</p>
<pre><code>{JSONDecodeError}JSONDecodeError('Expecting property name enclosed in double quotes: line 1 column 2 (char 1)')
</code></pre>
<p>and I don't really understand why I am getting this error</p>
<p>Here is the JSON string I am trying to parse:</p>
<pre><code>{\"name\":\"ThreadMainListItemNormalizer\",\"props\":{\"thread\":{\"threadId\":4369992,\"threadTypeId\":1,\"titleSlug\":\"sebamed-sale-extra-soft-baby-cream-ps239-anti-dandruff-shampoo-ps387\",\"title\":\"Sebamed sale - extra soft baby cream \£2.39 / anti dandruff shampoo \£3.87\",\"currentUserVoteDirection\":\"\",\"commentCount\":0,\"status\":\"Activated\",\"isExpired\":false,\"isNew\":true,\"isPinned\":false,\"isTrending\":null,\"isBookmarked\":false,\"isLocal\":false,\"temperature\":0,\"temperatureLevel\":\"\",\"type\":\"Deal\",\"nsfw\":false,\"deletedAt\":null,\"publishedAt\":1720003748,\"voucherCode\":\"\",\"link\":\"https://www.justmylook.com/sebamed-m583\",\"merchant\":{\"merchantId\":45518,\"merchantName\":\"Justmylook\",\"merchantUrlName\":\"justmylook.co.uk\",\"isMerchantPageEnabled\":true},\"price\":2.39,\"nextBestPrice\":0,\"percentage\":0,\"discountType\":null,\"shipping\":{\"isFree\":1,\"price\":0},\"user\":{\"userId\":2701300,\"username\":\"Manish_N\",\"title\":\"\",\"avatar\":{\"path\":\"users/raw/default\",\"name\":\"2701300_6\",\"slotId\":\"default\",\"width\":0,\"height\":0,\"version\":6,\"unattached\":false,\"uid\":\"2701300_6.raw\",\"ext\":\"raw\"},\"persona\":{\"text\":null,\"type\":null},\"isBanned\":false,\"isDeletedOrPendingDeletion\":false,\"isUserProfileHidden\":false}}}}
</code></pre>
<p>If I paste the above JSON string at <a href="https://jsonformatter.curiousconcept.com/#" rel="nofollow noreferrer">this online JSON validator tool</a> it says that it is invalid JSON, however, when I unescape the <a href="https://www.freeformatter.com/json-escape.html#before-output" rel="nofollow noreferrer">JSON using this tool</a> I get the following output:</p>
<pre><code>"name":"ThreadMainListItemNormalizer","props":{"thread":{"threadId":4369991,"threadTypeId":1,"titleSlug":"samsung-55-qn700c-neo-qled-8k-hdr-smart-tv","title":"Samsung 55\" QN700C Neo QLED 8K HDR Smart TV Sold by Reliant Direct FBA","currentUserVoteDirection":"","commentCount":0,"status":"Activated","isExpired":false,"isNew":true,"isPinned":false,"isTrending":null,"isBookmarked":false,"isLocal":false,"temperature":0.59,"temperatureLevel":"Hot1","type":"Deal","nsfw":false,"deletedAt":null,"publishedAt":1720003637,"voucherCode":"","link":"https://www.amazon.co.uk/dp/B0BWFNLPTP?smid=A2CN43WDI0AWCL","merchant":{"merchantId":1650,"merchantName":"Amazon","merchantUrlName":"amazon-uk","isMerchantPageEnabled":true},"price":999,"nextBestPrice":1198,"percentage":0,"discountType":null,"shipping":{"isFree":1,"price":0},"user":{"userId":2679277,"username":"ben.jammin","title":"","avatar":{"path":"users/raw/default","name":"2679277_1","slotId":"default","width":0,"height":0,"version":1,"unattached":false,"uid":"2679277_1.raw","ext":"raw"},"persona":{"text":null,"type":null},"isBanned":false,"isDeletedOrPendingDeletion":false,"isUserProfileHidden":false}}}}
</code></pre>
<p>which is in fact valid JSON. My issue arises when I try to replicate the unescape tool and unescape the string within Python.</p>
<p>I have tried the following solutions</p>
<ul>
<li><p><a href="https://stackoverflow.com/a/69772725/13834264">Using <code>ast.literal_eval()</code></a> but I get the following error</p>
<pre><code>{SyntaxError}SyntaxError('unexpected character after line continuation character', ('<unknown>', 1, 3, '{\\"name\\":\\"ThreadMainListItemNo...:null,\\"type\\":null},\\"isBanned\\":false,\\"isDeletedOrPendingDeletion\\":false,\\"isUserProfileHidden\\":false}}}}', 1, 0))
</code></pre>
</li>
<li><p>Using <code>.encode('raw_unicode_escape').decode('unicode_escape')</code> method outlined <a href="https://stackoverflow.com/a/69772725/13834264">here</a> but after doing a <code>json.loads()</code> of the unescaped string I get the following error</p>
<pre><code>{JSONDecodeError}JSONDecodeError('Invalid \\escape: line 1 column 224 (char 223)')
</code></pre>
</li>
</ul>
<p><a href="https://pastecode.io/s/gwf0tcmg" rel="nofollow noreferrer">Here is the full API response as requested.</a> I am interested in the value of the <code>content</code> key</p>
<p><strong>UPDATE:</strong></p>
<p>I think the issue is that I have some invalid escape characters in the string e.g. <code>\£</code>. I followed the solution <a href="https://stackoverflow.com/a/42208862/13834264">here</a> and it's resolved my issue.</p>
<p>Does anyone have any idea why this API might be including an escaped <code>£</code> symbol?</p>
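Building on the update: a hedged sketch of stripping backslashes that don't begin a valid JSON escape before calling <code>json.loads</code> (a blunt workaround using a shortened stand-in payload; it assumes the payload's only invalid escapes look like <code>\£</code>):

```python
import json
import re

# Shortened stand-in for the escaped 'content' value from the API.
raw = r'{\"title\":\"Cream \£2.39\",\"price\":2.39}'

# First undo the escaped quotes, then drop any backslash that does not
# start a valid JSON escape (JSON allows only \" \\ \/ \b \f \n \r \t \uXXXX).
unescaped = raw.replace(r'\"', '"')
cleaned = re.sub(r'\\(?!["\\/bfnrtu])', '', unescaped)
data = json.loads(cleaned)
```

Why the API escapes <code>£</code> at all likely comes down to a server-side serializer escaping every non-ASCII character, valid or not.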
|
<python><json><beautifulsoup>
|
2024-07-03 19:05:01
| 2
| 965
|
knowledge_seeker
|
78,703,690
| 8,547,986
|
'pipx' using the latest Python 3.12 instead of the system default
|
<p>I am using <code>pyenv</code> for managing a Python installation in my system. Using <code>pyenv</code>, I have installed <code>python3.11.9</code> and set it as default with <code>pyenv global 3.11.9</code>, I have also added recommended commands by <code>pyenv</code> in my <em><a href="https://wiki.debian.org/Zsh#Configuration" rel="nofollow noreferrer">.zshrc</a></em> file such that my system treats Python 3.11.9 as the default Python.</p>
<p>Now, when I install <code>pipx</code> using <code>brew install pipx</code>, it also installs <code>python3.12</code>. And any installation done with <code>pipx</code> then uses <code>python3.12</code> instead of the system default <code>3.11.9</code>.</p>
<p>How can I ensure that when I do <code>brew install pipx</code>, it installs using the default Python version instead of downloading the latest Python version?</p>
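If it helps, pipx can be pointed at a specific interpreter rather than the one Homebrew bundles; as far as I recall it honors a <code>PIPX_DEFAULT_PYTHON</code> environment variable, and <code>pipx install</code> accepts <code>--python</code> (both worth verifying against the pipx docs for your version):

```shell
# Point pipx at the pyenv interpreter (assumes pyenv shims are on PATH),
# e.g. from ~/.zshrc:
export PIPX_DEFAULT_PYTHON="$(pyenv which python)"

# Or per install:
pipx install some-tool --python "$(pyenv which python)"
```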
|
<python><python-3.x><homebrew><pyenv><pipx>
|
2024-07-03 18:21:20
| 2
| 1,923
|
monte
|
78,703,688
| 18,769,241
|
How to train yolov8 to recognize one additional image
|
<p>So I want to train YOLOv8 with a dataset containing one annotated image (using Roboflow) to add the label to the current model, so that the resulting trained model will recognize the new image.</p>
<p>First I get the one-image-dataset annotated in roboflow like so:</p>
<pre><code>dataset = version.download(model_format="yolov8", location="./datasets")
</code></pre>
<p>Then train the yolov8 model using:</p>
<pre><code>results = model.train(data="/roboflow_ml_image_detection/datasets/oups-1/data.yaml", epochs=5)
</code></pre>
<p>Then export the new model:</p>
<pre><code>success = model.export(format="onnx")
</code></pre>
<p>which I will be using again to do the prediction over the new image:</p>
<pre><code>model = YOLO('/roboflow_ml_image_detection/runs/detect/train29/weights/last.pt')
results = model(source='./weirdobject.jpg', conf=0.25)[0]
</code></pre>
<p>And finally try and detect the label of the image using <code>supervision</code>:</p>
<pre><code>import supervision as sv
detections = sv.Detections.from_ultralytics(results)
bounding_box_annotator = sv.BoundingBoxAnnotator()
label_annotator = sv.LabelAnnotator()
annotated_image = bounding_box_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)

sv.plot_image(annotated_image)
</code></pre>
<p>but the final call only shows the image without the bounding box or the label.</p>
<p>What's wrong here?</p>
|
<python><machine-learning><deep-learning><yolov8><roboflow>
|
2024-07-03 18:21:02
| 0
| 571
|
Sam
|
78,703,567
| 169,252
|
Can I use a python script to run as a daemon with async libs or do I need to use subprocess?
|
<p>I need to run a quick experiment with python.
However, I am stuck with an async issue. I am not very familiar with python async programming, so please bear with me.</p>
<p>What I want is to run a script which starts different daemons at different ports, and then waits for connections to be established to it.</p>
<p>I used basically the code below (which does not run, because I wanted to simplify for the question, but it should be enough to make the case. If not, please let me know).</p>
<p>The issue I have is that this code (probably obvious to the experts) stalls at the first <code>sleep_forever</code>. In other words, only one daemon gets started.</p>
<p>Can this be done at all from the same script? Or do I need subprocesses (or something else completely) to start?
I tried removing the <code>await</code> from <code>self.__start()</code>, but that results in an error saying <code>__start() was not awaited</code>. My thinking was that then the async function would indeed not be awaited and the script would move on, while the network stuff would be initiated and then it would wait. Looks like it doesn't work like that. I also tried starting a task, but then the task needs to be awaited as well?</p>
<pre><code>import trio
class Daemon:
    port: int

    @classmethod
    async def new(cls, port):
        self = cls()
        self.port = port
        # assume this returns an `AbstractAsyncContextManager`
        self.server = create_server()
        await self.__start()

    async def __start(self):
        print("starting with port {port}...".format(port=self.port))
        # init network socket
        async with self.server.start(port=self.port), trio.open_nursery() as nursery:
            # setup network objects
            print("server ok")
            # ->> WAIT FOR INCOMING Connections
            await trio.sleep_forever()
        print("exiting.")


async def run():
    d_list = []
    num = 3
    for i in range(num):
        d = await Daemon.new(5000 + i)
        d_list.append(d)


if __name__ == "__main__":
    trio.run(run)
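In trio terms, the usual fix for this pattern is to open a single nursery in <code>run()</code> and <code>nursery.start_soon(...)</code> each daemon, rather than awaiting each constructor in turn. A minimal stdlib-asyncio sketch of the same "spawn, don't await sequentially" idea (the ports and the short sleep are illustrative stand-ins for real server work):

```python
import asyncio

async def daemon(port, started):
    started.append(port)        # stand-in for binding a listening socket
    await asyncio.sleep(0.01)   # stand-in for serving (forever, in the real app)

async def run():
    started = []
    # Spawning all tasks up front lets every daemon start before any of
    # them blocks; awaiting each one in a loop stalls on the first server.
    await asyncio.gather(*(daemon(5000 + i, started) for i in range(3)))
    return started

started = asyncio.run(run())
```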
</code></pre>
|
<python><asynchronous><async-await><daemon>
|
2024-07-03 17:50:52
| 1
| 6,390
|
unsafe_where_true
|
78,703,543
| 5,838,180
|
Get the errorbars to bins of a dataset by using bootstrapping
|
<p>I have some data called <code>variable</code> that I can simulate in the following way:</p>
<pre><code>import pandas as pd
import numpy as np
from scipy.stats import bootstrap
random_values = np.random.uniform(low=0, high=10, size=10000)
df = pd.DataFrame({"variable": random_values})
</code></pre>
<p>I want to bin my data in 5 bins within the bins <code>bins = [0, 2, 4, 6, 8, 10]</code> and calculate to each of the bins error-bars with some bootstrapping method, e.g. the confidence interval of the 95% percent level. I figured out that the cumbersome thing is to calculate the error bars. I could do it with <code>scipy.stats.bootstrap</code> and then do</p>
<p><code>bootstrap(one_of_the_bins, my_statistic, confidence_level=0.95, method='percentile')</code></p>
<p>but it requires that I split my data into chunks according to the bins and loop over the chunks. So I wonder: is there a more convenient way to do this? Is there some functionality integrated into pandas for that? Or can I provide <code>scipy.stats</code> with my full data and the bins, and have scipy do the calculations for all the bins together? Thank you for any advice!</p>
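One compact shape for this (a sketch; the <code>scipy.stats.bootstrap</code> parameters are as I understand them, worth checking against the scipy docs) is to let pandas do the binning via <code>pd.cut</code> and run the bootstrap once per group inside <code>groupby().apply</code>:

```python
import numpy as np
import pandas as pd
from scipy.stats import bootstrap

rng = np.random.default_rng(0)
df = pd.DataFrame({"variable": rng.uniform(0, 10, size=2000)})
bins = [0, 2, 4, 6, 8, 10]

def ci(chunk):
    # Percentile-bootstrap CI of the mean for one bin's data.
    res = bootstrap((chunk.to_numpy(),), np.mean, n_resamples=999,
                    confidence_level=0.95, method='percentile')
    return pd.Series({"mean": chunk.mean(),
                      "ci_lo": res.confidence_interval.low,
                      "ci_hi": res.confidence_interval.high})

summary = (df.groupby(pd.cut(df["variable"], bins), observed=True)["variable"]
             .apply(ci)
             .unstack())
print(summary)
```

The per-bin loop still happens, but pandas manages the chunking, so no manual splitting is needed.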
|
<python><pandas><scipy><bootstrapping><errorbar>
|
2024-07-03 17:43:53
| 2
| 2,072
|
NeStack
|
78,703,540
| 3,486,684
|
What determines that a given Python type is coercible into a given pyarrow datatype?
|
<p>For example:</p>
<ul>
<li>I know a dictionary can be coerced into a struct; what else can be coerced into a struct? What determines whether a Python type is coercible into a <code>pyarrow</code> datatype? Convention? Documentation? Subclassing?</li>
<li>on subclassing: can I make a custom class a subclass of some <code>pyarrow</code> class so that my custom class is recognized by <code>pyarrow</code> as coercible into a <code>pyarrow</code> data type?</li>
</ul>
|
<python><pyarrow>
|
2024-07-03 17:42:46
| 0
| 4,654
|
bzm3r
|
78,703,537
| 11,163,122
|
Why does Python not allow Generics with isinstance checks?
|
<p>Running the below code with Python 3.12.4:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar
T = TypeVar("T")
class Foo(Generic[T]):
    def some_method(self) -> T:
        pass


isinstance(Foo[int](), Foo[int])
</code></pre>
<p>It will throw a <code>TypeError: Subscripted generics cannot be used with class and instance checks</code>.</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
  File "/path/to/a.py", line 9, in <module>
    isinstance(Foo[int](), Foo[int])
  File "/path/to/.pyenv/versions/3.12.4/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/typing.py", line 1213, in __instancecheck__
    return self.__subclasscheck__(type(obj))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/.pyenv/versions/3.12.4/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/typing.py", line 1216, in __subclasscheck__
    raise TypeError("Subscripted generics cannot be used with"
TypeError: Subscripted generics cannot be used with class and instance checks
</code></pre>
<p>What was the rationale for Python not allowing <code>isinstance</code> checks with <code>Generic</code>s?</p>
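The short version is that type arguments are erased at runtime: an instance of <code>Foo[int]</code> is just a <code>Foo</code>, so <code>isinstance</code> would have nothing to compare the <code>[int]</code> part against. A small sketch of what does and does not survive at runtime:

```python
from typing import Generic, TypeVar, get_args, get_origin

T = TypeVar("T")

class Foo(Generic[T]):
    pass

# The type parameter is erased: Foo[int]() constructs a plain Foo.
assert type(Foo[int]()) is Foo
assert isinstance(Foo[int](), Foo)        # checking the bare class is fine

# The *annotation object* keeps its parts, introspectable via typing:
assert get_origin(Foo[int]) is Foo
assert get_args(Foo[int]) == (int,)
```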
|
<python><generics><python-typing><isinstance>
|
2024-07-03 17:41:57
| 2
| 2,961
|
Intrastellar Explorer
|
78,703,500
| 3,486,684
|
Ways of creating a struct directly?
|
<p>I know I can create a Polars struct "scalar" indirectly, using dictionaries as elements when building a series. But is there any way in which I can create a Polars struct scalar directly? (Not in a series or a dataframe.)</p>
<p>For reasons I am not clear on, people think this question might be similar to <a href="https://stackoverflow.com/questions/78703478/ways-of-creating-a-pyarrow-structscalar-directly">Ways of creating a `pyarrow.StructScalar` directly?</a>.</p>
<p>It is not, because:</p>
<ul>
<li>while Polars uses Arrow under the hood, it only provides functionality for translating Arrow arrays into Polars Series, and Arrow tables into Polars DataFrames --- there is no functionality for converting Arrow scalars into Polars scalars</li>
<li>because Polars uses Arrow under the hood, <em>we can forget about the fact that Polars uses Arrow</em> when attempting to answer the question: "how can we create a Polars scalar?"</li>
<li>finally, note that while the other question had a relatively quick solution, <em>this one does not</em>!</li>
</ul>
|
<python><python-polars>
|
2024-07-03 17:32:11
| 1
| 4,654
|
bzm3r
|
78,703,489
| 6,622,697
|
Validating POST parameters with serializers in Django
|
<p>I'm trying to implement a simple validator for my POST parameters</p>
<p>My input looks like this:</p>
<pre><code>{
    "gage_id": "01010000",
    "forcing_source": "my_source",
    "forcing_path": "my_path"
}
</code></pre>
<p>I have the following Serializer:</p>
<pre><code>class SaveTab1Serializer(Serializer):
    gage_id = CharField(min_length=1, required=True),
    forcing_source = CharField(min_length=1, required=True),
    forcing_path = CharField(min_length=1, required=True),
</code></pre>
<p>And I use it like this:</p>
<pre><code>@api_view(['POST'])
def save_tab1(request):
    body = json.loads(request.body)
    ser = SaveTab1Serializer(data=body)
    print('serializer', ser.is_valid())
    print('errors', ser.errors)
</code></pre>
<p>But no matter what I do with the data, it only shows as valid with no errors. Is there more code that I need to add to the serializer to do the validation?</p>
|
<python><django><validation><django-rest-framework><django-serializer>
|
2024-07-03 17:28:19
| 1
| 1,348
|
Peter Kronenberg
|
78,703,478
| 3,486,684
|
Ways of creating a `pyarrow.StructScalar` directly?
|
<p>I know I can create a <code>pa.StructScalar</code> by casting:</p>
<pre><code>import pyarrow as pa
import pyarrow.compute as pac

struct_scalar = pac.cast(
    {"hello": "greetings", "world": 5},
    target_type=pa.struct([("hello", pa.string()), ("world", pa.int16())]),
)
print(f"{struct_scalar=}")
print(f"{struct_scalar.type=}")
</code></pre>
<p>But is there any other way in which I can create a struct scalar?</p>
|
<python><pyarrow>
|
2024-07-03 17:26:47
| 1
| 4,654
|
bzm3r
|
78,703,443
| 3,486,684
|
Infer `pyarrow.DataType` from Python type?
|
<p>Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, get_type_hints

import pyarrow as pa
import pyarrow.compute as pac

ArrowTypes = {
    int: pa.int64(),
    str: pa.string(),
}


class StructLike:
    __attr_types__: dict[str, Any] = {}
    __attr_annotations__: dict[str, Any]

    def __init__(self):
        self.__dict__["__attr_annotations__"] = get_type_hints(self)

    def __setattr__(self, name: str, value: Any) -> None:
        try:
            val_type = type(value)
            if issubclass(self.__attr_annotations__[name], val_type):
                self.__attr_types__[name] = type(value)
                self.__dict__[name] = value
            else:
                raise TypeError(
                    f"'{name}' should have type "
                    f"{self.__attr_annotations__[name]}, "
                    f"but instead it has type {val_type}"
                )
        except TypeError as e:
            raise e
        except KeyError:
            raise KeyError(
                f"Could not find '{name}' in {self.__attr_annotations__=}"
            )

    def as_dict(self) -> dict[str, Any]:
        return {k: getattr(self, k) for k in self.__attr_annotations__.keys()}

    def arrow_type(self) -> pa.StructType:
        return pa.struct(
            [
                pa.field(name, ArrowTypes[ty])
                for name, ty in self.__attr_types__.items()
            ]
        )

    def to_arrow(self) -> pa.StructScalar:
        return pac.cast(
            self.as_dict(),
            target_type=self.arrow_type(),
        )


class PyStruct(StructLike):
    aleph: str
    bet: int

    def __init__(self, aleph: str, bet: int):
        super().__init__()
        self.aleph = aleph
        self.bet = bet


test = PyStruct("hello", 5)
test.to_arrow()
</code></pre>
<p>It works:</p>
<pre><code><pyarrow.StructScalar: [('aleph', 'hello'), ('bet', 5)]>
</code></pre>
<p>But I have to rely on the hand-built <code>ArrowTypes</code> dictionary to map from Python types to a <code>pa.DataType</code> in the method <code>StructLike.arrow_type</code>. How can I infer a simple <code>pa.DataType</code> from a given (simple) Python type in an automatic fashion?</p>
|
<python><pyarrow>
|
2024-07-03 17:15:30
| 0
| 4,654
|
bzm3r
|
78,703,347
| 3,748,377
|
Invalid argument value. SQLSTATE=HY009 SQLCODE=-99999 with executemany() using Python
|
<p>I want to insert records into the DB in batches using <strong>executemany()</strong> in Python. Here is my code:</p>
<pre><code>def find_data(db_string):
    json_str = '[\\"{\\"NAME\\":\\"Aman\\",\\"EMAIL\\":\\"aman@gmail.com\\",\\"SALARY\\":785674}\\",\\"{\\"NAME\\":\\"Rahul\\",\\"EMAIL\\":\\"rahul@gmail.com\\",\\"SALARY\\":65489}\\", \\"{\\"NAME\\":\\"Vijay\\",\\"EMAIL\\":\\"vijay@gmail.com\\",\\"SALARY\\":1254798}\\"]'
    formatted_json = json_str.replace('\\','').replace('\"{','{').replace('}"','}')
    json_data = json.loads(formatted_json)
    try:
        bulk_operations = []
        with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
            futures_task = []
            for json_row in json_data:
                futures_task.append(executor.submit(insert_data, json_row, bulk_operations))
            concurrent.futures.wait(futures_task)
        if bulk_operations:
            insert_query = "Insert into Employee (name, email, salary, createdTs) values(?,?,?,?)"
            conn = ibm_db_dbi.connect(db_string)
            cur = conn.cursor()
            cur.executemany(insert_query, bulk_operations)
    except Exception as e:
        print("Error")

def insert_data(json_row, bulk_operations):
    try:
        # approach #1:
        name = str(json_row["NAME"])
        email = str(json_row["EMAIL"])
        salary = json_row["SALARY"]
        current_time = datetime.now()
        values = f"('{name}','{email}',{salary},'{current_time}')"
        bulk_operations.append(values)
        # approach #2:
        name1 = json_row["NAME"]
        email1 = json_row["EMAIL"]
        salary1 = json_row["SALARY"]
        current_time = datetime.now()
        record = [(name1, email1, salary1, current_time)]
        bulk_operations.append(record)
    except Exception as e:
        print("Error")
</code></pre>
<p>When running this I am getting below error :</p>
<p>[MainThread] ERROR ibm_db_dbi::DatabaseError: [IBM][CLI Driver] CLI0124E Invalid argument value. SQLSTATE=HY009 SQLCODE=-99999</p>
<p>Can anyone help me on this please?</p>
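<p>For reference, my understanding is that a DB-API <code>executemany()</code> expects a flat sequence of tuples, one per row. The sketch below uses the stdlib <code>sqlite3</code> driver rather than DB2, purely to illustrate the parameter shape I believe is expected:</p>

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (name TEXT, email TEXT, salary REAL, createdTs TEXT)")

# one plain tuple per row -- not an f-string, and not a list nested in another list
rows = [
    ("Aman", "aman@gmail.com", 785674, datetime.now().isoformat()),
    ("Rahul", "rahul@gmail.com", 65489, datetime.now().isoformat()),
]
conn.executemany(
    "INSERT INTO Employee (name, email, salary, createdTs) VALUES (?, ?, ?, ?)", rows
)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM Employee").fetchone()[0])
```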
|
<python><db2>
|
2024-07-03 16:50:01
| 0
| 380
|
ankit
|
78,703,317
| 5,525,901
|
Difference between .scalars().all() and list(...scalars()) in SQLAlchemy
|
<p>Are there any significant differences (performance, or otherwise) between the following two ways of getting a list of results?</p>
<pre class="lang-py prettyprint-override"><code>users: Sequence[User] = session.execute(select(User)).scalars().all()
</code></pre>
<pre class="lang-py prettyprint-override"><code>users: Sequence[User] = list(session.execute(select(User)).scalars())
</code></pre>
<p>The second seems more pythonic to me, and is possible because <code>ScalarResult</code> is iterable over its rows, so calling <code>list(...scalars())</code> just iterates over the <code>ScalarResult</code> object. However, since <code>.all()</code> also exists, I assume there must be some reason for it, so maybe it's more efficient to use <code>.all()</code>?</p>
<p>What are the differences between the two, and what, if anything, goes on behind the scenes that causes the difference?</p>
|
<python><sqlalchemy>
|
2024-07-03 16:44:02
| 1
| 1,752
|
Abraham Murciano Benzadon
|
78,703,313
| 4,332,644
|
segment_anything causing error with numpy.uint8
|
<p>I am trying to run <a href="https://github.com/facebookresearch/segment-anything/blob/main/notebooks/onnx_model_example.ipynb" rel="nofollow noreferrer">https://github.com/facebookresearch/segment-anything/blob/main/notebooks/onnx_model_example.ipynb</a> locally, on an M2 MacBook with Sonoma 14.5. However, I keep running into the following error at step 11:</p>
<pre><code>---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[75], line 1
----> 1 masks = mask_generator.generate(image)
File ~/opt/anaconda3/envs/ve_env/lib/python3.9/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File ~/opt/anaconda3/envs/ve_env/lib/python3.9/site-packages/segment_anything/automatic_mask_generator.py:163, in SamAutomaticMaskGenerator.generate(self, image)
138 """
139 Generates masks for the given image.
140
(...)
159 the mask, given in XYWH format.
160 """
162 # Generate masks
--> 163 mask_data = self._generate_masks(image)
165 # Filter small disconnected regions and holes in masks
166 if self.min_mask_region_area > 0:
File ~/opt/anaconda3/envs/ve_env/lib/python3.9/site-packages/segment_anything/automatic_mask_generator.py:206, in SamAutomaticMaskGenerator._generate_masks(self, image)
204 data = MaskData()
205 for crop_box, layer_idx in zip(crop_boxes, layer_idxs):
--> 206 crop_data = self._process_crop(image, crop_box, layer_idx, orig_size)
207 data.cat(crop_data)
209 # Remove duplicate masks between crops
File ~/opt/anaconda3/envs/ve_env/lib/python3.9/site-packages/segment_anything/automatic_mask_generator.py:236, in SamAutomaticMaskGenerator._process_crop(self, image, crop_box, crop_layer_idx, orig_size)
234 cropped_im = image[y0:y1, x0:x1, :]
235 cropped_im_size = cropped_im.shape[:2]
--> 236 self.predictor.set_image(cropped_im)
238 # Get points for this crop
239 points_scale = np.array(cropped_im_size)[None, ::-1]
File ~/opt/anaconda3/envs/ve_env/lib/python3.9/site-packages/segment_anything/predictor.py:57, in SamPredictor.set_image(self, image, image_format)
55 # Transform the image to the form expected by the model
56 input_image = self.transform.apply_image(image)
---> 57 input_image_torch = torch.as_tensor(input_image, device=self.device)
58 input_image_torch = input_image_torch.permute(2, 0, 1).contiguous()[None, :, :, :]
60 self.set_torch_image(input_image_torch, image.shape[:2])
RuntimeError: Could not infer dtype of numpy.uint8
</code></pre>
<p>I am using a conda environment with Python 3.9.19, and also tested with Python 3.11. Based on online comments I suspected this to be an issue with numpy versions, but having tried multiple versions I cannot find the correct combination. I am currently trying with the following:</p>
<pre><code>numpy==1.24.4
torch==1.9.0
torchvision==0.10.0
opencv-python==4.10.0.84
</code></pre>
<p>Running the same notebook on Google Colab works fine, and the versions indicated there are:</p>
<pre><code>import numpy as np
import torch
import cv2
print(np.__version__)
print(torch.__version__)
print(cv2.__version__)
1.25.2
2.3.0+cu121
4.8.0
</code></pre>
<p>This is using Python 3.10.12. These versions are not available on Mac, so I am stuck.</p>
<p>How can I find out why numpy.uint8 is not being recognized, and how can I fix this error? Most online comments point to upgrading numpy, but I have tried several numpy versions without luck. Any help is appreciated.</p>
|
<python><numpy><machine-learning><pytorch><segment-anything>
|
2024-07-03 16:43:17
| 1
| 3,201
|
LNI
|
78,703,282
| 1,126,944
|
Example Needed for "Directories With A Common Name Hiding Modules"
|
<p>When I read Python 3's documentation for modules, in <a href="https://docs.python.org/3/tutorial/modules.html#packages" rel="nofollow noreferrer">6.4 Packages</a>, it said (emphasis mine):</p>
<blockquote>
<p>The <code>__init__.py</code> files are required to make Python treat directories
containing the file as packages (unless using a namespace package, a
relatively advanced feature). This <strong>prevents directories with a
common name, such as <code>string</code>, from unintentionally hiding valid
modules</strong> that occur later on the module search path.</p>
</blockquote>
<p>Can anybody give an example to explain what the bolded part means?</p>
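<p>My own attempt at constructing the scenario (am I reading it right?) is below: a directory named <code>string</code> <em>without</em> <code>__init__.py</code> does not hide the stdlib module, because a namespace-package candidate only wins if no real module is found later on the path, whereas the same directory <em>with</em> <code>__init__.py</code> does hide it:</p>

```python
import importlib
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
os.mkdir(os.path.join(tmp, "string"))  # a directory with a common name, no __init__.py
sys.path.insert(0, tmp)
sys.modules.pop("string", None)        # forget any cached import
importlib.invalidate_caches()

mod_without_init = importlib.import_module("string")
print(mod_without_init.__file__)       # still the stdlib module: the bare directory did not hide it

# now turn the directory into a regular package
open(os.path.join(tmp, "string", "__init__.py"), "w").close()
sys.modules.pop("string", None)
importlib.invalidate_caches()

mod_with_init = importlib.import_module("string")
print(mod_with_init.__file__)          # now the (empty) local package hides the stdlib module

# clean up so the rest of the session still sees the real module
sys.modules["string"] = mod_without_init
sys.path.remove(tmp)
```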
|
<python>
|
2024-07-03 16:33:48
| 1
| 1,330
|
IcyBrk
|
78,703,240
| 2,153,235
|
pyplot.plot uses markerfacecolor while pyplot.scatter uses facecolor
|
<p>I come from a Matlab background, which is an inspiration for <code>matplotlib</code> (or so I understand, rightly or wrongly).</p>
<p>I was surprised to find that similar options for the different plots use different option names. For example, <code>pyplot.plot</code> uses <code>markerfacecolor</code> while <code>pyplot.scatter</code> uses <code>facecolor</code>.</p>
<p>In Matlab, such options are common across plots. I'm wondering how the differences arose in <code>matplotlib.pyplot</code>. Are the different plots developed by different groups?</p>
|
<python><matplotlib>
|
2024-07-03 16:24:27
| 1
| 1,265
|
user2153235
|
78,703,168
| 2,746,401
|
Annotate JSON schema properties in Python with msgspec (or pydantic)
|
<p>In the JSON schema produced from a <code>msgspec</code> Struct, I want to include text descriptions of the properties held within the Struct, in the same way that the Struct's docstring shows up in the schema.</p>
<p>This little toy example (cut down from <a href="https://jcristharif.com/msgspec/jsonschema.html" rel="nofollow noreferrer">https://jcristharif.com/msgspec/jsonschema.html</a>):</p>
<pre><code>import json

import msgspec
from msgspec import Struct

def print_schema(schema):
    encoded_schema = msgspec.json.encode(schema)
    formatted_schema = json.dumps(json.loads(encoded_schema), indent=4)
    print(formatted_schema)

class Product(Struct):
    """A product in a catalog"""

    id: int
    name: str
    price: float

schema = msgspec.json.schema(Product)
print_schema(schema)
</code></pre>
<p>outputs:</p>
<pre><code>{
    "$ref": "#/$defs/Product",
    "$defs": {
        "Product": {
            "title": "Product",
            "description": "A product in a catalog",
            "type": "object",
            "properties": {
                "id": {
                    "type": "integer"
                },
                "name": {
                    "type": "string"
                },
                "price": {
                    "type": "number"
                }
            },
            "required": [
                "id",
                "name",
                "price"
            ]
        }
    }
}
</code></pre>
<p>with <code>description</code> containing the docstring. I'd like to do something like</p>
<pre><code>class Product(Struct):
    """A product in a catalog"""

    id: int       # DB uid
    name: str     # Name of product
    price: float  # Price of product
</code></pre>
<p>and have the comments show up in the JSON schema against the appropriate property. Perhaps something like:</p>
<pre><code>{
    "$ref": "#/$defs/Product",
    "$defs": {
        "Product": {
            "title": "Product",
            "description": "A product in a catalog",
            "type": "object",
            "properties": {
                "id": {
                    "description": "DB uid",
                    "type": "integer"
                },
                "name": {
                    "description": "Name of product",
                    "type": "string"
                },
                "price": {
                    "description": "Price of product",
                    "type": "number"
                }
            },
            "required": [
                "id",
                "name",
                "price"
            ]
        }
    }
}
</code></pre>
<p>However, I don't know enough about JSON schemas to know if this is correct or valid although looking at <a href="https://json-schema.org/learn/getting-started-step-by-step" rel="nofollow noreferrer">https://json-schema.org/learn/getting-started-step-by-step</a> it seems about right.</p>
<p>How can I do this using <code>msgspec</code>? Or maybe rewrite my code to use <code>pydantic</code>? Thanks.</p>
|
<python><json><pydantic><msgspec>
|
2024-07-03 16:08:13
| 2
| 3,496
|
user2746401
|
78,703,154
| 6,622,697
|
Usage of OneToOneField in Django
|
<p>I am very confused about the usage of the <code>OnetoOneField</code>. I thought it was for a case where a given record can only have 1 reference to another table. For example, a Child has 1 Parent.</p>
<p>But it seems that Django makes the field that is defined as OneToOne a primary key of your table, meaning it has a Unique constraint. This doesn't make any sense. Any given Child has only 1 Parent, but there might be more than 1 child with the same parent. Then the FK to the parent is not going to be unique across the entire Child table.</p>
<p>I wanted to use OneToOne instead of ForeignKey in order to enforce the one-to-one aspect, as opposed to ForeignKey, which is 1 to Many (any given Child can have more than 1 parent).</p>
<p>Am I wrong in my understanding? Should I go back to using ForeignKey and just ensure that my code enforces 1-1?</p>
<p>I found these other links which asked the same question, but not sure that I saw a definitive answer
<a href="https://stackoverflow.com/questions/69054979/why-would-someone-set-primary-key-true-on-an-one-to-one-reationship-onetoonefie">Why would someone set primary_key=True on an One to one reationship (OneToOneField)?</a>
<a href="https://stackoverflow.com/questions/5870537/onetoonefield-vs-foreignkey-in-django">OneToOneField() vs ForeignKey() in Django</a></p>
|
<python><django><django-models><one-to-one>
|
2024-07-03 16:06:22
| 1
| 1,348
|
Peter Kronenberg
|
78,703,101
| 2,153,235
|
Prevent Spyder editor from opening on startup
|
<p>Is there any way to prevent the editor from opening when Spyder starts?</p>
<p><a href="https://stackoverflow.com/questions/48500051">This Q&A</a> is about opening files on startup <em>if</em> those files were open when Spyder was last closed. In my case, I usually have the editor closed when I quit Spyder.</p>
<p><a href="https://github.com/spyder-ide/spyder/issues/2998" rel="nofollow noreferrer">This GitHub issue</a> is about project-specific files that are kept open. I am not working in the context of a defined project, and I don't want any files remembered.</p>
|
<python><spyder>
|
2024-07-03 15:52:58
| 0
| 1,265
|
user2153235
|
78,703,076
| 5,269,892
|
PyCharm unresolved attribute reference
|
<p>When inverting a boolean mask, PyCharm shows an inspection warning <em>Unresolved attribute reference 'sum' for class 'int'</em>. It seems it does not know that the inverted boolean mask maintains the <em>sum()</em> method.</p>
<p><strong>Questions:</strong> What is the reason for this behavior in this specific case? Why does it not show the same warning for the original boolean mask? How can the inspection warning be avoided?</p>
<p><strong>Example code:</strong></p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Number': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]})
cond = (df['Number'] <= 5)
a = cond.sum()
# PyCharm shows "Unresolved attribute reference 'sum' for class 'int'"
b = (~cond).sum()
print(a)
print(b)
</code></pre>
|
<python><pandas><pycharm><code-inspection>
|
2024-07-03 15:46:46
| 1
| 1,314
|
silence_of_the_lambdas
|
78,702,950
| 2,153,235
|
Bring Spyder plot window to the front on Windows 10
|
<p>I am following <a href="https://stackoverflow.com/a/53669049">this answer</a> to bring plot windows to the front by choosing a different graphics backend:</p>
<pre><code>Tools > Preferences > IPython > Graphics > backend
</code></pre>
<p>The options are <code>Inline</code>, <code>Automatic</code>, <code>Qt5</code>, and <code>TKinter</code>. I don't want <code>Inline</code> and I prefer more explicit control than <code>Automatic</code>. Both <code>Qt5</code> and <code>TKinter</code> pull the plot window to the front when it is first created, i.e., when the initial plot is done. Here is my test code for creating the plot, adapted from <a href="https://hdbscan.readthedocs.io/en/latest/parameter_selection.html" rel="nofollow noreferrer">here</a>:</p>
<pre><code>from sklearn import datasets
import matplotlib.pyplot as plt

digits = datasets.load_digits()
data = digits.data

from sklearn.manifold import TSNE
projection = TSNE().fit_transform(data)

plot_kwds = {'alpha' : 0.5, 's' : 80, 'linewidths':1,
             'edgecolors':"Blue", 'facecolors':"None"}
plt.scatter(*projection.T, **plot_kwds)
</code></pre>
<p>The problem is that if I re-issue the <code>plt.scatter()</code> command (say, with different <code>plot_kwds</code> options), the plot window does <em>not</em> get pulled to the front. I'm going through a quick cycle of re-plots, so it's not efficient to use the mouse to bring the plot window to the front each time.</p>
<p><em><strong>Is there a command (or another way) that will bring the plot window to the front?</strong></em></p>
|
<python><matplotlib><spyder>
|
2024-07-03 15:19:51
| 1
| 1,265
|
user2153235
|
78,702,466
| 23,483,136
|
About the iterator protocol
|
<p>Why, in Python, is iterator exhaustion signalled by raising <code>StopIteration</code>, rather than by implementing an <code>IsDone()</code> method on the iterator, as described in the GoF Design Patterns book?
Raising an exception is, after all, a rather expensive operation.</p>
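<p>To make the comparison concrete, here is a sketch of the two styles (the GoF-style names are my own paraphrase):</p>

```python
# Python's protocol: exhaustion is signalled by raising StopIteration
class CountDown:
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self
    def __next__(self):
        if self.n == 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

# GoF-style explicit query (hypothetical -- not how Python does it)
class CountDownGoF:
    def __init__(self, n):
        self.n = n
    def is_done(self):
        return self.n == 0
    def next(self):
        self.n -= 1
        return self.n + 1

print(list(CountDown(3)))  # [3, 2, 1]

# consuming the GoF-style iterator requires an explicit loop
gof = CountDownGoF(3)
result = []
while not gof.is_done():
    result.append(gof.next())
print(result)  # [3, 2, 1]
```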
|
<python>
|
2024-07-03 13:43:59
| 0
| 354
|
Pavel
|
78,702,337
| 3,146,304
|
Tensorflow/Keras model raises output shape errors when loaded in another system
|
<p>I have a tf.keras LSTM model that I trained on Google Colab, and I want to load it on my laptop so I can run inference.
I created a Python virtual environment on my laptop with the same versions of Tensorflow/Keras as Google Colab (2.15) and the same version of Python (3.10).
Despite this, when I load the model I get the following errors:</p>
<pre><code>2024-07-03 15:01:12.725137: W tensorflow/core/common_runtime/graph_constructor.cc:840] Node 'cond/while' has 13 outputs but the _output_shapes attribute specifies shapes for 42 outputs. Output shapes may be inaccurate.
2024-07-03 15:01:13.713773: W tensorflow/core/common_runtime/graph_constructor.cc:840] Node 'cond/while' has 13 outputs but the _output_shapes attribute specifies shapes for 42 outputs. Output shapes may be inaccurate.
2024-07-03 15:01:14.703037: W tensorflow/core/common_runtime/graph_constructor.cc:840] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 42 outputs. Output shapes may be inaccurate.
2024-07-03 15:01:14.887920: W tensorflow/core/common_runtime/graph_constructor.cc:840] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 42 outputs. Output shapes may be inaccurate.
2024-07-03 15:01:15.375012: W tensorflow/core/common_runtime/graph_constructor.cc:840] Node 'cond/while' has 13 outputs but the _output_shapes attribute specifies shapes for 42 outputs. Output shapes may be inaccurate.
2024-07-03 15:01:15.841333: W tensorflow/core/common_runtime/graph_constructor.cc:840] Node 'cond/while' has 13 outputs but the _output_shapes attribute specifies shapes for 42 outputs. Output shapes may be inaccurate.
2024-07-03 15:01:15.884074: W tensorflow/core/common_runtime/graph_constructor.cc:840] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 42 outputs. Output shapes may be inaccurate.
2024-07-03 15:01:17.601834: W tensorflow/core/common_runtime/graph_constructor.cc:840] Node 'cond/while' has 13 outputs but the _output_shapes attribute specifies shapes for 42 outputs. Output shapes may be inaccurate.
2024-07-03 15:01:17.638458: W tensorflow/core/common_runtime/graph_constructor.cc:840] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 42 outputs. Output shapes may be inaccurate.
2024-07-03 15:01:18.121723: W tensorflow/core/common_runtime/graph_constructor.cc:840] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 42 outputs. Output shapes may be inaccurate.
</code></pre>
<p>The model is a <code>tf.keras.Sequential</code> model saved with its <code>save</code> method and loaded with the <code>tf.keras.models.load_model</code> method.
Searching for this problem, I saw that it is mainly caused by differences in the tensorflow/keras versions between the environment where the model is saved and the one where it is loaded, but that is not my case. The only difference is the operating system (Linux vs Windows). I also found a thread suggesting adding <code>compile=False</code> to the <code>tf.keras.models.load_model</code> call, but this changes nothing in my case.</p>
<p>Do you have any suggestions on how to solve this problem? I would like to have a model that can be ported to different devices, as long as the same Python virtual environment is used.</p>
|
<python><tensorflow><keras>
|
2024-07-03 13:14:54
| 0
| 389
|
Vitto
|
78,702,312
| 16,613,735
|
Python pandas-market-calendars
|
<p>Question on calendar derivation logic in the Python module <a href="https://pypi.org/project/pandas-market-calendars/" rel="nofollow noreferrer">https://pypi.org/project/pandas-market-calendars/</a>.
Does this module depend on any third-party APIs to get the calendars, or are the calendars derived based on <strong>rules within the code</strong>?</p>
|
<python><pandas><calendar>
|
2024-07-03 13:10:29
| 1
| 335
|
Dinakar Ullas
|
78,702,278
| 652,460
|
Why does readthedocs require authorisation to write commits when connecting to github?
|
<p>I'm trying to use readthedocs with a github repo that stores a Python project. When I try to connect it to my github profile, I am asked to authorise the following:</p>
<blockquote>
<p>This application will be able to read and write commit statuses (no direct code access).</p>
</blockquote>
<p>It shouldn't need to change anything in my github repo. Can someone explain this?</p>
|
<python><github><read-the-docs>
|
2024-07-03 13:03:42
| 1
| 834
|
mtanti
|
78,702,193
| 6,622,697
|
Automatically updated created_by/updated_by with Django authenticated user
|
<p>I have a BaseModel that looks like this:</p>
<pre><code>class BaseModel(models.Model):
    updated_by = models.ForeignKey(get_user_model(), related_name='+', on_delete=models.RESTRICT, db_column='updated_by')
    updated_at = models.DateTimeField(auto_now=True)
    created_by = models.ForeignKey(get_user_model(), related_name='+', on_delete=models.RESTRICT, db_column='created_by')
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        abstract = True
</code></pre>
<p>I want to automatically update the <code>updated_by</code> and <code>created_by</code> fields. But I don't have access to the <code>request</code> object. How can I get the user?</p>
|
<python><django><authentication>
|
2024-07-03 12:49:10
| 0
| 1,348
|
Peter Kronenberg
|
78,702,014
| 1,285,061
|
How to get the current domain name in Django template?
|
<p>How to get the current domain name in Django template?
Similar to {{domain}} for auth_views. I tried <code>{{ domain }}</code>, <code>{{ site }}</code>, and <code>{{ site_name }}</code> according to the documentation below. None of them worked.</p>
<pre><code><p class="text-right">&copy; Copyright {% now 'Y' %} {{ site_name }}</p>
</code></pre>
<p>It can be either IP address <code>192.168.1.1:8000</code> or <code>mydomain.com</code></p>
<p><a href="https://docs.djangoproject.com/en/5.0/ref/contrib/sites/" rel="nofollow noreferrer">https://docs.djangoproject.com/en/5.0/ref/contrib/sites/</a></p>
<blockquote>
<p>In the syndication framework, the templates for title and description automatically have access to a variable {{ site }}, which is the Site object representing the current site. Also, the hook for providing item URLs will use the domain from the current Site object if you don’t specify a fully-qualified domain.</p>
</blockquote>
<blockquote>
<p>In the authentication framework, django.contrib.auth.views.LoginView passes the current Site name to the template as {{ site_name }}.</p>
</blockquote>
|
<python><python-3.x><django><django-templates>
|
2024-07-03 12:15:46
| 2
| 3,201
|
Majoris
|
78,701,694
| 1,509,264
|
How to manually generate fixtures for Django Polymorphic Models?
|
<p>I have some <a href="https://django-polymorphic.readthedocs.io/en/stable/index.html" rel="nofollow noreferrer">Django Polymorphic</a> models:</p>
<pre class="lang-python prettyprint-override"><code>import uuid

from django.db import models
from polymorphic.models import PolymorphicModel


class Fruit(PolymorphicModel):
    class Meta:
        abstract = True


class Apple(Fruit):
    variety = models.CharField(max_length=30, primary_key=True)


class Grape(Fruit):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4)
    colour = models.CharField(max_length=30)
</code></pre>
<p>Then I can create some fixtures:</p>
<pre class="lang-json prettyprint-override"><code>[
{"model": "test_polymorphic.apple", "pk": "bramley", "fields": {}},
{"model": "test_polymorphic.apple", "pk": "granny smith", "fields": {}},
{"model": "test_polymorphic.grape", "pk": "00000000-0000-4000-8000-000000000000", "fields": { "colour": "red"} },
{"model": "test_polymorphic.grape", "pk": "00000000-0000-4000-8000-000000000001", "fields": { "colour": "green"} }
]
</code></pre>
<p>and use <code>python -m django loaddata fixture_name</code> to load it into the database which "appears" to be successful.</p>
<p>Then if I use:</p>
<pre class="lang-python prettyprint-override"><code>from test_polymorphic import models
models.Apple.objects.all()
</code></pre>
<p>It raises the error:</p>
<blockquote>
<pre class="lang-none prettyprint-override"><code>PolymorphicTypeUndefined: The model Apple#bramley does not have a `polymorphic_ctype_id` value defined.
If you created models outside polymorphic, e.g. through an import or migration, make sure the `polymorphic_ctype_id` field points to the ContentType ID of the model subclass.
</code></pre>
</blockquote>
<p>Using <code>loaddata</code> bypasses the <code>save()</code> method of the model so the default content-types are not set on the models.</p>
<p>I could find the appropriate content types using:</p>
<pre class="lang-python prettyprint-override"><code>from django.contrib.contenttypes.models import ContentType

for model_name in ("apple", "grape"):
    print(
        model_name,
        ContentType.objects.get(app_label="test_polymorphic", model=model_name).id,
    )
</code></pre>
<p>Which outputs:</p>
<blockquote>
<pre class="lang-none prettyprint-override"><code>apple: 3
grape: 2
</code></pre>
</blockquote>
<p>and then change the fixture to:</p>
<pre class="lang-json prettyprint-override"><code>[
  {
    "model": "test_polymorphic.apple",
    "pk": "bramley",
    "fields": { "polymorphic_ctype_id": 3 }
  },
  {
    "model": "test_polymorphic.apple",
    "pk": "granny smith",
    "fields": { "polymorphic_ctype_id": 3 }
  },
  {
    "model": "test_polymorphic.grape",
    "pk": "00000000-0000-4000-8000-000000000000",
    "fields": { "colour": "red", "polymorphic_ctype_id": 2 }
  },
  {
    "model": "test_polymorphic.grape",
    "pk": "00000000-0000-4000-8000-000000000001",
    "fields": { "colour": "green", "polymorphic_ctype_id": 2 }
  }
]
</code></pre>
<p>Including the polymorphic content type in the fields and then the fixture works. However, this value is effectively a magic number and I am not convinced that it will not vary if:</p>
<ul>
<li>The application is installed alongside other Django applications that have already generated content types; or</li>
<li>If other models are added to the application;</li>
<li>Etc.</li>
</ul>
<hr />
<h3>Question:</h3>
<p>Can I rely on the <code>polymorphic_ctype_id</code> to be a fixed value and, if not, how should I set the default polymorphic content type on the model when loading fixtures (if using <code>loaddata</code> bypasses the <code>save</code> method of the model and I cannot be certain what the id values of the content types would be)?</p>
|
<python><django><django-fixtures><django-polymorphic>
|
2024-07-03 11:12:55
| 1
| 172,539
|
MT0
|
78,701,537
| 2,071,807
|
What's the difference between Pint's "Quantity" and "PlainQuantity"?
|
<p>My IDE (type-checking provided by the Pyright language server) doesn't like me passing <code>pint.Quantity</code> instances to functions type-hinted with <code>pint.Quantity</code>:</p>
<pre class="lang-py prettyprint-override"><code>from pint import Quantity

def get_unit(quantity: Quantity):
    return quantity.units

get_unit(Quantity(value=1, units="m"))  # IDE complains here
</code></pre>
<p>My language server says this:</p>
<pre><code>Argument of type "PlainQuantity[Any]" cannot be assigned
to parameter "quantity" of type "Quantity" in function
"get_unit" [E]
</code></pre>
<p>My questions are: what is <code>PlainQuantity</code>? Why does <code>Quantity</code> not return an instance of <code>Quantity</code>? Am I doing something wrong?</p>
|
<python><python-typing><pyright><pint>
|
2024-07-03 10:39:32
| 1
| 79,775
|
LondonRob
|
78,701,509
| 4,732,111
|
How to replace a specific field inside a JSON string in each row of a csv file in Python with a random value?
|
<p>I have a CSV file named <em><code>input.csv</code></em> with the following columns:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">row_num</th>
<th style="text-align: center;">start_date_time</th>
<th style="text-align: right;">id</th>
<th style="text-align: right;">json_message</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">120</td>
<td style="text-align: center;">2024-02-02 00:01:00.001+00</td>
<td style="text-align: right;">1020240202450</td>
<td style="text-align: right;">{'amount': 10000, 'currency': 'NZD',<strong>'seqnbr': 161</strong> }</td>
</tr>
<tr>
<td style="text-align: left;">121</td>
<td style="text-align: center;">2024-02-02 00:02:00.001+00</td>
<td style="text-align: right;">1020240202451</td>
<td style="text-align: right;">{'amount': 20000, 'currency': 'AUD',<strong>'seqnbr': 162</strong> }</td>
</tr>
<tr>
<td style="text-align: left;">122</td>
<td style="text-align: center;">2024-02-02 00:03:00.001+00</td>
<td style="text-align: right;">1020240202452</td>
<td style="text-align: right;">{'amount': 30000, 'currency': 'USD',<strong>'seqnbr': None</strong> }</td>
</tr>
<tr>
<td style="text-align: left;">123</td>
<td style="text-align: center;">2024-02-02 00:04:00.001+00</td>
<td style="text-align: right;">1020240202455</td>
<td style="text-align: right;">{'amount': 40000, 'currency': 'INR',<strong>'seqnbr': 163</strong> }</td>
</tr>
</tbody>
</table></div>
<p>I'm reading this CSV file with <strong>Python 3</strong> and need to replace the <code>seqnbr</code> field inside the <em><code>json_message</code></em> column with a random integer for each row. If <code>seqnbr</code> is <code>None</code>, that row should not be changed but retained as is. The delimiter of my CSV file is the pipe (<code>|</code>) symbol. I'm using the Python code below to replace the value with a randomly generated integer, but it doesn't overwrite the file. Here is my code:</p>
<pre><code>import csv
import random
import re

def update_seqnbr(cls):
    filename = 'input.csv'
    seqnbr_pattern = r"'seqnbr': ([\s\d]+)"  # raw string avoids invalid-escape warnings
    with open(filename, 'r') as csvfile:
        datareader = csv.reader(csvfile, delimiter="|")
        next(datareader, None)  # skip the headers
        for row in datareader:
            json_message = row[3]
            match = re.findall(seqnbr_pattern, json_message)
            if len(match) != 0:
                replaced_json_message = json_message.replace(match[0], str(random.randint(500, 999)))
                row[3] = replaced_json_message
                x = open(filename, "a")  # appends while still reading the same file
                x.writelines(row)
                x.close()
</code></pre>
<p>Below is how my file should look like:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">row_num</th>
<th style="text-align: center;">start_date_time</th>
<th style="text-align: right;">id</th>
<th style="text-align: right;">json_message</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">120</td>
<td style="text-align: center;">2024-02-02 00:01:00.001+00</td>
<td style="text-align: right;">1020240202450</td>
<td style="text-align: right;">{'amount': 10000, 'currency': 'NZD',<strong>'seqnbr': 555</strong> }</td>
</tr>
<tr>
<td style="text-align: left;">121</td>
<td style="text-align: center;">2024-02-02 00:02:00.001+00</td>
<td style="text-align: right;">1020240202451</td>
<td style="text-align: right;">{'amount': 20000, 'currency': 'AUD',<strong>'seqnbr': 897</strong> }</td>
</tr>
<tr>
<td style="text-align: left;">122</td>
<td style="text-align: center;">2024-02-02 00:03:00.001+00</td>
<td style="text-align: right;">1020240202452</td>
<td style="text-align: right;">{'amount': 30000, 'currency': 'USD',<strong>'seqnbr': None</strong> }</td>
</tr>
<tr>
<td style="text-align: left;">123</td>
<td style="text-align: center;">2024-02-02 00:04:00.001+00</td>
<td style="text-align: right;">1020240202455</td>
<td style="text-align: right;">{'amount': 40000, 'currency': 'INR',<strong>'seqnbr': 768</strong> }</td>
</tr>
</tbody>
</table></div>
<p>Can someone please help me with this?</p>
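One possible fix, sketched below with placeholder file names: read all rows first, substitute only the matched digits, then write the whole file once. Appending to the input file while still iterating over it (as in the code above) duplicates rows instead of overwriting them, and <code>\d+</code> never matches <code>None</code>, so those rows are left untouched automatically.

```python
import csv
import random
import re

def update_seqnbr(infile="input.csv", outfile="output.csv"):
    # Collect every row up front, then rewrite the file in one pass.
    seqnbr_pattern = re.compile(r"'seqnbr':\s*(\d+)")
    with open(infile, newline="") as src:
        rows = list(csv.reader(src, delimiter="|"))
    for row in rows[1:]:  # rows[0] is the header
        # \d+ does not match 'None', so those rows stay unchanged
        row[3] = seqnbr_pattern.sub(
            lambda m: "'seqnbr': " + str(random.randint(500, 999)), row[3])
    with open(outfile, "w", newline="") as dst:
        csv.writer(dst, delimiter="|").writerows(rows)
```

Writing to a separate output file (or a temporary file that is renamed over the original) avoids reading and appending to the same handle.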
|
<python><pandas><python-polars>
|
2024-07-03 10:32:22
| 3
| 363
|
Balaji Venkatachalam
|
78,701,495
| 16,327,154
|
What is the default accuracy scoring in cross_val_score() in sklearn?
|
<p>I have a regression model built with a random forest. I made preprocessing pipelines with scikit-learn and used <code>RandomForestRegressor</code> to predict. I want to measure the model's accuracy; because of the risk of over-fitting, I decided to use the <code>cross_val_score</code> function.</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

forest_reg = make_pipeline(preprocessing,
                           RandomForestRegressor(random_state=1))
acc = cross_val_score(forest_reg, data, labels, cv=10)
</code></pre>
<p>Then, I used this to get the accuracy:</p>
<pre class="lang-py prettyprint-override"><code>print(acc.mean(),acc.std())
</code></pre>
<p>It gives me around 0.84 and 0.06.</p>
<p>I understand the standard deviation part but how is the first one calculated? Is 0.84 good? Is there a better scoring way to get accuracy?</p>
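For context: when no <code>scoring</code> argument is passed, <code>cross_val_score</code> uses the estimator's own <code>.score()</code> method, which for scikit-learn regressors is the coefficient of determination R², not classification accuracy. So 0.84 roughly means the model explains 84% of the variance in the target. A plain-Python sketch of that metric:

```python
def r2_score(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot: the metric behind the default
    # .score() of scikit-learn regressors (dependency-free sketch).
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)   # total sum of squares
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # residuals
    return 1 - ss_res / ss_tot

print(r2_score([3, 5, 7], [2.5, 5.0, 7.5]))  # 0.9375
```

Other regression scorers (e.g. <code>scoring="neg_mean_absolute_error"</code>) can be passed explicitly if R² is not the metric you want.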
|
<python><scikit-learn>
|
2024-07-03 10:29:52
| 2
| 327
|
Mehan Alavi
|
78,701,305
| 1,735,184
|
pandas string selection
|
<p>I would like to extract rows containing a particular string. The string can be part of a larger, space-separated string (which should match) or part of another, continuous string (which should NOT match). The string can be at the start, middle or end of the value.</p>
<p>Example - say I would like to extract any row containing "HC":</p>
<pre><code>df = pd.DataFrame(columns=['test'])
df['test'] = ['HC', 'CHC', 'HC RD', 'RD', 'MRD', 'CEA', 'CEA HC']
test
0 HC
1 CHC
2 HC RD
3 RD
4 MRD
5 CEA
6 CEA HC
</code></pre>
<p>Desired output</p>
<pre><code> test
0 HC
2 HC RD
6 CEA HC
</code></pre>
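One way to express "whole word only" is a regex with word boundaries: <code>\bHC\b</code> matches <code>HC</code> in <code>'HC RD'</code> but not inside <code>'CHC'</code>. In pandas this would be <code>df[df['test'].str.contains(r'\bHC\b')]</code>; the same pattern demonstrated with plain <code>re</code> on the example values:

```python
import re

values = ['HC', 'CHC', 'HC RD', 'RD', 'MRD', 'CEA', 'CEA HC']
# \b asserts a word boundary, so HC must stand alone as a token;
# 'CHC' has no boundary between C and H and is therefore excluded.
pattern = re.compile(r'\bHC\b')
matched = [v for v in values if pattern.search(v)]
print(matched)  # ['HC', 'HC RD', 'CEA HC']
```

Note that <code>\b</code> also treats punctuation as a boundary (e.g. <code>'HC-RD'</code> would match); if only space-separated tokens should count, split on whitespace and test membership instead.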
|
<python><pandas><string><selection>
|
2024-07-03 09:52:16
| 1
| 1,708
|
branwen85
|
78,701,277
| 1,474,073
|
Streamlit app spams DEBUG logs despite loglevels are set correctly
|
<p>I have a streamlit app. I correctly set its log level to INFO, however I get tons of debug messages like this:</p>
<pre><code>[2024-07-03T09:42:03+0000 | DEBUG |inotify_buffer |watchdog.observers.inotify_buffer]: in-event <InotifyEvent: src_path=b'/app/streamlit/logs/myapp.log', wd=3, mask=IN_MODIFY, cookie=0, name='myapp.log'>
</code></pre>
<p>How do I suppress this? Setting the root level of the logger didn't change anything, and neither did setting the loglevel of <code>inotify_buffer</code> or <code>watchdog</code>.</p>
|
<python><streamlit>
|
2024-07-03 09:46:54
| 0
| 8,242
|
rabejens
|
78,701,097
| 9,458,268
|
How to dynamically create type annotations in Python?
|
<p>Normally, one can define a class with some type-annotated fields as follows:</p>
<pre><code>class A:
a: str
</code></pre>
<p>How can I do this dynamically using the <code>type</code> function?</p>
<p>I know how to handle the case where <code>a</code> is assigned a value, i.e. given</p>
<pre class="lang-py prettyprint-override"><code>class B:
a = 'hello world'
</code></pre>
<p>we can write <code>B = type('B', (), {'a': 'hello world'})</code>.</p>
<p>However, I don't know how to add the type annotation information to the <code>type</code> function dynamically.</p>
<p>For context, we have some functions that read this information to generate tests and API calls.</p>
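A class statement stores its annotations in the class's <code>__annotations__</code> dict, and <code>type()</code> accepts the same key in its namespace argument, so the annotation-only case can be built like this:

```python
from typing import get_type_hints

# Passing __annotations__ in the namespace dict mirrors what
# "class A: a: str" produces: an annotation with no assigned value.
A = type('A', (), {'__annotations__': {'a': str}})

print(get_type_hints(A))  # {'a': <class 'str'>}
```

Code that introspects annotations (e.g. via <code>typing.get_type_hints</code> or <code>A.__annotations__</code>) sees the dynamically created class exactly like the statically defined one.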
|
<python><metaprogramming>
|
2024-07-03 09:14:17
| 1
| 350
|
Clément Dato
|
78,701,013
| 113,586
|
Replacing pkg_resources `Distribution.run_script()`
|
<p>As pkg_resources is deprecated but the official migration guides don't offer migration paths for all of its API I'm seeking a replacement for <a href="https://setuptools.pypa.io/en/latest/pkg_resources.html#imetadataprovider-methods" rel="nofollow noreferrer"><code>Distribution.run_script()</code></a>.</p>
<p>As far as I can see its only purpose is to take a script shipped in <code>scripts</code> and either read it from the disk or get it with <code>Distribution.get_metadata()</code> and then execute it in the provided namespace. So my questions are:</p>
<ol>
<li>Is there a ready to use replacement for this (I suppose no)?</li>
<li>What is the <code>get_metadata()</code> code path for, is it there for zipped eggs or for some other reason? Is there a ready to use replacement for this code path, as the other codepath is much easier to replace? Or is it never relevant anyway?</li>
</ol>
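For the disk-read code path, a minimal stand-in can be written with the standard library alone; the function name and signature below are illustrative, not a drop-in replacement for the pkg_resources API:

```python
from pathlib import Path

def run_script(script_path, namespace):
    # Sketch of what Distribution.run_script() does on the disk path:
    # read the script's source and exec it in the caller's namespace,
    # with __file__/__name__ set as a directly executed script expects.
    source = Path(script_path).read_text()
    namespace["__file__"] = str(script_path)
    namespace["__name__"] = "__main__"
    code = compile(source, str(script_path), "exec")
    exec(code, namespace)
```

This covers scripts installed as real files; whether the <code>get_metadata()</code> branch (presumably for zipped eggs, where the script is not on disk) still matters in practice is exactly the open question above.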
|
<python><pkg-resources>
|
2024-07-03 08:55:10
| 1
| 25,704
|
wRAR
|