QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
β |
|---|---|---|---|---|---|---|---|---|
77,940,596
| 3,706,723
|
Azure Devops and uploading a Python package to Artifacts using pipeline | authentication issue
|
<p>I am trying to use the Azure pipeline to publish a Python package to Artifact's feed.
I could do it from my local machine and upload the package using Twine, but I have an authentication issue in the pipeline.</p>
<pre><code>trigger:
- main
pool:
vmImage: ubuntu-22.04
variables:
pip_cache_dir: '$(Pipeline.Workspace)/.pip_cache'
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.10'
addToPath: true
- bash: |
python -m venv worker_venv
source worker_venv/bin/activate
pip install --upgrade pip
pip install pipenv
pipenv requirements > requirements.txt
pipenv requirements --dev > requirements-dev.txt
pip install --cache-dir $(pip_cache_dir) -r ./requirements.txt
pip install --target="./.python_packages/lib/site-packages" --cache-dir $(pip_cache_dir) -r ./requirements.txt
displayName: 'Install tools'
- script: |
source worker_venv/bin/activate
python setup.py sdist bdist_wheel
displayName: 'Build package'
- task: TwineAuthenticate@1
inputs:
artifactFeed: sample-feed-01
- script: |
source worker_venv/bin/activate
python -m twine upload --verbose --config-file $(PYPIRC_PATH) --repository-url https://pkgs.dev.azure.com/**company**/Platform/_packaging/sample-feed-01/pypi/upload/ dist/*
env:
TWINE_USERNAME: "azure"
TWINE_PASSWORD: $(PYPI_TOKEN)
displayName: 'Upload package to Azure Artifacts'
</code></pre>
<p>I tried everything, including GPT-4, but the solutions seem wrong or outdated.
this is the error:</p>
<pre><code>/usr/bin/bash --noprofile --norc /home/vsts/work/_temp/75d0c60b-0c2a-44e9-be0f-29d838a3b86e.sh
Uploading distributions to
https://pkgs.dev.azure.com/**company**/Platform/_packaging/sample-feed-01/pypi/up
load/
INFO dist/**package**.whl (2.6 KB)
INFO dist/**package**.tar.gz (2.6 KB)
INFO username set by command options
INFO password set by command options
INFO username: azure
INFO password: <hidden>
Uploading **package**.whl
25l
0% ββββββββββββββββββββββββββββββββββββββββ 0.0/5.7 kB β’ --:-- β’ ?
100% ββββββββββββββββββββββββββββββββββββββββ 5.7/5.7 kB β’ 00:00 β’ ?
100% ββββββββββββββββββββββββββββββββββββββββ 5.7/5.7 kB β’ 00:00 β’ ?
25hINFO Response from
https://pkgs.dev.azure.com/**company**/Platform/_packaging/sample-feed-0
1/pypi/upload/:
401 Unauthorized
INFO ο»Ώ{"$id":"1","innerException":null,"message":"TF400813: The user
'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' is not authorized to access this
resource.","typeName":"Microsoft.TeamFoundation.Framework.Server.Unauth
orizedRequestException,
Microsoft.TeamFoundation.Framework.Server","typeKey":"UnauthorizedReque
stException","errorCode":0,"eventId":3000}
ERROR HTTPError: 401 Unauthorized from
https://pkgs.dev.azure.com/**company**/Platform/_packaging/sample-feed-0
1/pypi/upload/
Unauthorized
</code></pre>
<p>I have some doubts about <code>'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'</code> as user name, which I did not obfuscate, and that is what I see in the pipeline.
any help would be appreciated</p>
|
<python><azure-devops><artifact><twine>
|
2024-02-05 11:31:14
| 2
| 963
|
prhmma
|
77,940,593
| 13,392,257
|
How to parametrize table column creation with help of list?
|
<p>I want to use a list with column names to generate CREATE TABLE <code>sql</code> script.</p>
<p>I want to create a function.</p>
<pre><code>def create_table(column_names: List[str]):
pass
</code></pre>
<p>Example:</p>
<pre><code>Input ["col1", "col2", "colN"]
</code></pre>
<p>Output:</p>
<pre><code>psycopg2.sql.SQL('''CREATE TABLE my_table (
created_ts TIMESTAMPTZ,
col1 BYTEA,
col2 BYTEA,
...
colN BYTEA''')
</code></pre>
<p>Finally I want to execute this SQL statement</p>
<p>I read about</p>
<pre><code>import psycopg2
from psycopg2 import sql
columns = ["col1", "col2", "colN"]
stmt = sql.SQL('''CREATE TABLE my_table (
created_ts TIMESTAMPTZ,
{} BYTEA,
{} BYTEA,
{} BYTEA,
''').format(columns[0], columns[1], columns[2])
</code></pre>
<p>But problem is that number of columns may be different..</p>
|
<python><psycopg2>
|
2024-02-05 11:30:37
| 1
| 1,708
|
mascai
|
77,940,460
| 3,122,657
|
psycopg3 pool: all connections getting lost instantly after long idle time
|
<p>If it's a standalone persistant connection, I have no problem, connection lasts for hours.<br />
If I use psycopg(3) Connection pool, I make make requests, first requests are ok and my pool size stays at 5, at one point pool size decreases and I get a Pool Timeout when client makes a new request.<br />
Then I tried: start pool, do not request anything, just wait. After some time (around 1h) I look at postgresql (pg_stat_activity), I have 5 idle (=pool size) connections. Then I make a request from client, and all connections vanish at same time (I can see it from pg_stat_activity) and Pool Timeout, and situation is stuck.</p>
<p>I also tried to decrease max_timeout to 900 but still same issue.</p>
<pre><code>def init_pool(self, min_cnx=5):
cnx_str = f"host={DB_HOST} port={DB_PORT} dbname={DB_NAME} user={DB_USERNAME} password={DB_USERPWD}"
self.pool = ConnectionPool(conninfo=cnx_str, min_size=min_cnx, open=True, check=ConnectionPool.check_connection)
def query(self, q, dbv=None, debug=False) -> list:
print("pool size: ", len(self.pool._pool))
print("pool stats before: ", self.pool.get_stats())
with self.pool.connection() as cnx:
if cnx.closed:
self.pool.check()
raise ConnectionError("ERROR: PostgreSQL cnx from pool is closed.")
cnx.autocommit = True
cnx.row_factory = self.row_factory
with psycopg.ClientCursor(cnx) as rdc:
rdc.execute(q, dbv) if dbv else rdc.execute(q)
if debug and rdc._query:
print(rdc._query.query)
if rdc.description:
data = rdc.fetchall()
else:
data = []
print("pool stats after query: ", self.pool.get_stats())
print("pool stats after: ", self.pool.get_stats())
return data
</code></pre>
<p>And logs:</p>
<pre><code>[pid: 236344|app: 0|req: 26/26] () {56 vars in 1083 bytes} [Mon Feb 5 11:41:56 2024] POST /v1/user => generated 933 bytes in
109 msecs (HTTP/1.1 200) 8 headers in 749 bytes (1 switches on core 0)
pool size: 3
pool stats before: {'connections_num': 5, 'requests_num': 3, 'requests_queued': 1, 'connections_ms': 268, 'requests_wait_ms': 34, 'usage_ms
': 34, 'pool_min': 5, 'pool_max': 5, 'pool_size': 5, 'pool_available': 3, 'requests_waiting': 0}
pool stats after query: {'connections_num': 5, 'requests_num': 4, 'requests_queued': 1, 'connections_ms': 268, 'requests_wait_ms': 34, 'usa
ge_ms': 34, 'pool_min': 5, 'pool_max': 5, 'pool_size': 5, 'pool_available': 2, 'requests_waiting': 0}
pool stats after: {'connections_num': 5, 'requests_num': 4, 'requests_queued': 1, 'connections_ms': 268, 'requests_wait_ms': 34, 'usage_ms'
: 49, 'pool_min': 5, 'pool_max': 5, 'pool_size': 5, 'pool_available': 2, 'requests_waiting': 0}
[pid: 236344|app: 0|req: 28/28] () {56 vars in 1087 bytes} [Mon Feb 5 11:41:58 2024] POST /v1/iobjs => generated 4788 bytes i
n 29 msecs (HTTP/1.1 200) 6 headers in 302 bytes (1 switches on core 0)
[pid: 236344|app: 0|req: 29/29] () {54 vars in 816 bytes} [Mon Feb 5 11:42:05 2024] OPTIONS /v1/user/quit => generated 0 byte
s in 0 msecs (HTTP/1.1 200) 6 headers in 307 bytes (0 switches on core 0)
pool size: 0
pool stats before: {'connections_num': 5, 'requests_num': 6, 'requests_queued': 1, 'connections_ms': 268, 'requests_wait_ms': 34, 'usage_ms
': 62, 'pool_min': 5, 'pool_max': 5, 'pool_size': 5, 'pool_available': 0, 'requests_waiting': 0}
Traceback (most recent call last):
File "/var/srvr/log.py", line 68, in process
self.db.query(
File "/var/srvr/pg3p.py", line 71, in query
with self.pool.connection() as cnx:
File "/usr/local/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/var/srvr/lib/python3.12/site-packages/psycopg_pool/pool.py", line 170, in connection
conn = self.getconn(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/srvr/lib/python3.12/site-packages/psycopg_pool/pool.py", line 204, in getconn
raise PoolTimeout(
psycopg_pool.PoolTimeout: couldn't get a connection after 30.00 sec
pool size: 0
pool stats before: {'connections_num': 5, 'requests_num': 7, 'requests_queued': 2, 'connections_ms': 268, 'requests_wait_ms': 30035, 'usage
_ms': 62, 'requests_errors': 1, 'pool_min': 5, 'pool_max': 5, 'pool_size': 5, 'pool_available': 0, 'requests_waiting': 1}
</code></pre>
<p>EDIT:<br />
I switched back to 1 single persistant connection, it is very stable (days). Then following advices in comments, I moved back to pool with min_size=10 and max_size=20.<br />
No change in behaviour: pool is loosing connections without trying to initiate new ones to replace the lost (also tried 20 and 50 min/max, no difference)</p>
<pre><code>pool stats after: {'connections_num': 11, 'requests_num': 34, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'usage_m s': 67, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 5, 'requests_waiting': 0}
[pid: 282017|app: 0|req: 39/39] () {56 vars in 1087 bytes}[Thu Feb 8 11:02:17 2024] POST /v1/iobjs => generated 30081 bytes in 10 msecs (HTTP/1.1 200) 6 headers in 303 bytes (1 switches on core 0)
pool size: 5
pool stats before: {'connections_num': 11, 'requests_num': 34, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'usage_ ms': 67, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 5, 'requests_waiting': 0}
pool stats after query: {'connections_num': 11, 'requests_num': 35, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'u sage_ms': 67, 'connections_lost': 1, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 3, 'requests_waiting': 0}
pool stats after: {'connections_num': 11, 'requests_num': 35, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'usage_m s': 70, 'connections_lost': 1, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 3, 'requests_waiting': 0}
[pid: 282017|app: 0|req: 40/40] () {56 vars in 1087 bytes} [Thu Feb 8 11:02:17 2024] POST /v1/iobjs => generated 4788 bytes i n 5 msecs (HTTP/1.1 200) 6 headers in 302 bytes (1 switches on core 0)
[pid: 282017|app: 0|req: 41/41] () {54 vars in 808 bytes} [Thu Feb 8 11:02:26 2024] OPTIONS /v1/iobjs => generated 0 bytes in 0 msecs (HTTP/1.1 200) 6 headers in 307 bytes (0 switches on core 0)
[pid: 282017|app: 0|req: 42/42] () {54 vars in 814 bytes} [Thu Feb 8 11:02:26 2024] OPTIONS /v1/settings => generated 0 bytes in 0 msecs (HTTP/1.1 200) 6 headers in 307 bytes (0 switches on core 0)
pool size: 3
pool stats before: {'connections_num': 11, 'requests_num': 35, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'usage_ ms': 70, 'connections_lost': 1, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 3, 'requests_waiting': 0}
pool stats after query: {'connections_num': 11, 'requests_num': 36, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'u sage_ms': 70, 'connections_lost': 1, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 2, 'requests_waiting': 0}
pool stats after: {'connections_num': 11, 'requests_num': 36, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'usage_m s': 73, 'connections_lost': 1, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 2, 'requests_waiting': 0}
[pid: 282017|app: 0|req: 43/43] () {56 vars in 1087 bytes} [Thu Feb 8 11:02:26 2024] POST /v1/iobjs => generated 4788 bytes i n 6 msecs (HTTP/1.1 200) 6 headers in 302 bytes (1 switches on core 0)
Traceback (most recent call last): File "main.py", line 326, in v1_settings
row = db.query_row(
^^^^^^^^^^^^^
</code></pre>
<p>and postgresql logs (debug3) show nothing special (AFAIU):</p>
<pre><code>2024-02-08 11:02:17.075 CET [282007] pguser@maindb DEBUG: proc_exit(0): 2 callbacks to make
2024-02-08 11:02:17.075 CET [282007] pguser@maindb DEBUG: exit(0)
2024-02-08 11:02:17.075 CET [282007] pguser@maindb DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2024-02-08 11:02:17.075 CET [282007] pguser@maindb DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2024-02-08 11:02:17.075 CET [282007] pguser@maindb DEBUG: proc_exit(-1): 0 callbacks to make
2024-02-08 11:02:17.079 CET [281970] DEBUG: server process (PID 282007) exited with exit code 0
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: shmem_exit(0): 4 before_shmem_exit callbacks to make
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: shmem_exit(0): 6 on_shmem_exit callbacks to make
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: proc_exit(0): 2 callbacks to make
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: exit(0)
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: proc_exit(-1): 0 callbacks to make
2024-02-08 11:02:26.188 CET [282009] pguser@maindb DEBUG: shmem_exit(0): 4 before_shmem_exit callbacks to make
2024-02-08 11:02:26.189 CET [282009] pguser@maindb DEBUG: shmem_exit(0): 6 on_shmem_exit callbacks to make
2024-02-08 11:02:26.189 CET [282009] pguser@maindb DEBUG: proc_exit(0): 2 callbacks to make
2024-02-08 11:02:26.189 CET [282009] pguser@maindb DEBUG: exit(0)
2024-02-08 11:02:26.189 CET [282009] pguser@maindb DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2024-02-08 11:02:26.189 CET [282009] pguser@maindb DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2024-02-08 11:02:26.189 CET [282009] pguser@maindb DEBUG: proc_exit(-1): 0 callbacks to make
2024-02-08 11:02:26.189 CET [281970] DEBUG: server process (PID 282006) exited with exit code 0
2024-02-08 11:02:26.191 CET [282011] pguser@maindb DEBUG: shmem_exit(0): 4 before_shmem_exit callbacks to make
2024-02-08 11:02:26.191 CET [282011] pguser@maindb DEBUG: shmem_exit(0): 6 on_shmem_exit callbacks to make
2024-02-08 11:02:26.191 CET [282011] pguser@maindb DEBUG: proc_exit(0): 2 callbacks to make
2024-02-08 11:02:26.192 CET [282011] pguser@maindb DEBUG: exit(0)
2024-02-08 11:02:26.192 CET [282011] pguser@maindb DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2024-02-08 11:02:26.192 CET [282011] pguser@maindb DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2024-02-08 11:02:26.192 CET [282011] pguser@maindb DEBUG: proc_exit(-1): 0 callbacks to make
2024-02-08 11:02:26.193 CET [281970] DEBUG: server process (PID 282009) exited with exit code 0
2024-02-08 11:02:26.194 CET [281970] DEBUG: server process (PID 282011) exited with exit code 0
2024-02-08 11:02:33.979 CET [281970] DEBUG: postmaster received pmsignal signal
2024-02-08 11:02:33.983 CET [282844] DEBUG: InitPostgres
2024-02-08 11:02:33.985 CET [282844] DEBUG: autovacuum: processing database "template1"
</code></pre>
<p>query_row() being same as query():</p>
<pre><code>def query_row(self, q, dbv=None, debug=False):
with self.pool.connection() as cnx:
cnx.autocommit = True
cnx.row_factory = self.row_factory
with psycopg.ClientCursor(cnx) as c:
c.execute(q, dbv) if dbv else c.execute(q)
if debug and c._query:
print(c._query.query)
if c.rowcount == 1:
return c.fetchone()
else:
return None
</code></pre>
|
<python><postgresql><psycopg2><psycopg3>
|
2024-02-05 11:12:26
| 1
| 3,374
|
comte
|
77,940,377
| 2,175,012
|
Mamba/Micromamba: Add path to environment
|
<p>Conda has the <code>conda develop</code> command, which can be used to add paths to local python projects (e.g. self-written libraries that aren't distributed as packages) to the module path of an environment.</p>
<p>How can I achieve a similar effect with mamba / micromamba (or even independently of the package manager)? The lack of solutions I was able to find on this question makes me wonder if maybe the <code>conda develop</code> solution is an anti-pattern.</p>
<p>I would also prefer a solution that doesn't require me to package my dependency library (so that it e.g. can be installed via <code>pip install -e</code>)</p>
|
<python><conda><mamba>
|
2024-02-05 10:56:08
| 0
| 2,156
|
Xaser
|
77,940,303
| 258,009
|
Python mocking: basic errors mocking a class using patch/MagicMock
|
<p>Although I've successfully used google test and mock successfully for C++, I'm getting nowhere when it comes to using mocking in Python.</p>
<p>I'm trying to mock a small class in my code so that I can ensure the response from a particular function. I want to mock the whole class since the constructor does things that involve dependencies that I'd like to avoid. This seems like a fairly typical and mundane use case for mocking.</p>
<p>However, when I call the function on the mock, what I get is a <code>MagicMock</code> object, not the result I wanted. Taking a very simple example:</p>
<pre><code>class MyClass:
def value(self):
return 10
</code></pre>
<p>If I want to mock the <code>MyClass</code> class and change the <code>value</code> function so that it returns a different value, I believed that I could do this as follows:</p>
<pre><code>with patch('__main__.MyClass') as MockMyClass:
b = MyClass()
</code></pre>
<p>If I then attempt to set the <code>value</code> function to something different, it doesn't work the way I expect and calling the function on the mock seems to return a MagicMock object instead:</p>
<pre><code>>>> with patch('__main__.MyClass') as MockMyClass:
... MockMyClass.value = lambda : 22
... b = MyClass()
... print(b.value())
<MagicMock name='MyClass().value()' id='140661276636400'>
</code></pre>
<p>I've been through so many references and examples and I <em>still</em> don't get what I'm doing wrong. Feel so clueless - can anyone help me?</p>
|
<python><unit-testing><mocking><magicmock>
|
2024-02-05 10:43:37
| 1
| 10,557
|
Component 10
|
77,940,195
| 3,842,823
|
Overriding list entry in Hydra, from another YAML
|
<p>In Hydra config YAML files, I have the following structure in the file <code>inner.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>key_a:
- entry_a_1: xxxx
entry_a_2: xxxxx
- entry_a_3: xxxx
entry_a_4: xxxxx
</code></pre>
<p>What I want is to be able to override the <code>entry_a_M</code> from another YAML file, for example at the file <code>outer.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>defaults:
- inner_config@outer_inner_config: inner
outter_inner_config:
entry_a_1: YYYY
</code></pre>
<p>I have tried with <code>key_a.0.entry_a_1: WWWW</code> and other combinations but it doesn't work.</p>
<p>Please note:</p>
<ul>
<li>I don't want to override it from the CLI</li>
<li>If there are keys in each list item, e.g. <code>- key: [entry_a_1...]</code> then it can be done, as shown in question <a href="https://stackoverflow.com/questions/71565452/override-list-dictionalry-element-in-fb-hydra">here</a>. But it is not my case and it would not work with having that key in the list entry, in my case.</li>
</ul>
<p>Any answers on that?</p>
|
<python><fb-hydra>
|
2024-02-05 10:24:46
| 1
| 1,951
|
Xxxo
|
77,940,051
| 365,872
|
`asyncio.get_event_loop()` works in plain python but not in ipython
|
<p>I observe different behavior of <code>asyncio.get_event_loop()</code> in plain <code>python</code> console and in <code>ipython</code>.</p>
<p>Plain <code>python</code> - all as expected:</p>
<pre class="lang-py prettyprint-override"><code>Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
>>> asyncio.get_event_loop()
<stdin>:1: DeprecationWarning: There is no current event loop
<_UnixSelectorEventLoop running=False closed=False debug=False>
</code></pre>
<p><code>ipython</code> - getting the event loop fails:</p>
<pre class="lang-py prettyprint-override"><code>Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.21.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import asyncio
In [2]: asyncio.get_event_loop()
<ipython-input-2-6908e23590ee>:1: DeprecationWarning: There is no current event loop
asyncio.get_event_loop()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[2], line 1
----> 1 asyncio.get_event_loop()
File ~/.pyenv/versions/3.10.6/lib/python3.10/asyncio/events.py:656, in BaseDefaultEventLoopPolicy.get_event_loop(self)
653 self.set_event_loop(self.new_event_loop())
655 if self._local._loop is None:
--> 656 raise RuntimeError('There is no current event loop in thread %r.'
657 % threading.current_thread().name)
659 return self._local._loop
RuntimeError: There is no current event loop in thread 'MainThread'.
</code></pre>
<p>Moreover, creating and setting a loop explicitly doesn't work in IPython too:</p>
<pre class="lang-py prettyprint-override"><code>In [3]: loop = asyncio.set_event_loop(asyncio.new_event_loop())
In [4]: asyncio.get_event_loop()
<ipython-input-4-6908e23590ee>:1: DeprecationWarning: There is no current event loop
asyncio.get_event_loop()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[...]
RuntimeError: There is no current event loop in thread 'MainThread'.
</code></pre>
<p>The error is reproducible in some environment but not in the others:</p>
<pre><code>-----------------------------------------
host OS | runs in Docker* | reproduced
-----------------------------------------
Ubuntu | yes | yes**
Ubuntu | no | no
MacOS | yes | no
MacOS | no | no
</code></pre>
<p><code>*</code> - <code>docker -it python:3.10</code></p>
<p><code>**</code> - reproduced on 2 different machines</p>
<p>Any tips on what may go wrong with the version in IPython?</p>
|
<python><python-asyncio><ipython>
|
2024-02-05 10:02:17
| 0
| 28,682
|
ffriend
|
77,940,019
| 5,168,463
|
Spark read table from a specific location
|
<p>I have saved a dataframe as a table using the following code:</p>
<pre><code>yearly_calltype.write.option("path", "/home/user/tables/firstProject").saveAsTable('yearly_calltype_count')
</code></pre>
<p>But how do I read this table from this location?</p>
<p>When I am trying to do:</p>
<pre><code>spark.read.table("/home/user/tables/firstProject/yearly_calltype_count")
</code></pre>
<p>I am getting this error:</p>
<pre><code>[PARSE_SYNTAX_ERROR] Syntax error at or near '/'.(line 1, pos 0)
== SQL ==
/home/user/tables/firstProject/yearly_calltype_count
^^^
</code></pre>
<p>I believe when we try to read the tables, we cannot specify the location. And spark tries to read the table from default <code>/home/user/spark-warehouse</code> location. We can change this location by changing the <code>spark.sql.warehouse.dir</code> config. But I do not want to do that. Is there a way I can read this table by specifying the location of the table in the <code>read.table</code> ?</p>
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2024-02-05 09:57:18
| 1
| 515
|
DumbCoder
|
77,939,988
| 11,046,379
|
Django Backend doesn't return json to Angular frontend
|
<p>I have web project, backend on Django , frontend on Angular</p>
<p>Backend started as:</p>
<pre><code>python3 manage.py runserver 192.168.1.195:8080
</code></pre>
<p>Frontend started as:</p>
<pre><code>ng serve --host 192.168.1.195
</code></pre>
<p>on the 4200 port.</p>
<p>On the Django in CORS_ORIGIN_WHITELIST section in settings.py</p>
<pre><code>CORS_ORIGIN_WHITELIST = (
'http://192.168.1.195:8080',
'http://localhost:8081',
'http://192.168.1.195:4200',
'http://127.0.0.1:4200',
)
</code></pre>
<p>On the frontend side:</p>
<p><em><strong>service.ts:</strong></em></p>
<pre><code>var apiUrl = "http://192.168.1.195:8080";
var httpLink = {
// getAllEmployee: apiUrl + "/api/employee/getAllEmployee",
getAllEmployee: apiUrl + "/api/hosts_groups",
deleteEmployeeById: apiUrl + "/api/employee/deleteEmployeeById",
getEmployeeDetailById: apiUrl + "/api/employee/getEmployeeDetailById",
saveEmployee: apiUrl + "/api/employee/saveEmployee"
}
@Injectable({
providedIn: 'root'
})
export class HttpProviderService {
constructor(private webApiService: WebApiService) { }
public getAllEmployee(): Observable<any> {
return this.webApiService.get(httpLink.getAllEmployee);
}
</code></pre>
<p><em><strong>home.component.ts</strong></em></p>
<pre><code> @Component({
selector: 'app-home',
templateUrl: './home.component.html',
styleUrls: ['./home.component.scss']
})
export class HomeComponent implements OnInit {
closeResult = '';
employeeList: any = [];
constructor(private router: Router, private modalService: NgbModal,
private toastr: ToastrService, private httpProvider : HttpProviderService) { }
ngOnInit(): void {
this.getAllEmployee();
}
async getAllEmployee() {
this.httpProvider.getAllEmployee().subscribe((data : any) => {
if (data != null && data.body != null) {
var resultData = data.body;
if (resultData) {
this.employeeList = resultData;
}
}
},
(error : any)=> {
if (error) {
if (error.status == 404) {
if(error.error && error.error.message){
this.employeeList = [];
}
}
}
});
}
</code></pre>
<p>When i call <code>http://192.168.1.195:8080/api/hosts_groups</code> django returns json as excpected
and in console i see :</p>
<pre><code>"GET /api/hosts_groups HTTP/1.1" 200 85
</code></pre>
<p>But in case when i call</p>
<pre><code>http://192.168.1.195:4200
</code></pre>
<p>no json returned although in console i see:</p>
<pre><code>"OPTIONS /api/hosts_groups HTTP/1.1" 200 0
</code></pre>
|
<python><django><angular>
|
2024-02-05 09:52:55
| 0
| 1,658
|
harp1814
|
77,939,981
| 1,047,788
|
How to handle diamond dependency in Python?
|
<p>My project has two dependencies, and each of them transitively depends on a different major version of protobuf.</p>
<p>The general situation is well described</p>
<ul>
<li>in an article <a href="https://jlbp.dev/what-is-a-diamond-dependency-conflict" rel="nofollow noreferrer">Google Best Practices for Java Libraries: What is a diamond dependency conflict?</a></li>
<li>and also <a href="https://abseil.io/resources/swe-book/html/ch21.html#conflicting_requirements_and_diamond_de" rel="nofollow noreferrer">Software Engineering at Google; Dependency Management: Conflicting Requirements and Diamond Dependencies</a>.</li>
</ul>
<p>The specific problem in my project manifests with this error message from Poetry.</p>
<pre><code>kfp (1.8.21) depends on protobuf (>=3.13.0,<4)
and robotframework-browser (18.0.0) depends on protobuf (4.25.1)
</code></pre>
<p>These packages operate completely independently. They do not share data. If each was able to use its own protobuf version, things would work out. Is this possible in Python?</p>
|
<python><dependency-management>
|
2024-02-05 09:51:45
| 1
| 29,820
|
user7610
|
77,939,970
| 11,092,636
|
Inconsistent behaviour with Python loggings when creating a package (__init__.py and from . import module)
|
<p>Here is a MRE:</p>
<p><code>my_package/__init__.py</code></p>
<pre class="lang-py prettyprint-override"><code>import logging
from . import module, a_second_module
logging.basicConfig(level=logging.INFO)
</code></pre>
<p><code>my_package/module.py</code></p>
<pre class="lang-py prettyprint-override"><code>import logging
def module():
logging.info("Doing something...")
logging.debug("This is a debug message and won't show up by default.")
</code></pre>
<p><code>my_package/a_second_module.py</code></p>
<pre class="lang-py prettyprint-override"><code>import logging
from my_package_test_so_poetry.module import module
logging.warning("Warning from a_second_module.py")
def module2():
module()
print("Doing something else...")
</code></pre>
<p>The <code>logging.basicConfig</code> does not affect the logging behaviour of the second file (<code>module.py</code>).
Since when I do (in the terminal)</p>
<pre class="lang-py prettyprint-override"><code>python (to enter the Python console)
from my_package.module import module
module()
</code></pre>
<p>I get</p>
<pre class="lang-py prettyprint-override"><code>WARNING:root:Warning from a_second_module.py
</code></pre>
<p>But I'm lacking the</p>
<pre class="lang-py prettyprint-override"><code>INFO:root:Doing something...
</code></pre>
<p>of the function <code>module</code></p>
<p>I think the problem comes from the warning since the documentation says:</p>
<blockquote>
<p>Log a message with severity 'WARNING' on the root logger. If the
logger has no handlers, call basicConfig() to add a console handler
with a pre-defined format.</p>
</blockquote>
<p>But I'm not sure I understand why, my <code>logging.basicConfig(level=logging.INFO)</code> (in the <code>__init__.py</code> file) should be called <em>after</em> the warning message and <em>before</em> I call <code>module</code>. I'm 99% sure the culprit is the warning message though because everything works as expected without it.</p>
<p>I'm using <code>PyCharm 2023.3.3 (Professional Edition)</code> (and I assume other IDEs would have the same behaviour) and <code>Python 3.11.1</code>.</p>
|
<python><logging><package>
|
2024-02-05 09:49:29
| 0
| 720
|
FluidMechanics Potential Flows
|
77,939,924
| 859,591
|
Importing pandas and cplex in a conda environment raises an ImportError: libstdc++.so.6: version `GLIBCXX_3.4.29' not found
|
<p>Importing the Python libraries <a href="https://pandas.pydata.org/" rel="nofollow noreferrer"><code>pandas</code></a> and <a href="https://en.wikipedia.org/wiki/CPLEX" rel="nofollow noreferrer"><code>cplex</code></a> in an conda raises the following exception:</p>
<pre><code>ImportError, because 'GLIBCXX_3.4.29 not found'
</code></pre>
<p>When importing cplex first, there is no error (see code example below). Everything is done in a conda environment.</p>
<p><code>cplex</code> is a propertary libary (there is a <a href="https://community.ibm.com/community/user/ai-datascience/blogs/xavier-nodet1/2020/07/09/cplex-free-for-student" rel="nofollow noreferrer">free academic version for students</a>), but I am pretty sure that this issue is more general and can happen with any two libraries which use C++.</p>
<p>Many <a href="https://github.com/rstudio/reticulate/issues/1282" rel="nofollow noreferrer">others</a> <a href="https://stackoverflow.com/questions/74556574/importerror-lib-x86-64-linux-gnu-libstdc-so-6-version-glibcxx-3-4-30-not">seem</a> to
<a href="https://github.com/rstudio/reticulate/issues/841" rel="nofollow noreferrer">run</a>
<a href="https://stackoverflow.com/questions/48453497/anaconda-libstdc-so-6-version-glibcxx-3-4-20-not-found">into</a>
<a href="https://github.com/ContinuumIO/anaconda-issues/issues/483" rel="nofollow noreferrer">similar</a>
<a href="https://stackoverflow.com/questions/73776372/how-to-solve-cpp-library-confliction-within-anaconda">issues</a>
<a href="https://stackoverflow.com/questions/73317676/importerror-usr-lib-aarch64-linux-gnu-libstdc-so-6-version-glibcxx-3-4-30">with</a>
<a href="https://stackoverflow.com/questions/58424974/anaconda-importerror-usr-lib64-libstdc-so-6-version-glibcxx-3-4-21-not-fo">other</a>
<a href="https://stackoverflow.com/questions/49875588/importerror-lib64-libstdc-so-6-version-cxxabi-1-3-9-not-found">packages</a>: the R interface to Python <a href="https://github.com/rstudio/reticulate" rel="nofollow noreferrer">reticulate</a> seems to cause the
same issue often (Github tickets: <a href="https://github.com/rstudio/reticulate/issues/841" rel="nofollow noreferrer">#841</a>,
<a href="https://github.com/rstudio/reticulate/issues/1282" rel="nofollow noreferrer">#1282</a>). Also <a href="https://stackoverflow.com/questions/58424974/anaconda-importerror-usr-lib64-libstdc-so-6-version-glibcxx-3-4-21-not-fo">tensorflow can cause the same
error</a>.
<a href="https://github.com/JuliaPy/PythonCall.jl/issues/237#issuecomment-1304121366" rel="nofollow noreferrer">Julia</a> seems also to
be affected, but the issue <a href="https://github.com/JuliaPy/PythonCall.jl/issues/237#issuecomment-1304900120" rel="nofollow noreferrer">is already fixed</a>.</p>
<p>I think the error <code>ImportError: /lib64/libstdc++.so.6: version 'CXXABI_1.3.9' not found</code> might be caused by a very similar situation (see: <a href="https://stackoverflow.com/questions/49875588/importerror-lib64-libstdc-so-6-version-cxxabi-1-3-9-not-found">1</a>, <a href="https://stackoverflow.com/questions/39844772/cxxabi-1-3-8-not-found-in-tensorflow-gpu-install-from-source/41703611#41703611">2</a>, <a href="https://github.com/AllenDowney/ThinkStats2/issues/92" rel="nofollow noreferrer">3</a>).</p>
<p>The problem is always the same: a Python package which contains C++ code is used in a conda environment.</p>
<p><strong>What is happening here and what is the proper solution to this situation?</strong></p>
<h2>Setup of a minimal example</h2>
<p>Tested using Ubuntu 20.04.6 LTS. Pop!_OS 22.04 LTS seems not to be affected.</p>
<p>Create a conda environment and install pandas:</p>
<pre><code>conda create -n glibcxx_test
conda activate glibcxx_test
# Python 3.10 is necessary, because cplex does not support Python 3.11 yet
mamba install -c conda-forge pandas python=3.10 pyarrow
</code></pre>
<p>To <strong>install cplex</strong> follow <a href="https://community.ibm.com/community/user/ai-datascience/blogs/xavier-nodet1/2020/07/09/cplex-free-for-students" rel="nofollow noreferrer">these instructions</a>:</p>
<blockquote>
<p>For quick access to CPLEX Optimization Studio through this program, go to <a href="http://ibm.biz/CPLEXonAI" rel="nofollow noreferrer">http://ibm.biz/CPLEXonAI</a>. Click on Software, then you'll find, in the ILOG CPLEX Optimization Studio card, a link to register. Once your registration is accepted, you will see a link to download of the AI version.</p>
</blockquote>
<p>Note that after clicking the download link, you need to select "HTTP" as download method if you don't want to use the <em>Download director</em>. Select the version of the CPLEX Optimization Studio which suits your OS and then click download.</p>
<p>Make the file executable, run it and follow the instructions of the installer:</p>
<pre><code>chmod +x ~/Downloads/cplex_studio2211.linux_x86_64.bin
~/Downloads/cplex_studio2211.linux_x86_64.bin
</code></pre>
<p>It does not seem to make a difference if the conda environment is activated before running the installer.</p>
<p>Note that you don't need root permissions if you install it to your home folder, e.g.
<code>/home/YOUR_USER/cplex_studio2211</code>.</p>
<p>The installer will print out a command to install the Python package to access CPLEX via a Python
API. Activate the conda environment and then install the cplex package:</p>
<pre><code>conda activate glibcxx_test
python /home/YOUR_USER/cplex_studio2211/python/setup.py install
</code></pre>
<h2>Error message and more debugging details</h2>
<p>Importing pandas after cplex then raises the ImportError, mentioning that 'GLIBCXX_3.4.29 not found':</p>
<pre><code>$ python -c 'import cplex; import pandas'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/__init__.py", line 49, in <module>
from pandas.core.api import (
File "/home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/core/api.py", line 47, in <module>
from pandas.core.groupby import (
File "/home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/core/groupby/__init__.py", line 1, in <module>
from pandas.core.groupby.generic import (
File "/home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/core/groupby/generic.py", line 68, in <module>
from pandas.core.frame import DataFrame
File "/home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/core/frame.py", line 149, in <module>
from pandas.core.generic import (
File "/home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/core/generic.py", line 193, in <module>
from pandas.core.window import (
File "/home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/core/window/__init__.py", line 1, in <module>
from pandas.core.window.ewm import (
File "/home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/core/window/ewm.py", line 11, in <module>
import pandas._libs.window.aggregations as window_aggregations
ImportError: /lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/_libs/window/aggregations.cpython-310-x86_64-linux-gnu.so)
</code></pre>
<p>Importing pandas first seems to work fine without error:</p>
<pre><code>$ python -c 'import pandas; import cplex' # no error!
</code></pre>
<p>Also setting the <code>LD_LIBRARY_PATH</code> explicitly seems to solve the issue:</p>
<pre><code>$ LD_LIBRARY_PATH=$HOME/.conda/envs/glibcxx_test/lib/ python -c 'import cplex; import pandas' # no error!
</code></pre>
<p>It seems as if pandas is linked to a newer libstdc++.so.6 library than py310_cplex2211.so:</p>
<pre><code>$ ldd /home/MY_USERNAME/cplex_studio2211/cplex/python/3.10/x86-64_linux/build/lib/cplex/_internal/py310_cplex2211.so
linux-vdso.so.1 (0x00007ffcfb94e000)
libcplex2211.so => /home/MY_USERNAME/cplex_studio2211/cplex/python/3.10/x86-64_linux/build/lib/cplex/_internal/libcplex2211.so (0x00007f1ee3c2f000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f1ee3bf9000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f1ee3bf3000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f1ee3bd8000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1ee39e6000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f1ee3802000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f1ee36b3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f1ee617d000)
$ realpath /lib/x86_64-linux-gnu/libstdc++.so.6
/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.28
</code></pre>
<p>Pandas uses 6.0.32:</p>
<pre><code>$ ldd /home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/_libs/window/aggregations.cpython-310-x86_64-linux-gnu.so
linux-vdso.so.1 (0x00007ffc007ec000)
libstdc++.so.6 => /home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/_libs/window/../../../../../libstdc++.so.6 (0x00007fef448a0000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fef4473e000)
libgcc_s.so.1 => /home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/_libs/window/../../../../../libgcc_s.so.1 (0x00007fef44723000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fef44531000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fef44527000)
/lib64/ld-linux-x86-64.so.2 (0x00007fef44add000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fef44502000)
$ realpath /home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/_libs/window/../../../../../libstdc++.so.6
/home/MY_USERNAME/.conda/envs/glibcxx_test/lib/libstdc++.so.6.0.32
</code></pre>
<p>The reason seems to be that <code>py310_cplex2211.so</code> specifies a <code>RUNPATH=[$ORIGIN]</code>:</p>
<pre><code>$ readelf -d /home/MY_USERNAME/cplex_studio2211/cplex/python/3.10/x86-64_linux/build/lib/cplex/_internal/py310_cplex2211.so
Dynamic section at offset 0x131d00 contains 30 entries:
Tag Type Name/Value
[...]
0x000000000000001d (RUNPATH) Library runpath: [$ORIGIN]
[...]
</code></pre>
<p>...but Pandas uses an <code>RPATH=[$ORIGIN/../../../../..]</code>:</p>
<pre><code>$ readelf -d /home/MY_USERNAME/.conda/envs/glibcxx_test/lib/python3.10/site-packages/pandas/_libs/window/aggregations.cpython-310-x86_64-linux-gnu.so
Dynamic section at offset 0x54790 contains 28 entries:
Tag Type Name/Value
[...]
0x000000000000000f (RPATH) Library rpath: [$ORIGIN/../../../../..]
[...]
</code></pre>
|
<python><pandas><conda><shared-libraries><glibc>
|
2024-02-05 09:41:49
| 1
| 9,363
|
lumbric
|
77,939,687
| 2,305,768
|
Cuda Pytorch Error: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0
|
<p>I am trying to train a deep neural network on cuda. I found some code and I am trying to make it compatible to cuda. My code is as below:</p>
<pre class="lang-py prettyprint-override"><code>class UnsupervisedModel(torch.nn.Module):
def __init__(self):
super(UnsupervisedModel, self).__init__()
# Encoder
encoder_net = get_learning_net("cnn1d_fe",
{"input_channels": 1,
"dropout": 0,
"kernel_size": 3,
"stride": 1,
"mid_channels": 32,
"final_out_channels": 64},
state_dict=None,
freeze=False)
# Linear Classifier (single layer)
#classifier_net = get_neural_net(name='LinearNN',
#args={"input_dim": 64,
#"output_dim": 2},
#state_dict=None)
self.encoder_net = encoder_net
#self.classifier_net = classifier_net
def forward(self, x):
x = self.encoder_net(x)
#x = self.classifier_net(x)
return x
</code></pre>
<pre class="lang-py prettyprint-override"><code>def main(args):
# reproducibility
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed_all(args.seed)
torch.backends.cudnn.enabled = False
torch.use_deterministic_algorithms = True
# set GPU device ID
device = "cuda:0"
args.device = device
logging.info('Torch, using device: %s' % device)
criter_cl=NTXentLoss(device, args.batch_size, 0.5, True)
# Data loaders
#train_loader_1, validation_loader_1, test_loader_1 = prepare_data_loaders(args)
train_loader, validation_loader, test_loader = prepare_unseen_data_loaders(args)
from models.helpers import proj_head
proj_head = proj_head(args.input_length)
#print(proj_head)
feature_extractor=UnsupervisedModel()
temporal_encoder=cnn1d_temporal()
#projet(feature_extractor)
from torch import nn
network = nn.Sequential(feature_extractor, proj_head)
network.to(device)
# Model definition
#model = SupervisedModel()
#model.to(device)
logging.info(
'Model initialization, the number of trainable parameters: %d' %
count_parameters(network))
# Optimizer and criterion
optimizer = torch.optim.Adam([
{"params": network.parameters()}],lr=args.learning_rate, weight_decay=args.weight_decay, betas=(0.9, 0.999), eps=1e-08)
#optimizer = torch.optim.Adam([
#{"params": model.encoder_net.parameters()},
#{"params": model.classifier_net.parameters()}],
#lr=args.learning_rate, weight_decay=args.weight_decay, betas=(0.9, 0.999), eps=1e-08)
loss_func_cl=criter_cl
#loss_function = torch.nn.CrossEntropyLoss()
#loss_function = torch.nn.CrossEntropyLoss()
#metrics = {"loss": Loss(loss_func_cl)}
metrics = {"loss": Average(output_transform=lambda x: x['loss'])}
#metrics = {"accuracy": Accuracy(), "loss": Loss(criter_cl)}
# Features
# 0: ACC_x
# 1: ACC_y
# 2: ACC_z
# 3: ACC_abs
# 4: BVP
# 5: EDA
# 6: TEMP
#features = {'EDA':5, 'BVP':4, 'All':[4,5,6]}
features={'EDA':2, 'BVP':1}
#[3,4,5,6]
feature_idx = features[args.feature_type]
# TODO: Later, adapt network input size/archtitecture to use all features.
def train_step(engine, train_batch):
loss = 0
output_dic = {}
data,data1, data2,_,_ = train_batch
#print(data1.shape)
aug1 = data1[:, feature_idx, :].type(torch.float)
#print(aug1.shape)
aug1 = torch.unsqueeze(aug1, dim=1)
#print(aug1.shape)
aug2 = data2[:, feature_idx, :].type(torch.float)
aug2 = torch.unsqueeze(aug2, dim=1)
aug1, aug2 = aug1.to(device), aug2.to(device)
network.train()
optimizer.zero_grad()
features1 = feature_extractor(aug1)
#print(features1.shape)
z1 = proj_head(features1)
#print(z1.shape)
features2 = feature_extractor(aug2)
z2 = proj_head(features2)
# normalize projection feature vectors
z1 = F.normalize(z1, dim=1)
z2 = F.normalize(z2, dim=1)
loss = criter_cl(z1, z2)
#print(loss)
loss.backward()
optimizer.step()
#print(loss.item())
#loss += los.detach().item()
output_dic['loss'] = loss.item()
#print(total_loss)
#net=[feature_extractor, temporal_encoder, proj_head]
return output_dic
def validation_step(engine, val_batch):
output_val={}
data,data1, data2,_,_ = val_batch
#print(data1.shape)
aug1 = data1[:, feature_idx, :].type(torch.float)
aug1 = torch.unsqueeze(aug1, dim=1)
#print(aug1.shape)
aug2 = data2[:, feature_idx, :].type(torch.float)
aug2 = torch.unsqueeze(aug2, dim=1)
aug1, aug2 = aug1.to(device), aug2.to(device)
network.eval()
with torch.no_grad():
features1 = feature_extractor(aug1)
z1 = proj_head(features1)
features2 = feature_extractor(aug2)
z2 = proj_head(features2)
z1 = F.normalize(z1, dim=1)
z2 = F.normalize(z2, dim=1)
loss = criter_cl(z1, z2)
output_val['loss'] = loss
return output_val
# Initialize trainer and evaluators
trainer = Engine(train_step)
lr_scheduler = LRScheduler(
torch.optim.lr_scheduler.MultiStepLR(
optimizer, milestones=[
2, 10], gamma=0.1))
trainer.add_event_handler(Events.EPOCH_STARTED, lr_scheduler)
trainer.logger = setup_logger("Trainer")
train_evaluator = Engine(validation_step)
validation_evaluator = Engine(validation_step)
trainer.run(train_loader, max_epochs=args.max_epoch)
# Evaluate the latest snapshot on the entire WESAD dataset
# Load trained weights
weight_files = [p for p in list(pathlib.Path(args.log_dir).rglob('*.pt'))]
print('Loading trained weights: %s' % weight_files[-1].as_posix())
network.load_state_dict(torch.load(weight_files[-1].as_posix()))
network.to(device)
logging.info(
'Model initialization, the number of trainable parameters: %d' %
count_parameters(network))
for param in network.parameters(): # freeze baseline
param.requires_grad = False
train_loader_1, validation_loader_1, test_loader = verbio_data_loaders(args)
model = SupervisedModel()
optimizer = torch.optim.Adam(model.parameters(), lr=args.learning_rate, weight_decay=args.weight_decay, betas=(0.9, 0.999), eps=1e-08)
loss_function = torch.nn.CrossEntropyLoss()
print('loss')
print(Loss(loss_function))
metrics = {"accuracy": Accuracy(), "loss": Loss(loss_function)}
def train_step_1(engine, train_batch):
data, labels, metadata = train_batch
data, labels = data[:, feature_idx, :], labels.squeeze().long()
data, labels = data.to(device), labels.to(device)
#print(data.shape)
data=torch.unsqueeze(data, dim=1)
network.train()
# forward pass
emb = network(data)
output = model(emb)
# calculate loss
loss = loss_function(output, labels)
# compute gradient and do SGD step
optimizer.zero_grad()
loss.backward()
optimizer.step()
#print(loss.item())
return loss.item()
def validation_step_1(engine, val_batch):
data, labels, metadata = val_batch
data, labels = data[:, feature_idx, :], labels.squeeze().long()
data, labels = data.to(device), labels.to(device)
data=torch.unsqueeze(data, dim=1)
model.eval()
with torch.no_grad():
# forward pass
**emb = network(data)**
**predicted = model(emb)**
return predicted, labels
# Initialize trainer and evaluators
loss_function = torch.nn.CrossEntropyLoss()
metrics = {"accuracy": Accuracy(), "loss": Loss(loss_function)}
trainer = Engine(train_step_1)
lr_scheduler = LRScheduler(
torch.optim.lr_scheduler.MultiStepLR(
optimizer, milestones=[
2, 10], gamma=0.1))
trainer.add_event_handler(Events.EPOCH_STARTED, lr_scheduler)
trainer.logger = setup_logger("Trainer")
train_evaluator = Engine(validation_step_1)
validation_evaluator = Engine(validation_step_1)
# Kick everything off
trainer.run(train_loader_1, max_epochs=args.max_epoch)
# Evaluate the latest snapshot on the entire WESAD dataset
# Load trained weights for supstream task (freezing)
###
weight_files_1 = [p for p in list(pathlib.Path(args.log_dir_1).rglob('*.pt'))]
#print('Loading trained weights: %s' % weight_files_1[-1].as_posix())
network.load_state_dict(torch.load(weight_files[-1].as_posix()))
network.to(device)
for param in network.parameters(): # freeze baseline
param.requires_grad = False
model = SupervisedModel()
model.load_state_dict(torch.load(weight_files_1[-1].as_posix()))
model.to(device)
test_labels = np.zeros(shape=(0,), dtype=np.float32)
test_predictions = np.zeros(shape=(0, 2), dtype=np.float32)
for iteration, test_batch in enumerate(test_loader):
data, labels, metadata = test_batch
data, labels = data[:, 5, :], labels.squeeze().long()
#print(data.shape)
data= torch.unsqueeze(data, dim=1)
#print(data.shape)
data, labels = data.to(device), labels.to(device)
#print(labels.shape)
# forward pass
with torch.no_grad():
features= network(data)
predicted = model(features)
#print(predicted.shape)
test_labels = np.concatenate(
(test_labels, labels.cuda().numpy()), axis=0)
test_predictions = np.concatenate(
(test_predictions, predicted.cuda().numpy()), axis=0)
</code></pre>
<p>Any help would be appreciated. I tried to send all data and labels to device (which is cuda:0) but apparently some remains in cpu. I am newbie in pytorch sorry if it is too obvious.</p>
|
<python><pytorch><pytorch-lightning>
|
2024-02-05 09:02:59
| 1
| 478
|
sersem1
|
77,939,544
| 8,920,642
|
Python H5PY warning getlimits.py:511: UserWarning: Signature <class 'numpy.float128'> does not match any known type
|
<p>I get following warning on RedHat 8.8 for H5PY Package:</p>
<p>Is there any way to fix this? I dont want to use <strong>warnings.filterwarnings("ignore")</strong></p>
<pre><code>bash-4.4$ python3
Python 3.11.6 (main, Oct 16 2023, 03:41:34) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import h5py
/scratch/Python/lib/python3.11/site-packages/numpy/core/getlimits.py:511: UserWarning: Signature b'\x00\xd0\xcc\xcc\xcc\xcc\xcc\xcc\xfb\xbfOE\xfc\x7f\x00\x00' for <class 'numpy.float128'> does not match any known type: falling back to type probe function.
This warnings indicates broken support for the dtype!
machar = _get_machar(dtype)
>>> numpy.__version__
'1.24.3'
>>> h5py.__version__
'3.10.0'
>>> exit()
bash-4.4$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.8 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.8 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.8
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.8"
bash-4.4$
</code></pre>
<p>This warning is seen only on Linux, Windows is clean.</p>
|
<python><python-3.x><linux><numpy><h5py>
|
2024-02-05 08:34:54
| 1
| 398
|
Ashish Shirodkar
|
77,939,099
| 9,495,110
|
How to import a Python class from a directory which has dependency in the same directory?
|
<p>I want to use a class <code>A</code>, which is in a subdirectory <code>dir</code> in file <code>A.py</code>. It has dependency with file <code>B.py</code> which is also in the same directory <code>dir</code>. I want to use the <code>A</code> class from its parent directory. The file tree looks like this:</p>
<pre><code>parent_dir/
dir/
__init__.py (empty file)
A.py (imports B)
B.py
C.py <-- I am here
</code></pre>
<p>Content of A:</p>
<pre class="lang-py prettyprint-override"><code>from B import B
class A:
def __init__():
pass
</code></pre>
<p>And contents of B:</p>
<pre class="lang-py prettyprint-override"><code>class B:
def __init__():
pass
</code></pre>
<p>Now when I try to import <code>A.py</code> from <code>C.py</code> by <code>from dir.A import A</code>, I get:</p>
<pre><code>ModuleNotFoundError Traceback (most recent call last)
----> 1 from stargan_v2_tensorflow.networks import Generator
File ~/parent_dir/C.py:1
---> 1 from B import *
ModuleNotFoundError: No module named 'B'
</code></pre>
<p>I tried importing <code>B.py</code> by <code>import dir.B as B</code> and <code>from dir import B</code> so that it finds <code>B</code>, but still I get the same error. How can I import <code>A</code> with its dependency?</p>
|
<python><python-3.x><python-module>
|
2024-02-05 07:00:07
| 1
| 1,027
|
let me down slowly
|
77,938,918
| 4,699,441
|
How to get Python imports to work consistently?
|
<p>The <a href="https://stackoverflow.com/questions/77936303/">solution to this question</a> works only for the special case of importing from the same directory.</p>
<p>However, if I put the <code>Child</code> class into a sibling directory then that solution doesn't work. Same goes if I do not <code>from [something] import Child</code> but instead <code>import [something]</code> and then try to access <code>[something].child</code>. Neither scenario can be written as a relative import.</p>
<p><strong>Problem</strong>: depending on whether I execute the tests or run the application, the <code>import</code> statement in the <code>parent.py</code> needs to look different.</p>
<p>So let me ask more generally: how can I write imports so they work independently of the scenario they are used in?</p>
<p>The project structure is like this:</p>
<pre><code>project-name/
tests/
__init__.py
test.py
project_name/
__init__.py
__main__.py
submodule/
parent.py
sibling/
child.py
</code></pre>
<p>I execute the program and the tests like this:</p>
<pre class="lang-bash prettyprint-override"><code># go to the root directory of the project
cd project-name/
# I cannot get those two lines get to work with the same imports:
python3 -m unittest discover
python3 project_name
</code></pre>
<p>The content of the files is like this:</p>
<pre class="lang-py prettyprint-override"><code># tests/__init__.py
# empty
</code></pre>
<pre class="lang-py prettyprint-override"><code># tests/test.py
import unittest
from project_name import Parent
class Test(unittest.TestCase):
def test(self): Parent().hello()
</code></pre>
<pre class="lang-py prettyprint-override"><code># project_name/__init__.py
from project_name.sibling.child import Child
from project_name.submodule.parent import Parent
</code></pre>
<pre class="lang-py prettyprint-override"><code># project_name/__main__.py
from submodule.parent import Parent
parent = Parent()
parent.hello()
</code></pre>
<pre class="lang-py prettyprint-override"><code># project_name/sibling/child.py
class Child:
def __init__(self):
pass
def hello(self):
print("Hello World")
</code></pre>
<pre class="lang-py prettyprint-override"><code># project_name/submodule/parent.py
# Works for __main__.py:
from sibling.child import Child
# Works for test:
#from project_name import Child
class Parent:
def __init__(self):
self.child = Child()
def hello(self):
self.child.hello()
</code></pre>
|
<python><python-3.x><python-import>
|
2024-02-05 06:07:09
| 5
| 1,078
|
user66554
|
77,938,831
| 119,527
|
Disambiguating expressions in Lark syntax
|
<p>I have a (Lark) grammar that I think should be unambiguous, but depending on the version of Lark, fails to parse in one way or another:</p>
<pre class="lang-py prettyprint-override"><code>import lark
syntax = r"""
stmt: mov_stmt
| special_stmt
mov_stmt: reg ASSIGN (reg | const)
special_stmt: ("RS" SPECIAL_ASSIGN const)
reg: REG
const: DEC_NUM
REG.2: /R[0-7]|RS/
DEC_NUM: /0|[1-9]\d*/i
ASSIGN: "="
SPECIAL_ASSIGN: "&="
WS: /[ \t]+/
%ignore WS
"""
parser = lark.Lark(syntax, start="stmt", parser="lalr")
print(parser.parse("R3 = 7")) # 1. ok
print(parser.parse("R3 = R7")) # 2. ok
print(parser.parse("RS &= 1")) # 3. Fails on lark==1.1.9; expected special_stmt
print(parser.parse("RS = R7")) # 4. Fails on lark-parser==0.12.0; expected mov_stmt
</code></pre>
<p>With <code>lark-parser==0.12.0</code>, invocation number 4. fails. I expect a <code>mov_stmt</code>, but it is expecting a <code>SPECIAL_ASSIGN</code> token, meaning it is matching <code>special_stmt</code>.</p>
<pre><code>lark.exceptions.UnexpectedToken: Unexpected token Token('ASSIGN', '=') at line 1, column 4.
Expected one of:
* SPECIAL_ASSIGN
Previous tokens: [Token('RS', 'RS')]
</code></pre>
<p>With <code>lark==1.1.9</code>, the opposite happens and invocation number 3. fails. I expect a <code>special_stmt</code>, but it is expecting an <code>ASSIGN</code> token, meaning it is matching <code>mov_stmt</code>.</p>
<pre><code>lark.exceptions.UnexpectedToken: Unexpected token Token('SPECIAL_ASSIGN', '&=') at line 1, column 4.
Expected one of:
* ASSIGN
Previous tokens: [Token('REG', 'RS')]
</code></pre>
<hr />
<p>In my mind, the grammar should be unambiguous. An <code>=</code> always means <code>mov_stmt</code>, and <code>&=</code> always means <code>special_stmt</code> (which only works for reg=<code>RS</code>).</p>
<p><strong>How do I disambiguate this?</strong></p>
<p>I tried assigning priorities to different terminals, to no effect.</p>
|
<python><ebnf><lark-parser>
|
2024-02-05 05:41:47
| 1
| 138,383
|
Jonathon Reinhart
|
77,938,797
| 264,136
|
Using python to build a jenkins job: how to pass checkbox values
|
<pre><code>jenkins_server = jenkins.Jenkins(project_path, username="moyemoye", password=password, timeout=120)
parameters = {
'IMG_PATH': "IMG_PATH",
'vcpu': "2",
'Topo_Traffic_Type': "imix,1518/1400",
'RUN_CEF_SUITE_FEATURE_LIST': "all",
'clean_reload': True,
}
jenkins_server.build_job(f"{project_name}", parameters=parameters, token=password)
</code></pre>
<p>RUN_CEF_SUITE_FEATURE_LIST and Topo_Traffic_Type both are of type "Basic Parameter Types" with parameter type as CheckBoxes.</p>
<p>When I build the project using the code above, RUN_CEF_SUITE_FEATURE_LIST fills in properly (maybe because I need to check only one box out of all options) but Topo_Traffic_Type stays empty and the project fails. Any idea on whats wrong?</p>
<p>The documentation:
<a href="https://i.sstatic.net/n4BaA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n4BaA.png" alt="enter image description here" /></a></p>
<p><strong>EDIT: Tried the below but still does not work:</strong></p>
<pre><code>'Topo_Traffic_Type': ["imix", "1518/1400"]
'Topo_Traffic_Type': "imix,1518/1400"
'Topo_Traffic_Type': [{"imix": True}, {"1518/1400": True}]
'Topo_Traffic_Type': {"imix": True, "1518/1400": True},
</code></pre>
|
<python><jenkins><jenkins-plugins><jenkins-api>
|
2024-02-05 05:30:04
| 1
| 5,538
|
Akshay J
|
77,938,741
| 3,121,975
|
Join based on substring operation
|
<p>Suppose I have two DataFrames, <code>a</code> and <code>b</code> with columns "Name" and "ID" respectively. What I'm trying to do is to find all the columns in <code>a["Name"]</code> that are a substring of a value in <code>b["ID"]</code> so that when I join <code>b</code> to <code>a</code>, the columns will match up. I have a guarantee that any value in <code>b</code> <strong>must</strong> match to at most one value in <code>a</code> but any value in <code>a</code> may match to any number of values in <code>b</code>.</p>
<pre><code>a = pd.DataFrame({"Name": ["Boomhaur", "Dale", "Bill", "Hank"]})
b = b = pd.DataFrame({"ID": ["Boomhaur-2345", "Dale-999999", "Bill-000", "Bill-001", "Peggy-420"]})
a = # Some Pandas magic here...
a
# Name ID
# 0 Boomhaur Boomhaur-2345
# 1 Dale Dale-999999
# 2 Bill Bill-000
# 2 Bill Bill-001
# 3 Hank NaN
</code></pre>
<p>Originally, I had multiple JSON files and a guarantee that <code>ID</code> would be unique for each, so I could do this:</p>
<pre><code>a["_merge"] = a["Name"].apply(lambda x: x in data["ID"])
b = json.normalize(data["Data"])
b["ID"] = data["ID"]
b["_merge"] = True
a = a.join(b.set_index("_merge"), on="_merge")
</code></pre>
<p>This works, but now I'm trying to merge the files together into a single data frame and then join that to <code>a</code>, but I'm not sure how to do it once the guarantee of a set <code>ID</code> is gone. Does anyone know how to do this?</p>
|
<python><pandas>
|
2024-02-05 05:06:43
| 2
| 8,192
|
Woody1193
|
77,938,707
| 16,405,935
|
How to find the first first occurrence of a character
|
<p>I have a simple dataframe as below:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'BAS_DT': ['2023-01-02', '2023-01-03', '2023-01-04', '2023-01-02', '2023-01-03'],
'CUS_NAME': ['A', 'A', 'A', 'B', 'B'],
'Y/N': ['Y', 'Y', 'Y', 'N', 'Y'],
'cum_count': [1, 2, 3, 1, 2]})
df
BAS_DT CUS_NAME Y/N cum_count
0 2023-01-02 A Y 1
1 2023-01-03 A Y 2
2 2023-01-04 A Y 3
3 2023-01-02 B N 1
4 2023-01-03 B Y 2
</code></pre>
<p>I want to find the first occurrence date of <code>Y</code> in the <code>Y/N</code> column for each <code>CUS_NAME</code>. Below is my expected Output:</p>
<pre><code>df2 = pd.DataFrame({'BAS_DT': ['2023-01-02', '2023-01-03', '2023-01-04', '2023-01-02', '2023-01-03'],
'CUS_NAME': ['A', 'A', 'A', 'B', 'B'],
'Y/N': ['Y', 'Y', 'Y', 'N', 'Y'],
'cum_count': [1, 2, 3, 1, 2],
'occur_date': ['2023-01-02', np.nan, np.nan, np.nan, '2023-01-03']})
df2
BAS_DT CUS_NAME Y/N cum_count occur_date
0 2023-01-02 A Y 1 2023-01-02
1 2023-01-03 A Y 2 NaN
2 2023-01-04 A Y 3 NaN
3 2023-01-02 B N 1 NaN
4 2023-01-03 B Y 2 2023-01-03
</code></pre>
<p>How can I get this result. Thank you.</p>
|
<python><pandas><dataframe>
|
2024-02-05 04:52:44
| 1
| 1,793
|
hoa tran
|
77,938,422
| 5,896,591
|
python: How to set F_NOTIFY from non-main thread?
|
<p>I am trying to watch a directory using F_NOTIFY in Linux (Ubuntu 16, Python 3.5.2). However, it only works if I call <code>fcntl</code> from the main thread. Why are there no signals when I call <code>fcntl</code> from other threads? (I don't see any mention of threads in the python <a href="https://docs.python.org/3/library/fcntl.html#module-fcntl" rel="nofollow noreferrer"><code>fcntl</code> documentation</a>. Also, if <code>fcntl</code> is failing, why doesn't it throw an exception?)</p>
<pre><code># Before running, create directory "a". To test, create a file in "a".
import fcntl, signal, os, threading
pipe_r, pipe_w = os.pipe()
def Unsafe_Handler(s,f):
os.write(pipe_w, b' ')
signal.signal(signal.SIGIO, Unsafe_Handler)
signal.siginterrupt(signal.SIGIO, False)
def Handler():
while True:
os.read(pipe_r, 1)
print("SIGNAL")
threading.Thread(target = Handler).start()
NOTIF_FLAGS = fcntl.DN_MODIFY | fcntl.DN_CREATE | fcntl.DN_MULTISHOT
def Watch_Dir(dn):
fd = os.open(dn, os.O_RDONLY)
fcntl.fcntl(fd, fcntl.F_SETSIG, 0)
fcntl.fcntl(fd, fcntl.F_NOTIFY, NOTIF_FLAGS)
print('Watching directory "%s", fd=%d' % (dn, fd))
return fd
def Init():
fd = Watch_Dir('a')
# this works
#Init()
# this doesn't work
t = threading.Thread(target = Init)
t.start()
t.join()
print("Awaiting signals...")
while True:
signal.pause()
</code></pre>
<p><strong>Why signal handling in Python requires threads</strong></p>
<p>The <code>Handler</code> thread is required because of the following combination of facts:</p>
<ol>
<li>The handler attached with <code>signal.signal</code> can't safely do anything but write to a pipe.</li>
<li>The main thread can't do real work either, for two reasons. First, because most blocking calls in the main thread will <a href="https://github.com/python/cpython/issues/85300" rel="nofollow noreferrer">prevent the signal from being handled</a>. Second, since the only allowed call is <code>signal.pause()</code>, there is no way to safely wake up the main thread without a race condition (signal arrives just before the <code>signal.pause()</code>) that could get the main thread stuck.</li>
</ol>
<p><strong>Why calling <code>fcntl</code> from the non-main thread is useful</strong></p>
<p>This is simply because the main thread can't do real work, as already noted. (In my actual application, the handler needs to call <code>fcntl</code>. For this question I've created a simpler example though).</p>
|
<python><linux><signals><fcntl>
|
2024-02-05 02:58:04
| 0
| 4,630
|
personal_cloud
|
77,938,379
| 21,107,707
|
Why does `math.log` not have a `__text_signature__` in Python?
|
<p>I was playing around in the Python REPL and I noticed most <code>math</code> functions have the <code>__text_signature__</code> attribute; I notice it provides information regarding function signatures, and is non-None for every function in the <code>math</code> module (this is Python 3.11) except the <code>log</code> function and the <code>hypot</code> function.</p>
<pre class="lang-py prettyprint-override"><code>>>> print(math.log.__text_signature__)
None
>>> math.exp.__text_signature__
'($module, x, /)'
</code></pre>
<p>Is there a reason for this? It seems pretty random to me.</p>
|
<python>
|
2024-02-05 02:40:13
| 1
| 801
|
vs07
|
77,938,305
| 18,730,707
|
Doesn't df.style.format() apply to Excel in version 2.1.4 of pandas?
|
<p>I want to apply number formatting. I know how to apply it through <code>df.style.map</code>. However, it is not possible to apply it to all rows using <code>df.style.format()</code>. I'd like to know if I'm doing something wrong.</p>
<p>Currently pandas is using version 2.1.4. I wrote code like this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({
'Name': ['John', 'Isak'],
'Value': [100000, 3000000000]
})
dfstyle = df.style.format(thousands=',')
with pd.ExcelWriter('test.xlsx', engine='xlsxwriter') as writer:
dfstyle.to_excel(writer, sheet_name='Sheet1', index=False)
</code></pre>
<p><a href="https://i.sstatic.net/oPhJ4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oPhJ4.png" alt="enter image description here" /></a></p>
<p>This is the result. I referred to <a href="https://pandas.pydata.org/docs/user_guide/style.html#Formatting-Values" rel="nofollow noreferrer">the relevant site</a> in the official document.</p>
|
<python><pandas>
|
2024-02-05 02:08:06
| 1
| 878
|
na_sacc
|
77,938,228
| 2,135,504
|
Complex query in SQLalchemy with dictionary like in MongoDB
|
<h1>Problem</h1>
<p>In pymongo, I can define my query using rather complex dictionaries:</p>
<pre class="lang-py prettyprint-override"><code>one_week_ago = datetime.utcnow() - timedelta(days=7)
query_dict = {
'name': {'$regex': 'John'},
'age': {'$in': [25, 30, 35]},
'created_at': {'$gte': one_week_ago},
}
results = collection.find(query_dict)
</code></pre>
<p>My understanding is that I cannot do this using SQLalchemy, but have to be more explicit:</p>
<pre><code>query = session.query(User)
query = query.filter(User.name.like('%John%'))
query = query.filter(User.age.in_([25, 30, 35]))
query = query.filter(User.created_at >= one_week_ago)
results = query.all()
</code></pre>
<h1>Question</h1>
<p>It seems to me that one could write a function which translates a complex query dictionary to a SQLalchemy-valid query (see e.g. <a href="https://stackoverflow.com/a/7605366/2135504">https://stackoverflow.com/a/7605366/2135504</a> ). Does a library for this already exist? Or an alternative library to SQLalchemy for this purpose?</p>
|
<python><sqlalchemy><pymongo>
|
2024-02-05 01:20:22
| 1
| 2,749
|
gebbissimo
|
77,938,199
| 15,476,955
|
langchain dynamic way to handle keys issues
|
<p>I know it's too open question for some moderators but let me explain before attacking.</p>
<p>Langchain is a really recent library, I have dozens of errors to post to the stackoverflow site so I prefer to begin with the good question: what should be the way of thinking to understand what the developers of langchain are doing ?
There is something about the philosophy of this library to understand I suppose because lot of people seems to like it, I'm a full stack developer working with multiple library in multiple langage but this one is a living nightmare for the moment for me.</p>
<p>I don't want to build a tiny library on my own, I want to be part of the movement and I feel langchain may become the biggest llm and agent library but I have big issues working with their dynamic way of working with keys.</p>
<p>I see a raising number of medium examples on how to use it but when we want to go deeper (which should be the purpose of learning a library) nothing really huge to work with, only hello worlds 'deprecated' or crashing.</p>
<p>If someone had difficulties working with this library and learned how to think to use it efficiently, please exmplain what the trick ! :)</p>
<p>Now, for the concretes examples:</p>
<ul>
<li><p>"agent_scratchpad" will make your script crash if you use simple Prompt and you will have to look in all docs to find this particular text (This is driven by an LLMChain. The prompt in the LLMChain MUST include
a variable called "agent_scratchpad" where the agent can put its
intermediary work.) <a href="https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html" rel="nofollow noreferrer">https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html</a></p>
</li>
<li><p>same goes for the key 'intermediate_steps'</p>
</li>
<li><p>some of the agents are waiting for the key 'question' and other for the key 'query' and you need to know them by heart, not autocompletion will guide.</p>
</li>
<li><p>the way you had words in the prompts, even without using {} may make your prompt waiting for keys !!! (i'm blown by this one). When you present a an example like this on the prompt: {'... that you may encounter ex: { partId: 00XX ...}.
This can make you crash sometime, and sometimes not, because the langchain agent may interpret that you need absolutely partId variable !!!???</p>
</li>
</ul>
<p>I mean, yes you just have to read, learn and go on the library everytime you encounter an error, but this library is so hard to debug compared to others, using REALLY too much dynamic magic keys. So how should we approach the spirit ? what is the way of thinking compared to flutter or for python pandas, matplotlib, flask, beautifoulsoup and legions of others who are really straightforward ?</p>
|
<python><langchain><large-language-model>
|
2024-02-05 01:06:13
| 0
| 1,168
|
Utopion
|
77,938,187
| 2,612,259
|
How can I display text when hovering over a legend item in Python Plotly?
|
<p>Sorry for asking a question with very little info, but its a very simple question.
I have looked thru the documentation of Plotly (python) and can't find a way to simply add hover text when hovering over an item in the legend.
The names of the traces are very long, so I am just using the defaults , ie trace 0. I would like to show the real name of the trace when I hover over the entry in the legend. Is this possible with the Python Plotly library?</p>
<p>Here is an example. I would like to be able to hover over "trace 0" in the legend and see a popup showing "the y value is the square of x", but still see "trace 0" in the legend.</p>
<pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go
trace_0_name = "the y value is the square of x"
trace_1_name = "the y value is the cube of x"
fig = go.Figure()
fig.add_trace(go.Scatter(x=[1,2,3], y=[1,2**2,3**2]))
fig.add_trace(go.Scatter(x=[1,2,3], y=[1,2**3,3**3]))
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/uoLzs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uoLzs.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2024-02-05 01:01:17
| 1
| 16,822
|
nPn
|
77,938,167
| 5,016,028
|
LiteLLM and Llama-Index Service Context creation
|
<p>I am going crazy with finding a solution to this. I am working with a proxy server for OpenAI models. I'm using an ssh tunnel to hit the server on my localhost.</p>
<pre><code>from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000", api_key="sk-xxx")
response = client.chat.completions.create(model="gpt_35_turbo", messages = [
{
"role": "user",
"content": "this is a test request, write a short poem"
}
])
print(response)
</code></pre>
<p>This works perfectly. I need to use this with llamaindex, so I need to define my llm and embedding_model, and serve them to my service_context like so:</p>
<pre><code>llm = OpenAI(model="text-davinci-003", temperature=0, max_tokens=256)
embed_model = OpenAIEmbedding()
text_splitter = SentenceSplitter(chunk_size=1024, chunk_overlap=20)
prompt_helper = PromptHelper(
context_window=4096,
num_output=256,
chunk_overlap_ratio=0.1,
chunk_size_limit=None,
)
service_context = ServiceContext.from_defaults(
llm=llm,
embed_model=embed_model,
text_splitter=text_splitter,
prompt_helper=prompt_helper,
)
</code></pre>
<p>where I would be calling my own "ada-02" model through the server as well. How can I make this work with my setup? I am totally unable to find any answer to this anywhere and I've already wasted days trying to fix it.</p>
|
<python><python-3.x><openai-api><large-language-model><llama-index>
|
2024-02-05 00:48:48
| 1
| 4,373
|
Qubix
|
77,938,105
| 2,658,228
|
Unable to fix frame size in app - customtkinter, tkinter
|
<p>I'm working on an app to covert some files. The interface consists of two frames - one that displays selected files, the other will contain parameters the users can modify (e.g. file pre-fix, location etc.).</p>
<p>I'd like the <code>frame</code> width and height to be fixed (300px each) but it changes when I add widgets to them (shown in screenshot below). I'm open to using <code>tkinter</code> instead of <code>customtkinter</code> if that will solve the issue.</p>
<p>Code:</p>
<pre><code># define colour palette
app_color = '#121212'
frame_color = '#1e1e1e'
text_color = '#e3e3e3'
label_color = '#a8a8a8'
accent = '#bb86fc'
dark_grey = '#423e47'
# Fonts
font_small = ('Segoe UI', 16)
font_reg = ('Segoe UI', 18)
# Declare App
app = ctk.CTk(fg_color=app_color)
app.title('File Converter')
app.geometry('720x480')
""" Frame 1 for selecting file to confirm """
multiple_files = ctk.CTkFrame(master=app, fg_color=frame_color,
width=300, height=320, corner_radius=10)
multiple_files.grid(row=0, column=0, padx=(40,20), pady=40)
file_label = ctk.CTkLabel(master=multiple_files, text='Select Files', text_color=text_color,
font=font_small, anchor='nw')
file_label.grid(row=0, column=0, columnspan=3)
lb = tk.Listbox(master=multiple_files, bd=0, selectbackground=dark_grey)
lb.grid(row=1,column=0, columnspan=3, padx=(20,20), pady=(20,20))
# button to add files
btn_add = ctk.CTkButton(master=multiple_files, text='Add', width=50,
command=addFiles, fg_color=accent,
text_color=app_color)
btn_add.grid(row=2,column=0, padx=10, pady=(0,20))
# button to remove files
btn_remove = ctk.CTkButton(master=multiple_files, text='Remove', width=50,
fg_color=accent, text_color=app_color,
command=removeFiles)
btn_remove.grid(row=2,column=1, padx=10, pady=(0,20))
# button to clear files
btn_clear = ctk.CTkButton(master=multiple_files, text='Clear', width=50,
fg_color=accent, text_color=app_color,
command=clearFiles)
btn_clear.grid(row=2,column=2, padx=10, pady=(0,20))
""" Frame 2 for conversion parameters """
opts_frame = ctk.CTkFrame(master=app, fg_color=frame_color,
width=300, height=320, corner_radius=10)
opts_frame.grid(row=0, column=1, padx=(20,40), pady=40)
app.mainloop()
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/UNCWZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UNCWZ.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><tkinter><customtkinter>
|
2024-02-05 00:14:27
| 1
| 2,763
|
Gautam
|
77,937,995
| 12,369,606
|
in a yaml file, use the output of one target function as input parameters for a different function
|
<p>I have a python script that uses a yaml file to call a function. Currently it is formatted like this:</p>
<pre><code>name: DataSet
_target_: some.function.here
_params_:
root: ${paths.datadir}
label_name: label
version: 0.1.0
log_level: ${log_level}
</code></pre>
<p>I have another function, <code>my.label.function</code> that takes two parameters. I would like to be able to specify what parameters to use for this function in the same yaml file, or a different yaml file. Then pass the output of the function as the input parameter <code>label_name</code> but I am new to working with config files so I am not sure how to do this.</p>
|
<python><yaml>
|
2024-02-04 23:27:26
| 0
| 504
|
keenan
|
77,937,994
| 8,484,261
|
Find the rows in two dataframe where a 2nd column's value has changed for a 1st column which is a key column?
|
<p>I am trying to comparing <code>dataframe df1</code> with <code>df2</code> by column <code>cust_id</code> and town_id, and get all rows pf cust_id for which the town_id has changed. I can use list comprehension to get the list of <code>cust_id</code> which are in <code>df21</code> but not in <code>df2</code> or vice-versa. But how do I use town_id change to find the <code>cust_id</code> for which town_id has changed and generate the output as a <code>dataframe</code>?</p>
<pre><code>df1
name cust_id town_id
1 cxa c1001 t001
2 cxb c1002 t001
3 cxc c1003 t001
4 cxd c1004 t002
df2
name cust_id town_id
1 cxa c1001 t002
2 cxb c1002 t001
3 cxd c1004 t001
4 cxe c1005 t001
5 cxf c1006 t001
output
name cust_id townId_initial town_id_latter
1 cxa c1001 t001 t002
2 cxd c1006 t002 t001
</code></pre>
|
<python><pandas><list-comprehension>
|
2024-02-04 23:27:09
| 1
| 3,700
|
Alhpa Delta
|
77,937,790
| 3,498,864
|
Stuck formulating an adjacency constraint with pyomo
|
<p>I'm trying to assign <code>attributes</code> into a 3 x 3 matrix based on an adjacency constraint, but I'm stuck formulating the adjacency constraint. I keep thinking I've got the answer, then when I try it the result isn't as expected.</p>
<p>So, <code>element</code>s are pre-arranged into an <code>element_map</code> which is a 9 x 17 matrix. Each of the 9 <code>element</code>s has 17 <code>attributes</code>, which are always zero or one.</p>
<p>The <code>element</code>s are in fixed positions in the 3 x 3 matrix:</p>
<pre><code>[[0, 3, 6
1, 4, 7
2, 5, 8]]
</code></pre>
<p>For example, the <code>element</code> at index <code>2</code> in the <code>element_map</code> is fixed at position <code>(2,0)</code> in the 3 x 3 matrix. And <code>(2,0)</code> is adjacent to <code>(1,0), (1,1), (2,1)</code>.</p>
<p>The constraints are:</p>
<ol>
<li>each <code>place</code> can have at most 4 <code>attribute</code>s assigned to it (<strong>done</strong>)</li>
<li>each <code>place</code> must have one or more <code>attributes</code> assigned to it (<strong>done</strong>)</li>
<li>the selected <code>attribute</code>s for each <code>place</code> must be equal to zero (<strong>done</strong>)</li>
<li><strong>Adjacency constraint</strong>: Explained below with an example (<strong>need help</strong>)</li>
</ol>
<pre><code>import pyomo.environ as pyo
import numpy as np
"""
fit elements into matrix based on adjacency rules
"""
class Element:
"""a convenience to hold the rows of attribute values"""
def __init__(self, row):
self.attributes = tuple(row)
def attribute(self, idx):
return self.attributes[idx]
def __repr__(self):
return str(self.attributes)
class Position:
"""a convenience for (x, y) positions that must have equality & hash defined for consistency"""
def __init__(self, x, y):
self.x = x
self.y = y
def __repr__(self):
return f'({self.x}, {self.y})'
def __hash__(self):
return hash((self.x, self.y))
def __eq__(self, other):
if isinstance(other, Position):
return (self.x, self.y) == (other.x, other.y)
return False
# each 'row' corresponds to an element
# each 'column' corresponds to an attribute of the various elements
# here, each element has attributes which are always 0 or 1
element_map = np.array([[0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0],
[1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1],
[0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1]])
print(element_map, 'map')
print (element_map.shape)
matrix_a_rows = 3
matrix_a_cols = 3
matrix_a = np.zeros((matrix_a_rows, matrix_a_cols))
mask = np.arange(1,(matrix_a_rows*matrix_a_cols)+1).reshape(matrix_a_cols, matrix_a_rows).T
def get_element(position):
#get the element vector at a position
x,y =position.x, position.y
idx = mask[x,y]-1
return idx #return 0-indexed element idx
def adj_xy(mat, p: Position):
x, y = p.x, p.y
res = []
rows = len(mat) - 1
cols = len(mat[0]) - 1
for i in range(x - 1, x + 2):
for j in range(y - 1, y + 2):
if all((0 <= i <= rows, 0 <= j <= cols, (i, j) != (x, y))):
res.append(Position(i, j))
return res
# SET UP ILP
m = pyo.ConcreteModel('matrix_fitter')
# SETS
elements = np.array([np.array(row) for row in element_map])
m.P = pyo.Set(initialize=[Position(x, y) for x in range(len(matrix_a)) for y in range(len(matrix_a[0]))],
doc='positions')
m.A = pyo.Set(initialize=list(range(len(element_map[0]))), doc='attribute')
# VARS
# place element e in position p based on attribute a being 0...
m.place = pyo.Var(m.P, m.A, domain=pyo.Binary, doc='place')
# OBJ: minimize attributes assigned to each position
m.obj = pyo.Objective(expr=pyo.sum_product(m.place), sense=pyo.minimize)
#each place must have 4 or fewer attributes assigned to it
m.attr_constraint = pyo.ConstraintList()
for p in m.P:
s = 0
for a in m.A:
s += m.place[p,a]
m.attr_constraint.add(s <= 4)
#each place must have one or more attributes assigned to it
m.enzyme_constraint = pyo.ConstraintList()
for p in m.P:
s = 0
for a in m.A:
s += m.place[p,a]
m.attr_constraint.add(s >= 1)
#the selected attribute for each position must be equal to zero
m.cut_constraint = pyo.ConstraintList()
for p in m.P:
e_idx = get_element(p)
element = elements[e_idx]
for i,a in enumerate(m.A):
if element[i] == 1:
m.cut_constraint.add(m.place[p,a] == 0)
#adjacency constraint
#doesn't work as expected
#This is where I need help...
m.adjacency_constraint = pyo.ConstraintList()
for p in m.P:
plate_element = elements[get_element(p)]
neighbor_positions = adj_xy(matrix_a, p)
print (p, 'place')
for i,a in enumerate(m.A):
s = 0
for j,aa in enumerate(m.A):
for neighbor in neighbor_positions:
neighbor_element = elements[get_element(neighbor)]
neighbor_element_attribute = neighbor_element[i]
s += m.place[neighbor,aa]*neighbor_element_attribute
m.adjacency_constraint.add(s >= len(neighbor_positions)*m.place[p,a])
solver = pyo.SolverFactory('cbc')
results = solver.solve(m, tee=True)
print(results, 'results')
if results.solver.termination_condition == pyo.TerminationCondition.optimal:
for idx in m.place.index_set():
if m.place[idx].value == 1:
s = 0
att = idx[1]
neighs = adj_xy(matrix_a, idx[0])
for i in neighs:
place_element = elements[get_element(i)]
s += place_element[att]
print(idx, 'idx', s)
if pyo.value(m.obj) == matrix_a_rows * matrix_a_cols:
# all positions were filled
print('success!')
else:
print(f'the number of elements that can be placed is {pyo.value(m.obj)} / {matrix_a_rows * matrix_a_cols}')
else:
print('Problem with model...see results')
print (element_map)
</code></pre>
<p>With an <code>element_map</code> that looks like this:</p>
<pre><code>[[0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 1 0]
[1 0 0 0 1 0 0 1 0 1 0 0 0 0 0 1 1]
[0 0 0 1 1 0 0 1 0 1 0 1 0 0 0 1 1]
[0 1 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1]
[1 1 0 1 0 1 0 1 0 1 0 0 0 0 0 0 1]
[1 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1]
[0 0 0 0 1 0 1 1 0 1 0 0 0 0 0 0 1]
[0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1]
[1 0 0 1 1 1 0 0 0 1 0 0 0 0 0 1 1]]
</code></pre>
<p>...the result I get is:</p>
<pre><code>((0, 0), 0) idx 2
((0, 0), 1) idx 2
((0, 0), 9) idx 2
((0, 0), 16) idx 3
((0, 1), 5) idx 2
((0, 1), 6) idx 2
((0, 1), 9) idx 4
((0, 2), 3) idx 2
((1, 0), 1) idx 2
((1, 0), 5) idx 2
((1, 1), 4) idx 7
((1, 1), 15) idx 4
((1, 2), 1) idx 2
((2, 0), 0) idx 3
((2, 1), 3) idx 4
((2, 2), 7) idx 2
</code></pre>
<p>The format is: <code>((x,y), selected_attribute)</code>. <code>(x,y)</code> is the coordinate in the 3 x 3 matrix and <code>selected_attribute</code> is one of the selected <code>attribute</code>s for that <code>place</code>.</p>
<p><strong>Example</strong></p>
<p>Let's consider the <code>element</code> at <code>(1,1)</code>, because it is adjacent to all other <code>places</code>. The data for this <code>element</code> is:</p>
<pre><code>[1 1 0 1 0 1 0 1 0 1 0 0 0 0 0 0 1]
</code></pre>
<p>The selected <code>attribute</code>s for <code>(1,1)</code> in the result are <code>4</code> and <code>15</code>. Taking a look at the <code>element_map</code>, <code>attribute</code> <code>4</code> is a good choice for this <code>element</code>, because all except for the second-to-last <code>element</code> have a value of <code>1</code> for that <code>attribute</code>. The other selected <code>attribute</code>, <code>15</code>, is not good, however, because the <code>attribute</code> at index <code>15</code> is a zero for the second-to-last element. For the <code>element</code> at <code>(1,1)</code>, <code>attribute</code> <code>4</code> would work in combination with the following <code>attribute</code> indices: <code>3 or 9 or 16</code>, because for the second-to-last <code>element</code>, those <code>attribute</code> values are all <code>1</code>.</p>
<p><strong>Adjacency Constraint Definition</strong></p>
<p>So the adjacency constraint should be: assign <code>attribute</code>s to a <code>place</code> such that all adjacent <code>place</code>s have at least one value of <code>1</code> among all the selected <code>attribute</code>s. And it is a minimization problem, so overall I want the minimum number of <code>attribute</code>s assigned to each <code>place</code> that meet the constraints.</p>
<p>I hope this was clear, please comment if there's something I can clarify; thanks for reading!</p>
|
<python><optimization><sparse-matrix><linear-programming><integer-programming>
|
2024-02-04 22:13:49
| 1
| 3,719
|
Ryan
|
77,937,698
| 476,074
|
One print expression strangely depends on another print expression
|
<p>I have this Jinja2 template:</p>
<pre><code>{% set input = 1 %}
{% set steps = [1, 2, 3, 4]|select("greaterthan", input) %}
{{ steps|list }}
{{ steps|first if steps|list|length > 0 else None }}
</code></pre>
<p>It prints:</p>
<pre><code>[2, 3, 4]
None
</code></pre>
<p>Now I remove <code>{{ steps|list }}</code>.</p>
<pre><code>{% set input = 1 %}
{% set steps = [1, 2, 3, 4]|select("greaterthan", input) %}
{{ steps|first if steps|list|length > 0 else None }}
</code></pre>
<p>It prints nothing at all.</p>
<p>Then instead of <code>{{ steps|list }}</code> I put <code>{{ steps|first }}</code>.</p>
<pre><code>{% set input = 1 %}
{% set steps = [1, 2, 3, 4]|select("greaterthan", input) %}
{{ steps|first }}
{{ steps|first if steps|list|length > 0 else None }}
</code></pre>
<p>Now it prints:</p>
<pre><code>2
</code></pre>
<p>What is wrong with <code>{{ steps|first if steps|list|length > 0 else None }}</code> that it behaves differently depending on what else I print?</p>
|
<python><jinja2>
|
2024-02-04 21:44:25
| 1
| 33,938
|
AndreKR
|
77,937,367
| 4,063,520
|
Why ctypes misses ptrdiff_t?
|
<p>I'm puzzled by the absence of <code>ptrdiff_t</code> type in <code>ctypes</code>.</p>
<p>Out of 3 types defined in <code>stddef.h</code>, <code>wchar_t</code> and <code>size_t</code> are present in <code>ctypes</code>. But not <code>ptrdiff_t</code>.</p>
<p>I've been working with Fortran's iso_c_binding, and there my expectation nicely matches with the fact that it does <a href="https://gcc.gnu.org/onlinedocs/gfortran/ISO_005fC_005fBINDING.html" rel="nofollow noreferrer">define</a> <code>C_PTRDIFF_T</code> since 2008.</p>
<p>So I'm thinking about Python in the same terms: how Python is supposed to be C-compatible, if it misses a well-standardized type definition?</p>
|
<python><python-3.x><cython><ctypes>
|
2024-02-04 19:51:05
| 0
| 1,943
|
Dmitry Mikushin
|
77,937,329
| 11,938,023
|
Major Speedup Question for a for loop in Pandas/Numpy on a bitwise_xor accumulate
|
<p>Ok i am using a for loop as show below to convert this data from to the one below
using xor accumulate. For the entries i have (830401) rows and this is very very slow. is there
any way to speed up this kind of accumulate in pandas or using numpy and then assiging
it back the numpy array itself</p>
<pre><code>
In [122]: acctable[0:20]
Out[122]:
what dx1 dx2 dx3 dx4 dx5 dx6 dx7 dx8 dx9
0 4 2 10 8 0 5 7 1 13 11
1 4 0 0 0 0 0 0 0 0 0
2 6 0 0 0 0 0 0 0 0 0
3 14 0 0 0 0 0 0 0 0 0
4 12 0 0 0 0 0 0 0 8 0
5 4 0 0 0 0 0 0 0 0 0
6 1 0 0 0 0 0 0 0 0 0
... ... ... ... ... ... ... ... ... ... ...
830477 15 0 0 0 0 0 0 0 0 0
830478 3 0 0 0 0 0 0 0 0 0
830479 11 0 0 0 0 0 0 0 0 0
830480 9 0 0 0 0 0 0 0 0 0
830481 11 0 0 0 0 0 0 0 0 0
[830482 rows x 10 columns]
</code></pre>
<p>Here is what i tried and it literally can take a full minute and i have larger data sets to work with
so any shortcuts or best methods would trull be helpful:</p>
<pre><code># Update: Instead of all 800k of 'what', i put the first 5 numbers in rstr so you can see how i'm xor accumulating. You should be able to copy/paste the first 6 elements of the data from with pd.read_clipboard() and assign to acctable.
In [121]: rstr
Out[121]: array([ 4, 4, 12, 14, 6, 4], dtype=int8)
dt = np.int8
rstr = np.array(acctable.loc[:5, ('what')], dtype=dt)
for x in range(4): # # Prime Sequencing Functions
wuttr = np.bitwise_xor.accumulate(np.r_[[rstr[-(x+1)]], acctable.loc[x, 'what':]], dtype=dt)
acctable.loc[x+1, "what":] = wuttr[:end]
</code></pre>
<p>After:</p>
<pre><code>
In [122]: acctable[0:20]
Out[122]:
what dx1 dx2 dx3 dx4 dx5 dx6 dx7 dx8 dx9
0 4 2 10 8 0 5 7 1 13 11
1 4 0 2 8 0 0 5 2 3 14
2 6 2 2 0 8 8 8 13 15 12
3 14 8 10 8 8 0 8 0 13 2
4 12 2 10 0 8 0 0 8 8 5
5 4 8 10 0 0 8 8 8 0 8
6 1 5 13 7 7 7 15 7 15 15
... ... ... ... ... ... ... ... ... ... ...
830477 15 15 7 0 0 5 9 14 10 3
830478 3 12 3 4 4 4 1 8 6 12
830479 11 8 4 7 3 7 3 2 10 12
830480 9 2 10 14 9 10 13 14 12 6
830481 11 2 0 10 4 13 7 10 4 8
[830482 rows x 10 columns]
</code></pre>
<p>It's a simple accumulate but you need to have the previous row to continue the accumulation and the only way i could of is using the for loop. Also "rstr" variable is actually the "what" column.</p>
<p>Thanks!</p>
<p>I received this result from an ai but it only works on the first rows:</p>
<pre><code>what_arr = acctable['what'].to_numpy().reshape(-1) # Reshape to ensure 1D array
# Modified XOR accumulation:
all_what_arr = np.concatenate([[what_arr[0]], what_arr[1:]])
cumulative_xor = np.bitwise_xor.accumulate(all_what_arr)
shifted_xor = cumulative_xor[1:].reshape(-1, 1)
acctable.iloc[1:, 1:] = shifted_xor ^ acctable.iloc[1:, 1:]
In [171]: acctable
Out[171]:
what dx1 dx2 dx3 dx4 dx5 dx6 dx7 dx8 dx9
0 4 2 10 8 0 5 7 1 13 11
1 6 0 2 8 0 0 5 2 3 14
2 14 4 6 6 12 14 12 11 11 10
3 12 2 10 0 10 10 8 8 15 8
4 4 12 14 4 4 14 4 12 4 11
</code></pre>
<p>Here are the timeit values as you can see Andrej's modification and njit use were a huge factor of speedup!</p>
<pre><code>In [262]: import timeit
...:
...: setup = """
...: import numpy as np
...: import pandas as pd
...: from numba import njit
...:
...:
...: def do_work_no_njit(df):
...: dt = np.int8
...: end = -1
...: rstr = np.array(df.loc[:, 0], dtype=dt)
...: for x in range(len(df)):
...: wuttr = np.bitwise_xor.accumulate(np.r_[[rstr[-(x+1)]], df.loc[x, 0:]], dtype=dt)
...: df.loc[x+1, 0:] = wuttr[:end]
...:
...: @njit
...: def do_work(vals):
...: for row in range(vals.shape[0] - 1):
...: for i in range(vals.shape[1] - 1):
...: vals[row + 1, i + 1] = vals[row, i] ^ vals[row + 1, i]
...:
...: # Replace with your DataFrame creation code
...: df = pd.DataFrame(np.random.randint(0, 15, size=(1000000, 10)), dtype=np.int8) # Example DataFrame, dtype=np.int8) # Example DataFrame
...: """
...:
...: stmt = """
...: do_work(df.values)
...: """
...:
...: stmtnonjit = """
...: do_work_no_njit(df.copy())
...: """
...:
...: number = 1 # Adjust the number of repetitions as needed
...:
...: time = timeit.timeit(stmtnonjit, setup, number=number)
...: print(f"Average time per execution no njit: {time / number:.4f} seconds")
...:
...: time = timeit.timeit(stmt, setup, number=number)
...: print(f"Average time per execution with njit and optimized code by Andrej: {time / number:.4f} seconds")
...:
Average time per execution no njit: 73.3801 seconds
Average time per execution with njit and optimized code by Andrej: 0.0442 seconds
</code></pre>
|
<python><pandas><numpy><bitwise-operators>
|
2024-02-04 19:38:00
| 1
| 7,224
|
oppressionslayer
|
77,937,287
| 1,016,428
|
Better way to find nonzero indices and values in a 2D array
|
<p>I am still fighting against the problem of finding the <code>i</code>, <code>j</code> indices (and the corresponding values) of nonzero entries in a <strong>dense</strong>, <strong>single precision</strong>, <strong>Fortran-ordered</strong> 2D array. I am using Cython via Python with some C in the middle.</p>
<p>I apologize in advance as this post is going to be horribly long.</p>
<h2>Introduction</h2>
<p>I have to process thousands (sometimes millions) of medium-size 2D arrays (sometimes 700 x 1,000, sometimes 6,000 x 7,000 and so on), which are quite sparse but they are provided as dense (density can be as low as 0.02% and as high as 1-2%). These matrices sometimes have some sort of structure but in general this is not predictable.</p>
<p>I have tried numpy.nonzero and the Scipy sparse stuff, but they are unfortunately slower than what I have.</p>
<p>I am asking this question to see <strong>if there are possibilities for improvements in performance</strong> in my (possibly incorrect) code - i.e., make it faster - but also to learn new things from more experienced people.</p>
<p>My proficiency in Cython is very limited. My knowledge of C is abysmal (really, pretty much zero). Everything I know about SIMD instructions can be written on a postage stamp in large letters.</p>
<p>I have scoured StackOverflow back and forth and found answers to kind-of-similar questions, many of them with very advanced SIMD solutions. But since I know nothing about SIMD, I am unable to modify them to suit my needs.</p>
<h2>Configuration</h2>
<ul>
<li>Windows 10 64 bit (Skylake AVX512, but I should target also Icelake-client and Alderlake, and possibly a few others)</li>
<li>Python 3.9.11 64 bit</li>
<li>Cython 0.29.32, NumPy 1.21.5</li>
<li>GCC 11.2.0 (I could go to GCC 12 if needed)</li>
</ul>
<p>I compile the Cython script posted below with these flags:</p>
<pre><code>-O3 -ffast-math -funroll-loops -ftree-vectorize
</code></pre>
<h2>Problem Description</h2>
<p>I cannot change the way these matrices are generated nor their dtype, and I have to repeatedly find the nonzero elements in these matrices, and in particular these 4 information:</p>
<ol>
<li>The number of nonzero elements</li>
<li>A 1D array of row indices at which elements of the matrix are nonzero</li>
<li>A 1D array of column indices at which elements of the matrix are nonzero</li>
<li>A 1D array containing the nonzero elements</li>
</ol>
<p>Of course, len(row_indices) = len(column_indices) = len(x) = number of nonzeros</p>
<p>In the scripts below I have implemented 3 approaches:</p>
<ol>
<li>"Naive": Loop through all the 2D array elements (first by columns, then by row) and see if the element is nonzero. If it is, store its <code>i</code> and <code>j</code> indices and the array value in 3 separated, previously <code>malloc</code>ed arrays</li>
<li>"Memcmp": For each column, compare 64 floats at the time against a zero-array using <code>memcmp</code>. If <code>memcmp</code> returns a non-zero value, there is at least one nonzero element between row <code>k</code> and row <code>k + 64</code></li>
<li>"Blocks": Similar to the "Memcmp" approach, but it uses the clever technique in the first answer of this post: (<a href="https://stackoverflow.com/questions/35450237/fastest-way-to-check-mass-data-if-null-in-c">Fastest way to check mass data if null in C?</a>) and later modifications by @Peter Cordes.</li>
</ol>
<p>I need these 3 arrays (<code>row_indices</code> for <code>i</code> indices, <code>col_starts</code> for <code>j</code> indices and <code>x</code> for nonzero floats in the matrix) to pass them to another library.</p>
<h2>C/Cython/Python Scripts</h2>
<p><em>nnz_c_support.h</em> (C code, used by the Cython script)</p>
<pre><code>#include <stdlib.h>
#include <stdint.h>
#include <limits.h>
#include <stddef.h>
__attribute__ ((hot)) int dataisnull(const void *data, size_t length) {
/* assuming data was returned by malloc, thus is properly aligned */
size_t i = 0, i0, n = length / sizeof(size_t);
const size_t *pw = data;
const unsigned char *pb = data;
#define UNROLL_FACTOR 8
#if UNROLL_FACTOR == 8
size_t n1 = n - n % UNROLL_FACTOR;
for (; i < n1; i += UNROLL_FACTOR) {
size_t val = pw[i + 0] | pw[i + 1] | pw[i + 2] | pw[i + 3] |
pw[i + 4] | pw[i + 5] | pw[i + 6] | pw[i + 7];
if (val)
return i+1;
}
#endif
size_t val = 0;
// gcc and clang think they should autovectorize this cleanup loop
// using a single-byte accumulator convinces them at least not to unpack bytes to size_t before ORing
i0 = i+1;
i *= sizeof(size_t);
// i = n * sizeof(size_t)
unsigned char bval = 0;
for (; i < length; i++) {
bval |= pb[i];
}
return ((val|bval) != 0 ? i0 : 0);
}
</code></pre>
<p><em>nonzero.pyx</em> (Cython code with the 3 different methods)</p>
<pre><code>#cython: language_level=3,boundscheck=False,wraparound=False,nonecheck=False,initializedcheck=False,cdivision=True
###############################################################################
import numpy as np
cimport numpy as np
import cython
from libc.stdlib cimport malloc, calloc, free
from libc.string cimport memcmp
cdef int MAXNNZ = 500000
DTYPE_float = np.float32
ctypedef np.float32_t DTYPE_float_t
cdef extern from "nnz_c_support.h" nogil:
cdef int dataisnull(const void *data, size_t length)
cdef int find_nnz_naive(DTYPE_float_t[::1, :] matrix) nogil:
cdef int n, m, i, j, k, nza, nelem, next_k
cdef float val1
m = matrix.shape[0]
n = matrix.shape[1]
nelem = min(m*n, MAXNNZ)
col_starts = <int*> malloc(sizeof(int)*nelem)
row_indices = <int*> malloc(sizeof(int)*nelem)
x = <double*> malloc(sizeof(double)*nelem)
nza = 0
for i in range(n):
for j in range(m):
val1 = matrix[j, i]
if val1 != 0.0:
nza = nza + 1
# Store i, j and val1 indices/values in 3 arrays
# x has to be an array of doubles
col_starts[nza] = i + 1
row_indices[nza] = j + 1
x[nza] = val1
free(x)
free(row_indices)
free(col_starts)
return nza
cdef int find_nnz_memcmp(DTYPE_float_t[::1, :] matrix) nogil:
cdef int n, m, i, j, k, nza, nelem, next_k
cdef int mcpi
cdef float val1
m = matrix.shape[0]
n = matrix.shape[1]
cdef int bytes_per_float = 4
cdef int n_floats_per_block = 64 if m > 512 else 32
cdef int comp_bytes = n_floats_per_block * bytes_per_float
test_block = <float*> calloc(n_floats_per_block, sizeof(float))
nelem = min(m*n, MAXNNZ)
col_starts = <int*> malloc(sizeof(int)*nelem)
row_indices = <int*> malloc(sizeof(int)*nelem)
x = <double*> malloc(sizeof(double)*nelem)
nza = 0
for i in range(n):
k = 0
while k < m:
mcpi = memcmp(&test_block[0], &matrix[k, i], comp_bytes)
next_k = min(m, k + n_floats_per_block)
if mcpi == 0:
k = next_k
continue
for j in range(k, next_k):
val1 = matrix[j, i]
if val1 != 0.0:
nza = nza + 1
# Store i, j and val1 indices/values in 3 arrays
# x has to be an array of doubles
col_starts[nza] = i + 1
row_indices[nza] = j + 1
x[nza] = val1
k = next_k
free(x)
free(row_indices)
free(col_starts)
free(test_block)
return nza
cdef int find_nnz_blocks(DTYPE_float_t[::1, :] matrix) nogil:
cdef int n, m, i, j, k, nza, nelem, next_k
cdef int mcpi, reminder, comp_bytes
cdef float val1
m = matrix.shape[0]
n = matrix.shape[1]
nelem = min(m*n, MAXNNZ)
col_starts = <int*> malloc(sizeof(int)*nelem)
row_indices = <int*> malloc(sizeof(int)*nelem)
x = <double*> malloc(sizeof(double)*nelem)
cdef int bytes_per_float = 4
cdef int n_floats_per_block = 64 if m > 512 else 32
cdef int comp_bytes_default = n_floats_per_block * bytes_per_float
reminder = 4*(m % n_floats_per_block)
nza = 0
for i in range(n):
k = 0
while k < m:
comp_bytes = comp_bytes_default
next_k = k + n_floats_per_block
if next_k >= m:
comp_bytes = reminder
next_k = m
mcpi = dataisnull(&matrix[k, i], comp_bytes)
if mcpi != 0:
for j in range(k + mcpi - 1, next_k):
val1 = matrix[j, i]
if val1 != 0.0:
nza = nza + 1
# Store i, j and val1 indices/values in 3 arrays
# x has to be an array of doubles
col_starts[nza] = i + 1
row_indices[nza] = j + 1
x[nza] = val1
k = next_k
free(x)
free(row_indices)
free(col_starts)
return nza
cpdef int find_nnz(DTYPE_float_t[::1, :] matrix, int method):
with nogil:
if method == 0:
return find_nnz_naive(matrix)
elif method == 1:
return find_nnz_memcmp(matrix)
elif method == 2:
return find_nnz_blocks(matrix)
</code></pre>
<p><em>setup.py</em> (Python file to compile the Cython one into pyd)</p>
<pre><code>import os
from setuptools import setup
from setuptools import Extension
from Cython.Build import cythonize
from Cython.Distutils import build_ext
import numpy as np
# ==================================================================================================================== #
# C extensions
# ==================================================================================================================== #
EXTRA_COMPILE_ARGS = ['-O3', '-ffast-math', '-funroll-loops', '-flto', '-ftree-vectorize', '-DMS_WIN64']
EXTRA_LINK_ARGS = ['-flto', '-static'] + EXTRA_COMPILE_ARGS
class CustomBuildExt(build_ext):
def build_extensions(self):
# Override the compiler executables. Importantly, this
# removes the "default" compiler flags that would
# otherwise get passed on to to the compiler, i.e.,
# distutils.sysconfig.get_var("CFLAGS").
self.compiler.set_executable("compiler_so", "gcc -mdll -O -Wall -DMS_WIN64")
self.compiler.set_executable("compiler_cxx", "g++ -O -Wall -DMS_WIN64")
self.compiler.set_executable("linker_so", "gcc -shared -static")
self.compiler.dll_libraries = []
build_ext.build_extensions(self)
def Compile():
extension_kwargs = {'extra_compile_args': EXTRA_COMPILE_ARGS,
'extra_link_args' : EXTRA_LINK_ARGS,
'include_dirs' : [np.get_include()],
'define_macros' : [('WIN32', 1)]}
module_name = 'nonzero'
extension = Extension(module_name, [module_name + '.pyx'], **extension_kwargs)
# build the core extension(s)
setup_kwargs = {'cmdclass': {'build_ext': CustomBuildExt},
'ext_modules': cythonize(extension,
compiler_directives={'embedsignature' : False,
'boundscheck' : False,
'wraparound' : False,
'initializedcheck': False,
'cdivision' : True,
'nonecheck' : False,
'language_level' : '3str'},
force=True,
cache=False,
quiet=False)}
setup(**setup_kwargs)
if __name__ == '__main__':
Compile()
</code></pre>
<p><em>main.py</em> (Python test file, to run timings and tests)</p>
<pre><code>import numpy as np
import timeit
from nonzero import find_nnz
def create_matrix(m, n, density):
"""
Creates a `m` x `n` single-precision, fortran-ordered dense matrix
with a density of nonzero elements specified by `density` (as percentage)
"""
n_elem = m * n
# Number of non zero values
size = int(round(density / 100.0 * n_elem))
raveled_ind = np.random.choice(n_elem, size=size, replace=False)
i_ind, j_ind = np.unravel_index(raveled_ind, shape=(m, n))
matrix = np.zeros((m, n), dtype=np.float32, order='F')
matrix[i_ind, j_ind] = 1e3 * np.random.randn(size)
return matrix
N_TRIALS = 1000
NAIVE = 0
MEMCMP = 1
BLOCKS = 2
METHODS_STRINGS = ['Naive', 'Memcmp', 'Blocks']
METHODS_ID = [NAIVE, MEMCMP, BLOCKS]
def time_nonzero(m, n, density):
matrix = create_matrix(m, n, density)
assert matrix.dtype == np.float32, 'Single precision floats only'
assert np.isfortran(matrix), 'Matrix is not Fortran ordered'
nnz = np.count_nonzero(matrix)
print('%-6d %-6d %-6d %-8.4f' % (m, n, nnz, density), end=' ')
nnz_naive = find_nnz(matrix, NAIVE)
nnz_memcmp = find_nnz(matrix, MEMCMP)
nnz_blocks = find_nnz(matrix, BLOCKS)
assert nnz_naive == nnz_memcmp, 'NNZ (naive) = %d, NNZ (memcmp) = %d' % (nnz_naive, nnz_memcmp)
assert nnz_naive == nnz_blocks, 'NNZ (naive) = %d, NNZ (blocks) = %d' % (nnz_naive, nnz_blocks)
glb = {'constraint_matrix': matrix}
for method_id, method_name in zip(METHODS_ID, METHODS_STRINGS):
elapsed = timeit.timeit("find_nnz(constraint_matrix, %d)" % method_id,
globals=glb,
setup="from __main__ import find_nnz", number=N_TRIALS)
elapsed = elapsed * 1e3 / N_TRIALS
print('%-7.3f' % elapsed, end=' ')
print()
# ------------------------------------------- #
# Test cases with various m, n and densities
# M N Density (%)
TEST_CASES = [[1694, 2684, 0.1262],
[6295, 6955, 0.0281],
[4126, 5335, 0.0386],
[625 , 860 , 0.2366],
[491 , 667 , 0.2931],
[478 , 680 , 0.3295],
[780 , 1545, 0.2254],
[1012, 756 , 0.2189],
[1333, 1724, 0.1312],
[1699, 4021, 0.0883],
[3248, 3677, 0.0598],
[195 , 588 , 1.2245],
[1915, 3013, 0.0935],
[546 , 1475, 0.2297]]
# ------------------------------------------- #
def main():
methods_names = ''.join(['%-8s' % name for name in METHODS_STRINGS])
print('M N NNZ Density ' + methods_names)
for m, n, density in TEST_CASES:
time_nonzero(m, n, density)
if __name__ == '__main__':
main()
</code></pre>
<p>Command to compile the Cython file:</p>
<pre><code>python setup.py build_ext --inplace --compiler=mingw32 --verbose --force
</code></pre>
<h2>Results</h2>
<p>Running the benchmarks above (it takes about 2 minutes on my machine), I get this (times are in milliseconds, Skylake AVX512):</p>
<p><a href="https://i.sstatic.net/n48EL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n48EL.png" alt="enter image description here" /></a></p>
<p>As you can see, the "Memcmp" approach is about 20% faster than the "Naive" one, while the "Blocks" approach is more than 30% faster than "Naive" (at least for larger matrices).</p>
<p>For some obscure (to me) reasons, if I <strong>remove/comment out</strong> these 3 lines in all 3 methods:</p>
<pre><code>col_starts[nza] = i + 1
row_indices[nza] = j + 1
x[nza] = val1
</code></pre>
<p>But leave in the <code>nza = nza + 1</code>, then the "Naive" approach is the fastest by a wide margin (30% or more) (!!!!).</p>
<p>In any case, any suggestion you may have to make my not-so-nice code run faster is most welcome.</p>
|
<python><c><gcc><cython><simd>
|
2024-02-04 19:26:43
| 1
| 1,449
|
Infinity77
|
77,937,285
| 1,028,289
|
Python error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xdb in position 0: unexpected end of data
|
<p>I am trying to append a column of a csv file in a jsonl file using python. I tried fol code:-</p>
<pre><code> import csv
import json
def csv_column_to_jsonl(csv_file, column_index, jsonl_file):
with open(csv_file, 'r', encoding='utf-8-sig') as file:
reader = csv.reader(file)
data = [row[column_index] for row in reader]
with open(jsonl_file, 'a',encoding='ascii') as file:
for item in data:
json.dump({"text": item}, file)
file.write('\n')
csv_file = 'mydataset.csv'
column_index = 2
jsonl_file = 'jsondata.jsonl'
csv_column_to_jsonl(csv_file, column_index, jsonl_file)
</code></pre>
<p>Error i am getting</p>
<p><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xdb in position 0: unexpected end of data</code></p>
<p>Using chardet, the encoding types are as fol:-</p>
<pre><code>jsondata.jsonl {'encoding': 'ascii', 'confidence': 1.0, 'language': ''}
mydataset.csv {'encoding': 'UTF-8-SIG', 'confidence': 1.0, 'language': ''}
</code></pre>
<p>The only combination that works and append csv in jsonl file is when i use encoding=unicode_escape for csv file but in that case. the resulting jsonl display data like fol:-</p>
<p><a href="https://i.sstatic.net/F7oIz.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F7oIz.jpg" alt="enter image description here" /></a></p>
<p>Till index 4 is the previous data while from index 120315, the appended data is shown which is not matching. Please let me know what to amend in my code.</p>
|
<python><json><csv><google-colaboratory><codec>
|
2024-02-04 19:25:33
| 1
| 391
|
jaykio77
|
77,937,064
| 960,115
|
OpenPyXL Set Worksheet Formula and Value
|
<p>I'm trying to create an Excel workbook in Python that includes formulas, and want to parse the workbook later without needing to open it in Excel first to evaluate the formula. E.g., I have a formula with a known computation result, and want to store both the formula and its resulting value using OpenPyXL.</p>
<p>How can I accomplish this with OpenPyXL?</p>
<p>E.g., I want to code the following worksheet:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Value</th>
<th>Log(10)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0 (=LOG10(A2))</td>
</tr>
<tr>
<td>10</td>
<td>1 (=LOG10(A3))</td>
</tr>
</tbody>
</table>
</div>
<p>The code I was expecting to write to produce this is something like the following:</p>
<pre><code>import math
from openpyxl import Workbook, load_workbook
wb = Workbook()
ws = wb.active
ws.cell(1, 1).value = 'Value'
ws.cell(1, 2).value = 'Log(10)'
for row, i in enumerate([1, 10], start = 2):
ws.cell(row, 1).value = i
formulaCell = ws.cell(row, 2)
# What do I have to assign here in order to be able to read both the formula and value later?
formulaCell.value = math.log10(i)
formulaCell.formula = f'=LOG10(A{row})'
wbName = 'test.xlsx'
wb.save(wbName)
# Read the value and formula back in
wb_data = load_workbook(wbName, data_only = True)
wb_formulas = load_workbook(wbName, data_only = False)
# The following is expected to print:
# 0
# =LOG10(A2)
print(wb_data['B2'].value)
print(wb_formulas['B2'].value)
</code></pre>
|
<python><python-3.x><openpyxl>
|
2024-02-04 18:22:56
| 0
| 4,735
|
Jeff G
|
77,936,766
| 14,250,641
|
Mapping embeddings to labels in PyTorch/Huggingface
|
<p>I am currently working on a project where I am using a pre-trained transformer model to generate embeddings for DNA sequences (some have a '1' label and some have a '0' label). I'm trying to map these embeddings back to their corresponding labels in my dataset, but I'm encountering an IndexError when attempting to do so. I think it has to do with the fact that I am batching since I'm running out of memory.</p>
<p>Here is the code I'm working with:</p>
<pre><code>from datasets import Dataset
from transformers import AutoTokenizer, AutoModel
import torch
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-500m-human-ref")
model = AutoModel.from_pretrained("InstaDeepAI/nucleotide-transformer-500m-human-ref")
# Load the dataset
ds1 = Dataset.from_file('training.arrow') #this is already tokenized
# Convert tokenized sequences to tensor
inputs = torch.tensor(ds1['input_ids']).to(torch.device("cuda" if torch.cuda.is_available() else "cpu"))
# Reduce batch size
batch_size = 4
# Pass tokenized sequences through the model with reduced batch size
with torch.no_grad():
outputs = model(input_ids=inputs[:batch_size], output_hidden_states=True)
# Extract embeddings
hidden_states = outputs.hidden_states
embeddings1 = hidden_states[-1]
</code></pre>
<p>Here is the information about the size of the output embeddings and the original dataset:</p>
<pre><code>embeddings1.shape
torch.Size([4, 86, 1280])
ds1
Dataset({
features: ['labels', 'input_ids', 'attention_mask'],
num_rows: 22535512
})
</code></pre>
<p>I'm having a hard time figuring out how to map the labels back to the output embeddings, especially with the big discrepancy with the sizes. As you can see, I have 22million sequences, I would like a an embedding for each sequence.</p>
<p>My plan is to use these embeddings for downstream prediction using another model.
I have already split my data into train, test, and val, but does it make more sense to get the embeddings for a label1 dataset and label0 dataset and then combine and then split into train/test, so I don't have to worry about the mapping of the labels?</p>
|
<python><tensorflow><pytorch><huggingface-transformers><word-embedding>
|
2024-02-04 17:06:58
| 2
| 514
|
youtube
|
77,936,762
| 2,977,164
|
Making a shiny application using YOLOv5
|
<p>I want to create a dashboard application in Shiny in Rstudio using Python.
The objective is to deploy a TensorFlow model inside a Shiny app. We will build a model that can classify insects in images; then, we will build a Shiny app that lets you upload an image and get predictions from this model.
My idea is to create a dashboard that uses a training neural network (Yolov5) (<code>model = 'best.pt'</code>) to identify the targets with a bounding box in a JPG image chosen. I try to draft this:</p>
<pre><code># app.py
from shiny import UI
from yolo_v5_inference import Inference
from PIL import Image as view
from IPython.display import Image
</code></pre>
<p>and</p>
<pre><code># ui.page
# Load the model
model = r'best.pt' # Better model trains using Yolov5 neural network in Python
# Define the UI
ui <- fluidPage(
# App title ----
titlePanel("Hello TensorFlow!"),
# Sidebar layout with input and output definitions ----
sidebarLayout(
# Sidebar panel for inputs ----
sidebarPanel(
# Input: File upload
fileInput("image_path", label = "Input a JPEG image")
),
# Main panel for displaying outputs ----
mainPanel(
# Output:
textOutput(outputId = "prediction"),
plotOutput(outputId = "image")
)
)
)
# Define server logic required to draw a histogram ----
server <- function(input, output) {
image <- reactive({
req(input$image_path)
jpeg::readJPEG(input$image_path$datapath)
})
output$prediction <- renderText({
img <- image() %>%
array_reshape(., dim = c(1, dim(.), 1))
standart = Inference(crops_path=imgage, yolov5_model_path=model, conf_threshold=0.5, rescale=(1.5, 1.5), save_rescaled=True)
standart.standart
paste0("The predicted bounding-box ")
})
output$image <- renderPlot({
plot(as.raster(image()))
})
}
shinyApp(ui, server)
</code></pre>
<p>Please, could someone help me?</p>
|
<python><r><shiny><yolo>
|
2024-02-04 17:05:30
| 0
| 1,883
|
Leprechault
|
77,936,741
| 18,346,591
|
Plot arrow on each point towards the line in graph
|
<p>I am trying to plot arrows from each data point towards the line in the graph using matplotlib.</p>
<p><a href="https://i.sstatic.net/sCkRn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sCkRn.png" alt="Graph" /></a></p>
<p>I want the arrow to represent the distance between each point and the line. How can I do this?</p>
<p>Here's my code:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Create a straight line (45-degree angle)
x_line = np.linspace(0, 10, 100)
y_line = x_line
# Add some random points around the line
num_points = 20
x_points = np.linspace(2, 8, num_points) # Adjust the range as needed
y_points = x_points + np.random.normal(0, 0.5, num_points) # Add some randomness
# Plot the line
plt.plot(x_line, y_line, label='Line', color='blue')
# Plot the points
plt.scatter(x_points, y_points, label='Points', color='red')
# Set labels and title
plt.xlabel('X-axis')
plt.ylabel('Y-axis')
plt.title('Scatter Plot Around a Line')
# Show legend
plt.legend()
# Display the plot
plt.show()
</code></pre>
<p>I tried doing this myself but failed:
<a href="https://i.sstatic.net/bTxIP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bTxIP.png" alt="enter image description here" /></a></p>
<p>Code:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Create a straight line (45-degree angle)
x_line = np.linspace(0, 10, 100)
y_line = x_line
# Add some random points around the line
num_points = 20
x_points = np.linspace(2, 8, num_points) # Adjust the range as needed
y_points = x_points + np.random.normal(0, 0.5, num_points) # Add some randomness
# Plot the line
plt.plot(x_line, y_line, label='Line', color='blue')
# Plot the points
plt.scatter(x_points, y_points, label='Points', color='red')
# Add arrows from each point to the line
for x, y in zip(x_points, y_points):
plt.arrow(x, y, 0, y - x, color='black', linestyle='dashed', linewidth=0.5, head_width=0.2)
# Set labels and title
plt.xlabel('X-axis')
plt.ylabel('Y-axis')
plt.title('Scatter Plot Around a Line')
# Show legend
plt.legend()
# Display the plot
plt.show()
</code></pre>
<p>As you can see the data points shifted and the arrows point outwards rather than inwards or towards the line.</p>
|
<python><matplotlib><machine-learning><graph><least-squares>
|
2024-02-04 17:00:07
| 3
| 662
|
Alexander Obidiegwu
|
77,936,723
| 3,674,127
|
Conftest discovery with tests as part of application code
|
<p>I'm using pytest and decided to go with <a href="https://docs.pytest.org/en/7.1.x/explanation/goodpractices.html#tests-as-part-of-application-code" rel="nofollow noreferrer">Tests as part of application code</a> pattern.
Everything works fine besides conftest discovery.</p>
<p>I have the following structure:</p>
<pre><code>module_a/
a.py
__tests__/
conftest.py
test_a.py
submodule/
b.py
__tests__/
test_b.py
</code></pre>
<p>In <code>conftest.py</code> under <code>module_a/__tests__</code> I define fixtures that I want to be able to use in <code>module_a/submodule/__tests__</code>, but it doesn't discover that.
If I move the <code>conftest.py</code> to <code>module_a</code> (not inside <code>__tests__</code>), it works, but I dont want to define it outside of the <code>__tests__</code> directory.</p>
<p>Any idea how can I achieve this?</p>
|
<python><pytest><pytest-fixtures><conftest>
|
2024-02-04 16:55:41
| 1
| 4,295
|
Golan Kiviti
|
77,936,303
| 4,699,441
|
Python tests separate directory from production code: cannot get consistent imports
|
<p>In a submodule I have a class <code>Parent</code> that uses an auxillary class <code>Child</code> located in the same module.</p>
<p><strong>Problem</strong>: depending on whether I execute the tests or run the application, the <code>import</code> statement in the <code>parent.py</code> needs to look different.</p>
<p>The project structure is like this:</p>
<pre><code>project-name/
tests/
__init__.py
test.py
project_name/
__init__.py
__main__.py
submodule/
parent.py
child.py
</code></pre>
<p>I execute the program and the tests like this:</p>
<pre class="lang-bash prettyprint-override"><code># go to the root directory of the project
cd project-name/
# I cannot get those two lines get to work with the same imports:
python3 -m unittest discover
python3 project_name
</code></pre>
<p>The content of the files is like this:</p>
<pre class="lang-py prettyprint-override"><code># tests/__init__.py
# empty
</code></pre>
<pre class="lang-py prettyprint-override"><code># tests/test.py
import unittest
from project_name import Parent
class Test(unittest.TestCase):
def test(self): Parent().hello()
</code></pre>
<pre class="lang-py prettyprint-override"><code># project_name/__init__.py
from project_name.submodule.child import Child
from project_name.submodule.parent import Parent
</code></pre>
<pre class="lang-py prettyprint-override"><code># project_name/__main__.py
from submodule.parent import Parent
parent = Parent()
parent.hello()
</code></pre>
<pre class="lang-py prettyprint-override"><code># project_name/submodule/child.py
class Child:
def __init__(self):
pass
def hello(self):
print("Hello World")
</code></pre>
<pre class="lang-py prettyprint-override"><code># project_name/submodule/parent.py
# Works for __main__.py:
from submodule.child import Child
# Works for test:
#from project_name import Child
class Parent:
def __init__(self):
self.child = Child()
def hello(self):
self.child.hello()
</code></pre>
<hr />
<p>The solution suggested by nigh_anxiety is not sufficient: it solves this particular import where the other file is in the same folder, but imports from other packages will still not work.</p>
<p>For that, there's <a href="https://stackoverflow.com/questions/77938918/how-to-get-python-imports-to-work-consistently/">a follow-up question</a>.</p>
<p>The solution to the follow-up question also solves this problem, therefore it's the better solution.</p>
<p>But I'm not allowed to post the solution, so I just mention it here.</p>
|
<python><python-import>
|
2024-02-04 15:02:57
| 1
| 1,078
|
user66554
|
77,936,268
| 12,931,358
|
There are two subfolders A and B under a large folder. How to import the .py file from B into A?
|
<p>the structure of my folder like,</p>
<pre><code>my_proj(large folder)
-__init__.py
-dosomething(A)
--__init__.py
--test.py
-calculate(B)
--__init__.py
--add_num.py
</code></pre>
<p>While</p>
<pre><code>#in add_num.py
def my_add_num(a, b):
return a + b
#in __init__.py
from .add_num import my_add_num
</code></pre>
<p>I want to use <code>my_add_num</code> in <code>dosomething/test.py</code>, so I tried to use three methods,but both shows error, how to correct it?</p>
<pre><code># from ..calculate import my_add_num
#ImportError: attempted relative import with no known parent package
#from calculate.add_two_num import my_add_two_num
#No module named 'calculate'
from my_proj.calculate import my_add_num
# No module named 'my_proj'
print(my_add_num(100,300))
</code></pre>
|
<python><python-3.x>
|
2024-02-04 14:54:03
| 0
| 2,077
|
4daJKong
|
77,935,843
| 1,020,139
|
How to convert existing `TypeVar` into Python 3.12's type parameter syntax?
|
<p>Consider the following <code>TypeVar</code> declaration:</p>
<pre><code>T = TypeVar("T", bound=RedisCache)
def make_redis_client(c: ReadableContainer, cache_factory: Type[T], db: BffRedisDB) -> T:
return cache_factory(redis=redis.Redis(host=c[BffEnvData].get_redis_connection_url(), port=c[BffEnvData].get_redis_connection_port(), db=db.value, decode_responses=False))
</code></pre>
<p>How can I convert it into Python 3.12's new type parameter syntax?</p>
<pre><code>type HashableSequence[T: Hashable] = Sequence[T] # TypeVar with bound
</code></pre>
<p><a href="https://docs.python.org/3/whatsnew/3.12.html#new-features" rel="nofollow noreferrer">https://docs.python.org/3/whatsnew/3.12.html#new-features</a></p>
<p>I have tried these:</p>
<pre><code>type CacheFactory[T] = Type[T] # 1st attempt
type T[T: RedisCache] # 2nd attempt
...
</code></pre>
<p>For the first attempt, I get the following error: <code>PEP 695 type aliases are not yet supportedMypyvalid-type</code></p>
<p>Does that mean Mypy doesn't support Python 3.12 type parameter syntax?</p>
|
<python><python-3.x>
|
2024-02-04 12:47:39
| 1
| 14,560
|
Shuzheng
|
77,935,689
| 20,771,478
|
Python: MSAL --> Getting token with username and password
|
<p>I am following the <a href="https://learn.microsoft.com/en-us/entra/identity-platform/scenario-desktop-acquire-token-username-password?tabs=python" rel="nofollow noreferrer">Microsoft documentation for aquiring an MSAL token by username and password</a>.</p>
<p>For understanding how MSAL works I try something very simple: Aquiring a token using a username and password. Curly brackets ({}) in the below example represent IDs or credentials I have copied there.</p>
<pre><code>authority_url = 'https://login.microsoftonline.com/{Tenant}β
app = msal.PublicClientApplication(
authority=authority_url
, client_id={client_id}'
, client_credential=None
)
app.acquire_token_by_username_password(
'{mail_of_windows_user}'
, β{password_of_windows_user}β
, scopes = ["User.Read"])
</code></pre>
<p>When I run the code I get an error with ID 7000218:</p>
<blockquote>
<p>The request body must contain the following parameter:
'client_assertion' or 'client_secret'.</p>
</blockquote>
<p>This error is unexpected because a <code>PublicClientApplication</code> should work without a client secret. In the <a href="https://learn.microsoft.com/en-us/python/api/msal/msal.application.clientapplication?view=msal-py-latest" rel="nofollow noreferrer">documentation</a> for the parameter <code>client_credential</code>, one can read:</p>
<blockquote>
<p>For PublicClientApplication, you simply use None here.</p>
</blockquote>
<p>Why do I need a client secret even though the documentation states I need none?</p>
|
<python><azure><azure-active-directory><token><msal>
|
2024-02-04 12:02:50
| 1
| 458
|
Merlin Nestler
|
77,935,507
| 9,779,999
|
How to create a Chromadb after VectorstoreIndexCreator for your CSV?
|
<p>I have successfully created a chatbot that can answer question by referencing to the csv. My code is as below,</p>
<pre><code>loader = CSVLoader(file_path='data.csv') # load the csv
index_creator = VectorstoreIndexCreator() # initiation
docsearch = index_creator.from_loaders([loader]) # embedding
</code></pre>
<p>My chain is as follow,</p>
<pre><code>chain = RetrievalQA.from_chain_type(llm=OpenAI(model_name="gpt-3.5-turbo", temperature=0), chain_type="stuff", retriever=docsearch.vectorstore.as_retriever(), input_key="question")
</code></pre>
<p>Everything is perfect so far.</p>
<p>But I am trying to create an app which will solve problems by referencing to this csv, therefore I would like to store the vectorized data into a chromadb which can be retrieved without embedding again.</p>
<p>I understand that (please tell me if I am wrong) docsearch.vectorstore is already a chromadb. When I <code>print(docsearch.vectorstore)</code>, it returns,</p>
<pre><code><langchain_community.vectorstores.chroma.Chroma object at 0x13e079130>
</code></pre>
<p>But how do it store it as a file? Like that you would do after embedding a txt or pdf file, you persist it in a folder.</p>
<pre><code>embedding = OpenAIEmbeddings(openai_api_key=openai_api_key)
# embedding = OpenAIEmbeddings(openai_api_key=openai_api_key, model_name='text-embedding-3-small')
persist_directory = "embedding/chroma"
# Create a Chroma vector database for the current states after embedding
vectordb = Chroma(
persist_directory = persist_directory,
embedding_function = embedding)
vectordb.persist()
database = Chroma.from_documents(doc, embedding=embedding, persist_directory = persist_directory)
</code></pre>
<p>Then you will be able find the database file in the persist_directory.</p>
<p>So, my question is, how do I achieve a similar process with my csv data? I have googled, e.g. <a href="https://stackoverflow.com/questions/76174236/is-there-any-way-to-load-an-index-created-through-vectorstoreindexcreator-in-lan">Is there any way to load an index created through VectorstoreIndexCreator in langchain? How does it work?</a>
But it seems not the answer. Because the answer calls the "docs" and "embeddings", which I can't seem to find those in the op,.</p>
<p>Any help is much appreciated.</p>
|
<python><excel><csv><embedding><chromadb>
|
2024-02-04 11:05:10
| 1
| 1,669
|
yts61
|
77,935,397
| 4,862,162
|
Python InfiniteDefaultRevisionDictionary, any implementations?
|
<p>In programming, certain dictionary-based data structures are very useful and important. For example:</p>
<ol>
<li><p>Revision dictionary:
Revision dictionary is a sub-type of dictionary that keeps keys sorted in the order of last modification. It is commonly used in cache handling which remembers most recently used items.</p>
</li>
<li><p>Default dictionary:
Default dictionary can return a default element when the dictionary is accessed with a key that is not present in the dictionary. It already has an implementation in <code>collections.defaultdict</code>.</p>
</li>
<li><p>Infinite dictionary:
An infinite dictionary can be accessed with infinitely layer of nested recursion, i.e., infinite chaining of keys, e.g., <code>dct[person_name][gender], dct[company_name][employees]</code>, it can be used to create dynamic data structures, or even modeling file-system structures.</p>
</li>
</ol>
<p>So my question is that: in Python, is it possible to write a dictionary that has all the 3 features, i.e., an infinite default revision dictionary? In particular, one can specify options during creation, e.g., whether the items should be sorted in the order of insertion, or keys' order, or revision order. And how to implement if possible?</p>
<p>It would be very cool if future-version Python has a built-in Dictionary class that supports all of these features and options.</p>
<p><strong>Edits</strong>:
Point 2 might not be in contradiction with Point 3, that in principle, an InfiniteDefaultRevisionDictionary can have a default key other than lambda of itself. For example, if the default key is 0, then:</p>
<pre><code>dd=InfiniteDefaultRevisionDictionary(default=0, {})
print(dd['a']['b']['c']) # should give 0
dd['b']['c'][2] = [1, '2', 3.5] # should work fine
</code></pre>
|
<python><dictionary><defaultdict><revision><infinite-recursion>
|
2024-02-04 10:29:39
| 1
| 1,615
|
xuancong84
|
77,935,269
| 10,628,959
|
Performance results differ between run_in_threadpool() and run_in_executor() in FastAPI
|
<p>Here's a minimal reproducible example of my FastAPI app. I have a strange behavior and I'm not sure I understand the reason.</p>
<p>I'm using ApacheBench (<code>ab</code>) to send multiple requests as follows:</p>
<pre><code>ab -n 1000 -c 50 -H 'accept: application/json' -H 'x-data-origin: source' 'http://localhost:8001/test/async'
</code></pre>
<p><strong>FastAPI app</strong></p>
<pre><code>import time
import asyncio
import enum
from typing import Any
from fastapi import FastAPI, Path, Body
from starlette.concurrency import run_in_threadpool
app = FastAPI()
loop = asyncio.get_running_loop()
def sync_func() -> None:
time.sleep(3)
print("sync func")
async def sync_async_with_fastapi_thread() -> None:
await run_in_threadpool( time.sleep, 3)
print("sync async with fastapi thread")
async def sync_async_func() -> None:
await loop.run_in_executor(None, time.sleep, 3)
async def async_func() -> Any:
await asyncio.sleep(3)
print("async func")
@app.get("/test/sync")
def test_sync() -> None:
sync_func()
print("sync")
@app.get("/test/async")
async def test_async() -> None:
await async_func()
print("async")
@app.get("/test/sync_async")
async def test_sync_async() -> None:
await sync_async_func()
print("sync async")
@app.get("/test/sync_async_fastapi")
async def test_sync_async_with_fastapi_thread() -> None:
await sync_async_with_fastapi_thread()
print("sync async with fastapi thread")
</code></pre>
<p>Here's the ApacheBench results:</p>
<p><strong>async with (asyncio.sleep)</strong> :
*Concurrency Level: 50</p>
<ul>
<li>Time taken for tests: 63.528 seconds</li>
<li>Complete requests: 1000</li>
<li>Failed requests: 0</li>
<li>Total transferred: 128000 bytes</li>
<li>HTML transferred: 4000 bytes</li>
<li>Requests per second: 15.74 [#/sec] (mean)</li>
<li><strong>Time per request: 3176.407 [ms] (mean)</strong></li>
<li>Time per request: 63.528 [ms] (mean, across all concurrent requests)
Transfer rate: 1.97 [Kbytes/sec] received*</li>
</ul>
<p><strong>sync (with time.sleep):</strong>
Concurrency Level: 50</p>
<ul>
<li>*Time taken for tests: 78.615 seconds</li>
<li>Complete requests: 1000</li>
<li>Failed requests: 0</li>
<li>Total transferred: 128000 bytes</li>
<li>HTML transferred: 4000 bytes</li>
<li>Requests per second: 12.72 [#/sec] (mean)</li>
<li><strong>Time per request: 3930.751 [ms] (mean)</strong></li>
<li>Time per request: 78.615 [ms] (mean, across all concurrent requests)
Transfer rate: 1.59 [Kbytes/sec] received*</li>
</ul>
<p><strong>sync_async (time sleep with run_in_executor) :</strong> *Concurrency Level: 50</p>
<ul>
<li>Time taken for tests: 256.201 seconds</li>
<li>Complete requests: 1000</li>
<li>Failed requests: 0</li>
<li>Total transferred: 128000 bytes</li>
<li>HTML transferred: 4000 bytes</li>
<li>Requests per second: 3.90 [#/sec] (mean)</li>
<li><strong>Time per request: 12810.038 [ms] (mean)</strong></li>
<li>Time per request: 256.201 [ms] (mean, across all concurrent requests)
Transfer rate: 0.49 [Kbytes/sec] received*</li>
</ul>
<p><strong>sync_async_fastapi (time sleep with run_in threadpool):</strong>
*Concurrency Level: 50</p>
<ul>
<li>Time taken for tests: 78.877 seconds</li>
<li>Complete requests: 1000</li>
<li>Failed requests: 0</li>
<li>Total transferred: 128000 bytes</li>
<li>HTML transferred: 4000 bytes</li>
<li>Requests per second: 12.68 [#/sec] (mean)</li>
<li><strong>Time per request: 3943.841 [ms] (mean)</strong></li>
<li>Time per request: 78.877 [ms] (mean, across all concurrent requests)
Transfer rate: 1.58 [Kbytes/sec] received*</li>
</ul>
<p>In conclusion, I'm experiencing a surprising disparity in results; especially, when using <code>run_in_executor</code>, where I'm encountering significantly higher average times (12 seconds). I don't understand this outcome.</p>
<p>--- EDIT ---
<strong>After AKX answer.</strong></p>
<pre><code>Here the code working as expected:
import time
import asyncio
from anyio import to_thread
to_thread.current_default_thread_limiter().total_tokens = 200
loop = asyncio.get_running_loop()
executor = ThreadPoolExecutor(max_workers=100)
def sync_func() -> None:
time.sleep(3)
print("sync func")
async def sync_async_with_fastapi_thread() -> None:
await run_in_threadpool( time.sleep, 3)
print("sync async with fastapi thread")
async def sync_async_func() -> None:
await loop.run_in_executor(executor, time.sleep, 3)
async def async_func() -> Any:
await asyncio.sleep(3)
print("async func")
@app.get("/test/sync")
def test_sync() -> None:
sync_func()
print("sync")
@app.get("/test/async")
async def test_async() -> None:
await async_func()
print("async")
@app.get("/test/sync_async")
async def test_sync_async() -> None:
await sync_async_func()
print("sync async")
@app.get("/test/sync_async_fastapi")
async def test_sync_async_with_fastapi_thread() -> None:
await sync_async_with_fastapi_thread()
print("sync async with fastapi thread")
</code></pre>
|
<python><python-asyncio><fastapi><starlette><apachebench>
|
2024-02-04 09:46:30
| 3
| 475
|
Raphael Obadia
|
77,935,143
| 1,615,635
|
pip won't install requirements.txt because I'm missing a dependency, even though I already installed the dependency
|
<p>I used <code>pip install -r requirements.txt</code>, which I think is supposed to automatically install dependencies. I got <code>Exception: Building py-lmdb from source on Windows requires the "patch-ng" python module.</code>. Running <code>pip install patch-ng</code> just says <code>Requirement already satisfied: patch-ng in c:\users\daniel\appdata\local\programs\python\python312\lib\site-packages (1.17.4)</code>. In short, I can't use patch-ng because it's not installed, but I can't install it because it's already installed. I even tried uninstalling it and reinstalling it and this did not fix the issue.</p>
<p>I was worried that it may have somehow been doing that with different versions of Python. So I ran <code>py -0p</code> to see all versions installed, deleted all but the newest one, and then ran <code>py -0p</code> again, verifying that my computer is now haunted by previous python installations, which are still installed despite being in folders that no longer exist.</p>
|
<python><pip>
|
2024-02-04 09:06:28
| 0
| 7,481
|
DanielLC
|
77,935,013
| 199,554
|
How to convert Python time.monotonic to walltime accurately
|
<p>I have a time returned by <code>time.monotonic()</code> that I want to convert to walltime (i.e. <code>time.time()</code>, suitable for formatting with <code>time.strftime</code>). Is there a precise/atomic way to get the offset between both? I know I can just request both <code>time.monotonic()</code> and <code>time.time()</code> at an arbitrary point in time and calculate the difference, but this is inaccurate because there will be some delay between both calls (quite a lot since Python is slow, even more if there happen to be interrupts/rescheduling).</p>
|
<python><time>
|
2024-02-04 08:17:16
| 1
| 11,282
|
Wim
|
77,934,941
| 7,647,857
|
Algorithm for generating a non-overlapping rectangles
|
<p>I'm working on a problem where I need to generate a rectangle (let's call it 'box') within a larger rectangle ('field'), ensuring that the box doesn't overlap with a smaller rectangle ('window') contained within the field. For simplicity, <em>all the coordinates are integers greater or equal to 0</em>.</p>
<p>Here's the specific problem:</p>
<ul>
<li>I have the coordinates of the lower left corner (LL) and upper right
corner (UR) of both the field (light blue) and the window (white).</li>
<li>The randomly generated box (dark blue) needs to be completely within the field but should have <em>no common points</em> and <em>no overlaps</em> with the window.</li>
</ul>
<p><a href="https://i.sstatic.net/crTZU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/crTZU.png" alt="enter image description here" /></a></p>
<p>I've managed to solve the problem in the 2D case. However, my solution involved tedious consideration of all possible relations between the random coordinates of the box with respect to the field and the window.</p>
<p>Now, I'm looking for a more generalized and robust algorithm (if any) that can handle dimensions larger than 2. For instance, considering all possibilities in 3D would be extremely complex.</p>
<p>Could someone provide guidance or an algorithm for this generalized case? I'm implementing this in Python and using NumPy for numerical operations. Any insights or code snippets would be highly appreciated. Thank you!</p>
|
<python><algorithm><numpy>
|
2024-02-04 07:41:31
| 3
| 399
|
user7647857
|
77,934,898
| 11,152,487
|
Gstreamer image overlay not working based on seconds
|
<p>I have this pipeline in python that im using to add images over a mp4 video but the output video is same as input one and when i used <code>GST_DEBUG=3</code> i got</p>
<pre><code>0:00:00.046586337 11711 0x279d380 WARN gdkpixbufoverlay gstgdkpixbufoverlay.c:562:gst_gdk_pixbuf_overlay_start:<gdkpixbufoverlay0> no image location set, doing nothing
0:00:00.047215851 11711 0x2766360 FIXME videodecoder gstvideodecoder.c:1193:gst_video_decoder_drain_out:<pngdec0> Sub-class should implement drain()
0:00:00.047218585 11711 0x279d380 WARN basesrc gstbasesrc.c:3688:gst_base_src_start_complete:<filesrc0> pad not activated yet
0:00:00.055638677 11711 0x263df00 WARN qtdemux qtdemux_types.c:249:qtdemux_type_get: unknown QuickTime node type sgpd
0:00:00.055691634 11711 0x263df00 WARN qtdemux qtdemux_types.c:249:qtdemux_type_get: unknown QuickTime node type sbgp
0:00:00.055736798 11711 0x263df00 WARN qtdemux qtdemux.c:3121:qtdemux_parse_trex:<qtdemux0> failed to find fragment defaults for stream 1
0:00:00.055879661 11711 0x263df00 WARN qtdemux qtdemux.c:3121:qtdemux_parse_trex:<qtdemux0> failed to find fragment defaults for stream 2
0:00:00.057660594 11711 0x2766360 WARN videodecoder gstvideodecoder.c:2816:gst_video_decoder_chain:<pngdec0> Received buffer without a new-segment. Assuming timestamps start from 0.
0:00:00.058040800 11711 0x2766360 WARN video-info video-info.c:760:gst_video_info_to_caps: invalid matrix 0 for RGB format, using RGB
0:00:00.205414894 11711 0x2766400 WARN audio-resampler audio-resampler.c:274:convert_taps_gint16_c: can't find exact taps
0:00:01.263245091 11711 0x27661e0 FIXME basesink gstbasesink.c:3395:gst_base_sink_default_event:<filesink0> stream-start event without group-id. Consider implementing group-id handling in the upstream elements
0:00:01.264908606 11711 0x27661e0 FIXME aggregator gstaggregator.c:1410:gst_aggregator_aggregate_func:<mux> Subclass should call gst_aggregator_selected_samples() from its aggregate implementation.
DEBUG:root:Position: 2.7s / 53.0s
DEBUG:root:Position: 4.333333333s / 53.0s
</code></pre>
<p>Can anyone help me with this, I kinda new to GStreamer. This is my pipeline code and folder structure below</p>
<pre><code>def start_pipeline(video_file_path: str, output_file_path: str) -> None:
Gst.init(None)
# GStreamer pipeline for adding image overlay to a video
pipeline_string = (
f"filesrc location={video_file_path} ! decodebin name=dec "
f"dec. ! queue ! videoconvert ! x264enc ! queue ! mp4mux name=mux ! filesink location={output_file_path} "
f'multifilesrc location=images/image_%06d.png index=1 caps="image/png,framerate=(fraction)30/1" ! pngdec ! videoconvert ! gdkpixbufoverlay ! queue ! x264enc ! queue ! mux. '
f"dec. ! queue ! audioconvert ! audioresample ! voaacenc ! queue ! mux. "
)
pipeline = Gst.parse_launch(pipeline_string)
# Set up bus to receive messages
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", on_bus_message, GLib.MainLoop.new(None, False))
# Start the pipeline
pipeline.set_state(Gst.State.PLAYING)
# Run the main loop
loop = GLib.MainLoop()
# Add a timeout callback to check the progress every second
GLib.timeout_add_seconds(1, on_timeout, pipeline, loop)
loop.run()
loop.quit()
exit("Done")
</code></pre>
<pre><code>.
βββ images
β βββ image_000000.png
β βββ image_000001.png
β βββ image_000002.png
β βββ image_000003.png
β βββ image_000004.png
β βββ image_000005.png
β βββ image_000006.png
β βββ image_000007.png
β βββ image_000008.png
β βββ image_000009.png
βββ input.mp4
βββ requirements.txt
βββ stream.py
</code></pre>
|
<python><gstreamer><python-gstreamer>
|
2024-02-04 07:17:14
| 1
| 331
|
Sagar Yadav
|
77,934,699
| 5,267,751
|
How can I prevent the cell from being run if some conditions are satisfied in IPython shell?
|
<p>Assume I run the following code in IPython interactive shell:</p>
<pre class="lang-py prettyprint-override"><code>def pre_run_cell(info):
raise RuntimeError("please don't run")
get_ipython().events.register("pre_run_cell", pre_run_cell)
</code></pre>
<p>Then, each time before a cell is run, <code>pre_run_cell()</code> is called.</p>
<p>However, even though the <code>pre_run_cell()</code> raises an error, the cell is still executed:</p>
<pre class="lang-py prettyprint-override"><code>In [3]: 1
Error in callback <function pre_run_cell at 0x7f6eae1a6020> (for pre_run_cell), with arguments args (<ExecutionInfo object at 7f6eae4346d0, raw_cell="1" store_history=True silent=False shell_futures=True cell_id=None>,),kwargs {}:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[1], line 2, in pre_run_cell(info)
1 def pre_run_cell(info):
----> 2 raise RuntimeError("please don't run")
RuntimeError: please don't run
Out[3]: 1
</code></pre>
<p>How can I check some condition and possibly prevent the cell from running <strong>before running each cell</strong>? (or from within <code>pre_run_cell</code> hook)</p>
<p>I don't see anything in the <a href="https://ipython.readthedocs.io/en/stable/config/callbacks.html#pre-run-cell" rel="nofollow noreferrer">documentation of <code>pre_run_cell</code>)</a> with such a feature.</p>
|
<python><ipython>
|
2024-02-04 05:30:03
| 1
| 4,199
|
user202729
|
77,934,581
| 8,035,425
|
ppo_trainer generate and training step is extremely slow
|
<p>I am running the code exactly as shown in this Github repo: <a href="https://github.com/joeljang/RLPHF/blob/main/training/rlhf.py" rel="nofollow noreferrer">https://github.com/joeljang/RLPHF/blob/main/training/rlhf.py</a></p>
<p>For some reason, this code block:</p>
<pre><code>for steps, batch in tqdm(enumerate(ppo_trainer.dataloader)):
question_tensors = batch["input_ids"]
response_tensors = ppo_trainer.generate(
question_tensors,
return_prompt=False,
length_sampler=output_length_sampler,
**generation_kwargs,
)
batch["response"] = tokenizer.batch_decode(response_tensors, skip_special_tokens=True)
texts = [q + r for q, r in zip(batch["query"], batch["response"])]
input_ids = tokenizer(texts, max_length=script_args.max_length ,return_tensors="pt", padding=True, truncation=True).input_ids
reward_outputs = reward_model(input_ids = input_ids.to(reward_model.device))[0]
rewards = [torch.tensor(output[0].float()) - script_args.reward_baseline for output in reward_outputs]
v_min, v_max, v_mean = get_reward_stats(rewards)
# Run PPO step
stats = ppo_trainer.step(question_tensors, response_tensors, rewards)
</code></pre>
<p>Is taking an extremely long time, even though it's primarily just (1) generate, (2) obtain reward, and (3) backpropagate. I am using a single A100 80GB GPU and it is taking about ~15 minutes for a SINGLE training step. Is that how long it should take? I also checked that the model, question_tensors, response_tensors, and rewards are all on the gpu by using .device. I noticed that question_tensors is a list of tensors, rather than a tensor of tensors. But ppo_trainer.generate does not take in a tensor of tensors as an input datatype. This is with the Llama 7B model.</p>
<p>Why is it taking so long?</p>
|
<python><machine-learning><pytorch><large-language-model>
|
2024-02-04 04:17:33
| 0
| 539
|
dooder
|
77,934,327
| 1,398,834
|
HTMX and Flask form can only Submit once
|
<p>I am working on a simple web app using HTMX and Flask, I have my <code>app.py</code> set up as follows:</p>
<pre><code>from flask import Flask, render_template, request
app = Flask(__name__)
@app.route('/')
def index():
return render_template('index.html')
@app.route('/submit', methods=['POST'])
def submit():
print('submit clicked')
input_text = request.form.get('inputText')
return f'<p>You entered: {input_text}</p>'
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>I have my <code>templates/index.html</code> set up as follows:</p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>HTMX Demo</title>
<script src="https://unpkg.com/htmx.org@1.9.10"></script>
</head>
<body>
<form hx-post="/submit" hx-target="#output" hx-swap="outerHTML">
<div>
<label for="inputText">Enter Text:</label>
<input
type="text"
id="inputText"
name="inputText"
placeholder="Type something..."
/>
</div>
<button class="btn btn-default">Submit</button>
</form>
<div id="output">
<!-- The result will be displayed here -->
</div>
</body>
</html>
</code></pre>
<p>The goal is to be able to enter some text into the text box, hit the Submit button, and then have the text render in the output div.</p>
<p>However, this only works once. If I enter "test" and hit submit, it successfully renders the text in the output div. If I replace the form's text with "asasd" or something else and hit submit again, it does nothing. I don't even hit the <code>print('submit clicked')</code> line.</p>
<p>What am I doing wrong?</p>
|
<python><flask><htmx>
|
2024-02-04 01:25:33
| 2
| 2,097
|
FlameDra
|
77,934,324
| 1,646,928
|
Importing Library in Python does not work
|
<p>Using both Visual Studio 1.86.0 and Microsoft Visual StudioΒ Community 2022 the error is the same:</p>
<pre><code>Import "lxml" could not be resolvedPylancereportMissingImports
</code></pre>
<p>Trying this basic example from Android tutorial:</p>
<pre><code>from lxml import html
html_content = """
<html>
<body>
<div>
Test
</div>
</body>
</html>
"""
# Analisar o conteΓΊdo HTML
tree = html.fromstring(html_content)
xpath_expression = "//div/p"
result = tree.xpath(xpath_expression)
for element in result:
print(html.tostring(element, pretty_print=True))
</code></pre>
<p>Lxml Installation:</p>
<pre><code>pip install lxml
Requirement already satisfied: lxml in c:\users\base 6\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-pac
kages\python311\site-packages (5.1.0)
</code></pre>
<p>In terminal it prints for pip (as it is in tutotial):</p>
<pre><code>pip show lxml
Name: lxml
Version: 5.1.0
Summary: Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API.
Home-page: https://lxml.de/
Author-email: lxml-dev@lxml.de
License: BSD-3-Clause
Location: C:\Users\Base 6\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\
LocalCache\local-packages\Python311\site-packages
Requires:
Required-by:
PS C:\Users\Base 6\source\repos\Python> pip --version
pip 23.3.1 from C:\Users\Base 6\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\
LocalCache\local-packages\Python311\site-packages\pip (python 3.11)
</code></pre>
<p>Printing sys.path:</p>
<pre><code>import sys; print(sys.path)
</code></pre>
<p>Output:</p>
<pre><code>'C:\\Users\\Base 6\\source\\repos\\PythonTest\\PythonTest',
'C:\\Users\\Base 6\\source\\repos\\PythonTest\\PythonTest',
'C:\\Users\\Base 6\\AppData\\Local\\Programs\\Python\\Python312\\python312.zip',
'C:\\Users\\Base 6\\AppData\\Local\\Programs\\Python\\Python312\\DLLs',
'C:\\Users\\Base 6\\AppData\\Local\\Programs\\Python\\Python312\\Lib',
'C:\\Users\\Base 6\\AppData\\Local\\Programs\\Python\\Python312',
'C:\\Users\\Base 6\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages'] β
</code></pre>
|
<python><pip>
|
2024-02-04 01:22:03
| 0
| 529
|
JamesB
|
77,934,209
| 1,192,335
|
docker compose asyncio.sleep block statements before
|
<p>I've been beating myself to solve this problem that seems very weird. Consider this python code:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
async def main():
print("hello")
await asyncio.sleep(10)
print("world")
asyncio.run(main())
</code></pre>
<p>I'd like to run this as a docker compose service, so here's my Dockerfile.</p>
<pre><code>FROM python:3.12-slim
WORKDIR /app
COPY ./main.py /app
CMD ["python", "main.py"]
</code></pre>
<p>and docker-compose.yaml is simply:</p>
<pre class="lang-yaml prettyprint-override"><code>version: "3.9"
services:
dnsgen:
build: ./dnsgen
</code></pre>
<p>Now here's the issue if I run <code>docker compose up dnsgen --build --timestamps</code> the prints happen simultaneously after waiting 10 seconds:</p>
<pre><code>dnsgen-1 | 2024-02-04T01:16:25.341721172+01:00hello
dnsgen-1 | 2024-02-04T01:16:25.341784837+01:00world
</code></pre>
<p>This doesn't happen of course if I run <code>python main.py</code> on my machine or even when I run <code>docker compose run dnsgen</code>. I invite you to try this out and see if this problem persists, and please let me know what I'm doing wrong?</p>
|
<python><docker-compose><python-asyncio>
|
2024-02-04 00:18:13
| 2
| 951
|
elaich
|
77,934,099
| 2,905,108
|
Tileable object placement algorithm
|
<p>I've used noise algorithms like Perlin noise that produce textures that are tileable. I'm working on a tool for randomly placing objects on a canvas. The problem is the results do not tile i.e. if I take the image and repeat it on a grid, there are clear seams.</p>
<p>What I have so far is very basic:</p>
<pre><code>height = 1024
width = 1024
N = int(width * height / 4096)
np.random.seed(5000)
x_generated = np.random.uniform(0, width, N)
y_generated = np.random.uniform(0, height, N)
</code></pre>
<p>If I could use the perlin noise as a basis for this, I could probably get a tileable pattern, but I can't seem to figure out how to use it.</p>
<p>Which algorithm can achieve what I'm trying to do?</p>
<p>Here is an example of the object placement I'm talking about. Each square is what I mean by "object". It could be any shape and pattern placed on the canvas. You can see, at the edges, some of the squares go outside the bounds. If I were to repeat/tile this image in a rendering engine, there would be clear seams where the tiles meet and the squares are cutoff.</p>
<p>Perlin noise doesn't have this issue. You can tile it and there are no seams. I want to find a way to eliminate this "seam".</p>
<p>Maybe it's just a case of having an if statement check the bounds of each square when placing and if its out of the frame, don't place that object? It just doesn't feel super clean to do that.</p>
<p><a href="https://i.sstatic.net/3lVED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3lVED.png" alt="enter image description here" /></a></p>
|
<python><algorithm>
|
2024-02-03 23:19:48
| 1
| 487
|
Gordon13
|
77,934,004
| 823,859
|
Prevent pip in conda environment from using a global install
|
<p>I am trying to install a number of packages within a <code>conda</code> environment, using <code>pip</code>. I installed <code>pip</code> within the <code>conda</code> environment, as well.</p>
<p>However, a while ago I must have installed some of these packages globally. <code>pip</code> is using these global installations, and as a result it is not installing the packages in the <code>conda</code> environment and my script does not recognize the imports when I try to import the packages.</p>
<p>Further, I tried using <code>pip uninstall <package></code> outside of a <code>conda</code> environment to try to uninstall the global installation. It looked like it was successful, but then I had the same issue where it said the requirement was already satisfied in the global location.</p>
<p>Here is a snippet of the output, within the <code>conda</code> environment <code>streamlit0</code>:</p>
<pre><code>(streamlit0) 17:19:39 ~/Library/CloudStorage/Dropbox/.../Streamlit/theo
β β ΅ pip install streamlit openai llama-index nltk
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: streamlit in /Users/adamg/Library/Python/3.9/lib/python/site-packages (1.31.0)
Collecting openai
Downloading openai-1.11.0-py3-none-any.whl.metadata (18 kB)
Collecting llama-index
Downloading llama_index-0.9.43-py3-none-any.whl.metadata (8.4 kB)
Collecting nltk
Downloading nltk-3.8.1-py3-none-any.whl (1.5 MB)
ββββββββββββββββββββββββββββββββββββββββ 1.5/1.5 MB 9.8 MB/s eta 0:00:00
Requirement already satisfied: altair<6,>=4.0 in /Users/adamg/Library/Python/3.9/lib/python/site-packages (from streamlit) (5.2.0)
...
Requirement already satisfied: sniffio in /Users/adamg/Library/Python/3.9/lib/python/site-packages (from openai) (1.3.0)
...
Collecting aiosignal>=1.1.2 (from aiohttp<4.0.0,>=3.8.6->llama-index)
Downloading aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Requirement already satisfied: attrs>=17.3.0 in /Users/adamg/Library/Python/3.9/lib/python/site-packages (from aiohttp<4.0.0,>=3.8.6->llama-index) (23.1.0)
</code></pre>
<p>How can I 1) get rid of these global installs and 2) only install these packages within this <code>conda</code> environment?</p>
|
<python><pip><anaconda><conda>
|
2024-02-03 22:42:19
| 1
| 7,979
|
Adam_G
|
77,933,879
| 7,347,925
|
How to submit response for Google Invisible reCaptcha?
|
<p>I'm trying to log into website like this: <a href="https://google-invisible-captcha-m2.magento-demo.amasty.com/customer/account/create/" rel="nofollow noreferrer">https://google-invisible-captcha-m2.magento-demo.amasty.com/customer/account/create/</a></p>
<p>It is easy to get the site key info:</p>
<pre><code>
require([
'Amasty_InvisibleCaptcha/js/model/am-recaptcha',
], function (amRecaptchaModel) {
amRecaptchaModel.setConfig({
"formsToProtect": "form\u005Baction\u002A\u003D\u0022customer\u002Faccount\u002Fcreatepost\u0022\u005D,form\u005Baction\u002A\u003D\u0022customer\u002Faccount\u002FloginPost\u0022\u005D,form\u005Baction\u002A\u003D\u0022newsletter\u002Fsubscriber\u002Fnew\u0022\u005D,form\u005Baction\u002A\u003D\u0022contact\u002Findex\u002Fpost\u0022\u005D,form\u005Baction\u002A\u003D\u0022customer\u002Faccount\u002Fforgotpasswordpost\u0022\u005D,form\u005Baction\u002A\u003D\u0022review\u002Fproduct\u002Fpost\u0022\u005D,form\u005Baction\u002A\u003D\u0022customer\u002Faccount\u002Fresetpasswordpost\u0022\u005D,form\u005Baction\u002A\u003D\u0022checkout_payment_captcha\u0022\u005D",
"isEnabledOnPayments": "1",
"checkoutRecaptchaValidateUrl": "https://google-invisible-captcha-m2.magento-demo.amasty.com/default/amcapthca/checkout/validate/",
"invisibleCaptchaCustomForm": "-1",
"recaptchaConfig": {
"lang": "hl\u003Den",
"theme": "light",
"badge": "bottomleft",
"sitekey": "6Lck67wUAAAAAPsiz1Y59OrqrpzPcF_ydn40uZhJ",
"size": "invisible",
"isInvisible": true },
"reCaptchaErrorMessage": "Prove you are not a robot"
})
});
</code></pre>
<p>Then I use <code>2captcha</code> to get the answer_key successfully. But, I can't figure out where to post the answer key. Any ideas are welcome!</p>
|
<python><selenium-webdriver><web-scraping><recaptcha><invisible-recaptcha>
|
2024-02-03 21:53:16
| 0
| 1,039
|
zxdawn
|
77,933,809
| 313,768
|
Neato output for Mrecord produces significant overlap
|
<p>Given this demo graph, it renders (vaguely) well in the default engine, and awfully in Neato:</p>
<pre class="lang-py prettyprint-override"><code>import graphviz
a = [
[[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]],
[[0, 0], [0, 0], [0, 0], [0, 0], [0, 18.442211], [0, 1.5577889]],
]
e = [
[0, 7.8787879, 15.353535, 0, 0, 31.212121],
[0, 0, 0, 0, 0, 0],
[11.392405, 0, 0, 22.025316, 46.582278, 0],
]
f = [88.607595, 12.121212, 64.646465, 37.974684, 84.97551, 67.23009]
w = [45.555556, 0, 0, 60, 150, 100]
Cuntreat = (100, 250, 80, 200, 150, 130)
Ctreat = (650, 200)
Cfresh = 1
plants = range(len(Cuntreat))
treatments = range(len(Ctreat))
graph = graphviz.Digraph(
name='treatment_flow', format='svg', engine='neato',
graph_attr={
'rankdir': 'LR',
'overlap': 'false', 'splines': 'true',
},
)
graph.node(name='fresh', label='Freshwater')
graph.node(name='waste', label='Wastewater')
for plant in plants:
graph.node(
name=str(plant),
shape='Mrecord',
label=
'{'
'{'
'<fresh_in> fresh|'
'<untreat_in> untreated|'
'<treat_in> treated'
'}|'
r'Plant \N|'
'{'
'<waste_out> waste|'
'<untreat_out> untreated|'
+ '|'.join(
f'<treat_{treatment}_out> treatment {treatment}'
for treatment in treatments
) +
'}'
'}'
)
for i, a_slice in enumerate(a):
for j, a_row in enumerate(a_slice):
for treatment, (contam, flow) in enumerate(zip(Ctreat, a_row)):
if flow > 0:
graph.edge(
tail_name=f'{i}:treat_{treatment}_out',
head_name=f'{j}:treat_in',
label=f'{flow:.1f} ({contam*flow:.1f})',
)
for i, (e_row, contam) in enumerate(zip(e, Cuntreat)):
for j, flow in enumerate(e_row):
if flow > 0:
graph.edge(
tail_name=f'{i}:untreat_out',
head_name=f'{j}:untreat_in',
label=f'{flow:.1f} ({contam*flow:.1f})',
)
for j, flow in enumerate(f):
if flow > 0:
graph.edge(
tail_name='fresh',
head_name=f'{j}:fresh_in',
label=f'{flow:.1f} ({Cfresh*flow:.1f})',
)
for i, (flow, contam) in enumerate(zip(w, Cuntreat)):
if flow > 0:
graph.edge(
tail_name=f'{i}:waste_out',
head_name='waste',
label=f'{flow:.1f} ({contam*flow:.1f})',
)
graph.view()
</code></pre>
<p><a href="https://i.sstatic.net/baxyn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/baxyn.png" alt="neato garbageheap" /></a></p>
<p>with output</p>
<pre><code>digraph treatment_flow {
graph [overlap=false rankdir=LR splines=true]
fresh [label=Freshwater]
waste [label=Wastewater]
0 [label="{{<fresh_in> fresh|<untreat_in> untreated|<treat_in> treated}|Plant \N|{<waste_out> waste|<untreat_out> untreated|<treat_0_out> treatment 0|<treat_1_out> treatment 1}}" shape=Mrecord]
1 [label="{{<fresh_in> fresh|<untreat_in> untreated|<treat_in> treated}|Plant \N|{<waste_out> waste|<untreat_out> untreated|<treat_0_out> treatment 0|<treat_1_out> treatment 1}}" shape=Mrecord]
2 [label="{{<fresh_in> fresh|<untreat_in> untreated|<treat_in> treated}|Plant \N|{<waste_out> waste|<untreat_out> untreated|<treat_0_out> treatment 0|<treat_1_out> treatment 1}}" shape=Mrecord]
3 [label="{{<fresh_in> fresh|<untreat_in> untreated|<treat_in> treated}|Plant \N|{<waste_out> waste|<untreat_out> untreated|<treat_0_out> treatment 0|<treat_1_out> treatment 1}}" shape=Mrecord]
4 [label="{{<fresh_in> fresh|<untreat_in> untreated|<treat_in> treated}|Plant \N|{<waste_out> waste|<untreat_out> untreated|<treat_0_out> treatment 0|<treat_1_out> treatment 1}}" shape=Mrecord]
5 [label="{{<fresh_in> fresh|<untreat_in> untreated|<treat_in> treated}|Plant \N|{<waste_out> waste|<untreat_out> untreated|<treat_0_out> treatment 0|<treat_1_out> treatment 1}}" shape=Mrecord]
1:treat_1_out -> 4:treat_in [label="18.4 (3688.4)"]
1:treat_1_out -> 5:treat_in [label="1.6 (311.6)"]
0:untreat_out -> 1:untreat_in [label="7.9 (787.9)"]
0:untreat_out -> 2:untreat_in [label="15.4 (1535.4)"]
0:untreat_out -> 5:untreat_in [label="31.2 (3121.2)"]
2:untreat_out -> 0:untreat_in [label="11.4 (911.4)"]
2:untreat_out -> 3:untreat_in [label="22.0 (1762.0)"]
2:untreat_out -> 4:untreat_in [label="46.6 (3726.6)"]
fresh -> 0:fresh_in [label="88.6 (88.6)"]
fresh -> 1:fresh_in [label="12.1 (12.1)"]
fresh -> 2:fresh_in [label="64.6 (64.6)"]
fresh -> 3:fresh_in [label="38.0 (38.0)"]
fresh -> 4:fresh_in [label="85.0 (85.0)"]
fresh -> 5:fresh_in [label="67.2 (67.2)"]
0:waste_out -> waste [label="45.6 (4555.6)"]
3:waste_out -> waste [label="60.0 (12000.0)"]
4:waste_out -> waste [label="150.0 (22500.0)"]
5:waste_out -> waste [label="100.0 (13000.0)"]
}
</code></pre>
<p>Among the specific problems I need to fix:</p>
<ul>
<li>Edge heads and tails should not enter ports at nonsensical directions</li>
<li>At least some effort should be made to avoid edge-node overlap</li>
<li>Node placement needn't be as it's shown here. For example, moving plant 3 to the right would probably help avoid overlap.</li>
</ul>
<p>I'm sure there isn't a magic bullet for all of these, but surely Neato should be able to do better?</p>
|
<python><graphviz><neato>
|
2024-02-03 21:26:40
| 3
| 16,660
|
Reinderien
|
77,933,780
| 10,574,250
|
xlsxwriter .add_table() set columns works for one dataframe but not the other
|
<p>I am trying to add a data table in Excel using python library xlsxwriter</p>
<p>My Dataframe named <code>whole</code> has 26 columns (exactly the letters in the alphabet) and so I am trying to set them using this code:</p>
<pre><code>workbook = xlsxwriter.Workbook(f'workbook.xlsx', {"nan_inf_to_errors": True})
worksheet = workbook.add_worksheet("Column Setter")
worksheet.add_table(f"A1:{chr(ord('@')+len(whole.columns))}{len(whole)}", {'data':
whole.values.tolist(),
'banded_columns': True, 'banded_rows': False, 'header_row': True,
'columns': [{'header': col} for col in whole.columns.tolist()]})
worksheet.autofit()
</code></pre>
<p>However the output seems to set all the column names to Column1 etc.</p>
<p><a href="https://i.sstatic.net/y213P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y213P.png" alt="enter image description here" /></a></p>
<p>This code seems to work perfectly for another dataframe with 3 columns so I'm struggling to understand why this doesn't work. Any answers would be very helpful.</p>
<p>EDIT</p>
<p>I have changed the code to match jmcnamara cpde. It looks as such:</p>
<pre><code>workbook = xlsxwriter.Workbook(f'workbook.xlsx', {"nan_inf_to_errors": True})
worksheet = workbook.add_worksheet("Column Setter")
(max_row, max_col) = whole.shape
worksheet.add_table(0, 0, max_row, max_col - 1,
{
'data': whole.values,
'banded_columns': True,
'banded_rows': False,
'header_row': True,
}
)
worksheet.autofit()
</code></pre>
<p>However this doesn't seem to write the data in at all and just appears with 7 of 26 columns populated. it also doesn't seem to have created a table at all. Please see below.</p>
<p><a href="https://i.sstatic.net/ZIIig.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZIIig.png" alt="enter image description here" /></a></p>
|
<python><xlsxwriter>
|
2024-02-03 21:19:01
| 1
| 1,555
|
geds133
|
77,933,542
| 13,783,624
|
Running one jupyter notebook from another and passing arguments with magic commands
|
<p>I am unable to pass arguments and or access variables from one jupyter notebook to another:</p>
<h1>using %run with args</h1>
<p>First issue I am having is not being able to pass arguments from one notebook to another when using the %run magic command. <a href="https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-run" rel="nofollow noreferrer">magic command docs</a>. Here is the code from the notebooks: <br>
Notebook 1:<br></p>
<pre><code>import pandas as pd
import datetime
import mplfinance as mpf
import numpy as np
import os
from IPython.display import display, HTML
import random
import warnings
display(HTML("<style>.container { width:95% !important; }</style>"))
pd.set_option('display.width', 1000)
pd.set_option('display.max_columns', 500)
pd.set_option('display.expand_frame_repr', False)
pd.options.mode.chained_assignment = None
warnings.filterwarnings('ignore')
%run "notebook2.ipynb" 1 2
</code></pre>
<p>Here is the code in Notebook 2:<br></p>
<pre><code>import pandas as pd
import datetime
import mplfinance as mpf
import numpy as np
import os
import sys
from IPython.display import display, HTML
import random
import warnings
display(HTML("<style>.container { width:95% !important; }</style>"))
pd.set_option('display.width', 1000)
pd.set_option('display.max_columns', 500)
pd.set_option('display.expand_frame_repr', False)
pd.options.mode.chained_assignment = None
warnings.filterwarnings('ignore')
print(sys.argv)
</code></pre>
<p>The output I get doesnt contain either variables that I am trying to pass in. Here is the output with my name redacted <code>['C:\\Users\\Name\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\ipykernel_launcher.py', '-f', 'C:\\Users\\Name\\AppData\\Roaming\\jupyter\\runtime\\kernel-dc8af21c-af3f-488f-bfef-10c7b8e8f8ea.json']</code></p>
<p>Here is another stackoverflow answer I tried to follow with no luck <a href="https://stackoverflow.com/questions/14409167/how-to-pass-a-variable-to-magic-%C2%B4run%C2%B4-function-in-ipython">Link</a></p>
<h1>using %store</h1>
<p>So trying this another route I decided to just store the variables / dataframes I wanted to pass in from notebook one to notebook 2. <a href="https://stackoverflow.com/questions/31621414/share-data-between-ipython-notebooks">Here is an example of that with an answer on stackoverflow</a>, <a href="https://stackoverflow.com/questions/35935670/share-variables-between-different-jupyter-notebooks">And another</a>. Yet, when I try to do it I have no luck. <br>
Code in Notebook 1:<br></p>
<pre><code>%store df
</code></pre>
<p>output of that cell <code>Stored 'df' (DataFrame)</code><br>
And in another cell <br></p>
<pre><code>var = 1
%store var
</code></pre>
<p>output of that cell <code>Stored 'var' (int)</code>
<br>
Now on to Notebook 2:<br></p>
<pre><code>%store -r df
%store -r var
</code></pre>
<p>Error Message <code>TypeError: 'PickleShareDB' object is not subscriptable</code> <br></p>
<p>I am not sure what the issue is, here is my current jupyter notebook setup and versioning. I have tried to restart Kernels. I am also using the same Kernel. These notebooks are in the same directory.<br>
Current Jupyter notebook versioning:<br>
<a href="https://i.sstatic.net/bcHiZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bcHiZ.png" alt="enter image description here" /></a></p>
|
<python><jupyter-notebook><jupyter>
|
2024-02-03 20:02:37
| 1
| 331
|
Duke3e33
|
77,933,490
| 1,091,935
|
How to validate custom encoded data using @field_validator in Pydatic?
|
<p>I am trying to use a custom encoder and decoder in Pydantic (V2). My goal is to convert a reference to another class into its name instead of encoding the whole class and all its children, and resolve it back to the class instance when the JSON is decoded. Example:</p>
<pre><code>from pydantic import BaseModel, field_validator, ValidationInfo
class B(BaseModel):
id: str
value: int
class A(BaseModel):
id: str
ref: B
class Config:
json_encoders = {
B: lambda t: f'{t.id}',
}
@classmethod
@field_validator("ref", mode="before")
def map_ref(cls, v: str, info: ValidationInfo) -> B:
return Bs[v]
b1 = B(id="B1", value=1)
b2 = B(id="B2", value=2)
Bs = { "B1" : b1, "B2" : b2}
a1 = A(id="A1", ref=b1)
a1d = a1.model_dump_json()
print("JSON: ", a1d)
a1r = A.model_validate_json(a1d)
print("Recon: ", a1r)
</code></pre>
<p>The output is:</p>
<pre><code>JSON: {"id":"A1","ref":"B1"}
Traceback (most recent call last):
File "d:\Projects\2022 NeoCity MeDT\medtproofofconcept\AppServer\custom_deserialize_test.py", line 29, in <module>
a1r = A.model_validate_json(a1d)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\2022 NeoCity MeDT\medtproofofconcept\.venv\Lib\site-packages\pydantic\main.py", line 531, in model_validate_json
return cls.__pydantic_validator__.validate_json(json_data, strict=strict, context=context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for A
ref
Input should be an object [type=model_type, input_value='B1', input_type=str]
For further information visit https://errors.pydantic.dev/2.3/v/model_type
</code></pre>
<p>So the encoding works, instead of the whole model it only prints the <code>id</code>. But the decoding throws an error, and when I debug it my custom validator is never called, even though it is marked to run <code>before</code> internal validation.</p>
<p>What am I doing wrong? Any hints welcome!</p>
|
<python><validation><types><pydantic>
|
2024-02-03 19:44:05
| 1
| 518
|
DirkR
|
77,933,460
| 5,790,653
|
python telegram bot conversationbot horizontal buttons instead of vertical
|
<p>In <a href="https://github.com/python-telegram-bot/python-telegram-bot/blob/master/examples/conversationbot.py" rel="nofollow noreferrer">this sample</a>, it show buttons like this:</p>
<pre><code>Number1ToChoose Number2ToChoose Number3ToChoose Number4ToChoose Number5ToChoose
</code></pre>
<p>While I'm going to see this:</p>
<pre><code>Number1ToChoose
Number2ToChoose
Number3ToChoose
Number4ToChoose
Number5ToChoose
</code></pre>
<p>I didn't find a way to do this up to now.</p>
|
<python><telegram><telegram-bot><python-telegram-bot>
|
2024-02-03 19:32:48
| 1
| 4,175
|
Saeed
|
77,933,446
| 4,408,275
|
Independently set Label width of Frames in multi-frame tkinter App
|
<p>I want to create a GUI, that has two labels at the bottom of the GUI for status (<code>Status 0</code> and <code>Status 1</code>) and the the upper part of the GUI, that has all the GUI stuff of the main application.</p>
<p>Basically the GUI layout should look like this:</p>
<pre><code>βββββ³βββββββββββββββββββββββββββββ³ββββ
β @ β hello world β β³ β
β£ββββ»βββββββββββββββββββββββββββββ»ββββ«
β β <- Frame 0
β MAIN β <- Frame 0
β β <- Frame 0
β£βββββββββββββββββββββββββββββββββββββ«
β OTHER β <- Frame 1
β£ββββββββββββββββββββββ³βββββββββββββββ« <- Frame 1
β Status 0 β Status 1 β <- Frame 1
βββββββββββββββββββββββ»βββββββββββββββ
</code></pre>
<p>However, my code produces something like this</p>
<p><a href="https://i.sstatic.net/Cy63S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cy63S.png" alt="example" /></a></p>
<p>or even worse, on resize something like this</p>
<p><a href="https://i.sstatic.net/cmmWh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cmmWh.png" alt="resized example" /></a></p>
<p>My question therefore is, how can I set the <code>Status 0</code> and <code>Status 1</code> widths independently to something like</p>
<ul>
<li><code>Status 0</code> shall be 2/3 of the current window width</li>
<li><code>Status 1</code> shall be 1/3 of the current window width</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
class MyApp(tk.Tk):
def __init__(self):
super().__init__()
self.frame0 = tk.Frame(self, background="green")
self.title('hello world')
self.frame0.pack(fill=tk.X)
self.frame0_label0 = tk.Label(self.frame0, text='MAIN', border=10)
self.frame0_label0.grid(row=0, column=0)
self.frame1 = tk.Frame(self, background="blue")
self.frame1.pack(fill=tk.X)
self.frame1_label0 = tk.Label(self.frame1, text='OTHER', border=10)
self.frame1_label0.grid(row=0, column=0)
self.var0 = tk.StringVar()
self.var0.set("S0")
self.var1 = tk.StringVar()
self.var1.set("S1")
sbar0 = tk.Label(self.frame1, textvariable=self.var0, relief=tk.SUNKEN, anchor="w")
sbar1 = tk.Label(self.frame1, textvariable=self.var1, relief=tk.SUNKEN, anchor="w")
sbar0.grid(row=1, column=0)
sbar1.grid(row=1, column=1)
def main():
MyApp().mainloop()
if __name__ == "__main__":
main()
</code></pre>
|
<python><user-interface><tkinter>
|
2024-02-03 19:29:09
| 1
| 1,419
|
user69453
|
77,933,376
| 6,837,086
|
TypeError: 'type' object is not subscriptable in Airflow timetable overridden serialize method
|
<p>I am trying to implement a parameterized timetable following this <a href="https://airflow.apache.org/docs/apache-airflow/stable/howto/timetable.html#parameterized-timetables" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow/stable/howto/timetable.html#parameterized-timetables</a> and I am getting an error when Airflow webserver is trying to serialize the DAG. Stack trace:</p>
<pre><code>Traceback (most recent call last):
File "/opt/airflow/plugins/fiscal_calendar_plugin.py", line 74, in <module> class EveryFiscalPeriod(Timetable):
File "/opt/airflow/plugins/fiscal_calendar_plugin.py", line 127, in EveryFiscalPeriod def serialize(self) -> dict:
TypeError: 'type' object is not subscriptable
</code></pre>
<p>The error is happening in the serialize method below. Per the guide above I need to override the serialize method to be able to pass the hour and minute parameters to the timetable. The original serialize method is this one: <a href="https://apache.googlesource.com/airflow/+/HEAD/airflow/timetables/base.py#170" rel="nofollow noreferrer">https://apache.googlesource.com/airflow/+/HEAD/airflow/timetables/base.py#170</a></p>
<p>This is the timetable code in fiscal_calendar_plugin.py:</p>
<pre><code>class EveryFiscalPeriod(Timetable):
def __init__(self, hour: Time, minute: Time) -> None:
self._hour = hour
self._minute = minute
def serialize(self) -> dict[str, Any]:
return {"hour": self._hour.isoformat(), "minute": self._minute.isoformat()}
@classmethod
def deserialize(cls, value: dict[str, Any]) -> Timetable:
return cls(Time.fromisoformat(value["hour"]), Time.fromisoformat(value["minute"]))
def next_dagrun_info(
self,
*,
last_automated_data_interval: Optional[DataInterval],
restriction: TimeRestriction,
) -> Optional[DagRunInfo]:
delta = timedelta(days=28)
if last_automated_data_interval is not None: # There was a previous run on the regular schedule.
next_start = last_automated_data_interval.end
next_end = last_automated_data_interval.end + delta
else: # This is the first ever run on the regular schedule.
restriction_earliest = restriction.earliest
next_start = restriction_earliest - delta
if next_start is None: # No start_date. Don't schedule.
return None
next_end = restriction_earliest
return DagRunInfo(
data_interval=DataInterval(start=next_start, end=next_end),
run_after=DateTime.combine(next_end.date(), self.hour, self.minute).replace(tzinfo=UTC),
)
</code></pre>
<p>This is my DAG code where I use the <code>EveryFiscalPeriod</code> class. The timetable itself without parameters works, but it breaks when I make it parameterized.</p>
<pre><code>with DAG(
catchup=False,
.........
),
max_active_runs=1,
schedule=EveryFiscalPeriod(hour=Time(15), minute=Time(30)),
)......
</code></pre>
<p>Any help will be much appreciated.</p>
|
<python><airflow><timetable>
|
2024-02-03 19:10:34
| 2
| 307
|
Luis Lema
|
77,933,206
| 3,179,179
|
pyproject.toml dynamic version dependency
|
<p>I have a Python package (aabbcc) in namespace form consisting of two sub-packages (aabbcc-core, aabbcc-aws) that I am converting over to pyproject.toml format for packaging. I would like them to be versioned together and I got that to work using dynamic version reference pointing to a text file VERSION - below is my pyproject.toml for the aabbcc-aws subpackage:</p>
<pre><code>[build-system]
requires = ["setuptools>=64.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "aabbcc-aws"
dynamic = ["version"]
dependencies = ["boto3","aabbcc-core"]
requires-python = ">=3.12"
[tool.setuptools.dynamic]
version = {file = "../VERSION"}
</code></pre>
<p>What I would like to be able to do is for aabbcc-aws dependencies to point to aabbcc-core package of the same version. Essentially I want to figure out making dependencies dynamic and have the resulting dependency for aabbcc-aws be:</p>
<pre><code>[project]
dependencies = ["boto3","aabbcc-core==VERSION"]
</code></pre>
<p>And similarly to have the main package be:</p>
<pre><code>[project]
dependencies = ["aabbcc-core==VERSION"]
[project.optional-dependencies]
aws = ["aabbcc-aws==VERSION"]
</code></pre>
<p>Is there any way to plug in this VERSION string dynamically into dependencies and optional-dependencies sections?</p>
|
<python><setuptools><pyproject.toml>
|
2024-02-03 18:13:17
| 1
| 441
|
Aleksandr Krymskiy
|
77,932,948
| 7,408,848
|
purpose of variable annotation in python
|
<p>Trying to understand the rationale for using annotations in python. I have come across code that looks like this</p>
<pre><code>import logging
logger: logging.Logger = logging.getLogger(__name__)
def example(a,b):
return a+b
if __name__ == '__main__':
example()
</code></pre>
<p>vs</p>
<pre><code>logger = logging.getLogger(__name__)
</code></pre>
<p>and I am confused as to why do this. I have read the documentation that states the purpose of the <code>:</code> is to annotate and notify the receiving variable what type it is going to be. but shouldn't setting it with <code>=</code> do the same thing?</p>
<p>Is there a performance reason for this? a cleaner way to write code? what is the function of annotating a variable like this?</p>
<p>edit:</p>
<p>I understand the purpose of annotation when requesting inputs as in the following code as this prevents errors in regards to typing from input error:</p>
<pre><code>def carl(word: str = None):
return(word + "is a pickle:)
</code></pre>
<p>edit: changed example due to comments</p>
|
<python><python-typing>
|
2024-02-03 16:49:44
| 0
| 1,111
|
Hojo.Timberwolf
|
77,932,884
| 14,824,108
|
Histogram without vertical separation lines and with custom contour width
|
<p>I'm wondering how to obtain the following result in <code>plotly</code> for histograms, i.e. no vertical lines displayed and a custom line width for contour</p>
<p><a href="https://i.sstatic.net/CmhSE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CmhSE.png" alt="enter image description here" /></a></p>
<p>the image is taken from the following <a href="https://arxiv.org/abs/2312.03687" rel="nofollow noreferrer">paper</a>. It seems that the figures were indeed produced using <code>plotly</code>.</p>
<p>EDIT:</p>
<p>I have found out that in <code>matplotlib</code> it is pretty straightforward doing something similar, like illustrated in the following example:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
plt.rcParams['font.size'] = 10
plt.rcParams['figure.dpi'] = 600
x = np.random.randn(400)
plt.hist(x, histtype='step', linewidth=5,color='green')
plt.hist(x, alpha=0.5, color='green')
</code></pre>
<p>but so far I haven't found an equivalent of <code>histtype='step'</code> in <code>plotly</code>.</p>
|
<python><plotly><histogram>
|
2024-02-03 16:28:32
| 1
| 676
|
James Arten
|
77,932,861
| 630,517
|
pipenv: Permission denied for ~/.local/share/virtualenvs
|
<p>I'm working with a codebase that uses pipenv to install dependencies as part of its setup. However, I'm getting the following error when the <code>pipenv install --deploy --dev</code> command is run:</p>
<p><code>PermissionError: [Errno 13] Permission denied: '/Users/XXX/.local/share/virtualenvs'</code></p>
<p>I've checked the <code>.local/share</code> folder and it does not contain a <code>virtualenvs</code> folder. Also, the entire <code>.local</code> folder is owned by <code>root</code>.</p>
<p>Tool versions:</p>
<ul>
<li>python - 3.11.7</li>
<li>pip - 24.0</li>
<li>pipenv - 2023.12.0</li>
</ul>
<p>I tried sudo creating a 'virtualenvs' folder in there, but it didn't help... still permission denied. Very reluctant to <code>chown</code> a directory that I don't understand the security scope of.</p>
|
<python><macos><pipenv>
|
2024-02-03 16:22:34
| 2
| 1,129
|
Joshua Sullivan
|
77,932,770
| 1,612,986
|
scipy optimize.root_scalar() with multiple arguments each being a list
|
<pre><code>def f(m_l, m_B, y):
value = -1.0 + np.sum(m_l*np.exp(-m_B*y))
return value
def df(m_l, m_B, y):
value = -np.sum(m_B*m_l*np.exp(-m_B*y))
return value
</code></pre>
<p>where m_l and m_b is a numpy array. I use give the following input</p>
<pre><code> m_l = np.array([0.0036132256153053369,0.95110068028445593])
m_B = np.array([0.48884897299905006,0.95605658765269563])
guess=-0.048557088449677460
root = optimize.root_scalar(f, guess, fprime=df, args=(m_l,m_B),method='newton',rtol=1e-9,maxiter=1000)
</code></pre>
<p>I get the following error:</p>
<pre><code> TypeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_15644/3699083294.py in <module>
4 print(f(m_l, m_B, y))
5 guess=-0.048557088449677460
----> 6 root = optimize.root_scalar(f, guess, fprime=df, args=
(m_l,m_B),method='newton',rtol=1e-9,maxiter=1000)
TypeError: root_scalar() got multiple values for argument 'args'
</code></pre>
<p>I guess the error is happening because the input arguments are not passed in correctly. Can someone please help out with what should be the correct way to pass the argument. Thanks in advance.</p>
|
<python><scipy><root-finding>
|
2024-02-03 15:55:15
| 1
| 1,415
|
user1612986
|
77,932,526
| 4,788,546
|
Setting GStreamer custom logging from Python triggers Segmentation fault
|
<p><strong>Notes</strong>:</p>
<ul>
<li><p>That there's already an (almost) identical question, but I wanted to give more context (and marked that one as a duplicate)</p>
</li>
<li><p>I'll be comparing log levels (probably using terms like: <em>lower</em>, <em>higher</em>, <em>smaller</em>, <em>greater</em>, <em>decrease</em>, <em>increase</em>, <em><strong><</strong></em>, <em><strong>></strong></em>, ...). That <strong>applies to their verbosity</strong> (amount of produced output): <code>ERROR < WARNING < ... < INFO < ... < TRACE < ...</code>, and <strong>NOT to their criticality</strong> (impact)</p>
</li>
</ul>
<p>Working on a task to make logging on a complex <em>GStreamer</em> application consistent / uniform. One part is having <em>GStreamer</em> elements logs (generated by <em>GST_*</em> macros (expanding to <em>GST_CAT_LEVEL_LOG</em>, which in turn expands to a <em>gst_debug_log</em> call)) <strong>in <em>Python</em></strong>.<br>
That should be possible using <a href="https://gstreamer.freedesktop.org/documentation/gstreamer/gstinfo.html?gi-language=python#gst_debug_add_log_function" rel="nofollow noreferrer">[FreeDesktop.GStreamer]: GstInfo - Gst.debug_add_log_function</a> (noticed the <em>G_GNUC_NO_INSTRUMENT</em> remark, but don't think there's anything that can be done at <em>Python</em> level).</p>
<p>Things generally work fine, although I fear that if left to run for long times (and higher log levels) some problems (memory consumption increase, even <em>SegFault</em>s, ...) may arise (but I'm not there yet).<br>
But some elements run into serious problems.</p>
<p><strong>Environment</strong></p>
<p><strong><em>Nvidia</em>'s <em>DeepStream 6.2</em></strong> container (<em>Ubuntu 20.04.5 LTS</em> - debugged mostly on <em>Triton</em> flavor, but reproducible on the others):</p>
<ul>
<li><p><em>GStreamer 1.16.3</em></p>
</li>
<li><p><em>Python 3.8.10</em></p>
</li>
<li><p><em>PyGObject 3.36.0</em></p>
</li>
<li><p><em>NVV4L2</em> - version seems a bit off:</p>
<blockquote>
<pre><code>gst-inspect-1.0 video4linux2 | grep ersion
Version 1.16.3
gst-inspect-1.0 nvvideo4linux2 | grep ersion
Version 1.14.0
</code></pre>
</blockquote>
</li>
</ul>
<p>Narrowed the problem down to a <em>MVCE</em>: to have a pipeline that is short (both textually and also structurally (<em>e.g.</em>: not using bins - which have a short name but can be complex, containing a bunch of other elements)).</p>
<p><em>code00.py</em>:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
import sys
import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import GLib, Gst
pipeline_string = "videotestsrc pattern=18 ! nvvideoconvert ! nvv4l2h264enc ! fakesink sync=0"
#pipeline_string = "filesrc location=\"/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4\" ! qtdemux ! h264parse ! nvv4l2decoder ! nvvideoconvert ! xvimagesink sync=0"
def log_function(
category: Gst.DebugCategory,
level: Gst.DebugLevel,
file: str,
function: str,
line: int,
obj: Gst.Object,
msg: Gst.DebugMessage,
# user_data: Any = None,
):
txt = (
f"{Gst.debug_level_get_name(level)} {category.name} - {file}:{line}:{function}:",
f"<{obj.get_name()} ({getattr(obj.__class__, '__name__')})>" if obj else "<None> ",
f"{msg.get()}"
)
print("".join(txt))
def run_pipeline():
pipeline = Gst.parse_launch(pipeline_string)
loop = GLib.MainLoop()
bus = pipeline.get_bus()
#bus.add_signal_watch()
pipeline.set_state(Gst.State.PLAYING)
start_time = time.time()
try:
loop.run()
except KeyboardInterrupt:
print("<Ctrl + C> pressed.")
except:
print(f"Funky exception caught: {sys.exc_info()[:2]}")
finally:
pipeline.set_state(Gst.State.NULL)
loop.quit()
print(f"--- Ran for {time.time() - start_time:.3f} seconds")
def main(*argv):
custom_log = bool(argv) # Custom log if any argument passed
print(f"---\nCustom log function: {custom_log}\nRunning pipeline:\n gst-launch-1.0 -e {pipeline_string}\n---\n")
Gst.init(None)
if custom_log:
Gst.debug_remove_log_function()
Gst.debug_add_log_function(log_function)
run_pipeline()
if __name__ == "__main__":
print("Python {:s} {:03d}bit on {:s}\n".format(" ".join(elem.strip() for elem in sys.version.split("\n")),
64 if sys.maxsize > 0x100000000 else 32, sys.platform))
rc = main(*sys.argv[1:])
print("\nDone.\n")
sys.exit(rc)
</code></pre>
<p>Some may argue that code is not complete (no callback to monitor messages on the bus), but I wanted to keep it simple (and I don't care if when playing a video, at the end pipeline doesn't quit - I interrupt it by pressing <kbd>Ctrl + C</kbd> anyway). Just to be explicit, I did try with a complete version, but same results.</p>
<p><strong>Output</strong>:</p>
<blockquote>
<pre><code>[root@cfati-5510-0:~/Work/Dev/StackExchange/StackOverflow/q077932526]> . ~/sopr.sh
### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ###
[064bit prompt]>
[064bit prompt]> ls
code00.py
[064bit prompt]>
[064bit prompt]> # ---------- DEFAULT LOG ----------
[064bit prompt]> GST_DEBUG=2 python ./code00.py
Python 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] 064bit on linux
---
Custom log function: False
Running pipeline:
gst-launch-1.0 -e videotestsrc pattern=18 ! nvvideoconvert ! nvv4l2h264enc ! fakesink sync=0
---
0:00:00.038000133 2604541 0x1385550 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2h264enc0:sink> Unable to try format: Unknown error -1
0:00:00.038029358 2604541 0x1385550 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2h264enc0:sink> Could not probe minimum capture size for pixelformat YM12
0:00:00.038046357 2604541 0x1385550 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2h264enc0:sink> Unable to try format: Unknown error -1
0:00:00.038061061 2604541 0x1385550 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2h264enc0:sink> Could not probe maximum capture size for pixelformat YM12
0:00:00.038080317 2604541 0x1385550 WARN v4l2 gstv4l2object.c:2395:gst_v4l2_object_add_interlace_mode:0x13873f0 Failed to determine interlace mode
0:00:00.038109676 2604541 0x1385550 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2h264enc0:sink> Unable to try format: Unknown error -1
0:00:00.038123283 2604541 0x1385550 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2h264enc0:sink> Could not probe minimum capture size for pixelformat NM12
0:00:00.038134452 2604541 0x1385550 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2h264enc0:sink> Unable to try format: Unknown error -1
0:00:00.038148304 2604541 0x1385550 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2h264enc0:sink> Could not probe maximum capture size for pixelformat NM12
0:00:00.038161000 2604541 0x1385550 WARN v4l2 gstv4l2object.c:2395:gst_v4l2_object_add_interlace_mode:0x13873f0 Failed to determine interlace mode
0:00:00.038243374 2604541 0x1385550 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2h264enc0:src> Unable to try format: Unknown error -1
0:00:00.038262117 2604541 0x1385550 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2h264enc0:src> Could not probe minimum capture size for pixelformat H264
0:00:00.038276388 2604541 0x1385550 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2h264enc0:src> Unable to try format: Unknown error -1
0:00:00.038289047 2604541 0x1385550 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2h264enc0:src> Could not probe maximum capture size for pixelformat H264
0:00:00.098620457 2604541 0x107bde0 WARN v4l2bufferpool gstv4l2bufferpool.c:1082:gst_v4l2_buffer_pool_start:<nvv4l2h264enc0:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:00.103610735 2604541 0x1399060 WARN v4l2bufferpool gstv4l2bufferpool.c:1533:gst_v4l2_buffer_pool_dqbuf:<nvv4l2h264enc0:pool:src> Driver should never set v4l2_buffer.field to ANY
^C<Ctrl + C> pressed.
--- Ran for 4.508 seconds
Done.
[064bit prompt]>
[064bit prompt]> # ---------- CUSTOM LOG ----------
[064bit prompt]> GST_DEBUG=2 python ./code00.py 1
Python 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] 064bit on linux
---
Custom log function: True
Running pipeline:
gst-launch-1.0 -e videotestsrc pattern=18 ! nvvideoconvert ! nvv4l2h264enc ! fakesink sync=0
---
WARN v4l2 - gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<sink (Pad)>Unable to try format: Unknown error -1
WARN v4l2 - gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<sink (Pad)>Could not probe minimum capture size for pixelformat YM12
WARN v4l2 - gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<sink (Pad)>Unable to try format: Unknown error -1
WARN v4l2 - gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<sink (Pad)>Could not probe maximum capture size for pixelformat YM12
code00.py:44: Warning: g_object_is_floating: assertion 'G_IS_OBJECT (object)' failed
pipeline.set_state(Gst.State.PLAYING)
code00.py:44: Warning: g_object_get_qdata: assertion 'G_IS_OBJECT (object)' failed
pipeline.set_state(Gst.State.PLAYING)
Segmentation fault (core dumped)
</code></pre>
</blockquote>
<p><strong>Notes</strong>:</p>
<ul>
<li><p>Error is reproducible for any of the 3 coders in the <em>NVV4L2</em> plugin</p>
</li>
<li><p>Looking at the above differences, the 1<sup>st</sup> line that appears in the good run <strong>after</strong> the last one seen in the crashed one is: <code>v4l2 gstv4l2object.c:2395:gst_v4l2_object_add_interlace_mode:0x13873f0 Failed to determine interlace mode</code> (this happens every time)</p>
</li>
<li><p>Works fine when decreasing the log level (<em>ERROR</em>, <em>1</em>), but then it's pretty much useless (log wise)</p>
</li>
</ul>
<p>Based on the observations above, one can conclude that the error has something to do with that particular line. Worth mentioning that I browsed several <em>Gst-Plugins-Good</em> versions (<em>gst-plugins-good/sys/v4l2/gstv4l2object.c</em>) but <strong>none of them has the exact line numbers</strong> (those particular lines of code don't seem to change though).</p>
<p>Also noted that error occurs even for:</p>
<ul>
<li><p>Dummy log functions (containing just a <code>pass</code> statement)</p>
</li>
<li><p>Extension modules (I created one with a (dummy) log function)</p>
</li>
</ul>
<p><strong>References</strong>:</p>
<ul>
<li><p><a href="https://stackoverflow.com/q/77886002/4788546">[SO]: Collecting GStreamer logs in Python ends with Segmentation fault</a></p>
</li>
<li><p><a href="https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3253" rel="nofollow noreferrer">[FreeDesktop.GitLab]: [BUG] Collecting GStreamer logs in Python ends with Segmentation fault</a> (referenced by previous)</p>
</li>
<li><p><a href="https://stackoverflow.com/q/75677726/4788546">[SO]: Redirecting Gstreamer logs in python from stdout/stderr to a custom logging handler</a></p>
</li>
<li><p><a href="https://gist.github.com/pwoolvett/a20c3a45af44dfe58d06e4bbe65b35d5" rel="nofollow noreferrer">[GitHub.Gist]: pwoolvett/nvidia.py</a></p>
</li>
<li><p><a href="https://forums.developer.nvidia.com/t/nvidia-gstreamer-element-segfaults-with-gst-debug-add-log-function-race-condition/155999" rel="nofollow noreferrer">[Nvidia.Developer.Forums]: NVIDIA gstreamer element segfaults with `Gst.debug_add_log_function` race condition</a> (referenced by previous)</p>
</li>
<li><p><a href="https://forums.developer.nvidia.com/t/deepstream-h265parser-segfaults-with-gstreamer-gst-debug-add-log-function/269688" rel="nofollow noreferrer">[Nvidia.Developer.Forums]: Deepstream h265parser segfaults with gstreamer gst_debug_add_log_function</a></p>
</li>
</ul>
<p>On a closing note, marshaling the <em>GStreamer</em> objects to <em>Python</em> will have a negative impact on performance (especially if happening often during pipeline run).</p>
|
<python><c><gstreamer><pygobject><nvidia-deepstream>
|
2024-02-03 14:47:35
| 1
| 41,517
|
CristiFati
|
77,932,375
| 16,374,636
|
Loading from list of tuples fails
|
<p>I am trying to load a list of tuples into a <code>pl.Dataframe</code>.</p>
<pre class="lang-py prettyprint-override"><code>data = [
(1, UUID('some-uuid'), None, datetime.datetime(2023, 2, 3, 6, 18, 11), None, 24040228, None, 0.0625, False, <Fruit.BANANA: 'BANANA'>, None),
]
pl.DataFrame(data, orient='row')
</code></pre>
<p>However, this results in the following</p>
<pre><code>shape: (1, 1)
βββββββββββββββββββββββββββββββββββ
β column_0 β
β --- β
β object β
βββββββββββββββββββββββββββββββββββ‘
β (1, UUID('3231989-c122-222b-1cβ¦ β
βββββββββββββββββββββββββββββββββββ
</code></pre>
<p>instead of a Dataframe with multiple columns for each value. How can this be fixed?</p>
|
<python><dataframe><python-polars>
|
2024-02-03 14:03:05
| 1
| 407
|
zacko
|
77,932,262
| 6,842,112
|
Flask/Python: localhost => ok, Heroku => error
|
<p>I read <a href="https://stackoverflow.com/questions/40714887/flask-web-app-getting-error-on-heroku-but-working-on-localhost">this</a>, <a href="https://stackoverflow.com/questions/50711572/heroku-runs-ok-local-but-deployment-app-crash-flask-and-python">this</a>, <a href="https://stackoverflow.com/questions/20772104/flask-python-application-failed-to-start-before-deployment-on-heroku">this</a> and lots of other questions before asking you.</p>
<p>I have launched Flask app locally. However, when I deploy on Heroku I got an error:</p>
<pre><code>-----> Building on the Heroku-22 stack
-----> Using buildpack: heroku/python
-----> Python app detected
-----> No Python version was specified. Using the same version as the last build: python-3.12.1
To use a different version, see: https://devcenter.heroku.com/articles/python-runtimes
-----> Requirements file has been changed, clearing cached dependencies
-----> Installing python-3.12.1
-----> Installing pip 23.3.2, setuptools 68.2.2 and wheel 0.42.0
-----> Installing SQLite3
-----> Installing requirements with pip
Collecting asgiref==3.7.2 (from -r requirements.txt (line 1))
Downloading asgiref-3.7.2-py3-none-any.whl.metadata (9.2 kB)
Collecting blinker==1.7.0 (from -r requirements.txt (line 2))
Downloading blinker-1.7.0-py3-none-any.whl.metadata (1.9 kB)
Collecting certifi==2022.12.7 (from -r requirements.txt (line 3))
Downloading certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting chardet==5.2.0 (from -r requirements.txt (line 4))
Downloading chardet-5.2.0-py3-none-any.whl.metadata (3.4 kB)
Collecting charset-normalizer==2.1.1 (from -r requirements.txt (line 5))
Downloading charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting click==8.1.7 (from -r requirements.txt (line 6))
Downloading click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting colorama==0.4.6 (from -r requirements.txt (line 7))
Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting Django==5.0.1 (from -r requirements.txt (line 8))
Downloading Django-5.0.1-py3-none-any.whl.metadata (4.2 kB)
Collecting filelock==3.9.0 (from -r requirements.txt (line 9))
Downloading filelock-3.9.0-py3-none-any.whl (9.7 kB)
Collecting Flask==3.0.1 (from -r requirements.txt (line 10))
Downloading flask-3.0.1-py3-none-any.whl.metadata (3.6 kB)
Collecting fsspec==2023.4.0 (from -r requirements.txt (line 11))
Downloading fsspec-2023.4.0-py3-none-any.whl (153 kB)
Collecting gunicorn==21.2.0 (from -r requirements.txt (line 12))
Downloading gunicorn-21.2.0-py3-none-any.whl.metadata (4.1 kB)
Collecting idna==3.4 (from -r requirements.txt (line 13))
Downloading idna-3.4-py3-none-any.whl (61 kB)
Collecting itsdangerous==2.1.2 (from -r requirements.txt (line 14))
Downloading itsdangerous-2.1.2-py3-none-any.whl (15 kB)
Collecting Jinja2==3.1.2 (from -r requirements.txt (line 15))
Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting MarkupSafe==2.1.3 (from -r requirements.txt (line 16))
Downloading MarkupSafe-2.1.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.9 kB)
Collecting mpmath==1.3.0 (from -r requirements.txt (line 17))
Downloading mpmath-1.3.0-py3-none-any.whl (536 kB)
Collecting networkx==3.0 (from -r requirements.txt (line 18))
Downloading networkx-3.0-py3-none-any.whl (2.0 MB)
Collecting numpy==1.24.1 (from -r requirements.txt (line 19))
Downloading numpy-1.24.1.tar.gz (10.9 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'error'
error: subprocess-exited-with-error
Γ Getting requirements to build wheel did not run successfully.
β exit code: 1
β°β> [33 lines of output]
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/app/.heroku/python/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 112, in get_requires_for_build_wheel
backend = _build_backend()
^^^^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
obj = import_module(mod_path)
^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1310, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 994, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/tmp/pip-build-env-e43lqbna/overlay/lib/python3.12/site-packages/setuptools/__init__.py", line 16, in <module>
import setuptools.version
File "/tmp/pip-build-env-e43lqbna/overlay/lib/python3.12/site-packages/setuptools/version.py", line 1, in <module>
import pkg_resources
File "/tmp/pip-build-env-e43lqbna/overlay/lib/python3.12/site-packages/pkg_resources/__init__.py", line 2172, in <module>
register_finder(pkgutil.ImpImporter, find_on_path)
^^^^^^^^^^^^^^^^^^^
AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
Γ Getting requirements to build wheel did not run successfully.
β exit code: 1
β°β> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
! Push rejected, failed to compile Python app.
! Push failed
</code></pre>
<p>Requirements.txt:</p>
<pre><code>asgiref==3.7.2
blinker==1.7.0
certifi==2022.12.7
chardet==5.2.0
charset-normalizer==2.1.1
click==8.1.7
colorama==0.4.6
Django==5.0.1
filelock==3.9.0
Flask==3.0.1
fsspec==2023.4.0
gunicorn==21.2.0
idna==3.4
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.3
mpmath==1.3.0
networkx==3.0
numpy==1.24.1
packaging==23.2
Pillow==9.3.0
python==3.12.1
requests==2.28.1
sqlparse==0.4.4
sympy==1.12
torch==2.1.2+cu118
torchaudio==2.1.2+cu118
torchvision==0.16.2+cu118
typing_extensions==4.4.0
tzdata==2023.4
urllib3==1.26.13
Werkzeug==3.0.1
</code></pre>
<p>Procfile</p>
<pre><code>web: python app.py
</code></pre>
<p>app.py</p>
<pre><code>from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello():
return "Hello, it's me!"
if __name__ == '__main__':
# run() method of Flask class runs the application
# on the local development servers
app.run()
</code></pre>
<p>One of the unclear things - if I run:</p>
<pre><code>pip freeze >requirements.txt
</code></pre>
<p>then the newly generated requirements.txt <strong>does NOT include "python==3.12.1"</strong> so I add it manually.</p>
<p>If I deploy without "python==3.12.1" in requirements.txt, then error on Heroku is:</p>
<pre><code>-----> Building on the Heroku-22 stack
-----> Using buildpack: heroku/python
-----> Python app detected
-----> No Python version was specified. Using the same version as the last build: python-3.12.1
To use a different version, see: https://devcenter.heroku.com/articles/python-runtimes
-----> Requirements file has been changed, clearing cached dependencies
-----> Installing python-3.12.1
-----> Installing pip 23.3.2, setuptools 68.2.2 and wheel 0.42.0
-----> Installing SQLite3
-----> Installing requirements with pip
Collecting asgiref==3.7.2 (from -r requirements.txt (line 1))
Downloading asgiref-3.7.2-py3-none-any.whl.metadata (9.2 kB)
Collecting blinker==1.7.0 (from -r requirements.txt (line 2))
Downloading blinker-1.7.0-py3-none-any.whl.metadata (1.9 kB)
Collecting certifi==2022.12.7 (from -r requirements.txt (line 3))
Downloading certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting chardet==5.2.0 (from -r requirements.txt (line 4))
Downloading chardet-5.2.0-py3-none-any.whl.metadata (3.4 kB)
Collecting charset-normalizer==2.1.1 (from -r requirements.txt (line 5))
Downloading charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting click==8.1.7 (from -r requirements.txt (line 6))
Downloading click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting colorama==0.4.6 (from -r requirements.txt (line 7))
Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting distlib==0.3.8 (from -r requirements.txt (line 8))
Downloading distlib-0.3.8-py2.py3-none-any.whl.metadata (5.1 kB)
Collecting Django==5.0.1 (from -r requirements.txt (line 9))
Downloading Django-5.0.1-py3-none-any.whl.metadata (4.2 kB)
Collecting filelock==3.13.1 (from -r requirements.txt (line 10))
Downloading filelock-3.13.1-py3-none-any.whl.metadata (2.8 kB)
Collecting Flask==3.0.1 (from -r requirements.txt (line 11))
Downloading flask-3.0.1-py3-none-any.whl.metadata (3.6 kB)
Collecting fsspec==2023.4.0 (from -r requirements.txt (line 12))
Downloading fsspec-2023.4.0-py3-none-any.whl (153 kB)
Collecting gunicorn==21.2.0 (from -r requirements.txt (line 13))
Downloading gunicorn-21.2.0-py3-none-any.whl.metadata (4.1 kB)
Collecting idna==3.4 (from -r requirements.txt (line 14))
Downloading idna-3.4-py3-none-any.whl (61 kB)
Collecting itsdangerous==2.1.2 (from -r requirements.txt (line 15))
Downloading itsdangerous-2.1.2-py3-none-any.whl (15 kB)
Collecting Jinja2==3.1.2 (from -r requirements.txt (line 16))
Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting MarkupSafe==2.1.3 (from -r requirements.txt (line 17))
Downloading MarkupSafe-2.1.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.9 kB)
Collecting mpmath==1.3.0 (from -r requirements.txt (line 18))
Downloading mpmath-1.3.0-py3-none-any.whl (536 kB)
Collecting networkx==3.0 (from -r requirements.txt (line 19))
Downloading networkx-3.0-py3-none-any.whl (2.0 MB)
Collecting numpy==1.24.1 (from -r requirements.txt (line 20))
Downloading numpy-1.24.1.tar.gz (10.9 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'error'
error: subprocess-exited-with-error
Γ Getting requirements to build wheel did not run successfully.
β exit code: 1
β°β> [33 lines of output]
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/app/.heroku/python/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 112, in get_requires_for_build_wheel
backend = _build_backend()
^^^^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
obj = import_module(mod_path)
^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1310, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 994, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/tmp/pip-build-env-1s3nzyic/overlay/lib/python3.12/site-packages/setuptools/__init__.py", line 16, in <module>
import setuptools.version
File "/tmp/pip-build-env-1s3nzyic/overlay/lib/python3.12/site-packages/setuptools/version.py", line 1, in <module>
import pkg_resources
File "/tmp/pip-build-env-1s3nzyic/overlay/lib/python3.12/site-packages/pkg_resources/__init__.py", line 2172, in <module>
register_finder(pkgutil.ImpImporter, find_on_path)
^^^^^^^^^^^^^^^^^^^
AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
Γ Getting requirements to build wheel did not run successfully.
β exit code: 1
β°β> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
! Push rejected, failed to compile Python app.
! Push failed
</code></pre>
<p>I see the line "<em>No Python version was specified.</em>" and this is the reason why I added "python==3.12.1" to requirements.txt</p>
<p>I will be absolutely grateful for any help.</p>
|
<python><flask><heroku>
|
2024-02-03 13:30:36
| 0
| 921
|
Maryna K.
|
77,932,167
| 6,109,283
|
SPI connection in Docker containerβpermission denied
|
<p>I have a Raspberry Pi Zero W (armv6) and an RFID MFRC522 module. I can use it fine from Python as follows:</p>
<pre class="lang-py prettyprint-override"><code>import RPi.GPIO as GPIO
from mfrc522 import SimpleMFRC522
reader = SimpleMFRC522()
try:
id = reader.read()[0]
print("The ID for this card is:", id)
finally:
GPIO.cleanup()
</code></pre>
<p>However, I'm now trying to access the device form Docker with no luck. This is my Docker setup:</p>
<pre><code>FROM python:3.9-alpine
WORKDIR /usr/src/app
COPY controller/requirements.txt ./
RUN pip install --index-url=https://www.piwheels.org/simple --no-cache-dir -r requirements.txt
COPY controller .
CMD ["python", "controller.py"]
</code></pre>
<pre><code>version: '3.8'
services:
controller:
build:
context: .
dockerfile: Dockerfile_controller
privileged: true
volumes:
- /lib/modules:/lib/modules
- /sys:/sys
- /dev:/dev
devices:
- /dev/gpiomem
- /dev/gpiochip0
- /dev/ttyAMA0
</code></pre>
<p>I'm trying to run in privileged mode and to include relevant devices and volumes as advised online. I'm getting the following error though:</p>
<pre><code>Traceback (most recent call last):
File "/usr/src/app/controller.py", line 4, in <module>
import RPi.GPIO as GPIO
File "/usr/local/lib/python3.9/site-packages/RPi/GPIO/__init__.py", line 23, in <module>
from RPi._GPIO import *
ImportError: Error loading shared library ld-linux-armhf.so.3: No such file or directory (needed by /usr/local/lib/python3.9/site-packages/RPi/_GPIO.cpython-39-arm-linux-gnueabihf.so)
</code></pre>
<p>It could be that this Alpine image is not compatible? I'm using it mainly for its light weight.</p>
<p>Any ideas welcome!</p>
<hr />
<h2>Update</h2>
<p>Fixed this issue above as per Klaus D.'s comment below. Needed to install some C libraries:</p>
<pre><code>FROM python:3.9-alpine
WORKDIR /usr/src/app
COPY controller .
RUN apk add --update --no-cache gcc libc-dev gcompat
RUN pip install --index-url=https://www.piwheels.org/simple --no-cache-dir -r requirements.txt
CMD ["python", "controller.py"]
</code></pre>
<p>However, I'm now getting the following error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/src/app/controller.py", line 30, in <module>
reader = SimpleMFRC522()
File "/usr/local/lib/python3.9/site-packages/mfrc522/SimpleMFRC522.py", line 14, in __init__
self.READER = MFRC522()
File "/usr/local/lib/python3.9/site-packages/mfrc522/MFRC522.py", line 130, in __init__
self.spi.open(bus, device)
PermissionError: [Errno 13] Permission denied
</code></pre>
<p>There's some discussion online about giving users access to SPI (and other GPIO stuff) but haven't managed to get it to work. If I open a terminal inside my running container I can see the following:</p>
<pre><code>/usr/src/app # ls -all /dev/*spi*
crw-rw---- 1 nobody nobody 153, 0 Feb 2 19:24 /dev/spidev0.0
crw-rw---- 1 nobody nobody 153, 1 Feb 2 19:24 /dev/spidev0.1
</code></pre>
<p>and</p>
<pre><code>/usr/src/app # whoami
root
</code></pre>
<p>I've tried to change permissions but failed:</p>
<pre><code>/usr/src/app # chmod 660 /dev/spidev0.0
chmod: /dev/spidev0.0: Operation not permitted
</code></pre>
<p>I've also tried <code>chown</code>.</p>
|
<python><docker><raspberry-pi><spi>
|
2024-02-03 12:59:05
| 0
| 410
|
AlvaroP
|
77,931,910
| 1,194,864
|
Process data in panda frame and replace nan with zero
|
<p>I do have a spreadsheet file that I would like to process read some of the columns sequentially and perform some calculations per row. To make my life easier I will need to replace all the nan values with zero to perform the calculations. So my code looks like:</p>
<pre><code>df = pd.read_csv('documents/doc.csv', error_bad_lines=False)
for i, row in df.iterrows():
if i==0:
continue
else:
try:
id_1 = row[0]
var1 = float(row["first column"])
var2 = float(row["second column"])
var3 = float(row["third column"])
var4 = float(row["fourth column"])
var5 = float(row["fifth column"])
if math.isnan(float(var4)) or float(var4) == 0:
final= round(0.25*(var1+ var2 + var3)/3 + 0.75*var5, 1)
pdb.set_trace()
else:
final_grade = round(0.166*(var1+ var2+ var3)/3 + 0.33*var4 + 0.5*var5, 1)
print (var4)
except Exception as e:
print(e)
pdb.set_trace()
</code></pre>
<p>Whats the optimal way to extract my variables and replace all potential <code>nan</code> values with <code>zeros</code>?</p>
|
<python><pandas>
|
2024-02-03 11:31:33
| 1
| 5,452
|
Jose Ramon
|
77,931,704
| 6,282,576
|
decorator declared and used inside of class
|
<p>I wrote a manager class to interact with Metabase and I wish to write a decorator inside this class to renew sessions if they're expired.</p>
<pre class="lang-py prettyprint-override"><code>import requests
from datetime import datetime, timedelta
from typing import Callable, Any
from urllib.parse import urljoin
from django.utils import timezone
from rest_framework import status
from core.service.logger import exceptionErrorLog
from core.service.sensetiveApi.parameterModel import Parameter
class MetabaseManager:
"""
A manager class for interacting with Metabase
Usage:
metabaseManager = MetabaseManager()
metabaseManager.getQuestion(questionId=34)
Attributes:
Parameter.METABASE_BASE_URL (str): The base url of the Metabase server
Parameter.METABASE_USERNAME (str): The username to authenticate with Metabase
Parameter.METABASE_PASSWORD (str): The password to authenticate with Metabase
Parameter.METABASE_SESSION_ID (str): The session id provided by Metabase (no need to manually set this value)
Methods:
getQuestion(self, questionId: int) -> dict:
Retrieves question data from Metabase
"""
METABASE_SESSION_DURATION_IN_DAYS: int = 14
def __init__(self) -> None:
self.METABASE_BASE_URL: str = Parameter.objects.get(key=Parameter.METABASE_BASE_URL).value
self.METABASE_USERNAME: str = Parameter.objects.get(key=Parameter.METABASE_USERNAME).value
self.METABASE_PASSWORD: str = Parameter.objects.get(key=Parameter.METABASE_PASSWORD).value
self.METABASE_HEADERS: dict = {}
def _renewSession(self) -> None:
now: datetime = timezone.now()
url: str = urljoin(self.METABASE_BASE_URL, "/api/session/")
response: requests.Response = requests.post(
url=url,
json={
"username": self.METABASE_USERNAME,
"password": self.METABASE_PASSWORD,
},
)
response.raise_for_status()
sessionId: str = response.json()["id"]
Parameter.objects(key=Parameter.METABASE_SESSION_ID).update(
value=sessionId,
createdAt=now,
expireAt=now + timedelta(days=MetabaseManager.METABASE_SESSION_DURATION_IN_DAYS),
upsert=True,
)
self.METABASE_HEADERS["X-Metabase-Session"] = sessionId
@staticmethod
def _renewSessionIfExpired(function: Callable) -> Callable:
def wrapper(self, *args, **kwargs):
sessionId: Parameter = Parameter.objects.filter(key=Parameter.METABASE_SESSION_ID).first()
if (
sessionId is None or
sessionId.expireAt is None or
sessionId.expireAt <= timezone.now()
):
self._renewSession()
return function(self, *args, **kwargs)
return wrapper
@_renewSessionIfExpired
def getQuestion(self, questionId: int) -> dict:
url: str = urljoin(self.METABASE_BASE_URL, f"api/card/{questionId}/query")
response: requests.Response = requests.post(url=url, headers=self.METABASE_HEADERS)
if response.status_code == status.HTTP_401_UNAUTHORIZED:
self._renewSession()
response: requests.Response = requests.post(url=url, headers=self.METABASE_HEADERS)
data = response.json().get("data", {}).get("rows")
return data
</code></pre>
<p>Now this doesn't work, throwing this error:</p>
<pre class="lang-py prettyprint-override"><code>Python 3.6.15 (default, Dec 21 2021, 12:28:02)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from core.service.metabaseManager import MetabaseManager
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/opt/project/app/core/service/metabaseManager.py", line 27, in <module>
class MetabaseManager:
File "/opt/project/app/core/service/metabaseManager.py", line 88, in MetabaseManager
def getQuestion(self, questionId: int) -> dict:
TypeError: 'staticmethod' object is not callable
</code></pre>
<p>I can solve this if I use the decorator like this outside of the class:</p>
<pre class="lang-py prettyprint-override"><code>MetabaseManager.getQuestion = MetabaseManager._renewSessionIfExpired(MetabaseManager.getQuestion)
</code></pre>
<p>This works, but isn't what I want. For one thing, I want the <code>_renewSessionIfExpired</code> decorator to remain private and using it like this outside of the class definition doesn't seem the right thing to do. And it's not clean to me, because it separates the usage of the decorator from the function declaration and I have to remember which functions I want to use the decorator on. How can I change this so that this works?</p>
<pre class="lang-py prettyprint-override"><code>@_renewSessionIfExpired
def getQuestion(self, questionId: int) -> dict:
url: str = urljoin(self.METABASE_BASE_URL, f"api/card/{questionId}/query")
response: requests.Response = requests.post(url=url, headers=self.METABASE_HEADERS)
if response.status_code == status.HTTP_401_UNAUTHORIZED:
self._renewSession()
response: requests.Response = requests.post(url=url, headers=self.METABASE_HEADERS)
data = response.json().get("data", {}).get("rows")
return data
</code></pre>
<p>Python version: 3.6.15</p>
|
<python><python-decorators>
|
2024-02-03 10:23:41
| 0
| 4,313
|
Amir Shabani
|
77,931,620
| 3,628,119
|
How to avoid O(n+1) problem with peewee object?
|
<p>I have these following models in peewee:</p>
<p>db.py</p>
<pre><code>class Item(BaseModel):
item_id = IntegerField()
item_name = CharField()
class Sales(BaseModel):
sales_id = IntegerField()
class SalesItem(BaseModel):
sales_item_id = Integer_Field()
sales = ForeignKeyField(Sales, backref='items')
item = ForeignKeyField(Item)
</code></pre>
<p>view.py</p>
<pre><code>templates = Jinja2Templates(directory="templates")
def html_get(request, sales_id):
sales = Sales.get(Sales.sales_id==sales_id)
return templates.templateResponse('view_sales.html',{'sales':sales})
</code></pre>
<p>view_sales.html</p>
<pre><code>Sales: {{ sales.sales_id }}
Items:
{% for it in sales.items %}
<div>{{ it.item.item_name }}</div>
{% endfor %}
</code></pre>
<p>The problem is the query <code>sales = Sales.get(Sales.sales_id==sales_id)</code> generates O(n+1) queries to get to each item's name. I have tried to create the join <code>sales = Sales.select(Sales,SalesItem,Item).join(SalesItem).join(Item).where(Sales.sales_id==sales_id)</code> but it also generates O(n+1) queries.</p>
<p>I looked into using prefetch, but couldn't get it to work. Any idea how to reduce the queries to O(k) where k=number of tables?</p>
|
<python><peewee>
|
2024-02-03 09:55:01
| 2
| 357
|
user3628119
|
77,931,609
| 5,091,467
|
Why does pandas sum() give wrong answers for Sparse dataframe?
|
<p>In a <code>Sparse</code> dataframe, the <code>sum()</code> method applied on the whole dataframe gives wrong results, while <code>sum()</code> applied to specific column or to a dataframe subset works.</p>
<p>It looks like an overflow issue for <code>sum()</code> when applied to the whole dataframe, since type <code>Sparse[int8, 0]</code> is chosen for sum result. However, why isn't that the case for the other two scenarios?</p>
<p>Note: Strangely, when run in Anaconda terminal, each scenario gives correct result, while in Pycharms I see the error.</p>
<pre class="lang-py prettyprint-override"><code>>>> import numpy as np
>>> import pandas as pd
>>> # Generate standard and sparse DF with binary variable.
>>> # Use int8 to minimize memory usage.
>>> df = pd.DataFrame(np.random.randint(low=0, high=2, size=(50_000, 1)))
>>> sdf = df.astype(pd.SparseDtype(dtype='int8', fill_value=0))
>>> print(df.sum(axis=0))
0 24954
dtype: int64
>>> # Why does this give a wrong answer while the other two work?
>>> print(sdf.sum(axis=0))
0 122
dtype: Sparse[int8, 0]
>>> # Works
>>> print(sdf[0].sum())
24954
>>> # Works
>>> print(sdf[sdf==1].sum())
0 24954.0
dtype: float64
</code></pre>
<p>Finally, what's a safe way for summing Sparse df columns without going dense or changing the <code>dtype</code>? I currently iterate over each column and save the <code>sum()</code> result in a dictionary (similar to Scenario 2 in this example), then transform to dataframe, which seems a bit cumbersome.</p>
|
<python><pandas><sum><sparse-matrix><integer-overflow>
|
2024-02-03 09:52:34
| 1
| 714
|
Dudelstein
|
77,931,440
| 1,795,245
|
View value in Data Viewer stopped showing - view Dataframes
|
<p>Yesterday it worked, but now it has stopped working. <br /><br /><a href="https://i.sstatic.net/YxccO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YxccO.png" alt="Menu" /></a><br /><br />It's probably something simple that I've missed. I've restarted the computer, checked that I have Jupyter installed, and even tried reinstalling it. Any tips?</p>
<p>Edit: Added code</p>
<pre><code>import pandas as pd
# Define the data
data = {'Name': ['John', 'Emma', 'Michael', 'Sophia'],
'Age': [25, 28, 32, 30],
'City': ['New York', 'London', 'Paris', 'Tokyo']}
# Create the DataFrame
df = pd.DataFrame(data)
</code></pre>
|
<python><visual-studio-code>
|
2024-02-03 08:54:27
| 1
| 649
|
Jonas
|
77,931,366
| 4,710,828
|
What is "python-fwf" engine in pandas read_csv() method and when to use it?
|
<p>I was going through the pandas documentation, and going through read_csv() method <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">here</a>, I saw that the latest version (2.2 stable) mentions only 3 engines:</p>
<pre><code>engine: {βcβ, βpythonβ, βpyarrowβ}, optional
</code></pre>
<p>However, in my PyCharm (with pandas version 2.2.0), I can see that a fourth engine is mentioned as well, which is <code>python-fwf</code></p>
<p><a href="https://i.sstatic.net/hYC5D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hYC5D.png" alt="enter image description here" /></a></p>
<p>I have searched in pandas documentation, but could not find any information about this.
Can someone please explain in what scenarios this should be used?</p>
<p>Note: I know that there is a <code>pandas.read_fwf()</code> which can be used to read a table of fixed-width formatted lines into DataFrame. Is the engine connected to this scenario? If yes, why we should use <code>read_csv()</code> with <code>python-fwf</code> engine instead of <code>read_fwf()</code>?</p>
|
<python><pandas>
|
2024-02-03 08:27:10
| 1
| 373
|
sdawar
|
77,931,280
| 1,129,194
|
String representation of nested function with stackframe/inspection
|
<p>I am working on generating a call graph for a large codebase, and I would like to generate a nice string representation for nested functions/closures - e.g. something like <code>Test.a.b</code> for this function:</p>
<pre class="lang-py prettyprint-override"><code>class Test:
def a(self):
def b():
pass
</code></pre>
<p>Currently I have the following:</p>
<pre class="lang-py prettyprint-override"><code>import inspect
def trace(frame):
code = frame.f_code
module = inspect.getmodule(code)
module_name = module.__name__ if module else None
try:
class_name = frame.f_locals["self"].__class__.__name__
except (KeyError, AttributeError):
class_name = None
func_name = code.co_name
print(module_name, class_name, func_name)
</code></pre>
<p>If I call this from my test class:</p>
<pre class="lang-py prettyprint-override"><code>class Test:
def a(self):
trace(inspect.currentframe())
def b():
trace(inspect.currentframe())
b()
def main():
test = Test()
test.a()
</code></pre>
<p>Then I get the following output:</p>
<pre><code>__main__ Test a
__main__ None b
</code></pre>
<p>Is there a way to detect that <code>b</code> is defined within <code>Test.a</code>?</p>
<p>I know that it's possible to achieve this with the string representation of the function itself, e.g. <code><function Test.a.<locals>.b at 0x105976fc0></code> obtained <a href="https://stackoverflow.com/a/4506081/1129194">via this method</a>, but I am wondering if there's a more elegant way to do this?</p>
|
<python><trace>
|
2024-02-03 07:48:18
| 1
| 9,013
|
Alex L
|
77,931,118
| 6,837,086
|
TypeError: __init__() missing 2 required positional arguments in Airflow timetable
|
<p>I am trying to implement a parameterized timetable following this <a href="https://airflow.apache.org/docs/apache-airflow/stable/howto/timetable.html#parameterized-timetables" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow/stable/howto/timetable.html#parameterized-timetables</a> and I am getting the following error message in the logs: <code>TypeError: __init__() missing 2 required positional arguments: 'hour' and 'minute'</code>. This is the timetable code:</p>
<pre><code>`class EveryFiscalPeriod(Timetable):
def __init__(self, hour: int, minute: int) -> None:
self._hour = hour
self._minute = minute
def next_dagrun_info(
self,
*,
last_automated_data_interval: Optional[DataInterval],
restriction: TimeRestriction,
) -> Optional[DagRunInfo]:
delta = timedelta(days=28)
if last_automated_data_interval is not None: # There was a previous run on the regular schedule.
next_start = last_automated_data_interval.end
next_end = last_automated_data_interval.end + delta
else: # This is the first ever run on the regular schedule.
restriction_earliest = restriction.earliest
next_start = restriction_earliest - delta
if next_start is None: # No start_date. Don't schedule.
return None
next_end = restriction_earliest
return DagRunInfo(
data_interval=DataInterval(start=next_start, end=next_end),
run_after=DateTime.combine(next_end.date(), Time(self.hour), Time(self.minute)).replace(tzinfo=UTC),
)`
</code></pre>
<p>In my DAG code I am passing the hour and minute into the class so I shouldn't be getting this error. The timetable itself without parameters works, but it breaks when I make it parameterized.</p>
<pre><code>`with DAG(
catchup=False,
.........
),
max_active_runs=1,
schedule=EveryFiscalPeriod(hour=15, minute=30),
)......`
</code></pre>
<p>Stack trace:</p>
<pre><code> [2024-02-03T06:29:28.294+0000] {app.py:1744} ERROR - Exception on /dags/accrual_repot_missing_orders/grid [GET]
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 2529, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1825, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1823, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1799, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/auth.py", line 53, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/decorators.py", line 168, in view_func
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/decorators.py", line 127, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 79, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/views.py", line 2936, in grid
dag = get_airflow_app().dag_bag.get_dag(dag_id, session=session)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 76, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dagbag.py", line 189, in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dagbag.py", line 271, in _add_dag_from_db
dag = row.dag
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/serialized_dag.py", line 221, in dag
return SerializedDAG.from_dict(data)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/serialization/serialized_objects.py", line 1413, in from_dict
return cls.deserialize_dag(serialized_obj["dag"])
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/serialization/serialized_objects.py", line 1341, in deserialize_dag
v = _decode_timetable(v)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/serialization/serialized_objects.py", line 211, in _decode_timetable
return timetable_class.deserialize(var[Encoding.VAR])
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/timetables/base.py", line 168, in deserialize
return cls()
TypeError: __init__() missing 2 required positional arguments: 'hour' and 'minute'
</code></pre>
<p>Any help will be much appreciated.</p>
|
<python><airflow><timetable>
|
2024-02-03 06:45:20
| 1
| 307
|
Luis Lema
|
77,930,942
| 5,324,306
|
Sum of frequency/probability distributions vs convolutions
|
<p>From my understanding, the sum of independent random variables will be the same as the convolution of the input distributions.</p>
<p>However, when experimenting with it, I see the distribution of the sum of the variables does not match that of the convolution result.</p>
<p>For example: The sum of two independent random variables from a uniform distribution should follow a triangular distribution. But convolution does not. I tried using <code>numpy.convolve</code>.</p>
<p>Am I missing something?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# Generate two uniform distributions
uniform1 = np.random.uniform(0, 1, 100000)
uniform2 = np.random.uniform(0, 1, 100000)
# Convolution of two uniform distributions
convolution_result = np.convolve(uniform1, uniform2, mode='full')
# Plot the histograms
plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
plt.hist(uniform1 + uniform2, bins=50, density=True, alpha=0.7)
plt.title('Sum of Uniform Distributions')
plt.subplot(1, 2, 2)
plt.hist(convolution_result, bins=50, density=True, alpha=0.7)
plt.title('Convolution of Uniform Distributions')
plt.show()
enter image description here
</code></pre>
<p><a href="https://i.sstatic.net/VMy9V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VMy9V.png" alt="enter image description here" /></a></p>
<p><strong>Context</strong>:</p>
<p>I have a state transition matrix (probabilities), where each transition is associated with rewards/costs like, say, latency, price, etc. Instead of a constant latency, each step can be associated with a probability distribution for the latency.</p>
<p>Assuming there is a single starting state and a single final state, I want to find the distribution of the total costs.</p>
<p>In practice, there will be a large number of transitions with different probabilities for each transition, and the cost/rewards could have arbitrary frequency distributions and I should be able to find the histogram of the total cost/rewards.</p>
<p>Note: I previously asked this question in the math community, but I couldn't understand the response so far.
<a href="https://math.stackexchange.com/questions/4855739/sum-of-frequency-distributions-vs-convolutions">https://math.stackexchange.com/questions/4855739/sum-of-frequency-distributions-vs-convolutions</a></p>
|
<python><numpy><scipy><probability><convolution>
|
2024-02-03 05:08:47
| 1
| 1,090
|
JackDaniels
|
77,930,920
| 7,662,164
|
JAX `vjp` fails for vmapped function with `custom_vjp`
|
<p>Below is an example where a function with a custom-defined vector-Jacobian product (<code>custom_vjp</code>) is <code>vmap</code>ped. For a simple function like this, invoking <code>vjp</code> fails:</p>
<pre><code>@partial(custom_vjp, nondiff_argnums=(0,))
def test_func(f: Callable[..., float],
R: Array
) -> float:
return f(jnp.dot(R, R))
def test_func_fwd(f, primal):
primal_out = test_func(f, primal)
residual = 2. * primal * primal_out
return primal_out, residual
def test_func_bwd(f, residual, cotangent):
cotangent_out = residual * cotangent
return (cotangent_out, )
test_func.defvjp(test_func_fwd, test_func_bwd)
test_func = vmap(test_func, in_axes=(None, 0))
if __name__ == "__main__":
def f(x):
return x
# vjp
primal, f_vjp = vjp(partial(test_func, f),
jnp.ones((10, 3))
)
cotangent = jnp.ones(10)
cotangent_out = f_vjp(cotangent)
print(cotangent_out[0].shape)
</code></pre>
<p>The error message says:</p>
<pre><code>ValueError: Shape of cotangent input to vjp pullback function (10,) must be the same as the shape of corresponding primal input (10, 3).
</code></pre>
<p>Here, I think the error message is misleading, because the cotangent input should have the same shape as the primal output, which should be <code>(10, )</code> in this case. Still, it's not clear to me why this error occurs.</p>
|
<python><vectorization><jax><automatic-differentiation>
|
2024-02-03 04:56:18
| 1
| 335
|
Jingyang Wang
|
77,930,695
| 11,277,108
|
Output the result of an input generator and then the result of a translation of the result of the generator
|
<p>The following code:</p>
<pre><code>def test(x):
for i in x:
yield i
i = list(i)
i[1] = "X"
yield tuple(i)
list(test(it.product(["A", "B"], ["C"])))
</code></pre>
<p>Outputs the following list:</p>
<pre><code>[('A', 'C'), ('A', 'X'), ('B', 'C'), ('B', 'X')]
</code></pre>
<p>How would I adapt the function such that the input generator results are listed first and then the results of the translation?</p>
<p>So:</p>
<pre><code>[('A', 'C'), ('B', 'C'), ('A', 'X'), ('B', 'X')]
</code></pre>
|
<python><generator><python-itertools>
|
2024-02-03 02:37:42
| 1
| 1,121
|
Jossy
|
77,930,656
| 13,498,838
|
How to Schedule Async Coroutine Functions with APScheduler?
|
<p>I am developing a service in Python to send automated emails and am facing challenges with scheduling asynchronous coroutine functions using <strong>APScheduler</strong>. The service is built with <strong>FastAPI</strong> to handle user inputs and <strong>aiomysql</strong> for asynchronous database operations. My goal is to schedule email sending tasks that involve asynchronous database calls.</p>
<p>Here's the setup for my APScheduler within the application:</p>
<pre><code>from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from apscheduler.executors.pool import ThreadPoolExecutor
class APScheduler:
jobstores = {
'default': SQLAlchemyJobStore(url='mysql+pymysql://USER:PASSWORD@HOST/DATABASE')
}
executors = {
'default': ThreadPoolExecutor(5)
}
def __init__(self):
self._scheduler = AsyncIOScheduler(jobstores=self.jobstores, executors=self.executors)
self._logger = create_logger("Scheduler")
self.exchange_state = ExchangeStateMachine()
def schedule_email(self, schedule_time, bot, contact, subject, email_body, campaign_id, thread_id=None, reply_id=None, attachments=[]):
self._scheduler.add_job(
self.exchange_state.send_email, 'date', run_date=schedule_time,
kwargs={'bot': bot, 'contact': contact, 'subject': subject, 'email_body': email_body, 'campaign_id': campaign_id, 'thread_id': thread_id, 'reply_id': reply_id, 'attachments': attachments},
id=f"send-email-{bot.id}-{contact.id}-{campaign_id}",
replace_existing=True
)
self._logger.info(f"Email scheduled: To {contact.email} from {bot.email} at {schedule_time}.")
</code></pre>
<p>The <code>ExchangeStateMachine.send_email</code> method, intended for scheduling, includes several asynchronous database operations:</p>
<pre><code>class ExchangeStateMachine:
async def send_email(self, bot, contact, subject, email_body, campaign_id, thread_id=None, reply_id=None, attachments=[]):
# Asynchronous operations here
self._logger.info("Send email callback invoked.")
</code></pre>
<p>However, attempting to schedule this coroutine results in a traceback indicating that the coroutine was never awaited:</p>
<pre><code>RuntimeWarning: coroutine 'ExchangeStateMachine.send_email' was never awaited
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
</code></pre>
<p>I tried capturing the asyncio event loop at the start of the FastAPI application and passing it to the APScheduler or directly to the job function, using <code>asyncio.run_coroutine_threadsafe</code> to schedule the coroutine.</p>
<p>Despite these efforts, I continue to encounter the "coroutine was never awaited" warning, and I'm unsure how to proceed to correctly schedule and execute asynchronous coroutine functions with APScheduler.</p>
<p>How can I properly schedule and execute async coroutine functions with APScheduler?</p>
|
<python><python-asyncio><apscheduler>
|
2024-02-03 02:17:19
| 0
| 1,454
|
jda5
|
77,930,609
| 2,840,697
|
How to change column values as new column in python pandas
|
<p>I have a table like</p>
<pre><code> Name Class Rank
Karl Math 1
George English 1
Karl English 2
George Math 3
Rex Math 2
Rex English 3
</code></pre>
<p>to change to something like</p>
<pre><code> Name Math English
Karl 1 2
George 3 1
Rex 2 3
</code></pre>
<p>Basically, change the distinct values of class as new columns and remove rank column and insert values directly.</p>
<p>Is there an easy way or in-built python function that could help with this?</p>
<p>I've been searching for ways to do it, but I wasn't able to do it well.</p>
|
<python><pandas>
|
2024-02-03 01:43:52
| 1
| 942
|
user98235
|
77,930,212
| 22,674,380
|
Multitask learning to classify on dog images
|
<p>I'm trying to train a multitask classification model (mobilenet). Basically, a single model that given an image of a dog, it classifies both the color and the breed. Each dataset simply has directories for each class, and images inside those classes. <a href="https://filebin.net/5or5r8h2mswzbmhe" rel="nofollow noreferrer">Here</a> is a sample subset of the dataset. Eventually, I need to cover 5 color classes and 10 breeds.</p>
<p>When I run it, I get an error <code>logits and labels must be broadcastable</code>. How to fix it?</p>
<p><strong>UPDATE</strong>: initially, I had two different datasets for the two task. But apparently it mqkes it very difficult to solve. Let's assume I have a single dataset annotated for both tasks (color and breed).</p>
<p>This is my code:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Input
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Set the directories for the color and breed datasets
color_data_dir = './color_dataset'
breed_data_dir = './breed_dataset'
# Define the input shape for the MobileNet model
input_shape = (224, 224, 3)
# Create an input layer for clarity and potential customization
input_layer = Input(shape=input_shape)
# Load the MobileNetV2 model (excluding the top fully connected layers)
base_model = MobileNetV2(include_top=False, input_tensor=input_layer)
# Add task-specific fully connected layers for color classification
color_branch = GlobalAveragePooling2D()(base_model.output)
color_branch = Dense(12, activation='softmax', name='color_output')(color_branch)
# Add task-specific fully connected layers for breed classification
breed_branch = GlobalAveragePooling2D()(base_model.output)
breed_branch = Dense(6, activation='softmax', name='breed_output')(breed_branch)
# Create the multi-task model with both branches
model = Model(inputs=input_layer, outputs=[color_branch, breed_branch])
# Compile the model with appropriate loss functions for each task
model.compile(loss={'color_output': 'categorical_crossentropy', 'breed_output': 'categorical_crossentropy'},
optimizer='adam', metrics=['accuracy'])
# Set up data generator for the combined color and breed datasets
data_generator = ImageDataGenerator(rescale=1.0/255.0)
class MultiTaskDataGenerator(tf.keras.utils.Sequence):
def __init__(self, color_data_dir, breed_data_dir, batch_size, target_size):
self.color_data_dir = color_data_dir
self.breed_data_dir = breed_data_dir
self.batch_size = batch_size
self.target_size = target_size
self.color_data = data_generator.flow_from_directory(
directory=color_data_dir,
target_size=target_size[:2],
batch_size=batch_size,
class_mode='categorical'
)
self.breed_data = data_generator.flow_from_directory(
directory=breed_data_dir,
target_size=target_size[:2],
batch_size=batch_size,
class_mode='categorical'
)
def __len__(self):
return min(len(self.color_data), len(self.breed_data))
def __getitem__(self, index):
color_batch, color_labels = self.color_data[index]
breed_batch, breed_labels = self.breed_data[index]
# Concatenate the batches along the batch dimension
return tf.concat([color_batch, breed_batch], axis=0), [color_labels, breed_labels]
# Create an instance of the custom data generator
data_gen = MultiTaskDataGenerator(color_data_dir, breed_data_dir, batch_size=32, target_size=input_shape)
# Train the model on both tasks simultaneously using the custom data generator
model.fit(
data_gen,
epochs=1
)
</code></pre>
<p>And here is the full stack trace:</p>
<pre><code>To enable the following instructions: SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Found 11 images belonging to 3 classes.
Found 6 images belonging to 3 classes.
Traceback (most recent call last):
File "\test\train.py", line 73, in <module>
model.fit(
File "\Python\Python310\site-packages\keras\src\utils\traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "\Python\Python310\site-packages\tensorflow\python\eager\execute.py", line 53, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:
Detected at node 'categorical_crossentropy_1/softmax_cross_entropy_with_logits' defined at (most recent call last):
File "\test\train.py", line 73, in <module>
model.fit(
File "\Python\Python310\site-packages\keras\src\utils\traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "\Python\Python310\site-packages\keras\src\engine\training.py", line 1783, in fit
tmp_logs = self.train_function(iterator)
File "\Python\Python310\site-packages\keras\src\engine\training.py", line 1377, in train_function
return step_function(self, iterator)
File "\Python\Python310\site-packages\keras\src\engine\training.py", line 1360, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "\Python\Python310\site-packages\keras\src\engine\training.py", line 1349, in run_step
outputs = model.train_step(data)
File "\Python\Python310\site-packages\keras\src\engine\training.py", line 1127, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "\Python\Python310\site-packages\keras\src\engine\training.py", line 1185, in compute_loss
return self.compiled_loss(
File "\Python\Python310\site-packages\keras\src\engine\compile_utils.py", line 277, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "\Python\Python310\site-packages\keras\src\losses.py", line 143, in __call__
losses = call_fn(y_true, y_pred)
File "\Python\Python310\site-packages\keras\src\losses.py", line 270, in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "\Python\Python310\site-packages\keras\src\losses.py", line 2221, in categorical_crossentropy
return backend.categorical_crossentropy(
File "\Python\Python310\site-packages\keras\src\backend.py", line 5581, in categorical_crossentropy
return tf.nn.softmax_cross_entropy_with_logits(
Node: 'categorical_crossentropy_1/softmax_cross_entropy_with_logits'
logits and labels must be broadcastable: logits_size=[17,6] labels_size=[6,3]
[[{{node categorical_crossentropy_1/softmax_cross_entropy_with_logits}}]] [Op:__inference_train_function_21323]
</code></pre>
|
<python><tensorflow><keras><deep-learning><classification>
|
2024-02-02 22:43:46
| 1
| 5,687
|
angel_30
|
77,930,211
| 16,717,009
|
Removing duplicates in a Pandas dataframe for only a specified value
|
<p>Let's say I have a dataframe:</p>
<pre><code>df = pd.DataFrame({
'ID': [1, 2, 3, 1, 2, 3],
'Value': ['A', 'B', 'A', 'B', 'C', 'A']
})
</code></pre>
<p>If I wanted to only remove duplicates on ID when ID is a specified value (let's say 1), how would I do that? In other words, the resulting dataframe would look like:</p>
<pre><code>|ID|Value|
|--|-----|
|1 |A |
|2 |B |
|3 |A |
|2 |C |
|3 |A |
</code></pre>
<p>AI assistants are having a surprisingly difficult time with this one.</p>
|
<python><pandas><dataframe>
|
2024-02-02 22:43:26
| 1
| 343
|
MikeP
|
77,930,209
| 5,790,653
|
python how to add new key in dictionary with range function
|
<p>This is json file:</p>
<pre class="lang-json prettyprint-override"><code>[
{"name": "Saeed"},
{"name": "Joseph"},
{"name": "Mary"},
{"name": "Peter"}
]
</code></pre>
<p>I'm going to iterate over it and then add a new key called <code>id</code> for each one, and its value is unique.</p>
<p>This is my current for loop but does increment the first item only:</p>
<pre><code>import json
with open('db.json', 'r') as file:
data = json.load(file)
data.sort(key=lambda x: x['name'])
for i in range(1, len(data) + 1):
for d in data:
d['id'] = i
break
</code></pre>
<p>Current output:</p>
<pre class="lang-py prettyprint-override"><code>for i in data:
{'name': 'Joseph', 'id': 4}
{'name': 'Mary'}
{'name': 'Peter'}
{'name': 'Saeed'}
</code></pre>
<p>Expected output is:</p>
<pre><code>{'name': 'Joseph', 'id': 1}
{'name': 'Mary', 'id': 2}
{'name': 'Peter', 'id': 3}
{'name': 'Saeed', 'id': 4}
</code></pre>
|
<python>
|
2024-02-02 22:43:24
| 2
| 4,175
|
Saeed
|
77,930,157
| 594,355
|
python: Incorrect(?) time difference between `datetime` objects spanning a daylight saving time change
|
<hr />
<p>(edited to add)</p>
<p><strong>PROLOG:</strong> Today I became aware of the concept of "wall time". I will always and forever consider it harmful.</p>
<hr />
<p>I have two <code>datetimes</code>, one representing a certain time-of-day just before a time change, and the other representing the same time-of-day one calendar day after the the first <code>datetime</code>. I would expect the time difference between these two objects to not be exactly one day, but that's what I see (with python3.8, which is all I have to work with).</p>
<p>Taking the difference of the <em>timestamps</em> associated with the <code>datetimes</code> returns exactly what I would expect to see. Taking the difference of <code>datetime</code> objects when they span a time change looks flat-out wrong to me.</p>
<p>Is this expected behavior?</p>
<pre><code>from datetime import datetime, timedelta
from dateutil.tz import gettz # pip install dateutil
def iso(dt):
return dt.strftime('%FT%T%z')
# Daylight saving time begins at 2 a.m. local time on Sunday, March 10, 2024
us_central = gettz('US/Central')
before = datetime(2024, 3, 9, 15, 22, 1, tzinfo=us_central)
after = datetime(2024, 3, 10, 15, 22, 1, tzinfo=us_central)
print()
print(f'before time change: {iso(before)}')
print(f' after time change: {iso(after)}')
naive = after - before
by_timestamps = timedelta(seconds = after.timestamp() - before.timestamp())
difference_difference = naive.total_seconds() - by_timestamps.total_seconds()
print()
print('Differences:')
print(f' naive: {repr(naive)}')
print(f'by timestamps: {repr(by_timestamps)}')
print(f' error: {difference_difference}s')
</code></pre>
<p>Output (python3.8)</p>
<pre><code>before time change: 2024-03-09T15:22:01-0600
after time change: 2024-03-10T15:22:01-0500
Differences:
naive: datetime.timedelta(days=1)
by timestamps: datetime.timedelta(seconds=82800)
error: 3600.0s
</code></pre>
<hr />
<p>(Edited to add)</p>
<p>This is so counterintuitive to me. "Wall time" is very strange.</p>
<pre><code>from datetime import datetime, timedelta
from dateutil.tz import gettz # pip install dateutil
# Daylight saving time begins at 2 a.m. local time on Sunday, March 10, 2024
us_central = gettz('US/Central')
before = datetime(2024, 3, 9, 15, 22, 1, tzinfo=us_central)
after = datetime(2024, 3, 10, 15, 22, 1, tzinfo=us_central)
print(repr((before + timedelta(days=1)) - after))
print(repr((before + timedelta(seconds=86400)) - after))
# what I think the above should do...
print(repr(datetime.fromtimestamp(before.timestamp() + 86400, tz=before.tzinfo) - after))
</code></pre>
<p>Output (python3.8)</p>
<pre><code>datetime.timedelta(0)
datetime.timedelta(0)
datetime.timedelta(seconds=3600)
</code></pre>
|
<python><datetime><dst>
|
2024-02-02 22:29:55
| 2
| 739
|
smcdow
|
77,930,099
| 2,142,728
|
Python generics for Scala programmers
|
<p>I come from the Scala world where the type system allows for very powerful abstractions.</p>
<p>What I'm trying right now is very simple, yet the Python mechanisms are confusing me a lot: I want to create a type alias that swaps the type params of a <code>dict</code>.</p>
<p>In Scala I can do:</p>
<pre class="lang-scala prettyprint-override"><code>type Pam[A,B] = Map[B,A]
</code></pre>
<p>But I've tried the same in Python and can't make it work:</p>
<pre class="lang-py prettyprint-override"><code>B= TypeVar('B')
A= TypeVar('A')
Tcid = dict[B,A]
def test()->Tcid[int,str]:
return {
"asd":1 ## MyPy complains
}
</code></pre>
<p>I've swapped the definition order of typevars B and A, in the dict, everywhere, but it just doesn't work.</p>
<p>How can I tell MyPy/Python when creating type aliases, in which order to apply them?</p>
|
<python><mypy><python-typing>
|
2024-02-02 22:12:33
| 2
| 3,774
|
caeus
|
77,930,086
| 736,479
|
How to alter python's json encoder to convert NaN to None
|
<p>Say I have this (all this is Python 3.11 in case that matters):</p>
<pre><code>not_a_number = float("NaN") # this actually comes from somewhere else
json.encode([not_a_number])
</code></pre>
<p>The output is an (invalid) JSON literal <code>NaN</code>. I've been trying to create an JSONEncoder subclass that would use <code>math.isnan()</code> to determine if the value is a <code>NaN</code> and output <code>None</code> instead.</p>
<p>I first tried subclassing JSONEncoder and doing it in <code>default()</code>, which I found later isn't called for things like float. I then found a recommendation to override the <code>encode()</code> method instead, so I tried this:</p>
<pre><code>class NanEncoder(json.JSONEncoder):
def encode(self, obj):
if isinstance(obj, float):
if math.isnan(obj):
return None
return super(NanEncoder, self).encode(obj)
</code></pre>
<p>This works:</p>
<pre><code>>>> json.dumps(not_a_number, cls=NanEncoder)
>>> json_string = json.dumps(not_a_number, cls=NanEncoder)
>>> print(json_string)
None
</code></pre>
<p>Cool, I think I've got it. BUT, this does not work:</p>
<pre><code>not_a_number_list = [not_a_number]
print(not_a_number_list)
[nan]
json_string = json.dumps(not_a_number_list, cls=NanEncoder)
print(json_string)
[NaN]
</code></pre>
<p>So, as I see in the python docs, maybe I need to call the encode method slightly differently, so I try that:</p>
<pre><code>json_string = NanEncoder().encode(not_a_number_list)
print(json_string)
[NaN]
</code></pre>
<p>Alas, no difference.</p>
<p>So, here's my question: is it possible to create a JSONEncoder subclass that will find instances of the float that is <code>NaN</code> in Python and output <code>None</code> instead? Or am I relegated to do a search/replace on the string <code>NaN</code> with <code>null</code> in the output JSON (which, theoretically anyway, could alter data I don't want to)? Fixing the input dictionary is not a great option because the dict that the values are in is quite large and it's construction is not under my control (so I can't stop <code>NaN</code> from getting in there in the first place).</p>
|
<python><jsonencoder>
|
2024-02-02 22:07:49
| 1
| 619
|
machomeautoguy
|
77,930,045
| 1,574,054
|
Randomly run "a few" tests with pytest
|
<p>I am working on a numeric calculation application and have two sets of "unit" tests. "Fast" tests which run in < 10ms per tests and "long" tests which take several minutes per test. Let's assume that this is necessary here, for example, when testing an example for the integration of a differential equation. You probably would not see something like this in tests for regular applications or webdev or other things. In numerics, you sometimes have to create relatively large grids (arrays) on which you perform calculations and this takes time.</p>
<p>I am using automation using gitlab (but this should be generalizablet to some extent). The problem is that I cannot run all long tests everytime I push some code since this would take tens of hours. The idea is therefore to always run all "fast" tests and run <em>some</em> long tests.</p>
<p>Is there a way to do this using pytest? For example, let pytest run tests for 2 hours and then mark all remaining long tests as "expected failure". Or alternatively, let pytest run a fixed number of tests.</p>
<p>The tests which do get run would have to be chosen in a completely random order so that ideally, if this procedure is repeated several times, all tests will have (almost certainly) been executed at least once.</p>
|
<python><testing><pytest>
|
2024-02-02 21:55:59
| 3
| 4,589
|
HerpDerpington
|
77,930,020
| 5,790,653
|
python load json file and print values in different line
|
<p>This is <code>file.json</code>:</p>
<pre><code>[
{"date": "1402/11/03", "time": "18-21", "count": 5},
{"date": "1402/11/04", "time": "12-15", "count": 2},
{"date": "1402/11/05", "time": "10-13", "count": 4},
{"date": "1402/11/06", "time": "09-12", "count": 7},
{"date": "1402/11/06", "time": "15-18", "count": 3},
{"date": "1402/11/07", "time": "17-20", "count": 2},
{"date": "1402/11/07", "time": "14-17", "count": 1},
{"date": "1402/11/09", "time": "13-16", "count": 1}
]
</code></pre>
<p>This is my python code:</p>
<pre class="lang-py prettyprint-override"><code>with open('file.json', 'r') as file:
data = json.load(file)
new_list = [f'Date --> {x["date"]}, Remaining counts --> {x["count"]}' for x in data]
</code></pre>
<p>I'm not looking for this:</p>
<pre><code>print('This is our data:')
for i in new_list:
i
</code></pre>
<p>I'm looking for a way to store the <code>i</code>s into a variable, so that this line would have the same output as above:</p>
<pre class="lang-py prettyprint-override"><code>print(f'This is our data:\n{new_var}')
</code></pre>
|
<python>
|
2024-02-02 21:50:46
| 1
| 4,175
|
Saeed
|
77,929,999
| 11,277,108
|
How to generate name combinations with initials
|
<p>Given a string of a person's name:</p>
<pre><code>"Richard David"
</code></pre>
<p>I'd like to create a list of all combinations of this name where up to all but one of the individual names are replaced with initials:</p>
<pre><code>[
"Richard David",
"R David",
"Richard D",
]
</code></pre>
<p>An example with three individual names:</p>
<pre><code>"Richard Anthony David"
</code></pre>
<p>Would output:</p>
<pre><code>[
# replace no initials
"Richard Anthony David",
# replace one name with initial
"Richard Anthony D",
"Richard A David",
"R Anthony David",
# replace two names with initials
"R A David",
"R Anthony D",
"Richard A D",
]
</code></pre>
<p><code>itertools</code> is usually my go to for combinations but I can't quite work out how to build in the replacement of a name with an initial.</p>
|
<python>
|
2024-02-02 21:46:05
| 3
| 1,121
|
Jossy
|
77,929,966
| 20,006,915
|
Performance of Sorting Output of Database (python/sqlite)
|
<p>I'm writing a function to output certain fields of data within a database, based on a provided sort key. The key signals to the function if the output should be sorted by either title, year or budget (these are the only 3 attributes which need to be displayed).</p>
<p>Example database:</p>
<pre><code>ID | name | year | budget | other fields
---------------------------------------------
id1 | foo | 2020 | 50000000 | xxx
id2 | bar | 1997 | 200000 | yyy
id3 | baz | 2016 | 3333333 | zzz
...
</code></pre>
<p>Currently, I'm using the following code to do this:</p>
<pre><code>def db_output(sort_key: str) -> None:
# ...
CURSOR.execute("SELECT * FROM movies")
output = CURSOR.fetchall()
CONN.close()
clean_table = [x for x in output if x[3] is not None]
sorted_list = sorted(
clean_table, key=lambda x: x[{"title": 1, "year": 2, "budget": 3}[sort_key]]
)
# Output clause
</code></pre>
<p>My question is would it be more efficient to use the ORDER BY command to retrieve the sorted data rather than sorting it in python as shown above? I do not need to actually organize the database, just provide a sorted output to the user. Forgive my lack of knowledge here, I'm confused after reading about "stable" vs "unstable" sorting and the like.</p>
|
<python><database><sqlite>
|
2024-02-02 21:37:38
| 0
| 536
|
Jackxx
|
77,929,959
| 9,560,908
|
com_error (-2147352567, 'Exception occurred.', (0, None, None, None, 0, -2147352561), None)
|
<p>I'm writing a python script that interfaces with a VBA project and I've encountered this error:</p>
<pre><code>C:\Python37\lib\site-packages\win32com\client\dynamic.py in Add(self, Type, Operator, Formula1, Formula2, String, TextOperator, DateOperator, ScopeType)
com_error: (-2147352567, 'Exception occurred.', (0, None, None, None, 0, -2147352561), None)
</code></pre>
<p>I got the error when running this code:</p>
<pre class="lang-py prettyprint-override"><code>import xlwings as xw # (very old) version 0.7.2
wb = xw.Workbook.active()
sh = xw.Sheet.active()
xw.Range("$1:$500").xl_range.FormatConditions.Add(2, Formula1='CELL("protect", A1)=1')
</code></pre>
<p>I've looked at documentation and examples, searched for the error codes, and confirmed that the equivalent VBA code runs.</p>
|
<python><excel><vba>
|
2024-02-02 21:35:29
| 1
| 648
|
Terry Davis
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.