Column schema (observed min/max values from the dataset viewer):
QuestionId: int64, range 74.8M to 79.8M
UserId: int64, range 56 to 29.4M
QuestionTitle: string, length 15 to 150
QuestionBody: string, length 40 to 40.3k
Tags: string, length 8 to 101
CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, range 0 to 44
UserExpertiseLevel: int64, range 301 to 888k
UserDisplayName: string, length 3 to 30
78,958,736
2,986,153
ComputeError: TypeError: np.mean() got an unexpected keyword argument 'axis'
<p>When I try to use numpy.mean within a polars dataframe I get an error message:</p> <pre><code>import numpy as np #v2.1.0 import polars as pl #v1.6.0 from polars import col TRIALS = 1 SIMS = 10 np.random.seed(42) df = pl.DataFrame({ 'a': np.random.binomial(TRIALS, .45, SIMS), 'b': np.random.binomial(TRIALS, .5, SIMS), 'c': np.random.binomial(TRIALS, .55, SIMS) }) df.head() df.with_columns( z_0 = np.mean(col(&quot;a&quot;)) ).head() </code></pre> <p><code>TypeError: mean() got an unexpected keyword argument 'axis'</code></p> <p>Adding an explicit axis argument will not change the error.</p> <pre><code>df.with_columns( z_0 = np.mean(a=col(&quot;a&quot;), axis=0) ).head() </code></pre> <p><code>TypeError: mean() got an unexpected keyword argument 'axis'</code></p> <p>I know that I could complete this computation with:</p> <pre><code>df.with_columns( z_0 = col(&quot;a&quot;).mean() ).head() </code></pre> <p>But I am trying to understand how non-polars functions work within polars.</p>
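The mechanism behind this error can be shown without polars (a sketch, assuming numpy's standard delegation for non-ndarray inputs): `np.mean` looks for a `.mean` attribute on its argument and calls it as `a.mean(axis=None, dtype=None, out=None)`. A polars `Expr` exposes a `.mean()` method that accepts none of those keywords, hence the `TypeError`. The toy class below reproduces the failure:

```python
import numpy as np

class ExprLike:
    """Stand-in for a polars Expr: it has a .mean() method,
    but one that does not accept numpy's axis/dtype/out keywords."""
    def __init__(self, values):
        self.values = values

    def mean(self):
        return sum(self.values) / len(self.values)

e = ExprLike([1, 2, 3])
try:
    np.mean(e)  # numpy delegates to e.mean(axis=None, dtype=None, out=None)
    message = ""
except TypeError as exc:
    message = str(exc)
```

Inside polars, `col("a").mean()` or mapping over a materialized `Series` (`map_batches`) are the usual routes around this.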
<python><python-polars>
2024-09-06 20:56:56
0
3,836
Joe
78,958,692
4,784,683
SciPy binom.logsf - just a convenience function?
<p>I thought that <code>scipy.stats.binom.log*</code> family of functions were able to return a greater range of values because somehow, they kept all computations in log-space.</p> <p>But this example shows that both <code>binom.logsf</code> and <code>binom.sf</code> fail due to precision for the same inputs so the <code>log*</code> functions are just convenience functions i.e. equivalent to the code <code>log(binom.sf())</code></p> <p><strong>QUESTION</strong><br /> Is this the expected behavior or is there a configuration step I missed?</p> <pre><code>for n in range(n_min_max, n_min_max + 20): p_trial_log = binom.logsf(n, N, p_let) p_trial = binom.sf(n, N, p_let) print(f&quot;args = {n}, {N}, {p_let}&quot;) print(f&quot;p_trial_log={p_trial_log:.1e}&quot;) print(f&quot;p_trial= {p_trial:.4e}&quot;) print(f&quot;p_trial_exp={exp(p_trial_log):.4e}&quot;) print(&quot;&quot;) </code></pre> <p>Output:</p> <pre><code>args = 42, 100000, 4e-10 p_trial_log=-5.6e+02 p_trial= 1.2691e-242 p_trial_exp=1.2691e-242 args = 43, 100000, 4e-10 p_trial_log=-5.7e+02 p_trial= 1.1532e-248 p_trial_exp=1.1532e-248 args = 44, 100000, 4e-10 p_trial_log=-5.8e+02 p_trial= 1.0246e-254 p_trial_exp=1.0246e-254 args = 45, 100000, 4e-10 p_trial_log=-6.0e+02 p_trial= 8.9059e-261 p_trial_exp=8.9059e-261 args = 46, 100000, 4e-10 p_trial_log=-6.1e+02 p_trial= 7.5760e-267 p_trial_exp=7.5760e-267 args = 47, 100000, 4e-10 p_trial_log=-6.3e+02 p_trial= 6.3104e-273 p_trial_exp=6.3104e-273 args = 48, 100000, 4e-10 p_trial_log=-6.4e+02 p_trial= 5.1488e-279 p_trial_exp=5.1488e-279 args = 49, 100000, 4e-10 p_trial_log=-6.5e+02 p_trial= 4.1171e-285 p_trial_exp=4.1171e-285 args = 50, 100000, 4e-10 p_trial_log=-6.7e+02 p_trial= 3.2274e-291 p_trial_exp=3.2274e-291 args = 51, 100000, 4e-10 p_trial_log=-6.8e+02 p_trial= 2.4814e-297 p_trial_exp=2.4814e-297 args = 52, 100000, 4e-10 p_trial_log=-7.0e+02 p_trial= 1.8718e-303 p_trial_exp=1.8718e-303 args = 53, 100000, 4e-10 p_trial_log=-7.1e+02 p_trial= 1.3858e-309 
p_trial_exp=1.3858e-309 args = 54, 100000, 4e-10 p_trial_log=-7.3e+02 p_trial= 1.0073e-315 p_trial_exp=1.0073e-315 args = 55, 100000, 4e-10 p_trial_log=-7.4e+02 p_trial= 7.2134e-322 p_trial_exp=7.2134e-322 args = 56, 100000, 4e-10 p_trial_log=-inf p_trial= 0.0000e+00 p_trial_exp=0.0000e+00 args = 57, 100000, 4e-10 p_trial_log=-inf p_trial= 0.0000e+00 p_trial_exp=0.0000e+00 args = 58, 100000, 4e-10 p_trial_log=-inf p_trial= 0.0000e+00 p_trial_exp=0.0000e+00 args = 59, 100000, 4e-10 p_trial_log=-inf p_trial= 0.0000e+00 p_trial_exp=0.0000e+00 args = 60, 100000, 4e-10 p_trial_log=-inf p_trial= 0.0000e+00 p_trial_exp=0.0000e+00 args = 61, 100000, 4e-10 p_trial_log=-inf p_trial= 0.0000e+00 p_trial_exp=0.0000e+00 </code></pre>
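The printout itself suggests what is going on: `logsf` and `sf` underflow at exactly the same argument, which is consistent with `logsf` falling back to something equivalent to `log(sf(...))` here rather than accumulating in log-space. If a true log-space tail is needed, it can be built from stdlib `math.lgamma` with a log-sum-exp over the upper tail (a sketch, assuming the tail decays fast when `n*p << x`, as in the posted inputs; `scipy.special.logsumexp` over `binom.logpmf` terms is the library equivalent):

```python
from math import exp, lgamma, log, log1p

def log_binom_pmf(k, n, p):
    # log C(n, k) + k*log(p) + (n-k)*log(1-p), entirely in log-space
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log1p(-p))

def log_sf(x, n, p, terms=200):
    # log P(X > x) via log-sum-exp over the upper tail; for n*p << x the
    # pmf decays so quickly that a few hundred terms are plenty
    logs = [log_binom_pmf(k, n, p) for k in range(x + 1, min(n, x + terms) + 1)]
    m = max(logs)
    return m + log(sum(exp(v - m) for v in logs))
```

With the question's arguments this stays finite where `binom.sf` returns `0.0`: `log_sf(56, 100000, 4e-10)` is roughly -745, continuing the sequence in the posted output.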
<python><scipy><scipy.stats>
2024-09-06 20:43:07
2
5,180
Bob
78,958,689
2,893,712
Pandas Add Unique Values from Groupby to Column
<p>I have a dataframe that lists a user's codes</p> <pre><code>UserID Code 123 A 123 B 123 A 456 C 456 D </code></pre> <p>How do I add a column that shows all of a user's unique codes?</p> <pre><code>UserID Code UniqueCodes 123 A [A, B] 123 B [A, B] 123 A [A, B] 456 C [C, D] 456 D [C, D] </code></pre> <p>I tried doing <code>df.groupby(by='UserID')['Code'].agg(['unique'])</code> but that did not work.</p> <p>I also tried to do <code>df.groupby(by='UserID')['Code'].transform('unique')</code> but I got an error:</p> <blockquote> <p>'unique' is not a valid function name for transform(name)</p> </blockquote>
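The `agg` result above is actually correct, it just lives at one row per user; the error from `transform('unique')` arises because transform cannot broadcast a list-valued result. A common pattern (a sketch) is to aggregate once and map the result back onto the rows:

```python
import pandas as pd

df = pd.DataFrame({"UserID": [123, 123, 123, 456, 456],
                   "Code": ["A", "B", "A", "C", "D"]})

# aggregate once per user, then broadcast back onto the rows via map
unique_codes = df.groupby("UserID")["Code"].unique()
df["UniqueCodes"] = df["UserID"].map(unique_codes)
```

Each cell holds the per-user array of unique codes, repeated on every row of that user.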
<python><pandas>
2024-09-06 20:42:39
3
8,806
Bijan
78,958,635
1,209,416
Can't import module from within it in a poetry package
<p>I have a poetry package that looks so:</p> <pre><code>recon/ - pyproject.toml - recon/ - main.py - config.py </code></pre> <p>From my <code>main.py</code>, I'm not able to do <code>from recon.config import *</code> - it says <code>&quot;ModuleNotFoundError: No module named 'recon.config'; 'recon' is not a package&quot;</code>.</p> <p>I tried adding:</p> <pre><code>packages = [ { include = &quot;recon&quot; } ] </code></pre> <p>And running <code>poetry -C recon/ install</code> - no change.</p> <p>For context, <code>recon</code> is part of my other project, so in reality the structure looks like this:</p> <pre><code>mainproject/ - recon/ - recon/ - main.py </code></pre> <p>And I run <code>main.py</code> like this: <code>poetry -C recon run python main.py</code>. I've also tried to activate the <code>virtualenv</code> and run the file, still no luck.</p>
<python><python-poetry>
2024-09-06 20:18:12
0
2,307
SiddharthaRT
78,958,556
4,272,651
Alembic migration tests failing for already existing database
<p>I had an already existing database where I had to run migrations. In attempts to do so, I used alembic and was able to make the migration.</p> <h2>Step 0: Model already exists.</h2> <pre><code># models.py USERS = Table('user', admin_meta, Column('column1', ...), Column('column2', ...), Column('column3', ...), ) </code></pre> <h2>STEP 1: Add column, make migration, upgrade</h2> <pre><code># models.py USERS = Table('user', admin_meta, Column('column1', ...), Column('column2', ...), Column('column3', ...), Column('column4', ...), # add new column ) </code></pre> <p>create custom migrations:</p> <pre><code>alembic -c database/alembic.ini revision -m &quot;Add backup column&quot; </code></pre> <p>The migration file:</p> <pre><code>revision: str = '2aa4282617d4' ... def upgrade(): op.add_column('users', sa.Column('column4', sa.DateTime)) def downgrade(): op.drop_column('users', 'column4') </code></pre> <p>Finally, run upgrade.</p> <pre><code>alembic -c database/alembic.ini upgrade </code></pre> <h2>Step 2: Testing</h2> <p>This is where the problem is. I am testing using an inmemory database and am still loading the models from models.py. So when the database is created in memory, it has the updated models. i.e. 
model with <code>column4</code> added.</p> <p>My idea for testing was simple, start at <code>current/head</code>(because that is what the models have) and go back all the way to <code>step 0</code>, come back all the way to current.</p> <pre><code>def test_upgrade(dbclient): # migrate down to the required one first with dbclient.engine.connect() as connection: config.attributes['connection'] = connection alembic.command.downgrade(config, &quot;base&quot;) alembic.command.upgrade(config, &quot;head&quot;) </code></pre> <p>This fails with the error:</p> <pre><code>FAILED tests/unit/test_migrations.py::test_upgrade - sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) duplicate column name: column4 </code></pre> <p>So in some ways the downgrade didn't work and it just tried to upgrade from the current state.</p> <p>What am I doing wrong here? Any leads regarding this would be hugely appreciated. Thanks in advance.</p>
<python><sqlalchemy><alembic>
2024-09-06 19:44:43
1
4,746
Prasanna
78,958,489
10,746,224
Sort a list of objects based on the index of the object's property from another list
<p>I have a tuple, <code>RELAY_PINS</code> that holds GPIO pin numbers in the order that the relays are installed. <code>RELAY_PINS</code> is immutable, and its ordering does not change, while the order that the devices are defined in changes frequently.</p> <p>The <strong>MRE</strong>:</p> <pre><code>from random import shuffle, randint class Device: def __init__(self, pin_number): self.pin_number = pin_number def __str__(self): return str(self.pin_number) RELAY_PINS = ( 14, 15, 18, 23, 24, 25, 1, 12, 16, 20, 21, 26, 19, 13, 6, 5 ) def MRE(): devices = [ Device(pin) for pin in RELAY_PINS ] # the ordering for the list of devices should be considered random for the sake of this question shuffle(devices) return devices </code></pre> <p>My solution &quot;works&quot;, but frankly, it's embarrassing:</p> <pre><code>def main(): devices = MRE() pin_map = { pin_number : index for index, pin_number in enumerate(RELAY_PINS) } ordered_devices = [ None for _ in range(len(RELAY_PINS)) ] for device in devices: index = pin_map[device.pin_number] ordered_devices[index] = device return [ dev for dev in ordered_devices if dev is not None ] </code></pre> <p>I <strong>know</strong> there is a better solution, but I can't quite wrap my head around it.</p> <p>What is the <em>pythonic</em> solution to this problem?</p>
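The index-map idea in the question is already the right one; it collapses to a single `sorted` call with a key function, which also removes the `None` placeholder bookkeeping (a sketch):

```python
RELAY_PINS = (14, 15, 18, 23, 24, 25, 1, 12, 16, 20, 21, 26, 19, 13, 6, 5)

class Device:
    def __init__(self, pin_number):
        self.pin_number = pin_number

# one dict lookup per device, one sort: O(n log n)
order = {pin: i for i, pin in enumerate(RELAY_PINS)}
devices = [Device(23), Device(14), Device(5), Device(18)]
ordered = sorted(devices, key=lambda d: order[d.pin_number])
```

`sorted` is stable and raises `KeyError` on a pin missing from `RELAY_PINS`, which is usually preferable to silently dropping it.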
<python><python-3.x><list>
2024-09-06 19:20:22
1
16,425
Lord Elrond
78,958,365
23,260,297
.py file is not a valid Win32 application error
<p>I am trying to run a .py script with the <a href="https://stackoverflow.com/questions/37692780/error-28000-login-failed-for-user-domain-user-with-pyodbc/37702329#37702329">Runas</a> command but I get the following error:</p> <pre><code>193: C:\temp\module1.py is not a valid Win32 application. </code></pre> <p>My command is:</p> <pre><code>runas /user:domain\username &quot;C:\temp\module1.py&quot; </code></pre> <p>I am unsure why this error is being thrown and every SO post I have read has not helped.</p> <p>What exactly does this message mean and how could I resolve it?</p>
<python><windows><runas>
2024-09-06 18:32:45
2
2,185
iBeMeltin
78,958,124
2,986,153
What qualities of a UDF create a need for map_batches() vs map_elements() in polars?
<p>I am trying to grok when I should expect to need <code>map_batches()</code> and <code>map_elements()</code> in polars. In my example below I am able to obtain the same results (<code>z1:z_3</code>) regardless of which method I use.</p> <p><strong>What qualities of the UDF, <code>my_sum()</code>, would need to change in order for differences to emerge between my three methods of computing z?</strong> I do understand that <code>map_batches()</code> computes in parallel, but I am unclear about how I can know what qualities of my UDF will require vs. forbid its use.</p> <pre><code>pip install numpy==2.1.0 </code></pre> <pre><code>pip install polars==1.6.0 </code></pre> <pre><code>import numpy as np import polars as pl from polars import col, lit TRIALS = 1 SIMS = 10 np.random.seed(42) df = pl.DataFrame({ 'a': np.random.binomial(TRIALS, .45, SIMS), 'b': np.random.binomial(TRIALS, .5, SIMS), 'c': np.random.binomial(TRIALS, .55, SIMS) }) df.head() </code></pre> <p><a href="https://i.sstatic.net/19dOfhN3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19dOfhN3.png" alt="enter image description here" /></a></p> <pre><code>def my_sum(x, y): z = x + y return(z) df2 = df.with_columns( z_1 = my_sum(col(&quot;a&quot;), col(&quot;b&quot;)), z_2 = pl.struct(&quot;a&quot;, &quot;b&quot;) .map_elements(lambda x: my_sum(x[&quot;a&quot;], x[&quot;b&quot;]), return_dtype=pl.Int64), z_3 = pl.struct(&quot;a&quot;, &quot;b&quot;) .map_batches(lambda x: my_sum(x.struct[&quot;a&quot;], x.struct[&quot;b&quot;])) # note use of &quot;x.struct[]&quot; with map_batches ) df2.head() </code></pre> <p><a href="https://i.sstatic.net/mdnRlGjD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mdnRlGjD.png" alt="enter image description here" /></a></p>
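A way to see the boundary without polars internals (a sketch using plain numpy): `my_sum` only uses `+`, which is defined for scalars and for whole columns alike, so all three methods agree. Differences emerge when the UDF demands scalars, e.g. Python-level branching like `if x > 0:`, which is ambiguous on a whole column and forces `map_elements`; or when it demands the whole column at once, e.g. a cumulative sum or FFT, which forces `map_batches`:

```python
import numpy as np

def my_sum(x, y):
    return x + y  # valid for scalars and for whole arrays/Series alike

a = np.array([1, 0, 1])
b = np.array([1, 1, 0])

batch = my_sum(a, b)                                          # one call, whole columns
elementwise = np.array([my_sum(x, y) for x, y in zip(a, b)])  # one call per row

def needs_scalars(x, y):
    return x + y if x > 0 else y  # `if x > 0` is ambiguous for a whole array

try:
    needs_scalars(a, b)
    scalar_only = False
except ValueError:  # "The truth value of an array ... is ambiguous"
    scalar_only = True
```

So the question to ask of a UDF is whether every operation in it is defined on a whole Series; if yes, `map_batches` (or no map at all) works, otherwise `map_elements` is required.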
<python><python-polars>
2024-09-06 17:07:52
1
3,836
Joe
78,958,080
9,921,853
How to recover from pyodbc cursor.fetchall() deadlock
<p>Querying a SQL Server with pyodbc, sometimes <code>cursor.execute()</code> with a SELECT query completes successfully, but <code>cursor.fetchall()</code> gets deadlocked and throws an exception.</p> <p>Code example:</p> <pre><code>self.cursor.execute(sql, *values) self.rowcount = self.cursor.rowcount results = self.cursor.fetchall() </code></pre> <p>Exception:</p> <blockquote> <p>results = self.cursor.fetchall() pyodbc.Error: ('40001', '[40001] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Transaction (Process ID 118) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. (1205) (SQLFetch)')</p> </blockquote> <p>What would be a proper way to recover there - re-run <code>cursor.fetchall()</code> or re-run <code>cursor.execute()</code> followed by <code>cursor.fetchall()</code>?</p>
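Since the deadlock victim's transaction is rolled back by the server, re-running only `cursor.fetchall()` cannot work: the whole unit (execute plus fetch) must be retried. A generic retry wrapper (a sketch; in real code `is_transient` would inspect the pyodbc error for SQLSTATE `40001` rather than accepting everything):

```python
import time

def run_with_retry(unit_of_work, retries=3, delay=0.0,
                   is_transient=lambda exc: True):
    """Re-run the whole execute+fetchall unit on transient errors."""
    for attempt in range(retries):
        try:
            return unit_of_work()
        except Exception as exc:
            if attempt == retries - 1 or not is_transient(exc):
                raise
            time.sleep(delay)  # optionally back off before retrying

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("deadlock victim")  # stands in for pyodbc.Error '40001'
    return ["row"]

result = run_with_retry(flaky)
```

The `unit_of_work` callable would wrap `cursor.execute(sql, *values)` followed by `cursor.fetchall()` so both are repeated together.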
<python><sql-server><pyodbc>
2024-09-06 16:57:33
0
1,441
Sergey Nudnov
78,958,079
13,467,891
Docker build fails while trying to install atari libraries - gymnasium[atari] conflicts with ale-py
<p>I am trying to build dockerfile with the below code:</p> <pre><code>FROM python:3.10 WORKDIR /app COPY ./requirements.txt . RUN pip install --no-cache-dir -r requirements.txt </code></pre> <p>requirements.txt is as below:</p> <pre><code>DI-engine==0.5.1 gymnasium[atari] </code></pre> <p>and the error message is like:</p> <pre><code>ERROR: Could not find a version that satisfies the requirement ale-py~=0.8.1; extra == &quot;atari&quot; (from shimmy[atari]) (from versions: none) 53.54 ERROR: No matching distribution found for ale-py~=0.8.1; extra == &quot;atari&quot; </code></pre> <p>I guess this is the similar issue: <a href="https://github.com/Farama-Foundation/Arcade-Learning-Environment/issues/504" rel="nofollow noreferrer">https://github.com/Farama-Foundation/Arcade-Learning-Environment/issues/504</a></p> <p>If I try to install ale-py first, &quot;No matching distribution found for ale-py&quot; error occurs again.</p> <p>Also I tried with below settings:</p> <pre><code>python version w/ 3.8 ~ 3.12 tried to install ale-py 0.7.6~0.8.1 RUN pip install https://files.pythonhosted.org/packages/xx/yy/ale-py-0.8.1.tar.gz </code></pre> <p>any of those did not help..</p> <p>I guess it is impossible to install ale-py in docker..? Could someone plz help?</p>
<python><docker><pip><reinforcement-learning><gymnasium>
2024-09-06 16:55:31
0
715
Geonsu Kim
78,958,033
6,141,238
In Python, can I round all printed floats with a single command?
<p>I am working with a Python script that prints many floating point numbers. Within each <code>print</code> command, I use <code>round(x, 3)</code> to round the printed number <code>x</code> to 3 decimal places before printing. Here is a simplified example:</p> <pre><code>x1 = 1.61803398875 print('x1 = ' + str(round(x1, 3))) # x1 = 1.618 x2 = x1**2 print('x2 = ' + str(round(x2, 3))) # x2 = 2.618 x3 = 1/(x2-1) print('x3 = ' + str(round(x3, 3))) # x3 = 0.618 </code></pre> <p>The actual code contains hundreds of these print commands. Is there a more efficient way to perform the rounding part of this script? For example, is there a single overarching command that will round all printed floats to a given precision, as there is for <code>numpy</code> and <code>pandas</code> variables?</p> <p>I am looking for a solution that does not involve moving or combining the <code>print</code> commands or converting the printed variables to <code>numpy</code> and <code>pandas</code> variables before printing.</p>
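One overarching option (a sketch) is a thin wrapper that rounds float arguments before delegating to the built-in `print`. It only helps when floats are passed as separate arguments rather than pre-baked into strings, so the concatenation style above would become `print('x1 =', x1)`:

```python
import builtins

def print_rounded(*args, ndigits=3, **kwargs):
    # round floats, pass everything else through untouched
    args = [round(a, ndigits) if isinstance(a, float) else a for a in args]
    builtins.print(*args, **kwargs)

# optional: shadow the built-in module-wide so existing calls pick it up
# print = print_rounded
```

Changing `ndigits` in one place then changes every printed float, which is the single-knob behavior the question asks for.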
<python><floating-point><output><rounding>
2024-09-06 16:40:03
1
427
SapereAude
78,958,005
257,948
APScheduler in a Python flask app to execute recurring tasks isn't working with Gunicorn
<p>I'm a newbie with Python and flask in general and I'm trying to create (I think) a kind of basic project: I have a flask web app that makes requests every 12 hours to a certain API endpoint and stores the result in a PostgreSQL database.</p> <p>It works very well in development mode, but as soon as it's deployed to a Digital Ocean droplet with Ubuntu, I'm getting the following error (and there's no other error):</p> <pre><code>[2024-09-06 12:10:23 +0200] [1] [INFO] Starting gunicorn 23.0.0 [2024-09-06 12:10:23 +0200] [1] [DEBUG] Arbiter booted [2024-09-06 12:10:23 +0200] [1] [INFO] Listening at: http://0.0.0.0:5000 (1) [2024-09-06 12:10:23 +0200] [1] [INFO] Using worker: gevent [2024-09-06 12:10:23 +0200] [8] [INFO] Booting worker with pid: 8 [2024-09-06 12:10:23 +0200] [1] [DEBUG] 1 workers [2024-09-06 12:10:25 +0200] [1] [ERROR] Worker (pid:8) exited with code 3 [2024-09-06 12:10:25 +0200] [1] [ERROR] Shutting down: Master [2024-09-06 12:10:25 +0200] [1] [ERROR] Reason: Worker failed to boot. </code></pre> <p>This is my <code>docker-compose.yml</code> file:</p> <pre><code>services: web: build: .
command: [ &quot;gunicorn&quot;, &quot;-k&quot;, &quot;gevent&quot;, &quot;-b&quot;, &quot;0.0.0.0:5000&quot;, &quot;app:create_app()&quot;, &quot;--log-level&quot;, &quot;debug&quot;, ] environment: ENABLE_SCHEDULER: ${ENABLE_SCHEDULER} volumes: - .:/app ports: - 5000:5000 depends_on: - db networks: - fullstack db: container_name: scrape-jobs-db image: postgres:16.4-alpine3.20 restart: always environment: POSTGRES_USER: ${POSTGRES_USER} POSTGRES_PASSWORD: ${POSTGRES_PASSWORD} POSTGRES_DB: ${POSTGRES_DB} volumes: - postgres_data:/var/lib/postgresql/data - ./init-db.sh:/docker-entrypoint-initdb.d/init-db.sh - ./.env:/app/.env networks: - fullstack </code></pre> <p>I'm not sure this is the most interesting part of <code>__init__.py</code> but here it is:</p> <pre><code>if os.getenv(&quot;ENABLE_SCHEDULER&quot;, &quot;true&quot;): print(&quot;Scheduler IS enabled.&quot;) scheduler = GeventScheduler() scheduler.add_job( next_run_time=datetime.now(), func=scheduled_job, trigger=&quot;interval&quot;, hours=12, args=[app], max_instances=3, ) scheduler.start() else: print(&quot;Scheduler is NOT enabled.&quot;) </code></pre> <p>I have tried using <code>BackgroundScheduler()</code> but I'm still getting the same result. Am I doing something else wrong? Is there anything else I should check?</p> <p>NOTE: I'm not doing anything aysnc or similar, nor am I using request. I'm just using <a href="https://github.com/VeNoMouS/cloudscraper" rel="nofollow noreferrer">this</a> library for scraping content.</p>
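One likely bug, separate from gunicorn itself, lurks in the posted snippet: `os.getenv("ENABLE_SCHEDULER", "true")` returns a string, and any non-empty string, including `"false"`, is truthy, so the scheduler branch always runs. Parsing the value explicitly avoids that:

```python
import os

os.environ["ENABLE_SCHEDULER"] = "false"

# the raw getenv value is a non-empty string, so it is always truthy
naive = bool(os.getenv("ENABLE_SCHEDULER", "true"))

# parse it explicitly instead
enabled = os.getenv("ENABLE_SCHEDULER", "true").strip().lower() in ("1", "true", "yes")
```

Separately, a gunicorn worker that "failed to boot" usually died on an import-time exception; running the entry point once by hand inside the container (e.g. `python -c "from app import create_app; create_app()"`) tends to surface the real traceback.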
<python><flask><gunicorn><gevent><apscheduler>
2024-09-06 16:28:07
0
12,045
noloman
78,957,805
1,195,198
packing python wheel with pybind11 using bazel
<p>I am trying to generate a wheel file using bazel, for a target that has pybind dependencies. The package by itself works fine (though testing), but when I'm packing it, the .so file is missing from the <code>site_packges</code> folder. This is my build file:</p> <pre><code>load(&quot;@pybind11_bazel//:build_defs.bzl&quot;, &quot;pybind_extension&quot;) load(&quot;@python_pip_deps//:requirements.bzl&quot;, &quot;requirement&quot;) load(&quot;@rules_python//python:defs.bzl&quot;, &quot;py_library&quot;, &quot;py_test&quot;) load(&quot;@rules_python//python:packaging.bzl&quot;, &quot;py_wheel&quot;, &quot;py_package&quot;) # wrapper for so file py_library( name = &quot;example&quot;, srcs = [&quot;example.py&quot;,&quot;__init__.py&quot;], deps = [ ], data = [&quot;:pyexample_inf&quot;], imports = [&quot;.&quot;], ) # compile pybind cpp pybind_extension( name = &quot;pyexample_inf&quot;, srcs = [&quot;pyexample_inf.cpp&quot;], deps = [], linkstatic = True, ) # test wrapper py_test( name = &quot;pyexample_test&quot;, srcs = [&quot;tests/pyexample_test.py&quot;], deps = [ &quot;:example&quot;, ], ) # Use py_package to collect all transitive dependencies of a target, # selecting just the files within a specific python package. py_package( name = &quot;pyexample_pkg&quot;, visibility = [&quot;//visibility:private&quot;], # Only include these Python packages. deps = [&quot;:example&quot;], ) # using pip, this copies the files to the site_packges, but not the so file py_wheel( name = &quot;wheel&quot;, abi = &quot;cp311&quot;, author = &quot;me&quot;, distribution = &quot;example&quot;, license = &quot;Apache 2.0&quot;, platform = select({ &quot;@bazel_tools//src/conditions:linux_x86_64&quot;: &quot;linux_x86_64&quot;, }), python_requires = &quot;&gt;=3.9.0&quot;, python_tag = &quot;cpython&quot;, version = &quot;0.0.1&quot;, deps = [&quot;:example&quot;], ) </code></pre> <p>How can I make the py_wheel copy the so file?</p>
<python><bazel><pybind11>
2024-09-06 15:27:39
1
1,985
Mercury
78,957,769
13,023,224
Pandas 2.0.3? Problems keeping format when file is saved in .json or .csv format
<p>Here is some random code.</p> <pre><code># create df import pandas as pd df2 = pd.DataFrame({'var1':['1_0','1_0','1_0','1_0','1_0'], 'var2':['X','y','a','a','a']}) df2.to_json('df2.json') # import df df2 = pd.read_json('df2.json') df2 </code></pre> <p>This is the expected output:</p> <pre><code> var1 var2 0 1_0 X 1 1_0 y 2 1_0 a 3 1_0 a 4 1_0 a </code></pre> <p>However it generates:</p> <pre><code> var1 var2 0 10 X 1 10 y 2 10 a 3 10 a 4 10 a </code></pre> <p>If I modify an entry inside <code>['var1']</code> to a string, then the code it generates when <code>df</code> is imported is correct.</p> <p>Here is an example to illustrate it. I replaced one of the entries with <code>'hello'</code></p> <pre><code>df2 = pd.DataFrame({'var1':['1_0','hello','1_0','1_0','1_0'], 'var2':['X','y','a','a','a']}) df2.to_json('df2.json') # import df df2 = pd.read_json('df2.json') df2 </code></pre> <p>Generates this</p> <pre><code> var1 var2 0 1_0 X 1 hello y 2 1_0 a 3 1_0 a 4 1_0 a </code></pre> <p>Same problem is observed if file is saved in csv format and then imported.</p> <p>Has anyone encountered the same issue?</p>
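What is happening (per pandas' documented dtype inference in `read_json`): by default it tries to convert columns to numeric, and `"1_0"` happens to parse as the numeric literal `10`, since Python accepts underscores as digit separators. With `'hello'` in the column, conversion fails and the strings survive. Passing `dtype=False` turns conversion off, a sketch:

```python
import io
import pandas as pd

df = pd.DataFrame({"var1": ["1_0", "1_0"], "var2": ["X", "y"]})
payload = df.to_json()

naive = pd.read_json(io.StringIO(payload))               # var1 inferred as numeric
fixed = pd.read_json(io.StringIO(payload), dtype=False)  # dtype conversion disabled
```

For the CSV round trip the analogous knob is `pd.read_csv(..., dtype={'var1': str})`.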
<python><pandas><jupyter>
2024-09-06 15:18:07
1
571
josepmaria
78,957,694
1,477,337
Pandas groupby is changing column values
<p>I have a multiindex Pandas DataFrame and I'm using groupby to extract the rows containing the first appearances of the first index. After this operation, however, the output column values does not always correspond to the original values. Here is a simple example to reproduce this behaviour:</p> <pre><code>df = pd.DataFrame([{'myIndex1' : 'A', 'myIndex2' : 0, 'C1' : 1.0, 'C2' : None}, {'myIndex1' : 'A', 'myIndex2' : 1, 'C1' : 0.5, 'C2' : 'ca'}, {'myIndex1' : 'B', 'myIndex2' : 0, 'C1' : 3.0, 'C2' : 'cb'}, {'myIndex1' : 'C', 'myIndex2' : 0, 'C1' : 2.0, 'C2' : 'cc'}]) df.set_index(['myIndex1','myIndex2'],inplace=True) df </code></pre> <p><a href="https://i.sstatic.net/TMpytrIJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMpytrIJ.png" alt="enter image description here" /></a></p> <p>Now if I use groupby to extract the first appearances of myIndex1:</p> <pre><code>df.groupby(level='myIndex1').first() </code></pre> <p><a href="https://i.sstatic.net/kE54SeNb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kE54SeNb.png" alt="enter image description here" /></a></p> <p>So the column 'C2' for the first appearance of myIndex1 = A is no longer None, but it has been changed to 'ca'.</p> <p>I've checked that this happens if the column value is None or NaN. Of course, I can replace these values, but I would like to avoid that.</p> <p>Any thoughts about what could be causing this behavior and how I can avoid it? Thanks!</p>
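This matches `GroupBy.first`'s documented behavior: it returns the first non-null value per column, not the first row, so group A picks up `'ca'` from its second row. To keep the literal first row, NaN and all, `head(1)` (or `nth(0)`) is the usual alternative; a sketch:

```python
import pandas as pd

df = pd.DataFrame({"myIndex1": ["A", "A", "B"],
                   "myIndex2": [0, 1, 0],
                   "C1": [1.0, 0.5, 3.0],
                   "C2": [None, "ca", "cb"]})
df = df.set_index(["myIndex1", "myIndex2"])

skips_nulls = df.groupby(level="myIndex1").first()   # A -> C2 == "ca"
literal_rows = df.groupby(level="myIndex1").head(1)  # A -> C2 stays null
```

`head(1)` preserves the original MultiIndex on the selected rows, which is often what is wanted when "first appearance" means positional order rather than first non-null.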
<python><pandas><group-by>
2024-09-06 14:57:29
1
395
user1477337
78,957,615
23,260,297
Cannot Connect to SQL Server Database with SqlAlchemy
<p>I am trying to connect to a SQL Server Database with SqlAlchemy but I get an error message whether I use Windows Authentication or SQL server authentication.</p> <p>Here is what I have tried:</p> <pre><code>from sqlalchemy import create_engine serverName = &quot;server&quot; driverName = &quot;ODBC Driver 17 for SQL Server&quot; databaseName = &quot;POC&quot; uid= r'domain\username' pwd= r'password' conn_string = f&quot;mssql://@{serverName}/{databaseName}?trusted_connection=yes&amp;driver={driverName}&quot; engine = create_engine(conn_string) try: conn = engine.connect() except Exception as e: print(e) </code></pre> <p>AND</p> <pre><code>conn_string = f&quot;mssql://{uid}:{pwd}@{serverName}/{databaseName}?driver={driverName}&quot; </code></pre> <p>I understand why windows authentication does not work because I am running the script from an account that does not have access to the Server. But what is throwing me off is when I give credentials to my account that does have access, I still get the following error:</p> <pre><code>(pyodbc.InterfaceError) ('28000', &quot;[28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user 'domain\\username'. (18456) (SQLDriverConnect); [28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user 'domain\\username'. (18456)&quot;) </code></pre> <p>I triple checked all the parameters I pass to the connection string and they are all correct. Everything I found on SO relating to this has not helped me. Any ideas on how I could proceed?</p>
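Two things are worth separating here (hedged, since the server configuration is not shown). First, SQL Server authentication only knows SQL logins, so a `domain\username` sent as a UID is evaluated as a literal SQL login and fails with error 18456; Windows domain accounts can generally only connect via `trusted_connection=yes` from a session running as that account (e.g. launched with `runas /netonly`). Second, special characters in a URL-style connection string (backslashes, `@`, `;` in passwords) must be escaped; the pattern SQLAlchemy documents for pyodbc is to URL-encode a raw ODBC string and pass it through `odbc_connect`, a sketch:

```python
from urllib.parse import quote_plus

# raw ODBC connection string; special characters need no URL escaping here
raw = (r"DRIVER={ODBC Driver 17 for SQL Server};"
       r"SERVER=server;DATABASE=POC;"
       r"UID=sql_login;PWD=p@ssw0rd")
conn_string = "mssql+pyodbc:///?odbc_connect=" + quote_plus(raw)
```

`quote_plus` percent-encodes the whole ODBC string, so characters like `@` and `\` can no longer be misread as URL syntax by `create_engine`.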
<python><sql-server><sqlalchemy>
2024-09-06 14:34:29
0
2,185
iBeMeltin
78,957,611
4,560,370
Reformat complex file output from an old fortran program to csv using python
<p>I want to convert complex file output into a simpler version, but I can't seem to get the regex right.</p> <p>I have tried using regex and pandas to convert this weird formatted code to something nicer but the code I'm using just gives headers and not data. The data looks like this (be warned it's horrible), the sample file is here: <a href="https://wetransfer.com/downloads/d5c0588d5dd08d0e67ddf854d4a3c3bb20240906142948/0d9147" rel="nofollow noreferrer">https://wetransfer.com/downloads/d5c0588d5dd08d0e67ddf854d4a3c3bb20240906142948/0d9147</a>.</p> <p>Weather Summary for summer 2024</p> <hr /> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th></th> <th>Rainfall</th> <th>Air Temperature</th> <th>Sunshine</th> <th>Wind</th> <th>Grass Temp</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td>Most in Day</td> <td>Means of</td> <td></td> <td>Extreme Temperature</td> <td>Most in Day</td> <td>Lowest</td> </tr> <tr> <td></td> <td>Total</td> <td>______</td> <td>_____</td> <td>______</td> <td>_____</td> <td>_____</td> <td>_____</td> <td>Total</td> <td>_____</td> <td>Grass date</td> </tr> <tr> <td>Station</td> <td></td> <td>Amount</td> <td>Date</td> <td>Max.</td> <td>Min.</td> <td>Max</td> <td>Min</td> <td></td> <td>Date</td> <td>min</td> </tr> <tr> <td>___________________</td> <td>_______</td> <td>______</td> <td>_____</td> <td>______</td> <td>_____</td> <td>_____</td> <td>_____</td> <td>________</td> <td>_____</td> <td>______________</td> </tr> <tr> <td>station 1</td> <td>121.5</td> <td>13.4</td> <td>29 Ju</td> <td>19.7</td> <td>10.2</td> <td>26.6</td> <td>4.5</td> <td>--</td> <td>--</td> <td>-0.7 19 Ju</td> </tr> <tr> <td>___________________</td> <td>_______</td> <td>______</td> <td>_____</td> <td>______</td> <td>_____</td> <td>_____</td> <td>_____</td> <td>________</td> <td>_____</td> <td>______________</td> </tr> <tr> <td>station 2</td> <td>235.9</td> <td>21.1</td> <td>26 Ag</td> <td>16.6</td> <td>11.9</td> <td>21.9</td> <td>6.5</td> 
<td>--</td> <td>--</td> <td>-1.3 11 Ju</td> </tr> <tr> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>___________________</td> <td>_______</td> <td>______</td> <td>_____</td> <td>______</td> <td>_____</td> <td>_____</td> <td>_____</td> <td>________</td> <td>_____</td> <td>______________</td> </tr> <tr> <td>station 3</td> <td>135.7</td> <td>13.6</td> <td>29 Ju</td> <td>19.3</td> <td>10.1</td> <td>24.0</td> <td>3.5</td> <td>--</td> <td>--</td> <td>-0.7 12 Ju</td> </tr> <tr> <td>___________________</td> <td>_______</td> <td>______</td> <td>_____</td> <td>______</td> <td>_____</td> <td>_____</td> <td>_____</td> <td>________</td> <td>_____</td> <td>______________</td> </tr> </tbody> </table></div> <p>I want to get it in simple csv format but I can't figure it out.</p> <p>Screenshot of what it looks like is here: <a href="https://i.sstatic.net/xVRtYh3i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xVRtYh3i.png" alt="what the data looks like in notepad++" /></a></p> <p>My latest code:</p> <pre><code>import re import csv # Input and output file paths input_file = 'table_example.txt' output_file = 'weather_summary.csv' # Define a function to parse the file and extract the table data def parse_weather_data(file_path): with open(file_path, 'r') as file: lines = file.readlines() # List to hold the processed rows data = [] # Regex to match valid data rows (ignoring separators and headers) pattern = re.compile(r'\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|') for line in lines: match = pattern.match(line) if match: # Extract matched groups and clean the values row = [group.strip() for group in match.groups()] data.append(row) return data # Define a function to write data to CSV def write_to_csv(data, output_file): # Column headers headers = [ &quot;Station&quot;, &quot;Rainfall
Total&quot;, &quot;Most in Day (Amount)&quot;, &quot;Most in Day (Date)&quot;, &quot;Max Air Temp&quot;, &quot;Min Air Temp&quot;, &quot;Mean Air Temp&quot;, &quot;Max Temp&quot;, &quot;Max Temp Date&quot;, &quot;Min Temp&quot;, &quot;Min Temp Date&quot;, &quot;Sunshine Total&quot;, &quot;Most in Day Sunshine (Amount)&quot;, &quot;Most in Day Sunshine (Date)&quot;, &quot;Max Wind Gust&quot;, &quot;Max Wind Gust Date&quot;, &quot;Grass Min Temp&quot;, &quot;Grass Min Temp Date&quot; ] with open(output_file, mode='w', newline='') as file: writer = csv.writer(file) writer.writerow(headers) # Write the headers writer.writerows(data) # Write the data rows # Parse the weather data from the input file weather_data = parse_weather_data(input_file) # Write the parsed data to CSV write_to_csv(weather_data, output_file) </code></pre> <p>EDIT Updated code which gives the dataframe but it's still messy:</p> <pre><code>import pandas as pd data = pd.read_csv('example_table.txt', sep=&quot;|&quot;, header=None, skiprows = 8) data.columns = [ &quot;Station&quot;, &quot;Rainfall Total&quot;, &quot;Most in Day (Amount)&quot;, &quot;Most in Day (Date)&quot;, &quot;Max Air Temp&quot;, &quot;Min Air Temp&quot;, &quot;Mean Air Temp&quot;, &quot;Max Temp&quot;, &quot;Max Temp Date&quot;, &quot;Min Temp&quot;, &quot;Min Temp Date&quot;, &quot;Sunshine Total&quot;, &quot;Most in Day Sunshine (Amount)&quot;, &quot;Most in Day Sunshine (Date)&quot;, &quot;Max Wind Gust&quot;, &quot;Max Wind Gust Date&quot;, &quot;Grass Min Temp&quot;, &quot;Grass Min Temp Date&quot; ] data </code></pre>
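Judging from the screenshot, the rows are pipe-delimited with underscore ruler lines between stations, so a 17-group regex is not needed: split on `|`, strip, and drop separator and blank rows. A sketch on a fabricated line (the real file's column count may differ):

```python
def parse_rows(lines):
    rows = []
    for line in lines:
        if "|" not in line:
            continue  # titles, blank lines
        fields = [f.strip() for f in line.strip().strip("|").split("|")]
        # drop ruler rows made only of underscores, and fully blank rows
        if all(set(f) <= {"_"} for f in fields):
            continue
        rows.append(fields)
    return rows

sample = [
    "Weather Summary for summer 2024",
    "|___________________|_______|______|",
    "| station 1         | 121.5 | 13.4 |",
    "|                   |       |      |",
]
parsed = parse_rows(sample)
```

The surviving lists can be written straight out with `csv.writer.writerows`, or loaded into a DataFrame and cleaned further from there.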
<python><pandas><python-re>
2024-09-06 14:34:14
1
911
Pad
78,957,512
22,466,650
How to find the combination of numbers that sum (or closer) to a given target?
<p>My input is a list of dicts that correspond to tasks :</p> <pre><code>tasks = [ {'ID': 'ID_001', 'VOLUME': 50, 'STATUS': 'ENDED_OK'}, {'ID': 'ID_002', 'VOLUME': 10, 'STATUS': 'WAITING'}, {'ID': 'ID_003', 'VOLUME': 10, 'STATUS': 'WAITING'}, {'ID': 'ID_004', 'VOLUME': 20, 'STATUS': 'WAITING'}, {'ID': 'ID_005', 'VOLUME': 10, 'STATUS': 'ONGOING'}, {'ID': 'ID_006', 'VOLUME': 10, 'STATUS': 'WAITING'}, {'ID': 'ID_007', 'VOLUME': 25, 'STATUS': 'WAITING'}, ] </code></pre> <p>My goal is to find a random combination of tasks where:</p> <ul> <li>The number of waiting tasks is exactly as specified</li> <li>The total volume is as close as possible to a given target volume</li> </ul> <p>For that I made the code below but it's extremely slow in my real dataset (~10k tasks) and I also don't trust it because I feel like sometimes it doesn't give the right combination.</p> <pre><code>import itertools def find_combination(tasks, number_ids, target_volume=50): waiting_tasks = [task for task in tasks if task['STATUS'] == 'WAITING'] sorted_tasks = sorted(waiting_tasks, key=lambda x: x['VOLUME'], reverse=True) closest_combination = [] closest_volume_diff = float('inf') for combination in itertools.combinations(sorted_tasks, number_ids): total_volume = sum(task['VOLUME'] for task in combination) volume_diff = abs(total_volume - target_volume) if volume_diff &lt; closest_volume_diff: closest_combination = combination closest_volume_diff = volume_diff if volume_diff == 0: break return [task['ID'] for task in closest_combination], sum( task['VOLUME'] for task in closest_combination ) </code></pre> <pre><code>ids_selected, total_volume = find_combination(tasks, 3, target_volume=50) print(f'Selected IDs: {ids_selected}, Total Volume: {total_volume}') # Selected IDs: ['ID_007', 'ID_004', 'ID_002'], Total Volume: 55 </code></pre> <p>Do you guys have an idea on how to fix my code ?</p> <hr /> <p>Constraints of the problem</p> <ul> <li><code>tasks: list[dict]</code> is of size 
<em>10**4</em></li> <li><code>number_ids</code> can be from <em>1</em> to <em>10</em>,</li> <li><code>target_volume</code> ranges from <em>50</em> to a max of <em>1000</em></li> <li><code>task['VOLUME']</code> is always an integer between <em>1</em> to <em>300</em>.</li> </ul>
<python><combinations>
2024-09-06 14:08:11
2
1,085
VERBOSE
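A dynamic-programming sketch (not from the original post; `find_combination_dp` is a hypothetical name): with `VOLUME` ≤ 300 and `number_ids` ≤ 10, there are at most about 10 × 3000 distinct `(count, sum)` states, so tracking one predecessor per state avoids enumerating all C(n, k) combinations even for ~10k tasks.

```python
def find_combination_dp(tasks, number_ids, target_volume=50):
    waiting = [t for t in tasks if t['STATUS'] == 'WAITING']
    # parent[(count, total)] = (task_index, previous_total); (0, 0) is the seed
    parent = {(0, 0): None}
    for i, task in enumerate(waiting):
        v = task['VOLUME']
        # iterate over a snapshot of the states so each task is used at most once
        for (c, s) in list(parent):
            if c < number_ids and (c + 1, s + v) not in parent:
                parent[(c + 1, s + v)] = (i, s)
    # among all sums reachable with exactly number_ids tasks,
    # pick the one closest to the target
    sums = [s for (c, s) in parent if c == number_ids]
    if not sums:
        return [], 0
    best = min(sums, key=lambda s: abs(s - target_volume))
    # walk the predecessor chain to recover one optimal selection
    ids, c, s = [], number_ids, best
    while c > 0:
        i, s_prev = parent[(c, s)]
        ids.append(waiting[i]['ID'])
        c, s = c - 1, s_prev
    return ids, best
```

For the sample data this returns three IDs whose volumes sum to 45 or 55 (both are 5 away from 50), and the state count, not the number of combinations, bounds the running time.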
78,957,499
6,163,621
cronjob running python script to log - timestamp incorrect
<p>I have a nightly cronjob that runs a python script. The job sends all of the outputs to a log wherein each line is timestamped to the microsecond.</p> <p>Part of that script includes a loop with a 5-second pause between each iteration. When I was troubleshooting some issues, I noticed that the cronjob timestamp did not reflect that 5-second pause. So I then included some python code to print its own timestamp, and the 5-second pause is clearly shown.</p> <p>So, my question is: Is my timestamp format correct for the cronjob log?<br /> If so - why is the 5-second pause not showing? If not - what is the correct format?</p> <p>cronjob: <code>python myscript.py | ts '[\%Y-\%m-\%d \%H:\%M:\%.S]' &gt;&gt; cronjob.log 2&gt;&amp;1</code></p> <p>output:</p> <pre><code>[2024-09-05 00:23:24.226220] &gt; Doing something (time = 2024-09-05 00:22:19.738), attempt #1... [2024-09-05 00:23:24.226234] &gt; Doing something (time = 2024-09-05 00:22:24.746), attempt #2... [2024-09-05 00:23:24.226248] &gt; Doing something (time = 2024-09-05 00:22:29.753), attempt #3... [2024-09-05 00:23:24.226261] &gt; Doing something (time = 2024-09-05 00:22:34.760), attempt #4... [2024-09-05 00:23:24.226275] &gt; Doing something (time = 2024-09-05 00:22:39.769), attempt #5... </code></pre> <p>I'm less concerned about the timestamps lining up (cronjob vs. python), but maybe that's a clue?!</p> <p>Note: If I run <code>&gt;ts '[%Y-%m-%d %H:%M:%.S]'</code> from the command line, the timestamp works as expected.</p>
<python><cron><timestamp>
2024-09-06 14:05:44
1
9,134
elPastor
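A likely explanation (an assumption, not stated in the post): when stdout is a pipe rather than a terminal, Python block-buffers its output, so `ts` receives all the lines in one burst when the process exits and stamps them with nearly identical times; running `ts` interactively works because a terminal is line-buffered. Changing the cron command to `python -u myscript.py | ts ...` disables the buffering; equivalently, flush each line from Python, as in this sketch:

```python
import sys
import time

def do_work(attempts=3, delay=0.0, out=sys.stdout):
    for attempt in range(1, attempts + 1):
        # flush=True pushes each line through the pipe immediately instead
        # of waiting for Python's block buffer to fill at process exit
        print(f"> Doing something, attempt #{attempt}...", file=out, flush=True)
        time.sleep(delay)

if __name__ == "__main__":
    do_work()
```

With either `-u` or `flush=True`, each line reaches `ts` at the moment it is printed, so the 5-second gaps should appear in the cron log's timestamps.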
78,957,463
4,129,091
Convert empty lists to nulls
<p>I have a polars DataFrame with two list columns.</p> <p>However one column contains empty lists and the other contains nulls.</p> <p>I would like consistency and convert empty lists to nulls.</p> <pre><code> In [306]: df[[&quot;spcLink&quot;, &quot;proprietors&quot;]] Out[306]: shape: (254_654, 2) ┌───────────┬─────────────────────────────────┐ │ spcLink ┆ proprietors │ │ --- ┆ --- │ │ list[str] ┆ list[str] │ ╞═══════════╪═════════════════════════════════╡ │ [] ┆ null │ │ [] ┆ null │ │ [] ┆ null │ │ [] ┆ null │ │ [] ┆ null │ │ … ┆ … │ │ [] ┆ [&quot;The Steel Company of Canada … │ │ [] ┆ [&quot;Philips' Gloeilampenfabrieke… │ │ [] ┆ [&quot;AEG-Telefunken&quot;] │ │ [] ┆ [&quot;xxxx… │ │ [] ┆ [&quot;yyyy… │ └───────────┴─────────────────────────────────┘ </code></pre> <p>I have attempted this:</p> <pre><code># Convert empty lists to None for col, dtype in df.schema.items(): if isinstance(dtype, pl.datatypes.List): print(col, dtype) df = df.with_columns( pl.when(pl.col(col).list.len() == 0).then(None).otherwise(pl.col(col)) ) </code></pre> <p>But no change happens in the output; the empty lists remain as such and are not converted.</p>
<python><dataframe><python-polars>
2024-09-06 13:58:59
1
3,665
tsorn
78,957,355
1,184,842
Ansible error python not executable if virtualenv/venv is inside home directory?
<p>in some Ubuntu 22.04 machines I see the following Ansible error:</p> <pre><code>TASK [pbalucode.postgresql : Create database] ************************************************************************************************************************************************ fatal: [localhost]: FAILED! =&gt; {&quot;changed&quot;: false, &quot;module_stderr&quot;: &quot;/bin/sh: 1: /home/user/installer/ansible/bin/python3.10: Permission denied\n&quot;, &quot;module_stdout&quot;: &quot;&quot;, &quot;msg&quot;: &quot;MODULE FAILURE\nSee stdout/stderr for the exact error&quot;, &quot;rc&quot;: 126} </code></pre> <p>/home/user/installer is a virtualenv/venv created in which Ansible 2.17.3 is installed. The strange thing is, that this error occurs only if the virtualenv/venv is inside the home directory. If I install the virtualenv/venv in /tmp or /opt this error is not occurring.</p> <p>Even more strange is, that the playbook did already plenty of other things, but it is always failing in the very same task (if failing). 
It is not failing in a Ubuntu 22.04 docker image, but in all real VMs.</p> <p>The Ansible code is rather unsuspicious:</p> <pre><code>- name: Create database ansible.builtin.command: &quot;{{ postgresql_bin_dir }}/initdb \ -D {{ postgresql_pgdata }}&quot; become: true become_user: &quot;{{ postgresql_user_name }}&quot; when: not dbinit.stat.exists </code></pre> <p>The symbolic links are ok and executable, I can run them manually fine and a lot of other Ansible tasks worked fine, too.</p> <pre><code>$ ll /home/user/installer/ansible/bin/python3.10 lrwxrwxrwx 1 user user 6 Sep 6 10:06 /home/user/installer/ansible/bin/python3.10 -&gt; python* $ ll /home/user/installer/ansible/bin/python lrwxrwxrwx 1 user user 16 Sep 6 10:06 /home/user/installer/ansible/bin/python -&gt; /usr/bin/python3* $ ll /usr/bin/python3 lrwxrwxrwx 1 root root 10 Aug 8 12:28 /usr/bin/python3 -&gt; python3.10* $ ll /usr/bin/python3.10 -rwxr-xr-x 1 root root 5904936 Jul 29 16:56 /usr/bin/python3.10* $ /home/user/installer/ansible/bin/python3.10 --version Python 3.10.12 </code></pre> <p>Any idea what is wrong here and how to fix this? Feels like a rather harsh restriction to avoid creating a virtualenv/venv inside a home directory.</p> <p>There was a comment shortly asking for the mount options, they are straight forward, no special things here: <code>/dev/mapper/ubuntu--vg-ubuntu--lv on / type ext4 (rw,relatime)</code></p>
<python><ansible><virtualenv><python-venv>
2024-09-06 13:21:37
1
2,850
jan
78,957,338
5,168,582
AWS GLUE python shell with PostgreSQL
<p>I wanted to create an AWS Glue job with the Python Shell. In my .py script I have the import</p> <pre><code>from pg import DB </code></pre> <p>I wanted to import the library from S3 using</p> <pre><code>--extra-py-files=s3://${var.glue_bucket}/scripts/PyGreSQL-5.2.3-cp39-cp39-win32.whl </code></pre> <p>but I'm getting the error</p> <blockquote> <p>is not a supported wheel on this platform.</p> </blockquote> <p>I use Python 3.9 with Glue 4.</p>
<python><amazon-web-services><aws-glue>
2024-09-06 13:18:05
0
1,643
Mateusz Sobczak
78,957,250
1,838,076
Pandas FutureWarning about concatenating DFs with NaN-only cols seems wrong
<p>I am getting Future Warning with Pandas 2.2.2 when I try to concatenate DFs with Floating Values and Nones.</p> <p>But the same won't happen if I use INT instead of FLOAT</p> <pre><code>import pandas as pd # Block with INT df1 = pd.DataFrame({'A': [1], 'B': [4]}) df2 = pd.DataFrame({'A': [2], 'B': [None]}) print(len(pd.concat([df1, df2]))) # Block with FLOAT df1 = pd.DataFrame({'A': [1], 'B': [4.0]}) df2 = pd.DataFrame({'A': [2], 'B': [None]}) print(len(pd.concat([df1, df2]))) </code></pre> <p><strong>Block with INT</strong> runs fine, while <strong>Block with FLOAT</strong> gives warning</p> <p>Sample output</p> <pre><code>2 ./test.py:18: FutureWarning: The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation. print(len(pd.concat([df1, df2]))) 2 </code></pre> <ol> <li>Why are INTs fine while Floats are not?</li> <li>The datatype of the <code>None</code> column seems inferred from other DFs in concatenation (see comments).</li> <li>If the entire data frame is empty or None, it can be checked and skipped from concatenation, so the warning makes sense. But when a few cols have valid data, it's not clear what the user is expected to do in the future with the NaN cols.</li> </ol> <p>PS: I don't think this is a duplicate of this: <a href="https://stackoverflow.com/questions/77254777">Alternative to .concat() of empty dataframe, now that it is being deprecated?</a></p>
<python><pandas><dataframe>
2024-09-06 12:54:58
1
1,622
Krishna
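Regarding point 3, the warning's own suggestion can be followed mechanically: drop columns that are entirely NA from each frame before concatenating. Column alignment in `concat` reintroduces the missing values as NaN, so today's behaviour is preserved without the warning. A sketch:

```python
import pandas as pd

df1 = pd.DataFrame({'A': [1], 'B': [4.0]})
df2 = pd.DataFrame({'A': [2], 'B': [None]})

# exclude all-NA columns per frame, as the FutureWarning suggests;
# concat's column alignment brings them back as NaN in the result
frames = [df.dropna(axis=1, how='all') for df in (df1, df2)]
out = pd.concat(frames, ignore_index=True)
print(out)
```

This keeps the non-NA data in partially filled columns while only excluding the all-NA ones, which is exactly the case the deprecation targets.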
78,957,179
4,129,091
Polars cannot infer date columns on DataFrame creation, but succeed in automatic casting afterwards
<p>I have a list of dictionaries, <code>results</code>, with the following schema:</p> <pre><code>import polars as pl results = ... schema = { &quot;id&quot;: pl.Int64, &quot;title&quot;: pl.Utf8, &quot;proprietors&quot;: pl.List(pl.Utf8) &quot;registrationDate&quot;: pl.Date, } </code></pre> <p>When I attempt to create a dataframe with the schema, it fails:</p> <pre><code>df = pl.DataFrame(results, schema=schema) </code></pre> <p>yields</p> <blockquote> <p>ComputeError: could not append value: &quot;2024-08-16T00:00:00&quot; of type: str to the builder; make sure that all rows have the same schema or consider increasing <code>infer_schema_length</code></p> <p>it might also be that a value overflows the data-type's capacity</p> </blockquote> <p>However, creating the DataFrame with the date columns set to the <code>pl.Utf8</code> String datatype, then converting the date columns to <code>pl.Date</code> <em>afterwards</em>, works:</p> <pre><code>for col, dtype in schema.items(): if col.endswith(&quot;Date&quot;): schema[col] = pl.Utf8 df = pl.DataFrame(results, schema=schema) for col, dtype in schema.items(): if col.endswith(&quot;Date&quot;): df = df.with_columns(pl.col(col).str.strptime(pl.Datetime)) # Succeeds, no errors </code></pre> <p>polars 1.6.0 python 3.12.4</p>
<python><dataframe><python-polars>
2024-09-06 12:35:56
1
3,665
tsorn
78,957,022
6,930,340
Apply multiple window sizes to rolling aggregation functions in polars dataframe
<p>In a number of aggregation function, such as <a href="https://docs.pola.rs/api/python/stable/reference/series/api/polars.Series.rolling_min.html" rel="nofollow noreferrer">rolling_mean</a>, <a href="https://docs.pola.rs/api/python/stable/reference/series/api/polars.Series.rolling_min.html" rel="nofollow noreferrer">rolling_max</a>, <a href="https://docs.pola.rs/api/python/stable/reference/series/api/polars.Series.rolling_min.html" rel="nofollow noreferrer">rolling_min</a>, etc, the input argument <code>window_size</code> is supposed to be of type <code>int</code></p> <p>I am wondering how to efficiently compute results when having a list of <code>window_size</code>.</p> <p>Consider the following dataframe:</p> <pre><code>import polars as pl pl.Config(tbl_rows=-1) df = pl.DataFrame( { &quot;symbol&quot;: [&quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;], &quot;price&quot;: [100, 110, 105, 103, 107, 200, 190, 180, 185], } ) shape: (9, 2) ┌────────┬───────┐ │ symbol ┆ price │ │ --- ┆ --- │ │ str ┆ i64 │ ╞════════╪═══════╡ │ A ┆ 100 │ │ A ┆ 110 │ │ A ┆ 105 │ │ A ┆ 103 │ │ A ┆ 107 │ │ B ┆ 200 │ │ B ┆ 190 │ │ B ┆ 180 │ │ B ┆ 185 │ └────────┴───────┘ </code></pre> <p>Let's say I have a list with <code>n</code> elements, such as <code>periods = [2, 3]</code>. I am looking for a solution to compute the rolling means for all periods grouped by <code>symbol</code> in parallel. 
Speed and memory efficiency is of the essence.</p> <p>The result should be a tidy/long dataframe like this:</p> <pre><code>shape: (18, 4) ┌────────┬───────┬─────────────┬──────────────┐ │ symbol ┆ price ┆ mean_period ┆ rolling_mean │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ u8 ┆ f64 │ ╞════════╪═══════╪═════════════╪══════════════╡ │ A ┆ 100 ┆ 2 ┆ null │ │ A ┆ 110 ┆ 2 ┆ 105.0 │ │ A ┆ 105 ┆ 2 ┆ 107.5 │ │ A ┆ 103 ┆ 2 ┆ 104.0 │ │ A ┆ 107 ┆ 2 ┆ 105.0 │ │ B ┆ 200 ┆ 2 ┆ null │ │ B ┆ 190 ┆ 2 ┆ 195.0 │ │ B ┆ 180 ┆ 2 ┆ 185.0 │ │ B ┆ 185 ┆ 2 ┆ 182.5 │ │ A ┆ 100 ┆ 3 ┆ null │ │ A ┆ 110 ┆ 3 ┆ null │ │ A ┆ 105 ┆ 3 ┆ 105.0 │ │ A ┆ 103 ┆ 3 ┆ 106.0 │ │ A ┆ 107 ┆ 3 ┆ 105.0 │ │ B ┆ 200 ┆ 3 ┆ null │ │ B ┆ 190 ┆ 3 ┆ null │ │ B ┆ 180 ┆ 3 ┆ 190.0 │ │ B ┆ 185 ┆ 3 ┆ 185.0 │ └────────┴───────┴─────────────┴──────────────┘ </code></pre>
<python><python-polars>
2024-09-06 11:52:47
2
5,167
Andi
78,957,012
10,425,150
itertools.product in dataframe
<p><strong>Inputs:</strong></p> <pre><code>arr1 = [&quot;A&quot;,&quot;B&quot;] arr2 = [[1,2],[3,4,5]] </code></pre> <p><strong>Expected output:</strong></p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: left;">short_list</th> <th style="text-align: right;">long_list</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: left;">A</td> <td style="text-align: right;">1</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: left;">A</td> <td style="text-align: right;">2</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: left;">B</td> <td style="text-align: right;">3</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: left;">B</td> <td style="text-align: right;">4</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: left;">B</td> <td style="text-align: right;">5</td> </tr> </tbody> </table></div> <p><strong>Current output:</strong></p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: left;">short_list</th> <th style="text-align: left;">long_list</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: left;">A</td> <td style="text-align: left;">[1, 2]</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: left;">A</td> <td style="text-align: left;">[3, 4, 5]</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: left;">B</td> <td style="text-align: left;">[1, 2]</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: left;">B</td> <td style="text-align: left;">[3, 4, 5]</td> </tr> </tbody> </table></div> <p><strong>Current Code (using <code>itertools</code>):</strong></p> <pre><code>import pandas as pd from itertools import product def custom_product(arr1, arr2): 
expand_short_list = [[a1]*len(a2) for a1, a2 in zip(arr1,arr2)] return [[a1,a2] for a1, a2 in zip(sum(expand_short_list,[]),sum(arr2,[]))] arr1 = [&quot;A&quot;,&quot;B&quot;] arr2 = [[1,2],[3,4,5]] df2 = pd.DataFrame(data = product(arr1,arr2),columns=[&quot;short_list&quot;, &quot;long_list&quot;]) </code></pre> <p><strong>Alternative code using nested list comprehensions to get the desired output:</strong></p> <pre><code>import pandas as pd def custom_product(arr1, arr2): expand_short_list = [[a1]*len(a2) for a1, a2 in zip(arr1,arr2)] return [[a1,a2] for a1, a2 in zip(sum(expand_short_list,[]),sum(arr2,[]))] arr1 = [&quot;A&quot;,&quot;B&quot;] arr2 = [[1,2],[3,4,5]] df1 = pd.DataFrame(data = custom_product(arr1, arr2),columns=[&quot;short_list&quot;, &quot;long_list&quot;]) </code></pre> <p><strong>Question:</strong></p> <p>I'm wondering how could I achieve the desired output using <code>itertools</code>?</p>
<python><pandas><dataframe><python-itertools><pandas-explode>
2024-09-06 11:51:17
1
1,051
Gооd_Mаn
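The expected table pairs each label with its *own* sublist, which is a zip-then-flatten rather than a full cartesian product; `product` can still be used per `(label, sublist)` pair and the results chained. A sketch of both the `itertools` route and the shorter `explode` route (both assume `arr1` and `arr2` have equal length):

```python
from itertools import chain, product

import pandas as pd

arr1 = ["A", "B"]
arr2 = [[1, 2], [3, 4, 5]]

# product() applied per (label, sublist) pair, then flattened
rows = chain.from_iterable(product([a], lst) for a, lst in zip(arr1, arr2))
df1 = pd.DataFrame(rows, columns=["short_list", "long_list"])

# equivalent without itertools: explode the list column into rows
df2 = pd.DataFrame({"short_list": arr1, "long_list": arr2}).explode(
    "long_list", ignore_index=True
)
```

Both frames match the expected output; the `explode` form also replaces the manual `custom_product` flattening.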
78,956,750
5,704,198
Generating these strategies using Hypothesis (strings with repetitions)
<p>Following a tutorial on Hypothesis I found this problem. I have to do a roundtrip test with a run_length encode/decode (explained in docstrings):</p> <pre><code>from hypothesis import given, strategies as st from itertools import groupby from typing import List, Union def run_length_encoder(in_string: str) -&gt; List[Union[str, int]]: &quot;&quot;&quot; &gt;&gt;&gt; run_length_encoder(&quot;aaaaabbcbc&quot;) ['a', 'a', 5, 'b', 'b', 2, 'c', 'b', 'c'] &quot;&quot;&quot; assert isinstance(in_string, str) out = [] for item, group in groupby(in_string): cnt = sum(1 for x in group) if cnt == 1: out.append(item) else: out.extend((item, item, cnt)) assert isinstance(out, list) assert all(isinstance(x, (str, int)) for x in out) return out def run_length_decoder(in_list: List[Union[str, int]]) -&gt; str: &quot;&quot;&quot; &gt;&gt;&gt; run_length_decoder(['a', 'a', 5, 'b', 'b', 2, 'c', 'b', 'c']) &quot;aaaaabbcbc&quot; &quot;&quot;&quot; assert isinstance(in_list, list) assert all(isinstance(x, (str, int)) for x in in_list) out: str = &quot;&quot; for item in in_list: if isinstance(item, int): out += out[-1] * (item - 2) else: out += item # alternative # for n, item in enumerate(in_list): # if isinstance(item, int): # char = in_list[n - 1] # assert isinstance(char, str) # out += char * (item - 2) # else: # out += item assert isinstance(out, str) return out </code></pre> <p>I can choose the form of the test: <code>encode(decode(in_list))</code> or <code>decode(encode(in_string))</code>.</p> <pre><code>@given( in_string = st.text() ) def test_roundtrip_run_length_encoder_decoder(in_string): in_string = in_string encoded_list = run_length_encoder(in_string) assert isinstance(encoded_list, list) assert all(isinstance(x, (str, int)) for x in encoded_list) decoded_string = run_length_decoder(encoded_list) assert isinstance(decoded_string, str) assert in_string == decoded_string, (in_string, decoded_string) test_roundtrip_run_length_encoder_decoder() </code></pre> <p>This was 
easy, but <code>in_string</code> doesn't have enough repetitions. They ask me to do something better (suggestion: use one_of).</p> <p>So I should add a random number of repetitions with a random length (<code>ee</code> vs <code>eee</code>) in random positions. How can I do that with Hypothesis? Maybe they are asking me for something simpler.</p> <p>I think generating the list is more difficult: I should generate a list without repetitions and add some sequences like <code>[..., 'k', 'k', #, ...</code> where k is a character (string) and # is an integer (int). Of course, before <code>'k'</code> I need a different character.</p>
<python><python-hypothesis>
2024-09-06 10:36:40
1
1,385
fabio
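One way to get repetition-rich inputs (a sketch, using `one_of` as the exercise hints): generate `(character, run_length)` pairs and join them into a string, so runs of length ≥ 2 are guaranteed to appear regularly. The roundtrip assertion from the question can then be dropped into the test body:

```python
from hypothesis import given, strategies as st

# run lengths mix 1 with larger values via one_of, so both single
# characters and genuine repetitions are generated
run_lengths = st.one_of(st.just(1), st.integers(min_value=2, max_value=10))
runs = st.lists(st.tuples(st.characters(), run_lengths), max_size=20)
repetition_rich_text = runs.map(lambda rs: "".join(ch * n for ch, n in rs))

@given(repetition_rich_text)
def test_roundtrip_with_repetitions(in_string):
    # plug the question's functions in here:
    # assert run_length_decoder(run_length_encoder(in_string)) == in_string
    assert isinstance(in_string, str)

test_roundtrip_with_repetitions()
```

Hypothesis still shrinks failures over the `(character, run_length)` structure, which tends to give minimal counterexamples like `"aa"` rather than long random strings.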
78,956,696
1,185,254
How to make a field read-only in a `mongoengine` Document while allowing conditional updates?
<p>I have a <code>mongoengine</code> document such as:</p> <pre><code>from mongoengine.document import Document from mongoengine import StringField class Class_A(Document): field_a = StringField() field_b = StringField() </code></pre> <p>I'd like to <em>lock</em> <code>field_b</code> so it cannot be altered, e.g.</p> <pre><code>var = &lt;fetch Class_a document from DB&gt; var.field = 'abc' </code></pre> <p>would raise an error.</p> <p>This on itself is not a problem, but <strong>I'd like to be able to set <code>field_b</code> when <code>field_a</code> is set</strong>. To give an example, <code>field_a</code> could be some data and <code>field_b</code> would be computed hash for this data - user can set the data, but not the hash for it (it should be only set automatically when data is assigned).</p> <p>I tried using <a href="https://stackoverflow.com/a/39709237/1185254"><code>__setattr__</code>/<code>__dict__</code></a>, but <code>mongoengine</code> seems to be doing some attributes magic behind the scene and I couldn't make it work. I also had an idea to subclass <code>StringField</code> and use a <a href="https://stackoverflow.com/a/63439831/1185254">metaclass</a> to wrap it's <code>__setattr__</code>, with similar effect.</p> <p>How to achieve such a behaviour?</p>
<python><mongodb><orm><attributes><mongoengine>
2024-09-06 10:23:04
1
11,449
alex
78,956,628
9,604,989
Static type checking or IDE intelligence support for a numpy array/matrix shape possible?
<p>Is it possible to have static type checking or IDE intelligence support for a numpy array/matrix shape?</p> <p>For example, if I imagine something like this:</p> <pre><code>A_MxN: NDArray(3,2) = ... B_NxM: NDArray(2,3) = ... </code></pre> <p>Even better would be:</p> <pre><code>N = 3 M = 2 A_MxN: NDArray(M,N) = ... B_NxM: NDArray(N,M) = ... </code></pre> <p>And if I assign A to B, I would like to have an IDE hint during development time (not runtime), that the shapes are different.</p> <p>Something like:</p> <pre><code>A_MxN = B_NxM Hint/Error: Declared shape 3,2 is not compatible with assigned shape 2,3 </code></pre> <p>As mentioned by @simon, this seems to be possible:</p> <pre><code>M = Literal[3] N = Literal[2] A_MxN: np.ndarray[tuple[M,N], np.dtype[np.int32]] </code></pre> <p>But if I assign an array which does not fulfill the shape requirement, the linter does not throw an error. Does someone know if there is a typechecker like mypy or pyright which supports the feature?</p>
<python><numpy><python-typing><mypy><pyright>
2024-09-06 10:04:51
2
361
tobias hassebrock
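For reference, a sketch of what such annotations look like today (hedged: whether a checker actually reports the shape mismatch depends on the numpy version and its stubs, and nothing is enforced at runtime):

```python
from typing import Literal

import numpy as np

M = Literal[3]
N = Literal[2]

Arr_MxN = np.ndarray[tuple[M, N], np.dtype[np.int32]]
Arr_NxM = np.ndarray[tuple[N, M], np.dtype[np.int32]]

def transpose(a: Arr_MxN) -> Arr_NxM:
    # a checker that understands shape parameters should accept .T here,
    # and should flag an assignment of Arr_NxM to an Arr_MxN variable
    return a.T

A: Arr_MxN = np.zeros((3, 2), dtype=np.int32)
B = transpose(A)
```

At runtime these subscripted types are plain generic aliases, so the code runs regardless of shape; the value of the annotations is entirely in what the stubs let mypy/pyright prove.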
78,956,588
6,930,340
Polars selector for columns of dtype pl.List
<p>In the <a href="https://docs.pola.rs/api/python/stable/reference/selectors.html#" rel="noreferrer">polars documentation</a> regarding selectors, there are many examples for selecting columns based on their dtypes. I am missing <code>pl.List</code></p> <p>How can I quickly select all columns of type <code>pl.List</code> within a <code>pl.DataFrame</code>?</p>
<python><python-polars>
2024-09-06 09:54:56
1
5,167
Andi
78,956,523
9,554,640
ValueError: Could not use APOC procedures. Please ensure the APOC plugin is installed in Neo4j and that 'apoc.meta.data()' is allowed in Neo4j
<p>I'm trying to use the Neo4jGraph class from the langchain_community.graphs module in my Python project to interact with a Neo4j database. My script here:</p> <pre class="lang-py prettyprint-override"><code>from langchain.chains import GraphCypherQAChain from langchain_community.graphs import Neo4jGraph from langchain_openai import ChatOpenAI enhanced_graph = Neo4jGraph( url=&quot;bolt://localhost:7687&quot;, username=&quot;neo4j&quot;, password=&quot;password&quot;, enhanced_schema=True, ) print(enhanced_graph.schema) chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=enhanced_graph, verbose=True ) chain.invoke({&quot;query&quot;: &quot;Who is Bob?&quot;}) </code></pre> <p>Error here:</p> <pre><code>ValueError: Could not use APOC procedures. Please ensure the APOC plugin is installed in Neo4j and that 'apoc.meta.data()' is allowed in Neo4j configuration neo4j.exceptions.ClientError: {code: Neo.ClientError.Procedure.ProcedureNotFound} {message: There is no procedure with the name `apoc.meta.data` registered for this database instance. Please ensure you've spelled the procedure name correctly and that the procedure is properly deployed.} </code></pre> <p>How to solve the problem?</p>
<python><neo4j><langchain><graphrag>
2024-09-06 09:36:23
1
372
Dmitri Galkin
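A configuration sketch (assuming Neo4j runs via the official Docker image; versions and credentials are placeholders): the error means the APOC plugin is simply not loaded, so the fix is on the Neo4j side, not in langchain:

```shell
# Docker: let the official image fetch and enable APOC
docker run -d -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/password \
  -e NEO4J_PLUGINS='["apoc"]' \
  neo4j:5

# Manual install instead: copy apoc-<version>.jar into $NEO4J_HOME/plugins/,
# then allow the procedures in neo4j.conf and restart:
#   dbms.security.procedures.unrestricted=apoc.*
```

After a restart, `CALL apoc.meta.data()` in Neo4j Browser should succeed, and `Neo4jGraph` should then initialise without the error.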
78,956,221
898,042
nrzi signal decoding implementation in python - whats wrong?
<p>Solving simple NRZI decoding in Python: a 0 bit means no change occurred, while a 1 bit means a change occurred in the digital signal sequence.</p> <pre><code>&quot;_&quot;: low signal '¯': high signal &quot;|&quot;: pipe signal( pipe leads to change of signal and itself pipe is not recorded) </code></pre> <p>given input: <code>_|¯|____|¯|__|¯¯¯</code></p> <p>output: 011000110100</p> <p>my code:</p> <pre><code>def nrzi(signal: str) -&gt; str: res = '' prev = '_' for i in range(len(signal)): cur = signal[i] prev = signal[i-1] #signal same as prev - no change if (prev == cur == '_') or (prev == cur == '¯'): res += '0' #signal changed because prev is pipe elif prev == '|' and cur == '_': res += '0' elif prev == '|' and cur == '¯': res += '1' return ''.join(res) signal = &quot;_|¯|____|¯|__|¯¯¯&quot; result = nrzi(signal) print(result) </code></pre> <p>it produces</p> <p>10000100100 vs the correct 011000110100</p> <p>What's wrong?</p>
<python><implementation>
2024-09-06 08:10:08
1
24,573
ERJAN
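Two issues stand out in the loop: at `i == 0`, `prev = signal[-1]` wraps around to the *last* character, and the `prev == '|'` branches invert the meaning of a pipe (a pipe always marks a change, so the symbol after it should emit `'1'` regardless of its level, yet `'|'` followed by `'_'` emits `'0'`). Comparing consecutive signal levels while skipping pipes, with the line assumed to start low (as the original `prev = '_'` suggests), gives the expected result. A sketch:

```python
def nrzi(signal: str) -> str:
    bits = []
    prev = '_'              # assume the line starts low
    for ch in signal:
        if ch == '|':       # pipes only mark transitions; they emit nothing
            continue
        # 0 = same level as before, 1 = level changed
        bits.append('0' if ch == prev else '1')
        prev = ch
    return ''.join(bits)

print(nrzi("_|¯|____|¯|__|¯¯¯"))  # 011000110100
```

With this formulation the pipes are actually redundant for decoding: the level comparison alone determines each bit.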
78,956,198
7,841,521
Open WebUI: how to create a new chat, send a question and get the answer with Python
<p>I use Open WebUI to manage different LLM models. I would like to use it as my central point for LLM management. For example, if I code a Python function to connect to a model, I just have to change the LLM model name to test another LLM.</p> <p>But, even after considering the elements here: https://myopenwebuiurl/api/v1/docs, I'm not making progress.</p> <p>Currently I have defined 2 functions:</p> <pre><code>def create_new_chat(api_url, api_key): &quot;&quot;&quot;Create a new chat and return the chat ID.&quot;&quot;&quot; headers = { 'Authorization': f'Bearer {api_key}', 'Content-Type': 'application/json' } payload = {'chat': {}} response = requests.post(f'{api_url}/chats/new', json=payload, headers=headers) return response.json()# Create a chat new_chat_response = create_new_chat(API_URL, API_KEY) </code></pre> <p>and</p> <pre><code>def send_message_to_chat(api_url, api_key, chat_id, message): &quot;&quot;&quot;Send a message to the given chat and return the server response.&quot;&quot;&quot; headers = { 'Authorization': f'Bearer {api_key}', 'Content-Type': 'application/json' } payload = { 'chat': { 'content': message, 'role': 'user', } } response = requests.post(f'{api_url}/chats/{chat_id}', json=payload, headers=headers) return response.json() </code></pre> <p>Something seems to happen because I get no error message, and when I go into the interface, the new chat exists in the list of chats, but if I select it, it is completely white and empty, as if something was not loaded.</p> <p>Any recommendation on how to create a chat, post a question and get the answer with Python?</p>
<python><large-language-model>
2024-09-06 08:03:00
0
347
lelorrain7
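An alternative sketch (hedged: the route name should be verified against your instance's API docs, and error handling here is minimal): for the stated goal of swapping models behind one function, Open WebUI also exposes an OpenAI-compatible completion endpoint, which sidesteps managing chat objects by hand entirely:

```python
import requests

def ask(api_url, api_key, model, question, timeout=60):
    """Send one question to a model and return the answer text."""
    response = requests.post(
        f"{api_url}/api/chat/completions",   # assumed OpenAI-compatible route
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        json={
            "model": model,                  # switch models by changing this
            "messages": [{"role": "user", "content": question}],
        },
        timeout=timeout,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```

If a persistent chat visible in the UI is required, the `/chats/*` endpoints likely also need the full message structure (role, content, ids) that the web client sends, which would explain the empty chat: the object exists but its message payload doesn't match what the UI expects to render.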
78,955,998
6,930,340
Add new column with multiple literal values to polars dataframe
<p>Consider the following toy example:</p> <pre><code>import polars as pl pl.Config(tbl_rows=-1) df = pl.DataFrame({&quot;group&quot;: [&quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;], &quot;value&quot;: [1, 2, 3, 4, 5]}) print(df) shape: (5, 2) ┌───────┬───────┐ │ group ┆ value │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═══════╪═══════╡ │ A ┆ 1 │ │ A ┆ 2 │ │ A ┆ 3 │ │ B ┆ 4 │ │ B ┆ 5 │ └───────┴───────┘ </code></pre> <p>Further, I have a list of indicator values, such as <code>vals=[10, 20, 30]</code>.</p> <p>I am looking for an efficient way to insert each of these values in a new column called <code>ìndicator</code> using <code>pl.lit()</code> while expanding the dataframe vertically in a way all existing rows will be repeated for every new element in <code>vals</code>.</p> <p>My current solution is to insert a new column to <code>df</code>, append it to a list and subsequently do a <code>pl.concat</code>.</p> <pre><code>lit_vals = [10, 20, 30] print(pl.concat([df.with_columns(indicator=pl.lit(val)) for val in lit_vals])) shape: (15, 3) ┌───────┬───────┬───────────┐ │ group ┆ value ┆ indicator │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i32 │ ╞═══════╪═══════╪═══════════╡ │ A ┆ 1 ┆ 10 │ │ A ┆ 2 ┆ 10 │ │ A ┆ 3 ┆ 10 │ │ B ┆ 4 ┆ 10 │ │ B ┆ 5 ┆ 10 │ │ A ┆ 1 ┆ 20 │ │ A ┆ 2 ┆ 20 │ │ A ┆ 3 ┆ 20 │ │ B ┆ 4 ┆ 20 │ │ B ┆ 5 ┆ 20 │ │ A ┆ 1 ┆ 30 │ │ A ┆ 2 ┆ 30 │ │ A ┆ 3 ┆ 30 │ │ B ┆ 4 ┆ 30 │ │ B ┆ 5 ┆ 30 │ └───────┴───────┴───────────┘ </code></pre> <p>As <code>df</code> could potentially have quite a lot of rows and columns, I am wondering if my solution is efficient in terms of speed as well as memory allocation?</p> <p>Just for my understanding, if I append a new <code>pl.DataFrame</code> to the list, will this dataframe use additional memory or will just some new pointers be created that link to the chunks in memory which hold the data of the original <code>df</code>?</p>
<python><dataframe><python-polars>
2024-09-06 07:09:08
2
5,167
Andi
78,955,800
13,344,453
I am not able to override a Django model's save method
<p>I have a django model that has multiple fields</p> <pre><code>class data(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) pj = ArrayField(models.IntegerField(), null=True, blank=True) name = models.CharField(max_length=225) explain = models.TextField(max_length=50, null=True) def save(self, *args, **kwargs): self.name=f&quot;new {self.name}&quot; super(data, self).save(*args, **kwargs) </code></pre> <p>When I save the data model, the name is not changed to &quot;new name&quot;; it is saved as &quot;name&quot;.</p> <p>I want the new naming convention to be reflected.</p>
<python><django><django-models><django-rest-framework>
2024-09-06 06:01:18
2
840
Alphonse Prakash
78,955,720
2,897,115
sqlalchemy: 'tuple' object has no attribute 'label'
<p>I have a model</p> <pre><code>class Question(Base): __tablename__ = 'messages' id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True, index=True) guid: Mapped[uuid.UUID] = mapped_column(UUID(as_uuid=True),default=uuid.uuid4,index=True, unique=True) text: Mapped[str] is_sent_by_user: Mapped[bool] = mapped_column(Boolean, default=False) chat_id = mapped_column(ForeignKey(&quot;chats.id&quot;)) chat: Mapped[&quot;Chat&quot;] = relationship(back_populates=&quot;messages&quot;) stopped: Mapped[bool] = mapped_column(default=False), related_to: Mapped[int] = mapped_column(default=None,nullable=True) </code></pre> <p><code>Question.id</code> shows</p> <pre><code>&lt;sqlalchemy.orm.attributes.InstrumentedAttribute at 0x7f3608b0d620&gt; </code></pre> <p>whereas</p> <p><code>Question.stopped</code></p> <pre><code>(&lt;sqlalchemy.orm.properties.MappedColumn at 0x7f3609101750&gt;,) </code></pre> <p>Because of the above, I am getting an error when trying to do</p> <pre><code> questions_query = ( select( Question.id.label('id'), Question.is_sent_by_user.label('is_sent_by_user'), Question.related_to.label('related_to'), Question.stopped.label('stopped'), Question.text.label('text'), Question.id.label('order_by') ) .where(Question.is_sent_by_user == True) .where(Question.chat_id == 49) ) </code></pre> <pre><code>At Question.stopped.label('stopped'), I get AttributeError: 'tuple' object has no attribute 'label' </code></pre>
<python><sqlalchemy>
2024-09-06 05:25:57
1
12,066
Santhosh
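The cause is the trailing comma after `mapped_column(default=False)` on the `stopped` line: in Python, `x = value,` makes `x` a one-element tuple, so `Question.stopped` is a tuple containing a `MappedColumn` rather than a mapped attribute — exactly what the `repr` output shows. A minimal reproduction, independent of SQLAlchemy:

```python
class Demo:
    stopped = 1,    # trailing comma -> the class attribute is the tuple (1,)
    related = 1     # no comma -> a plain value

assert Demo.stopped == (1,)
assert Demo.related == 1
```

Deleting the comma (`stopped: Mapped[bool] = mapped_column(default=False)`) should turn `Question.stopped` into a normal `InstrumentedAttribute`, making `.label('stopped')` work.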
78,955,459
302,274
Automatically create a free trial Stripe checkout
<p>We use Stripe for our checkout and have a free trial enabled. So when a user comes to the site to enable the free trial they just enter their email in the checkout.</p> <p>What I would like is to tie the start of the free trial with creating an account on my site. Since I get the user's email with account creation I would like to create the free trial programmatically and remove that step. Is that possible?</p> <p>Here is my code to create the checkout session.</p> <pre><code>def create_checkout_session(request, app_type): subscription_plan_duration = SubscriptionPlanDuration.objects.filter().first() user_subscription = UserSubscription.objects.create( subscription_plan_duration_id = subscription_plan_duration.id, user_id = request.user.id, app_type = app_type, is_active = False #True - change for 3. ) discount_coupon = DiscountCoupon.objects.filter(subscription_plan_duration_id = subscription_plan_duration.id).first() checkout_session = stripe.checkout.Session.create( line_items=[ { 'price': subscription_plan_duration.price_id, 'quantity': 1, }, ], metadata={ &quot;user_subscription_id&quot;: user_subscription.id }, mode=&quot;subscription&quot;, # discounts=[{ # 'coupon': discount_coupon.coupon_id, # }], subscription_data={ &quot;trial_period_days&quot;: 30, &quot;trial_settings&quot;: { &quot;end_behavior&quot;: { &quot;missing_payment_method&quot;: &quot;cancel&quot; } } }, payment_method_collection=&quot;if_required&quot;, allow_promotion_codes=True, success_url=settings.APP_END_POINT + '/stripe/success', cancel_url=settings.APP_END_POINT + '/stripe/cancel-subscription/' + app_type ) return redirect(checkout_session.url) </code></pre>
<python><django><stripe-payments>
2024-09-06 02:56:40
1
3,035
analyticsPierce
78,955,322
14,771,666
sampling unbalanced data frame columns
<p>If I have a data frame <code>df</code>, which has five columns: 'A', 'B', 'C', 'D', and 'E', which contains python strings. Currently, 'B', 'C', 'D', and 'E' has unbalanced unique values (i.e., some unique values have more rows than the others). How can I sample <code>df</code> so that column 'B', 'C', 'D', and 'E' have balanced number of unique values (i.e., each unique value in a specific column has the same number of rows)? I want to sample with replacement so that the resulting data frame has the same length as the original data frame, though some rows may be duplicated and some may be omitted. Thanks!</p>
<python><pandas><dataframe>
2024-09-06 01:38:29
0
368
Kaihua Hou
78,955,311
14,771,666
pytorchvideo.transforms.RandAugment import error
<p>I am trying to run <code>from pytorchvideo.transforms import RandAugment</code> in Python, but it returns the following error:</p> <pre><code>ModuleNotFoundError: No module named 'torchvision.transforms.functional_tensor' </code></pre> <p>I can run <code>import pytorchvideo</code> just fine. I tried uninstalling and reinstalling both <code>pytorchvideo</code> and <code>torchvision</code> but the problem persists. Why would this happen?</p>
<python><pytorch><torchvision>
2024-09-06 01:28:31
1
368
Kaihua Hou
78,955,298
1,429,402
Python opencv draws polygons outside of lines
<p>[edited] It appears a new bug in OpenCV causes <code>fillPoly</code>'s boundaries to exceed <code>polylines</code>'s.</p> <p>Here is some humble code to draw a red filled polygon with a blue outline.</p> <pre><code>import cv2 import numpy as np def draw_polygon(points, resolution=50): # create a blank black canvas img = np.zeros((resolution, resolution, 3), dtype=np.uint8) pts = np.array(points, np.int32) pts = pts.reshape((-1, 1, 2)) # draw a filled polygon in red (BGR) cv2.fillPoly(img, [pts], (0, 0, 255)) # draw an outline in blue (BGR) cv2.polylines(img, [pts], True, (255, 0, 0), 1) # show the image cv2.imshow(&quot;Polygon&quot;, img) cv2.waitKey(0) cv2.destroyAllWindows() # why is the infill outside the line? if __name__ == &quot;__main__&quot;: # 4 vertices of the quad (clockwise) quad = np.array([[[44, 27], [7, 37], [7, 19], [38, 19]]]) draw_polygon(quad) </code></pre> <p><strong>QUESTION</strong></p> <p>The polygon's infill appears to bleed outside of the outline (two highlighted pixels). I'm looking for a temporary solution until this bug is addressed so the infill stays completely inside the outline.</p> <p>The solution has to work with concave polygons.</p> <p><a href="https://i.sstatic.net/rEEOat4k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rEEOat4k.png" alt="enter image description here" /></a></p>
<python><opencv><graphics><2d><drawing>
2024-09-06 01:20:39
2
5,983
Fnord
78,955,112
2,192,824
How to use python regex to get the first occurrence of a substring
<p>I have the following script trying to get the first occurrence of the word &quot;symbol&quot;, but it keeps returning the last one (printing &quot;symbol in the middle and end&quot;). How can I achieve that with re.search? I need to use re.search so that I can get the contents before and after the first &quot;symbol&quot;.</p> <pre><code>import re test_string = &quot;I have three symbol but I want the first occurence of symbol instead of the symbol in the middle and end&quot; regex = &quot;[\s\S]*(symbol[\s\S]*)&quot; match = re.search(regex, test_string) if match: result = match.group(1) print(result) </code></pre>
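For reference, the greedy leading `[\s\S]*` is what drags the match to the last occurrence: it consumes as much as possible and only backtracks as far as it must. Making the prefix lazy (and capturing both sides) keeps the first occurrence; a sketch with the same sample sentence:

```python
import re

test_string = ("I have three symbol but I want the first occurence of symbol "
               "instead of the symbol in the middle and end")

# Lazy prefix: [\s\S]*? matches as little as possible, so the first
# "symbol" wins; group(1) is the text before it, group(2) the text after.
match = re.search(r"([\s\S]*?)symbol([\s\S]*)", test_string)
before, after = match.group(1), match.group(2)
print(repr(before))  # 'I have three '
```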
<python><regex>
2024-09-05 23:09:46
4
417
Ames ISU
78,955,088
4,008,485
What does '()' mean to numpy apply along axis and how does it differ from 0
<p>I was trying to get a good understanding of numpy apply along axis. Below is the code from the numpy documentation (<a href="https://numpy.org/doc/stable/reference/generated/numpy.apply_along_axis.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/reference/generated/numpy.apply_along_axis.html</a>)</p> <pre><code>import numpy as np def my_func(a): &quot;&quot;&quot;Average first and last element of a 1-D array&quot;&quot;&quot; return (a[0] + a[-1]) * 0.5 b = np.array([[1,2,3], [4,5,6], [7,8,9]]) print(np.apply_along_axis(my_func, 0, b)) #array([4., 5., 6.]) print(np.apply_along_axis(my_func, 1, b)) #array([2., 5., 8.]) </code></pre> <p>According to the webpage, the above code has similar functionality to the code below, which I took from the webpage and modified (played around with) to understand it:</p> <pre><code>arr = np.array([[1,2,3], [4,5,6], [7,8,9]]) axis = 0 def my_func(a): &quot;&quot;&quot;Average first and last element of a 1-D array&quot;&quot;&quot; print(a, a[0], a[-1]) return (a[0] + a[-1]) * 0.5 out = np.empty(arr.shape[axis+1:]) Ni, Nk = arr.shape[:axis], arr.shape[axis+1:] print(Ni) for ii in np.ndindex(Ni): for kk in np.ndindex(Nk): f = my_func(arr[ii + np.s_[:,] + kk]) Nj = f.shape for jj in np.ndindex(Nj): out[ii + jj + kk] = f[jj] #The code below may help in understanding what I was trying to figure out. #print(np.shape(np.asarray(1))) #x = np.int32(1) #print(x, type(x), x.shape) </code></pre> <p>I understand from the numpy documentation that scalars and arrays in numpy have the same attributes and methods. I am trying to understand the difference between '()' and 0. I understand that () is a tuple. See below.</p> <p>Example:</p> <p>In the code below, the first for-loop does not iterate but the second for-loop iterates once. <strong>I am trying to understand why.</strong></p> <pre><code>import numpy as np for i in np.ndindex(0): print(i) #does not run.
for i in np.ndindex(()): print(i) #runs once </code></pre> <p>In summary: Given the above context, what is the difference between () and 0?</p>
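For reference, a runnable sketch of the difference: 0 describes an axis of length zero (no positions at all), while () describes a 0-dimensional shape, which has exactly one position, indexed by the empty tuple.

```python
import numpy as np

# A shape with one axis of length 0 contains no elements,
# so ndindex produces no index tuples and the loop body never runs.
assert list(np.ndindex(0)) == []
assert np.empty(0).size == 0

# The empty shape () is 0-dimensional: it contains exactly one element
# (think np.asarray(1)), so ndindex yields one index, the empty tuple.
assert list(np.ndindex(())) == [()]
assert np.empty(()).size == 1
assert np.asarray(1).shape == ()
```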
<python><numpy><numpy-ndarray><numpy-slicing>
2024-09-05 22:57:01
2
779
rert588
78,954,956
5,454
Is there a way to configure VSCode to discover Python test files under a subdirectory?
<p>I have a project containing a Python backend and a React frontend. So my file/folder structure looks something like this:</p> <pre><code>. ├── backend │   ├── mypackage │   │ └── main.py │   ├── tests │   │   └── test_main.py │   ├── Dockerfile │   └── pyproject.toml ├── frontend │   ├── src │   │   └── App.js │   ├── Dockerfile │   └── package.json └── README.md </code></pre> <p>And my test file contains code like this:</p> <pre class="lang-py prettyprint-override"><code>from mypackage.main import my_function def test_that_my_function_works(): my_function() </code></pre> <p>When I click on the Testing tab in VSCode, I get a <strong>pytest Discovery Error</strong>, and digging into the logs shows this:</p> <pre><code>ImportError while importing test module '/home/user/myproject/backend/tests/test_main.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: ../../../.pyenv/versions/3.12.1/lib/python3.12/importlib/__init__.py:90: in import_module return _bootstrap._gcd_import(name[level:], package, level) backend/tests/test_main.py:1: in &lt;module&gt; from mypackage.main import my_function E ModuleNotFoundError: No module named 'mypackage' =========================== short test summary info ============================ ERROR backend/tests/test_main.py !!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!! ===================== no tests collected, 1 error in 0.17s ===================== </code></pre> <p>It seems Visual Studio Code is treating the root of the repository as the base of where to run pytest, when really I want it to only focus on what's in the <code>backend/</code> folder. How can I achieve this?</p>
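For reference, one approach (a sketch; the setting names are from the VS Code Python extension and may vary by extension version) is to point test discovery at the backend folder in `.vscode/settings.json`:

```json
{
  "python.testing.pytestEnabled": true,
  "python.testing.cwd": "${workspaceFolder}/backend",
  "python.testing.pytestArgs": ["tests"]
}
```

In `backend/pyproject.toml`, adding `pythonpath = ["."]` under `[tool.pytest.ini_options]` (pytest 7+) makes `mypackage` importable during collection without installing the package.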
<python><visual-studio-code>
2024-09-05 21:53:03
1
10,170
soapergem
78,954,825
50,385
How to have an asyncio StreamReader and StreamWriter that work on a buffer?
<p>For writing unit tests for IO code in the past I've used <code>BytesIO</code> instead of a real file/socket/etc. so I can test my functions on hand crafted data. As far as I can tell Python provides no async equivalent; I realize there's no real IO to do for a buffer, but you need classes that obey the same interface in order for your test code manipulating a buffer to Just Work with the same production code that normally uses files/sockets.</p> <p>I tried writing a subclass like <code>class AsyncBytesIO(asyncio.StreamReader, asyncio.StreamWriter)</code> but at runtime it crashes whenever an exception occurs because <code>asyncio</code> assumes there is a <code>_transport</code> member on my subclass, which is mentioned nowhere in the docs which makes me skeptical these are intended to be subclassed. But it also seems like the only common interface for doing asynchronous streaming reading/writing. How is this supposed to be done?</p> <p>Reproducer:</p> <pre><code>#!/usr/bin/env python import asyncio from io import BytesIO class AsyncBytesIO(asyncio.StreamReader, asyncio.StreamWriter): def __init__(self, data: bytes = b&quot;&quot;): self._buffer = BytesIO(data) self._closed = False async def read(self, n: int = -1) -&gt; bytes: return self._buffer.read(n) async def readline(self) -&gt; bytes: return self._buffer.readline() async def readexactly(self, n: int) -&gt; bytes: return self._buffer.read(n) def at_eof(self) -&gt; bool: return self._buffer.tell() == len(self._buffer.getvalue()) def write(self, data: bytes) -&gt; None: if self._closed: raise ValueError(&quot;Write operation on a closed stream&quot;) self._buffer.write(data) async def drain(self) -&gt; None: pass def close(self) -&gt; None: self._closed = True def is_closing(self) -&gt; bool: return self._closed def getvalue(self) -&gt; bytes: return self._buffer.getvalue() def seek(self, pos: int) -&gt; None: self._buffer.seek(pos) def tell(self) -&gt; int: return self._buffer.tell() def reset(self) -&gt; 
None: self._buffer.seek(0) async def test(): x = AsyncBytesIO() assert False asyncio.run(test()) </code></pre> <p>Results in:</p> <pre><code>Traceback (most recent call last): File &quot;/tmp/async_test.py&quot;, line 53, in &lt;module&gt; asyncio.run(test()) File &quot;/usr/lib/python3.12/asyncio/runners.py&quot;, line 194, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3.12/asyncio/runners.py&quot;, line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3.12/asyncio/base_events.py&quot;, line 687, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File &quot;/tmp/async_test.py&quot;, line 51, in test assert False ^^^^^ AssertionError Exception ignored in: &lt;function StreamWriter.__del__ at 0x77061961ed40&gt; Traceback (most recent call last): File &quot;/usr/lib/python3.12/asyncio/streams.py&quot;, line 411, in __del__ AttributeError: 'AsyncBytesIO' object has no attribute '_transport' </code></pre>
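For reference, at least for the reader half subclassing is unnecessary: `asyncio.StreamReader` can be constructed standalone and fed bytes directly, which covers code that only consumes a reader (the writer half normally needs a transport, e.g. a mock). A minimal sketch:

```python
import asyncio

async def parse(reader: asyncio.StreamReader) -> list:
    # "Production" code that depends only on the StreamReader interface.
    lines = []
    while not reader.at_eof():
        line = await reader.readline()
        if line:
            lines.append(line)
    return lines

async def main():
    reader = asyncio.StreamReader()      # no transport required to feed it
    reader.feed_data(b"hello\nworld\n")  # hand-crafted test data
    reader.feed_eof()
    return await parse(reader)

lines = asyncio.run(main())
print(lines)  # [b'hello\n', b'world\n']
```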
<python><async-await><stream><python-asyncio>
2024-09-05 20:57:14
0
22,294
Joseph Garvin
78,954,770
1,658,617
How can I annotate variadic generics in Python?
<p>I have a generic class:</p> <pre><code>class A[T]: pass </code></pre> <p>And a function that extracts <code>T</code> from <code>A</code>:</p> <pre><code>def func[T](arg: A[T]) -&gt; T: pass </code></pre> <p>Now my function can accept <code>*args</code> and extract all <code>Ts</code> from all of the given <code>As</code>. Each '<code>T</code>' may of course be different.</p> <p>Something along these lines:</p> <pre><code>def func[*Ts](*args: A[*Ts]) -&gt; *Ts: pass </code></pre> <p>How do I annotate it correctly?</p> <p>Unfortunately the <code>TypeVarTuple</code> does not seem to support it.</p> <p>ATM I'm using many overloads of different amount of args, and a general <code>*args: A -&gt; Any</code> for 6 or more args.</p>
<python><python-typing><variadic-functions>
2024-09-05 20:40:39
0
27,490
Bharel
78,954,687
1,232,087
Unable to open file in Databricks DBFS
<p>Without using Spark or any other external tools, I'm trying to read the following file from the <code>DBFS</code> FileStore of Databricks. But the following code in a Databricks notebook gives the error shown below. The file has only a few lines:</p> <pre><code>f = open('/dbfs/FileStore/tables/Test.csv', 'r') </code></pre> <blockquote> <p>FileNotFoundError: [Errno 2] No such file or directory: '/dbfs/FileStore/tables/Test.csv'</p> </blockquote> <p><strong>Question</strong>: What could be the cause of the issue, and how can we resolve it? This <a href="https://stackoverflow.com/a/49530874/1232087">post</a> did not help.</p> <p><strong>Remarks</strong>: I'm using <code>Databricks community edition</code>. The same file successfully loads data into df:</p> <pre><code>df = spark.read.csv(&quot;/FileStore/tables/Test.csv&quot;, header=True, inferSchema=True) </code></pre>
<python><databricks><databricks-community-edition>
2024-09-05 20:13:05
0
24,239
nam
78,954,547
2,986,153
How to display great_tables in databricks
<p>I love the control that the great_tables package offers, but in Databricks the table results are outputted in a really vanilla fashion. Is there any way to display great_tables results in their full splendor within Databricks?</p> <pre><code>from great_tables import GT GT(pl_ci) </code></pre> <p><a href="https://i.sstatic.net/1KVVoYd3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1KVVoYd3.png" alt="enter image description here" /></a></p>
<python><databricks><great-tables>
2024-09-05 19:22:28
0
3,836
Joe
78,954,432
219,159
Sending a signal from a QThread closes the program
<p>Python 3.11, PyQT6 GUI app, testing on Windows 10. I have a QThread doing some background work, and I'm trying to report progress back via a custom signal:</p> <pre><code>class TheWindow(QMainWindow): #... def on_menu_item(self): class MyThread(QThread): def __init__(self, parent): QThread.__init__(self, parent) self.progress = pyqtBoundSignal(int) def run(self): try: for i in range(1000): self.progress.emit(i) except Exception as exc: pass th = MyThread(self) th.progress.connect(lambda i:print(i)) th.start() </code></pre> <p>When the thread is invoked, the program crashes with no error message in the debug console and no exception caught. Commenting out the connect/emit lines removes the crash. What am I doing wrong?</p>
<python><qt><pyqt><pyqt6><qt-signals>
2024-09-05 18:40:18
0
61,826
Seva Alekseyev
78,954,381
825,227
How to dynamically update a bar plot in Python
<p>I have a barplot that I'd like to update dynamically using data from a dataframe.</p> <p>The original plot is based on dataframe, <code>d</code>:</p> <p><strong>d</strong></p> <pre><code>Position Operation Side Price Size 1 9 0 1 0.7289 -19 2 8 0 1 0.729 -427 3 7 0 1 0.7291 -267 4 6 0 1 0.7292 -18 5 5 0 1 0.7293 -16 6 4 0 1 0.7294 -16 7 3 0 1 0.7295 -429 8 2 0 1 0.7296 -23 9 1 0 1 0.7297 -31 10 0 0 0 0.7299 41 11 1 0 0 0.73 9 12 2 0 0 0.7301 10 13 3 0 0 0.7302 18 14 4 0 0 0.7303 16 15 5 0 0 0.7304 18 16 6 0 0 0.7305 429 17 7 0 0 0.7306 16 18 8 0 0 0.7307 268 19 9 0 0 0.7308 18 </code></pre> <p>Which, when plotted via the below returns this:</p> <pre><code>f, ax = plt.subplots() sns.set_color_codes('muted') sns.barplot(data = d[d.Side==0], x = 'Size', y = 'Price', color = 'b', orient = 'h', native_scale=True) sns.barplot(data = d[d.Side==1], x = 'Size', y = 'Price', color = 'r', orient = 'h', native_scale=True) ax.yaxis.set_major_locator(ticker.MultipleLocator(.0001)) sns.despine() </code></pre> <p><a href="https://i.sstatic.net/yrLY1da0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrLY1da0.png" alt="enter image description here" /></a></p> <p>I'd like to loop through a 2nd dataframe, <code>new_d</code>, to update the data that's plotted.</p> <p><strong>new_d</strong></p> <pre><code> Position Operation Side Price Size 34 0 1 0 0.7299 39 35 1 1 0 0.73 9 36 3 1 0 0.7302 18 37 0 1 1 0.7298 -8 38 0 1 1 0.7298 -9 39 0 1 1 0.7298 -8 40 0 1 1 0.7298 -9 41 0 1 1 0.7298 -14 42 0 1 1 0.7298 -9 43 0 2 1 0.0 0 44 9 0 1 0.7288 -17 45 0 1 1 0.7297 -29 46 1 1 1 0.7296 -23 47 9 2 1 0.0 0 48 0 0 1 0.7298 -3 49 1 1 1 0.7297 -31 50 0 1 1 0.7298 -10 51 0 1 0 0.7299 41 52 0 1 1 0.7298 -4 53 0 2 1 0.0 0 54 9 0 1 0.7288 -17 55 9 2 0 0.0 0 56 0 0 0 0.7298 2 57 0 1 0 0.7298 4 58 1 1 0 0.7299 39 59 0 1 0 0.7298 5 </code></pre> <p>Assuming I have code/logic that will update <code>d</code> from the rows of <code>new_d</code>, how can I dynamically update my plot?</p> 
<p>I've tried the below using <code>FuncAnimation</code> but think I may be missing something.</p> <pre><code>def init(): # f, ax = plt.subplots() sns.set_color_codes('muted') sns.barplot(data = d[d.Side==0], x = 'Size', y = 'Price', color = 'b', orient = 'h', native_scale=True) s = sns.barplot(data = d[d.Side==1], x = 'Size', y = 'Price', color = 'r', orient = 'h', native_scale=True) sns.despine() return s def update_d(row): if row.Operation == 1: d.loc[((d.Position==row.Position) &amp; (d.Side==row.Side)), 'Size'] = row.Size elif row.Operation == 2: idx = d.loc[((d.Position==row.Position) &amp; (d.Side==row.Side))].index d.drop(idx, inplace=True) elif row.Operation == 0: d = pd.concat([pd.DataFrame([[row.Time, row.Symbol, row.Position, row.Operation, row.Side, row.Price, row.Size]], columns=d.columns), d], ignore_index=True) d['Position'] = d.groupby('Side')['Price'].rank().astype('int').sub(1) d.sort_values('Price', inplace=True) sns.barplot(data = d[d.Side==0], x = 'Size', y = 'Price', color = 'b', orient = 'h', native_scale=True) s = sns.barplot(data = d[d.Side==1], x = 'Size', y = 'Price', color = 'r', orient = 'h', native_scale=True) return s f, ax = plt.subplots() ax.yaxis.set_major_locator(ticker.MultipleLocator(.0001)) ani = FuncAnimation(f, update_d, init_func=init, frames=new_d[20:].iterrows(), interval = 100) plt.show() </code></pre>
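For reference, a pattern that may help (a sketch with made-up data; the seaborn calls are replaced by plain `ax.barh` for brevity): since bar artists are awkward to mutate in place, clear and redraw the axes inside the update callback.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs anywhere
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import pandas as pd

# Stand-in for the order-book dataframe `d` (Price levels, signed Sizes).
d = pd.DataFrame({"Price": [0.7297, 0.7298, 0.7299], "Size": [-31, -10, 41]})

fig, ax = plt.subplots()

def update(frame):
    # ...apply one row of new_d to d here (the update_d logic)...
    ax.clear()  # wipe the previous frame's bars, then redraw everything
    colors = ["r" if s < 0 else "b" for s in d["Size"]]
    bars = ax.barh(d["Price"], d["Size"], height=0.00008, color=colors)
    return bars.patches

ani = FuncAnimation(fig, update, frames=10, interval=100)
patches = update(0)   # plt.show() (or ani.save) drives this in a real script
print(len(patches))   # 3 bars drawn
```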
<python><matplotlib><animation><seaborn><visualization>
2024-09-05 18:21:59
1
1,702
Chris
78,954,203
1,029,902
Connecting to Coinbase Sandbox API using
<p>I am trying to use Coinbase in Sandbox mode to try out a strategy.</p> <p>I created an Sandbox API using the instructions here <a href="https://docs.cdp.coinbase.com/exchange/docs/sandbox/" rel="nofollow noreferrer">https://docs.cdp.coinbase.com/exchange/docs/sandbox/</a></p> <p>However, when I use them in a Python script, they don't work. It returns an error.</p> <p>I am using a simple script</p> <pre><code>from coinbase.wallet.client import Client import os from dotenv import load_dotenv # Load environment variables from a .env file load_dotenv() # Your API credentials from Coinbase API_KEY = os.getenv(&quot;API_KEY&quot;) API_SECRET = os.getenv(&quot;API_SECRET&quot;) # Initialize the Coinbase client client = Client(API_KEY, API_SECRET) def get_cash_balance(): # Get accounts accounts = client.get_accounts() # Find and print the cash balances for account in accounts.data: if account['type'] == 'fiat': print(f&quot;Currency: {account['currency']}&quot;) print(f&quot;Balance: {account['balance']['amount']}{account['balance']['currency']}&quot;) if __name__ == &quot;__main__&quot;: get_cash_balance() </code></pre> <p>It gives me an error ending with</p> <pre><code>raise RequestsJSONDecodeError(e.msg, e.doc, e.pos) requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0) </code></pre> <p>Am I doing something wrong? How can I fix this? My keys are exactly as they were on the site.</p> <p>I even tried using <code>curl</code></p> <pre><code>curl -X GET https://api.coinbase.com/v2/accounts -u &quot;API_KEY:API_SECRET&quot; </code></pre> <p>The response from that is <code>Unauthorized</code></p>
<python><api-key><coinbase-api>
2024-09-05 17:23:03
1
557
Tendekai Muchenje
78,954,085
1,103,069
Python project with flask doesn't think I have an email column for some reason
<p>I've been trying to follow this <a href="https://www.youtube.com/watch?v=GMppyAPbLYk&amp;t=3705s" rel="nofollow noreferrer">tutorial</a>, but I changed my code slightly to understand it better, and I'm getting an error about a missing column when I try to create a new resource via a POST request. I'm utilizing <code>flask</code>, <code>flask_restful</code> and <code>flask_sqlalchemy</code></p> <p>Model definition:</p> <pre><code>class UserModel(db.Model): id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String(100), nullable=False) email = db.Column(db.String(100), nullable=False) def __repr__(self): return f&quot;User(name={self.name}, email={self.email})&quot; </code></pre> <p>User class definition:</p> <pre><code># user class class User(Resource): @marshal_with(resource_fields_user) def get(self, user_id): result = UserModel.query.get(id=user_id) return {&quot;request_type&quot;: &quot;get&quot;, &quot;data&quot;: result} # @marshal_with(resource_fields_user) def post(self): args = user_post_args.parse_args() user = UserModel(name=args[&quot;name&quot;], email=args[&quot;email&quot;]) db.session.add(user) db.session.commit() return {&quot;request_type&quot;: &quot;post&quot;, &quot;data&quot;: user}, 201 </code></pre> <p>The post request:</p> <pre><code>curl --location 'http://127.0.0.1:5000/users' \ --header 'Content-Type: application/json' \ --data-raw '{ &quot;name&quot;: &quot;Jane Doe&quot;, &quot;email&quot;: &quot;jh@testing.com&quot; }' </code></pre> <p>But when I run a post to this I get the following error:</p> <pre><code>sqlalchemy.exc.OperationalError sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) table user_model has no column named email [SQL: INSERT INTO user_model (name, email) VALUES (?, ?)] [parameters: ('Jane Doe', 'jh@testing.com')] (Background on this error at: http://sqlalche.me/e/13/e3q8) </code></pre> <p>Why does it think I don't have an email column?</p>
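For reference, the usual cause (an assumption, since it depends on how the database file was created): the SQLite table was created before the `email` field was added to the model, and `db.create_all()` never alters existing tables. A standalone sketch reproducing the symptom and the fix with plain sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Simulate the table as it was FIRST created, before `email` was added:
con.execute("CREATE TABLE user_model (id INTEGER PRIMARY KEY, name VARCHAR(100))")

# db.create_all() will not alter an existing table, so inserts now fail:
err = None
try:
    con.execute("INSERT INTO user_model (name, email) VALUES (?, ?)",
                ("Jane Doe", "jh@testing.com"))
except sqlite3.OperationalError as exc:
    err = str(exc)
print(err)  # table user_model has no column named email

# The fix is a schema migration (Flask-Migrate/Alembic in a real project),
# or simply dropping and recreating the table during development:
con.execute("ALTER TABLE user_model ADD COLUMN email VARCHAR(100)")
con.execute("INSERT INTO user_model (name, email) VALUES (?, ?)",
            ("Jane Doe", "jh@testing.com"))
print(con.execute("SELECT email FROM user_model").fetchone())  # ('jh@testing.com',)
```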
<python><flask><alchemy>
2024-09-05 16:47:37
0
1,049
Tom Bird
78,953,862
5,020,803
Issue with Mismatching GSHHS and Natural Earth Shapefiles
<p>I am trying to plot data for the region of American Samoa but I am having issues with the coastlines. I generally use the higher resolution data from the <a href="https://www.soest.hawaii.edu/pwessel/gshhg/" rel="nofollow noreferrer">Global Self-consistent, Hierarchical, High-resolution Geography Database</a> but the coastline appears shifted from the data I am plotting and from other datasets such as NaturalEarth and Google maps.</p> <p>Does anyone know the source of this offset? Is this an issue with how I am plotting the coastlines, a map projection issue that I haven't taken into consideration, an issue with the GSHHS dataset, or maybe an issue with its implementation into Cartopy?</p> <p>Below is an example code illustrating my issue with the island of American Samoa shifted to the west in GSHHS (blue).</p> <pre><code>import matplotlib.pyplot as plt import cartopy proj = cartopy.crs.PlateCarree() fig, ax = plt.subplots(subplot_kw=dict(projection=proj), figsize=(12,12)) ax.add_feature(cartopy.feature.GSHHSFeature(scale='f'), facecolor=&quot;blue&quot;) ax.add_feature(cartopy.feature.NaturalEarthFeature(&quot;physical&quot;, &quot;land&quot;, &quot;10m&quot;), ec=&quot;red&quot;, fc=&quot;yellow&quot;, lw=2, alpha=0.4) lon_min_ = -170.9 lon_max_ = -170.55 lat_min_ = -14.4 lat_max_ = -14.2 ax.set_extent([lon_min_, lon_max_, lat_min_, lat_max_],crs=proj) ax.gridlines(draw_labels=True,alpha=.4,linewidth=2, color='black', linestyle='--') plt.show() </code></pre> <p><a href="https://i.sstatic.net/19RIi9x3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19RIi9x3.png" alt="enter image description here" /></a></p>
<python><matplotlib><cartopy>
2024-09-05 15:44:48
0
3,210
BenT
78,953,656
4,508,605
AWS Glue python script create multiple files of a specific size not working
<p>I have the below <code>Python</code> script, which currently generates several gz files of size <code>4MB</code> in an <code>S3 bucket</code>; that is what <code>AWS Glue</code> creates by default. Now I want to create multiple files of a specific size, around <code>100-250MB</code>, in the <code>S3 bucket</code>. I tried the logic below in the Python script, but it did not work and still creates several gz files of size <code>4MB</code>.</p> <pre><code>import sys from awsglue.transforms import * from awsglue.utils import getResolvedOptions from pyspark.context import SparkContext from awsglue.context import GlueContext from awsglue.job import Job import datetime args = getResolvedOptions(sys.argv, ['target_BucketName', 'JOB_NAME']) sc = SparkContext() glueContext = GlueContext(sc) spark = glueContext.spark_session job = Job(glueContext) job.init(args['JOB_NAME'], args) outputbucketname = args['target_BucketName'] timestamp = datetime.datetime.now().strftime(&quot;%Y%m%d&quot;) filename = f&quot;tbd{timestamp}&quot; output_path = f&quot;{outputbucketname}/(unknown)&quot; # Script generated for node AWS Glue Data Catalog AWSGlueDataCatalog_node075257312 = glueContext.create_dynamic_frame.from_catalog(database=&quot;ardt&quot;, table_name=&quot;_ard_tbd&quot;, transformation_ctx=&quot;AWSGlueDataCatalog_node075257312&quot;) # Script generated for node Amazon S3 AmazonS3_node075284688 = glueContext.write_dynamic_frame.from_options(frame=AWSGlueDataCatalog_node075257312, connection_type=&quot;s3&quot;, format=&quot;csv&quot;, format_options={&quot;separator&quot;: &quot;|&quot;}, connection_options={&quot;path&quot;: output_path, &quot;compression&quot;: &quot;gzip&quot;, &quot;recurse&quot;: True, &quot;groupFiles&quot;: &quot;inPartition&quot;, &quot;groupSize&quot;: &quot;100000000&quot;}, transformation_ctx=&quot;AmazonS3_node075284688&quot;) job.commit() </code></pre>
<python><amazon-s3><aws-glue>
2024-09-05 14:51:44
2
4,021
Marcus
78,953,646
7,321,700
Creating JSON style API call dict from Pandas DF data
<p><strong>Scenario:</strong> I have a dataframe which contains one row of data. Each column is an year and it has the relevant value. I am trying to use the data from this df to create a json style structure to pass to an API requests.post.</p> <p><strong>Sample DF:</strong></p> <pre><code>+-------+-------+-------+-------+-------+-------+-------------+-------------+-------------+-------------+-------------+-------------+ | | 2020 | 2021 | 2022 | 2023 | 2024 | 2025 | 2026 | 2027 | 2028 | 2029 | 2030 | +-------+-------+-------+-------+-------+-------+-------------+-------------+-------------+-------------+-------------+-------------+ | Total | 23648 | 20062 | 20555 | 22037 | 26208 | 28224.88801 | 29975.87934 | 31049.01582 | 32170.68853 | 33190.35298 | 34031.93951 | +-------+-------+-------+-------+-------+-------+-------------+-------------+-------------+-------------+-------------+-------------+ </code></pre> <p><strong>Sample JSON style structure:</strong></p> <pre><code>parameters = { &quot;first_Id&quot;:first_id, &quot;version&quot;:2, &quot;overrideData&quot;:[ { &quot;period&quot;:2024, &quot;TOTAL&quot;:101.64, }, { &quot;period&quot;:2025, &quot;TOTAL&quot;:104.20, } ] } </code></pre> <p><strong>Question:</strong> What would be the best approach to use the data from the Df to fill and expand the JSON style object? 
I tried the following, but this only produces two flat lists, one for total and one for period, which does not result in the needed pairings:</p> <pre><code> parameters = {} parameters['first_Id'] = first_id parameters['version'] = 2 parameters['overrideData'] = { } parameters['overrideData']['total'] = test_input.iloc[0].tolist() parameters['overrideData']['period'] = list(test_input.columns) </code></pre> <p><strong>This results in:</strong></p> <pre><code>{ &quot;companyId&quot;: 11475, &quot;version&quot;: 2, &quot;overrideData&quot;: { &quot;TOTAL&quot;: [ 23647.999999999996, 20061.999999999996, 20555, 22036.999999999996, 26207.999999999993, 28224.88800768, 29975.879336500002, 31049.015816740008, 32170.68852577, 33190.3529754, 34031.93951397 ], &quot;period&quot;: [ 2020, 2021, 2022, 2023, 2024, 2025, 2026, 2027, 2028, 2029, 2030 ] } } </code></pre>
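For reference, one way to get the period/TOTAL pairings (a sketch; `first_id` is a placeholder value): zip the column labels with the single data row and build one dict per year.

```python
import pandas as pd

# Stand-in for the one-row dataframe (years as columns).
test_input = pd.DataFrame(
    [[23648, 20062, 28224.88801]], columns=[2020, 2021, 2025], index=["Total"]
)

first_id = 11475  # placeholder
parameters = {
    "first_Id": first_id,
    "version": 2,
    "overrideData": [
        {"period": int(year), "TOTAL": float(value)}
        for year, value in zip(test_input.columns, test_input.iloc[0])
    ],
}
print(parameters["overrideData"][0])  # {'period': 2020, 'TOTAL': 23648.0}
```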
<python><json><pandas><dataframe>
2024-09-05 14:48:49
1
1,711
DGMS89
78,953,635
462,794
How can I read standard input by lines in a Python one liner?
<p>What is the equivalent of</p> <pre><code>ls | perl -ne 'print &quot;test $_&quot;' </code></pre> <p>in python</p> <p>I just want to pass the result of a unix command to the python interpreter.</p>
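For reference, a close equivalent (a sketch, assuming `python3` is on PATH; each line from `sys.stdin` keeps its trailing newline, hence `end=""`):

```shell
ls | python3 -c 'import sys; [print("test", line, end="") for line in sys.stdin]'
```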
<python>
2024-09-05 14:45:18
1
1,244
Bussiere
78,953,424
3,965,347
How do I create a decile column in Python polars?
<p>Let's say I have a column of FICO scores. I'd like to create another column FICO_DECILE that ranks the FICO scores descending and assigns a decile group, i.e. FICO=850 would have FICO_DECILE=1, and something like FICO=360 would have FICO_DECILE=10.</p> <p>I tried:</p> <pre><code># decile rank df1 = df.with_columns( ( (pl.col('fico').rank(method='dense')/df.height*10).cast(pl.UInt32).alias('fico_decile') ) ) </code></pre> <p>But I only get <code>fico_decile</code> equal to 0 and null.</p>
<python><python-polars><ranking><quantile><dense-rank>
2024-09-05 14:00:14
1
871
kstats9pt3
78,953,324
11,725,056
How does the data splitting actually work in Multi GPU Inference for Accelerate when used in a batched inference setting?
<p>I followed the code given in this <a href="https://github.com/huggingface/accelerate/issues/2018" rel="nofollow noreferrer">github issue</a> and this <a href="https://medium.com/@geronimo7/llms-multi-gpu-inference-with-accelerate-5a8333e4c5db" rel="nofollow noreferrer">medium blog</a>.</p> <p>I ran the batched experiment with <code>process = 1</code> and <code>process = 4</code>, and it gave me results, but I'm confused because I thought the results would be in order. If they are not in order, then I won't be able to map them to the ground truth.</p> <p>For example, let's say my <code>data_length=5</code> and my <code>batch=3</code>. So if I got results <code>[[1,2,3], [4,5]]</code> for <code>process=1</code>, then I'm expecting that when using <code>process = 4</code> I should get the same results when I flatten them.</p> <p>They are coming out of order. What am I doing wrong?</p> <p><strong><em>NOTE: I used a <code>zip(text,label)</code> while passing data to processes to get the correct mapping, BUT that is not the question</em></strong></p> <p>Below is the code:</p> <pre><code>def seed_everything(seed=13): random.seed(seed) os.environ['PYTHONHASHSEED'] = str(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) set_seed(seed) torch.backends.cudnn.deterministic = True seed_everything(seed = 13) def test(): accelerator = Accelerator() accelerator.wait_for_everyone() seed_everything(seed = 13) model = load_model(model = &quot;my_model_path&quot;, lora = &quot;./my_lora_checkpoint/checkpoint-8200&quot;, device = {&quot;&quot;: accelerator.process_index}, num_labels = NUM_LABELS, merge_unload = False) with accelerator.split_between_processes(zipped_text_label) as prompts: res = {&quot;pred_probs&quot;: [], &quot;pred_labels&quot;: []} BATCH_SIZE = 10 BATCHES = [prompts[i:i + BATCH_SIZE] for i in range(0, len(prompts), BATCH_SIZE)] print(len(BATCHES[0])) pred_probs = [] pred_labels = [] for batch in tqdm(BATCHES): text_batch = [i[0]
for i in batch] score_batch = [i[1] for i in batch] with torch.no_grad(): inputs = tokenizer(text_batch,truncation= True, max_length=MAX_LENGTH, padding=&quot;max_length&quot;, return_tensors = &quot;pt&quot;).to(model.device) logits = model(**inputs).logits.cpu().to(torch.float32) probs = torch.softmax(logits, dim = 1).numpy() res[&quot;pred_probs&quot;].append(probs.tolist()) res[&quot;pred_labels&quot;].append(probs.argmax(axis = 1).tolist()) res = [res] result = gather_object(res) if accelerator.is_main_process: print(result) notebook_launcher(test, num_processes=1) </code></pre>
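For reference, whatever order the gather ends up in, a framework-free sketch of the index-carrying pattern (the accelerate calls are stand-ins here): tag every example with its global position before splitting, then sort after gathering.

```python
# Tag each example with its global index, split, "predict", gather, re-sort.
data = [f"text_{i}" for i in range(7)]
indexed = list(enumerate(data))                  # (global_idx, example)

# Stand-in for split_between_processes; deliberately interleaved so the
# gathered order is scrambled and the final sort has something to fix.
n_proc = 3
chunks = [indexed[i::n_proc] for i in range(n_proc)]

gathered = []                                    # stand-in for gather_object
for chunk in chunks:
    for idx, example in chunk:
        gathered.append((idx, example.upper()))  # stand-in for the model call

gathered.sort(key=lambda pair: pair[0])          # restore the original order
preds = [pred for _, pred in gathered]
print(preds[0], preds[-1])  # text_0 and text_6 predictions, back in order
```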
<python><pytorch><huggingface-transformers><huggingface><accelerate>
2024-09-05 13:35:26
1
4,292
Deshwal
78,953,239
4,451,315
Minimum periods in rolling mean
<p>Say I have:</p> <pre class="lang-py prettyprint-override"><code>data = { 'id': ['a', 'a', 'a', 'b', 'b', 'b', 'b'], 'd': [1,2,3,0,1,2,3], 'sales': [5,1,3,4,1,2,3], } </code></pre> <p>I would like to add a column with a rolling mean with window size 2, with <code>min_periods=2</code>, over <code>'id'</code></p> <p>In Polars, I can do:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame(data) df.with_columns(sales_rolling = pl.col('sales').rolling_mean(2).over('id')) </code></pre> <pre><code>shape: (7, 4) ┌─────┬─────┬───────┬───────────────┐ │ id ┆ d ┆ sales ┆ sales_rolling │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ f64 │ ╞═════╪═════╪═══════╪═══════════════╡ │ a ┆ 1 ┆ 5 ┆ null │ │ a ┆ 2 ┆ 1 ┆ 3.0 │ │ a ┆ 3 ┆ 3 ┆ 2.0 │ │ b ┆ 0 ┆ 4 ┆ null │ │ b ┆ 1 ┆ 1 ┆ 2.5 │ │ b ┆ 2 ┆ 2 ┆ 1.5 │ │ b ┆ 3 ┆ 3 ┆ 2.5 │ └─────┴─────┴───────┴───────────────┘ </code></pre> <p>What's the DuckDB equivalent? I've tried</p> <pre class="lang-py prettyprint-override"><code>import duckdb duckdb.sql(&quot;&quot;&quot; select *, mean(sales) over ( partition by id order by d range between 1 preceding and 0 following ) as sales_rolling from df &quot;&quot;&quot;).sort('id', 'd') </code></pre> <p>but get</p> <pre><code>┌─────────┬───────┬───────┬───────────────┐ │ id │ d │ sales │ sales_rolling │ │ varchar │ int64 │ int64 │ double │ ├─────────┼───────┼───────┼───────────────┤ │ a │ 1 │ 5 │ 5.0 │ │ a │ 2 │ 1 │ 3.0 │ │ a │ 3 │ 3 │ 2.0 │ │ b │ 0 │ 4 │ 4.0 │ │ b │ 1 │ 1 │ 2.5 │ │ b │ 2 │ 2 │ 1.5 │ │ b │ 3 │ 3 │ 2.5 │ └─────────┴───────┴───────┴───────────────┘ </code></pre> <p>This is very close, but duckdb still calculates the rolling mean when there's only a single value in the window. How can I replicate the <code>min_periods=2</code> (default) behaviour from Polars?</p>
<python><duckdb>
2024-09-05 13:17:09
1
11,062
ignoring_gravity
78,953,053
1,169,091
Convert a string literal to a float in Python
<p>I have a string: &quot;0xFF&quot;. How do I convert it to a float type?</p> <p>I tried</p> <pre><code>&quot;0xFF&quot;.fromhex() </code></pre> <p>but that method does not seem to exist on strings.</p>
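For reference, `fromhex` is a classmethod on `float` (and on `bytes`), not a string method, which is why `"0xFF".fromhex()` fails. Two standard-library routes:

```python
s = "0xFF"

# Route 1: parse as a base-16 integer, then widen to float.
as_float = float(int(s, 16))

# Route 2: float.fromhex is a classmethod on float (not str)
# and accepts a plain hex integer string as well.
also = float.fromhex(s)

print(as_float, also)  # both are 255.0
```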
<python><types>
2024-09-05 12:32:57
3
4,741
nicomp
78,952,799
4,609,089
The chatCompletion operation does not work with the specified model gpt-4o-mini
<h2>Context</h2> <p>I have below the Python code.</p> <pre><code>client = AzureOpenAI( api_key = os.getenv(&quot;AZURE_OPENAI_API_KEY&quot;), api_version = os.getenv('AZURE_OPENAI_API_VERSION'), azure_endpoint = os.getenv('AZURE_OPENAI_ENDPOINT') ) messages = [ {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: prompt} ] response = client.chat.completions.create( model=&quot;gpt-4o-mini&quot;, messages=messages, temperature=0, ) </code></pre> <h2>Issue</h2> <p>I faced the below error.</p> <pre><code>openai.BadRequestError: Error code: 400 - { &quot;error&quot;:{ &quot;code&quot;:&quot;OperationNotSupported&quot;, &quot;message&quot;:&quot;The chatCompletion operation does not work with the specified model, gpt-4o-mini. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.&quot; } } </code></pre> <p>The above code works fine when I change the model to &quot;gpt-4o&quot;</p>
<python><openai-api><azure-openai><gpt-4o-mini>
2024-09-05 11:31:11
1
833
John
78,952,771
9,318,323
Powershell script: Loop over executables
<p>I want to run a pretty basic script that holds a list of <strong>executables</strong> (different Python versions in this case) and runs them one by one in a loop as part of a command.</p> <p>Example:</p> <pre><code>$py39 = python # can be full path to an .exe if needed $py310 = python3.10 $py311 = python3.11 $pythons = $py39,$py310,$py311 Foreach ($py in $pythons) { $py --version $py -c &quot;print('hello world')&quot; } </code></pre> <p>&quot;python&quot;, &quot;python3.10&quot;, and &quot;python3.11&quot; all launch their respective Python versions correctly when I run them myself.</p> <p>When I run the script I get the errors below. How do I make it work?</p> <pre><code>+ $py --version + ~~~~~~~ Unexpected token 'version' in expression or statement. + $py -c &quot;print('hello world')&quot; + ~~ Unexpected token '-c' in expression or statement. </code></pre>
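In PowerShell itself, the usual remedy (offered as an assumption, since it is not tested here) is to store the executable names as strings and invoke them with the call operator, e.g. `& $py --version`. A Python-side equivalent of the same loop, using `subprocess`, looks like this; the list of interpreter paths is illustrative:

```python
import subprocess
import sys

# sys.executable stands in for a list of interpreter paths such as
# [r"C:\Python39\python.exe", "python3.10", "python3.11"].
pythons = [sys.executable]

for py in pythons:
    version = subprocess.run([py, "--version"], capture_output=True, text=True)
    hello = subprocess.run([py, "-c", "print('hello world')"],
                           capture_output=True, text=True)
    # Python 2 printed --version to stderr, so check both streams.
    print((version.stdout or version.stderr).strip(), "|", hello.stdout.strip())
```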
<python><powershell>
2024-09-05 11:22:56
1
354
Vitamin C
78,952,705
5,723,565
encrypt from java and decrypt from python | AES GCM encryption
<p>I wanted to encrypt a string from Java and decrypt that encrypted value in Python. using AEC GCM algorytham. below is my java code</p> <pre><code>import java.nio.ByteBuffer; import java.nio.charset.Charset; import java.nio.charset.StandardCharsets; import java.security.SecureRandom; import java.util.Arrays; import java.util.Base64; import javax.crypto.Cipher; import javax.crypto.spec.GCMParameterSpec; import javax.crypto.spec.SecretKeySpec; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class AESEncryptionUtil { public static void main(String[] args) { String encString = &quot;Hello, World!&quot;; String secKey = &quot;hellow world&quot;; String encrypted = encrypt(encString, secKey); System.out.println(&quot;Encrypted (Java): &quot; + encrypted); String decrypted = decrypt(encrypted, secKey); System.out.println(&quot;Decrypted (Java): &quot; + decrypted); } private static final Logger logger = LoggerFactory.getLogger(AESEncryption.class); private static final String ENCRYPT_ALGO = &quot;AES/GCM/NoPadding&quot;; private static final int TAG_LENGTH_BIT = 128; private static final int IV_LENGTH_BYTE = 12; private static final int SALT_LENGTH_BYTE = 16; private static final Charset UTF_8 = StandardCharsets.UTF_8; public static String encrypt(String pText, String secKey) { try { if (pText == null || pText.equals(&quot;null&quot;)) { return null; } byte[] salt = getRandomNonce(SALT_LENGTH_BYTE); byte[] iv = getRandomNonce(IV_LENGTH_BYTE); byte[] keyBytes = secKey.getBytes(StandardCharsets.UTF_16); SecretKeySpec skeySpec = new SecretKeySpec(Arrays.copyOf(keyBytes, 16), &quot;AES&quot;); Cipher cipher = Cipher.getInstance(ENCRYPT_ALGO); cipher.init(Cipher.ENCRYPT_MODE, skeySpec, new GCMParameterSpec(TAG_LENGTH_BIT, iv)); byte[] cipherText = cipher.doFinal(pText.getBytes()); byte[] cipherTextWithIvSalt = ByteBuffer.allocate(iv.length + salt.length + cipherText.length) .put(iv) .put(salt) .put(cipherText) .array(); return 
Base64.getEncoder().encodeToString(cipherTextWithIvSalt); } catch (Exception ex) { logger.error(&quot;Error while encrypting:&quot;, ex); } return null; } public static String decrypt(String cText, String secKey) { try { if (cText == null || cText.equals(&quot;null&quot;)) { return null; } byte[] decode = Base64.getDecoder().decode(cText.getBytes(UTF_8)); ByteBuffer bb = ByteBuffer.wrap(decode); byte[] iv = new byte[IV_LENGTH_BYTE]; bb.get(iv); byte[] salt = new byte[SALT_LENGTH_BYTE]; bb.get(salt); byte[] cipherText = new byte[bb.remaining()]; bb.get(cipherText); byte[] keyBytes = secKey.getBytes(StandardCharsets.UTF_16); SecretKeySpec skeySpec = new SecretKeySpec(Arrays.copyOf(keyBytes, 16), &quot;AES&quot;); Cipher cipher = Cipher.getInstance(ENCRYPT_ALGO); cipher.init(Cipher.DECRYPT_MODE, skeySpec, new GCMParameterSpec(TAG_LENGTH_BIT, iv)); byte[] plainText = cipher.doFinal(cipherText); return new String(plainText, UTF_8); } catch (Exception ex) { logger.error(&quot;Error while decrypting:&quot;, ex); } return null; } public static byte[] getRandomNonce(int numBytes) { byte[] nonce = new byte[numBytes]; new SecureRandom().nextBytes(nonce); return nonce; } } </code></pre> <p>i cannot change my Java code; I tried many ways in Python but was not able to achieve. most of the time i am getting secretKey decoding error and cryptography.exceptions.InvalidTag from the Python side. 
your suggestions are appreciated.</p> <p>Python code:</p> <pre><code>import base64 import os from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes from cryptography.hazmat.backends import default_backend def decrypt(cipher_text_base64, secret_key): cipher_text_with_iv_salt = base64.b64decode(cipher_text_base64) iv = cipher_text_with_iv_salt[:12] salt = cipher_text_with_iv_salt[12:28] tag = cipher_text_with_iv_salt[-16:] # The last 16 bytes are the tag ciphertext = cipher_text_with_iv_salt[28:-16] # Ciphertext excluding tag key = secret_key.encode('utf-16') key = key[:16].ljust(16, b'\0') decryptor = Cipher(algorithms.AES(key), modes.GCM(iv, tag), backend=default_backend()).decryptor() plaintext = decryptor.update(ciphertext) + decryptor.finalize() return plaintext.decode('utf-8') if __name__ == &quot;__main__&quot;: text_to_encrypt = &quot;Hello, World!&quot; secret_key = &quot;hellow world&quot; # Decrypt in Python decrypted_text = decrypt(&quot;encrypted_text&quot;, secret_key) print(f&quot;Decrypted (Python): {decrypted_text}&quot;) </code></pre>
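One likely culprit here (an assumption worth verifying, not a confirmed diagnosis) is the key encoding rather than the GCM layout: Java's `StandardCharsets.UTF_16` encodes big-endian with a `FE FF` byte-order mark, while Python's generic `'utf-16'` codec typically emits little-endian with a `FF FE` mark, so `Arrays.copyOf(keyBytes, 16)` and `key[:16]` produce different AES keys and tag verification fails with `InvalidTag`. A stdlib-only sketch of a Java-matching key derivation:

```python
sec_key = "hellow world"

# Python's generic codec: native byte order plus a BOM
# (FF FE, little-endian, on typical x86 hosts).
py_default = sec_key.encode("utf-16")[:16]

# Java's StandardCharsets.UTF_16 on encode: big-endian with a FE FF BOM.
java_style = (b"\xfe\xff" + sec_key.encode("utf-16-be"))[:16]

print(py_default.hex())
print(java_style.hex())
```

Feeding `java_style` (instead of the `'utf-16'`-derived `key[:16]`) into the `Cipher`/`AESGCM` construction should then match the Java side; this sketches the byte layout only, not a tested round trip.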
<python><java><spring-boot><aes><aes-gcm>
2024-09-05 11:04:58
1
422
chethankumar
78,952,621
8,037,521
What is the correct way to downsample with laspy?
<p>I am taking this example from <code>laspy</code> docs where the new <code>LasData</code> is created and written to a file:</p> <pre><code>import laspy import numpy as np # 0. Creating some dummy data my_data_xx, my_data_yy = np.meshgrid(np.linspace(-20, 20, 15), np.linspace(-20, 20, 15)) my_data_zz = my_data_xx ** 2 + 0.25 * my_data_yy ** 2 my_data = np.hstack((my_data_xx.reshape((-1, 1)), my_data_yy.reshape((-1, 1)), my_data_zz.reshape((-1, 1)))) # 1. Create a new header header = laspy.LasHeader(point_format=3, version=&quot;1.2&quot;) header.add_extra_dim(laspy.ExtraBytesParams(name=&quot;random&quot;, type=np.int32)) header.offsets = np.min(my_data, axis=0) header.scales = np.array([0.1, 0.1, 0.1]) # 2. Create a Las las = laspy.LasData(header) las.x = my_data[:, 0] las.y = my_data[:, 1] las.z = my_data[:, 2] las.random = np.random.randint(-1503, 6546, len(las.points), np.int32) las.write(&quot;new_file.las&quot;) </code></pre> <p>My use case is just slightly different: <code>my_data</code> comes itself from a laz file (let's say, each 10th point), which has its own <code>LasHeader</code>.</p> <p>I have seen the possibility to create new <code>LasData</code> based on the existing header as:</p> <pre><code>header = copy(las.header) d_las = laspy.LasData(header) </code></pre> <p>However, then I get unmatching array dimensions error due to the fact (I suppose) that <code>point_count</code> in the old header does not match the new data.</p> <p><strong>Question is then:</strong> if I create a laz by taking 10th point of the already-existing laz, should I manually recompute <code>offsets</code> and <code>scales</code> as in example below &amp; manually adjust the <code>point_count</code> in the header? Or is there some more elegant way which updates those automatically based on the new x/y/z data I provide?</p>
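One note, offered as a sketch rather than a definitive laspy answer: for a strict subset of the original points, the original `offsets` and `scales` remain valid, since the subset's coordinate range is contained in the original's, so recomputing them is optional. What must agree with the data is the point count, which recent laspy versions recompute when writing (worth confirming against the laspy docs for your version). If you do want tight offsets for the sampled points, the arithmetic is just a per-axis minimum; the helper below is pure Python and illustrative:

```python
def downsample_meta(points, stride=10):
    """Take every `stride`-th point and recompute header-style metadata."""
    sampled = points[::stride]
    # Per-axis minimum, mirroring `header.offsets = np.min(my_data, axis=0)`.
    offsets = [min(axis_values) for axis_values in zip(*sampled)]
    return sampled, offsets, len(sampled)  # len(sampled) is the new point_count
```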
<python><laspy>
2024-09-05 10:44:38
3
1,277
Valeria
78,952,338
478,676
In a flet app I cannot access an assets file; I receive a PathAccessException each time
<p>I'm using/learning flet v0.23.2 and every time I access (in read mode) a file on the /assets directory (the path is correct and the directory exists) I get this error:</p> <pre><code>PathAccessException: Cannot open file, path = 'mypath/assets/icon.png' (OS Error: Operation not permitted, errno = 1) </code></pre> <p>This is my code:</p> <pre><code>import flet as ft def main(page: ft.Page): page.title = &quot;Images Example&quot; page.theme_mode = ft.ThemeMode.LIGHT page.padding = 50 page.update() img = ft.Image( src=f&quot;/assets/icon.png&quot;, width=100, height=100, fit=ft.ImageFit.CONTAIN, ) images = ft.Row(expand=1, wrap=False, scroll=&quot;always&quot;) page.add(img, images) page.update() ft.app(main) </code></pre> <p>I tried to run the app in these ways:</p> <pre><code>flet run myapp.py flet run -a &lt;path to assets&gt; myapp.py </code></pre> <p>and I tried with the app call is in this way:</p> <pre><code>... img = ft.Image( src=f&quot;icon.png&quot;, width=100, height=100, fit=ft.ImageFit.CONTAIN, ) ... flet.app(target=main, assets_dir=&quot;assets&quot;) </code></pre> <p>I'm using a MAC OS and the assets directory and files are readable.</p>
<python><flutter><file><flet>
2024-09-05 09:42:02
1
487
Alessandra
78,952,264
375,666
Blending an inflated equirectilinear image with another camera image
<p>I have been struggling with this problem for a week and cannot find a solution or even an approach; I am starting to wonder whether it is impossible. I have provided all the information below. I have the following fisheye-distorted image, and I am trying to enhance the fisheye inflated equirectilinear image.</p> <p>The fisheye-distorted image is shown first, and next is the fisheye inflated equirectilinear image. The purpose: I am doing NeRF, and the output for the second image does not have all the details and has a lot of bad pixels, as shown in the figure. So the point is to blend (or similar) to obtain a better image in terms of details and clarity. The engine that is used: <a href="https://github.com/fbriggs/lifecast_public/blob/main/nerf/source/lifecast_nerf_lib.cc#L1069" rel="nofollow noreferrer">https://github.com/fbriggs/lifecast_public/blob/main/nerf/source/lifecast_nerf_lib.cc#L1069</a> The paper describing their approach: <a href="https://lifecast.ai/baking_nerf_to_ldi.pdf" rel="nofollow noreferrer">https://lifecast.ai/baking_nerf_to_ldi.pdf</a></p> <p>The equation for generating their distorted image</p> <p><strong>This is the output from the script provided in the post</strong> <a href="https://i.sstatic.net/v8vvqQlo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8vvqQlo.png" alt="enter image description here" /></a></p> <p>My approach: I tried to warp the camera image to have a distortion similar to the target image, plus feature matching and blending, but the result is really not promising. This is the output: <a href="https://i.sstatic.net/rLZplrkZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rLZplrkZ.png" alt="enter image description here" /></a></p> <p>I'm looking for a C++ or a Python solution (Python for trial).
Here is my attempt and MVC</p> <pre><code>import cv2 import numpy as np camera_image = cv2.imread('camera_0001 (1).tif') undistorted_layer = cv2.imread('fused_bgra_000001.jpg') height, width = camera_image.shape[:2] undistorted_layer = cv2.resize(undistorted_layer, (width, height)) beta = 0.7 gamma = 3 S = 1.7 # Scaling factor r90 = S * (width / 2) # Adjust r90 with scaling map_x = np.zeros((height, width), dtype=np.float32) map_y = np.zeros((height, width), dtype=np.float32) for y in range(height): for x in range(width): nx = (x - width / 2) / r90 ny = (y - height / 2) / r90 r = np.sqrt(nx**2 + ny**2) # Apply the inflated equiangular transformation phi = (np.pi / 2) * (beta * r + (1 - beta) * (r ** gamma)) if r != 0: new_x = (phi / r) * nx * r90 + width / 2 new_y = (phi / r) * ny * r90 + height / 2 else: new_x = width / 2 new_y = height / 2 map_x[y, x] = new_x map_y[y, x] = new_y destination_image = cv2.remap(camera_image, map_x, map_y, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT) cv2.imwrite('destination_image.png', destination_image) sift = cv2.SIFT_create() keypoints1, descriptors1 = sift.detectAndCompute(undistorted_layer, None) keypoints2, descriptors2 = sift.detectAndCompute(destination_image, None) FLANN_INDEX_KDTREE = 1 index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5) search_params = dict(checks=50) flann = cv2.FlannBasedMatcher(index_params, search_params) matches = flann.knnMatch(descriptors1, descriptors2, k=2) # Apply Lowe's ratio test good_matches = [] for m, n in matches: if m.distance &lt; 0.7 * n.distance: good_matches.append(m) matched_image = cv2.drawMatches(undistorted_layer, keypoints1, destination_image, keypoints2, good_matches, None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS) cv2.imwrite('matched_image.png', matched_image) points1 = np.float32([keypoints1[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2) points2 = np.float32([keypoints2[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2) H, 
mask = cv2.findHomography(points1, points2, cv2.RANSAC, 5.0) aligned_layer = cv2.warpPerspective(undistorted_layer, H, (destination_image.shape[1], destination_image.shape[0])) cv2.imwrite('aligned_layer.png', aligned_layer) # Blend the images blended = cv2.addWeighted(destination_image, 0.5, aligned_layer, 0.5, 0) # Save and display the result cv2.imwrite('blended_image.png', blended) cv2.imshow('Blended Image', blended) cv2.waitKey(0) cv2.destroyAllWindows() </code></pre> <p><strong>This image is the one that I want to correct and to have more details using the camera image (the next image)</strong> <a href="https://i.sstatic.net/b3SArPUr.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b3SArPUr.jpg" alt="enter image description here" /></a></p> <p><strong>This is the input image</strong></p> <p><a href="https://i.sstatic.net/iY7CD6j8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iY7CD6j8.png" alt="enter image description here" /></a> expected output: Notice the clarity and the details of the image compared to synsethized one from NERF</p> <p><strong>This is expected output that are two layers, I just need one layer of the two, as you see the image has higher details, no missing things</strong></p> <p><a href="https://i.sstatic.net/3KSo7rpl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3KSo7rpl.png" alt="enter image description here" /></a></p>
<python><c++><opencv><computer-vision><camera-calibration>
2024-09-05 09:25:55
1
1,919
Andre Ahmed
78,952,082
1,117,789
python docx processing encounters ValueError: WD_COLOR_INDEX has no XML mapping for 'none'
<p>I have googled this error and found no one else who has encountered this ValueError before. As you can see from the traceback below, the error is triggered by my code line <code>bg_color = run.font.highlight_color</code>. I guess the <code>python-docx</code> library encounters some case that it cannot handle? Maybe the docx contains some <code>WD_COLOR_INDEX</code> value that python-docx does not recognize.</p> <p>If python-docx encounters a place where there is no font.highlight_color, it should just return None to me instead of trying to get a translation value from some XML mapping.</p> <p>How can I fix this?</p> <pre><code>Traceback (most recent call last): File &quot;XXX.py&quot;, line 377, in &lt;module&gt; main() ... bg_color = run.font.highlight_color File &quot;/home/xxx/.local/lib/python3.8/site-packages/docx/text/font.py&quot;, line 139, in highlight_color return rPr.highlight_val File &quot;/home/xxx/.local/lib/python3.8/site-packages/docx/oxml/text/font.py&quot;, line 183, in highlight_val return highlight.val File &quot;/home/xxx/.local/lib/python3.8/site-packages/docx/oxml/xmlchemy.py&quot;, line 254, in get_attr_value return self._simple_type.from_xml(attr_str_value) File &quot;/home/xxx/.local/lib/python3.8/site-packages/docx/enum/base.py&quot;, line 64, in from_xml raise ValueError(f&quot;{cls.__name__} has no XML mapping for '{xml_value}'&quot;) ValueError: WD_COLOR_INDEX has no XML mapping for 'none' </code></pre>
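Until the library maps `w:highlight val="none"` itself (whether newer python-docx releases already do is worth checking upstream), a pragmatic workaround is a small wrapper that treats the unmapped value as "no highlight":

```python
def safe_highlight_color(run):
    """Return the run's highlight color, or None when python-docx
    cannot map the underlying XML value (e.g. w:highlight val="none")."""
    try:
        return run.font.highlight_color
    except ValueError:
        # e.g. ValueError: WD_COLOR_INDEX has no XML mapping for 'none'
        return None
```

In the processing loop, `bg_color = safe_highlight_color(run)` then replaces the direct attribute access.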
<python><python-3.x><docx><python-docx>
2024-09-05 08:44:23
1
5,297
Allan Ruin
78,951,985
2,050,158
How to add extra graphical information to a ridge plot
<p>My data consists of several nested categories, for each category I am able to generate a stacked density plot such as the one <a href="https://stackoverflow.com/questions/78812350/how-to-plot-a-mean-line-on-kdeplot-for-each-variable/78812598?noredirect=1#comment138985507_78812598">illustrated here</a></p> <p>Since I have several such density plots each having data in the x domain 0 till 100. For each stacked density plot I would like a single ridge plot. The end result would be a plot of Ridge plots where each row is a single stacked density plot. Is this possible?</p> <p>Due to the nature of the ridge plots of having each plot obscuring the previous plot, I think the area under the curves of the stacked density plots may be misinterpreted by the observer as some section of the curve may be hidden by the next ridge plot. Hence I would like to drop the idea of having a stacked density plot in each ridge plot. But I would like to plot each variable as a ridge, but this time to include the mean and the standard deviation lines and have the area under the curve between both standard deviation lines shaded.</p> <p>As requested (by JohanC), below is the code I would like to seek assistance on. 
Somehow I am unable to get rid of the &quot;Density&quot; label on the y-axis.</p> <pre><code># seaborn ridge plots with penguins dataset import logging; import pandas as pd; import pandas; import matplotlib.pyplot as plt; import numpy as np; #!pip install seaborn; import seaborn as sns; LOG_FORMAT=(&quot;%(levelname) -5s time:%(asctime)s [%(funcName) &quot;&quot;-5s %(lineno) -5d]: %(message)s&quot;); logging.basicConfig(level=logging.INFO, format=LOG_FORMAT); LOGGER = logging.getLogger(__name__); logger_obj: logging.Logger=LOGGER; my_df = sns.load_dataset(&quot;penguins&quot;); sns.set_theme(style=&quot;white&quot;, rc={&quot;axes.facecolor&quot;: (1, 1, 1, 1)});#background transparency import errno; def mkdir_p(path): if(not(os.path.exists(path) and os.path.isdir(path))): try: os.makedirs(path,exist_ok=True); except OSError as exc: # Python &gt;2.5 if exc.errno == errno.EEXIST and os.path.isdir(path): pass; else: raise exc; def generate_plot( logger_obj: logging.Logger ,my_df: pandas.DataFrame ,sample_size: int ,axs2 ): my_df2 = my_df.copy(deep=True); species_list: list=list(my_df2[&quot;species&quot;].unique()); my_df3: pd.DataFrame(); sample_size2: int=sample_size; for i2, species in enumerate(species_list): species_record_count=len(my_df2[my_df2[&quot;species&quot;]==species]); flipper_length_mm_sum=my_df2[(my_df2[&quot;species&quot;]==species)][&quot;flipper_length_mm&quot;].sum(); logger_obj.info(&quot;species is :'{0}', count is:{1}, flipper_length_mm_sum is:{2}&quot;.format(species, species_record_count, flipper_length_mm_sum)); if sample_size2&gt;species_record_count: sample_size2=species_record_count; for i2, species in enumerate(species_list): my_df4=my_df2[my_df2[&quot;species&quot;]==species].sample(sample_size2); species_record_count=len(my_df4); flipper_length_mm_sum=my_df4[&quot;flipper_length_mm&quot;].sum(); logger_obj.info(&quot;species is :'{0}', count is:{1}, flipper_length_mm_sum is:{2}&quot;.format(species, species_record_count, 
flipper_length_mm_sum)); if i2==0: my_df3=my_df4[:]; else: my_df3=pd.concat([my_df3, my_df4], ignore_index=True); if 1==1: sns.set_theme(style=&quot;white&quot;, rc={&quot;axes.facecolor&quot;: (0, 0, 0, 0), 'axes.linewidth':2}); palette = sns.color_palette(&quot;Set2&quot;, 12); g = sns.FacetGrid(data=my_df3, palette=palette, row=&quot;species&quot;, hue=&quot;species&quot;, aspect=9, height=1.2) sns.set_theme(style=&quot;white&quot;, rc={&quot;axes.facecolor&quot;: (0, 0, 0, 0)}); g.map_dataframe(sns.kdeplot, x=&quot;flipper_length_mm&quot;, fill=True, alpha=1); g.map_dataframe(sns.kdeplot, x=&quot;flipper_length_mm&quot;, color=&quot;white&quot;); def label_f(x, color, label): ax2=plt.gca(); ax2.text(0, .2, label, color=&quot;black&quot;, fontsize=13, ha=&quot;left&quot;, va=&quot;center&quot;, transform=ax2.transAxes); g.map(label_f, &quot;species&quot;); g.fig.subplots_adjust(hspace=-.5); g.set_titles(&quot;&quot;); g.set(yticks=[], xlabel=&quot;flipper_length_mm&quot;); g.set_titles(col_template=&quot;&quot;, row_template=&quot;&quot;); g.despine(left=True); image_png_fn: str=&quot;images/penguins.ridge_plot/sample_day_feature.flipper_length_mm.all_species.png&quot;; logger_obj.info(&quot;image_png_fn is :'{0}'&quot;.format(image_png_fn)); mkdir_p(os.path.abspath(os.path.join(image_png_fn, os.pardir))); plt.savefig(image_png_fn); image_png_fn=None; sample_size: int=30000; generate_plot( logger_obj ,my_df ,sample_size ,None ); </code></pre> <p><a href="https://i.sstatic.net/v8Kk7zTo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8Kk7zTo.png" alt="enter image description here" /></a></p>
<python><matplotlib><seaborn>
2024-09-05 08:20:32
1
503
Allan K
78,951,893
6,658,422
Seaborn graph in streamlit does not let you customize theme
<p>I would like to modify the theme of a seaborn objects graph displayed through streamlit. However, it seems that the graph only uses the standard theme, disregarding the <code>.theme()</code> part.</p> <p>The following code in a ipython notebook</p> <pre><code>import seaborn.objects as so import seaborn as sns import pandas as pd import numpy as np df = pd.DataFrame({'x': np.arange(100), 'y': np.sin(np.arange(100)/10)}) theme_dict = {**sns.axes_style(&quot;darkgrid&quot;), &quot;grid.linestyle&quot;: &quot;--&quot;} so.Plot(df, x='x', y='y').add(so.Bars()).theme(theme_dict) </code></pre> <p>generates this graph: <a href="https://i.sstatic.net/Ixb2Q0tW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ixb2Q0tW.png" alt="enter image description here" /></a></p> <p>while the equivalent code for streamlit:</p> <pre><code>import streamlit as st import seaborn as sns import seaborn.objects as so import matplotlib.pyplot as plt import pandas as pd import numpy as np def main(): fig, ax = plt.subplots(1, 1) theme_dict = {**sns.axes_style(&quot;darkgrid&quot;), &quot;grid.linestyle&quot;: &quot;--&quot;} df = pd.DataFrame({&quot;x&quot;: np.arange(100), &quot;y&quot;: np.sin(np.arange(100) / 10)}) so.Plot(df, x=&quot;x&quot;, y=&quot;y&quot;).add(so.Bars()).on(ax).theme(theme_dict).plot() st.pyplot(fig) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>with <code>streamlit run main.py</code> generates the following in the browser: <a href="https://i.sstatic.net/jyzrBLWF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jyzrBLWF.png" alt="enter image description here" /></a> without the theme. It does not matter if I move the <code>.theme()</code> further up in the chain.</p> <p>Any suggestion on how to do this correctly?</p> <pre><code>streamlit 1.38.0 pyhd8ed1ab_0 conda-forge seaborn 0.13.2 hd8ed1ab_2 conda-forge seaborn-base 0.13.2 pyhd8ed1ab_2 conda-forge </code></pre>
<python><seaborn><streamlit><seaborn-objects>
2024-09-05 07:58:53
0
2,350
divingTobi
78,951,740
10,912,170
Remove Exec Command After Execution Or Run in Local Thread
<p>I've created a function in Python to execute a generic script via <code>exec()</code>. It works perfectly, but it creates the functions globally. I need to create them locally, run them, and then remove them.</p> <p>My example script:</p> <pre><code>def demo_handler(request): ... return None </code></pre> <p>My executor function is:</p> <pre><code>def execute(body: ExecutorCommand): try: exec(body.script, globals()) result = demo_handler(body.request) return {&quot;success&quot;: True, &quot;data&quot;: result} except Exception as ex: return {&quot;success&quot;: False, &quot;error&quot;: {&quot;type&quot;: type(ex).__name__, &quot;message&quot;: repr(ex)}} </code></pre> <p>When I send a first request with a script, it works as expected. But then, even if I don't send a script, I can still reach the previously executed functions, which I don't want. Is there a way to remove previously executed functions?</p>
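A common pattern for this (sketched here with an illustrative `handler_name` parameter; the `ExecutorCommand` shape from the question is simplified away) is to pass `exec()` a throwaway dict instead of `globals()`, so every request runs in a fresh namespace that is discarded when the call returns:

```python
def execute_script(script_source, request, handler_name="demo_handler"):
    namespace = {}                  # fresh, per-call namespace instead of globals()
    exec(script_source, namespace)  # definitions land only in `namespace`
    handler = namespace[handler_name]
    result = handler(request)
    # `namespace` (and every function defined in it) becomes unreachable
    # when this function returns, so nothing persists between requests.
    return result
```

Because nothing is written into `globals()`, a later call without a script can no longer reach the earlier handlers.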
<python><exec>
2024-09-05 07:14:02
1
1,291
Sha
78,951,394
4,042,083
Alternative to --install-option in newer python versions
<p>When using python 3.8 I installed PyMQI with the following command:</p> <pre><code>pip install pymqi --install-option server </code></pre> <p>In order to make <code>pip install</code> run as if I had instead issued this command:</p> <pre><code>python setup.py build server </code></pre> <p>I see that in python 3.12 (and perhaps earlier?) the <code>--install-option</code> flag has been removed.</p> <p>What pip command can I now use to achieve the same?</p> <p>Attempting to replace <code>--install-option</code> with <code>--config-settings</code> results in the following error:</p> <pre><code>Arguments to --config-settings must be of the form KEY=VAL </code></pre>
<python><python-3.x>
2024-09-05 05:22:19
0
7,534
Morag Hughson
78,951,392
4,669,984
How to run a python script without opening docker pseudo tty?
<p>I have a Docker image that contains the GDAL libraries and my Python code using them. I start the container with '<code>docker run -v path/to/mydata:/data -it localhost/software --name xyz</code>' and then run my Python code as '<code>python Processes.py arguments</code>', and it runs perfectly fine.</p> <p>When I instead try '<code>docker run -v path/to/mydata:/data localhost/software --name xyz /path/to/python/inside/image/python /path/to/mysoftware/Processes.py arguments</code>', it says that it cannot find <code>libgdal.so</code>.</p> <p>Edit:<br /> Sample of the issue. <a href="https://i.sstatic.net/65DZOXvB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65DZOXvB.png" alt="SampleOfIssue" /></a></p>
<python><docker><docker-run>
2024-09-05 05:21:42
1
3,136
Tarun Maganti
78,951,308
5,976,033
Gemini Status 500 when requesting cache
<p>I'm receiving this status 500 when trying to cache a large text corpus (~1.8M tokens):</p> <p><strong>Script</strong>:</p> <pre><code>import vertexai from vertexai.preview import caching from vertexai.generative_models import Part from google.oauth2.service_account import Credentials def create_new_gemini_cache(display_name, content, time_to_live): try: cached_content = caching.CachedContent.create( model_name='gemini-1.5-pro-001', contents=content, ttl=time_to_live ) logging.info(f'#### Content cached') return cached_content except Exception as e: handle_exception(e) # Get creds gcp_credentials = Credentials.from_service_account_info(gemini_cred_json, scopes=['https://www.googleapis.com/auth/cloud-platform']) # Init project vertexai.init(project=GCP_PROJECT_ID, location=GCP_REGION, credentials=gcp_credentials) # Get content content = [Part.from_text(&lt;1.8M token text&gt;)] # Assume large text here. # Define cache ttl time_to_live = datetime.timedelta(minutes=60) # Request gemini cache content cache_content = create_new_gemini_cache(content, time_to_live) </code></pre> <p><strong>Error</strong>:</p> <pre><code>#### Error details: { &quot;message&quot;: &quot;500 Internal error encountered.&quot;, &quot;status_code&quot;: null, &quot;traceback&quot;: [ &quot;Traceback (most recent call last):&quot;, &quot; File \&quot;/home/site/wwwroot/.python_packages/lib/site-packages/google/api_core/grpc_helpers.py\&quot;, line 76, in error_remapped_callable&quot;, &quot; return callable_(*args, **kwargs)&quot;, &quot; ^^^^^^^^^^^^^^^^^^^^^^^^^^&quot;, &quot; File \&quot;/home/site/wwwroot/.python_packages/lib/site-packages/grpc/_channel.py\&quot;, line 1181, in __call__&quot;, &quot; return _end_unary_response_blocking(state, call, False, None)&quot;, &quot; ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^&quot;, &quot; File \&quot;/home/site/wwwroot/.python_packages/lib/site-packages/grpc/_channel.py\&quot;, line 1006, in _end_unary_response_blocking&quot;, &quot; 
raise _InactiveRpcError(state) # pytype: disable=not-instantiable&quot;, &quot; ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^&quot;, &quot;grpc._channel._InactiveRpcError: &lt;_InactiveRpcError of RPC that terminated with:&quot;, &quot;\tstatus = StatusCode.INTERNAL&quot;, &quot;\tdetails = \&quot;Internal error encountered.\&quot;&quot;, &quot;\tdebug_error_string = \&quot;UNKNOWN:Error received from peer ipv4:142.251.33.106:443 {grpc_message:\&quot;Internal error encountered.\&quot;, grpc_status:13, created_time:\&quot;2024-09-05T03:51:37.113071157+00:00\&quot;}\&quot;&quot;, &quot;&gt;&quot;, &quot;&quot;, &quot;The above exception was the direct cause of the following exception:&quot;, &quot;&quot;, &quot;Traceback (most recent call last):&quot;, &quot; File \&quot;/home/site/wwwroot/shared/gemini_utils.py\&quot;, line 94, in create_new_gemini_cache&quot;, &quot; cached_content = caching.CachedContent.create(&quot;, &quot; ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^&quot;, &quot; File \&quot;/home/site/wwwroot/.python_packages/lib/site-packages/vertexai/caching/_caching.py\&quot;, line 228, in create&quot;, &quot; cached_content_resource = client.create_cached_content(request)&quot;, &quot; ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^&quot;, &quot; File \&quot;/home/site/wwwroot/.python_packages/lib/site-packages/google/cloud/aiplatform_v1beta1/services/gen_ai_cache_service/client.py\&quot;, line 825, in create_cached_content&quot;, &quot; response = rpc(&quot;, &quot; ^^^^&quot;, &quot; File \&quot;/home/site/wwwroot/.python_packages/lib/site-packages/google/api_core/gapic_v1/method.py\&quot;, line 131, in __call__&quot;, &quot; return wrapped_func(*args, **kwargs)&quot;, &quot; ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^&quot;, &quot; File \&quot;/home/site/wwwroot/.python_packages/lib/site-packages/google/api_core/grpc_helpers.py\&quot;, line 78, in error_remapped_callable&quot;, &quot; raise exceptions.from_grpc_error(exc) from exc&quot;, &quot;google.api_core.exceptions.InternalServerError: 500 Internal 
error encountered.&quot; ] } </code></pre> <p>Is my request incorrectly setup or is the service down? How do I know moving forward?</p> <hr /> <p><strong>EDIT 1</strong>:</p> <p>Tried this <a href="https://github.com/lavinigam-gcp/gemini-assets/blob/main/context_caching.ipynb" rel="nofollow noreferrer">example</a> in Google Collab. Still getting status 500. I've now tried with MANY combinations of environment (local, Azure Function, Google Collab) and text length (very short (under the minimum cache), over the minimum, over 1M tokens...all with the same <code>InternalServerError: 500 Internal error encountered.</code> error.</p> <pre><code>_InactiveRpcError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/google/api_core/grpc_helpers.py in error_remapped_callable(*args, **kwargs) 75 try: ---&gt; 76 return callable_(*args, **kwargs) 77 except grpc.RpcError as exc: 6 frames _InactiveRpcError: &lt;_InactiveRpcError of RPC that terminated with: status = StatusCode.INTERNAL details = &quot;Internal error encountered.&quot; debug_error_string = &quot;UNKNOWN:Error received from peer ipv4:172.217.7.42:443 {created_time:&quot;2024-09-05T16:06:16.178297904+00:00&quot;, grpc_status:13, grpc_message:&quot;Internal error encountered.&quot;}&quot; &gt; The above exception was the direct cause of the following exception: InternalServerError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/google/api_core/grpc_helpers.py in error_remapped_callable(*args, **kwargs) 76 return callable_(*args, **kwargs) 77 except grpc.RpcError as exc: ---&gt; 78 raise exceptions.from_grpc_error(exc) from exc 79 80 return error_remapped_callable InternalServerError: 500 Internal error encountered. 
</code></pre> <hr /> <p><strong>Edit 2</strong>:</p> <ul> <li>So far I've tried: <ul> <li>Different environments: <ul> <li><a href="https://github.com/SeaDude/gemini-context-caching-errors/blob/main/grrr.ipynb" rel="nofollow noreferrer">Google Collab</a>: Failed</li> <li>Azure Function: Failed</li> <li>Local machine: Failed</li> </ul> </li> <li>Different context lengths: <ul> <li><code>content</code> # A string of 1M tokens, 1.8M tokens, etc.</li> <li><code>content[:100000]</code></li> <li><code>this is a test</code></li> </ul> </li> <li>Content with and without <code>[]</code> surrounding</li> <li>Different models: <ul> <li><code>flash-001</code> and <code>pro-001</code></li> </ul> </li> <li><code>preview</code> and non-preview versions of <code>GenerativeModels</code> library</li> <li>Posting in StackOverflow, Google Bug Tracker and Google AI Dev Forum.</li> <li>Different types of text: <ul> <li>My own text</li> <li><a href="https://www.gutenberg.org/cache/epub/84/pg84.txt" rel="nofollow noreferrer">Project Gutenberg text</a></li> </ul> </li> <li>Text stored in different places: <ul> <li>Azure Blob Storage: Failed</li> <li>Google Buckets: Failed</li> <li>Local: Failed</li> </ul> </li> <li>Various versions of text: <ul> <li>Raw: Failed</li> <li>Cleaned up: Failed # Removed non-ascii chars, removed <code>\n\r</code>, etc.</li> </ul> </li> </ul> </li> </ul> <p>Can someone else verify they are able to use <code>vertex</code> Context Caching?</p>
<python><google-cloud-vertex-ai>
2024-09-05 04:39:25
1
4,456
SeaDude
78,951,239
17,802,067
Neovim command to create print statement for Python function arguments
<p>As the title suggests, I had the idea to create a command that can automatically write a print statement for all arguments to the current function I am in. Initially, I wanted to do this for Python functions - as I find myself doing this manually as a debug tool.</p> <p>For example, take the Python function:</p> <pre><code>def my_func(a: int, b: str) -&gt; None: ... </code></pre> <p>If my cursor is within the function definition and I run the command <code>PythonPrintParams</code>, I want to write the following to the first line of the function:</p> <pre><code>print( f&quot;a={a}, &quot; f&quot;b={b}&quot; ) </code></pre> <p>I want to write a Lua function that can do this using treesitter and then setup a Neovim autocommand. So far I have:</p> <pre><code>local ts_utils = require('nvim-treesitter.ts_utils') local function list_fn_params() local node = ts_utils.get_node_at_cursor() while node and node:type() ~= 'function_definition' do node = node:parent() end if not node then print(&quot;Could not find params, cursor not inside a function.&quot;) return end local param_nodes = node:field('parameters') if param_nodes then for i, param_node in ipairs(param_nodes) do print(&quot; Node&quot;, i, &quot;type:&quot;, param_node:type()) print(&quot; Node text:&quot;, vim.inspect(ts_utils.get_node_text(param_node))) end end end vim.api.nvim_create_user_command('PythonPrintParams', function() list_fn_params() end, {}) </code></pre> <p>This ends up printing:</p> <pre><code> Node 1 type: parameters Node text: { &quot;(&quot;, &quot; a: int, b: str&quot;, &quot;)&quot; } </code></pre> <p>This is a good start but then requires some manual parsing that I think will get messy. How can I parse deeper to get the variable and then write my print statement? 
Is there an alternate route with the LSP API that makes identifying args easier?</p> <p>I did spend time with the py-tree-sitter docs <a href="https://github.com/tree-sitter/py-tree-sitter/tree/master" rel="nofollow noreferrer">https://github.com/tree-sitter/py-tree-sitter/tree/master</a>.</p>
<python><parsing><lua><neovim><treesitter>
2024-09-05 03:49:23
1
324
Razumov
78,951,225
188,331
JAX TypeError: 'Device' object is not callable
<p>I found a piece of JAX code from a few years ago.</p> <pre><code>import jax import jax.random as rand device_cpu = None def do_on_cpu(f): global device_cpu if device_cpu is None: device_cpu = jax.devices('cpu')[0] def inner(*args, **kwargs): with jax.default_device(device_cpu): return f(*args, **kwargs) return inner seed2key = do_on_cpu(rand.PRNGKey) seed2key.__doc__ = '''Same as `jax.random.PRNGKey`, but always produces the result on CPU.''' </code></pre> <p>and I call it with:</p> <pre><code>key = seed2key(42) </code></pre> <p>But it results in <code>TypeError</code>:</p> <pre><code>TypeError Traceback (most recent call last) Cell In[2], line 14 ---&gt; 14 key = seed2key(42) File ~/bert-tokenizer-cantonese/lib/seed2key.py:12, in do_on_cpu.&lt;locals&gt;.inner(*args, **kwargs) 11 def inner(*args, **kwargs): ---&gt; 12 with jax.default_device(device_cpu): 13 return f(*args, **kwargs) TypeError: 'Device' object is not callable </code></pre> <p>I think the function had breaking changes after a version upgrade.</p> <p>Current versions:</p> <ul> <li>jax 0.4.31</li> <li>jaxlib 0.4.31</li> </ul> <p>(latest version at the moment of writing)</p> <p>How can I change the code to avoid the error? Thanks.</p>
<python><jax>
2024-09-05 03:41:51
1
54,395
Raptor
78,951,224
4,844,789
Use single arg to pass a list or two separate values in python function
<p>How to use a single argument to pass multiple variables in Python function.</p> <p>[code]</p> <pre><code>def f(*args): a, b, _ = args print(a, b) f(10, 2) # two separate values f([10, 2]) # list </code></pre> <p>Output:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;&lt;stdin&gt;&quot;, line 2, in f ValueError: not enough values to unpack (expected 3, got 2) </code></pre> <p>Expected Output:</p> <pre><code>10 2 10 2 </code></pre>
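One common pattern (a sketch, not the only option; the two-value unpacking mirrors the example above) is to normalize `args` before unpacking, so a single list or tuple argument is treated as the positional arguments:

```python
def f(*args):
    # If the caller passed a single list or tuple, treat its items as the
    # positional arguments; otherwise use the arguments as given.
    if len(args) == 1 and isinstance(args[0], (list, tuple)):
        args = tuple(args[0])
    a, b = args
    print(a, b)
    return a, b

f(10, 2)    # two separate values
f([10, 2])  # a list
```

Both calls then print `10 2`, matching the expected output.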
<python><oop><argument-unpacking>
2024-09-05 03:41:41
3
395
Jeyakumar Kasi
78,951,033
176,269
Display UTC time in local time with timezone
<p>I'm attempting to display some UTC time in ISO format (<code>2024-09-05T00:00:00</code>), as local time (<code>2024-09-05T09:00:00+0900</code>). It's been challenging to achieve this in an elegant way, ideally only using native Python libraries.</p> <p>To get a date-time string I do</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import time &gt;&gt;&gt; import datetime &gt;&gt;&gt; import pytz &gt;&gt;&gt; time.tzname ('JST', 'JST') &gt;&gt;&gt; utc_time = &quot;2024-09-05T00:00:00&quot; &gt;&gt;&gt; utc_dt = datetime.datetime.fromisoformat(utc_time) &gt;&gt;&gt; utc_dt.strftime(&quot;%Y-%m-%dT%H:%M:%S%z&quot;) '2024-09-05T00:00:00' </code></pre> <p>However, this is <strong>not</strong> UTC time! This is localized:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; utc_dt.astimezone(pytz.timezone(&quot;Asia/Tokyo&quot;)).strftime(&quot;%Y-%m-%dT%H:%M:%S%z&quot;) '2024-09-05T00:00:00+0900' </code></pre> <p>My expectation was that <code>fromisoformat</code> would assume UTC when timezone information was not provided, meaning that the above would also have 9 hours <strong>added</strong> after the <code>astimezone</code> operation (<code>2024-09-05T09:00:00+0900</code>). Instead it <strong>remains the same</strong>.</p> <p>But why? This is not intuitive! If the timezone is not provided then one would most probably assume UTC IMO. How to fix this in an elegant way? The best I could come up with is to add a <code>Z</code> to my input:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; utc_time = &quot;2024-09-05T01:09:11&quot; + &quot;Z&quot; &gt;&gt;&gt; utc_dt = datetime.datetime.fromisoformat(utc_time) &gt;&gt;&gt; utc_dt.strftime(&quot;%Y-%m-%dT%H:%M:%S%z&quot;) '2024-09-05T00:00:00+0000' &gt;&gt;&gt; utc_dt.astimezone(pytz.timezone(&quot;Asia/Tokyo&quot;)).strftime(&quot;%Y-%m-%dT%H:%M:%S%z&quot;) '2024-09-05T09:00:00+0900' </code></pre> <p>This is what I wanted, but feels like a hack... 
is there a better/elegant way to do this?</p> <hr /> <p>And then there's this:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; datetime.datetime.fromisoformat(utc_time).replace(tzinfo=pytz.timezone(&quot;Asia/Tokyo&quot;)).strftime(&quot;%Y-%m-%dT%H:%M:%S%z&quot;) '2024-09-05T00:00:00+0919' </code></pre> <p>🤯</p>
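For what it's worth, a sketch of the stdlib-only route (no pytz): `astimezone` on a naive datetime assumes *system local* time, so the fix is to tag the parsed value as UTC explicitly with `replace(tzinfo=timezone.utc)`. The `+0919` oddity comes from `replace(tzinfo=pytz.timezone(...))` attaching the zone's historical local-mean-time offset, which is a documented pytz pitfall.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

utc_time = "2024-09-05T00:00:00"
# fromisoformat() returns a *naive* datetime; mark it as UTC explicitly.
utc_dt = datetime.fromisoformat(utc_time).replace(tzinfo=timezone.utc)
local = utc_dt.astimezone(ZoneInfo("Asia/Tokyo")).strftime("%Y-%m-%dT%H:%M:%S%z")
print(local)  # 2024-09-05T09:00:00+0900
```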
<python><datetime><timezone>
2024-09-05 01:48:19
1
2,391
Solenoid
78,950,993
10,570,372
Should timer capture full time for list creation?
<p>Following this <a href="https://stackoverflow.com/questions/33987060/python-context-manager-that-measures-time">Python context manager that measures time</a> I tried myself.</p> <pre class="lang-py prettyprint-override"><code>from time import perf_counter class catchtime: def __enter__(self): self.start = perf_counter() return self def __exit__(self, type, value, traceback): self.time = perf_counter() - self.start self.readout = f&quot;Time: {self.time:.3f} seconds&quot; print(self.readout) with catchtime() as timer: a = [1] * 1000000000 print(1) print(timer.readout) </code></pre> <p>The thing is even after timing is finished, the python program does not stop executing until a few seconds later. I read that is related to memory and garbage collection. Can anyone explain to me this and if the timer should actually capture the full time?</p>
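A sketch that makes the likely cause visible: the `with` block does capture the allocation (`perf_counter` spans the list creation), but the extra seconds afterwards are the interpreter *deallocating* the billion-element list, which happens outside any timed region, at interpreter shutdown in the example above. Timing an explicit `del` on a smaller list shows that teardown has its own cost:

```python
import time

start = time.perf_counter()
a = [1] * 10_000_000           # much smaller than 1_000_000_000, same effect
create_time = time.perf_counter() - start

start = time.perf_counter()
del a                          # freeing the list also takes time
delete_time = time.perf_counter() - start

print(f"create: {create_time:.4f}s  delete: {delete_time:.4f}s")
```

So the timer is correct for what it wraps; it simply never sees the cleanup that runs after the `with` block ends.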
<python>
2024-09-05 01:12:25
1
1,043
ilovewt
78,950,848
16,462,878
get size of PNG from bytes
<p>I am trying to extract the size of a PNG image from a datastream</p> <p>Consider the starting data of the stream</p> <pre><code>137 80 78 71 13 10 26 10 0 0 0 13 73 72 68 82 0 0 2 84 0 0 3 74 8 2 0 0 0 195 81 71 33 0 0 0 ... ^ ^ ^ ^ ^ ^ </code></pre> <p>which contains the following information</p> <ul> <li>signature: <code>137 80 78 71 13 10 26 10</code></li> <li><em>IHDR</em> chunk of: <ul> <li>length <code>0 0 0 13</code></li> <li>type <code>73 72 68 82</code></li> <li>data <code>0 0 2 84 0 0 3 74 8 2 0 0 0</code></li> <li>crc: <code>195 81 71 33</code></li> </ul> </li> <li>then a new chunk starts.</li> </ul> <p>The information about the size of the image is encoded in the 8 bytes of the <em>data</em> chunk:</p> <ul> <li>width <code>0 0 2 84</code> or in bytes <code>b'\x00\x00\x02T'</code></li> <li>height <code>0 0 3 74</code> or in bytes <code>b'\x00\x00\x03J'</code>.</li> </ul> <p>I know that the image has a width of <code>596</code> px and a height of <code>842</code> px but I cannot figure out how to compute the actual size of the image.</p> <p>PS: the values are given in <em>Python</em>, and here is the datastream in binary form <code>b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x02T\x00\x00\x03J\x08\x02\x00\x00\x00\xc3QG!\x00\x00\x00\tpHY'</code></p>
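For the record, a sketch of the decoding step: width and height are big-endian unsigned 32-bit integers at byte offsets 16-19 and 20-23 (8-byte signature + 4-byte chunk length + 4-byte chunk type), so `struct.unpack` recovers them directly, e.g. 0x0254 = 2*256 + 84 = 596:

```python
import struct

data = b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x02T\x00\x00\x03J\x08\x02\x00\x00\x00\xc3QG!\x00\x00\x00\tpHY'
# ">II" = two big-endian unsigned 32-bit ints: width, then height.
width, height = struct.unpack(">II", data[16:24])
print(width, height)  # 596 842
```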
<python><size><png><chunks>
2024-09-04 23:17:28
1
5,264
cards
78,950,667
11,330,010
Group elements in dataframe and show them in chronological order
<p>Consider the following dataframe, where <code>Date</code> is in the format <code>DD-MM-YYYY</code>:</p> <pre class="lang-none prettyprint-override"><code>Date Time Table 01-10-2000 13:00:03 B 01-10-2000 13:00:04 A 01-10-2000 13:00:05 B 01-10-2000 13:00:06 A 01-10-2000 13:00:07 B 01-10-2000 13:00:08 A </code></pre> <p>How can I 1) group the observations by <code>Table</code>, 2) sort the rows according to <code>Date</code> and <code>Time</code> within each group, 3) show the groups in chronological order according to <code>Date</code> and <code>Time</code> of their first observation?</p> <pre class="lang-none prettyprint-override"><code>Date Time Table 01-10-2000 13:00:03 B 01-10-2000 13:00:05 B 01-10-2000 13:00:07 B 01-10-2000 13:00:04 A 01-10-2000 13:00:06 A 01-10-2000 13:00:08 A </code></pre> <hr /> <p>Input data:</p> <pre><code>data = { 'Date': ['01-10-2000', '01-10-2000', '01-10-2000', '01-10-2000', '01-10-2000', '01-10-2000'], 'Time': ['13:00:03', '13:00:04', '13:00:05', '13:00:06', '13:00:07', '13:00:08'], 'Table': ['B', 'A', 'B', 'A', 'B', 'A'] } df = pd.DataFrame(data) </code></pre>
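A possible approach (a sketch using the input data above): build a proper datetime, attach each group's earliest timestamp via `groupby(...).transform('min')`, then sort on that group-start value first and the row timestamp second:

```python
import pandas as pd

data = {
    'Date': ['01-10-2000', '01-10-2000', '01-10-2000', '01-10-2000', '01-10-2000', '01-10-2000'],
    'Time': ['13:00:03', '13:00:04', '13:00:05', '13:00:06', '13:00:07', '13:00:08'],
    'Table': ['B', 'A', 'B', 'A', 'B', 'A'],
}
df = pd.DataFrame(data)

ts = pd.to_datetime(df['Date'] + ' ' + df['Time'], format='%d-%m-%Y %H:%M:%S')
group_start = ts.groupby(df['Table']).transform('min')  # first timestamp per Table

out = (df.assign(_ts=ts, _start=group_start)
         .sort_values(['_start', '_ts'])
         .drop(columns=['_ts', '_start'])
         .reset_index(drop=True))
print(out)
```

This yields all B rows (whose group starts at 13:00:03) before all A rows, each group internally in time order.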
<python><pandas><dataframe><sorting><group-by>
2024-09-04 21:51:12
3
407
NC520
78,950,546
7,674,262
Can't mock an AWS S3 Client
<p>I am trying to mock a call to the <strong>get_paginator</strong> function on the Python AWS S3 Client. Here is my production code:</p> <p><em>handler.py</em></p> <pre><code>import boto3 class RealClass: def __init__(self): self.s3_client = boto3.client(&quot;s3&quot;) def get_unprocessed_files(self) -&gt; list[str]: paginator = self.s3_client.get_paginator(&quot;list_objects_v2&quot;) operation_parameters = {&quot;Bucket&quot;: self.bronze_bucket, &quot;Prefix&quot;: self.prefix} page_iterator = paginator.paginate(**operation_parameters) un_processed_files = [] for page in page_iterator: for obj in page.get(&quot;Contents&quot;, []): key = obj[&quot;Key&quot;] if key.endswith(&quot;.content.txt&quot;) or key.endswith(&quot;.metadata.json&quot;): un_processed_files.append(key) return un_processed_files </code></pre> <p><em>test_handler.py</em></p> <pre><code>import unittest from unittest.mock import patch, MagicMock from handler import RealClass class TestRealClass(unittest.TestCase): def setUp(self) -&gt; None: self.real = RealClass() @patch(&quot;boto3.client&quot;) def test_get_unprocessed_files(self, mock_boto3_client): response = [ { &quot;Contents&quot;: [ {&quot;Key&quot;: &quot;files/1001000284.txt&quot;}, ] } ] # What to do here? result = self.real.get_unprocessed_files() self.assertIsInstance(result, list) self.assertTrue(len(result) &gt; 0) self.assertTrue(result[0].find(&quot;1001000284&quot;) &gt; -1) </code></pre> <p>All I get is <em>The provided token has expired</em>, which means I guess the real functions aren't being mocked.</p> <p>Thank you!</p>
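One likely culprit (a sketch, not a verified diagnosis of this exact setup): `RealClass()` is constructed in `setUp`, which runs *before* the `@patch` decorator takes effect, so `boto3.client("s3")` is the real call, hence the expired-token error. The fix is to construct the object while the patch is active and wire up the paginator chain on the mock. The stand-in class, bucket, and prefix below are assumptions so the sketch runs without AWS; in the real test you would construct `RealClass()` inside a test method decorated with `@patch("handler.boto3.client")` (patch where the name is looked up):

```python
from unittest.mock import MagicMock

class RealClass:
    # Trimmed stand-in for handler.RealClass so this sketch is self-contained.
    def __init__(self, s3_client, bronze_bucket="my-bucket", prefix="files/"):
        self.s3_client = s3_client
        self.bronze_bucket = bronze_bucket
        self.prefix = prefix

    def get_unprocessed_files(self):
        paginator = self.s3_client.get_paginator("list_objects_v2")
        pages = paginator.paginate(Bucket=self.bronze_bucket, Prefix=self.prefix)
        return [obj["Key"]
                for page in pages
                for obj in page.get("Contents", [])
                if obj["Key"].endswith((".content.txt", ".metadata.json"))]

# Wire the chain: client.get_paginator(...).paginate(...) -> list of pages.
mock_s3 = MagicMock()
mock_s3.get_paginator.return_value.paginate.return_value = [
    {"Contents": [{"Key": "files/1001000284.content.txt"},
                  {"Key": "files/ignore.csv"}]}
]
result = RealClass(mock_s3).get_unprocessed_files()
print(result)  # ['files/1001000284.content.txt']
```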
<python><amazon-s3><mocking><boto3><python-unittest>
2024-09-04 20:58:31
1
442
NeoFahrenheit
78,950,520
2,287,458
Use format specifier to convert float/int column in polars dataframe to string
<p>I have this code:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({'size': [34.2399, 1232.22, -479.1]}) df.with_columns(pl.format('{:,.2f}', pl.col('size'))) </code></pre> <p>But it fails:</p> <pre><code>ValueError - Traceback, line 3 2 df = pl.DataFrame({'size': [34.2399, 1232.22, -479.1]}) ----&gt; 3 df.with_columns(pl.format('{:,.2f}', pl.col('size'))) File polars\functions\as_datatype.py:718, in format(f_string, *args) 717 msg = &quot;number of placeholders should equal the number of arguments&quot; --&gt; 718 raise ValueError(msg) ValueError: number of placeholders should equal the number of arguments </code></pre> <p>How can I format a <code>float</code> or <code>int</code> column using a format specifier like <code>'{:,.2f}'</code>?</p>
<python><format><python-polars>
2024-09-04 20:49:48
2
3,591
Phil-ZXX
78,950,432
6,580,080
Is there a scenario where `foo in list(bar)` cannot be replaced by `foo in bar`?
<p>I'm digging into a codebase containing thousands of occurrences of <code>foo in list(bar)</code>, e.g.:</p> <ul> <li><p>as a boolean expression:</p> <pre class="lang-py prettyprint-override"><code>if foo in list(bar) or ...: ... </code></pre> </li> <li><p>in a for loop:</p> <pre class="lang-py prettyprint-override"><code>for foo in list(bar): ... </code></pre> </li> <li><p>in a generator expression:</p> <pre class="lang-py prettyprint-override"><code>&quot;,&quot;.join(str(foo) for foo in list(bar)) </code></pre> </li> </ul> <p>Is there a scenario (like a given version of Python, a known behavior with a type checker, etc.) where <code>foo in list(bar)</code> is not just a memory-expensive version of <code>foo in bar</code>? What am I missing here?</p>
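There is at least one semantic difference: `foo in bar` dispatches to `bar.__contains__` when it exists, while `foo in list(bar)` builds the list via `bar.__iter__` and then does a linear scan. So any object whose `__contains__` disagrees with its iteration behaves differently (membership tests on a partially consumed iterator are another case, since `in` on an iterator consumes it only up to the match). A minimal, deliberately contrived sketch:

```python
class Weird:
    # Iteration and membership deliberately disagree.
    def __iter__(self):
        return iter([1, 2, 3])

    def __contains__(self, item):
        return item == 99

w = Weird()
r1 = 99 in w        # True  -- uses Weird.__contains__
r2 = 99 in list(w)  # False -- the list is built from __iter__
print(r1, r2)
```

For well-behaved containers (lists, sets, dicts, tuples) the two forms agree, and `foo in bar` avoids the extra copy.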
<python>
2024-09-04 20:18:09
5
1,177
ebonnal
78,950,364
10,053,485
Abstract Base Class property setter absence not preventing Class instantiation
<p>I'm trying to get abstract properties to work, enforcing property getter &amp; setter definitions in downstream classes.</p> <pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod class BaseABC(ABC): @property @abstractmethod def x(self): pass @x.setter @abstractmethod def x(self, value): pass class MyClass(BaseABC): def __init__(self, value): self._x = value @property def x(self): return self._x # @x.setter # def x(self, val): # self._x = val obj = MyClass(10) print(obj.x) obj.x = 20 print(obj.x) </code></pre> <p>Having read <a href="https://docs.python.org/3/library/abc.html#abc.abstractproperty" rel="nofollow noreferrer">the documentation</a>, it seems to indicate the above should trigger a <code>TypeError</code> when the class is being built, but it only triggers an <code>AttributeError</code> once the attribute is being set.</p> <p>Why does the absent setter, explicitly defined in <code>BaseABC</code> through an <code>@abstractmethod</code>, not trigger the expected TypeError? How does one ensure a setter <em>is</em> required in the daughter class?</p>
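As to the "why": at instantiation time, the ABC machinery only checks that each name in `__abstractmethods__` resolves to a non-abstract attribute. The getter-only `property` in the subclass satisfies the name `x`, so no `TypeError` fires. One way to actually enforce the setter is an explicit check in `__init_subclass__` (a sketch; the hard-coded name `x` could be generalized by iterating over the base's abstract names):

```python
from abc import ABC, abstractmethod

class BaseABC(ABC):
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Reject any subclass that redefines 'x' as a getter-only property.
        prop = cls.__dict__.get("x")
        if isinstance(prop, property) and prop.fset is None:
            raise TypeError(f"{cls.__name__} defines property 'x' without a setter")

    @property
    @abstractmethod
    def x(self): ...

    @x.setter
    @abstractmethod
    def x(self, value): ...

try:
    class Broken(BaseABC):      # getter only -> rejected at class-creation time
        @property
        def x(self):
            return self._x
except TypeError as exc:
    error_message = str(exc)

print(error_message)  # Broken defines property 'x' without a setter
```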
<python><class><properties><abstract-class>
2024-09-04 19:58:08
2
408
Floriancitt
78,950,354
6,440,589
How to merge two xml files at a specific level
<p>I want to merge two xml files using Python:</p> <p><strong>File1.xml</strong></p> <pre><code>&lt;?xml version='1.0' encoding='ASCII'?&gt; &lt;MyData&gt; &lt;Elements&gt; &lt;Element&gt; &lt;ElementID&gt;15&lt;/ElementID&gt; &lt;/Element&gt; &lt;/Elements&gt; &lt;/MyData&gt; </code></pre> <p>And <strong>File2.xml</strong></p> <pre><code>&lt;?xml version='1.0' encoding='ASCII'?&gt; &lt;MyData&gt; &lt;Elements&gt; &lt;Element&gt; &lt;ElementID&gt;16&lt;/ElementID&gt; &lt;/Element&gt; &lt;/Elements&gt; &lt;/MyData&gt; </code></pre> <p>I can use the approach suggested in this <a href="https://medium.com/@problemsolvingcode/merge-two-xml-files-in-python-b6ea8c7478d6#:%7E:text=Use%20the%20getroot()%20method,root1%E2%80%9D%20and%20%E2%80%9Croot2%E2%80%9D.&amp;text=Now%2C%20To%20merge%20the%20root2,use%20the%20extend()%20method.&amp;text=Finally%2C%20use%20the%20write(),xml%E2%80%9D." rel="nofollow noreferrer">Medium post</a>:</p> <pre><code>import xml.etree.ElementTree as ET tree1 = ET.parse('File1.xml') tree2 = ET.parse('File2.xml') root1 = tree1.getroot() root2 = tree2.getroot() root1.extend(root2) tree1.write('merged_files.xml') </code></pre> <p>This returns:</p> <pre><code>&lt;MyData&gt; &lt;Elements&gt; &lt;Element&gt; &lt;ElementID&gt;15&lt;/ElementID&gt; &lt;/Element&gt; &lt;/Elements&gt; &lt;Elements&gt; &lt;Element&gt; &lt;ElementID&gt;16&lt;/ElementID&gt; &lt;/Element&gt; &lt;/Elements&gt; &lt;/MyData&gt; </code></pre> <p><strong>But how can I merge files at a given &quot;level&quot;, <em>e.g.</em> Elements?</strong></p> <p>I would like to obtain:</p> <pre><code>&lt;MyData&gt; &lt;Elements&gt; &lt;Element&gt; &lt;ElementID&gt;15&lt;/ElementID&gt; &lt;/Element&gt; &lt;Element&gt; &lt;ElementID&gt;16&lt;/ElementID&gt; &lt;/Element&gt; &lt;/Elements&gt; &lt;/MyData&gt; </code></pre>
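A sketch of the level-specific merge: instead of extending the root, locate the `<Elements>` node in the first tree and append the `<Element>` children from the second tree into it (shown with in-memory strings so it runs standalone; with files you would `ET.parse(...)` exactly as above and finish with `tree1.write(...)`):

```python
import xml.etree.ElementTree as ET

xml1 = "<MyData><Elements><Element><ElementID>15</ElementID></Element></Elements></MyData>"
xml2 = "<MyData><Elements><Element><ElementID>16</ElementID></Element></Elements></MyData>"

root1 = ET.fromstring(xml1)
root2 = ET.fromstring(xml2)

# Merge at the <Elements> level rather than at the root.
target = root1.find('Elements')
for element in root2.find('Elements'):
    target.append(element)

ids = [e.findtext('ElementID') for e in root1.iter('Element')]
print(ids)  # ['15', '16']
```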
<python><xml><merge><elementtree>
2024-09-04 19:55:20
2
4,770
Sheldon
78,950,320
11,628,353
How to get all the files in OneDrive account using the graph SDK?
<p>I'm trying to use a service account to pull all files from a OneDrive business account using the MS Graph Python SDK.</p> <pre><code>import asyncio from msgraph import GraphServiceClient from azure.identity import ClientSecretCredential microsoft_tenant_id = '123abc' client_id = '123abc' client_secret = '123abc' SCOPES = ['https://graph.microsoft.com/.default'] credential = ClientSecretCredential(microsoft_tenant_id, client_id, client_secret) graph_client = GraphServiceClient(credential, SCOPES) user_id = 'myemail@companyname.com' async def get_drive_count(): # What do I use after .drives? response = await graph_client.users.by_user_id(user_id).drives... # not sure what to use next asyncio.run(get_drive_count()) </code></pre> <p>I can't find any examples on how to use the graph client to pull OneDrive files.</p> <p>I've tried using <code>.root.children.get()</code> but the SDK doesn't have any of those methods.</p> <p>Does anyone know how to pull all OneDrive files using their SDK?</p>
<python><microsoft-graph-api><onedrive><microsoft-graph-sdks>
2024-09-04 19:44:12
2
897
WHOATEMYNOODLES
78,950,275
219,153
How to get the list of colors used by a default color cycler in Matplotlib?
<p>I would like to explicitly assign colors from <code>colorList</code> to consecutive points on a graph and have them identical to what implicit use of color cycler would produce. For example:</p> <pre><code>for i, c in enumerate(colorList): plt.scatter(i, i, color=c) </code></pre> <p>would produce the same colors as:</p> <pre><code>for i, _ in enumerate(colorList): plt.scatter(i, i) </code></pre> <p>How to get <code>colorList</code>?</p>
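If it helps, the colors the cycler draws from live in the `axes.prop_cycle` rcParam, and `by_key()` exposes them as a plain list (the hex values shown assume the default Matplotlib style is in effect):

```python
import matplotlib.pyplot as plt

# The active property cycle, as plain lists keyed by property name.
colorList = plt.rcParams['axes.prop_cycle'].by_key()['color']
print(colorList)  # ['#1f77b4', '#ff7f0e', ...] with the default style
```

The first loop in the question can then index `colorList` explicitly and reproduce the implicit cycling (modulo wrap-around once the list is exhausted).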
<python><matplotlib><colors>
2024-09-04 19:29:33
0
8,585
Paul Jurczak
78,950,183
1,889,297
Python hvplot Explorer limit of 10,000 points
<p>Using the Python hvplot Explorer gives a nuisance line plot when using over 10,000 points. Is that a bug? Can I configure the threshold value?</p> <pre><code>import numpy as np import pandas as pd import hvplot.pandas N=1001 Range = 10 DF = [] for n in range(Range): x = np.linspace(0.0, 6.4, num=N) y = np.sin(x) + n/10 df = pd.DataFrame({'x': x, 'y': y}) df['#'] = n DF.append(df) DF = pd.concat(DF) print(f'param number= {N*Range}') DF.hvplot.explorer(x='x', y='y', by = ['#'], kind='line') </code></pre> <p>When the number of points is &lt;10,000 <a href="https://i.sstatic.net/GPumXtuQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPumXtuQ.png" alt="enter image description here" /></a></p> <p>When the number of points is &gt;10,000 <a href="https://i.sstatic.net/bZBSwjDU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZBSwjDU.png" alt="enter image description here" /></a></p>
<python><hvplot>
2024-09-04 18:57:49
1
504
user1889297
78,950,123
1,271,079
Creating an sqlalchemy hybrid_property that reads into a property dictionary value
<p>I have a property (<code>parsing</code> in the sample below) that returns a dictionary. I want to be able to filter query results based on a value inside that dictionary.</p> <p>I tried to follow <a href="https://stackoverflow.com/a/49990926/1271079">this response</a>.</p> <p>I have the following (simplified) code:</p> <pre><code>class Document(Base): @property def parsing(self) -&gt; Optional[dict]: # very simplified code, this involves more python code and can't easily be a sqlalchemy expression if not self.a_dictionary: return None return self.a_dictionary @hybrid_method def get_info(self) -&gt; Optional[str]: if not self.parsing: return None return self.parsing[&quot;info&quot;] @get_info.expression def get_info(self) -&gt; String: return cast(self.parsing[&quot;info&quot;], String) </code></pre> <p>But when I try to use the expression in a query, something like:</p> <pre><code>query = Document.query(...).filter(Document.get_info().in_(info_list)) </code></pre> <p>I get that error: <code>'property' object is not subscriptable</code> caused by the <code>self.parsing[&quot;info&quot;]</code> bit.</p> <p>Are hybrid property and expression really what I should use? How can I fix that?</p>
<python><sqlalchemy><orm>
2024-09-04 18:38:16
0
807
azerty
78,950,043
391,161
Is it possible to change the pip config used by bazel?
<p>This is a follow-up to my <a href="https://stackoverflow.com/questions/78946447/is-it-possible-to-change-the-index-url-for-fetching-rules-python-itself-in-baz">previous question</a>.</p> <p>After resolving that error, I now get the following error, which is another case where a URL needs to be rewritten, but this time inside the pip invoked by bazel:</p> <pre><code>pip._vendor.requests.exceptions.SSLError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Max retries exceeded with url: /packages/31/80/3a54838c3fb461f6fec263ebf3a3a41771bd05190238de3486aae8540c36/jinja2-3.1.4-py3-none-any.whl (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)'))) Traceback (most recent call last): File &quot;&lt;frozen runpy&gt;&quot;, line 198, in _run_module_as_main File &quot;&lt;frozen runpy&gt;&quot;, line 88, in _run_code File &quot;/home/ubuntu/.cache/bazel/_bazel_ubuntu/b4e0fd0e207e6fdf5e33997b6741cf2d/external/rules_python/python/pip_install/tools/wheel_installer/wheel_installer.py&quot;, line 205, in &lt;module&gt; main() File &quot;/home/ubuntu/.cache/bazel/_bazel_ubuntu/b4e0fd0e207e6fdf5e33997b6741cf2d/external/rules_python/python/pip_install/tools/wheel_installer/wheel_installer.py&quot;, line 190, in main subprocess.run(pip_args, check=True, env=env) File &quot;/home/ubuntu/.cache/bazel/_bazel_ubuntu/b4e0fd0e207e6fdf5e33997b6741cf2d/external/python3_11_x86_64-unknown-linux-gnu/lib/python3.11/subprocess.py&quot;, line 571, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['/home/ubuntu/.cache/bazel/_bazel_ubuntu/b4e0fd0e207e6fdf5e33997b6741cf2d/external/python3_11_x86_64-unknown-linux-gnu/bin/python3', '-m', 'pip', '--isolated', 'wheel', '--no-deps', '--require-hashes', '-r', '/tmp/tmpv11wtxad']' returned non-zero exit status 2. 
</code></pre> <p>I already have a new index URL configured in <code>~/.config/pip/pip.conf</code>, but bazel's pip appears to be ignoring it.</p> <p>I searched for <code>pip</code> in the <a href="https://bazel.build/reference/command-line-reference" rel="nofollow noreferrer">command line reference</a> but did not find anything.</p> <p><strong>Is there a way to force bazel's pip to use a particular configuration?</strong></p>
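For what it's worth, rules_python's `pip_parse` repository rule (and the bzlmod `pip.parse` extension) accepts an `extra_pip_args` attribute, which is one way to point the Bazel-invoked pip at a mirror or a custom CA bundle without relying on `~/.config/pip/pip.conf` (the repository name, URL, and cert path below are illustrative placeholders, not values from this project):

```python
# WORKSPACE excerpt (Starlark) -- values are illustrative placeholders
pip_parse(
    name = "pip_deps",
    requirements_lock = "//:requirements_lock.txt",
    extra_pip_args = [
        "--index-url=https://my.mirror.example/simple",
        "--cert=/etc/ssl/certs/my-ca.pem",
    ],
)
```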
<python><linux><pip><bazel>
2024-09-04 18:13:29
1
76,345
merlin2011
78,950,011
5,134,817
Segfault when using a function defined in a separate file, but not when defined in the same file
<p>I am trying to write some wrappers using the Python C API to work with NumPy arrays. If I write all my code in one file, the code works fine, tests pass, and everything seems great. If however I try and split the file into some headers, and a few different files, it segfaults, and on the surface of it I can't see why. Am I doing something wrong?</p> <p>(PS - I am also trying to come up with a nice way of writing <code>#define PY_ARRAY_UNIQUE_SYMBOL ...</code> and <code>#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION</code> only once, but struggle with multiple definition complaints, and suspect the right way to do this might better influence how I split up the various files and headers).</p> <h3>My Example</h3> <p>I am making a module which takes a numpy array, multiplies it by some factor, and writes the result into another array.</p> <p>The files looks like:</p> <pre><code>dir/ ├── module.c ├── module_example.py ├── module_examples.so ├── module_headers.h └── module_implementation.c </code></pre> <p>The tests I want to run without segfaulting in <code>module_example.py</code>:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python import numpy as np from module_examples import foo import unittest class TestNumpyFloatWrappers(unittest.TestCase): def test_numpy_wrapper(self): a = np.arange(10, dtype=float) b = np.arange(10, dtype=float) foo(input=a, output=b, factor=3.0) if __name__ == &quot;__main__&quot;: unittest.main() </code></pre> <p>The function I want to define in <code>module_headers.h</code>:</p> <pre class="lang-c prettyprint-override"><code>#ifndef PYARV_MODULE_HEADERS_H #define PYARV_MODULE_HEADERS_H #define PY_SSIZE_T_CLEAN #include &lt;Python.h&gt; #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION #include &lt;numpy/arrayobject.h&gt; PyObject * foo(PyObject *Py_UNUSED(self), PyObject *args, PyObject *kwargs); #endif//PYARV_MODULE_HEADERS_H </code></pre> <p>The implementation in <code>module_implementation.c</code></p> <pre 
class="lang-c prettyprint-override"><code>#include &quot;module_headers.h&quot; PyObject *foo(PyObject *Py_UNUSED(self), PyObject *args, PyObject *kwargs) { PyArrayObject *input_array; PyArrayObject *output_array; double factor; #define N_ARRAYS 2 PyArrayObject **arrays[N_ARRAYS] = {&amp;input_array, &amp;output_array}; char *arg_names[] = { &quot;input&quot;, &quot;output&quot;, &quot;factor&quot;, NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, &quot;$O!O!d:multiply&quot;, arg_names, &amp;PyArray_Type, &amp;input_array, &amp;PyArray_Type, &amp;output_array, &amp;factor)) { return NULL; } for (int i = 0; i &lt; N_ARRAYS; i++) { PyObject *array = *arrays[i]; if (PyArray_NDIM(array) != 1) { PyErr_SetString(PyExc_ValueError, &quot;Array must be 1-dimensional&quot;); return NULL; } if (PyArray_TYPE(array) != NPY_DOUBLE) { PyErr_SetString(PyExc_ValueError, &quot;Array must be of type double&quot;); return NULL; } if (!PyArray_IS_C_CONTIGUOUS(array)) { PyErr_SetString(PyExc_ValueError, &quot;Array must be C contiguous.&quot;); return NULL; } } npy_double *input_buffer = (npy_double *) PyArray_DATA(input_array); npy_double *output_buffer = (npy_double *) PyArray_DATA(output_array); size_t input_buffer_size = PyArray_SIZE(input_array); size_t output_buffer_size = PyArray_SIZE(output_array); if (input_buffer_size != output_buffer_size) { PyErr_SetString(PyExc_ValueError, &quot;The input and output arrays are of differing lengths.&quot;); return NULL; } NPY_BEGIN_THREADS_DEF; NPY_BEGIN_THREADS; /* No longer need the Python GIL */ for (size_t i = 0; i &lt; input_buffer_size; i++) { output_buffer[i] = input_buffer[i] * factor; } NPY_END_THREADS; /* We return the Python GIL. 
*/ Py_RETURN_NONE; } </code></pre> <p>Trying to glue everything together in <code>module.c</code>, where the first commented out few lines crash with a segfault, and the others (where everything lives in just one big file) works fine.</p> <pre class="lang-c prettyprint-override"><code>/* // Gives a segfault when the tests run #define PY_ARRAY_UNIQUE_SYMBOL EXAMPLE_ARRAY_API #include &quot;module_headers.h&quot; */ /* // Works fine. #define PY_SSIZE_T_CLEAN #include &lt;Python.h&gt; #define PY_ARRAY_UNIQUE_SYMBOL EXAMPLE_ARRAY_API #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION #include &lt;numpy/arrayobject.h&gt; PyObject *foo(PyObject *Py_UNUSED(self), PyObject *args, PyObject *kwargs) { //... } */ static PyMethodDef example_methods[] = { {&quot;foo&quot;, (PyCFunction) foo, METH_VARARGS | METH_KEYWORDS, NULL}, {NULL}, }; static struct PyModuleDef example_module = { .m_base = PyModuleDef_HEAD_INIT, .m_doc = &quot;Something is going wrong here.&quot;, .m_name = &quot;examples&quot;, .m_size = -1, .m_methods = example_methods, }; PyObject * PyInit_module_examples(void) { import_array(); PyObject *module = PyModule_Create(&amp;example_module); if ( !module || PyModule_AddStringConstant(module, &quot;__version__&quot;, Py_STRINGIFY(NPB_VERSION))) { Py_XDECREF(module); return NULL; } return module; } </code></pre> <h3>Attempting to debug the issue</h3> <p>Running through a debugger in Python just says:</p> <pre><code>..../python ..../module_example.py process exited with status -1 (attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)) Process finished with exit code 0 </code></pre>
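For reference, NumPy's documented multi-file pattern is: exactly one translation unit (the one calling `import_array()`) includes `arrayobject.h` without `NO_IMPORT_ARRAY`; every other `.c` file defines both the same `PY_ARRAY_UNIQUE_SYMBOL` and `NO_IMPORT_ARRAY` before the include. Without `NO_IMPORT_ARRAY`, each translation unit gets its own static `PyArray_API` pointer, and only the copy in `module.c` is initialized by `import_array()`; the first `PyArray_*` call in `module_implementation.c` then dereferences a NULL pointer, which matches the segfault above. A sketch of a shared header (the `PYARV_MODULE_MAIN` guard name is just a suggestion):

```c
/* module_headers.h -- shared by module.c and module_implementation.c */
#define PY_SSIZE_T_CLEAN
#include <Python.h>

#define PY_ARRAY_UNIQUE_SYMBOL EXAMPLE_ARRAY_API
#ifndef PYARV_MODULE_MAIN     /* module.c does: #define PYARV_MODULE_MAIN before including this */
#define NO_IMPORT_ARRAY       /* reuse the API table imported by module.c */
#endif
#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
#include <numpy/arrayobject.h>
```

This also answers the define-once question: the two `#define`s live in one header, and only the guard differs per translation unit.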
<python><c><numpy>
2024-09-04 17:59:53
1
1,987
oliversm
78,949,917
3,413,122
How to read subprocess output before process completes
<p>I'm working on a Python project that's making a <code>subprocess.Popen</code> call. That call will output strings before it completes. I want to read those strings in order to track progress.</p> <p>Currently I'm doing it like this:</p> <pre class="lang-py prettyprint-override"><code>with subprocess.Popen(&quot;&lt;my command&gt;&quot;, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, bufsize=1) as proc: while time.time() &lt; timeout: line = proc.stdout.readline() ....additional work.... </code></pre> <p>This works perfectly from a green-path perspective, however it runs into issues if there's a problem. The reason is that <code>proc.stdout.readline()</code> never times out and blocks the thread, leaving me stuck.</p> <p>I've been reading the docs to explore other options but they all seem to rely on waiting for the process to finish before being able to read the output.</p> <p>Does anyone have any suggestions for potential solutions?</p>
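One common workaround (a sketch; the helper name and timeout policy are illustrative) is to move the blocking reads into a daemon thread that feeds a `queue.Queue`, because `Queue.get` *does* accept a timeout even though `readline()` does not:

```python
import queue
import subprocess
import sys
import threading
import time

def read_with_timeout(cmd, timeout):
    """Collect stdout lines, killing the process if the deadline passes."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, text=True, bufsize=1)
    q = queue.Queue()

    def pump():
        for line in proc.stdout:   # blocks only inside this helper thread
            q.put(line)
        q.put(None)                # sentinel: stream closed

    threading.Thread(target=pump, daemon=True).start()

    deadline = time.monotonic() + timeout
    lines = []
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            proc.kill()
            break
        try:
            line = q.get(timeout=remaining)  # this get() *can* time out
        except queue.Empty:
            proc.kill()
            break
        if line is None:
            break
        lines.append(line.rstrip("\n"))
    proc.wait()
    return lines

out = read_with_timeout([sys.executable, "-c", "print('hello'); print('world')"], 10)
print(out)  # ['hello', 'world']
```

The main loop can do the "additional work" per line exactly as before, but it regains control at the deadline even if the child hangs.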
<python><subprocess>
2024-09-04 17:35:17
0
1,914
AndyReifman
78,949,773
2,893,712
Pandas Groupby and Filter based on first record having date greater than specific date
<p>I have a dataframe that shows details about employees, the site they are at, and the positions they have held. The dataframe has columns for Site Id, Employee ID, and StartDate (plus a lot more fields). I have this sorted by Site and Employee ID ASC and then StartDate DESC (latest record is first):</p> <pre><code>Site EmployeeID StartDate 1 123 2024-09-01 1 123 2024-08-01 1 123 2024-06-01 1 123 2024-05-01 2 100 2024-06-01 2 100 2024-03-01 </code></pre> <p>I need to create a new column called <code>EndDate</code> which is the date of the previous record minus 1 day. We are moving to a new system so we only care about the dates that include the range 7/1/24 (or after). So for my example df, it would look like</p> <pre><code>Site EmployeeID StartDate EndDate Import 1 123 2024-09-01 Y 1 123 2024-08-01 2024-08-31 Y 1 123 2024-06-01 2024-07-31 Y 1 123 2024-05-01 2024-05-31 N 2 100 2024-06-01 Y 2 100 2024-03-01 2024-05-31 N </code></pre> <p>And then filtering for <code>df['Import'] == 'Y'</code></p> <p>My initial idea was to iterate over <code>df.groupby(by=['Site','EmployeeID'])</code> and use <code>.iloc[]</code> to get the next value's date, subtract 1 day, check if the <code>EndDate</code> is greater than 7/1/24, then set Import to <code>Y</code> or <code>N</code> accordingly. The problem is that this is a very large dataset (~300K rows) and this operation would take a very long time.</p>
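A vectorized sketch of the idea described above, using `groupby().shift()` instead of per-group iteration (assuming the frame is sorted Site/EmployeeID ascending, StartDate descending, as stated):

```python
import pandas as pd

df = pd.DataFrame({
    "Site": [1, 1, 1, 1, 2, 2],
    "EmployeeID": [123, 123, 123, 123, 100, 100],
    "StartDate": pd.to_datetime([
        "2024-09-01", "2024-08-01", "2024-06-01", "2024-05-01",
        "2024-06-01", "2024-03-01",
    ]),
})

# Within each group, shift(1) pulls the previous (i.e. later-dated) row's
# StartDate; EndDate is that minus one day, and the newest row stays NaT.
df["EndDate"] = (
    df.groupby(["Site", "EmployeeID"])["StartDate"].shift(1) - pd.Timedelta(days=1)
)

# A record is importable if it is still open (no EndDate) or ends on/after the cutoff.
cutoff = pd.Timestamp("2024-07-01")
df["Import"] = (df["EndDate"].isna() | (df["EndDate"] >= cutoff)).map(
    {True: "Y", False: "N"}
)
print(df["Import"].tolist())  # ['Y', 'Y', 'Y', 'N', 'Y', 'N']
```

This avoids both the Python-level loop and the repeated boolean-mask writes, so it should scale to hundreds of thousands of rows; the final filter is then just `df[df["Import"] == "Y"]`.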
<python><pandas><group-by>
2024-09-04 16:51:21
3
8,806
Bijan
78,949,640
4,505,998
Pytest-like verbose asserts in Jupyter Lab
<p>I'm using Jupyter lab and sometimes I want to run some assertions on my data. For example, that all the folds have at least 10 test items.</p> <p>Nevertheless, when the assert fails, I have to modify the notebook and run it again just to print more information.</p> <p>Is it possible to get verbose asserts like the ones pytest uses:</p> <pre><code> def test_function(): &gt; assert f() == 4 E assert 3 == 4 E + where 3 = f() test_assert1.py:6: AssertionError </code></pre> <p>Perhaps using a jupyter lab plugin or some other Python package.</p>
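Absent a JupyterLab plugin, one plain-Python fallback is to attach the offending values to the assert message itself, so the failure prints them without a rerun. A minimal sketch with hypothetical fold data:

```python
folds = {"fold_a": 12, "fold_b": 7}  # hypothetical fold -> test-item counts

message = None
try:
    for name, n_test in folds.items():
        # The f-string message carries the values a bare assert would hide.
        assert n_test >= 10, f"{name} has only {n_test} test items (need >= 10)"
except AssertionError as err:
    message = str(err)

print(message)  # fold_b has only 7 test items (need >= 10)
```

In IPython/JupyterLab, the `%xmode Verbose` magic also makes tracebacks print local variable values, which gets part of the way to pytest-style output without any extra packages.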
<python><pytest><jupyter>
2024-09-04 16:19:48
2
813
David Davó
78,949,602
3,486,684
Applying a polars Expression to a Polars series
<p>Here's a toy example to illustrate an idea:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl series = pl.Series(&quot;x&quot;, [0, -1, 1, -1]) def apply_abs_max(series: pl.Series, default: int) -&gt; int: result = series.abs().max() if result is None: return default else: if isinstance(result, int): return result else: raise ValueError(f&quot;{result=}, {type(result)=}, expected `int`.&quot;) apply_abs_max(series, -1) # 1 </code></pre> <p>Suppose I want to generalize <code>apply_abs_max</code> to <code>apply_expr</code>:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl series = pl.Series(&quot;x&quot;, [0, -1, 1, -1]) def apply_expr(series: pl.Series, default: int, expr: pl.Expr) -&gt; int: raise NotImplementedError() result = ... # what do I do here to apply `expr` to `series`? if result is None: return default else: if isinstance(result, int): return result else: raise ValueError(f&quot;{result=}, {type(result)=}, expected `int`.&quot;) apply_expr(series, -1, pl.Expr().abs().max()) </code></pre> <p><code>apply_expr</code> is not actually implemented, as you can see above, because I do not know how to apply the input <code>expr</code> onto <code>series</code>. How can I go about doing that?</p>
<python><python-polars>
2024-09-04 16:08:03
2
4,654
bzm3r
78,949,473
2,359,895
Display numpy 2D array as an RGBImage
<p>I want to display an <strong>RGBColor</strong> bitmap as an image. I have several functions that fill the bitmap. Also, at some point I will add an alpha channel.</p> <p><em>I saw similar questions but none of them worked for me.</em> <a href="https://stackoverflow.com/questions/22777660/display-an-rgb-matrix-image-in-python">display an rgb matrix image in python</a></p> <p>I have an <strong>RGBColor</strong> class defined as</p> <pre><code>class RGBColor: bytes = np.zeros(3, dtype=np.ubyte) # RGBColor.__new__ def __new__(cls, *args, **kwargs): return super().__new__(cls) # RGBColor.__init__ def __init__(self, r: int, g: int, b: int) -&gt; None: self.bytes[0] = r self.bytes[1] = g self.bytes[2] = b </code></pre> <p>and a <strong>Bitmap</strong> defined as</p> <pre><code>class Bitmap: # Bitmap.__new__ def __new__(cls, *args, **kwargs): return super().__new__(cls) # Bitmap.__init__ def __init__(self, rows: int, cols: int, backColor: RGBColor) -&gt; None: self.bits = np.empty( (rows,cols), dtype=RGBColor) for r in range(rows): for c in range(cols): self.bits[r][c] = backColor </code></pre> <p>and my attempt at displaying this is</p> <pre><code>import numpy as np import matplotlib.pyplot as plt def show(self, bitmap: Bitmap): plt.imshow(bitmap.bits) plt.show() </code></pre> <p>The error I get is</p> <blockquote> <p>File &quot;C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\image.py&quot;, line 706, in set_data raise TypeError(&quot;Image data of dtype {} cannot be converted to &quot;</p> <p>TypeError: Image data of dtype object cannot be converted to float</p> </blockquote>
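A minimal sketch of the usual fix: store the pixels as a numeric `(rows, cols, 3)` array of `np.uint8` rather than an object array of `RGBColor` instances, since `imshow` understands that layout directly. The values below are arbitrary:

```python
import numpy as np

rows, cols = 4, 6
back_color = (30, 144, 255)          # arbitrary RGB fill colour

# (rows, cols, 3) uint8 is the layout imshow expects for RGB data;
# an alpha channel later would make it (rows, cols, 4).
bits = np.empty((rows, cols, 3), dtype=np.uint8)
bits[:, :] = back_color              # broadcast the colour to every pixel
bits[1, 2] = (255, 0, 0)             # per-pixel writes, like bits[r][c] before

print(bits.shape, bits.dtype)        # (4, 6, 3) uint8

# Display is then simply:
# import matplotlib.pyplot as plt
# plt.imshow(bits); plt.show()
```

The fill functions can keep indexing `bits[r, c]`; only the storage changes, which also sidesteps the separate bug where the class-level `bytes` array is shared by every `RGBColor` instance.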
<python><numpy><image>
2024-09-04 15:37:03
1
1,195
Paul Baxter
78,949,414
2,297,965
Consecutive count of binary column by group
<p>I am attempting to create a 'counter' of consecutive binary values = 1, resetting when the binary value = 0, for each group. Example of data:</p> <pre><code>data = {'city_id': [1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6], 'week': [1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7], 'binary': [0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1]} df = pd.DataFrame(data) </code></pre> <p>For each id, the first <code>binary = 1</code> should begin with a <code>consecutive_count = 1</code> rather than 0. And this should reset each time <code>binary = 0</code>, along with each time we move on to a new id.</p> <p>I have already created a solution that does this. It looks like this:</p> <pre><code>df['consecutive'] = 0 for city in df['city_id'].unique(): city_df = df[df['city_id'] == city] consecutive_count = 0 for i in range(len(city_df)): if city_df['binary'].iloc[i] == 1: consecutive_count += 1 else: consecutive_count = 0 df.loc[(df['city_id'] == city) &amp; (df['week'] == city_df['week'].iloc[i]), 'consecutive'] = consecutive_count </code></pre> <p>The main issue is that my solution is extremely inefficient for large data. I have a large set of ids, ~2.5M, and this solution either times out or runs for hours and hours, so I am struggling to make this more efficient. TIA.</p>
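A vectorized sketch of the counter using the run-id cumsum trick, shown on a small subset of the data above:

```python
import pandas as pd

# A small slice of the example data, enough to show the reset behaviour.
df = pd.DataFrame({
    "city_id": [1, 1, 1, 1, 1, 2, 2],
    "binary":  [0, 1, 1, 1, 0, 1, 1],
})

# A new "run" starts whenever binary flips or the city changes; cumsum turns
# those break points into run ids, and cumcount numbers rows within each run.
runs = (
    (df["binary"] != df["binary"].shift())
    | (df["city_id"] != df["city_id"].shift())
).cumsum()
df["consecutive"] = df.groupby(runs).cumcount().add(1).where(df["binary"].eq(1), 0)

print(df["consecutive"].tolist())  # [0, 1, 2, 3, 0, 1, 2]
```

Everything here is column arithmetic plus one `groupby` on an integer key, so it runs in roughly linear time and should handle millions of ids without the per-city Python loop.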
<python><pandas><dataframe><performance>
2024-09-04 15:23:56
2
484
coderX
78,949,215
3,482,266
Does a simple assignment in Python evaluate twice?
<p>In <a href="https://docs.python.org/3/reference/simple_stmts.html#augmented-assignment-statements" rel="nofollow noreferrer">this page of the Python official reference</a>, there's the following sentence:</p> <blockquote> <p>An augmented assignment statement like <code>x += 1</code> can be rewritten as <code>x = x + 1</code> to achieve a similar, but not exactly equal effect. In the augmented version, <code>x</code> is only evaluated once.</p> </blockquote> <p>It seems to suggest that a simple assignment would evaluate <code>x</code> twice. I wanted to test this, so I created the following example code:</p> <pre><code>X = [print(&quot;\t\tHello X&quot;), 0 ] X = X + [1] </code></pre> <p>This actually prints &quot;Hello X&quot; only once, at the first assignment. I've tried making <code>X[0]</code> a function that prints something instead, and it doesn't work.</p> <p>Here's a small example with setters and getters:</p> <pre><code>class X: def __init__(self, value:int = 0): self._value = value @property def value(self): print(&quot;HEllo X!&quot;) return self._value @value.setter def value(self, value): self._value = value </code></pre> <p>Now, I experiment with the code below:</p> <pre><code>x = X(1) x.value = x.value +1 x.value </code></pre> <p>This prints &quot;HEllo X!&quot; twice. I was expecting it to be thrice.</p> <pre><code>x = X(1) x.value +=1 x.value </code></pre> <p>This prints &quot;HEllo X!&quot; twice, the same number of times as with the simple assignment.</p> <p>Is there a way to check whether Python really evaluates <code>x</code> twice with a simple assignment?</p>
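One way to observe the single evaluation: the difference is invisible for a bare name or a property (the attribute is read once either way, matching the two "HEllo X!" prints, since the left-hand side is a store, not a read), but it becomes measurable when the assignment *target* is itself produced by an expression, such as a function call being subscripted:

```python
data = [0]
calls = 0

def target():
    # Stand-in for any expression that yields the assignment target.
    global calls
    calls += 1
    return data

target()[0] = target()[0] + 1   # target() runs twice: once per side
simple_evals = calls

calls = 0
target()[0] += 1                # target() runs once; the result is reused
augmented_evals = calls

print(simple_evals, augmented_evals)  # 2 1
```

In the augmented form, Python evaluates `target()` once, then performs the load, in-place add, and store against that single object, which is exactly the "evaluated once" the reference describes.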
<python>
2024-09-04 14:36:07
4
1,608
An old man in the sea.
78,949,093
20,054,635
How to Resolve AttributeError: module 'fiona' has no attribute 'path'?
<p>I have a piece of code that was working fine until last week, but now it's failing with the following error:</p> <p>AttributeError: module 'fiona' has no attribute 'path'</p> <p>I’ve ensured that all the necessary libraries are installed and imported. Does anyone have any ideas on what might be going wrong or how I can resolve this issue?</p> <p>Thanks!</p> <pre><code> pip install geopandas pip install fiona import geopandas as gpd import fiona countries = gpd.read_file(gpd.datasets.get_path(&quot;naturalearth_lowres&quot;)) </code></pre>
<python><dataframe><databricks><geopandas><fiona>
2024-09-04 14:06:26
6
369
Anonymous
78,949,086
967,621
Install a pre-release version of Python on M1 Mac using conda
<p>I would like to install python 3.13.0rc1 with conda on an M1 Mac.</p> <p>However, <code>conda create</code> fails with error message &quot;python 3.13.0rc1** is not installable because it requires _python_rc, which does not exist (perhaps a missing channel)&quot;:</p> <pre><code>% conda search python ... python 3.12.4 h99e199e_1 pkgs/main python 3.12.5 h30c5eda_0_cpython conda-forge python 3.13.0rc1 h17d3ab0_0_cp313t conda-forge python 3.13.0rc1 h17d3ab0_1_cp313t conda-forge python 3.13.0rc1 h17d3ab0_2_cp313t conda-forge python 3.13.0rc1 h8754ccd_100_cp313 conda-forge python 3.13.0rc1 h8754ccd_101_cp313 conda-forge python 3.13.0rc1 h8754ccd_102_cp313 conda-forge </code></pre> <pre><code>% conda create --name py python=3.13.0rc1 --channel conda-forge --override-channels Channels: - conda-forge Platform: osx-arm64 Collecting package metadata (repodata.json): done Solving environment: failed LibMambaUnsatisfiableError: Encountered problems while solving: - nothing provides _python_rc needed by python-3.13.0rc1-h17d3ab0_0_cp313t Could not solve for environment specs The following package could not be installed └─ python 3.13.0rc1** is not installable because it requires └─ _python_rc, which does not exist (perhaps a missing channel). </code></pre> <p>Note that installing the latest python without specifying the version (<code>conda create --name py python</code>) installs python 3.12.5.</p> <p><strong>See also:</strong></p> <ul> <li><a href="https://stackoverflow.com/q/41102954/967621">How to install the latest development version of Python with conda?</a></li> <li><a href="https://stackoverflow.com/q/77277139/967621">Install python 3.12 using mamba on mac</a></li> </ul>
<python><macos><installation><conda>
2024-09-04 14:05:26
1
12,712
Timur Shtatland
78,949,043
2,928,970
Python typing annotation return value annotation based if function argument being a list or not
<p>If I have</p> <pre><code> def get( ids: str | list[str] | int | list[int], ) -&gt; float | list[float]: </code></pre> <p>Is there a way to specify in the return annotation that a list of <code>float</code>s is returned only when the input <code>ids</code> is a list of <code>str</code>s or <code>int</code>s?</p>
<python><python-typing>
2024-09-04 13:56:09
2
1,395
hovnatan