Dataset schema (column: type, observed min to max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, length 15 to 150
QuestionBody: string, length 40 to 40.3k
Tags: string, length 8 to 101
CreationDate: date string, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, length 3 to 30
78,700,997
1,044,422
How to generate a hierarchical colourmap in matplotlib?
<p>I have a hierarchical dataset that I wish to visualise in this manner. I've been able to construct a heatmap for it.</p> <p><a href="https://i.sstatic.net/QsIlavSn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsIlavSn.png" alt="enter image description here" /></a></p> <p>I want to generate a colormap in matplotlib such that <code>Level 1</code> get categorical colours while <code>Level 2</code> get different shades of the <code>Level 1</code> colour. I was able to get <code>Level 1</code> colours from a &quot;tab20&quot; palette but I can't figure out how to generate shades of the base <code>Level 1</code> colour.</p> <p><strong>EDIT:</strong> Just to be clear, this needs to be a generic script. So I can't hard code the colormap.</p> <h2>MWE</h2> <p>At the moment this just creates a colormap based on the level 1 values. I am not sure how to generate the shades for the level 2 colours:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import matplotlib as mpl df = pd.DataFrame({&quot;Level 2&quot;: [4, 5, 6, 6, 7], &quot;Level 1&quot;: [0, 0, 1, 1, 1]}).T colours = mpl.colormaps[&quot;tab20&quot;].resampled(len(df.loc[&quot;Level 1&quot;].unique())).colors colour_dict = { item: colour for item, colour in zip(df.loc[&quot;Level 1&quot;].unique(), colours) } sns.heatmap( df, cmap=mpl.colors.ListedColormap([colour_dict[item] for item in colour_dict.keys()]), ) colours </code></pre> <p>In this example, 4 and 5 should be shades of the colour for 0 and 6 and 7 should be shades of the colour for 1.</p> <h2>Edit 2</h2> <p>Applying @mozway's answer below, this is the heatmap I see:</p> <p><a href="https://i.sstatic.net/bm5N061U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bm5N061U.png" alt="enter image description here" /></a></p> <p>This is with 423 values in level 2 and n=500.</p>
<python><pandas><matplotlib><seaborn>
2024-07-03 08:53:11
1
1,845
K G
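A hedged sketch of one way to approach the question above: keep categorical base colours for Level 1 and derive Level 2 shades by blending each base toward white. The `shades` helper and its `lightest`/`darkest` knobs are my own invention, not an accepted answer:

```python
import numpy as np
import matplotlib as mpl

def shades(base, n, lightest=0.35, darkest=1.0):
    """Return n shades of `base` by blending toward white.

    t=1 keeps the base colour, t=0 would be pure white; `lightest`/`darkest`
    are hypothetical knobs controlling the shade range.
    """
    rgb = np.asarray(mpl.colors.to_rgb(tuple(base)))
    ts = np.linspace(lightest, darkest, n)
    return [tuple(rgb * t + (1.0 - t)) for t in ts]

level1 = [0, 0, 1, 1, 1]
level2 = [4, 5, 6, 6, 7]

# One categorical base colour per unique Level 1 value, as in the MWE
bases = mpl.colormaps["tab20"].resampled(len(dict.fromkeys(level1))).colors

colour_for = {}
for base, group in zip(bases, dict.fromkeys(level1)):
    # Level 2 members belonging to this Level 1 group get shades of its base
    members = sorted({b for a, b in zip(level1, level2) if a == group})
    colour_for.update(zip(members, shades(base, len(members))))

listed = mpl.colors.ListedColormap([colour_for[v] for v in sorted(colour_for)])
```

Here 4 and 5 come out as shades of the colour assigned to 0, and 6 and 7 as shades of the colour assigned to 1; the script stays generic because the grouping is computed from the data.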
78,700,714
10,377,244
Polars - groupby mean on list
<p>I want to take the element-wise mean over groups of embedding vectors. For example:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl pl.DataFrame({ &quot;id&quot;: [1,1 ,2,2], &quot;values&quot;: [ [1,1,1], [3, 3, 3], [1,1,1], [2, 2, 2] ] }) </code></pre> <pre><code>shape: (4, 2) id values i64 list[i64] 1 [1, 1, 1] 1 [3, 3, 3] 2 [1, 1, 1] 2 [2, 2, 2] </code></pre> <p><strong>Expected result:</strong></p> <pre class="lang-py prettyprint-override"><code>import numpy as np pl.DataFrame({ &quot;id&quot;:[1,2], &quot;values&quot;: np.array([ [[1,1,1], [3, 3, 3]], [[1,1,1], [2, 2, 2]] ]).mean(axis=1) }) </code></pre> <pre><code>shape: (2, 2) id values i64 list[f64] 1 [2.0, 2.0, 2.0] 2 [1.5, 1.5, 1.5] </code></pre>
<python><dataframe><python-polars>
2024-07-03 07:50:35
1
1,127
MPA
78,700,332
7,699,611
Tuple matching using SQLAlchemy
<p>Let's say I have a db of Customers. I want to fetch data and filter rows by exact pairs.</p> <p>I want to do the following using sqlalchemy:</p> <pre><code>SELECT first_name, age FROM Customers WHERE (first_name, age) in (('John', '31'),('Robert', '22')); </code></pre> <p>I know I can do this:</p> <pre><code> SELECT first_name, age FROM Customers WHERE first_name in ('John', 'Robert') and age in ('31','22'); </code></pre> <p>like this:</p> <pre><code>with get_mariadb_session() as session: query = select( Customers.name, Customers.age, Customers.email ).select_from( Customers ).where( and_(Customers.name.in_(names), Customers.age.in_(ages)) ) res = session.execute(query) </code></pre> <p>But this might return a result set with a record ('John','22') too.</p> <p>Is it possible to do tuple matching in sqlalchemy?</p>
<python><sql><sqlalchemy><tuples>
2024-07-03 06:09:13
1
939
Muslimbek Abduganiev
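SQLAlchemy does support row-value comparisons via `sqlalchemy.tuple_`, which renders exactly the `WHERE (first_name, age) IN (...)` shown above. A self-contained sketch against in-memory SQLite (the `Customer` model here is hypothetical; the same construct applies to the MariaDB session in the question):

```python
from sqlalchemy import Column, Integer, String, create_engine, select, tuple_
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)
    first_name = Column(String)
    age = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        Customer(first_name="John", age="31"),
        Customer(first_name="John", age="22"),    # must NOT match
        Customer(first_name="Robert", age="22"),
    ])
    session.commit()

    # Row-value membership test: matches exact (name, age) pairs only
    query = select(Customer.first_name, Customer.age).where(
        tuple_(Customer.first_name, Customer.age).in_(
            [("John", "31"), ("Robert", "22")]
        )
    )
    rows = session.execute(query).all()
```

Note that `tuple_(...).in_(...)` requires a backend with row-value support (MySQL/MariaDB, PostgreSQL, SQLite 3.15+).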
78,700,325
937,440
Search for currency pairs using Pandas
<p>I am trying to learn more about Python. Please see the code below:</p> <pre><code>import pandas as pd df = pd.read_html(&quot;https://finance.yahoo.com/crypto?offset=0&amp;count=25&quot;)[0] symbols = df.Symbol.tolist() test=len(symbols) for symbol in symbols: print(symbol) </code></pre> <p>The query string has default values. If I change the query string to:</p> <pre><code>https://finance.yahoo.com/crypto?offset=5&amp;count=100 </code></pre> <p>Then the same 25 symbols are returned as for the first URL, i.e. the same 25 symbols are returned regardless of the offset or count. If I browse the URL in a browser, I get the expected results, i.e. they vary depending on the offset and count.</p> <p>Also, my reading suggests that only US cryptos are returned. Is this correct?</p>
<python><python-3.x><yahoo-finance><yfinance>
2024-07-03 06:06:46
1
15,967
w0051977
78,699,964
660,391
How can one combine iterables keeping only the first element with each index?
<p>Let's say I have a number of iterables:</p> <pre><code>[[1, 2], [3, 4, 5, 6], [7, 8, 9], [10, 11, 12, 13, 14]] </code></pre> <p>How can I get only each element that is the first to appear at its index in any of the iterables? In this case:</p> <pre><code>[1, 2, 5, 6, 14] </code></pre> <p>Visualized:</p> <pre><code>[1, 2] [_, _, 5, 6] [_, _, _] [_, _, _, _, 14] </code></pre>
<python><iterable>
2024-07-03 03:32:29
7
332
qulinxao
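One stdlib approach to the question above: transpose with `itertools.zip_longest` using a private sentinel, then keep the first real value in each column. A sketch (the function name is my own):

```python
from itertools import zip_longest

def first_at_each_index(iterables):
    """For every index, keep the element from the first iterable that is long
    enough to have that index."""
    MISSING = object()  # private sentinel: cannot collide with real data, even None
    return [
        next(v for v in column if v is not MISSING)
        for column in zip_longest(*iterables, fillvalue=MISSING)
    ]

data = [[1, 2], [3, 4, 5, 6], [7, 8, 9], [10, 11, 12, 13, 14]]
print(first_at_each_index(data))  # [1, 2, 5, 6, 14]
```

Using an `object()` sentinel rather than `None` keeps the function correct when the iterables legitimately contain `None`.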
78,699,894
6,141,238
When using matplotlib, how can I make each figure always plot in the same location on the screen?
<p>I have a Python script that plots 3 figures using matplotlib. When I run this script, and then close the 3 figures and rerun it, the figures appear in different locations on my screen in the second run compared to the first run.</p> <p>(For example, in the first run, the first figure might be positioned near the upper left corner of the screen, with the second and third figures staggered or cascaded toward the lower right relative to the first figure. In the second run, the first figure might be positioned closer to the middle of the screen, the second figure might be cascaded relative to the first, while the third figure may be positioned near the upper left corner of the screen.)</p> <p>Is there a way to prevent this inconsistency of figure placement from occurring across different runs? In other words, is there a way to force Python to maintain the same placement of the figures on the screen across each run?</p>
<python><matplotlib><plot><window><figure>
2024-07-03 02:48:35
1
427
SapereAude
78,699,832
315,820
Get user response as speech-to-text in Twilio
<p>I am exploring Twilio programmable voice for the first time, and can't find how to get user speech input as text.</p> <p>TwiML Gather with the speech input</p> <pre><code> gather: Gather = Gather( input=&quot;speech&quot;, action=process_response_url, action_on_empty_result=process_response_url, method=&quot;POST&quot;, speech_model=&quot;experimental_conversations&quot;, timeout=3, speech_timeout=3, max_speech_time=45, actionOnEmptyResult=True, ) resp = VoiceResponse() resp.append(say).append(gather) return Response(content=resp.to_xml(), media_type=&quot;application/xml&quot;) </code></pre> <p>The <code>gather</code> is executed as expected, and Twilio logs show the request parameters on the action call as</p> <pre><code>Called=%2B17817024591&amp;ToState=MA&amp;CallerCountry=US&amp;Direction=inbound&amp;SpeechResult=I%27d+like+to+be+a+big+boss.&amp;CallerState=MA&amp;Language=en-US&amp;Confidence=0.905934&amp;ToZip=02062&amp;CallSid=CAf3ffac9dc479ac39f9669b7f0225c963&amp;To=%2B17817024591&amp;CallerZip=02148&amp;ToCountry=US&amp;ApiVersion=2010-04-01&amp;CalledZip=02062&amp;CallStatus=in-progress&amp;CalledCity=NORWOOD&amp;From=%2B17813258707&amp;AccountSid=ACee34277c560b337e8a27d916122afcf8&amp;CalledCountry=US&amp;CallerCity=BOSTON&amp;Caller=%2B17813258707&amp;FromCountry=US&amp;ToCity=NORWOOD&amp;FromCity=BOSTON&amp;CalledState=MA&amp;FromZip=02148&amp;FromState=MA </code></pre> <p>How can I parse the speechResult parameter? 
The controller code</p> <pre><code>@router.post(&quot;/process_response&quot;, tags=[TWILIO], summary=&quot;Process caller response&quot;) async def process_response(request: Request): print(f&quot;process_response: Request {type(request)}: {request}&quot;) print(f&quot;process_response: path parameters {request.path_params}&quot;) print(f&quot;process_response: query parameters {request.query_params}&quot;) print(f&quot;process_response: values {request.values()}&quot;) </code></pre> <p>Logs show that:</p> <pre><code>request object is starlette.requests.Request, request.path_params is an empty dict, request.query_params is null, and request.values() returns a ValueView object with various objects and functions, but no parameters </code></pre> <p>How can I get the parameters?</p>
<python><twilio><speech-to-text><twilio-twiml>
2024-07-03 02:10:14
1
1,657
jprusakova
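Twilio sends those parameters in the POST body as form-encoded data, which is why `query_params` is empty; in Starlette/FastAPI they surface through `form = await request.form()` followed by `form.get("SpeechResult")`. The decoding itself can be sketched with the standard library, run here on an abbreviated copy of the logged payload:

```python
from urllib.parse import parse_qs

# Abbreviated form of the payload from the Twilio logs above
raw = "Direction=inbound&SpeechResult=I%27d+like+to+be+a+big+boss.&Confidence=0.905934"

# parse_qs maps each key to a list of values; Twilio sends one value per key
params = {key: values[0] for key, values in parse_qs(raw).items()}
print(params["SpeechResult"])  # I'd like to be a big boss.
```

In the controller, the equivalent is an awaited `request.form()` inside the async endpoint; accessing the body synchronously (or via `query_params`) will come up empty for a POSTed form.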
78,699,793
9,744,061
Animation for plotting y=omega*x^2 with omega varying from -3 to 3
<p>I want to plot y = omega*x^2 with omega varying from -3 to 3 with a step size of 0.25 (and x spanning -4 to 4 with a step size of 0.001). The current version of my code (below) only has omega start at 0 and not -3. How do I adjust my code to get omega to vary over the range I want?</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation fig = plt.figure() axis = plt.axes(xlim=(-4, 4), ylim=(-40, 40)) line, = axis.plot([], [], lw=3) def init(): line.set_data([], []) return line, def animate(omega): x = np.linspace(-4, 4, 8000) y = omega*x**2 line.set_data(x, y) return line, anim = FuncAnimation(fig, animate, init_func=init, frames=500, interval=100, blit=True) plt.show() </code></pre> <p>This is my desired result:</p> <p><a href="https://i.sstatic.net/Ar2PX.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ar2PX.gif" alt="https://i.sstatic.net/Ar2PX.gif" /></a></p>
<python><matplotlib><animation><matplotlib-animation>
2024-07-03 01:58:01
1
305
Ongky Denny Wijaya
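`FuncAnimation` accepts any iterable for `frames` and passes each item to the callback, so the omega values can be supplied directly instead of the default `range(frames)`. A sketch of the adjusted script (the `Agg` backend line is only so this runs headless; drop it and uncomment `plt.show()` for interactive use):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch only

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Pass the desired omega values straight to FuncAnimation
omegas = np.arange(-3, 3 + 0.25, 0.25)  # -3.0, -2.75, ..., 3.0

fig = plt.figure()
axis = plt.axes(xlim=(-4, 4), ylim=(-40, 40))
line, = axis.plot([], [], lw=3)
x = np.linspace(-4, 4, 8000)

def init():
    line.set_data([], [])
    return line,

def animate(omega):
    # omega is now an element of `omegas`, not a frame counter starting at 0
    line.set_data(x, omega * x**2)
    return line,

anim = FuncAnimation(fig, animate, init_func=init, frames=omegas,
                     interval=100, blit=True)
# plt.show()  # uncomment when running interactively
```

The key change is `frames=omegas`: with the original `frames=500`, matplotlib passes the integers 0..499 to `animate`, which is why omega started at 0.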
78,699,760
4,451,521
Error with Tokenizer parallelism when using gradio and mlflow
<p>I have written a script using gradio, and sometimes (I emphasize this: only sometimes) when I run it I get</p> <pre><code>huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) </code></pre> <p>The strange thing is that, unlike the answers I found searching the internet, I am not using tokenizers or LLMs, accessing Hugging Face models, or forking anything.</p> <p>My code is simply a gradio script with dropdowns, buttons, and dataframes that, on a button click, sends a request to a remote machine. That remote machine runs an API which actually calls a LLaVA model, but that is a process running on the remote machine.</p> <p>The local machine, where the error is displayed, just sends a request and receives the response.</p> <p>The only other thing that occurs to me is that when the process is finished I do</p> <pre><code>with mlflow.start_run() as run: mlflow.log_param(&quot;some_param&quot;,something) # ... some more param metrics and artifacts logging print( f&quot;Logged data to MLFlow with run ID: {run.info.run_id} in experiment: {experiment_name}&quot; ) </code></pre> <p>The error message appears at this moment, before the final print.</p> <p>Any hint as to why this could be happening?</p>
<python><mlflow><gradio>
2024-07-03 01:36:16
0
10,576
KansaiRobot
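The usual suspect here is a transitive dependency (pulled in via gradio or mlflow) importing the Rust `tokenizers` library, after which one of them forks a subprocess; your own code never has to touch tokenizers for the warning to fire. A hedged mitigation is the environment variable the warning itself suggests, set before any imports:

```python
import os

# Set this at the very top of the script, before the gradio/mlflow imports,
# so it is already in place when anything transitively imports `tokenizers`.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```

This silences the warning rather than changing behaviour: tokenizers was already disabling its thread pool after the fork; the variable just makes that choice explicit up front.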
78,699,674
10,616,752
NVD Error: Invalid API key since NIST CVSS v4.0 changes
<p>I have a process that uses the NVD API to pull down vulnerability data. It has worked fine for years.</p> <p>Suddenly, I get periodic errors in the log, e.g. :</p> <pre><code>Downloaded 6000 CVEs. Process finished at 2024-07-02 11:02:55 Total duration: 173.34 seconds Downloaded 8000 CVEs. Error: Invalid API key Process finished at 2024-07-02 15:01:10 Total duration: 0.30 seconds Downloaded 0 CVEs. Process finished at 2024-07-02 17:00:37 Total duration: 1.92 seconds Downloaded 0 CVEs. Error: Invalid API key </code></pre> <p>I tested the issue with this code here:</p> <pre><code>import requests import logging import time # Configure logging logging.basicConfig(filename='/&lt;dir&gt;/api_key_check.log', level=logging.INFO, format='%(asctime)s:%(levelname)s:%(message)s') def is_valid_api_key(apiKey, retries=3, delay=5, timeout=10): test_url = 'https://services.nvd.nist.gov/rest/json/cves/2.0' test_params = {'startIndex': 0, 'resultsPerPage': 1} headers = {'apiKey': apiKey} for attempt in range(retries): try: logging.info(f&quot;Attempt {attempt + 1} to check API key.&quot;) test_response = requests.get(test_url, params=test_params, headers=headers, timeout=timeout) if test_response.status_code == 200: logging.info(&quot;API key is valid.&quot;) return True elif test_response.status_code == 403: logging.error(&quot;Forbidden: The API key might be invalid or rate-limited.&quot;) return False else: logging.error(f&quot;Unexpected status code {test_response.status_code} on attempt {attempt + 1}&quot;) except requests.Timeout: logging.error(f&quot;Request timed out on attempt {attempt + 1}&quot;) except requests.RequestException as e: logging.error(f&quot;Request failed on attempt {attempt + 1}: {e}&quot;) time.sleep(delay) logging.error(&quot;API key validation failed after multiple attempts.&quot;) return False # Replace 'your_api_key_here' with your actual API key apiKey = 'your_api_key_here' if is_valid_api_key(apiKey): print(&quot;API key is valid.&quot;) else: 
print(&quot;Error: Invalid API key.&quot;) </code></pre> <p>I run that and I get:</p> <pre><code>Error: Invalid API key. </code></pre> <p>NIST and the NVD people are not great at responding. This has been broken for 4 days.</p> <p>They have this on their chat group:</p> <p><a href="https://groups.google.com/a/list.nist.gov/g/nvd-news" rel="nofollow noreferrer">https://groups.google.com/a/list.nist.gov/g/nvd-news</a></p> <p>Has anyone experienced this issue, or have any guidance?</p>
<python><security><cve>
2024-07-03 00:34:48
2
546
Scouse_Bob
78,699,642
298,171
How to inspect Python file/module information without using `__file__` — perhaps in a module-level `__getattr__(…)` function?
<p>I have a utility package for Python projects I maintain and use. It’s a bundle of lightweight, common, junk-drawer tools (e.g. command parsing, app lifecycle management, document generation, &amp;c &amp;c). Currently, using this stuff is all very simple in one’s own project, but most modules written to work with it require this very irritating bit of boilerplate, like so:</p> <pre class="lang-py prettyprint-override"><code> from myproject.exporting import Exporter # This is the irritating part: exporter = Exporter(path=__file__) export = exporter.decorator() @export def public_function(…): &quot;&quot;&quot; This function has been exported, and is public. &quot;&quot;&quot; ... def private_function(…): &quot;&quot;&quot; This function is private. &quot;&quot;&quot; ... # This bit does not really bother me: __all__, __dir__ = exporter.all_and_dir() </code></pre> <p>This <code>exporter</code> interface lets one label classes, functions (even lambdas!) as public. These get added to generated module <code>__all__</code> and <code>__dir__(…)</code> values, but the whole thing assists with documentation generation and other introspective things. This is beside the point, though, which is: <strong>having to do this hacky instantiation with a <code>__file__</code> value is an eyesore.</strong> Linters hate <code>__file__</code>, as do most human Python programmers (including myself), and in general it feels like an unstable and antiquated way of doing things.</p> <p>Is there a way of getting the value of <code>__file__</code> in another manner? 
I am looking for a way to surface this datum via some other (hopefully less fragile) route.</p> <p>My immediate thought is to have a <a href="https://peps.python.org/pep-0562/" rel="nofollow noreferrer">module-level <code>__getattr__(…)</code> function</a>, as I have had good results using these to provide instances of things on <code>import</code> – but, regardless of whether module <code>__getattr__(…)</code> is the best tool or not in this case, I am not sure about which Python introspection facilities would be warranted in their use.</p> <p>Also, <a href="https://stackoverflow.com/a/9271479/298171">this answer</a> has the basic info on the <code>__file__</code> value, for those who are curious.</p>
<python><python-3.x><python-module><introspection><python-internals>
2024-07-03 00:13:01
0
4,435
fish2000
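One hedged option: let the utility infer the caller's `__file__` from the call stack, so call sites can drop the explicit argument. `sys._getframe` is CPython-specific (`inspect.currentframe().f_back` is the portable spelling), and the `Exporter` below is a hypothetical stand-in showing only the path inference, not the real package's class:

```python
import sys

class Exporter:
    """Hypothetical stand-in for the package's Exporter, showing only
    how the path could be inferred from the caller's frame."""

    def __init__(self, path=None):
        if path is None:
            # Frame 1 is whoever called Exporter(); its module globals
            # carry the same __file__ the caller would have passed in.
            path = sys._getframe(1).f_globals.get("__file__")
        self.path = path
```

With this, `exporter = Exporter()` at module top level resolves the caller's file automatically. The same one-frame-up inspection can be performed inside a module-level `__getattr__(…)` (PEP 562), though the frame arithmetic gets more delicate once decorators or helper functions sit between the caller and the inspection point.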
78,699,612
20,122,390
How does Socket.IO work internally with AsyncRedisManager?
<p>I have an application that uses Socket.io to implement websockets and is configured with Redis. The server side is written in Python, so I have something like this:</p> <pre><code>REDIS = f&quot;redis://{settings.REDIS_USER}:{settings.REDIS_PASSWORD}@{settings.REDIS_HOST}:{settings.REDIS_PORT}/0&quot; mgr = AsyncRedisManager(REDIS, channel=settings.REDIS_CHANNEL) sio = AsyncServer( async_mode=&quot;asgi&quot;, cors_allowed_origins=&quot;*&quot;, client_manager=mgr, logger=False ) # logger=True, engineio_logger=True sio.register_namespace(CommentsConnector(&quot;/comments&quot;)) sio.register_namespace(DepartmentMessagesConnector(&quot;/department&quot;)) sio.register_namespace(TeammateConnector(&quot;/teammate&quot;)) sio.register_namespace(PersonalMessagesConnector(&quot;/personal&quot;)) sio.register_namespace(SchedulerConnector(&quot;/snooze&quot;)) sio.register_namespace(AssignConnector(&quot;/assign&quot;)) sio.register_namespace(MailmanConnector(&quot;/mailman&quot;)) sio.register_namespace(StatusConnector(&quot;/status&quot;)) sio.register_namespace(DeleteInboxConnector(&quot;/delete-inbox&quot;)) sio.register_namespace(DeleteUserConnector(&quot;/delete-user&quot;)) sio.register_namespace(SearchConnector(&quot;/search-engine&quot;)) sio.register_namespace(MoveToConnector(&quot;/move-to&quot;)) sio.register_namespace(UpdateSessionMessages(&quot;/session&quot;)) sio.register_namespace(MoveToCompletedMessages(&quot;/move-to-completed&quot;)) sio_app = ASGIApp(sio) </code></pre> <p>I understand that Socket.IO with AsyncRedisManager uses the Redis Pub/Sub functionality, but I'm not sure how they integrate. Does Socket.IO create a Redis channel for each room? Or what policy does Socket.IO follow for creating and consuming Redis Pub/Sub channels?</p>
<python><redis><socket.io>
2024-07-02 23:54:44
0
988
Diego L
78,699,598
9,074,679
Gunicorn: ModuleNotFoundError: No module named 'server'
<p>I am trying to deploy my Flask app (to <a href="https://railway.app/" rel="nofollow noreferrer">https://railway.app/</a>).</p> <p>My project structure looks like this:</p> <pre><code>myproject/server ├── Procfile ├── __init__.py ├── app.py ├── requirements.txt ├── sources │   ├── __init__.py │   ├── integrations │   ├── source_controller.py │   ├── sources.py </code></pre> <p>And in <code>app.py</code>, I have the following imports:</p> <pre><code>from flask import Flask, request from server.sources.source_controller import SourceController from server.sources.sources import Source </code></pre> <p>While I can run my server just fine locally (with <code>flask --app app run</code>), when I try calling <code>gunicorn app:app</code> (which is what railway calls when deploying my app), I get the following error:</p> <pre><code>[2024-07-02 19:43:34 -0400] [65259] [ERROR] Exception in worker process Traceback (most recent call last): File &quot;/usr/local/lib/python3.11/site-packages/gunicorn/arbiter.py&quot;, line 609, in spawn_worker worker.init_process() File &quot;/usr/local/lib/python3.11/site-packages/gunicorn/workers/base.py&quot;, line 134, in init_process self.load_wsgi() File &quot;/usr/local/lib/python3.11/site-packages/gunicorn/workers/base.py&quot;, line 146, in load_wsgi self.wsgi = self.app.wsgi() ^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/gunicorn/app/base.py&quot;, line 67, in wsgi self.callable = self.load() ^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/gunicorn/app/wsgiapp.py&quot;, line 58, in load return self.load_wsgiapp() ^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/gunicorn/app/wsgiapp.py&quot;, line 48, in load_wsgiapp return util.import_app(self.app_uri) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/gunicorn/util.py&quot;, line 371, in import_app mod = importlib.import_module(module) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1204, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1176, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1147, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 690, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 940, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed File &quot;/Users/marcoeangeli/Projects/tomes/server/app.py&quot;, line 3, in &lt;module&gt; from server.sources.source_controller import SourceController ModuleNotFoundError: No module named 'server' [2024-07-02 19:43:34 -0400] [65259] [INFO] Worker exiting (pid: 65259) ... other logs ... </code></pre> <p>I have tried searching for this error but I can't find anything that helps. I'm pretty confused: my IDE is happy with the way I'm importing, and running the app directly works fine, but Gunicorn fails!</p>
<python><flask><gunicorn>
2024-07-02 23:47:26
0
525
Marco
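Gunicorn imports `app:app` as a top-level module, so `from server.sources...` only resolves when the directory *containing* `server/` is on `sys.path`. Two hedged fixes, assuming the deploy runs from `myproject/server`: either change the Procfile to load the app as part of the package from the parent directory,

```
web: gunicorn --chdir .. server.app:app
```

or keep `gunicorn app:app` and drop the `server.` prefix from the imports in `app.py` (`from sources.source_controller import SourceController`), so the import path matches the working directory both locally and on Railway.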
78,699,562
4,268,602
jsonschema installed through poetry cannot import own validate submodule
<p>I have jsonschema installed through poetry.</p> <p>When I attempt to use it, I get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/danielbak/hw-sys-measurements-logging/hw_sys_measurements_logging/data_log.py&quot;, line 5, in &lt;module&gt; from jsonschema import validate File &quot;/Users/danielbak/Library/Caches/pypoetry/virtualenvs/hw-sys-measurements-logging-ABlvfjOw-py3.9/lib/python3.9/site-packages/jsonschema/__init__.py&quot;, line 13, in &lt;module&gt; from jsonschema._format import FormatChecker File &quot;/Users/danielbak/Library/Caches/pypoetry/virtualenvs/hw-sys-measurements-logging-ABlvfjOw-py3.9/lib/python3.9/site-packages/jsonschema/_format.py&quot;, line 11, in &lt;module&gt; from jsonschema.exceptions import FormatError File &quot;/Users/danielbak/Library/Caches/pypoetry/virtualenvs/hw-sys-measurements-logging-ABlvfjOw-py3.9/lib/python3.9/site-packages/jsonschema/exceptions.py&quot;, line 15, in &lt;module&gt; from referencing.exceptions import Unresolvable as _Unresolvable File &quot;/Users/danielbak/Library/Caches/pypoetry/virtualenvs/hw-sys-measurements-logging-ABlvfjOw-py3.9/lib/python3.9/site-packages/referencing/__init__.py&quot;, line 5, in &lt;module&gt; from referencing._core import Anchor, Registry, Resource, Specification File &quot;/Users/danielbak/Library/Caches/pypoetry/virtualenvs/hw-sys-measurements-logging-ABlvfjOw-py3.9/lib/python3.9/site-packages/referencing/_core.py&quot;, line 9, in &lt;module&gt; from rpds import HashTrieMap, HashTrieSet, List ImportError: cannot import name 'HashTrieMap' from 'rpds' (/Users/danielbak/Library/Caches/pypoetry/virtualenvs/hw-sys-measurements-logging-ABlvfjOw-py3.9/lib/python3.9/site-packages/rpds/__init__.py) </code></pre> <p>So it appears that poetry is not importing the correct version of rpds. 
How can I resolve this?</p> <p>I also had another issue where jsonschema's installation did not automatically pull in <code>attrs</code> so I had to <code>poetry add attrs</code>. It appears that jsonschema is not correctly pulling in its dependencies.</p>
<python><jsonschema><python-poetry>
2024-07-02 23:24:50
0
4,156
Daniel Paczuski Bak
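`referencing` needs a fairly new `rpds-py`, and this traceback together with the missing `attrs` (which jsonschema's package metadata does declare as a dependency) points to a stale or corrupted virtualenv rather than bad dependency resolution. Hedged commands to try, not a guaranteed fix:

```
poetry update rpds-py referencing jsonschema

# if that does not help, rebuild the virtualenv from the lock file
poetry env remove python3.9
poetry install
```

If `attrs` reappears on its own after the reinstall, that confirms the environment, not jsonschema's metadata, was at fault.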
78,699,498
22,407,544
Why is my Django app unable to find my URL pattern?
<p>My Django project is run in Docker and I use Celery to handle queuing. When a user submits an audio file the system starts an asynchronous task(that transcribes the audio), continuously checks its progress, and updates the UI once the transcription is complete, with a download button. However I've been getting an error after the transcription completes but before the download button appears. The error indicates that it can't find the view that provides the completed transcript to the user. Here's my code.</p> <p>views.py:</p> <pre><code>def initiate_transcription(request, session_id): file_name = request.session.get('uploaded_file_name') file_path = request.session.get('uploaded_file_path') if request.method == 'GET': if not file_name or not file_path: return redirect(reverse('transcribeSubmit')) if request.method == 'POST': try: if not file_name or not file_path: return redirect(reverse('transcribeSubmit')) audio_language = request.POST.get('audio_language') output_file_type = request.POST.get('output_file_type') if file_name and file_path: print(str(&quot;VIEW: &quot;+session_id)) task = transcribe_file_task.delay(file_path, audio_language, output_file_type, 'ai_transcribe_output', session_id) return JsonResponse({'status': 'success', 'task_id': task.id}) except Exception as e: return JsonResponse({'status': 'error', 'error': 'No file uploaded'}) return render(request, 'transcribe/transcribe-complete-en.html') def check_task_status(request, session_id, task_id): task_result = AsyncResult(task_id) if task_result.ready(): transcribed_doc = TranscribedDocument.objects.get(id=session_id) return JsonResponse({ 'status': 'completed', 'output_file_url': transcribed_doc.output_file.url }) else: return JsonResponse({'status': 'pending'}) </code></pre> <p>JS:</p> <pre><code>form.addEventListener('submit', function(event) { event.preventDefault(); const transcribeField = document.querySelector('.transcribe-output-lang-select') const errorDiv = 
document.querySelector('.error-transcribe-div'); transcribeField.style.opacity = '0'; setTimeout(function() { transcribeField.style.display = 'none'; transcribingFileField.style.display = 'block'; errorDiv.style.opacity = '0'; errorDiv.style.display = 'none'; }, 300); setTimeout(function() { transcribingFileField.style.opacity = '1' }, 500); const formData = new FormData(form); const xhr = new XMLHttpRequest(); xhr.onload = function() { if (xhr.status == 200) { const response = JSON.parse(xhr.responseText); if (response.status === 'success') { pollTaskStatus(response.task_id); } else { showError('An error occurred while initiating the transcription.'); } } else { showError('An error occurred while uploading the file.'); } }; xhr.onerror = function() { showError('An error occurred while uploading the file.'); }; xhr.open('POST', form.action, true); xhr.send(formData); }); function pollTaskStatus(taskId) { const pollInterval = setInterval(() =&gt; { const xhr = new XMLHttpRequest(); xhr.onload = function() { if (xhr.status == 200) { const response = JSON.parse(xhr.responseText); if (response.status === 'completed') { clearInterval(pollInterval); showCompletedUI(response.output_file_url); } } }; xhr.open('GET', `/check_task_status/${taskId}/`, true); xhr.send(); }, 5000); // Poll every 5 seconds } function showCompletedUI(outputFileUrl) { const transcribingText = document.querySelector('.transcribing-text'); transcribingText.textContent = 'Transcript Completed'; const downloadBtn = document.querySelector('.download-btn'); downloadBtn.addEventListener('click', function() { window.location.href = outputFileUrl; this.innerHTML = 'Transcript downloaded!'; setTimeout(() =&gt; { this.innerHTML = 'Click to download'; }, 3000); }); } function showError(message) { const errorDiv = document.querySelector('.error-transcribe-div'); errorDiv.textContent = message; errorDiv.style.opacity = '1'; errorDiv.style.display = 'block'; const transcribingFileField = 
document.querySelector('.transcribing-file-field'); transcribingFileField.style.opacity = '0'; transcribingFileField.style.display = 'none'; } function showTranscriptionComplete(fileUrl) { // Update UI to show transcription is complete console.log('success') const transcribingText = document.querySelector('.transcribing-text'); transcribingText.textContent = 'Transcript Completed'; const orderComplete = document.querySelector('.order-complete-data'); orderComplete.style.opacity = '0'; transcriptComplete = document.querySelector('.transcript-complete'); const transcriptSVG = document.querySelector('.transcript-svg'); setTimeout(function() { orderComplete.style.display = 'none'; transcriptComplete.style.opacity = '1'; transcriptComplete.style.display = 'block'; transcribeLoader.style.opacity = '0'; transcribeLoader.style.display = 'none'; transcriptSVG.style.opacity = '1'; transcriptSVG.style.display = 'block'; }, 300); let blob = new Blob([xhr.response], {type: xhr.getResponseHeader('Content-Type')}); let fileName = xhr.getResponseHeader('Content-Disposition').split('filename=')[1]; console.log(fileName); // Set up download button const downloadBtn = document.querySelector('.download-btn'); downloadBtn.addEventListener('click', function() { download(blob, fileName); this.innerHTML = 'Transcript downloaded!'; setTimeout(() =&gt; { this.innerHTML = 'Click to download'; }, 3000); } )}; </code></pre> <p>urls.py:</p> <pre><code>urlpatterns = [ path(&quot;&quot;, views.transcribeSubmit, name=&quot;transcribeSubmit&quot;), path(&quot;init-transcription/&lt;str:session_id&gt;/&quot;, views.initiate_transcription, name=&quot;initiate_transcription&quot;), path(&quot;check_task_status/&lt;str:task_id&gt;/&quot;, views.check_task_status, name=&quot;check_task_status&quot;), ] </code></pre> <p>Here's the error log:</p> <pre><code>web-1 | Not Found: /check_task_status/02760416-c2fb-4526-b0d0-d5cdafabf8cf/ </code></pre>
<python><django><docker><celery>
2024-07-02 22:50:51
0
359
tthheemmaannii
78,699,493
2,612,259
What is the Pythonic way to use match/case statements with classes that only provide getter methods and computed value methods?
<p>I have several classes that I don't own. These classes provide <em>methods</em> to access internal information, rather than attributes or properties.</p> <p>For example a class might look something like this:</p> <pre class="lang-py prettyprint-override"><code>class Foo: def __init__(self, a, b) -&gt; None: self._a = a self._b = b def a(self): return self._a def b(self): return self._b def c(self): return self._a + self._b </code></pre> <p>I would like to be able to use match/case to do something like this:</p> <pre><code>foo_list = [ Foo(1,0), Foo(0,1), Foo(1,2), Foo(3,3), Foo(3,2), ] for foo in foo_list: match foo: case Foo(a=a, b=0): print(f'first a = {a}, b = {foo.b}') case Foo(a=0, b=b): print(f'second a = {foo.a}, b = {b}') case Foo(a=1, b=b): print(f'third a = {foo.a}, b = {b}') case Foo(a=a, b=3): print(f'forth a = {a}, b = {foo.b}') case Foo(a=a, b=b, c=5): print(f'fifth a = {a}, b = {b}, c = {foo.c}') </code></pre> <p>However this does not work since the <code>case</code> statements matching an object expect constructor-like syntax where the named arguments are attributes or properties not getter methods, which is what I have.</p> <p>What is the Pythonic way to match/case statements with classes such as this?</p>
<python>
2024-07-02 22:49:21
3
16,822
nPn
78,699,293
3,486,684
Efficiently convert Polars string column into struct containing integer then unnest
<p>Consider the following toy example:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl xs = pl.DataFrame( [ pl.Series( &quot;date&quot;, [&quot;2024 Jan&quot;, &quot;2024 Feb&quot;, &quot;2024 Jan&quot;, &quot;2024 Jan&quot;], dtype=pl.String, ) ] ) ys = ( xs.with_columns( pl.col(&quot;date&quot;).str.split(&quot; &quot;).list.to_struct(fields=[&quot;year&quot;, &quot;month&quot;]), ) .with_columns( pl.col(&quot;date&quot;).struct.with_fields(pl.field(&quot;year&quot;).cast(pl.Int16())) ) .unnest(&quot;date&quot;) ) ys </code></pre> <pre><code>shape: (4, 2) ┌──────┬───────┐ │ year ┆ month │ │ --- ┆ --- │ │ i16 ┆ str │ ╞══════╪═══════╡ │ 2024 ┆ Jan │ │ 2024 ┆ Feb │ │ 2024 ┆ Jan │ │ 2024 ┆ Jan │ └──────┴───────┘ </code></pre> <p>I think it would be more efficient to do the operations on a unique series of date data (<a href="https://stackoverflow.com/questions/76681016/python-pandas-x-polars-values-mapping-lookup-value">I could use <code>replace</code>, but I have opted for <code>join</code></a> for no good reason):</p> <pre class="lang-py prettyprint-override"><code>unique_dates = ( pl.DataFrame([xs[&quot;date&quot;].unique()]) .with_columns( pl.col(&quot;date&quot;) .str.split(&quot; &quot;) .list.to_struct(fields=[&quot;year&quot;, &quot;month&quot;]) .alias(&quot;struct_date&quot;) ) .with_columns( pl.col(&quot;struct_date&quot;).struct.with_fields( pl.field(&quot;year&quot;).cast(pl.Int16()) ) ) ) unique_dates </code></pre> <pre><code>shape: (2, 2) ┌──────────┬──────────────┐ │ date ┆ struct_date │ │ --- ┆ --- │ │ str ┆ struct[2] │ ╞══════════╪══════════════╡ │ 2024 Jan ┆ {2024,&quot;Jan&quot;} │ │ 2024 Feb ┆ {2024,&quot;Feb&quot;} │ └──────────┴──────────────┘ </code></pre> <pre class="lang-py prettyprint-override"><code>zs = ( xs.join(unique_dates, on=&quot;date&quot;) .drop(&quot;date&quot;) .rename({&quot;struct_date&quot;: &quot;date&quot;}) .unnest(&quot;date&quot;) ) zs </code></pre> <pre><code>shape: (4, 2) ┌──────┬───────┐ │ year 
┆ month │ │ --- ┆ --- │ │ i16 ┆ str │ ╞══════╪═══════╡ │ 2024 ┆ Jan │ │ 2024 ┆ Feb │ │ 2024 ┆ Jan │ │ 2024 ┆ Jan │ └──────┴───────┘ </code></pre> <p>What can I do to improve the efficiency of this operation even further? Am I using <code>polars</code> idiomatically enough?</p>
<python><dataframe><python-polars>
2024-07-02 21:24:48
1
4,654
bzm3r
78,699,268
1,036,396
Settings REST_FRAMEWORK error DEFAULT_SCHEMA_CLASS configuration
<p>I'm following along with <a href="https://blog.logrocket.com/django-rest-framework-create-api/" rel="nofollow noreferrer">this tutorial</a>. I added the &quot;REST_FRAMEWORK&quot; object as they describe in settings.py:</p> <pre><code>REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': ( 'rest_framework_simplejwt.authentication.JWTAuthentication', 'DEFAULT_SCHEMA_CLASS': 'drf_spectacular.openapi.AutoSchema', ), } </code></pre> <p>I get this error after running the command <code>py manage.py runserver</code>. Any help is greatly appreciated.</p> <p><a href="https://i.sstatic.net/4alYpQ5L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4alYpQ5L.png" alt="error" /></a></p>
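A likely cause, judging from the snippet: `'DEFAULT_SCHEMA_CLASS'` has been nested inside the `DEFAULT_AUTHENTICATION_CLASSES` tuple instead of being a top-level key. A corrected layout would look like this (a sketch, assuming these two settings are the ones intended):

```python
# settings.py -- each setting is a top-level key of REST_FRAMEWORK,
# not an element nested inside the authentication tuple
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework_simplejwt.authentication.JWTAuthentication',
    ),
    'DEFAULT_SCHEMA_CLASS': 'drf_spectacular.openapi.AutoSchema',
}
```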
<python><django-rest-framework>
2024-07-02 21:17:04
1
564
Jason Spence
78,699,259
14,345,989
Sklearn NMF match input category order
<p>I'm using Non-Negative Matrix Factorization (NMF) in sklearn for unsupervised category prediction (with labels for checking accuracy), but am running into the problem where I don't have a clear map between input categories and transformed categories.</p> <p>For example, if my categories are &quot;A&quot;, &quot;B&quot;, and &quot;C&quot; (n_components=3) I don't know which order the transformed categories will be in. I can manually print the data associated with each output feature to determine which input it most closely resembles, but am looking for an automatic solution.</p> <p>Is there a convenient method for this, or do I need to perform guess-and-check to see what category order maximizes accuracy (very slow for large numbers of categories)?</p>
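One automatic approach (a sketch, not from the question; the function name and scoring rule are my own): treat it as an assignment problem. Score each (component, label) pair by how strongly samples of that label load on the component, then find the one-to-one matching that maximises total score with `scipy.optimize.linear_sum_assignment` — no guess-and-check, and polynomial time even for many categories.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_components_to_labels(W, labels, n_components):
    """Map each NMF component to the label whose samples load on it most.

    W      : (n_samples, n_components) matrix from nmf.fit_transform(X)
    labels : integer label per sample, values in range(n_components)
    Returns {component_index: label_index}.
    """
    # score[c, l] = total weight of component c over samples with label l
    score = np.zeros((n_components, n_components))
    for lab in range(n_components):
        score[:, lab] = W[labels == lab].sum(axis=0)
    # Hungarian algorithm maximises total score via a one-to-one assignment
    comp_idx, lab_idx = linear_sum_assignment(-score)
    return dict(zip(comp_idx, lab_idx))
```

With the mapping in hand, predicted components can be relabelled before computing accuracy.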
<python><scikit-learn><matrix-factorization>
2024-07-02 21:13:31
1
886
Mandias
78,699,230
14,250,641
How to save single Random Forest model with cross validation?
<p>I am using 10 fold cross validation, trying to predict binary labels (Y) based on the embedding inputs (X). I want to save one of the models (perhaps the one with the highest ROC AUC). I'm not sure how to do it because the ROC AUCs are not stored and I don't know how to grab accordingly.</p> <pre><code>X = np.array([np.array(x) for x in df['embeddings'].values]) y = df['label'].values groups = df['chromosome'].values group_kfold = GroupKFold(n_splits=n_folds) </code></pre> <h1>Initialize figure for plotting</h1> <pre><code>fig, axes = plt.subplots(1, 2, figsize=(15, 6)) all_fpr = [] all_tpr = [] all_accuracy = [] all_pr_auc = [] </code></pre> <pre><code>Perform cross-validation and plot ROC and PR curves for each fold for i, (train_idx, val_idx) in enumerate(group_kfold.split(X, y, groups)): X_train_fold, X_val_fold = X[train_idx], X[val_idx] y_train_fold, y_val_fold = y[train_idx], y[val_idx] # Initialize classifier rf_classifier = RandomForestClassifier(n_estimators=n_trees, random_state=42, max_depth=max_depth, n_jobs=-1) # Train the classifier on this fold rf_classifier.fit(X_train_fold, y_train_fold) # Make predictions on the validation set y_pred_proba = rf_classifier.predict_proba(X_val_fold)[:, 1] # Calculate ROC curve fpr, tpr, _ = roc_curve(y_val_fold, y_pred_proba) all_fpr.append(fpr) all_tpr.append(tpr) # Calculate AUC roc_auc = auc(fpr, tpr) # Plot ROC curve for this fold axes[0].plot(fpr, tpr, lw=1, alpha=0.7, label=f'ROC Fold {i+1} (AUC = {roc_auc:.2f})') # Calculate precision-recall curve precision, recall, _ = precision_recall_curve(y_val_fold, y_pred_proba) # Calculate PR AUC pr_auc = auc(recall, precision) all_pr_auc.append(pr_auc) # Plot PR curve for this fold axes[1].plot(recall, precision, lw=1, alpha=0.7, label=f'PR Curve Fold {i+1} (AUC = {pr_auc:.2f})') # Calculate accuracy accuracy = accuracy_score(y_val_fold, rf_classifier.predict(X_val_fold)) all_accuracy.append(accuracy) # Initialize empty arrays to store interpolated TPR values 
interpolated_tpr = [] # Define common set of thresholds mean_fpr = np.linspace(0, 1, 100) # Interpolate TPR values for each fold to the common set of thresholds for fpr, tpr in zip(all_fpr, all_tpr): interpolated_tpr.append(np.interp(mean_fpr, fpr, tpr)) # Calculate the mean and standard deviation of interpolated TPR values mean_tpr = np.mean(interpolated_tpr, axis=0) std_tpr = np.std(interpolated_tpr, axis=0) # Plot the mean ROC curve with shaded area representing the standard deviation axes[0].plot(mean_fpr, mean_tpr, color='black', linestyle='--', lw=2, label=f'Average ROC curve ({np.round(auc(mean_fpr, mean_tpr), 2)})') axes[0].fill_between(mean_fpr, mean_tpr - std_tpr, mean_tpr + std_tpr, color='grey', alpha=0.2) # Plot ROC for random classifier axes[0].plot([0, 1], [0, 1], linestyle='--', lw=2, color='r', alpha=0.8) </code></pre>
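A minimal way to keep the best fold's model (a sketch using the variable names from the snippet above; the commented lines show where the bookkeeping would slot into the existing loop, and `track_best` is a hypothetical helper illustrating the same pattern):

```python
import pickle

best_auc = -1.0
best_model = None

# Inside the existing CV loop, right after `roc_auc = auc(fpr, tpr)`:
#     if roc_auc > best_auc:
#         best_auc, best_model = roc_auc, rf_classifier
#
# After the loop, persist the winner:
#     with open("best_rf.pkl", "wb") as f:
#         pickle.dump(best_model, f)

def track_best(candidates):
    """candidates: iterable of (auc, model); returns (best_auc, best_model)."""
    best = (-1.0, None)
    for auc_val, model in candidates:
        if auc_val > best[0]:
            best = (auc_val, model)
    return best
```

Because a fresh `RandomForestClassifier` is created each iteration, holding a reference to the winning one costs nothing extra; `joblib.dump` is a common alternative to `pickle` for sklearn models.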
<python><pandas><numpy><scikit-learn><cross-validation>
2024-07-02 21:02:25
1
514
youtube
78,699,135
4,200,910
Github API - How to add a repository to a team, with a particular role?
<p>I want to use Github's rest API to add a repository to a team within my organization, and add it as the admin role on said repo.</p> <p>Now I've obviously checked <a href="https://docs.github.com/en/rest/teams/teams?apiVersion=2022-11-28#add-or-update-team-repository-permissions" rel="nofollow noreferrer">github's documentation on this</a>, but no matter what I try I'm having a lot of trouble with this particular call.</p> <p>I'm using <code>requests</code>, and this is the code snippet in question:</p> <pre><code>url = f&quot;https://api.github.com/orgs/{owner}/teams/{teamname}/repos/{owner}/{repo}&quot; headers = { &quot;Authorization&quot;: f&quot;token {token}&quot; } params = { &quot;permission&quot;: &quot;Admin&quot; } response = requests.post(url, headers=headers, params=params) </code></pre> <p>Doesn't work. I get a 404 when I try this.</p> <p>I wonder if I'm misunderstanding the exact URL I should be using here. It's odd that I have the owner in there twice; in the document it says &quot;org&quot; for the first one, which makes sense, and then it says &quot;repos/{owner}/{repo}&quot; and I'm not 100% sure what that one is supposed to be.</p> <p>It also calls it &quot;team_slug.&quot; I've looked all over the documentation and can't find what exactly team_slug means and whether it's different from the team name.</p>
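Per the linked documentation, this endpoint is a `PUT` (not `POST`), the permission belongs in the JSON body rather than in query parameters, the value is lower-case (`"admin"`), and `team_slug` is the URL-friendly form of the team name (lower-cased, spaces replaced with hyphens). A sketch of a corrected call (`add_repo_to_team` is a made-up helper; it builds the request and only sends it when asked, so the pieces can be inspected):

```python
def add_repo_to_team(owner, team_slug, repo, token, permission="admin", send=False):
    """Grant a team a role on a repo.

    team_slug is the URL-safe team name (e.g. "My Team" -> "my-team"),
    not the display name. Returns the prepared pieces unless send=True.
    """
    url = (f"https://api.github.com/orgs/{owner}"
           f"/teams/{team_slug}/repos/{owner}/{repo}")
    headers = {
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github+json",
    }
    payload = {"permission": permission}  # lower-case: pull / push / admin ...
    if send:
        import requests
        # PUT, with the permission in the JSON body -- not query params
        return requests.put(url, headers=headers, json=payload)
    return url, headers, payload
```

A successful call returns HTTP 204 with an empty body; note that GitHub also answers 404 (rather than 403) when the token lacks the needed scopes, so a persistent 404 with a correct URL is worth re-checking against the token's permissions.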
<python><github><python-requests><github-api>
2024-07-02 20:34:11
1
435
Whitewind617
78,699,064
6,141,238
When using matplotlib, how do I set the on-screen lengths of the x and y axes to be equal without changing the limits of either axis?
<p>I would like to make the axes of a matplotlib plot a square, and do so by stretching or compressing the scaling of the axes rather than by changing the axis limits. How can this be done?</p> <p>(In other words, I am seeking a Python analog of the MATLAB command <code>axis square</code>.)</p> <p>So far, I have tried:</p> <ul> <li><p><code>plt.gca().set_aspect('equal')</code>. This seems to equalize the scaling, rather than the on-screen lengths, of the x and y axes, consistent with the matplotlib <a href="https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.set_aspect.html" rel="nofollow noreferrer">documentation</a>. The axes of the resulting plot are not square unless ymax - ymin = xmax - xmin, where xmin, xmax, ymin, and ymax are the axes limits (and this equality does not hold for my plot).</p> </li> <li><p><code>plt.axis('equal')</code> and <code>plt.axis('scaled')</code>. These seem to provide the results shown and described in <a href="https://stackoverflow.com/questions/45057647/difference-between-axisequal-and-axisscaled-in-matplotlib">this</a> post. In general, the axes of the resulting plot are not square.</p> </li> </ul>
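For completeness, Matplotlib (3.3+) has `Axes.set_box_aspect`, which fixes the physical shape of the axes box without touching the data limits — the closest analog of MATLAB's `axis square`. A minimal sketch:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 10], [0, 1])   # x-range is ten times the y-range
ax.set_box_aspect(1)       # square on-screen box; data limits unchanged
```

Unlike `set_aspect('equal')`, which equalises data units per inch, `set_box_aspect(1)` equalises the on-screen height and width of the axes box itself.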
<python><matplotlib><axis>
2024-07-02 20:09:40
1
427
SapereAude
78,698,933
6,622,697
How to validate Post arguments in Django
<p>I've seen plenty of solutions on how to validate Form field parameters, but I am not using Django templates or implementing the front-end in Django at all. I'm looking purely for server-side backend validation. Let's say I have</p> <pre><code>@api_view(['POST']) def my_func(request): </code></pre> <p>And I want the data to be something like:</p> <pre><code>{ &quot;username&quot; : &lt;user&gt;, &quot;password&quot; : &lt;pw&gt;, &quot;age&quot; : &lt;age&gt; } </code></pre> <p>I want to validate that <code>username</code> and <code>password</code> are strings, perhaps with a minimum length requirement. And that <code>age</code> is a number, again, perhaps with other restrictions, such as min/max.</p> <p>Is there a standard way to do this?</p>
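The standard DRF answer is a `serializers.Serializer` with `CharField(min_length=...)` and `IntegerField(min_value=..., max_value=...)`, validated in the view via `serializer.is_valid(raise_exception=True)`. If you want the same checks without framework machinery, a minimal hand-rolled sketch looks like this (`validate_signup` and all thresholds are illustrative, not from the question):

```python
def validate_signup(data, min_pw_len=8, min_age=13, max_age=120):
    """Return a dict of field errors; empty dict means the payload is valid."""
    errors = {}
    username = data.get("username")
    if not isinstance(username, str) or len(username) < 3:
        errors["username"] = "must be a string of at least 3 characters"
    password = data.get("password")
    if not isinstance(password, str) or len(password) < min_pw_len:
        errors["password"] = f"must be a string of at least {min_pw_len} characters"
    age = data.get("age")
    if not isinstance(age, int) or not (min_age <= age <= max_age):
        errors["age"] = f"must be an integer between {min_age} and {max_age}"
    return errors
```

In a DRF view you would call this (or, better, a serializer) on `request.data` and return a 400 response with the error dict when it is non-empty.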
<python><django><validation><django-views>
2024-07-02 19:30:18
1
1,348
Peter Kronenberg
78,698,840
2,153,235
Get the DPI for a PyPlot figure
<p>At least in Spyder, the PyPlot plots can be low resolution, e.g., from:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt # import seaborn as sns rng = np.random.default_rng(1) scat = plt.scatter( *rng.integers( 10, size=(2,10) ) ) </code></pre> <p>My web surfing has brought me to a suggested solution: Increase the dots-per-inch.</p> <pre><code>plt.rcParams['figure.dpi']=300 </code></pre> <p>Surfing indicates that the default is 100.</p> <p><em><strong>After much experimentation, is there a way to &quot;get&quot; the DPI from the plotted object, e.g., <code>scat</code> above?</strong></em> There doesn't seem to be a <code>scat.dpi</code>, <code>scat.getdpi</code>, or <code>scat.get_dpi</code>.</p> <p><strong>Afternote:</strong> Thanks to BigBen for pointing out the object-oriented interface. It requires the definition of a figure and axes therein before actually plotting the data. His code patterns seem to return a DPI, but in Spyder, the displayed figure isn't updated with the plotted data.</p> <p><a href="https://i.sstatic.net/AJy4AA68.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJy4AA68.png" alt="enter image description here" /></a></p> <p>Web surfing yields indications that &quot;inline&quot; plots in Spyder are static. I wasn't entirely sure if this was the cause, since the plots in the &quot;Plots&quot; window aren't inline (top half of above image), but they also fail to show the plotted data. Eventually, I found that setting the following did allow BigBen's solution to work: <code>Tools -&gt; Preferences -&gt; IPython console -&gt; Graphics -&gt; Graphics backend -&gt; Automatic</code>. A separate window opens for the figure and axes when it is defined by the source code, and it is updated with the plot of the data when <code>scatter</code> is invoked.</p>
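The artist returned by `plt.scatter` (a `PathCollection`) has no DPI of its own, but it keeps a reference to the figure that owns it, and the DPI lives on the figure. A minimal sketch:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs anywhere
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
scat = plt.scatter(*rng.integers(10, size=(2, 10)))

# Walk from the artist back to its owning figure to read the DPI
dpi = scat.figure.dpi          # equivalently: scat.get_figure().get_dpi()
```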
<python><matplotlib>
2024-07-02 19:06:34
1
1,265
user2153235
78,698,804
14,259,505
How to recognize audio when I provide a list of more than 4 languages in Azure using recognize_once()?
<p>Azure Speech SDK has a limitation where it only supports detecting up to 4 languages simultaneously in the &quot;DetectAudioAtStart&quot; mode. To work around this limitation, I create batches of 4 languages from the languages_to_detect list and attempt to detect the language for each batch. But it is not able to recognize the language and gives me a wrong answer. I am passing a Bengali audio file and it says Hindi, which is wrong. Below is the code for your reference:</p> <pre><code>import azure.cognitiveservices.speech as speechsdk subscription_key = &quot;00000000000000000000000000&quot; service_region = &quot;westus&quot; audio_file_path = &quot;C:\\yogesh_folder\\speech_bangla.wav&quot; # List of all languages to detect languages_to_detect = [&quot;en-US&quot;, &quot;ml-IN&quot;, &quot;ta-IN&quot;, &quot;te-IN&quot;, &quot;gu-IN&quot;, &quot;kn-IN&quot;, &quot;mr-IN&quot;, &quot;pa-IN&quot;, &quot;bn-IN&quot;, &quot;hi-IN&quot;] # Configure speech recognition speech_config = speechsdk.SpeechConfig(subscription=subscription_key, region=service_region) # Audio configuration audio_config = speechsdk.audio.AudioConfig(filename=audio_file_path) # Initialize detected language detected_language = None # Iterate through batches of 4 languages for i in range(0, len(languages_to_detect), 4): # Slice the batch of languages batch_languages = languages_to_detect[i:i+4] # Configure auto-detection of source language for current batch auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig( languages=batch_languages ) # Create a speech recognizer instance for current batch speech_recognizer = speechsdk.SpeechRecognizer( speech_config=speech_config, auto_detect_source_language_config=auto_detect_source_language_config, audio_config=audio_config ) # Perform recognition print(f&quot;Detecting speech in languages: {batch_languages}&quot;) result = speech_recognizer.recognize_once() # Check result if result.reason ==
speechsdk.ResultReason.RecognizedSpeech: detected_language = result.properties.get(speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult) print(f&quot;Detected language: {detected_language}&quot;) break # Exit loop if language is detected # If no language is detected, provide feedback if detected_language is None: print(&quot;No language detected.&quot;) </code></pre>
<python><azure><speech-recognition><azure-cognitive-services><azure-speech>
2024-07-02 18:54:42
1
391
Yogesh Govindan
78,698,733
6,622,697
How do I get csrfToken for front-end to submit logins using Django?
<p>I've read all the issues about this, but it's still not clear to me.</p> <p>I am not using Django templates with Django Rest Framework to handle login functions (login, logout, changepassword, create user, etc).</p> <p>I tried POSTing to the <code>accounts/login</code> page with</p> <pre><code>&quot;username&quot;: &lt;user&gt;, &quot;password&quot;: &lt;password&gt; </code></pre> <p>But it also wants a <code>CsrfViewMiddleware</code> token. It's not clear to me where I get that. Does the front-end have to request it first from another endpoint? Is there some other way to do the CSR checking?</p> <h1>Update</h1> <p>I have tried getting a CSRF token using</p> <pre><code>@api_view(['GET']) def csrf(request): return JsonResponse({'csrf': get_token(request)}) </code></pre> <p>I then tried the <code>accounts/login</code> request setting this token in the body as <code>csrfmiddlewaretoken</code>. I also sent it in the header as <code>X-CSRFToken</code></p> <pre><code>{ &quot;username&quot;: &quot;peter&quot;, &quot;password&quot;: &quot;test&quot;, &quot;csrfmiddlewaretoken&quot;: &quot;ze3OW730d8q0F5QcsH6UmW1yQu0Qj3wPWREofiyhw3H4FNrlZH0HUYfB1yhmriz8&quot; } </code></pre> <p>But I still get the error</p> <pre><code> &lt;h1&gt;Forbidden &lt;span&gt;(403)&lt;/span&gt;&lt;/h1&gt; &lt;p&gt;CSRF verification failed. Request aborted.&lt;/p&gt; </code></pre>
<python><django><django-rest-framework><django-csrf>
2024-07-02 18:33:27
1
1,348
Peter Kronenberg
78,698,418
9,744,061
Making an animation using Python gives me an error and the animation doesn't appear
<p>I want to plot an animation of y = omega*x^2, where omega goes from -3 to 3 with step size 0.25 and x from -4 to 4 with step size 0.001. I'm new to studying plot animation in Python. I tried to write the code, but it gives me the following error and no animation appears.</p> <pre><code>UserWarning: Animation was deleted without rendering anything. This is most likely not intended. To prevent deletion, assign the Animation to a variable, e.g. `anim`, that exists until you have outputted the Animation using `plt.show()` or `anim.save()`. Process finished with exit code 0 </code></pre> <p>This is my code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation fig = plt.figure() axis = plt.axes(xlim=(-4, 4), ylim=(-40, 40)) def init(): line.set_data([], []) return line, def animate(omega): x = np.linspace(-4, 4, 8000) y = omega*x**2 line.set_data(x, y) return line, line, = axis.plot([], [], lw=3) anim = FuncAnimation(fig, animate, init_func=init, frames=200, interval=20, blit=True) </code></pre> <p>I don't know what the mistake in the code is. How can I fix the code so that the animation appears? Thanks for your help.</p>
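The warning itself points at the fix: the script ends without ever rendering the animation, so `anim` is garbage-collected. Keeping the reference and calling `plt.show()` (or `anim.save(...)`) at the end resolves it; passing the omega values directly as `frames` also makes the sweep from -3 to 3 explicit. A sketch (rendered headless here via the Agg backend; interactively you would call `plt.show()`):

```python
import matplotlib
matplotlib.use("Agg")  # headless here; drop this line for an interactive window
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig = plt.figure()
axis = plt.axes(xlim=(-4, 4), ylim=(-40, 40))
line, = axis.plot([], [], lw=3)
x = np.linspace(-4, 4, 8000)

def animate(omega):
    # each frame receives one omega value and redraws the parabola
    line.set_data(x, omega * x**2)
    return line,

omegas = np.arange(-3, 3.25, 0.25)   # -3 .. 3 in steps of 0.25 (25 frames)
anim = FuncAnimation(fig, animate, frames=omegas, interval=20, blit=True)

# Keep `anim` alive, then render it one way or another:
# plt.show()                # interactive window
# anim.save("parabola.gif") # or write the animation to a file
```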
<python><matplotlib><animation><plot><matplotlib-animation>
2024-07-02 17:08:04
1
305
Ongky Denny Wijaya
78,698,334
3,366,355
There's no Build available in Visual Studio 10 for Python code
<p>I need a custom script to run Shiv and package my Python code. I know there are Build &gt; Events that I can use to run a PreBuild script, but this isn't available for Python, and it looks like the Build menu doesn't show anything useful either. Is there any workaround for this?</p> <p><a href="https://i.sstatic.net/CUTnPzIr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CUTnPzIr.png" alt="enter image description here" /></a></p>
<python><visual-studio>
2024-07-02 16:45:04
1
1,032
martin
78,698,202
2,862,945
Rotating a vector field in an efficient way in python
<p>I need to rotate a 3D vector field by 90 degrees. I do that by first using the NumPy function <code>rot90</code> (see the <a href="https://numpy.org/doc/stable/reference/generated/numpy.rot90.html" rel="nofollow noreferrer">docs here</a>). Then I rotate each component at each position in a very inefficient way: using nested <code>for</code> loops. It works fine, as can be seen for example by the following two figures: on the left you see the original vector field, where all vectors simply point into the x-direction, on the right the whole field is rotated by 90 degrees. It works as intended, but it is implemented in a very inefficient way, making it almost useless when I come to large array (e.g. 100x200x400).</p> <p><a href="https://i.sstatic.net/YWFXOQx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YWFXOQx7.png" alt="enter image description here" /></a></p> <p>This is my implementation:</p> <pre><code># import standard modules import matplotlib.pyplot as plt import numpy as np from mayavi import mlab def define_3D_vector_field( Nx=30, Ny=40, Nz=50, direction='x' ): # initialze empty arrays, one for each vector component vecfield_x = np.zeros( (Nx, Ny, Nz) ) vecfield_y = np.zeros( (Nx, Ny, Nz) ) vecfield_z = np.zeros( (Nx, Ny, Nz) ) if direction == 'x': vecfield_x[ round(.25*Nx):round(.75*Nx), round(.25*Ny):round(.75*Ny), round(.25*Nz):round(.75*Nz) ] = 1. return vecfield_x, vecfield_y, vecfield_z def make_simple_3Dvecfield_plot( vecfield_x, vecfield_y, vecfield_z ): fig1 = mlab.figure( bgcolor=(1,1,1), fgcolor=(0,0,0), size=(800,600), ) src = mlab.pipeline.vector_field( vecfield_x, vecfield_y, vecfield_z ) mlab.pipeline.vectors(src, mask_points=20, scale_factor=4.) mlab.xlabel('x') mlab.ylabel('y') mlab.zlabel('z') mlab.outline() mlab.show() def rot90vecfield(vecfield_x, vecfield_y, vecfield_z, rot_axis='y'): theta = np.radians(90.) 
if rot_axis == 'y': vf_x = np.rot90(vecfield_x, k=1, axes=(2,0)) vf_y = np.rot90(vecfield_y, k=1, axes=(2,0)) vf_z = np.rot90(vecfield_z, k=1, axes=(2,0)) # NOTE: this requires the transposed matrix to work correctly rot_mat = np.array( [ [np.cos(theta) , 0., np.sin(theta) ], [0. , 1., .0 ], [-np.sin(theta), 0., np.cos(theta) ] ] ).T vf_x_tmp = np.copy(vf_x) vf_y_tmp = np.copy(vf_y) vf_z_tmp = np.copy(vf_z) for xx in range(vecfield_x.shape[0]): for yy in range(vecfield_x.shape[1]): for zz in range(vecfield_x.shape[2]): vec_xyz = np.dot( rot_mat, np.array( [vf_x_tmp[zz,yy,xx], vf_y_tmp[zz,yy,xx], vf_z_tmp[zz,yy,xx]] ) ) vf_x[ zz, yy, xx ] = vec_xyz[0] vf_y[ zz, yy, xx ] = vec_xyz[1] vf_z[ zz, yy, xx ] = vec_xyz[2] return vf_x, vf_y, vf_z def main(): vf_x, vf_y, vf_z = define_3D_vector_field() make_simple_3Dvecfield_plot( vf_x, vf_y, vf_z ) vf_x, vf_y, vf_z = rot90vecfield( vf_x, vf_y, vf_z ) make_simple_3Dvecfield_plot( vf_x, vf_y, vf_z ) if __name__ == '__main__': main() </code></pre> <p>I am pretty sure there is some nice little algebra which I can apply, I just can't get my head around it. Any hints on how to make this more efficient (memory-wise and computation-wise) would be greatly appreciated!</p>
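The triple `for` loop is a single tensor contraction, so it can be replaced with one `np.einsum` call: stack the three components into one `(..., 3)` array and contract the 3x3 matrix against the last axis at every grid point. A sketch (the function name is mine; pass the same transposed rotation matrix the loop used, and the result matches the loop output while being vectorised):

```python
import numpy as np

def rot90_vector_field(vx, vy, vz, rot_mat):
    """Rotate the grid with np.rot90 and the vectors with one einsum call."""
    # rotate the sample positions exactly as in the original code
    vx = np.rot90(vx, k=1, axes=(2, 0))
    vy = np.rot90(vy, k=1, axes=(2, 0))
    vz = np.rot90(vz, k=1, axes=(2, 0))
    field = np.stack([vx, vy, vz], axis=-1)          # shape (..., 3)
    # out[..., i] = sum_j rot_mat[i, j] * field[..., j] at every voxel
    rotated = np.einsum("ij,...j->...i", rot_mat, field)
    return rotated[..., 0], rotated[..., 1], rotated[..., 2]
```

This removes the Python-level loop entirely; memory-wise it allocates one stacked array instead of three copies plus per-voxel temporaries, and for a 100x200x400 grid it is orders of magnitude faster than nested loops.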
<python><numpy><matrix><rotation><linear-algebra>
2024-07-02 16:08:52
2
2,029
Alf
78,698,175
769,933
compute intensive function with multiple outputs applied elementwise to polars Array column, use iter_slices?
<p>I need to apply a compute intensive function, mainly consisting of calls to numpy functions, to a column of a polars DataFrame that has the <code>pl.Array(...)</code> datatype. I need a streaming method, as my actual data is regularly larger than the computer memory. I can do it using <code>iter_slices</code>, creating DataFrames, and concatenating them.</p> <p>But it really feels like this is something <code>map_batches</code> is meant to be used for, but it doesn't handle multiple return values. Is there something else I should look at, or should I call this good enough until I implement a rust UDF?</p> <pre class="lang-py prettyprint-override"><code>import polars as pl import numpy as np filter1 = np.array([-1,1,-1,1,1]) filter2 = np.array([0.3,0.3,-0.3,0.3, 0.3]) def compute(x): a = np.dot(x, filter1) b = np.dot(x, filter2) return np.sin(a-b), np.cos(a+b) df = pl.DataFrame({&quot;a&quot;:[np.arange(5.0)+i for i in range(5)]}, schema={&quot;a&quot;:pl.Array(pl.Float64,5)}) dfs = [] for df_iter in df.iter_slices(10000): peak_x, peak_y = compute(df_iter[&quot;a&quot;].to_numpy()) dfs.append(pl.DataFrame({&quot;c&quot;: peak_x, &quot;d&quot;: peak_y})) df_out = pl.concat(dfs) </code></pre>
<python><numpy><python-polars>
2024-07-02 16:02:49
0
2,396
gggg
78,698,115
8,565,759
Plotting multiple dataframes in a single output using Python Bokeh - how to create legends for each plot in a loop?
<p>I am trying to input a single file that contains data from different packet streams (hence, different time values). I created a dataframe for each time, and the data points/columns from each dataframe are plotted in a single plot. I cannot figure out how to append the items in the legend to reflect the correct column names in the loop. This is what I have so far:</p> <pre><code>j = 0 tabs_df_list = [] ## Creating source and figures for each df for x in df_list: j += 1 df_col_names = x.columns.to_numpy() col_names = df_col_names.tolist() source = &quot;source&quot;+str(j) source = ColumnDataSource(x) ## Create a figure for each df p = &quot;p&quot; + str(j) p = figure(title = 'Test Tlm Report', match_aspect = False, toolbar_location = 'right', height=750, width=1000, x_axis_label = 'Time [hh:mm]', # needs to correct time series y_axis_label = 'Data value') # need to customize to tlm col tabs_temp = [] items_list = [] k = 1 while k &lt; len(col_names): r = &quot;r&quot; + str(k) r = p.line(x=col_names[0], y=col_names[k], source=source, line_width=2, line_color=&quot;lightseagreen&quot;, name=col_names[k], legend_label=col_names[k]) r = p.scatter(x=col_names[0], y=col_names[k], source=source) items_list.append(col_names[k]) items_list.append(r) tab = [TabPanel(child=p, title=col_names[0])] k += 1 p.add_tools(HoverTool(tooltips=tooltips)) legend = Legend(items=[items_list ],location=(3, -25)) p.add_layout(legend, 'right') p.legend.click_policy=&quot;mute&quot; tabs_df = &quot;tabss&quot; + str(j) tabs_df = Tabs(tabs=tab) tabs_df_list.append(tabs_df) grid = gridplot (tabs_df_list, ncols=1) show (grid) </code></pre> <p>The items list in the legend is what I do not know how to build dynamically =/</p>
<python><pandas><dataframe><plot><bokeh>
2024-07-02 15:49:59
1
420
Brain_overflowed
78,698,110
7,425,756
Is s.rfind() in Python implemented using iterations in backward direction?
<p>Does rfind iterate over a string from end to start?</p> <p>I read the docs <a href="https://docs.python.org/3.12/library/stdtypes.html#str.rfind" rel="nofollow noreferrer">https://docs.python.org/3.12/library/stdtypes.html#str.rfind</a> and see</p> <blockquote> <p>str.rfind(sub[, start[, end]])</p> <p>Return the highest index in the string where substring sub is found, such that sub is contained within s[start:end]. Optional arguments start and end are interpreted as in slice notation. Return -1 on failure.</p> </blockquote> <p>And the doc says not much about the implementation. Maybe there are some implementation notes somewhere else in the docs.</p> <p>I have tried to look up the source code using my IDE (Visual Code) and it showed me something pretty much like an interface stub for some hidden native (C/C++) code.</p> <pre><code>def rfind(self, sub: str, start: SupportsIndex | None = ..., end: SupportsIndex | None = ..., /) -&gt; int: ... </code></pre> <p>Then I tried to find the appropriate source code on GitHub in Python repositories, but it turned out to be not so easy.</p> <p>I am a newbie in Python. So while it may be obvious to everyone else how to look up the source code needed to find the answer, it is not straightforward for me.</p>
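As far as I can tell, there is no Python-level loop to read: CPython implements `str.rfind` in C, in the shared string-search code under `Objects/stringlib/` in the CPython repository, where the reverse variant of the fast search scans from the right. Behaviourally, though, the result is exactly what a backward scan would produce, which you can model (and check) in pure Python:

```python
def rfind_manual(s, sub):
    """A pure-Python model of rfind: walk candidate positions right to left."""
    for i in range(len(s) - len(sub), -1, -1):
        if s[i:i + len(sub)] == sub:
            return i
    return -1
```

The stub the IDE shows comes from the typeshed `.pyi` files, which describe only signatures; the real implementation for built-in types lives in CPython's C sources.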
<python><python-builtins>
2024-07-02 15:48:22
1
528
Alexander Ites
78,697,972
1,575,381
TypeError: list indices must be integers, not tuple
<p>This is a common question on SO but all the examples I've looked at are fairly complicated. So I am posting the simplest example I can think of to get a straight-forward answer.</p> <pre><code>x = [(1,2),(3,4),(5,6)] print(x[0]) (1,2) print(x[0,1]) TypeError: list indices must be integers or slices, not tuple </code></pre> <p>The error message indicates that if I want a consecutive range of elements, I can use a slice: <code>x[0:2]</code>. But what if I want non-consecutive elements like <code>x[0,2]</code>? How do I get those by index?</p>
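Plain lists don't support NumPy-style fancy indexing, so `x[0, 2]` is parsed as indexing with the tuple `(0, 2)`. Two standard ways to pick non-consecutive elements by index:

```python
from operator import itemgetter

x = [(1, 2), (3, 4), (5, 6)]

# comprehension: pick arbitrary indices, result stays a list
picked = [x[i] for i in (0, 2)]

# itemgetter: returns the selected items as a tuple
picked2 = itemgetter(0, 2)(x)
```

If the data were a NumPy array instead of a list, `x[[0, 2]]` would work directly, since ndarrays do support indexing with a sequence of indices.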
<python><list><nested>
2024-07-02 15:20:03
1
431
someguyinafloppyhat
78,697,965
1,422,096
Write a lossless-compressed image with multiple layers
<p>How do I save an image, losslessly compressed, with multiple layers (like a Photoshop PSD, but in an open format), with OpenCV <code>cv2</code> or <code>PIL</code>/<code>Pillow</code>?</p> <p>Note: It seems that the TIFF format can do this (see <a href="https://graphicdesign.stackexchange.com/questions/35424/is-it-possible-to-export-a-layered-png-file-from-photoshop-to-paint-net">this answer</a>), but as I remember, TIFF has a lower compression ratio than PNG.</p> <p>Pseudo code:</p> <pre><code>import cv2 cv2.imwrite('test.tiff', layer1, layer2, layer3) </code></pre>
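With Pillow, one option close to the pseudo code is a multi-page TIFF (pages rather than true compositing layers) written with `save_all`/`append_images` and lossless deflate compression. A sketch, with solid-colour placeholder layers standing in for real image data:

```python
import os
import tempfile
from PIL import Image

# placeholder layers; in practice these would be real images
layers = [
    Image.new("RGB", (64, 64), "red"),
    Image.new("RGB", (64, 64), "green"),
    Image.new("RGB", (64, 64), "blue"),
]

path = os.path.join(tempfile.gettempdir(), "test.tiff")
layers[0].save(
    path,
    save_all=True,                  # write every frame, not just the first
    append_images=layers[1:],       # the remaining pages
    compression="tiff_deflate",     # lossless zlib-based compression
)
```

`tiff_deflate` uses the same zlib algorithm as PNG, so the per-page compression ratio is comparable; readers see the result as a multi-page TIFF, which most tools treat as separate frames rather than blended layers.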
<python><opencv><python-imaging-library><compression><image-compression>
2024-07-02 15:18:07
1
47,388
Basj
78,697,903
2,326,627
MonkeyPatching python submodule
<p>I want to <a href="https://en.wikipedia.org/wiki/Monkey_patch" rel="nofollow noreferrer">monkey patch</a> an external code base to check alternative versions of the same function.</p> <p>At the moment, the project tree looks like this</p> <pre class="lang-bash prettyprint-override"><code>. ├── mymodule │   ├── __init__.py │   └── myA.py └── subProjectsDir └── subProject └── subonemodule ├── __init__.py └── subtwomodule ├── __init__.py ├── subA.py └── subthreemodule ├── __init__.py └── subB.py </code></pre> <p>In order to better reproduce the example on your machine, I uploaded the project in a zip file on my personal cloud storage space (<a href="https://drive.proton.me/urls/8WR1HB54C4#j4PRHd16An1z" rel="nofollow noreferrer">here</a>). However, I'll keep the content of each file here.</p> <h1>Subproject</h1> <p>Let's start from the sub-project I need to fix. I put it inside the <code>subProjectsDir/subProject</code> directory.</p> <h2><code>subonemodule/__init__.py</code></h2> <pre class="lang-py prettyprint-override"><code>from . import subtwomodule </code></pre> <h2><code>subonemodule/subtwomodule/__init__.py</code></h2> <pre class="lang-py prettyprint-override"><code>from .subA import funA </code></pre> <h2><code>subonemodule/subtwomodule/subA.py</code></h2> <pre class="lang-py prettyprint-override"><code>def funA(): print(&quot;submoudle subA -&gt; fun A&quot;) </code></pre> <p><em>This is the function I want to monkey patch</em></p> <h2><code>subonemodule/subtwomodule/subthreemodule/__init__.py</code></h2> <pre class="lang-py prettyprint-override"><code>from .subB import funB </code></pre> <h2><code>subonemodule/subtwomodule/subthreemodule/subB.py</code></h2> <pre class="lang-py prettyprint-override"><code>from ...subtwomodule.subA import funA def funB(): print(&quot;subsubmoudle subB -&gt; fun B&quot;) funA() </code></pre> <p><em>Note that this is the function called by my modules.</em> This function in turn calls the function I need to patch. 
The import chain is, however, relative, which can maybe cause problems.</p> <h1>My Code</h1> <h2><code>mymodule/__init__.py</code></h2> <pre class="lang-py prettyprint-override"><code>import os import sys thisdir = os.path.dirname(os.path.realpath(__file__)) sys.path.append(os.path.join(thisdir, &quot;..&quot;, &quot;subProjectsDir&quot;, &quot;subProject&quot;)) print(f&quot;From mymodule launch: {sys.path}&quot;) </code></pre> <p>The previous code makes the subproject importable from my code.</p> <h2><code>mymodule/myA.py</code></h2> <pre class="lang-py prettyprint-override"><code>from subonemodule import subtwomodule from subonemodule.subtwomodule import subA from subonemodule.subtwomodule.subthreemodule.subB import funB def myFunA(): print(&quot;My A&quot;) def main(): # Monkey patch 1, not working # subA.funA = myFunA # Monkey patch 2, not working subtwomodule.subA.funA = myFunA funB() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>The file contains my two attempts to monkey patch the function <code>funA</code>. To be more clear, the goal is that whenever the function <code>funA</code> is called from any module belonging to the sub-project, such a call is replaced by a call to my function <code>myFunA</code>. However, neither of the two approaches is working.</p> <pre class="lang-bash prettyprint-override"><code>$&gt; python3 -m mymodule.myA sub3module subB -&gt; fun B sub2module subA -&gt; fun A </code></pre> <h1>Question</h1> <p>How should I proceed in this case? I was trying not to touch the external sub-project at all.</p>
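The reason both attempts fail: `subB.py` does `from ...subA import funA`, which copies the `funA` *name* into `subB`'s own namespace at import time. Rebinding `subA.funA` afterwards doesn't change that copy, so the patch must target every module that imported the name — here `subB.funA`. A self-contained demonstration of the mechanism, using synthetic stand-in modules (the real fix would be `subthreemodule.subB.funA = myFunA` in `myA.py`):

```python
import types

# Two toy modules standing in for subA and subB. subB imports the *name*
# funA into its own namespace, just like the relative import does.
subA = types.ModuleType("subA")
exec("def funA():\n    return 'original'", subA.__dict__)

subB = types.ModuleType("subB")
exec("def funB():\n    return funA()", subB.__dict__)
subB.funA = subA.funA          # mimics `from ...subA import funA`

def myFunA():
    return "patched"

subA.funA = myFunA             # patching only the defining module: no effect
before = subB.funB()           # still calls the original

subB.funA = myFunA             # patch where the name is actually looked up
after = subB.funB()            # now calls the replacement
```

This is the same "patch where it's used, not where it's defined" rule that `unittest.mock.patch` documents for its target strings.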
<python><python-3.x><monkeypatching>
2024-07-02 15:07:57
1
1,188
tigerjack
78,697,859
24,191,255
How to solve complex equations numerically in Python?
<p>Originally, I had the following equation:</p> <pre><code>2mgv×sin(\alpha) = CdA×\rho(v^2 + v_{wind}^2 + 2vv_{wind}cos(\phi))^(3/2) </code></pre> <p>which I could express as the following non-linear equation:</p> <pre><code>(K × v)^(2/3) = v^2 + v_{wind}^2 + 2vv_{wind}cos(\phi) </code></pre> <p>To solve this, I need to use a numerical approach.</p> <p>I tried writing code for this in Python using fsolve from scipy.optimize. However, the results I got are not too promising. What else should I try? Should I use a different approach/package, or does my code just need to be improved? I also found that the result is highly dependent on v_initial_guess.</p> <p>Please note that I would consider myself a beginner in programming.</p> <p>I also tried writing code to solve the equation for v using the Newton-Raphson method, but I wasn't too successful.</p> <p>Here is my code:</p> <pre><code>import numpy as np from scipy.optimize import fsolve m = 80 g = 9.81 alpha = np.radians(10) #incline CdA = 0.65 rho = 1.225 v_w = 10 phi = np.radians(30) #wind angle with the direction of motion sin_alpha = np.sin(alpha) cos_phi = np.cos(phi) def equation(v): K = ((m * g * sin_alpha) / ((CdA * rho))**(2/3)) return K * v**(2/3) - v**2 - 2*v*v_w*cos_phi - v_w**2 v_initial_guess = 30 v_solution = fsolve(equation, v_initial_guess, xtol=1e-3) print(&quot;v:&quot;, v_solution[0]) </code></pre> <p>EDIT: This is what my code looks like now:</p> <pre><code> import numpy as np from scipy.optimize import fsolve import matplotlib.pyplot as plt m = 80 g = 9.81 alpha = np.radians(2) # incline CdA = 0.321 rho = 1.22 v_w = 5 phi = np.radians(180) # wind angle with the direction of motion sin_alpha = np.sin(alpha) cos_phi = np.cos(phi) def lhs(v): return m * g * v * sin_alpha def rhs(v): return 0.5 * CdA * rho * (v**2 + v_w**2 + 2*v*v_w*cos_phi)**(3) def difference(v): return lhs(v) - rhs(v) # fsolve to find the intersection v_initial_guess = 8 v_intersection = fsolve(difference, 
v_initial_guess)[0] v_values = np.linspace(0.1, 50, 500) lhs_values = lhs(v_values) rhs_values = rhs(v_values) plt.figure(figsize=(10, 6)) plt.plot(v_values, lhs_values, label='$2mgv\\sin(\\alpha)$', color='blue') plt.plot(v_values, rhs_values, label='$CdA\\rho(v^2 + v_{wind}^2 + 2vv_{wind}\\cos(\\phi))^{3/2}$', color='red') plt.xlabel('Velocity (v)') plt.xlim(0, 20) plt.title('LHS and RHS vs. Velocity') plt.legend() plt.grid(True) plt.ylim(0, 2000) plt.show() print(f&quot;The intersection occurs at v = {v_intersection:.2f} m/s&quot;) P_grav_check = m *g * sin_alpha * v_intersection P_air_check = 0.5 * CdA * rho * (v_intersection ** 2 + v_w ** 2 + 2 * v_intersection * v_w * cos_phi) ** (3) print(P_grav_check) print(P_air_check) </code></pre>
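Since the result is reported to depend strongly on `v_initial_guess`, it may help to note that this equation can have more than one positive root, and a gradient-based solver like `fsolve` lands on different ones (or fails) depending on where it starts. A bracketing method converges whenever the interval straddles a sign change, with no guess involved. A stdlib-only sketch using the constants from the edited code — note it uses the physical `3/2` exponent on the aerodynamic term, since the `**(3)` in the edited code looks like it may be a typo:

```python
import math

# Constants from the question's edited example (alpha = 2 deg, phi = 180 deg).
m, g, CdA, rho, v_w = 80.0, 9.81, 0.321, 1.22, 5.0
sin_alpha = math.sin(math.radians(2))
cos_phi = math.cos(math.radians(180))

def f(v):
    # Power balance: gravity term minus aerodynamic term (3/2 exponent).
    lhs = m * g * v * sin_alpha
    rhs = 0.5 * CdA * rho * (v**2 + v_w**2 + 2*v*v_w*cos_phi) ** 1.5
    return lhs - rhs

def bisect(f, a, b, tol=1e-10):
    # Requires a sign change on [a, b]; converges regardless of any "guess".
    fa = f(a)
    assert fa * f(b) < 0, "bracket must straddle a root"
    while b - a > tol:
        mid = 0.5 * (a + b)
        if fa * f(mid) <= 0:
            b = mid
        else:
            a, fa = mid, f(mid)
    return 0.5 * (a + b)

v_root = bisect(f, 0.1, 1.0)   # this interval happens to bracket one root here
print(v_root, f(v_root))
```

`scipy.optimize.brentq` does the same bracketing idea with faster convergence, so plotting `difference(v)` first (as the edited code already does) and feeding the sign-change interval to a bracketing solver removes the initial-guess sensitivity.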
<python><numerical-methods><equation><equation-solving><fsolve>
2024-07-02 14:59:50
2
606
Márton Horváth
78,697,835
2,748,928
Deepspeed : AttributeError: 'DummyOptim' object has no attribute 'step'
<p>I want to use deepspeed for training LLMs along with Huggingface Trainer. But when I use deepspeed along with trainer I get error &quot;AttributeError: 'DummyOptim' object has no attribute 'step'&quot;. Below is my code</p> <pre><code>import argparse import numpy as np import torch from datasets import load_dataset from transformers import AutoTokenizer, AutoModelForCausalLM from trl import DPOTrainer, DPOConfig def preprocess_data(item): return { 'prompt': 'Instruct: ' + item['prompt'] + '\n', 'chosen': 'Output: ' + item['chosen'], 'rejected': 'Output: ' + item['rejected'] } def main(): parser = argparse.ArgumentParser() parser.add_argument(&quot;--epochs&quot;, type=int, default=1) parser.add_argument(&quot;--beta&quot;, type=float, default=0.1) parser.add_argument(&quot;--batch_size&quot;, type=int, default=4) parser.add_argument(&quot;--lr&quot;, type=float, default=1e-6) parser.add_argument(&quot;--seed&quot;, type=int, default=2003) parser.add_argument(&quot;--model_name&quot;, type=str, default=&quot;EleutherAI/pythia-14m&quot;) parser.add_argument(&quot;--dataset_name&quot;, type=str, default=&quot;jondurbin/truthy-dpo-v0.1&quot;) parser.add_argument(&quot;--local_rank&quot;, type=int, default=0) args = parser.parse_args() # Determine device based on local_rank device = torch.device(&quot;cuda&quot;, args.local_rank) if torch.cuda.is_available() else torch.device(&quot;cpu&quot;) tokenizer = AutoTokenizer.from_pretrained(args.model_name) tokenizer.pad_token = tokenizer.eos_token model = AutoModelForCausalLM.from_pretrained(args.model_name).to(device) ref_model = AutoModelForCausalLM.from_pretrained(args.model_name).to(device) dataset = load_dataset(args.dataset_name, split=&quot;train&quot;) dataset = dataset.map(preprocess_data) # Split the dataset into training and validation sets dataset = dataset.train_test_split(test_size=0.1, seed=args.seed) train_dataset = dataset['train'] val_dataset = dataset['test'] training_args = DPOConfig( 
learning_rate=args.lr, num_train_epochs=args.epochs, per_device_train_batch_size=args.batch_size, logging_steps=10, remove_unused_columns=False, max_length=1024, max_prompt_length=512, deepspeed=&quot;ds_config.json&quot; ) # Verify and print embedding dimensions before finetuning print(&quot;Base model embedding dimension:&quot;, model.config.hidden_size) model.train() ref_model.eval() dpo_trainer = DPOTrainer( model, ref_model, beta=args.beta, train_dataset=train_dataset, eval_dataset=val_dataset, tokenizer=tokenizer, args=training_args, ) dpo_trainer.train() # Evaluate evaluation_results = dpo_trainer.evaluate() print(&quot;Evaluation Results:&quot;, evaluation_results) save_model_name = 'finetuned_model' model.save_pretrained(save_model_name) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>The config file used is the below one</p> <pre><code>{ &quot;zero_optimization&quot;: { &quot;stage&quot;: 3, &quot;offload_optimizer&quot;: { &quot;device&quot;: &quot;cpu&quot;, &quot;pin_memory&quot;: true }, &quot;offload_param&quot;: { &quot;device&quot;: &quot;cpu&quot;, &quot;pin_memory&quot;: true }, &quot;overlap_comm&quot;: true, &quot;contiguous_gradients&quot;: true, &quot;sub_group_size&quot;: 1e9, &quot;reduce_bucket_size&quot;: &quot;auto&quot;, &quot;stage3_prefetch_bucket_size&quot;: &quot;auto&quot;, &quot;stage3_param_persistence_threshold&quot;: &quot;auto&quot;, &quot;stage3_max_live_parameters&quot;: 1e9, &quot;stage3_max_reuse_distance&quot;: 1e9, &quot;stage3_gather_16bit_weights_on_model_save&quot;: true }, &quot;bf16&quot;: { &quot;enabled&quot;: &quot;auto&quot; }, &quot;fp16&quot;: { &quot;enabled&quot;: &quot;auto&quot;, &quot;loss_scale&quot;: 0, &quot;initial_scale_power&quot;: 32, &quot;loss_scale_window&quot;: 1000, &quot;hysteresis&quot;: 2, &quot;min_loss_scale&quot;: 1 }, &quot;gradient_accumulation_steps&quot;: &quot;auto&quot;, &quot;gradient_clipping&quot;: &quot;auto&quot;, &quot;train_batch_size&quot;: &quot;auto&quot;, 
&quot;train_micro_batch_size_per_gpu&quot;: &quot;auto&quot;, &quot;wall_clock_breakdown&quot;: false, &quot;flops_profiler&quot;: { &quot;enabled&quot;: false, &quot;detailed&quot;: false }, &quot;optimizer&quot;: { &quot;type&quot;: &quot;Lamb&quot;, &quot;params&quot;: { &quot;lr&quot;: &quot;auto&quot;, &quot;betas&quot;: [0.9, 0.999], &quot;eps&quot;: &quot;auto&quot;, &quot;weight_decay&quot;: &quot;auto&quot; } }, &quot;zero_allow_untested_optimizer&quot;: true } </code></pre> <p>The code works with out deepspeed. I have torch=2.3.1, deepspeed =0.14.5, trl=0.9.4 and CUDA Version: 12.5.</p> <p>Appreciate any hint on this !</p>
<python><huggingface-transformers><large-language-model><huggingface-trainer><deepspeed>
2024-07-02 14:53:33
1
1,220
Refinath
78,697,726
10,037,886
Systematically turn (numpy) 1d-array of size 1 to scalar
<p>I have been scratching my head over this for the last hour.</p> <p>Imagine I have a function that takes as argument a number, or an array up to dimension 1. I'd like it to return a scalar (not a 0d array) in the former case, and an array of the same shape in the latter case, like ufuncs do.</p> <p>The current implementation of my function does something along the lines of</p> <pre class="lang-py prettyprint-override"><code>def func(x: Real | np.ndarray, arr: np.ndarray): &quot;&quot;&quot;Illustrate an actually more complicated function&quot;&quot;&quot; return arr @ np.sin(arr[:, None] * x) </code></pre> <p>and I know <code>arr</code> is a 1d array. It is promoted to 2d so that there is no broadcast issue with the element-wise multiplication. The issue being that a 1d array is systematically returned. The nicety being that the cases</p> <ul> <li>x scalar and len(arr) == 1;</li> <li>x scalar and len(arr) &gt; 1 ;</li> <li>x array and len(arr) == 1 ;</li> <li>x array and len(arr) &gt;= 1 and (len(x) != len(arr) or len(x) == len(arr))</li> </ul> <p>are covered in a one-liner (the tautological last statement is here for emphasis).</p> <p>I tried</p> <pre class="lang-py prettyprint-override"><code>@np.vectorize def func(x, arr): return arr @ np.sin(arr * x) </code></pre> <p>which consistently returns a 1d array as well, and is probably not the best performance-wise. I looked at <code>functools.singledispatch</code>, which leads to much duplication, and I'd probably forget about some corner cases. A solution would be</p> <pre class="lang-py prettyprint-override"><code>def func(x, arr): res = arr @ np.sin(arr[:, None] * x) if len(res) == 1: return res.item() return res </code></pre> <p>but I have many such functions, and it doesn't feel very pythonic? 
I can write a decorator to write this check,</p> <pre class="lang-py prettyprint-override"><code>def give_me_a_scalar(f): @functools.wraps(f) def wrapper(*args, **kwargs): res = f(*args, **kwargs) if len(res) == 1: return res.item() return res return wrapper </code></pre> <p>seems to be doing what I want, but I struggle to believe nothing like that already exists. Am I missing something simpler?</p>
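A slight variation on the question's own decorator (names are mine) that also survives 0-d results by going through `np.asarray` and checking `size` rather than `len`:

```python
import functools
import numpy as np

def scalar_out(f):
    """Collapse a size-1 array result to a plain Python scalar."""
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        res = np.asarray(f(*args, **kwargs))
        return res.item() if res.size == 1 else res
    return wrapper

@scalar_out
def func(x, arr):
    return arr @ np.sin(arr[:, None] * x)

print(type(func(0.5, np.array([1.0, 2.0]))))                         # scalar x -> scalar out
print(func(np.array([0.1, 0.2, 0.3]), np.array([1.0, 2.0])).shape)  # array x -> array out
```

This keeps the one-liner body of each function unchanged; whether a decorator is "pythonic" enough here is a matter of taste, but there does not appear to be a single NumPy call that does exactly this.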
<python><numpy>
2024-07-02 14:31:58
1
427
Aubergine
78,697,508
13,135,901
Drop duplicates when querying multiple tables in Django
<p>I have a custom manager with search, which orders return results by rank:</p> <pre><code>class MyManager(models.Manager): def search(self, query, fa, fb=None, fc=None, fd=None, qs=None): if not qs: qs = self.get_queryset() try: if not (1 in [c in query for c in '&amp;|()!*:&quot;']): query = &quot; &amp; &quot;.join([f&quot;{q}:*&quot; for q in query.split()]) vector = SearchVector(*fa, weight=&quot;A&quot;, config=&quot;english&quot;) if fb: vector += SearchVector(*fb, weight=&quot;B&quot;, config=&quot;english&quot;) if fc: vector += SearchVector(*fc, weight=&quot;C&quot;, config=&quot;english&quot;) if fd: vector += SearchVector(*fd, weight=&quot;D&quot;, config=&quot;english&quot;) query = SearchQuery(query, search_type=&quot;raw&quot;) qs = ( qs.annotate(search=vector, rank=SearchRank(vector, query)) .filter(search=query) .order_by(&quot;-rank&quot;, &quot;-id&quot;) .distinct(&quot;rank&quot;, &quot;id&quot;) ) qs.count() # Trigger exception except (ProgrammingError, UnboundLocalError): qs = qs.none() return qs </code></pre> <p>But when I try searching on related fields, it still returns duplicate results:</p> <pre><code>class Case(models.Model): machine = models.ForeignKey(Machine, on_delete=models.CASCADE) user = models.ForeignKey(Profile, on_delete=models.SET_NULL) hashtags = models.CharField(max_length=255) closed = models.BooleanField(default=False) objects = MyManager() class CaseProgress(models.Model): case = models.ForeignKey(Case, on_delete=models.CASCADE) desc = models.TextField() class Profile(models.Model): user = models.OneToOneField(User, on_delete=models.SET_NULL, null=True) </code></pre> <pre><code>class CaseListView(ListView): model = Case def get_queryset(self): query = self.request.GET.get(&quot;query&quot;, None) show_closed = ( True if self.request.GET.get(&quot;show_closed&quot;, False) == &quot;true&quot; else False ) if query: if not show_closed: qs = self.model.objects.filter(closed=False) else: qs = self.model.objects.all() fa = ( 
&quot;id&quot;, &quot;machine__serial_number&quot;, &quot;machine__company__name&quot;, &quot;user__user__first_name&quot;, &quot;user__user__last_name&quot;, ) fb = (&quot;hashtags&quot;,) fc = (&quot;caseprogress__desc&quot;,) qs = self.model.objects.search( query, fa, fb, fc, qs=qs ) return qs </code></pre> <p>Now, when I run a search and the query can be found in different tables, it returns both results, even though I have <code>id</code> in <code>distinct</code>. Why is that? I've read <a href="https://docs.djangoproject.com/en/5.0/ref/models/querysets/#distinct" rel="nofollow noreferrer">Django's manual</a> on this subject, but it doesn't seem to apply in my case, since I am not ordering on the related fields.</p>
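For context on where duplicates like this typically come from: each related-field lookup (such as `caseprogress__desc`) adds a SQL JOIN, and the parent row is repeated once per matching child row. The annotated `rank` is then likely computed per joined row and can differ between the repeats, so `distinct('rank', 'id')` keeps both copies. The join effect itself is easy to reproduce with plain SQL (stdlib `sqlite3`; table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE case_t (id INTEGER PRIMARY KEY, hashtags TEXT);
    CREATE TABLE caseprogress (id INTEGER PRIMARY KEY, case_id INTEGER, desc_t TEXT);
    INSERT INTO case_t VALUES (1, 'pump leak');
    -- two progress rows that both match the search term
    INSERT INTO caseprogress VALUES (10, 1, 'leak found'), (11, 1, 'leak fixed');
""")

# The join repeats the parent row once per matching child row.
rows = con.execute("""
    SELECT case_t.id FROM case_t
    JOIN caseprogress ON caseprogress.case_id = case_t.id
    WHERE caseprogress.desc_t LIKE '%leak%'
""").fetchall()
print(rows)      # duplicated parent: [(1,), (1,)]

# DISTINCT over just the id collapses them again.
dedup = con.execute("""
    SELECT DISTINCT case_t.id FROM case_t
    JOIN caseprogress ON caseprogress.case_id = case_t.id
    WHERE caseprogress.desc_t LIKE '%leak%'
""").fetchall()
print(dedup)     # [(1,)]
```

This suggests checking whether `rank` really is identical across the joined rows in the failing queries; if it is not, `distinct('rank', 'id')` cannot deduplicate them.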
<python><django>
2024-07-02 13:52:55
1
491
Viktor
78,697,411
4,302,701
How to kill all the threads in ThreadPools
<p>How can I kill all the threads in a thread pool (<code>ThreadPoolExecutor</code>) when one thread finishes?</p>
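CPython has no way to force-kill a running thread, and `ThreadPoolExecutor` exposes no kill switch; `Future.cancel()` only stops tasks that have not started yet. The usual pattern is cooperative cancellation: share a `threading.Event`, set it when the first future completes, and have every worker check it. A sketch:

```python
import time
import threading
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

stop = threading.Event()

def worker(n):
    # Cooperatively check the flag instead of relying on being "killed".
    for _ in range(10 * n):
        if stop.is_set():
            return f"worker {n} cancelled"
        time.sleep(0.01)
    return f"worker {n} finished"

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(worker, n) for n in range(1, 5)]
    done, pending = wait(futures, return_when=FIRST_COMPLETED)
    stop.set()                      # ask every other worker to stop
    results = [f.result() for f in futures]

print(results)
```

Workers that never check the flag (for example, ones blocked in a long system call) cannot be interrupted this way; for truly killable workers a `ProcessPoolExecutor` with process termination is the usual alternative.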
<python><python-3.x><threadpool>
2024-07-02 13:32:11
1
353
rao
78,697,317
1,044,422
How to correctly replace an element in a pd.Dataframe with a list?
<p>I have a simple <code>pandas</code> Dataframe like this:</p> <pre class="lang-py prettyprint-override"><code>test_df = pd.DataFrame( { 0: [ &quot;Setup&quot;, &quot;Dissection&quot;, &quot;Specimen Removal&quot;, &quot;Closure&quot;, ] } ) </code></pre> <p>I want to replace the values in column 0 with colour values, like this:</p> <pre class="lang-py prettyprint-override"><code>colours = { &quot;Setup&quot;: [0.12156862745098039, 0.4666666666666667, 0.7058823529411765, 1.0], &quot;Dissection&quot;: [0.8392156862745098, 0.15294117647058825, 0.1568627450980392, 1.0], &quot;Specimen Removal&quot;: [ 0.9686274509803922, 0.7137254901960784, 0.8235294117647058, 1.0, ], &quot;Closure&quot;: [0.6196078431372549, 0.8549019607843137, 0.8980392156862745, 1.0], } test_df.replace(colours) </code></pre> <p>But running this gives me an error:</p> <pre class="lang-bash prettyprint-override"><code>File ~/.venv/fastai/lib/python3.11/site-packages/pandas/core/internals/blocks.py:729, in Block.replace(self, to_replace, value, inplace, mask, using_cow) 725 elif self._can_hold_element(value): 726 # TODO(CoW): Maybe split here as well into columns where mask has True 727 # and rest? 728 blk = self._maybe_copy(using_cow, inplace) --&gt; 729 putmask_inplace(blk.values, mask, value) 730 if not (self.is_object and value is None): 731 # if the user *explicitly* gave None, we keep None, otherwise 732 # may downcast to NaN 733 blocks = blk.convert(copy=False, using_cow=using_cow) File ~/.venv/fastai/lib/python3.11/site-packages/pandas/core/array_algos/putmask.py:56, in putmask_inplace(values, mask, value) 54 values[mask] = value[mask] 55 else: ---&gt; 56 values[mask] = value 57 else: 58 # GH#37833 np.putmask is more performant than __setitem__ 59 np.putmask(values, mask, value) ValueError: NumPy boolean array indexing assignment cannot assign 4 input values to the 1 output values where the mask is true </code></pre> <p>What am I doing wrong?</p>
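The `ValueError` in the question arises because `replace` tries to broadcast each 4-element RGBA list into a single matched cell. One common workaround (a sketch; the colour values are shortened for brevity) is `Series.map`, which looks cells up one at a time and therefore stores each list as a single object:

```python
import pandas as pd

test_df = pd.DataFrame({0: ["Setup", "Dissection", "Specimen Removal", "Closure"]})
colours = {
    "Setup": [0.12, 0.47, 0.71, 1.0],
    "Dissection": [0.84, 0.15, 0.16, 1.0],
    "Specimen Removal": [0.97, 0.71, 0.82, 1.0],
    "Closure": [0.62, 0.85, 0.90, 1.0],
}

# map() assigns each RGBA list as one Python object instead of trying to
# broadcast its four elements into the matched cell.
test_df[0] = test_df[0].map(colours)
print(test_df[0].iloc[0])   # [0.12, 0.47, 0.71, 1.0]
```

Tuples (`test_df[0].map(lambda k: tuple(colours[k]))`) are often safer still, since pandas never attempts to expand them.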
<python><pandas>
2024-07-02 13:15:24
2
1,845
K G
78,697,300
8,284,452
How to preprocess dataset to assign coordinates before opening with xarray.open_mfdataset?
<p>The xarray documentation for the <code>open_mfdataset</code> function states that you can use the <code>preprocess</code> argument to apply a function to each dataset before concatenation. The NetCDF datasets I have do not have coordinates assigned when you open them one-by-one, so I was attempting to assign them before concatenation with <code>combine='by_coords'</code> in the <code>open_mfdataset</code> function.</p> <p>This is what a single one of the datasets looks like if you open it:</p> <pre class="lang-py prettyprint-override"><code>path = 'path/to/my/file/file.nc' ds = xr.open_dataset(path, decode_times=False) ds # &lt;xarray.Dataset&gt; Size: 1GB # Dimensions: (comid: 2677612, time_mn: 120, time_yr: 10) # Dimensions without coordinates: comid, time_mn, time_yr #Data variables: # COMID (comid) int32 11MB ... # Time_mn (time_mn) int32 480B ... # Time_yr (time_yr) int32 40B ... # RAPID_mn_cfs (comid, time_mn) float32 1GB ... # RAPID_yr_cfs (comid, time_yr) float32 107MB ... </code></pre> <p>To use <code>open_mfdataset</code> my code looks like this. 
The <code>assignCoordinates</code> function works as intended, but it still fails to open the datasets.</p> <pre class="lang-py prettyprint-override"><code>def assignCoordinates(df): df = df.assign_coords({ &quot;comid&quot;: df['COMID'], &quot;time_mn&quot;: fd.calcDatetimes(df, 'Time_mn', df.sizes['time_mn']), #this just calculates datetimes for the weird time units used in these files, the function works properly &quot;time_yr&quot;: fd.calcDatetimes(df, 'Time_yr', df.sizes['time_yr']) }) return df path = &quot;path/to/files/*.nc&quot; ds = xr.open_mfdataset(path, preprocess=assignCoordinates, combine='by_coords', decode_times=False) ds </code></pre> <p>This is the error I receive:</p> <pre><code>ValueError: Could not find any dimension coordinates to use to order the datasets for concatenation </code></pre> <p>I assume the preprocessed files are not actually being used by <code>open_mfdataset</code>, but then I don't really understand what the point of that argument is. My suspicion that it doesn't use the preprocessed datasets is further reinforced by the fact that if it were, I should be able to remove <code>decode_times=False</code>, because the times are now calculated in a way that makes sense and could be decoded after running through the <code>assignCoordinates</code> function; but if I remove it, I get an error about the times being unable to be decoded.</p> <p>Is there a way to do what I want, or do I really have to open each dataset individually?</p> <p><strong>Minimum Reproducible Example</strong></p> <p>Copy this code, and fill out the export path. 
This will create three <code>.nc</code> files in the directory you specify.</p> <pre class="lang-py prettyprint-override"><code>import xarray as xr import numpy as np import pandas as pd np.random.seed(0) temperature = 15 + 8 * np.random.randn(2, 3, 4) precipitation = 10 * np.random.rand(2, 3, 4) lon = [-99.83, -99.32] lat = [42.25, 42.21] instruments = [&quot;manufac1&quot;, &quot;manufac2&quot;, &quot;manufac3&quot;] time = pd.date_range(&quot;2014-09-06&quot;, periods=4) reference_time = pd.Timestamp(&quot;2014-09-05&quot;) ds = xr.Dataset( data_vars=dict( temperature=([&quot;loc&quot;, &quot;instrument&quot;, &quot;time&quot;], temperature), precipitation=([&quot;loc&quot;, &quot;instrument&quot;, &quot;time&quot;], precipitation), ), attrs=dict(description=&quot;Weather related data.&quot;), ) for i in range(1,4): ds.to_netcdf(f'yourdirectory/test{i}.nc') #### EDIT HERE ##### </code></pre> <p>After doing the above, run this code (remember to alter the directory to where you saved the files created above):</p> <pre class="lang-py prettyprint-override"><code>def assignCoordinates(df): df = df.assign_coords({ &quot;loc&quot;: df['loc'], &quot;instrument&quot;: df['instrument'], &quot;time&quot;: df['time'] }) return df ds = xr.open_mfdataset('yourdirectory/*.nc', preprocess=assignCoordinates, combine='by_coords') #### EDIT HERE ##### ds </code></pre>
<python><python-xarray><netcdf>
2024-07-02 13:11:18
1
686
MKF
78,697,270
4,451,315
Get categories of arrow chunkedarray
<p>If I have a PyArrow chunkedarray and want to know all its categories, I can go through each individual array, get the categories from there, and find the union:</p> <pre><code>import pyarrow as pa import functools # setup one = pa.array([&quot;a&quot;, &quot;b&quot;, None, &quot;d&quot;], pa.dictionary(pa.int64(), pa.utf8())) two = pa.array([&quot;a&quot;, &quot;b&quot;, None, &quot;d&quot;, 'e'], pa.dictionary(pa.int64(), pa.utf8())) ca = pa.chunked_array([one, two]) # operation: cats = functools.reduce(lambda x, y: x | y, (set(x.dictionary) for x in ca.chunks)) out = pa.chunked_array([list(cats)]) </code></pre> <p>This works, but feels quite convoluted - is there a better way?</p>
<python><pyarrow>
2024-07-02 13:04:42
1
11,062
ignoring_gravity
78,697,219
3,007,075
How to apply rolling_map on polars and create two columns?
<p>See the code below; it's a stand-in for what I need. ChatGPT and perplexity aren't helpful here. In practice I can compute coef1 and coef2 one at a time, but it is unnecessarily slow.</p> <pre><code>import numpy as np import polars as pl def _compute_coef1(series): y = series[1:] x = series[:-1] mean_x = x.mean() mean_y = y.mean() x_centered = x - mean_x y_centered = y - mean_y den = (x_centered * x_centered).sum() or np.nan coef1 = (x_centered * y_centered).sum() / den return coef1 def _compute_coef2(series): y = series[1:] x = series[:-1] mean_x = x.mean() mean_y = y.mean() x_centered = x - mean_x y_centered = y - mean_y den = (x_centered * x_centered).sum() or np.nan coef1 = (x_centered * y_centered).sum() / den coef2 = mean_y - coef1 * mean_x return coef2 # Apply the rolling computation of AR(1) coefficients and coef2 expr1 = ( pl.col(&quot;input_col&quot;) .rolling_map(_compute_coef1, window_size=10, min_periods=3) .alias(&quot;coef1&quot;) ) expr2 = ( pl.col(&quot;input_col&quot;) .rolling_map(_compute_coef2, window_size=10, min_periods=3) .alias(&quot;coef2&quot;) ) # Testing: df = pl.DataFrame({&quot;input_col&quot;: [1, 1, 2, 2]}) df = df.with_columns(expr1).with_columns(expr2) print(df) </code></pre> <p>While the code above runs and works, it is unnecessarily slow. The real version I use is unbearably slow and uses too much memory.</p>
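If exact `rolling_map` semantics are not required, the per-window Python callbacks (which dominate the cost here) can be replaced by one vectorized pass that produces both coefficients at once, with the results attached back to the frame afterwards. A NumPy sketch that computes full windows only, so it ignores the `min_periods=3` warm-up:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def rolling_ar1(values, window):
    # w[i] holds values[i : i + window]; no per-window Python callback.
    w = sliding_window_view(np.asarray(values, dtype=float), window)
    x, y = w[:, :-1], w[:, 1:]                   # lagged pairs in each window
    xc = x - x.mean(axis=1, keepdims=True)
    yc = y - y.mean(axis=1, keepdims=True)
    den = (xc * xc).sum(axis=1)
    den[den == 0] = np.nan                       # mirror the "or np.nan" guard
    coef1 = (xc * yc).sum(axis=1) / den
    coef2 = y.mean(axis=1) - coef1 * x.mean(axis=1)
    return coef1, coef2

vals = [1.0, 2.0, 1.5, 3.0, 2.5, 4.0]
c1, c2 = rolling_ar1(vals, window=4)
print(c1, c2)   # one (coef1, coef2) pair per full window
```

`sliding_window_view` is a view, so memory stays proportional to the input; the intermediate centred arrays are the only full-size temporaries.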
<python><python-polars>
2024-07-02 12:57:00
1
1,166
Mefitico
78,697,191
5,431,734
Tkinter GUI doesn't open from Crontab
<p>I have a Raspberry Pi 4 Model B, and I am trying to start a very basic Tkinter GUI on reboot.</p> <pre><code>#!/home/pi/miniforge3/envs/myenv/bin/python import tkinter as tk def tk_gui(): root = tk.Tk() root.geometry(&quot;200x200&quot;) label = tk.Label(root, text=&quot;Hello&quot;) label.pack(pady=20) root.mainloop() if __name__ == &quot;__main__&quot;: tk_gui() </code></pre> <p>I saved the above in a file called <code>tk_basic.py</code> and also changed the permissions with <code>chmod +x tk_basic.py</code>. When I call it manually, the GUI works.</p> <p>I want to edit my Crontab so the GUI opens up on reboot.</p> <pre><code>@reboot XAUTHORITY=/home/pi/.Xauthority DISPLAY=:0 /home/pi/path/to/folder/tk_basic.py </code></pre> <p>But this doesn't work for me. My <code>DISPLAY</code> looks to be :0, since the command <code>ps aux | grep -E &quot;Xorg|PID&quot;</code> returns:</p> <pre><code>USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND pi 2047 0.0 0.0 6088 1920 pts/0 S+ 13:01 0:00 grep -E --color=auto Xorg|PID </code></pre> <p>Even when I change cron to call my python executable like this:</p> <pre><code>@reboot XAUTHORITY=/home/pi/.Xauthority DISPLAY=:0 /home/pi/miniforge3/envs/myenv/bin/python /home/pi/path/to/folder/tk_basic.py </code></pre> <p>or even</p> <pre><code> @reboot /home/pi/miniforge3/envs/myenv/bin/python /home/pi/path/to/folder/tk_basic.py </code></pre> <p>it doesn't make any difference; the GUI doesn't show up. I know there are similar posts like <a href="https://stackoverflow.com/a/50915743/5431734">this</a> or <a href="https://stackoverflow.com/a/72853390/5431734">that</a>, but I couldn't make it work. Any ideas what I am missing?</p> <p>To summarise, this is what my crontab looks like (none of these works). I even tried <code>DISPLAY=:1</code> to no avail. 
Other tasks in the crontab further below these lines are called successfully.</p> <pre><code>@reboot XAUTHORITY=/home/pi/.Xauthority DISPLAY=:0 /home/pi/miniforge3/envs/myenv/bin/python /home/pi/path/to/folder/tk_basic.py @reboot XAUTHORITY=/home/pi/.Xauthority DISPLAY=:0 /home/pi/path/to/folder/tk_basic.py @reboot /home/pi/miniforge3/envs/myenv/bin/python /home/pi/dev/path/to/folder/tk_basic.py @reboot /home/pi/path/to/folder/tk_basic.py </code></pre>
<python><tkinter><cron><raspberry-pi4>
2024-07-02 12:52:23
0
3,725
Aenaon
78,697,177
1,826,893
How to get arguments from ArgParser to write to a textfile so it can be read with `fromfile_prefix_chars`
<p>I want a script to save all arguments (whether provided, populated from defaults, or read from a text file) to a text file in such a format that I can re-run the script using <code>fromfile_prefix_chars</code> with that file. This is to allow for easy reproducibility and to store the arguments that were used to generate certain data.</p> <p>I have a very brittle and incomplete implementation. I'm sure there is a better way but have not found it via google.</p> <p><strong>Q: How can I capture a) all arguments with b) clean and concise code?</strong></p> <h2>Context</h2> <p><code>ArgumentParser</code> has a feature that allows the reading of arguments from a text file (<code>fromfile_prefix_chars</code>). There seems little point in reinventing the wheel to read e.g. a JSON file. While the reading of a text file is part of the <code>ArgumentParser</code> feature set, writing that file seems not to be. I am looking for an elegant solution to convert an <code>args: Namespace</code> and a <code>parser: ArgumentParser</code> into a text file (containing all arguments, not only those passed in at the command line) that can be read using <code>ArgumentParser</code>.</p> <h1>Workflow Example</h1> <ol> <li>Run the script: <code>python script.py --count 7 --channel-stdev 17 /somepath</code>. This should produce a text file called <code>args_out.txt</code> with the content</li> </ol> <pre><code>--count 7 --channel-mean 4.5 --channel-stdev 17 /somepath </code></pre> <ol start="2"> <li>I can re-run the script using <code>python script.py @args_out.txt</code></li> </ol> <h2>Minimal example</h2> <pre class="lang-py prettyprint-override"><code>from pathlib import Path import logging import argparse from argparse import Namespace, ArgumentParser from typing import Optional, List, Tuple def get_arg_parser() -&gt; ArgumentParser: parser = ArgumentParser(description='Test', fromfile_prefix_chars='@') parser.add_argument('--count', default=1, type=int, help='Number of ') 
parser.add_argument('--channel-mean', default=4.5, type=float, help='Mean number ') parser.add_argument('--channel-stdev', default=3, type=float, help='Standard deviation ') # Logging control log_group = parser.add_mutually_exclusive_group() log_group.add_argument('-v', '--debug', action='store_true', help='Enable debug mode') log_group.add_argument('-q', '--quiet', action='store_true', help='Enable quiet mode') # I/O control parser.add_argument('input', nargs='+', help='Path to a single') return parser def parse_args(arg_list: Optional[List[str]] = None) -&gt; Namespace: parser = get_arg_parser() return parser.parse_args(arg_list) def convert_args_to_text(args: Namespace, parser: ArgumentParser) -&gt; str: &quot;&quot;&quot; Convert an argparse.Namespace object to a list of strings that can be written to a file and then read as arguments again. &quot;&quot;&quot; required, optionals, optionals_store_true = get_arg_name_to_argument_dicts(parser) text = '' for k, v in vars(args).items(): k_dash = k.replace('_', '-') if k_dash in optionals: text = add_key_and_value(text, key=k_dash, value=v) elif k_dash in optionals_store_true: if v: text = add_key_and_value(text, key=v, value=None) elif k_dash in required: text = add_key_and_value(text, key=None, value=v) else: logging.warning(f&quot;skipping argument {k}&quot;) return text def get_arg_name_to_argument_dicts(parser: ArgumentParser) -&gt; Tuple[dict, dict, dict]: &quot;&quot;&quot; Here I try to extract the argument names and whether they are optional or required from the ArgParser This is very brittle FIXME &quot;&quot;&quot; optionals = {} optionals_store_true = {} # These are only added if they are true required = {} for optk, optv in parser._optionals._option_string_actions.items(): if not optk.startswith('--'): # Skip short options continue if isinstance(optv, argparse._StoreAction): optionals[optk.strip('--')] = optv elif isinstance(optv, argparse._StoreTrueAction): optionals_store_true[optk.strip('--')] = optv 
for req_group_actions in parser._positionals._group_actions: name = req_group_actions.dest required[name] = req_group_actions return required, optionals, optionals_store_true def add_key_and_value(text: str, key: Optional[str], value: Optional[Tuple[str, List]]) -&gt; str: &quot;&quot;&quot; Add a argument key and value to a text block. &quot;&quot;&quot; if key is not None: text += f&quot;--{key}\n&quot; if value is None: return text if isinstance(value, list): for vi in value: text += f'{vi}\n' else: text += f'{value}\n' return text def main(): args = parse_args() print(args) text = convert_args_to_text(args, get_arg_parser()) with open('args_out.txt', 'w') as f: f.write(text) if __name__ == '__main__': main() </code></pre> <p>Thank you for your help</p>
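For the generic serialization itself, iterating `parser._actions` (a private attribute, but the question's code already reaches into `parser._optionals`, and `_actions` covers optionals, flags, and positionals in one list) avoids maintaining the separate per-type dictionaries. A trimmed, hypothetical sketch that round-trips through the `@`-file format:

```python
import argparse

def args_to_atfile_text(parser, args):
    """Serialize a parsed Namespace back into @-file lines (one token per line)."""
    lines = []
    for action in parser._actions:
        if isinstance(action, argparse._HelpAction):
            continue                              # never serialize --help
        value = getattr(args, action.dest, None)
        if not action.option_strings:             # positional (possibly nargs='+')
            values = value if isinstance(value, list) else [value]
            lines += [str(v) for v in values]
        elif isinstance(action, argparse._StoreTrueAction):
            if value:                             # flags appear only when set
                lines.append(action.option_strings[-1])
        else:                                     # plain "--opt value"
            lines.append(action.option_strings[-1])
            lines.append(str(value))
    return "\n".join(lines) + "\n"

parser = argparse.ArgumentParser(fromfile_prefix_chars="@")
parser.add_argument("--count", default=1, type=int)
parser.add_argument("-v", "--debug", action="store_true")
parser.add_argument("input", nargs="+")

args = parser.parse_args(["--count", "7", "/somepath"])
text = args_to_atfile_text(parser, args)
print(text)

# Round-trip check: feeding the lines back reproduces the same namespace.
assert parser.parse_args(text.split()) == args
```

This handles `store`, `store_true`, and positionals; other action types (`append`, `count`, subparsers) would need extra branches, which is presumably why argparse itself does not ship a writer.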
<python><python-3.x><argparse>
2024-07-02 12:50:42
2
1,559
Edgar H
78,697,154
2,873,337
Django is randomly ignoring certain days when I try to bulk_create. How do I solve this? How does this even happen?
<p>How does Django randomly skip days when trying to <code>bulk_create</code>? The dataset is a CSV exported via a third-party tool from an MSSQL Server.</p> <p>Exhibit 1 - Data from my database:</p> <p><a href="https://i.sstatic.net/Ddjz7O74.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ddjz7O74.png" alt="enter image description here" /></a></p> <p>Note that the whole of 2024-06-13 is missing.</p> <p>Exhibit 2 - Data dump from my dataframe:</p> <p><a href="https://i.sstatic.net/vTnT9pho.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vTnT9pho.png" alt="enter image description here" /></a></p> <p>When I look at my pandas dataframe, there is data for 2024-06-13. So reading the CSV and parsing the date works.</p> <ol> <li><p>At first, I thought the issue was using too much memory to <code>bulk_create</code>, so I tried chunking. The problem still remained. But if it was a memory problem, then it wouldn't so cleanly eliminate that day without affecting the other days around it. The session start/stop times correspond with when the shop opens and closes on the 12th and 14th.</p> </li> <li><p>It's not the only day that randomly disappeared. There are other days before this as well that have vanished. Also, the last possible import date was 2024-06-24. After that, it won't import any more sessions that exist in my dataframe. 
I tried both SQLite and Postgres to no avail, in case it was a database issue.</p> </li> </ol> <p>This is how it's imported from my dataframe via the Django ORM:</p> <pre><code>sessions = [Session(**row) for row in df.to_dict(orient='records')] objs = Session.objects.bulk_create( sessions, batch_size=900, # update_conflicts=True, # unique_fields=['session_number'], # update_fields=['tax_invoice_number'] ) </code></pre> <p>I removed <code>update_conflicts</code> to allow it to throw an <code>Error</code> if there were conflicting keys, but it didn't.</p> <p>For reference, the screenshot above shows that the data is in my dataframe when I dump it out to Google Sheets.</p> <p>Does anyone have any idea why some days just don't get written to the database? I am using Django 5.0.6.</p> <h2>Update</h2> <p>If I import only that ONE CSV file, it works fine. If I iterate through all the files, one per month since 2023-01, committing via <code>bulk_create</code> once per file, then I get the issue.</p> <p>I doubt it's a pk issue, as I allowed <code>bulk_create</code> to throw an error on duplicate pk, but I also manually checked whether the pks from the omitted records were in the system.</p>
<python><django><pandas>
2024-07-02 12:46:47
0
651
chi11ax
78,697,067
10,590,609
Airflow decorated task type hinting
<p>Take this simple dag, in which one task takes the output of another:</p> <pre class="lang-py prettyprint-override"><code>import datetime from airflow.decorators import dag, task @task def task_that_returns_a_string() -&gt; str: return &quot;a string&quot; @task def task_that_takes_a_string(arg: str) -&gt; None: print(arg) @dag( schedule_interval=&quot;@weekly&quot;, start_date=datetime.datetime(2022, 1, 1), ) def my_dag(): task_that_takes_a_string(task_that_returns_a_string()) my_dag() </code></pre> <p>This dag runs fine, but my editor is complaining about the line <code>task_that_takes_a_string(task_that_returns_a_string())</code>:</p> <pre><code>Argument of type &quot;XComArg&quot; cannot be assigned to parameter &quot;arg&quot; of type &quot;str&quot; &quot;XComArg&quot; is incompatible with &quot;str&quot;PylancereportArgumentType </code></pre> <p>I could of course disable type checking on that line, but then I would miss out on valuable feedback. So what should I change so that I get correct type-hinting feedback?</p>
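One low-tech way to keep the checker quiet at just this call site, without a blanket ignore, is `typing.cast`; whether newer Airflow versions ship type stubs that make the decorated task's return type line up on its own is version-dependent, so this is only a workaround sketch (Airflow-free stand-ins below):

```python
from typing import cast

def returns_xcomarg() -> object:
    """Stand-in for a decorated task call, whose static type isn't ``str``."""
    return "a string"

def takes_a_string(arg: str) -> None:
    print(arg)

# cast() is a no-op at runtime; it only tells the type checker "treat this
# value as a str", which is what the XCom reference resolves to at execution.
takes_a_string(cast(str, returns_xcomarg()))
```

In the question's DAG this would read `task_that_takes_a_string(cast(str, task_that_returns_a_string()))`, keeping type checking intact everywhere else.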
<python><airflow><python-typing><pyright><airflow-taskflow>
2024-07-02 12:32:22
1
332
Izaak Cornelis
78,697,037
5,462,372
Unable to install plotly-resampler==0.8.1
<p>I am facing the following error while installing <code>plotly-resampler==0.8.1</code> from a downloaded whl file on a server with RHEL 8.9 OS.</p> <p>I am firing the following command:</p> <pre><code>pip3 install -v plotly-resampler==0.8.1 --no-index --find-links=./mssuser_package </code></pre> <p>Inside the <code>mssuser_package</code> folder, I have the wheel file downloaded from the PyPI site: <br /> <em>plotly_resampler-0.8.1-cp38-cp38-manylinux_2_31_x86_64.whl</em></p> <p>My python version is 3.8.10.</p> <p>On installing it gives the following error:</p> <pre><code>Given no hashes to check 0 links for project 'plotly-resampler': discarding no candidates ERROR: Could not find a version that satisfies the requirement plotly-resampler==0.8.1 (from versions: none) ERROR: No matching distribution found for plotly-resampler==0.8.1 Exception information: Traceback (most recent call last): File &quot;/u01/mssuser/Python-3.8.10/Python/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 341, in resolve name, crit = self._merge_into_criterion(r, parent=None) File &quot;/u01/mssuser/Python-3.8.10/Python/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 173, in _merge_into_criterion raise RequirementsConflicted(criterion) pip._vendor.resolvelib.resolvers.RequirementsConflicted: Requirements conflict: SpecifierRequirement('plotly-resampler==0.8.1') During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/u01/mssuser/Python-3.8.10/Python/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/resolver.py&quot;, line 127, in resolve result = self._result = resolver.resolve( File &quot;/u01/mssuser/Python-3.8.10/Python/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 473, in resolve state = resolution.resolve(requirements, max_rounds=max_rounds) File 
&quot;/u01/mssuser/Python-3.8.10/Python/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 343, in resolve raise ResolutionImpossible(e.criterion.information) pip._vendor.resolvelib.resolvers.ResolutionImpossible: [RequirementInformation(requirement=SpecifierRequirement('plotly-resampler==0.8.1'), parent=None)] The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/u01/mssuser/Python-3.8.10/Python/lib/python3.8/site-packages/pip/_internal/cli/base_command.py&quot;, line 180, in _main status = self.run(options, args) File &quot;/u01/mssuser/Python-3.8.10/Python/lib/python3.8/site-packages/pip/_internal/cli/req_command.py&quot;, line 204, in wrapper return func(self, options, args) File &quot;/u01/mssuser/Python-3.8.10/Python/lib/python3.8/site-packages/pip/_internal/commands/install.py&quot;, line 318, in run requirement_set = resolver.resolve( File &quot;/u01/mssuser/Python-3.8.10/Python/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/resolver.py&quot;, line 136, in resolve raise error from e pip._internal.exceptions.DistributionNotFound: No matching distribution found for plotly-resampler==0.8.1 Removed build tracker: '/tmp/pip-req-tracker-z987pb84' </code></pre> <p>I don't know where I am going wrong. Please help me resolve this issue.</p>
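One thing worth checking (my suggestion, hedged, not a confirmed diagnosis): the wheel carries a <em>manylinux_2_31</em> tag, which requires glibc ≥ 2.31, while RHEL 8 ships, to my knowledge, glibc 2.28. If so, pip is correctly rejecting the wheel as incompatible with the platform rather than failing to see it. The interpreter's C-library version can be inspected like this:

```python
import platform

# (libc_name, version) of the C library the interpreter runs against,
# e.g. ('glibc', '2.28'); a manylinux_2_31 wheel needs version >= 2.31
libc, version = platform.libc_ver()
print(libc, version)
```

If the glibc version is below 2.31, the fix would be a wheel with an older manylinux tag (or building from source), not a different pip invocation.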
<python><plotly-resampler>
2024-07-02 12:27:14
2
3,683
MSS
78,697,015
11,628,437
PyCharm won't let me import from folders cloned from github
<p>I am experiencing a strange error that I don't know how to characterize.</p> <p>I am trying to import a class from a subfolder, cloned from <code>GitHub</code>. But when I do <code>from Awesome_Python_Games import class_to_import</code>, it says <code>ModuleNotFoundError: No module named 'Awesome_Python_Games'</code></p> <p>So, I tried to let PyCharm do the heavy lifting. Therefore, I placed the file in another folder I created in the home directory <code>fold_1.fold_2</code> and it was detected. When I manually moved it into <code>Awesome_Python_Games</code>, I was expecting PyCharm to refactor the import statement. This time, however, the refactor process involved deleting the import statement!</p> <p>The folder <code>Awesome_Python_Games</code> was a git clone from here - <a href="https://github.com/OSSpk/Awesome-Python-Games" rel="nofollow noreferrer">https://github.com/OSSpk/Awesome-Python-Games</a></p> <p>Here are some screenshots -</p> <p>This gives an error - <a href="https://i.sstatic.net/UmGTRPiE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmGTRPiE.png" alt="enter image description here" /></a></p> <p>This doesn't -</p> <p><a href="https://i.sstatic.net/JfjKDK82.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfjKDK82.png" alt="enter image description here" /></a></p> <p>I found a similar post <a href="https://stackoverflow.com/questions/28705029/pycharm-error-no-module-when-trying-to-import-own-module-python-script">here</a>, but even after following the instructions in the top-rated answer I couldn't get it to work.</p>
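A hedged workaround sketch (paths here are illustrative): Python only imports from directories on `sys.path`, so if the cloned folder's parent is not a source root, adding it manually makes the import resolvable regardless of PyCharm's settings. Note also that a folder keeping the GitHub name `Awesome-Python-Games` (with hyphens) cannot be a valid module name at all and would need renaming with underscores.

```python
import os
import sys

# hypothetical: the directory that CONTAINS the cloned package folder
repo_parent = os.path.abspath('.')

# make the clone importable for this interpreter session
if repo_parent not in sys.path:
    sys.path.insert(0, repo_parent)
```

Inside PyCharm the equivalent is right-clicking the parent folder and marking it as a Sources Root.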
<python><module><pycharm>
2024-07-02 12:21:10
1
1,851
desert_ranger
78,696,765
3,070,181
Cannot locate tkinter icon file when using pyinstaller with poetry on Windows10
<p>I am using <a href="https://stackoverflow.com/a/60945435/3070181">this answer</a> to locate my icon file. My project is organised thus:</p> <pre><code>. │ main.spec │ poetry.lock │ pyproject.toml │ README.md │ ├───my_app │ │ pyinstaller.py │ │ __init__.py │ │ │ └───src │ │ main.py │ │ │ └───images │ icon.png │ └───dist my_app.exe </code></pre> <p>And my code:</p> <pre><code># main.py import sys import os import tkinter as tk from pathlib import Path def main() -&gt; None: root = tk.Tk() icon_file = resolve_path('images/icon.png') root.geometry('600x200') root.columnconfigure(0, weight=1) display_info = tk.StringVar() display = tk.Entry(root, textvariable= display_info) display.grid(sticky=tk.EW) info = f'{Path(icon_file).is_file()} - {icon_file}' display_info.set(info) # root.iconphoto(False, tk.PhotoImage(file=icon_file)) root.mainloop() def resolve_path(path) -&gt; str: if getattr(sys, 'frozen', False): return os.path.abspath(Path(sys._MEIPASS, path)) return os.path.abspath(Path(Path(__file__).parent, path)) if __name__ == '__main__': main() </code></pre> <p>The output I get when I run the script is:</p> <pre><code>True - C:\Users\fred\projects\my_app\my_app\src\images\icon.png </code></pre> <p>But when I build the exe and run that, the output is:</p> <pre><code>False - C:\Users\fred\AppData\Local\Temp\_MEI163682\images\icon.png </code></pre> <p>So it does not find the file.</p> <p>Where should it be located?</p>
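One common cause worth ruling out (a guess on my part, since `main.spec` is not shown): `resolve_path` correctly points into `_MEIPASS`, but PyInstaller only unpacks there what the spec bundles, so if the spec has no `datas` entry for the images folder the file simply never ships with the exe. A sketch of what the `Analysis` section might include, with paths assumed from the tree above:

```python
# main.spec (fragment, illustrative; other Analysis arguments omitted)
a = Analysis(
    ['my_app/src/main.py'],
    # copy the images folder into the bundle so it lands at
    # <_MEIPASS>/images/icon.png at runtime
    datas=[('my_app/src/images', 'images')],
)
```

The `False` in the frozen output combined with a plausible-looking `_MEI...` path is consistent with the folder not being bundled rather than the path logic being wrong.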
<python><windows><tkinter><pyinstaller><python-poetry>
2024-07-02 11:33:40
1
3,841
Psionman
78,696,707
1,285,061
Django is unable to send email out even though conf is correct
<p>Django is unable to send email out.</p> <p>But <a href="https://www.smtper.net/" rel="nofollow noreferrer">https://www.smtper.net/</a> was able to send a test email with the same exact settings, user, and password.</p> <p>What more do I need to do in Django to send email out?</p> <p>settings.py</p> <pre><code>## #environment variables from .env. from dotenv import load_dotenv load_dotenv() NOREPLYEMAIL = os.getenv('NOREPLYEMAIL') NOREPLYEMAILPASS = os.getenv('NOREPLYEMAILPASS') ### Email config EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' EMAIL_HOST = 'smtppro.zoho.com' EMAIL_PORT = 587 EMAIL_USE_TLS = True EMAIL_HOST_USER = NOREPLYEMAIL EMAIL_HOST_PASSWORD = NOREPLYEMAILPASS DEFAULT_FROM_EMAIL = NOREPLYEMAIL </code></pre> <p>view.py</p> <pre><code># using https://docs.djangoproject.com/en/5.0/topics/email/ @csrf_protect def test(request): testemail = EmailMessage( &quot;Hello&quot;, #subject &quot;Body goes here&quot;, #message body NOREPLYEMAIL, #from [&quot;mygmail@gmail.com&quot;,], #to reply_to=[NOREPLYEMAIL], headers={&quot;Message-ID&quot;: &quot;test&quot;}, ) testemail.send() </code></pre> <p>console</p> <pre><code>Content-Type: text/plain; charset=&quot;utf-8&quot; MIME-Version: 1.0 Content-Transfer-Encoding: 7bit Subject: Hello From: noreply@mydomain.com To: mygmail@gmail.com Reply-To: noreply@mydomain.com Date: Tue, 02 Jul 2024 11:38:16 -0000 Message-ID: test Body goes here ------------------------------------------------------------------------------- [02/Jul/2024 17:08:16] &quot;GET /test/ HTTP/1.1&quot; 200 13102 </code></pre>
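An observation worth noting (hedged, but it matches the console dump shown above): the settings select `django.core.mail.backends.console.EmailBackend`, which only prints the message to stdout and never opens an SMTP connection. If real delivery is intended, the backend would need to be the SMTP one:

```python
# settings.py (fragment, illustrative): switch from the console backend,
# which only prints to stdout, to the SMTP backend that actually connects
# to EMAIL_HOST
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
```

The console backend is typically meant for development, which would explain why the same credentials work from an external SMTP tester while Django appears to "send" without anything leaving the machine.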
<python><python-3.x><django><email><django-email>
2024-07-02 11:23:56
1
3,201
Majoris
78,696,521
3,928,295
import UserDict ModuleNotFoundError: No module named 'UserDict' when running a Flask app with PyJadeExtension
<p>I am upgrading my Flask app from Python 3.8 to 3.10, as Python 3.8 is going to reach its end of life in October 2024. While running the <code>app.py</code> file, my code throws an exception at this line:</p> <pre><code>app = Flask(__name__) app.jinja_env.add_extension('pyjade.ext.jinja.PyJadeExtension') # &lt;-- this line is bugged #app.register_blueprint(log_routes, url_prefix=&quot;/logs&quot;) </code></pre> <p>The exception stacktrace:</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/anaconda3/envs/py10/lib/python3.10/site-packages/pyjade/runtime.py&quot;, line 8, in &lt;module&gt; from collections import Mapping as MappingType ImportError: cannot import name 'Mapping' from 'collections' (/usr/local/anaconda3/envs/py10/lib/python3.10/collections/__init__.py) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/Users/manmohansinghvirdi/GitHub/qa-cp-simulator/CPStationSimulator/qawebserver/qawebserver.py&quot;, line 27, in &lt;module&gt; app.jinja_env.add_extension('pyjade.ext.jinja.PyJadeExtension') File &quot;/usr/local/anaconda3/envs/py10/lib/python3.10/site-packages/jinja2/environment.py&quot;, line 375, in add_extension self.extensions.update(load_extensions(self, [extension])) File &quot;/usr/local/anaconda3/envs/py10/lib/python3.10/site-packages/jinja2/environment.py&quot;, line 119, in load_extensions extension = t.cast(t.Type[&quot;Extension&quot;], import_string(extension)) File &quot;/usr/local/anaconda3/envs/py10/lib/python3.10/site-packages/jinja2/utils.py&quot;, line 149, in import_string return getattr(__import__(module, None, None, [obj]), obj) File &quot;/usr/local/anaconda3/envs/py10/lib/python3.10/site-packages/pyjade/__init__.py&quot;, line 4, in &lt;module&gt; from .utils import process File &quot;/usr/local/anaconda3/envs/py10/lib/python3.10/site-packages/pyjade/utils.py&quot;, line 224, in &lt;module&gt; from .ext.html import Compiler as HTMLCompiler 
File &quot;/usr/local/anaconda3/envs/py10/lib/python3.10/site-packages/pyjade/ext/html.py&quot;, line 6, in &lt;module&gt; from pyjade.runtime import is_mapping, iteration, escape File &quot;/usr/local/anaconda3/envs/py10/lib/python3.10/site-packages/pyjade/runtime.py&quot;, line 10, in &lt;module&gt; import UserDict ModuleNotFoundError: No module named 'UserDict' </code></pre> <p>After searching a lot on online forums, I found that I should update <code>from collections import MutableMapping</code> to <code>from collections.abc import MutableMapping</code>, as mentioned here - <a href="https://github.com/Significant-Gravitas/AutoGPT/issues/65#issuecomment-1494259607" rel="nofollow noreferrer">https://github.com/Significant-Gravitas/AutoGPT/issues/65#issuecomment-1494259607</a></p> <p>However, this means editing the installed packages themselves, which is not a seamless solution for CI/CD pipelines. Is there a solution to this exception stacktrace?</p>
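One hedged alternative to patching package sources (my sketch, not an official fix; pyjade is long unmaintained, so switching to a maintained fork is probably the cleaner long-term option): run a small compatibility shim in your own code, before pyjade is first imported. It re-exposes the ABCs that moved to `collections.abc` and aliases the py2-era `UserDict` module name.

```python
import collections
import collections.abc
import sys

# re-expose ABCs that were removed from `collections` in Python 3.10
for _name in ('Mapping', 'MutableMapping'):
    if not hasattr(collections, _name):
        setattr(collections, _name, getattr(collections.abc, _name))

# satisfy py2-style `import UserDict`; collections.UserDict provides
# the class that `UserDict.UserDict` resolves to
sys.modules.setdefault('UserDict', collections)
```

Because this lives in the application (e.g. at the top of `app.py`), it survives fresh installs in CI/CD, at the cost of depending on pyjade's internals not changing.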
<python><flask><python-3.10><pyjade>
2024-07-02 10:41:22
0
1,804
Manmohan_singh
78,696,471
511,976
AWS Glue logs are corrupted
<p>I have an issue with the logs from my AWS Glue jobs being corrupted.</p> <p>The logs are emitted from Glue Python scripts and go to CloudWatch. I recently started posting structured JSON logs with AWS EMF, and noticed that the logs were being corrupted. Some log statements in excess of 1024 bytes are split into multiple CloudWatch log entries, while sometimes smaller log statements are grouped into a single CloudWatch log entry. This obviously breaks JSON messages, and confuses other log analysis queries.</p> <p>The issue seemed to begin when many log messages were emitted in quick succession. I saw a log entry that had a large ~6000b JSON payload that was unsplit in the beginning of a log, before everything devolved into the 1024 byte chunks when a lot of lines were printed in quick succession.</p> <p>The jobs are Spark 3.3 jobs written in Python 3.9 and running on Glue 4.0. The affected logs are the ones produced by the Python script and not the ones produced by Spark.</p> <p>I did some testing, and wrote a script that does nothing except initialize Glue and then output log messages using either Python's print function, a Python logger, or the logger that comes from calling <code>getLogger()</code> on a <code>GlueContext</code>, and my conclusion is that either the CloudWatch agent where the Python is executed is misconfigured, or some intermediary process, between the running script and the CloudWatch agent, is.</p> <p>The script I used for testing is the following: <a href="https://gist.github.com/mhvelplund/961c8868fbfdf857dcd0a623a353870b" rel="nofollow noreferrer">https://gist.github.com/mhvelplund/961c8868fbfdf857dcd0a623a353870b</a></p> <p>Running the script with continuous logging enabled and using the Glue logger (<code>--TYPE=glue</code>), everything works fine but the logs go in the same log stream as the Spark logs, which contain a lot of Java noise.</p> <p>Running with a Python logger (<code>--TYPE=log</code>) or with print statements 
(<code>--TYPE=print</code>) is where problems arise. Log lines are grouped or split, seemingly arbitrarily, and not necessarily in the same manner from one run to another. This indicates that the issue is time related. Using a version of the script without the <code>output</code> delegation, but just raw print statements, I was able to get every printed line in a single CloudWatch message.</p> <p><a href="https://i.sstatic.net/Xko7jwcg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xko7jwcg.png" alt="No delay" /></a></p> <p>Inserting a slight delay of as little as 100 ms (<code>--DELAY=100</code>) before each output statement, the grouping and splitting issues disappeared.</p> <p><a href="https://i.sstatic.net/OlmKFCz1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlmKFCz1.png" alt="100 ms delay" /></a></p> <p>Unfortunately, using the Glue logger is not a good solution since I have legacy code in my real scripts that use the Python loggers and raw print statements, and changing those would be painful.</p> <p>Has anyone encountered this problem with CloudWatch or Glue before, and if so, how did you solve it without monkeypatching <code>sys.stdout</code>? 😉</p>
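To keep legacy `print` statements while applying the delay finding above, here is a hedged sketch (mine, not a fix for the underlying agent behaviour) of a wrapper that flushes each line and briefly throttles output, without monkeypatching `sys.stdout`:

```python
import time

def throttled_print(*args, delay=0.1, **kwargs):
    """Print with an explicit flush, then pause briefly so fast bursts
    of log lines are less likely to be coalesced or split downstream."""
    print(*args, flush=True, **kwargs)
    time.sleep(delay)
```

Calling `throttled_print(json.dumps(payload))` in the hot loops would mimic the `--DELAY=100` behaviour observed to fix the corruption; the obvious cost is slower logging, so it only suits moderate log volumes.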
<python><logging><aws-glue><amazon-cloudwatchlogs>
2024-07-02 10:31:08
1
2,380
mhvelplund
78,696,434
928,713
Connection Timeout on External API Call on Server but Works Locally
<p>I have a Django 4.2.2 application running on Python 3.11. One of the views is as follows:</p> <pre><code>import requests from django.http import HttpResponse def get_captcha(request): response = requests.get( &quot;https://geoportale.cartografia.agenziaentrate.gov.it/age-inspire/srv/ita/Captcha?type=image&amp;lang=it&quot; ) session_id = response.cookies.get(&quot;JSESSIONID&quot;) request.session[&quot;CAPTCHA&quot;] = session_id content_type = response.headers[&quot;content-type&quot;] return HttpResponse(response.content, content_type=content_type) </code></pre> <p>When running the application locally, the API call is executed successfully. However, when running on an Aruba server, it results in a timeout. Here are the steps I've taken to try to resolve this issue, without success:</p> <ul> <li>Downgraded OpenSSL to the same version used locally (1.1.1n)</li> <li>Disabled the firewall</li> <li>Changed DNS servers for domain resolution</li> <li>Downgraded the Python version</li> </ul> <p>Below is the result of traceroute from the server:</p> <pre><code>traceroute to geoportale.cartografia.agenziaentrate.gov.it (217.175.52.194), 30 hops max, 60 byte packets 1 host2-224-110-95.serverdedicati.aruba.it (95.110.224.2) 0.702 ms 0.744 ms 0.824 ms 2 cr2-te0-0-0-2.it2.aruba.it (62.149.185.196) 0.865 ms 0.919 ms * 3 * * * 4 * * * 5 * * * 6 * 93-57-68-2.ip163.fastwebnet.it (93.57.68.2) 10.095 ms 10.516 ms 7 81-208-111-134.ip.fastwebnet.it (81.208.111.134) 10.482 ms 10.370 ms 10.536 ms 8 * * * 9 * * * 10 * * * 11 * * * 12 * * * 13 * * * 14 * * * 15 * * * 16 * * * 17 * * * 18 * * * 19 * * * 20 * * * 21 * * * 22 * * * 23 * * * 24 * * * 25 * * * 26 * * * 27 * * * 28 * * * 29 * * * 30 * * * </code></pre> <p>And here is the output of nslookup from the server:</p> <pre><code>Server: 127.0.0.53 Address: 127.0.0.53#53 Non-authoritative answer: Name: geoportale.cartografia.agenziaentrate.gov.it Address: 217.175.52.194 </code></pre> <p>These are the details from Chrome inspector 
for the call made from the server (which fails):</p> <pre><code>Request URL: https://beta.service.com/captcha/ Request Method: GET Status Code: 500 Internal Server Error Remote Address: 95.110.228.4:443 Referrer Policy: same-origin Connection: keep-alive Content-Length: 152853 Content-Type: text/html; charset=utf-8 Cross-Origin-Opener-Policy: same-origin Date: Tue, 02 Jul 2024 08:39:55 GMT Referrer-Policy: same-origin Server: nginx Strict-Transport-Security: max-age=31536000; includeSubDomains Vary: Cookie X-Content-Type-Options: nosniff X-Frame-Options: DENY Accept: */* Accept-Encoding: gzip, deflate, br, zstd Accept-Language: it-IT,it;q=0.9,en-US;q=0.8,en;q=0.7 Cache-Control: no-cache Connection: keep-alive Cookie: _fbp=fb.1.1712064669028.980891575; _ga=GA1.3.585575725.1712064668; _gid=GA1.2.894542501.1719830766; csrftoken=us75UYbPCtVM8DTA0ZhQPzC9ON5Jc3qZ; sessionid=z287mdm63vq9j1p0uxwvf69ieykput00; _gid=GA1.3.894542501.1719830766; _ga=GA1.1.585575725.1712064668; _ga_314551799=GS1.1.1719909450.504.1.1719909452.0.0.0; _ga_52G663NH12=GS1.1.1719909450.176.1.1719909452.58.0.0; _gat_UA-67329675-1=1; _gcl_au=1.1.471068170.1719846195.1018237849.1719909461.1719909460 Host: beta.service.com Pragma: no-cache Referer: https://beta.service.com/ Sec-Ch-Ua: &quot;Not/A)Brand&quot;;v=&quot;8&quot;, &quot;Chromium&quot;;v=&quot;126&quot;, &quot;Google Chrome&quot;;v=&quot;126&quot; Sec-Ch-Ua-Mobile: ?0 Sec-Ch-Ua-Platform: &quot;Linux&quot; Sec-Fetch-Dest: empty Sec-Fetch-Mode: cors Sec-Fetch-Site: same-origin User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36 </code></pre> <p>And these are the details from the local environment (which succeeds):</p> <pre><code>Request URL: http://localhost:8000/captcha/ Request Method: GET Status Code: 200 OK Remote Address: 127.0.0.1:8000 Referrer Policy: same-origin Content-Length: 19823 Content-Type: image/jpeg Cross-Origin-Opener-Policy: same-origin Date: Tue, 02 Jul 2024 
10:15:21 GMT Referrer-Policy: same-origin Server: WSGIServer/0.2 CPython/3.11.9 Set-Cookie: sessionid=q57cn8cmh1961ghhhu6ywerflojq1f4l; expires=Tue, 16 Jul 2024 10:15:21 GMT; HttpOnly; Max-Age=1209600; Path=/; SameSite=Lax Vary: Cookie X-Content-Type-Options: nosniff X-Frame-Options: DENY Accept: */* Accept-Encoding: gzip, deflate, br, zstd Accept-Language: it-IT,it;q=0.9,en-US;q=0.8,en;q=0.7 Cache-Control: no-cache Connection: keep-alive Cookie: _ga=GA1.1.1117820162.1715856098; _gcl_au=1.1.2110919363.1715856098; _fbp=fb.0.1715856098133.219741871; USENAV=1717774507848.1409993568842083; csrftoken=lLd1jqD3VhFqW2NdQI5RAAQEntxfs476; sessionid=q57cn8cmh1961ghhhu6ywerflojq1f4l; _gid=GA1.1.1504428548.1719909313; _ga_52G663NH12=GS1.1.1719909292.46.1.1719909562.60.0.0; _ga_314551799=GS1.1.1719909292.220.1.1719909562.0.0.0 Host: localhost:8000 Pragma: no-cache Referer: http://localhost:8000/ Sec-Ch-Ua: &quot;Not/A)Brand&quot;;v=&quot;8&quot;, &quot;Chromium&quot;;v=&quot;126&quot;, &quot;Google Chrome&quot;;v=&quot;126&quot; Sec-Ch-Ua-Mobile: ?0 Sec-Ch-Ua-Platform: &quot;Linux&quot; Sec-Fetch-Dest: empty Sec-Fetch-Mode: cors Sec-Fetch-Site: same-origin User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36 </code></pre> <p>Additional Information: On two other servers from Aruba, the call succeeds. Aruba's support team claims that nothing is blocking the call on their network. 
I would appreciate any insights or suggestions on what might be causing this timeout on the server, since right now I don't know what else I could try and where to look to solve it.</p> <p><strong>EDIT: adding more info on error and variable values</strong></p> <p>Error</p> <pre><code>Request Method: GET Request URL: https://beta.service.com/captcha/ Django Version: 4.2.2 Exception Type: ConnectTimeout Exception Value: HTTPSConnectionPool(host='geoportale.cartografia.agenziaentrate.gov.it', port=443): Max retries exceeded with url: /age-inspire/srv/ita/Captcha?type=image&amp;lang=it (Caused by ConnectTimeoutError(&lt;urllib3.connection.HTTPSConnection object at 0x7f55bb0a6150&gt;, 'Connection to geoportale.cartografia.agenziaentrate.gov.it timed out. (connect timeout=None)')) </code></pre> <p>Local variables:</p> <pre><code>cert: None chunked: False conn: &lt;urllib3.connectionpool.HTTPSConnectionPool object at 0x7f5579092fd0&gt; proxies: OrderedDict() request: &lt;PreparedRequest [GET]&gt; self: &lt;requests.adapters.HTTPAdapter object at 0x7f55821c33d0&gt; stream: False timeout: Timeout(connect=None, read=None, total=None) url: '/age-inspire/srv/ita/Captcha?type=image&amp;lang=it' verify: True </code></pre>
<python><django><server><python-requests><url-routing>
2024-07-02 10:23:54
0
420
m.piras
78,696,343
10,746,597
Read a CSV file with Pandas in Databricks workspace
<p>I've uploaded a notebook to my Databricks workspace. The workspace tree looks like this:</p> <p><a href="https://i.sstatic.net/ovZDi3A4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ovZDi3A4.png" alt="enter image description here" /></a></p> <p>I want to read a CSV in the data folder using pandas. I'm trying to do so with the following code in the FastFashionML notebook:</p> <pre><code>import pandas as pd items = pd.read_csv('data/fastFasionItemsDim.csv', sep='|') </code></pre> <p>What I get is:</p> <pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'data/fastFasionItemsDim.csv' </code></pre> <p>This is what the data folder looks like:</p> <p><a href="https://i.sstatic.net/ANNqOr8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ANNqOr8J.png" alt="enter image description here" /></a></p> <p>Although the extension is not shown in the tree, the file is named fastFasionItemsDim.csv, as I uploaded it using a zip file. I also tried to remove the extension from the file path, but I still can't read the file.</p> <p>How should I correctly locate and load the CSV file with pandas in the Databricks File System?</p>
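A hedged diagnostic sketch (mine): a relative path like `'data/...'` resolves against the process working directory, which in Databricks is not necessarily the notebook's own folder, so printing the cwd and building an explicit path usually pinpoints the mismatch. (In recent Databricks runtimes the workspace tree is, as far as I know, also reachable under an absolute `/Workspace/...` path.)

```python
import os

# where relative paths like 'data/...' actually resolve from
print(os.getcwd())

# build the candidate path explicitly and check it exists before
# handing it to pandas (base is illustrative, not the real location)
base = os.getcwd()
csv_path = os.path.join(base, 'data', 'fastFasionItemsDim.csv')
print(csv_path, os.path.exists(csv_path))
```

If `os.path.exists` is `False`, listing `os.listdir(base)` shows what the runtime actually sees next to the notebook, including whether the upload kept the `.csv` extension.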
<python><pandas><databricks>
2024-07-02 10:06:02
3
742
juuso
78,696,254
22,400,527
How to order migration files across different apps in django?
<p>I am working on a django project. After some time, I wanted to refactor the code. I wanted to create a new app and move some of my models to that app. Here's an example:</p> <p>Suppose I have an app named <code>CategoryApp</code> and two models in it named <code>Product</code> and <code>Category</code>.</p> <pre class="lang-py prettyprint-override"><code>class Category(models.Model): ... class Product(models.Model): ... </code></pre> <p>After some time of coding, I realized that I need a separate app for <code>Product</code>. So, I created a new app named <code>ProductApp</code>. And copied the <code>Product</code> model from <code>CategoryApp</code> to <code>ProductApp</code>. I managed all the foreign keys and relationships appropriately.</p> <p>Now, I need to create the migration files.</p> <p>First of all, I created a migration file for the app <code>ProductApp</code> (<code>python3 manage.py makemigrations ProductApp</code>) that just creates a new table <code>productapp_product</code> in the database. The migration file is named <code>0001.py</code>. I applied this migration and a new table is created.</p> <p>Second, I created an empty migration file in the <code>ProductApp</code> (<code>python3 manage.py makemigrations ProductApp --empty</code>). The migration file is named <code>0002.py</code> I modified the contents of the empty migration file to run SQL query to copy all the data from <code>categoryapp_product</code> table to <code>productapp_product</code>. I also added reverse_sql in case I need to reverse the migration. I applied this migration as well and all the data are copied to the new table.</p> <p>Third, I deleted the <code>Product</code> model from <code>CategoryApp</code> and created a migration file for <code>CategoryApp</code>. The migration file is named <code>0002.py</code>. I applied this migration and the old table is now deleted.</p> <p>I pushed my code to github. I went to my VPS server and pulled the latest code. 
I ran the command <code>python3 manage.py migrate</code> hoping that everything would work.</p> <p>Alas, I got an error saying that <code>categoryapp_product does not exist</code> while migrating <code>ProductApp/migrations/0002.py</code>. Django had migrated <code>CategoryApp/migrations/0002.py</code> before <code>ProductApp/migrations/0002.py</code>, which caused the <code>categoryapp_product</code> table to be deleted before its data could be copied to <code>productapp_product</code>.</p> <p>Is there a way to specify the order of migration files to be migrated across apps? Or, is there a way to add a migration file as a dependency to another migration file?</p> <p>What is the solution to this problem?</p>
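Django migrations do support cross-app ordering: each `Migration` class has a `dependencies` list (migrations that must run before it), and a `run_before` list for the reverse direction. A sketch of how ProductApp's data-copy migration could be forced ahead of CategoryApp's table drop (module paths and names assumed from the description above):

```python
# ProductApp/migrations/0002.py (fragment, illustrative)
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
        ('ProductApp', '0001'),
    ]

    # guarantee this data copy runs BEFORE CategoryApp's 0002
    # deletes the old categoryapp_product table
    run_before = [
        ('CategoryApp', '0002'),
    ]

    operations = [
        # migrations.RunSQL(...) copying categoryapp_product
        # into productapp_product, with its reverse_sql
    ]
```

Equivalently, CategoryApp's `0002` could list `('ProductApp', '0002')` in its own `dependencies`; either direction encodes the same ordering constraint for `migrate` on a fresh database.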
<python><django><django-migrations>
2024-07-02 09:48:16
1
329
Ashutosh Chapagain
78,696,115
6,618,051
Get scenario name in pytest_bdd_apply_tag method
<p>In the case of the <code>@todo</code> tag, I need to send a request to Zephyr Scale marking the test as skipped.</p> <pre class="lang-py prettyprint-override"><code>def pytest_bdd_apply_tag(tag, function): env = os.environ.get(ENV_VARIABLE) if tag == 'todo': marker = pytest.mark.skip(reason=&quot;Not implemented yet&quot;) marker(function) ## Get issue number # match = re.search(r'\[([A-Z]+-[^\]]+)\]', scenario.name) ## Send request to Zephyr Scale # ZephyrScale(match.group(1)).set_skipped() return True else: # Fall back to the default behavior of pytest-bdd return None </code></pre> <p>Is there any option to get the scenario name?</p>
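For reference, the issue-number regex from the commented code works on any string, so once the scenario name is available from whichever hook provides a scenario object (pytest-bdd has scenario-level hooks such as `pytest_bdd_before_scenario`; check the hooks in your installed version), extracting the key is straightforward. A standalone illustration with a made-up scenario name:

```python
import re

# hypothetical scenario name carrying a Jira/Zephyr-style key
scenario_name = 'Checkout succeeds [SHOP-123] for guest users'

# same pattern as in the question: text inside [...] starting
# with uppercase letters and a dash
match = re.search(r'\[([A-Z]+-[^\]]+)\]', scenario_name)
issue_key = match.group(1) if match else None
```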
<python><pytest><bdd><pytest-bdd>
2024-07-02 09:19:08
1
1,939
FieryCat
78,696,026
3,190,076
Jupyter notebook. How to direct the output to a specific cell?
<p>Is there a way to specify the output cell where a function should print its output?</p> <p>In my specific case, I have some threads running, each with a logger. The logger output is printed in whichever cell happens to be running, interfering with that cell's intended output. Is there a way I can force the logger to print only in <code>cell #1</code>, for example?</p>
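A generic sketch of the underlying mechanism (mine; routing the messages into one specific notebook cell would additionally need something like an ipywidgets `Output` widget displayed in that cell, which is not shown here): detaching the logger from the default stream stops its messages landing in whichever cell is currently running.

```python
import io
import logging

logger = logging.getLogger('worker')
logger.setLevel(logging.INFO)
logger.propagate = False  # stop messages reaching the root handler / cell output

buffer = io.StringIO()    # could be a file, or a widget-backed stream
logger.addHandler(logging.StreamHandler(buffer))

logger.info('runs silently, captured in the buffer')
```

Each thread's logger configured this way keeps its output out of other cells; the buffered text can then be displayed wherever you choose.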
<python><logging><jupyter-notebook>
2024-07-02 09:05:46
2
10,889
alec_djinn
78,696,011
2,886,640
How can I set all form fields readonly in Odoo 16 depending on a field?
<p>In Odoo 16, I'm trying to make all fields from a form view readonly depending on the value of another field of the same form.</p> <p>First, I tried the following:</p> <pre><code>&lt;xpath expr=&quot;//field&quot; position=&quot;attributes&quot;&gt; &lt;attribute name=&quot;attrs&quot;&gt;{'readonly': [('my_field', '=', True)]}&lt;/attribute&gt; &lt;/xpath&gt; </code></pre> <p>With no result.</p> <p>I can't use <code>&lt;form edit=&quot;false&quot;&gt;</code> either, since I have to check the field value.</p> <p>A rule with <code>&lt;field name=&quot;perm_write&quot;&gt;1&lt;/field&gt;</code> works, but it doesn't behave as I need, since it allows you to modify the whole form until you click on <em>Save</em> and get the permission error.</p> <p>And overwriting <code>get_view</code> is not a valid option, since it cannot depend on the <code>my_field</code> value.</p> <p>The only solution I can find is to modify each field of the form with <code>xpath</code>, which is pretty cumbersome, and will not hold up if more fields are added to the form view in the future via other apps.</p> <p>Does anyone have a better solution for this?</p>
<python><python-3.x><xml><odoo><odoo-16>
2024-07-02 09:01:38
1
10,269
forvas
78,695,836
4,451,521
In a gradio tab GUI, a button calls the other tab
<p>I have the following script:</p> <pre><code>import gradio as gr # Define the function for the first tab def greet(text): return f&quot;Hello {text}&quot; # Define the function for the second tab def farewell(text): return f&quot;Goodbye {text}&quot; # Create the interface for the first tab with gr.Blocks() as greet_interface: input_text = gr.Textbox(label=&quot;Input Text&quot;) output_text = gr.Textbox(label=&quot;Output Text&quot;) button = gr.Button(&quot;Submit&quot;) button.click(greet, inputs=input_text, outputs=output_text) # Create the interface for the second tab with gr.Blocks() as farewell_interface: input_text = gr.Textbox(label=&quot;Input Text&quot;) output_text = gr.Textbox(label=&quot;Output Text&quot;) button = gr.Button(&quot;Submit&quot;) button.click(farewell, inputs=input_text, outputs=output_text) # Combine the interfaces into tabs with gr.Blocks() as demo: with gr.Tabs(): with gr.TabItem(&quot;Greet&quot;): greet_interface.render() with gr.TabItem(&quot;Farewell&quot;): farewell_interface.render() # Launch the interface # demo.launch() demo.launch(server_name=&quot;0.0.0.0&quot;, server_port=7861) </code></pre> <p>I am scratching my head because this works in one environment I have, and yet in the other it fails terribly.</p> <p>How it fails:</p> <p>In the second tab (Farewell), when the button is pressed, it actually calls the <code>greet</code> function; <code>farewell</code> is never called.</p> <p>I can see that some processing is being done in the output_text of the second tab but it never completes. 
Instead, the output text of the first tab is filled.</p> <p>I cannot comprehend why this is happening.</p> <p>The only difference I have is that of the environments:</p> <ol> <li>The environment where it works:</li> </ol> <ul> <li>Python 3.11.1</li> <li>use venv</li> <li>gradio 4.37.2</li> </ul> <ol start="2"> <li>The environment where it fails</li> </ol> <ul> <li>Python 3.9.16</li> <li>use poetry</li> <li>gradio 4.32.2</li> </ul> <p>Can someone help me with this strange occurrence? Is tabbed gradio buggy?</p> <p>By the way, I already tried using completely different variable names per tab, but that did not work.</p>
<python><gradio>
2024-07-02 08:24:53
1
10,576
KansaiRobot
78,695,792
5,423,080
Find local minima of array including the first element of plateau
<p>I refactored some code that finds the indices of local minima of the negative part of an array.</p> <p>The original code also identified the first element of a plateau as a minimum, but the new code doesn't, and I couldn't find a way to do it.</p> <p>This is a MWE:</p> <pre><code>import numpy as np from scipy import signal test = np.array([0, 1, 2, 3, 1, 0, -1, -1, -2, -3, -2, 0]) wanted_mins = np.where(((test[:-2] &gt; test[1:-1]) * (test[2:] &gt;= test[1:-1])) &amp; (test[1:-1] &lt; 0))[0] + 1 refactored_mins = signal.find_peaks(np.negative(np.where(test&lt;0, test, 0.0)))[0] </code></pre> <p>The <code>wanted_mins</code> are <code>6</code> and <code>9</code>, but the <code>refactored_mins</code> is just <code>9</code>.</p> <p>How can I also obtain the first element of plateaus?</p>
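For reference, a sketch (my illustration, not from the question) of why `find_peaks` misses index 6 even with its plateau handling: in the negated signal the flat stretch at indices 6-7 sits on a rising slope (the next sample is higher), so it is not a peak by `find_peaks`' definition, whereas the explicit vectorized condition only requires a drop from the left and no rise... wait, no rise is not required on the right either, just `>=`, which is why it accepts index 6.

```python
import numpy as np
from scipy import signal

test = np.array([0, 1, 2, 3, 1, 0, -1, -1, -2, -3, -2, 0])
neg = np.negative(np.where(test < 0, test, 0.0))

# plain find_peaks: only index 9 (strict peak); the 6-7 shelf is on a
# rising slope of `neg`, so it is not a peak at all
plain = signal.find_peaks(neg)[0]

# plateau_size reports plateau edges for flat-TOPPED peaks, but still
# requires both neighbours of the plateau to be lower, so 6 is not found
peaks, props = signal.find_peaks(neg, plateau_size=1)
left_edges = props["left_edges"]

# the original looser condition: drops from the left, right side >= centre
wanted = np.where((test[:-2] > test[1:-1]) & (test[2:] >= test[1:-1])
                  & (test[1:-1] < 0))[0] + 1
```

So for this definition of "minimum" the explicit comparison seems unavoidable; `plateau_size` only helps when the plateau is a true flat top with lower samples on both sides.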
<python><scipy>
2024-07-02 08:16:50
0
412
cicciodevoto
78,695,671
14,463,396
apply function to two pandas columns and assign them back to the original dataframe raises future warning
<p>I have a dataframe with columns for the mean and standard deviation of different distributions of things (along with other columns irrelevant to the question). I need to apply a function to transform these numbers into mu and sigma to pass into a log normal distribution (what this function does is irrelevant to my question, so I've simplified it). Example dataframe and function are:</p> <pre><code>df = pd.DataFrame({'mean': [3, 11, 9, 19, 22], 'std': [10, 17, 10, 25, 30]}) def log_normal_transform(mean_std): mu = mean_std[0] sigma = mean_std[1] return mu*2, sigma*2 </code></pre> <p>I want to apply this function to transform the mean and std columns in the dataframe. I used apply like this:</p> <pre><code>df[['mean', 'std']].apply(log_normal_transform, axis=1) </code></pre> <p>which returns a series of tuples with the transformed numbers:</p> <pre><code> 0 0 (6, 20) 1 (22, 34) 2 (18, 20) 3 (38, 50) 4 (44, 60) </code></pre> <p>Now when I try to assign this back to the original columns like this:</p> <pre><code>df[['mean', 'std']] = df[['mean', 'std']].apply(log_normal_transform, axis=1) </code></pre> <p>I get an error that the columns are not the same length as key (which I understand). I found <a href="https://stackoverflow.com/questions/29550414/how-can-i-split-a-column-of-tuples-in-a-pandas-dataframe">this question</a> on how to assign a series of tuples to multiple columns, most answers suggest to convert the series to a list, so I tried this:</p> <pre><code>df[['mean', 'std']] = df[['mean', 'std']].apply(log_normal_transform, axis=1).values.tolist() </code></pre> <p>Which does work, however I get a future warning of:</p> <pre><code>&lt;string&gt;:2: FutureWarning: Series.__getitem__ treating keys as positions is deprecated. In a future version, integer keys will always be treated as labels (consistent with DataFrame behavior). 
To access a value by position, use `ser.iloc[pos]` </code></pre> <p>How can I do this without making the future warning come up so my code will still work in future versions?</p>
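The warning is triggered by `mean_std[0]`: integer keys on a row `Series` are currently treated as positions, and that behaviour is what is being deprecated. One way to sidestep it is to avoid integer `__getitem__` entirely, e.g. by unpacking. A small sketch (a plain tuple stands in for the row here, since the same function body works for both):

```python
def log_normal_transform(mean_std):
    # Tuple-unpacking iterates the values, so there is no integer
    # __getitem__ on the Series and no positional-vs-label ambiguity.
    mu, sigma = mean_std
    return mu * 2, sigma * 2

# Works identically on a plain tuple or on a pandas row Series:
print(log_normal_transform((3, 10)))  # (6, 20)
```

Equivalently, keeping the indexing but switching to `mean_std.iloc[0]` / `mean_std.iloc[1]` (or label access, `mean_std['mean']`) also silences the warning for Series input.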
<python><pandas><dataframe>
2024-07-02 07:50:36
1
3,395
Emi OB
78,695,616
9,348,836
AttributeError: 'NoneType' object has no attribute 'username' in Flask application
<p>This is my API Endpoint of an Update Route for a particular URL in Flask:</p> <pre><code># Update Route for a particular trip @app.route('/trips/&lt;int:trip_id&gt;', methods=['PUT']) def update_trip(trip_id): trip = trips.get(trip_id) #query from the database data = request.get_json() trip.username = data['username'] trip.start_at = datetime.strptime(data['start_at'], '%Y-%m-%d %H:%M:%S') trip.from_gps = data['from_gps'] trip.end_at = datetime.strptime(data['end_at'], '%Y-%m-%d %H:%M:%S') trip.distance = data['distance'] return jsonify({'message': 'Trip updated successfully!'}) </code></pre> <p>and the error I get in the console when running the Code in PostMan is:</p> <blockquote> <p>File &quot;E:\Wlink\Location_Tra\backend\main.py&quot;, line 608, in update_trip trip.username = data['username']<br /> AttributeError: 'NoneType' object has no attribute 'username'</p> </blockquote> <p>The filename is main.py in flask.<br /> The Screenshots:</p> <p><strong>Postman</strong></p> <p><a href="https://i.sstatic.net/LiiPCBdr.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LiiPCBdr.jpg" alt="PUT Request In Postman" /></a></p> <p>I am sending Raw JSON in PUT request.</p>
<python><flask><postman>
2024-07-02 07:37:38
1
1,291
bibashmanjusubedi
78,695,360
3,949,631
Most space-efficient way of storing a large number of large integers
<p>I have a large number of very large integers (in general, larger than <code>unsigned long long</code>), e.g.</p> <pre><code>976790216313803691633662 213166167 12361374472474414331778521 74143614714316467475274141141343747437542416477224365045416 ... 46247285274316436417475827426432634528582016257012061846140147206 7287516801860168175715895175371357381735188912758511 </code></pre> <p>which I would like to store in a file. What is the most space-efficient method to store them? I have tried several methods, including converting to bytes and storing in an <code>h5</code> file, storing as <code>.npz</code> using <code>numpy</code>, but the most compression I get is by storing them in plain text and then making a <code>.tar.gz</code> of the file.</p> <p>Is there a better way (i.e., which would give better compression)? There could be anywhere between 10^4 and 10^12 lines per file.</p> <p>Edit: You can produce a minimal reproducible example by running the following in python (which is how the list of numbers is obtained in the first place):</p> <pre><code>import numpy as np def generate_int(n): return int(str(np.random.randint(1, 10)) + ''.join([str(np.random.randint(0, 10)) for _ in range(n-1)])) output = [generate_int(np.random.randint(4, 1001)) for _ in range(1000000)] </code></pre>
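As a point of comparison, a binary length-prefixed encoding via `int.to_bytes` uses roughly log256(n) bytes per number instead of log10(n) characters (about 17% smaller before any compressor runs, and the result can still be gzipped afterwards). A hedged sketch, with illustrative function names, assuming all integers are non-negative as in the generator:

```python
def pack_ints(ints):
    # Each record: 2-byte big-endian length prefix, then the magnitude bytes.
    # 1001 decimal digits is about 416 bytes, well under the 65535 limit.
    out = bytearray()
    for n in ints:
        b = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
        out += len(b).to_bytes(2, "big") + b
    return bytes(out)

def unpack_ints(blob):
    ints, i = [], 0
    while i < len(blob):
        size = int.from_bytes(blob[i:i + 2], "big")
        i += 2
        ints.append(int.from_bytes(blob[i:i + size], "big"))
        i += size
    return ints

nums = [976790216313803691633662, 213166167, 12361374472474414331778521]
blob = pack_ints(nums)
assert unpack_ints(blob) == nums
print(len(blob), len(" ".join(map(str, nums))))  # packed size vs. plain-text size
```

The binary blob compresses less dramatically than digit text does (it has higher entropy per byte), so it is worth benchmarking both the raw sizes and the post-gzip sizes on real data before committing to either format.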
<python><numpy><compression><h5py>
2024-07-02 06:29:21
2
497
John
78,695,350
2,049,685
pyparsing - back to basics
<p>In attempting to put together a very simple example to illustrate an issue I'm having with pyparsing, I have found that I can't get my simple example to work - this example is barely more complex than Hello, World!</p> <p>Here is my example code:</p> <pre><code>import pyparsing as pp import textwrap as tw text = ( &quot;&quot;&quot; A) Red B) Green C) Blue &quot;&quot;&quot; ) a = pp.AtLineStart(&quot;A)&quot;) + pp.Word(pp.alphas) b = pp.AtLineStart(&quot;B)&quot;) + pp.Word(pp.alphas) c = pp.AtLineStart(&quot;C)&quot;) + pp.Word(pp.alphas) grammar = a + b + c grammar.run_tests(tw.dedent(text).strip()) </code></pre> <p>I would expect it to return <code>[&quot;A)&quot;, &quot;Red&quot;, &quot;B)&quot;, &quot;Green&quot;, &quot;C)&quot;, &quot;Blue&quot;]</code> but instead I get:</p> <pre><code>A) Red A) Red ^ ParseException: not found at line start, found end of text (at char 6), (line:1, col:7) FAIL: not found at line start, found end of text (at char 6), (line:1, col:7) B) Green B) Green ^ ParseException: Expected 'A)', found 'B' (at char 0), (line:1, col:1) FAIL: Expected 'A)', found 'B' (at char 0), (line:1, col:1) C) Blue C) Blue ^ ParseException: Expected 'A)', found 'C' (at char 0), (line:1, col:1) FAIL: Expected 'A)', found 'C' (at char 0), (line:1, col:1) </code></pre> <p>Why would it say it's found end of text after the first line??? Why is it expecting A) after the first line???</p> <p>(Note: <code>textwrap.dedent()</code> and <code>strip()</code> have no impact on the results of this script.)</p>
<python><pyparsing>
2024-07-02 06:26:17
2
631
Michael Henry
78,695,314
3,896,008
Weird behavior when updating the values using `iloc` in pandas dataframe
<p>While copying a pandas dataframe, ideally, we should use <code>.copy()</code>, which is a deep copy by default.</p> <p>We could also achieve the same using <code>new_df = pd.Dataframe(old_df)</code>, which is also intuitive (and a common style across most programming languages, since in principle we are calling a copy constructor).</p> <p>In both cases, they have different memory identifiers as shown below. But when I change the <code>old_df</code> using <code>.iloc</code>, it changes the value for the <code>new_df</code>. Is this the expected behaviour? I couldn't figure out this behaviour using the docs. Am I missing something trivial?</p> <pre class="lang-py prettyprint-override"><code># ! pip install smartprint from smartprint import smartprint as sprint import pandas as pd data = {'A': [1, 2, 3], 'B': [4, 5, 6]} original_df = pd.DataFrame(data) new_df = pd.DataFrame(original_df) original_df['A'] = original_df['A'].apply(lambda x: x * 1000) print (&quot;============ Changing a column with pd.Dataframe(old_df); No change&quot;) sprint(original_df) sprint(new_df) sprint (id(new_df), id(original_df)) data = {'A': [1, 2, 3], 'B': [4, 5, 6]} original_df = pd.DataFrame(data) new_df = pd.DataFrame(original_df) original_df.iloc[0,:] = 20 print (&quot;============ Using .iloc with pd.Dataframe(old_df); Change&quot;) sprint(original_df) sprint(new_df) sprint (id(new_df), id(original_df)) data = {'A': [1, 2, 3], 'B': [4, 5, 6]} original_df = pd.DataFrame(data) new_df = original_df.copy() original_df.iloc[0,:] = 20 print (&quot;============ Using .iloc with old_df.copy(); No change&quot;) sprint(original_df) sprint(new_df) sprint (id(new_df), id(original_df)) </code></pre> <p><strong>Output:</strong></p> <pre class="lang-py prettyprint-override"><code>============ Changing a column with pd.Dataframe(old_df); No change original_df : A B 0 1000 4 1 2000 5 2 3000 6 new_df : A B 0 1 4 1 2 5 2 3 6 id(new_df), id(original_df) : 140529132556544 140529514510944 ============ Using
.iloc with pd.Dataframe(old_df); Change original_df : A B 0 20 20 1 2 5 2 3 6 new_df : A B 0 20 20 1 2 5 2 3 6 id(new_df), id(original_df) : 140528893052000 140528894065584 ============ Using .iloc with old_df.copy(); No change original_df : A B 0 20 20 1 2 5 2 3 6 new_df : A B 0 1 4 1 2 5 2 3 6 id(new_df), id(original_df) : 140529057828336 140528940223984 </code></pre> <p>My python and pandas versions are listed below:</p> <pre class="lang-py prettyprint-override"><code>import sys sys.version Out[16]: '3.8.17 (default, Jul 5 2023, 16:18:40) \n[Clang 14.0.6 ]' pd.__version__ Out[17]: '1.4.3' </code></pre>
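What this looks like is the classic replace-versus-mutate distinction: in pandas 1.x, `pd.DataFrame(original_df)` does not copy the underlying column buffers by default, so an in-place write through `.iloc` is visible from both objects, while `original_df['A'] = ...` rebinds the column to a brand-new buffer that only one object references (the two `DataFrame` wrappers having different `id()`s says nothing about the shared buffers underneath). A plain-Python sketch of the same distinction, with lists standing in for column buffers:

```python
# Case 1: rebinding — analogous to original_df['A'] = original_df['A'].apply(...)
buffer = [1, 2, 3]
original = {"A": buffer}
shared = {"A": original["A"]}                 # like pd.DataFrame(original_df): same buffer
original["A"] = [x * 1000 for x in buffer]    # new list bound; 'shared' still sees the old one
assert shared["A"] == [1, 2, 3]

# Case 2: in-place mutation — analogous to original_df.iloc[0, :] = 20
buffer2 = [1, 2, 3]
original2 = {"A": buffer2}
shared2 = {"A": original2["A"]}
original2["A"][0] = 20                        # writes into the shared buffer; both see it
assert shared2["A"] == [20, 2, 3]

print("rebinding hides the change; in-place mutation is shared")
```

With pandas' copy-on-write mode (opt-in from 2.0, default in 3.0), the `.iloc` write would trigger a copy first, so both cases would behave like `.copy()`.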
<python><pandas><dataframe>
2024-07-02 06:13:36
1
1,347
lifezbeautiful
78,695,290
4,534,585
Updating SentOn Date value in .msg file saved in local folder
<p>How can I update the SentOn date in a .msg file located in a local folder on my machine? Can this be done using Python, VBA, or any other tool?</p>
<python><vba><outlook><com>
2024-07-02 06:08:27
1
647
Prashant Mishra
78,695,142
3,244,618
Python requests Selenium issue with ChromeDriver headless mode only
<p>I want to scrape a Facebook ads website using the requests library and Selenium with ChromeDriver (because I need to run it on PythonAnywhere, which has chromedriver).</p> <p>So I do:</p> <pre><code> chrome_options = ChromeOptions() arguments = [ &quot;--disable-notifications&quot;, &quot;--start-maximized&quot;, &quot;disable-infobars&quot;, &quot;--disable-gpu&quot;, &quot;--headless&quot;, &quot;window-size=1980,1080&quot;, &quot;--allow-running-insecure-content&quot;, &quot;--disable-extensions&quot;, &quot;--no-sandbox&quot;, &quot;--ignore-certificate-errors&quot;, &quot;--test-type&quot;, &quot;--disable-web-security&quot;, &quot;--safebrowsing-disable-download-protection&quot; ] for argument in arguments: chrome_options.add_argument(argument) prefs = { &quot;intl.accept_languages&quot;: &quot;en-US&quot; } chrome_options.add_experimental_option(&quot;prefs&quot;, prefs) chrome_options.add_argument(&quot;user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36&quot;) # Path to your ChromeDriver chrome_driver_path = &quot;/usr/local/bin/chromedriver&quot; # This is typically the path on PythonAnywhere # Set up the WebDriver service = ChromeService(executable_path=chrome_driver_path) driver = webdriver.Chrome(service=service, options=chrome_options) </code></pre> <p>I set up the driver and then scrape the page:</p> <pre><code>def scrape_page(companyId, companyName): # Navigate to the Facebook Ads Library page url = f'https://www.facebook.com/ads/library/?active_status=all&amp;ad_type=all&amp;country=NL&amp;view_all_page_id={companyId}&amp;search_type=page&amp;media_type=all' driver.get(url) time.sleep(5) print(driver.page_source) </code></pre> <p>Of course, sleep is not good for long-term usage; I should use WebDriverWait, but for now I want to make it work.</p> <p>But what it prints is the HTML with script tags. It looks like the page was not loaded properly.
When I remove <code>--headless</code> I see the browser running and loading the page properly, and the script prints everything that was loaded.</p> <p>Any ideas how to fix this?</p>
<python><selenium-webdriver><selenium-chromedriver>
2024-07-02 05:18:53
1
2,779
FrancMo
78,695,080
8,205,705
How to debug Triton Python, especially Triton-JIT compiler passes?
<p>I would like to do the following:</p> <ol> <li>Build Triton's Python package (which is run from a Python script): <ul> <li>into a specific directory</li> <li>with all C++ files built with debug symbols</li> <li>with the LLVM used by Triton also built with debug symbols</li> </ul> </li> <li>Run a Python file to debug: <ul> <li>the related Python script</li> <li>so that called C++ functions or objects can be debugged with GDB</li> </ul> </li> </ol> <p>I have searched the internet but cannot find related tutorials or guidance; thus, I would like to ask if anyone knows how to do this.</p>
<python><debugging><triton>
2024-07-02 04:52:08
0
1,049
Shore
78,695,057
9,870,211
Pyodbc auto running CREATE PROCEDURE
<p>I am connecting to a SQL Server using pyodbc - nothing fancy.</p> <p>I have been flagged by a DBA that I have long running queries on the DB, however, I know the query I am running takes between 1-3 seconds long because if I run it in SSMS it takes this long.</p> <p>When we added a trace to the DB, it seems there are two queries being executed. The below query seems to be auto running by pyodbc somehow, which I think is a driver initialisation maybe?</p> <pre><code>create procedure sys.sp_datatype_info_100 ( @data_type int = 0, @ODBCVer tinyint = 2 ) as declare @mintype int declare @maxtype int set @ODBCVer = isnull(@ODBCVer, 2) if @ODBCVer &lt; 3 -- includes ODBC 1.0 as well set @ODBCVer = 2 else set @ODBCVer = 3 if @data_type = 0 begin select @mintype = -32768 select @maxtype = 32767 end else begin select @mintype = @data_type select @maxtype = @data_type end select TYPE_NAME = v.TYPE_NAME, DATA_TYPE = v.DATA_TYPE, PRECISION = v.PRECISION, LITERAL_PREFIX = v.LITERAL_PREFIX, LITERAL_SUFFIX = v.LITERAL_SUFFIX, CREATE_PARAMS = v.CREATE_PARAMS, NULLABLE = v.NULLABLE, CASE_SENSITIVE = v.CASE_SENSITIVE, SEARCHABLE = v.SEARCHABLE, UNSIGNED_ATTRIBUTE = v.UNSIGNED_ATTRIBUTE, MONEY = v.MONEY, AUTO_INCREMENT = v.AUTO_INCREMENT, LOCAL_TYPE_NAME = v.LOCAL_TYPE_NAME, MINIMUM_SCALE = v.MINIMUM_SCALE, MAXIMUM_SCALE = v.MAXIMUM_SCALE, SQL_DATA_TYPE = v.SQL_DATA_TYPE, SQL_DATETIME_SUB = v.SQL_DATETIME_SUB, NUM_PREC_RADIX = v.NUM_PREC_RADIX, INTERVAL_PRECISION = v.INTERVAL_PRECISION, USERTYPE = v.USERTYPE from sys.spt_datatype_info_view v where v.DATA_TYPE between @mintype and @maxtype and v.ODBCVer = @ODBCVer order by 2, 12, 11, 20 </code></pre> <p><strong>Q: Has anyone seen this before and know anything more about it? 
Is it possible to disable this?</strong></p> <p>I am using simple code, sample below.</p> <pre><code>import pyodbc conn_string = &quot;DRIVER={ODBC Driver 18 for SQL Server};SERVER=%SERVER%;DATABASE=%DATABASE%;Trusted_Connection=yes;MultiSubnetFailover=yes;TrustServerCertificate=yes;&quot; sql = &quot;SELECT TOP 100 * FROM MY_DB&quot; conn = pyodbc.connect(conn_string) cursor = conn.cursor() cursor.execute(sql) res = cursor.fetchall() cursor.close() conn.close() </code></pre>
<python><sql-server><pyodbc>
2024-07-02 04:41:14
0
2,415
DDV
78,694,885
6,670,097
How to rename the array of StructType fields in PySpark?
<p>I need to read a JSON in French language and want to convert it English column names.</p> <p>e.g. The Schema is like this</p> <pre><code> |-- unites: array (nullable = true) | |-- element: struct (containsNull = true) | | |-- id: string (nullable = true) | | |-- score: integer (nullable = true) | | |-- statutDiffusionUnite: string (nullable = true) | | |-- unitePurgeeUnite: boolean (nullable = true) | | |-- dateCreationUnite: date (nullable = true) | | |-- sigleUnite: string (nullable = true) | | |-- sexeUnite: string (nullable = true) | | |-- periodesUnite: array (nullable = true) | | | |-- element: struct (containsNull = true) | | | | |-- dateFin: date (nullable = true) | | | | |-- dateDebut: date (nullable = true) | | | | |-- etatAdministratifUnite: string (nullable = true) </code></pre> <p>I can use the withColumn and alias OR withColumnRenamed to rename the top level. But I have problem to rename the array of struct (e.g. the periodesUnite.dateFin above).</p> <p>My plan is to create a new English column mapping that <code>periodesUnite: array</code> first (e.g. new column named 'UnitsPeriods') and then create another column mapping the <code>unites: array</code> (e.g. 
named 'Units') After that, replace the new 'Units.periodesUnite' by the new column 'UnitsPeriods'.</p> <p>The DF will be like this:</p> <pre><code> |-- unites: array (nullable = true) | |-- element: struct (containsNull = true) | | |-- id: string (nullable = true) | | |-- score: integer (nullable = true) | | |-- statutDiffusionUnite: string (nullable = true) | | |-- unitePurgeeUnite: boolean (nullable = true) | | |-- dateCreationUnite: date (nullable = true) | | |-- sigleUnite: string (nullable = true) | | |-- sexeUnite: string (nullable = true) | | |-- periodesUnite: array (nullable = true) | | | |-- element: struct (containsNull = true) | | | | |-- dateFin: date (nullable = true) | | | | |-- dateDebut: date (nullable = true) | | | | |-- etatAdministratifUnite: string (nullable = true) |-- Units: array (nullable = true) | |-- element: struct (containsNull = true) | | |-- id: string (nullable = true) | | |-- score: integer (nullable = true) | | |-- broadcast_status: string (nullable = true) | | |-- is_purged: boolean (nullable = true) | | |-- creation_date: date (nullable = true) | | |-- symbol: string (nullable = true) | | |-- sex: string (nullable = true) | | |-- UnitPeriods: array (nullable = true) | | | |-- element: struct (containsNull = true) | | | | |-- dateFin: date (nullable = true) | | | | |-- dateDebut: date (nullable = true) | | | | |-- etatAdministratifUnite: string (nullable = true) |-- UnitsPeriods: array (nullable = false) | |-- element: array (containsNull = true) | | |-- element: array (containsNull = true) | | | |-- element: struct (containsNull = true) | | | | |-- end_date: date (nullable = true) | | | | |-- start_date: date (nullable = true) | | | | |-- administrative_state: string (nullable = true) </code></pre> <p>I use the following codes to create the new column, but I can't find a way to fill the values from the 'unites.periodesUnite' to the new column 'UnitsPeriods'</p> <pre><code>newdf = df.withColumn('UnitsPeriods', 
F.array(F.struct(*[F.lit(None).cast(f.dataType).alias(fr_eng_name_mappings[f.name] if f.name in fr_eng_name_mappings else f.name) for f in unit_period_schema]))) \ </code></pre> <p>I also notice that the original column has the nullable set to true, but the new one has it set to false. Is there a way to set the nullable for the column?</p>
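Setting Spark aside for a moment, the renaming itself is just a recursive walk over the nested structure. A plain-Python sketch over dicts and lists (the mapping entries are taken from the schema above; the function name is mine) shows the shape of the traversal that any Spark-side solution has to mirror, whether it is built with `transform`/`struct` aliases or by casting the column to a schema that uses the English names:

```python
FR_EN = {
    "periodesUnite": "UnitPeriods",
    "dateFin": "end_date",
    "dateDebut": "start_date",
    "etatAdministratifUnite": "administrative_state",
}

def rename_keys(node, mapping):
    # Recursively rename dict keys; recurse through lists (arrays of structs).
    if isinstance(node, dict):
        return {mapping.get(k, k): rename_keys(v, mapping) for k, v in node.items()}
    if isinstance(node, list):
        return [rename_keys(item, mapping) for item in node]
    return node

row = {"periodesUnite": [{"dateFin": "2020-01-01", "dateDebut": "2019-01-01"}]}
print(rename_keys(row, FR_EN))
```

Keys missing from the mapping (like `id` or `score`) pass through unchanged, which matches the partial-translation behaviour wanted here.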
<python><apache-spark><pyspark><apache-spark-sql>
2024-07-02 03:04:19
1
831
kklo
78,694,739
8,105,437
Why does flask seem to require a redirect after POST?
<p>I have an array of forms I want rendered in a flask blueprint called fans. I am using sqlalchemy to SQLLite during dev to persist the data and flask-wtforms to render. The issue appears to be with DecimalRangeField - if I have two or more fans and change the slider on just one, the other slider appears to move to match it, despite the data value of the DecimalRangeField being unchanged. Note: the code below is working, the issue arises when the redirect line I highlighted below is deleted.</p> <p>Here is the routes.py code with the &quot;fix&quot; of the redirect added:</p> <pre><code>@bp.route('/', methods=['GET', 'POST']) def fans_index(): fans = Fan.query.all() if fans.__len__() == 0: return redirect(url_for('fans.newfan')) form = FanForm(request.form) if form.validate_on_submit(): # request.method == 'POST' for fan in fans: if fan.name == form.name.data: fan.swtch = form.swtch.data fan.speed = round(form.speed.data) db.session.commit() return redirect(url_for('fans.fans_index')) # &lt;-- THIS is required, why? 
else: # request.method == 'GET' pass forms = [] for fan in fans: form = FanForm() form.name.data = fan.name form.swtch.data = fan.swtch form.speed.data = fan.speed forms.append(form) return render_template('fans_index.html', title='Fans!', forms=forms) </code></pre> <p>Here is the form used:</p> <pre><code>class FanForm(FlaskForm): name = HiddenField('Name') swtch = BooleanField('Switch', render_kw={'class': 'swtch'}) speed = DecimalRangeField('Speed', render_kw={'class': 'speed'}, validators=[DataRequired()]) submit = SubmitField('Save Fan') </code></pre> <p>And here is the html template:</p> <pre><code>&lt;h1&gt;Fans&lt;/h1&gt; &lt;div class=&quot;container&quot;&gt; &lt;div class=&quot;row&quot;&gt; {% for form in forms %} &lt;div class=&quot;col mx-1 shadow-5-strong border border-white rounded&quot; style=&quot;max-width: 220px&quot;&gt; &lt;h2 class=&quot;ms-1&quot;&gt;{{ form.name.data }}:&lt;/h2&gt; &lt;form class=&quot;mx-auto ms-3&quot; name=&quot;{{ form.name.data }}&quot; action=&quot;&quot; method=&quot;post&quot;&gt; {{ form.hidden_tag() }} &lt;div&gt;{{ form.name }}&lt;/div&gt; &lt;p&gt; {{ form.speed.label }}: &lt;span class=&quot;speed_display_val&quot;&gt;{{ form.speed.data | round }}%&lt;/span&gt;&lt;br&gt; {{ form.speed(min=20) }}&lt;br&gt; {% for error in form.speed.errors %} &lt;span style=&quot;color: red;&quot;&gt;[{{ error }}]&lt;/span&gt; {% endfor %} &lt;/p&gt; &lt;p&gt; {{ form.swtch.label }} &lt;span class=&quot;ms-3&quot;&gt;{{ form.swtch }}&lt;/span&gt;&lt;br&gt; {% for error in form.swtch.errors %} &lt;span style=&quot;color: red;&quot;&gt;[{{ error }}]&lt;/span&gt; {% endfor %} &lt;/p&gt; &lt;p&gt;{{ form.submit }}&lt;/p&gt; &lt;/form&gt; &lt;/div&gt; {% endfor %} &lt;/div&gt; &lt;/div&gt; </code></pre> <p>I also have some simple javascript for this page to animate the slider and do submits when a user moves the slider or clicks a checkbox for turning the fans on and off:</p> <pre><code>/* script to animate the slider value changing 
and post data on slider mouseup or switch click */ const values = Array.from(document.getElementsByClassName('speed_display_val')); const speeds = Array.from(document.getElementsByClassName('speed')); const swtches = Array.from(document.getElementsByClassName('swtch')); //Note: switch is a reserved word in JS speeds.forEach((speed, i) =&gt; { speed.oninput = (e) =&gt; { values[i].textContent = Math.round(e.target.value) + '%' }; speed.onmouseup = () =&gt; { speed.form.requestSubmit() }; }); swtches.forEach((swtch) =&gt; { swtch.onclick = () =&gt; { swtch.form.requestSubmit() }; }); </code></pre>
<python><flask><flask-sqlalchemy><flask-wtforms>
2024-07-02 01:31:36
1
6,080
dmcgrandle
78,694,642
6,622,697
How to retrieve an object with a foreign key in many-to-many
<p>I have a many-to-many relationship.</p> <pre><code>class Module(BaseModel): name = models.TextField(unique=True, null=False) groups = models.ManyToManyField(ModuleGroup, db_table='module_group_members') class ModuleGroup(BaseModel): name = models.TextField(unique=True, null=False) </code></pre> <p>I want to retrieve the list of modules and their associated groups, something like this:</p> <pre><code>[ { 'name' : 'name1', 'groups' : ['group1', 'group2'] }, { 'name' : 'name2', 'groups' : ['group6', 'group7'] } ] </code></pre> <p>I tried this:</p> <pre><code>modules = Module.objects.filter(is_active=True) print('values_list', list(modules.values('name', 'groups'))) </code></pre> <p>Which gives me:</p> <pre><code>[{'name': 'name1', 'groups__name': 'group2'}, {'name': 'name1', 'groups__name': 'group2'}, {'name': 'name2', 'groups__name': 'group6'}, {'name': 'name2', 'groups__name': 'group7'}, ] </code></pre> <p>Is there any way to group the same modules together so that the group names are in a list?</p>
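Once the flat name/group rows are in hand, grouping them is a standard `itertools.groupby` step. A sketch over literal rows shaped like the `values()` output (illustrative data, not the ORM call itself):

```python
from itertools import groupby
from operator import itemgetter

rows = [
    {"name": "name1", "groups__name": "group1"},
    {"name": "name1", "groups__name": "group2"},
    {"name": "name2", "groups__name": "group6"},
    {"name": "name2", "groups__name": "group7"},
]

rows.sort(key=itemgetter("name"))  # groupby only merges adjacent equal keys
result = [
    {"name": name, "groups": [r["groups__name"] for r in grp]}
    for name, grp in groupby(rows, key=itemgetter("name"))
]
print(result)
```

On the ORM side, adding `prefetch_related('groups')` to the queryset and then reading `[g.name for g in module.groups.all()]` per module avoids both the duplicated rows and the per-module queries.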
<python><django>
2024-07-02 00:21:10
2
1,348
Peter Kronenberg
78,694,641
1,615,521
Pydantic field validation against Enum name
<p>I have an Enum:</p> <pre><code>class DataType(Enum): TIMESTAMP = MyClass(&quot;ts&quot;) DATETIME = MyClass(&quot;dt&quot;) DATE = MyClass(&quot;date&quot;) </code></pre> <p>and a Pydantic Model with a field of the type of that Enum:</p> <pre><code>class Aggregation(BaseModel): field_data_type: DataType </code></pre> <p>Is there a convenient way to tell Pydantic to validate the Model field against the names of the enum rather than the values? i.e., to be able to build this Model:</p> <pre><code>agg = Aggregation(field_data_type=&quot;TIMESTAMP&quot;) </code></pre>
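For reference, looking a member up by *name* rather than by value is just `DataType[name]` in the stdlib `enum` module. A minimal sketch of a validator function built on that (simple string values stand in for `MyClass(...)`; wiring this into Pydantic, e.g. via a `field_validator` with `mode="before"` in v2, is left as an assumption):

```python
from enum import Enum

class DataType(Enum):
    TIMESTAMP = "ts"   # stand-ins for MyClass("ts") etc.
    DATETIME = "dt"
    DATE = "date"

def by_name(value):
    # Accept an existing member, or a string matched against member *names*.
    if isinstance(value, DataType):
        return value
    try:
        return DataType[value]          # name lookup, not value lookup
    except KeyError:
        raise ValueError(f"unknown DataType name: {value!r}")

print(by_name("TIMESTAMP"))  # DataType.TIMESTAMP
```

Note the asymmetry this buys: `by_name("TIMESTAMP")` succeeds while `by_name("ts")` raises, which is exactly the validate-against-names behaviour being asked about.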
<python><enums><pydantic>
2024-07-02 00:21:04
1
1,135
Onca
78,694,406
82,156
Using pip to install a github pull request (404 not found)
<p>According to <a href="https://stackoverflow.com/questions/13561618/pip-how-to-install-a-git-pull-request">pip: how to install a git pull request</a> I should be able to use pip to install a library with an unmerged pull request, e.g.:</p> <pre><code>python3 -m pip install git+https://github.com/jamalex/notion-py.git@refs/pull/294/merge </code></pre> <p>The <code>merge</code> at the end auto-merges the PR.</p> <p>This used to work for me. However, recently when I've tried to redeploy my app, I now get an error saying that the URL is 404 not found (and in fact, it's true if I open the URL in the browser).</p> <p>Why would this URL have worked in the past but now no longer be available? How might I work around the issue?</p>
<python><pip>
2024-07-01 22:21:28
1
100,604
emmby
78,694,380
18,758,987
Is there a way to automatically overload functions with a 1-to-1 input-output type correspondence?
<p>Let's say I have this function</p> <pre class="lang-py prettyprint-override"><code>from typing import reveal_type def func(x: str | list[str]) -&gt; int | list[int]: &quot;&quot;&quot; Some docstring that should show up. &quot;&quot;&quot; if isinstance(x, str): return 0 return list(range(len(x))) y = func([&quot;hi&quot;, &quot;hello&quot;]) reveal_type(y) # int | list[int] </code></pre> <p>Revealing <code>y</code>'s type / hovering over it in your IDE will tell you <code>int | list[int]</code>.</p> <p>To make <code>y</code>'s type more precise, we overload it:</p> <pre class="lang-py prettyprint-override"><code>from typing import overload, reveal_type @overload def func(x: str) -&gt; int: ... @overload def func(x: list[str]) -&gt; list[int]: ... def func(x: str | list[str]) -&gt; int | list[int]: &quot;&quot;&quot; Some docstring that should show up. &quot;&quot;&quot; if isinstance(x, str): return 0 return list(range(len(x))) y = func([&quot;hi&quot;, &quot;hello&quot;]) reveal_type(y) # list[int] </code></pre> <p>Now, <code>y</code>'s revealed type is <code>list[int]</code>.</p> <p>I have many of these sort of functions—ones where the possible return types have a 1-to-1 correspondence with the types of a single input of the function. Ideally, I'd just have to do something like—</p> <pre class="lang-py prettyprint-override"><code>from typing import UnionMatch # not a real thing def func(x: UnionMatch[str, list[str]]) -&gt; UnionMatch[int, list[int]]: &quot;&quot;&quot; Some docstring that should show up. &quot;&quot;&quot; if isinstance(x, str): return 0 return list(range(len(x))) </code></pre> <p>—to be precise about the return type. 
No <code>@overload</code>s needed.</p> <p>How can this sort of functionality be achieved?</p> <p>User kevdog824 on Reddit has a solution <a href="https://www.reddit.com/r/learnpython/comments/1b4mzzo/deleted_by_user/kt3t464/" rel="nofollow noreferrer">here</a> and <a href="https://www.reddit.com/r/learnpython/comments/1b4mzzo/deleted_by_user/kt3zuyy/" rel="nofollow noreferrer">here</a> for the case where there are exactly 2 types in the <code>UnionMatch</code>:</p> <pre class="lang-py prettyprint-override"><code>from typing import ( reveal_type, Callable, Concatenate, Generic, overload, ParamSpec, TypeVar, ) _C1 = TypeVar(&quot;_C1&quot;) _C2 = TypeVar(&quot;_C2&quot;) _P = ParamSpec(&quot;_P&quot;) _T1 = TypeVar(&quot;_T1&quot;) _T2 = TypeVar(&quot;_T2&quot;) class FuncTemplate(Generic[_C1, _C2, _P, _T1, _T2]): def __init__(self, func: Callable[Concatenate[_C1 | _C2, _P], _T1 | _T2]) -&gt; None: self.func = func # self.__call__.__doc__ = func.__doc__ # Not correct b/c: # AttributeError: attribute '__doc__' of 'method' objects is not writable @overload def __call__( self, common_input: _C1, *args: _P.args, **kwargs: _P.kwargs, ) -&gt; _T1: ... @overload def __call__( self, common_input: _C2, *args: _P.args, **kwargs: _P.kwargs, ) -&gt; _T2: ... def __call__( self, common_input: _C1 | _C2, *args: _P.args, **kwargs: _P.kwargs, ) -&gt; _T1 | _T2: return self.func(common_input, *args, **kwargs) @FuncTemplate def func(x: str | list[str]) -&gt; int | list[int]: &quot;&quot;&quot; Some docstring that should show up. &quot;&quot;&quot; if isinstance(x, str): return 0 return list(range(len(x))) y = func([&quot;hi&quot;, &quot;hello&quot;]) reveal_type(y) # list[int] z = func(&quot;hey&quot;) reveal_type(z) # int </code></pre> <p>A dealbreaker is that the function's docstring is lost. 
Hovering over the function in my IDE (I believe it uses Pylance) says</p> <pre><code>(function) func: FuncTemplate[str, list[str], (), int, list[int]] </code></pre> <p>I haven't been able to get <code>@functools.wraps</code> working here. And ideally, there's a solution that works with more than 2 types in the <code>UnionMatch</code>.</p>
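On the docstring point specifically: `functools.update_wrapper` can copy `__doc__`, `__name__`, `__module__`, etc. onto the callable instance itself (rather than onto the unwritable bound `__call__`), which is enough for `help()` and most doc tooling. A sketch without the generics, to isolate just the metadata-copying part; whether Pylance's hover picks up the instance `__doc__` is an assumption worth testing in the IDE:

```python
import functools

class FuncTemplate:
    def __init__(self, func):
        self.func = func
        # Copies __doc__, __name__, __qualname__, __module__, __annotations__
        # from func onto this instance and sets self.__wrapped__ = func.
        functools.update_wrapper(self, func)

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)

@FuncTemplate
def func(x):
    """Some docstring that should show up."""
    return 0 if isinstance(x, str) else list(range(len(x)))

print(func.__doc__)  # Some docstring that should show up.
```

In the full class from the Reddit solution, this replaces the commented-out `self.__call__.__doc__ = ...` line, and the `@overload`-decorated `__call__` signatures keep providing the per-branch return types.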
<python><overloading><python-typing><pylance>
2024-07-01 22:09:49
0
472
chicxulub
78,694,344
16,778,949
How to efficiently apply a `spacy` model to encode an entire `pandas` dataframe?
<p>I am trying to implement a semantic search where I have to match a query string over a corpus that is present in a pandas dataframe. I am trying to use <code>spacy</code> and its <code>similarity</code> function to compute the similarity between two items of text. To pass it to the <code>similarity</code> function, first I need to encode the entire dataframe using a model (here I am using their <code>en_core_web_md</code> pre-trained pipeline to get the vectors).</p> <p>How can I do this efficiently over multiple large dataframes? Currently I'm just using the <code>pandas</code> <code>map</code> function to apply the model to every item in the dataframe.</p> <pre class="lang-py prettyprint-override"><code>import spacy import pandas as pd import nltk nltk.download('words') from nltk.corpus import words def main(): query_string = &quot;Eat Apple&quot; # Generate a sample corpus using words from nltk data = [words.words()[:1000] for _ in range(5)] df = pd.DataFrame(data).T compute_semantic_match_score(df, query_string) def compute_semantic_match_score(df, query): model = spacy.load(&quot;en_core_web_md&quot;) data_embeddings = df.map(model) query_embedding = model(query) match_scores = data_embeddings.map(query_embedding.similarity) print(match_scores) if __name__==&quot;__main__&quot;: main() </code></pre> <p>Here I am using a sample dataframe generated from an <code>nltk</code> corpus for a minimal reproducible example.</p> <p>However, in the limited compute environment where this is going to be deployed, this takes a considerable amount of time to execute. Is there a more efficient way to apply the model over the entire dataframe?</p> <p>The reference dataframes only change infrequently, so I can consider pre-computing the values for the data and storing them on disk. However, I'd prefer to do the processing live if possible.</p> <p>Any help is appreciated. Thank you!</p>
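One generic lever, independent of the model: if cell values repeat across the dataframes, encode each unique string once. A sketch with `functools.lru_cache` around a dummy stand-in for the pipeline call (so it stays self-contained); spacy's own batching entry point, `nlp.pipe`, is the other usual speedup and can be fed the deduplicated values the same way:

```python
from functools import lru_cache

calls = {"count": 0}

def model(text):
    # Stand-in for the (expensive) spacy pipeline call.
    calls["count"] += 1
    return f"vec({text})"

@lru_cache(maxsize=None)
def encode(text):
    return model(text)

corpus = ["apple", "banana", "apple", "banana", "apple"]  # duplicates are common
embeddings = [encode(t) for t in corpus]
print(calls["count"])  # 2 — the model ran once per unique string
```

In the dataframe version this becomes `df.map(encode)` instead of `df.map(model)`; how much it helps depends entirely on how many duplicate cells the real corpora contain.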
<python><pandas><dataframe><spacy>
2024-07-01 21:53:40
0
534
phoney_badger
78,694,172
16,778,949
How to read data validation rules from an excel sheet using Python?
<p>I have a template Excel sheet with some data validation rules that I need to populate with data from another source. I need to read the template Excel sheet with Python and populate the rows and then write it out while preserving the data validation rules that used to exist in the original Excel sheet. The template is externally generated and I have no control over it, so I pretty much need to copy the validation rules from there.</p> <p>I tried reading the Excel sheet in as an <code>openpyxl</code> workbook, but when checking the data validation on cells, they all seem to be empty.</p> <pre class="lang-py prettyprint-override"><code>import openpyxl def main(): wb2 = openpyxl.load_workbook('data_validation.xlsx') ws = wb2.active print(ws.data_validations) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>yields the output</p> <pre><code>&lt;openpyxl.worksheet.datavalidation.DataValidationList object&gt; Parameters: disablePrompts=None, xWindow=None, yWindow=None, count=0, dataValidation=[] </code></pre> <p>Where <code>data_validation.xlsx</code> is a placeholder for the template Excel sheet. For the purposes of a minimal reproducible example, it can be a simple two-sheet Excel workbook with a dropdown validation on one of the cells, where the dropdown values are read from another sheet inside the Excel workbook.</p> <p><strong>Sheet 1</strong> | | A | |:-------:|:-------:| | 1 | Item 1 |</p> <p><strong>Sheet 2</strong> | | A | |:---:|:------:| | 1 | Item 1 | | 2 | Item 3 | | 3 | Item 3 |</p> <p>With the validation rule.</p> <p>Allow: List</p> <p>Source: <code>=Sheet2!$A$1:$A$3</code></p> <p>An image representing the same example:</p> <p><a href="https://i.sstatic.net/jtiGHrgF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jtiGHrgF.png" alt="images of two excel workbook sheets. one with a cell with a validation rule. A window with the validation rule displayed nearby" /></a></p> <p>While reading in the workbook <code>openpyxl</code> does raise a warning.</p> <pre><code>UserWarning: Data Validation extension is not supported and will be removed </code></pre> <p>So I'm assuming <code>openpyxl</code> doesn't support reading in pre-existing validation rules from an Excel sheet, only populating it with new ones?</p> <p>If that is the case, is there some other way to read in the validation rules? Any help is appreciated. Thanks!</p>
<python><excel><validation><openpyxl>
2024-07-01 20:49:39
0
534
phoney_badger
78,693,985
1,234,419
Ignoring request for not matching predefined routes
<p>I have some Sanic Python app code where the following routes are defined:</p> <pre><code> def create_inspector(app): inspector = Sanic(configure_logging=False) inspector.config['KEEP_ALIVE'] = False inspector.app = app inspector.add_route(render_status, '/') inspector.add_route(get_status, '/api/status') return inspector </code></pre> <p>But when I hit an undefined endpoint:</p> <pre><code>curl -k http://localhost:9504/api/123 </code></pre> <p>I get the following traceback in the app log.</p> <pre><code> File &quot;/opt/product/4.0.0.1/lib64/python3.6/site-packages/sanic/app.py&quot;, line 546, in handle_request handler, args, kwargs, uri = self.router.get(request) File &quot;/opt/product/4.0.0.1/lib64/python3.6/site-packages/sanic/router.py&quot;, line 344, in get return self._get(request.path, request.method, '') File &quot;/opt/product/4.0.0.1/lib64/python3.6/site-packages/sanic/router.py&quot;, line 393, in _get raise NotFound('Requested URL {} not found'.format(url)) sanic.exceptions.NotFound: Requested URL /api/123 not found </code></pre> <p>Is it possible to trap all the invalid endpoints and send them to 404 (resource not found) status?</p>
<python><rest><crud><sanic>
2024-07-01 19:39:13
2
1,403
vector8188
78,693,934
5,001,432
How to supply **kwargs with an variable that has the same name as a keyword argument?
<p>This has already been asked <a href="https://stackoverflow.com/q/61746117/5001432">here</a>, but the answers ignore that this is in the scenario where the functions are a part of a library, and therefore the user has no control over the API (outside of submitting an issue, which I will do for my case). However, I'm hoping that there may be a quick workaround to avoid waiting for an API fix.</p> <p>The scenario is as follows. I am trying to call <a href="https://github.com/gammapy/gammapy/blob/2a70d230b8f61059d20e400e730ccf2ac0503361/gammapy/estimators/points/core.py#L176-L229" rel="nofollow noreferrer">this function</a>, which has keyword argument <code>format=None</code>, and which makes an internal call (passing the <code>**kwargs</code>) to a <a href="https://docs.astropy.org/en/stable/io/ascii/read.html#parameters-for-read" rel="nofollow noreferrer">function from another library</a>, which has a keyword argument of the same name: <code>format</code>. I want to effectively be able to accomplish something of this sort:</p> <pre><code>FluxPoints.read(fn, format=None, kwargs={'format':'basic'}) # OR FluxPoints.read(fn, format=None, **{'format':'basic'}) </code></pre> <p>However, none of these attempts work. Is there a workaround here or can this only be fixed in the API?</p>
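For what it's worth, the collision can be reproduced with two stand-in functions (the `inner`/`outer` names below are hypothetical placeholders, not the gammapy/astropy API): Python rejects the duplicate keyword at the call site, so no calling convention can thread a second `format` through `**kwargs`.

```python
def inner(format=None):
    # stand-in for the second library's reader with its own `format` kwarg
    return format

def outer(format=None, **kwargs):
    # stand-in for the wrapper, which forwards **kwargs alongside its own
    # explicit `format` keyword
    return inner(format=format, **kwargs)

# a duplicate keyword is rejected immediately, before outer even runs
try:
    outer(format=None, **{"format": "basic"})
    collided = False
except TypeError:  # "got multiple values for keyword argument 'format'"
    collided = True

# and a plain keyword is simply bound to outer's own `format` parameter,
# so the inner reader never sees an independent value
swallowed = outer(**{"format": "basic"})
```

So this genuinely needs an API change (e.g. a dedicated pass-through parameter); until then, the practical workaround is to call the underlying reader yourself.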
<python><keyword-argument>
2024-07-01 19:18:01
2
1,445
cadams
78,693,875
9,751,001
How do I create a regex dynamically using strings in a list for use in a pandas dataframe search?
<p>The following code allows me to successfully identify the 2nd and 3rd texts, and only those texts, in a pandas dataframe by search for rows that contain the word &quot;cod&quot; or &quot;i&quot;:</p> <pre><code>import numpy as np import pandas as pd texts_df = pd.DataFrame({&quot;id&quot;:[1,2,3,4], &quot;text&quot;:[&quot;she loves coding&quot;, &quot;he was eating cod&quot;, &quot;i do not like fish&quot;, &quot;fishing is not for me&quot;]}) texts_df.loc[texts_df[&quot;text&quot;].str.contains(r'\b(cod|i)\b', regex=True)] </code></pre> <p><a href="https://i.sstatic.net/tCVaLEsy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCVaLEsy.png" alt="enter image description here" /></a></p> <p>I would like to build the list of words up dynamically by inserting words from a long list but I can't figure out how to do that successfully.</p> <p>I've tried the following but I get an error saying &quot;r is not defined&quot; (which I expected as it's not a variable but I can't put it as part of the string either and don't know what I should do)</p> <pre><code>kw_list = [&quot;cod&quot;, &quot;i&quot;] kw_regex_string = &quot;\b(&quot; for kw in kw_list: kw_regex_string = kw_regex_string + kw + &quot;|&quot; kw_regex_string = kw_regex_string[:-1] # remove the final &quot;|&quot; at the end kw_regex_string = kw_regex_string + &quot;)\b&quot; myregex = r + kw_regex_string texts_df.loc[texts_df[&quot;text&quot;].str.contains(myregex, regex=True)] </code></pre> <p>How can I build the 'or' condition containing the list of key words and then insert that into the reg ex in a way that will work in the pandas dataframe search?</p>
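One way to build the pattern safely is to `'|'.join` the keywords after `re.escape`-ing each one (a standard idiom, not pandas-specific); a sketch using the question's data:

```python
import re
import pandas as pd

texts_df = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "text": ["she loves coding", "he was eating cod",
             "i do not like fish", "fishing is not for me"],
})

kw_list = ["cod", "i"]
# re.escape guards against keywords containing regex metacharacters;
# (?:...) is non-capturing, which also avoids pandas' "match groups" warning
pattern = r"\b(?:" + "|".join(map(re.escape, kw_list)) + r")\b"

matches = texts_df.loc[texts_df["text"].str.contains(pattern, regex=True)]
```

The original error came from writing `r + kw_regex_string`: `r` is only a string-literal prefix (`r"..."`), not a value you can concatenate; using raw literals for the `\b` pieces, as above, is enough.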
<python><pandas><regex>
2024-07-01 19:04:09
1
631
code_to_joy
78,693,782
6,843,153
Pandas get the row with a max field from a groupby result
<p>I have the following dataframe:</p> <pre><code>col_a, col_b, col_c 1 01 34 1 02 23 1 03 567 1 04 425 2 01 234 2 02 54 2 03 56 3 01 56 3 02 567 </code></pre> <p>I want to select the rows with the max value in <code>col_b</code> for each <code>col_a</code> value. I have tried this:</p> <pre><code>df[[&quot;col_a&quot;, &quot;col_b&quot;, &quot;col_c&quot;]].groupby([&quot;col_a&quot;, &quot;col_b&quot;]).max(&quot;col_b&quot;) </code></pre> <p>But I just get all values for &quot;col_b&quot;.</p> <p>How can I achieve my goal?</p>
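A common approach is `groupby(...)["col_b"].idxmax()` to get one row label per group, then pull those rows back out with `.loc` (a sketch assuming `col_b` is numeric, with made-up integer data matching the shape of the question):

```python
import pandas as pd

df = pd.DataFrame({
    "col_a": [1, 1, 1, 1, 2, 2, 2, 3, 3],
    "col_b": [1, 2, 3, 4, 1, 2, 3, 1, 2],
    "col_c": [34, 23, 567, 425, 234, 54, 56, 56, 567],
})

# idxmax returns, per col_a group, the row label where col_b is largest;
# .loc then selects those whole rows
result = df.loc[df.groupby("col_a")["col_b"].idxmax()]
```

Grouping by both `col_a` and `col_b`, as in the question, makes every `(col_a, col_b)` pair its own group, which is why all values came back.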
<python><pandas><group-by>
2024-07-01 18:38:31
0
5,505
HuLu ViCa
78,693,717
606,025
Python : ModuleNotFoundError: No module named ...?
<p>I am trying to run a very simple Python project on VScode/Pycharm.</p> <p>Directory Structure</p> <pre><code>PythonAnalyticalProject/ ├── sales_analysis/ │ ├── __init__.py │ ├── cli.py │ ├── constants.py │ ├── db.py │ ├── sales_analysis.py (contains main) ├── images/ │ ├── image1.png/ │ └── image2.png/ ├── setup.py └── requirements.txt </code></pre> <p>I have installed the requirements.txt dependencies and also added the project directory in PYTHONPATH.</p> <p>But I am still getting an error when I execute sales_analysis.py.</p> <pre><code>ModuleNotFoundError: No module named 'sales_analysis.cli'; 'sales_analysis' is not a package </code></pre> <p>This error originated on a line which had an import statement</p> <pre><code>from sales_analysis.cli import user_continue </code></pre> <p>How do I solve this error? I seem to have made all the needed configurations, but the error still persists.</p>
<python>
2024-07-01 18:22:58
3
1,453
frewper
78,693,687
8,190,068
Align title to the left in an AccordionItem
<p>I am using an Accordion widget within my sample app (used for learning Kivy). In my KV code, I have:</p> <pre><code> Accordion: orientation: 'vertical' AccordionItem: title: '2024/12/20 00:00 &lt;reference&gt;' ... </code></pre> <p>The title is displayed centered within the AccordionItem header. I would like to change this to have all the titles aligned to the left. But I didn't see a way to do this in the documentation for the Accordion widget.</p> <p><strong>Is there a simple way to do this simple thing???</strong></p>
<python><kivy><accordion>
2024-07-01 18:17:09
1
424
Todd Hoatson
78,693,682
2,181,977
apply list-returning function to all rows in a pandas DataFrame
<p>I need to apply a function and have the returned list's values inserted into columns in the (new) dataframe. That is, if I have this:</p> <pre><code>import pandas as pd def fu(myArg): return {&quot;one&quot;:myArg,&quot;two&quot;:20} li = [10,20] df = pd.DataFrame(li,columns=[&quot;myArg&quot;]) </code></pre> <p>I would like to apply the <code>fu</code> function to each row of the dataframe so that the output is:</p> <pre><code> myArg one two 0 10 10 20 1 20 20 20 </code></pre> <p>(sorry for the extra spaces there, added for display reasons).</p> <p>How do I go about doing this, efficiently?</p>
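Since `fu` returns a dict per row, one idiomatic sketch (reusing the question's names) is to map it over the column, hand the list of dicts to the `DataFrame` constructor so each key becomes a column, and `join` the result back on:

```python
import pandas as pd

def fu(my_arg):
    return {"one": my_arg, "two": 20}

li = [10, 20]
df = pd.DataFrame(li, columns=["myArg"])

# map builds a Series of dicts; the DataFrame constructor expands each
# dict key into a column, and join glues the new columns onto df
expanded = pd.DataFrame(df["myArg"].map(fu).tolist(), index=df.index)
result = df.join(expanded)
```

This avoids `apply(pd.Series)` row by row, which is noticeably slower on large frames.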
<python><pandas><dataframe><apply>
2024-07-01 18:15:43
3
587
Fredrik Nylén
78,693,600
8,190,068
Change image in a BoxLayout when an action occurs in another Box Layout
<p>I have a sort of note-taking app I'm putting together. It has a horizontal BoxLayout which contains an image and another BoxLayout (which happens to be vertical). This embedded BoxLayout contains a TabbedPanel.</p> <p>I want to change the image in the main BoxLayout whenever the user selects a different tab in the embedded BoxLayout.</p> <p><strong>I'm not sure what to do in the on_release() method of each TabbedPanelItem in order to get the different images to display.</strong></p> <p>Currently I have the layout working the way I want up to the point of displaying the image for the first (default) tab.</p> <p>Here is my code:</p> <ul> <li><a href="https://drive.google.com/file/d/1UZYLum62zlOqHH7x2pvOt1dIFo52VBal/view?usp=drive_link" rel="nofollow noreferrer">main.py</a></li> <li><a href="https://drive.google.com/file/d/13l7BF_Lc4ioT47ksMJbMU4uGo-bmJFJH/view?usp=drive_link" rel="nofollow noreferrer">layouttest.kv</a></li> </ul> <p>In the KV code, I have set the source of the Image to app.leftImage. In the 3 on_release() handlers, I set app.leftImage to the correct image. But that doesn't change what is displayed.</p> <p>There is probably something simple that I am not understanding, but I am new to Kivy programming, so please answer with sufficient detail for a newbie to grasp.</p>
<python><kivy><handler>
2024-07-01 17:55:42
2
424
Todd Hoatson
78,693,574
5,527,646
Creating a dictionary from a subset of a list
<p>Suppose I have a Python list like this:</p> <pre><code>master_list = ['a_1', 'b_4', 'd_2', 'c_3', 'a_2', 'c_1', 'd_4', 'b_3', 'd_3', 'c_2', 'a_4', 'b_1', 'c_4', 'a_3', 'b_1', 'd_1'] </code></pre> <p>Each item in the list is a letter, underscore, and number. I want a dictionary that groups these items by number into lists like this:</p> <pre><code> my_dict = {'1': [], '2': [], '3': [], '4':[]} </code></pre> <p>While I could use the following to populate the lists inside <code>my_dict</code>, it seems awkward. Is there a more efficient way to achieve this?</p> <pre><code>for item in master_list: if '_1' in item: my_dict['1'].append(item) elif '_2' in item: my_dict['2'].append(item) elif '_3' in item: my_dict['3'].append(item) elif '_4' in item: my_dict['4'].append(item) </code></pre>
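A `collections.defaultdict` keyed on the part after the underscore removes the per-number branches entirely (a sketch; it also scales to any set of numbers without pre-declaring the keys):

```python
from collections import defaultdict

master_list = ['a_1', 'b_4', 'd_2', 'c_3', 'a_2', 'c_1', 'd_4', 'b_3',
               'd_3', 'c_2', 'a_4', 'b_1', 'c_4', 'a_3', 'b_1', 'd_1']

my_dict = defaultdict(list)
for item in master_list:
    # rsplit on the last underscore keeps this correct even if the prefix
    # ever contains an underscore itself
    my_dict[item.rsplit('_', 1)[1]].append(item)
```

`defaultdict(list)` creates the empty list on first access, so no key setup is needed; wrap with `dict(my_dict)` if a plain dict is wanted afterwards.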
<python><list><dictionary><nested>
2024-07-01 17:44:50
3
1,933
gwydion93
78,693,523
1,711,271
How to concatenate two dataframes of one row each without generating NaN
<p>I have two dataframes:</p> <pre><code>import pandas as pd import numpy as np n=10 df = pd.DataFrame( { &quot;A&quot;: 1.0, &quot;B&quot;: pd.Timestamp(&quot;20130102&quot;), &quot;C&quot;: pd.Series(1, index=list(range(n)), dtype=&quot;float32&quot;), &quot;D&quot;: np.array([3] * n, dtype=&quot;int32&quot;), &quot;E&quot;: pd.Categorical([&quot;test&quot;, &quot;train&quot;] * int(n/2)), &quot;F&quot;: &quot;foo&quot;, } ) i_f = ['A', 'B'] o_f = ['C', 'D', 'E', 'F'] df1 = pd.DataFrame(data=df[i_f].values, columns=i_f).reset_index(drop=True) df2 = pd.DataFrame(data=df[o_f].values, columns=o_f).reset_index(drop=True) </code></pre> <p>df1 and df2 are dataframes with the same number of rows, and different columns. Now I want to select a row i in df1, a row j in df2, and obtain a single dataframe of a single row, which has the columns from df1 and df2. I do:</p> <pre><code>i = 3 j = 2 df_curr = pd.concat([df1.iloc[[i], :], df2.iloc[[j], :]], axis=1) </code></pre> <p>However, what I get instead is a dataframe with two rows, and with NaNs:</p> <pre><code>print(df_curr) # A B C D E F # 3 1.0 2013-01-02 NaN NaN NaN NaN # 2 NaN NaT 1.0 3 test foo </code></pre> <p>Instead, I need</p> <pre><code> A B C D E F # 1.0 2013-01-02 1.0 3 test foo </code></pre> <p>How can I get it?</p>
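The NaNs come from `concat` aligning on the index: rows `i` and `j` carry different labels, so each contributes its own output row. Resetting both one-row slices to a common label before concatenating appears to do it; a sketch with small made-up frames:

```python
import pandas as pd

df1 = pd.DataFrame({"A": [1.0, 2.0, 3.0, 4.0], "B": list("wxyz")})
df2 = pd.DataFrame({"C": [10, 20, 30, 40], "D": list("abcd")})

i, j = 3, 2
# give both slices the same fresh index label (0) so axis=1 concat
# lines them up into a single row instead of two misaligned ones
row = pd.concat(
    [df1.iloc[[i]].reset_index(drop=True),
     df2.iloc[[j]].reset_index(drop=True)],
    axis=1,
)
```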
<python><pandas><dataframe><concatenation>
2024-07-01 17:26:27
1
5,726
DeltaIV
78,693,316
25,874,132
how do I find the dimension of the span of the intersection/union of two null spaces of different sizes of matrices using numpy/scipy?
<p>I need to find $dim(span(H\cap G))$ and $dim(span(H\cup G))$ where H and G are defined the following way:</p> <p><a href="https://i.sstatic.net/e8HMmqZv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e8HMmqZv.png" alt="enter image description here" /></a></p> <p>I have no idea how to find the intersection/union of their null spaces and after that, I don't know how to find the span and dim.</p> <p>matrix A has sizes of 5X18, and matrix B - 8X18.</p> <p>This is what I tried using the null_space of Scipy; it seems like it does find the null spaces but fails in the intersection part, and I didn't find a dedicated intersection/union function.</p> <pre><code>import numpy as np import sympy as sp from sympy import * from scipy.linalg import null_space A = np.matrix([[2 ,0 ,3 ,-1 ,2 ,-1 ,6 ,-4 ,7 ,8 ,-1 ,-4 ,7 ,0 ,0 ,0 ,0 ,7 ], [-6 ,-7 ,-9 ,10 ,1 ,10 ,-4 ,12 ,-7 ,4 ,-4 ,5 ,-7 ,0 ,0 ,0 ,7 ,0 ], [10 ,0 ,8 ,-12 ,-4 ,-5 ,-5 ,-13 ,0 ,5 ,2 ,1 ,0 ,0 ,0 ,7 ,0 ,0], [8 ,0 ,5 ,-4 ,1 ,-4 ,3 ,-9 ,7 ,11 ,3 ,-2 ,7 ,0 ,7 ,0 ,0 ,0 ], [-9 ,-7 ,-3 ,8 ,5 ,8 ,1 ,11 ,0 ,-1 ,1 ,-3 ,0 ,7 ,0 ,0 ,0 ,0 ]]) B = np.matrix([[3 ,2 ,3 ,-1 ,5 ,-5 ,2 ,0 ,-3 ,5 ,0 ,0 ,0 ,0 ,0 ,0 ,0 ,2 ], [39 ,26 ,27 ,-13 ,29 ,-25 ,-4 ,-2 ,-7 ,21 ,0 ,0 ,0 ,0 ,0 ,0 ,32 ,0 ], [-5 ,18 ,31 ,-9 ,25 ,-5 ,12 ,-26 ,5 ,17 ,0 ,0 ,0 ,0 ,0 ,32 ,0 ,0 ], [25 ,6 ,21 ,-3 ,19 ,-23 ,4 ,-14 ,-9 ,27 ,0 ,0 ,0 ,0 ,16 ,0 ,0 ,0 ], [-3 ,-6 ,-3 ,1 ,-5 ,1 ,-4 ,2 ,3 ,-1 ,0 ,0 ,0 ,4 ,0 ,0 ,0 ,0 ], [-47 ,-10 ,-35 ,5 ,-53 ,17 ,4 ,-14 ,79 ,-13 ,0 ,0 ,32 ,0 ,0 ,0 ,0 ,0 ], [67 ,66 ,71 ,-33 ,81 ,-61 ,76 ,-10 ,-67 ,41 ,0 ,32 ,0 ,0 ,0 ,0 ,0 ,0 ], [59 ,18 ,31 ,-41 ,57 ,-37 ,12 ,6 ,-59 ,49 ,32 ,0 ,0 ,0 ,0 ,0 ,0 ,0 ]]) An = null_space(A) Bn = null_space(B) intersect = An[np.where(An==Bn)] print(intersect) </code></pre>
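Two standard linear-algebra identities do the work here, so no element-wise comparison of the bases is needed: a vector lies in N(A) ∩ N(B) exactly when it is killed by the stacked matrix, so dim(N(A) ∩ N(B)) = dim N([A; B]); and span(N(A) ∪ N(B)) is the column span of the two bases placed side by side, so its dimension is the rank of that block. A sketch on small toy matrices (the question's 5×18 A and 8×18 B would drop straight in):

```python
import numpy as np

def null_basis(M, tol=1e-10):
    # orthonormal null-space basis via SVD
    # (same result as scipy.linalg.null_space)
    _, s, vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vh[rank:].conj().T

# toy stand-ins: N(A) = span(e3, e4), N(B) = span(e1, e4)
A = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])
B = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.]])

# N(A) ∩ N(B) = N([A; B]): killed by both <=> killed by the stack
dim_intersection = null_basis(np.vstack([A, B])).shape[1]

# span(N(A) ∪ N(B)) = column span of the two bases side by side
dim_union_span = np.linalg.matrix_rank(
    np.hstack([null_basis(A), null_basis(B)])
)
```

For the toy matrices the intersection is span(e4) (dimension 1) and the union spans e1, e3, e4 (dimension 3). The `An[np.where(An==Bn)]` attempt can't work because the two bases are different orthonormal coordinate systems, so matching entries numerically is meaningless.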
<python><numpy><scipy><linear-algebra>
2024-07-01 16:24:25
1
314
Nate3384
78,693,142
6,622,697
How to express the reverse of a Many-to-One in Django
<p>I realized that Django has Many-to-One and not One-to-Many, but how do you express the opposite relationship?</p> <p>I have a table which has 2 foreign keys to another table. The table Validation_Run has 2 foreign keys to Calibration_Run, and Django complains that the reverse relationship has a conflict:</p> <p><code>calibration.ValidationRun.calibration_run_pk: (fields.E304) Reverse accessor 'CalibrationRun.validationrun_set' for 'calibration.ValidationRun.calibration_run_pk' clashes with reverse accessor for 'calibration.ValidationRun.calibration_run_pk_tune_parameters'. HINT: Add or change a related_name argument to the definition for 'calibration.ValidationRun.calibration_run_pk' or 'calibration.ValidationRun.calibration_run_pk_tune_parameters'.</code></p> <p>How do I define the reverse relationship (from Calibration_Run to Validation_Run), so there is no conflict?</p>
<python><django><django-models><many-to-one>
2024-07-01 15:46:17
1
1,348
Peter Kronenberg
78,693,106
3,688,293
Converting between two sets of constants
<p>I have two enums <code>NAME</code> and <code>ALIAS</code> which are guaranteed to have the same number of constants, and I need a way to convert each constant from <code>NAME</code> to its corresponding one from <code>ALIAS</code>, and vice-versa. For example:</p> <pre><code>def name_to_alias(name): if name == Name.PAUL: return Alias.STARCHILD elif name == Name.GENE: return Alias.DEMON ... def alias_to_name(alias): if alias == Alias.STARCHILD: return Name.PAUL elif alias == Alias.DEMON: return Name.GENE ... </code></pre> <p>I don't wish to maintain two functions (or dicts) like these. Ideally, I'd have the enum mappings in a single data structure which I can access from both conversion functions.</p> <p>I'm thinking of something like:</p> <pre><code>mappings = { Name.PAUL: Alias.STARCHILD Name.GENE: Alias.DEMON ... } </code></pre> <p>This would work for converting from <code>Name</code> to <code>Alias</code>, but the opposite may have issues (what happens if I make a copy-paste error and end up with two dict keys with the same value?). Is there an easy and safe way to achieve this?</p>
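One possible sketch: keep the forward dict as the single source of truth, derive the reverse dict from it, and assert injectivity at import time so a copy-paste duplicate fails loudly rather than silently losing an entry (enum members here are illustrative):

```python
from enum import Enum

class Name(Enum):
    PAUL = 1
    GENE = 2

class Alias(Enum):
    STARCHILD = 1
    DEMON = 2

NAME_TO_ALIAS = {
    Name.PAUL: Alias.STARCHILD,
    Name.GENE: Alias.DEMON,
}

# a duplicate alias value would collapse two keys in the reverse map,
# so check the sizes match before anything uses the tables
ALIAS_TO_NAME = {alias: name for name, alias in NAME_TO_ALIAS.items()}
assert len(ALIAS_TO_NAME) == len(NAME_TO_ALIAS), "duplicate alias in mapping"
assert len(NAME_TO_ALIAS) == len(Name), "unmapped Name member"

def name_to_alias(name: Name) -> Alias:
    return NAME_TO_ALIAS[name]

def alias_to_name(alias: Alias) -> Name:
    return ALIAS_TO_NAME[alias]
```

Both conversion functions stay one line each, and the only thing ever edited by hand is the single forward dict.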
<python><dictionary>
2024-07-01 15:38:27
1
989
Martin
78,693,026
2,071,807
Django subquery with static VALUES expression
<p>Is it possible using the Django ORM to write a subquery which SELECTs from a set of fixed values?</p> <pre class="lang-sql prettyprint-override"><code>SELECT id, name, ( SELECT new_ages.age FROM (VALUES ('homer', 35), ('marge', 34)) AS new_ages(name, age) WHERE new_ages.name = ages.name ) AS new_age FROM ages ; </code></pre> <p>The output from such a query might be something like:</p> <pre><code>person_id name age 1 homer 35 42 marge 34 99 bart null </code></pre> <p>I know that the Django code would look something like this:</p> <pre class="lang-py prettyprint-override"><code>from django.db.models import OuterRef, Subquery from my_app.models import Person Person.objects.annotate(age=Subquery(...(name=OuterRef(&quot;name&quot;))) </code></pre> <p>But what goes in the <code>...</code>?</p>
<python><django>
2024-07-01 15:18:09
1
79,775
LondonRob
78,692,959
20,591,261
Polars truncates decimals
<p>I'm trying to truncate floating point numbers in my DataFrame to a desired number of decimal places. I've found that this can be done using Pandas and NumPy <a href="https://stackoverflow.com/questions/56780561/pandas-dataframe-how-to-cut-off-float-decimal-points-without-rounding">here</a>, but I've also seen that it might be possible with <code>polars.Config.set_float_precision</code>.</p> <p>Below is my current approach, but I think I might be taking extra steps.</p> <pre><code>import polars as pl data = { &quot;name&quot;: [&quot;Alice&quot;, &quot;Bob&quot;, &quot;Charlie&quot;], &quot;grade&quot;: [90.23456, 80.98765, 85.12345], } df = pl.DataFrame(data) ( df # Convert to string .with_columns( pl.col(&quot;grade&quot;).map_elements( lambda x: f&quot;{x:.5f}&quot;, return_dtype=pl.String ).alias(&quot;formatted_grade&quot;) ) # Slice to get desired decimals .with_columns( pl.col(&quot;formatted_grade&quot;).str.slice(0, length = 4) ) # Convert back to Float .with_columns( pl.col(&quot;formatted_grade&quot;).cast(pl.Float64) ) ) </code></pre>
<python><dataframe><precision><python-polars>
2024-07-01 15:04:30
2
1,195
Simon
78,692,950
5,049,813
Why is no warning thrown for indexing a Series of values with a bool Series that's too long?
<p>I have the following code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd series_source = pd.Series([1, 2, 3, 4], dtype=int) normal_index = pd.Series([True, False, True, True], dtype=bool) big_index = pd.Series([True, False, True, True, False, True], dtype=bool) # Both indexes give back: pd.Series([1, 2, 3, 4], dtype=int) # no warnings are raised! assert (series_source[normal_index] == series_source[big_index]).all() df_source = pd.DataFrame( [ [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16] ] ) # no warning - works as expected: grabs rows 0, 2, and 3 df_normal_result = df_source[normal_index] # UserWarning: Boolean Series key will be reindexed to match DataFrame index. # (but still runs) df_big_result = df_source[big_index] # passes - they are equivalent assert df_normal_result.equals(df_big_result) print(&quot;Complete&quot;) </code></pre> <p><strong>Why is it that indexing the <code>series_source</code> with the <code>big_index</code> doesn't raise a warning, even though the big index has more values than the source?</strong> What is pandas doing under the hood in order to do the Series indexing?</p> <p>(Contrast this to indexing the <code>df_source</code>, where an explicit warning is raised that <code>big_index</code> needs to be re-indexed in order for the operation to work.)</p> <p>In <a href="https://pandas.pydata.org/docs/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer">the indexing docs</a>, it claims that:</p> <blockquote> <p>Using a boolean vector to index a Series works exactly as in a NumPy ndarray</p> </blockquote> <p>However, if I do</p> <pre><code>import numpy as np a = np.array([1, 2, 3, 4, 5]) b = np.array([True, False, True, True, False]) c = np.array([True, False, True, True, False, True, True]) # returns an ndarray of [1,3, 4] as expected print(a[b]) # raises IndexError: boolean index did not match indexed array along axis 0; # size of axis is 5 but size of corresponding boolean axis is 7 print(a[c]) </code></pre> <p>So it does not seem that this functionality matches Numpy as the docs claim. What's going on?</p> <p>(My versions are <code>pandas==2.2.2</code> and <code>numpy==2.0.0</code>.)</p>
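If it helps, the Series path appears to align the boolean mask to the series' index by *label* before indexing (pandas' internal `check_bool_indexer`), so the mask's extra labels simply have no partner and are dropped silently; no length check ever fires. This is based on reading pandas internals and may shift between versions, but the observable behaviour can be sketched as:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])
big_mask = pd.Series([True, False, True, True, False, True])

# the mask is first reindexed to s.index (labels 0..3); its labels 4 and 5
# are discarded, leaving True at 0, 2, 3 - identical to the shorter mask
result = s[big_mask]
```

The DataFrame branch performs the same reindex but goes through a different code path that emits the `UserWarning`; and if the mask were *missing* one of the series' labels, the Series path would raise an `IndexingError` ("Unalignable boolean Series...") rather than proceed. So the "exactly as in a NumPy ndarray" wording in the docs only holds for masks whose index already matches.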
<python><pandas><dataframe><indexing><series>
2024-07-01 15:02:53
2
5,220
Pro Q
78,692,910
6,227,035
Pymongo: automatic IP
<p>I am writing an application using Pymongo to access a database on MongoDB Atlas using some basic syntax such as:</p> <pre><code>import pymongo as pm url = &quot;mongodb+srv://user:password@clusterName.ecc..&quot; client = pm.MongoClient(url) db = client.get_database('name_db') </code></pre> <p>My problem is that every time I connect from a different IP I need to log in manually to the MongoDB Atlas panel and add the new IP. Is there a pymongo function able to add the user IP automatically? Sorry if the question is trivial, I am still new to the MongoDB world. Thank you!</p>
<python><mongodb><ip>
2024-07-01 14:54:53
1
1,974
Sim81
78,692,863
20,460,267
Python Function Losing List Element After Recursion?
<p>I've just started learning Python and I'm getting a weird error in an exercise program I have just written. The program takes in a list of tuples (freq_i, value_i) representing a histogram of distinct value_is, and is supposed to output a list of list of first permutations which I could then permute in turn to get all possible unique combinations of k chosen elements from the original list from which the input histogram was generated. Sorry, if that's not a clear explanation. My test case is as follows. Hopefully this will help clarify what it is supposed to do.</p> <pre><code>Original list = [3,4,4,7] Input histogram = [(1,3), (2,4), (1,7)] k = 2 Correct answer = [[3, 4], [4, 4], [3, 7], [4, 7]] My python program answer = [[3, 4], [4, 4], [3, 7]] </code></pre> <p>Note that this function only gives me 'base' permutations as it outputs the elements preserving the input order. I planned to use another function to permute the lists in the returned lists of lists to get [4, 3], [7, 3] and [7, 4] which would then be all possible unique ways of making a list of two elements from the original list, selecting each element at most once.</p> <pre><code>def acaddlast(listoflists, newelement, freq): if (freq &lt;= 0): return listoflists newlist = list() for z in range(freq): newlist.append(newelement) l = len(listoflists) for z in range(l): listoflists[z] += newlist return listoflists def allcombs2(x, k): #print(len(x),k) output = list() if (len(x) == 0): return output if (k == 0): return output if (k == 1): for z in x: output.append([z[1]]) return output countofelements = 0 for z in x: countofelements += z[0] if (k &gt; countofelements): return output if (len(x) == 1): return acaddlast([[]], x[0][1], k) if (k == countofelements): onelist = list() for z in x: for zz in range(z[0]): onelist.append(z[1]) output.append(onelist) return output # 1 &lt; k &lt; countofelements lastelementx = x.pop() f = min(k, lastelementx[0]) for z in range(f+1): output += acaddlast(allcombs2(x, k-z), lastelementx[1], z) if (lastelementx[0] &gt;= k): output += acaddlast([[]], lastelementx[1], k) return output output = allcombs2([(1,3), (2,4), (1, 7)], 2) print (f&quot;{len(output)} lists\n&quot;) print (output) </code></pre> <p>If I uncomment the first print statement in allcombs2() I get the following output. The first call has 3 elements in x initially, then the last element is popped off leaving 2. In the loop 'for z in range(f+1):' on this first call, f = 1, so z should have values 0 and 1, calling allcombs2() twice with x of 2 elements. The final 1 1 below should be 2 1. Somehow, x has lost one of its elements inside this loop.</p> <pre><code>Solving allcombs([(1, 3), (2, 4), (1, 7)],2) 3 2 2 2 1 2 1 1 1 0 1 1 3 lists [[3, 4], [4, 4], [3, 7]] </code></pre> <p>After pulling my hair out looking for the flaw in my Python code, I decided to manually translate it into PHP with which I'm much more familiar. I tried to maintain the same program logic and translate Python into PHP line by line so it is very near to the same code. The translation is below, and it produces the correct result.</p> <pre><code>&lt;?php function acaddlast($listoflists, $newelement, $freq) { if ($freq &lt;= 0) return $listoflists; $newlist = array(); for ($z = 0; $z &lt; $freq; $z++) { $newlist[] = $newelement; } $l = count($listoflists); for ($z = 0; $z &lt; $l; $z++) $listoflists[$z] = array_merge($listoflists[$z], $newlist); return $listoflists; } function allcombs2($x, $k) { // Returns array of all arrays of length $k where each value occurs at most its frequency times preserving input order. $xsize = count($x); //echo(&quot;($xsize, $k) &quot;); $output = array(); if ($xsize == 0) return $output; if ($k == 0) return $output; if ($k == 1) { foreach($x as $key =&gt; $value) $output[] = array($value[1]); return $output; } $countofelements = 0; foreach($x as $key =&gt; $value) $countofelements += $value[0]; if ($k &gt; $countofelements) return $output; if (count($x) == 1) return acaddlast(array(array()), $x[0][1], $k); if ($k == $countofelements) { $onelist = array(); foreach($x as $key =&gt; $value) { for ($zz = 0; $zz &lt; $value[0]; $zz++) $onelist[] = $value[1]; } return array($onelist); } // 1 &lt; k &lt; countofelements $lastelementx = $x[$xsize-1]; unset($x[$xsize-1]); if ($k &lt; $lastelementx[0]) { $f = $k; } else { $f = $lastelementx[0]; } for ($z = 0; $z &lt;= $f; $z++) $output = array_merge($output, acaddlast(allcombs2($x, $k-$z), $lastelementx[1], $z)); if ($lastelementx[0] &gt;= $k) $output = array_merge($output, acaddlast(array(array()), $lastelementx[1], $k)); return $output; } function printlistoflists($array) { echo &quot;[&quot;; $first2 = true; $listsonline = 0; $maxlistsperline = 20; foreach($array as $key =&gt; $value) { if(!$first2) echo &quot;, &quot;; if ($listsonline &gt;= $maxlistsperline) { echo &quot;\n&quot;; $listsonline = 0; } $listsonline++; echo &quot;[&quot;; $first = true; foreach($value as $key2 =&gt; $value2) { if(!$first) echo &quot;, &quot;; echo $value2; $first = false; } echo&quot;]&quot;; $first2 = false; } echo&quot;]\n&quot;; } $input = [[1,3], [2, 4], [1, 7]]; $listlen = 2; echo &quot;Choosing $listlen elements from histogram\n&quot;; printlistoflists($input); $output = allcombs2($input, $listlen); echo count($output).&quot; lists\n&quot;; printlistoflists($output); ?&gt; </code></pre> <p>Output is below if you uncomment the first echo in allcombs2().</p> <pre><code>Choosing 2 elements from histogram [[1, 3], [2, 4], [1, 7]] (3, 2) (2, 2) (1, 2) (1, 1) (1, 0) (2, 1) 4 lists [[3, 4], [4, 4], [3, 7], [4, 7]] </code></pre> <p>I've written some smaller functions to try to create an MRE but haven't yet found a way to get the same effect with something smaller. Any help or suggestions are most welcome, thanks.</p> <p>I'm using Python version 3.8.2 on Ubuntu 20.04.1 LTS Desktop.</p>
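The likely culprit is `x.pop()`: Python passes the list by reference, so `pop` shortens the very list that sibling iterations of `for z in range(f+1):` (and outer recursive frames) are still using, whereas PHP's `unset($x[...])` operates on a by-value copy of the array, which is why the line-by-line PHP port behaves differently. A minimal demonstration of the pitfall plus the non-mutating fix:

```python
def drop_last_shared(x):
    x.pop()        # mutates the caller's list in place
    return x

def drop_last_copy(x):
    return x[:-1]  # new list; the caller's list is untouched

data = [(1, 3), (2, 4), (1, 7)]
trimmed = drop_last_copy(data)
# `data` still has all three tuples after the call; with drop_last_shared
# it would have lost its last element.

# In allcombs2, replacing
#     lastelementx = x.pop()
# with
#     lastelementx = x[-1]
#     x = x[:-1]
# keeps every loop iteration (and the caller) seeing the same x.
```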
<python><recursion>
2024-07-01 14:45:26
1
2,007
Simon Goater
78,692,641
9,749,118
Hide injected parameter by a decorator from function parameter hinting in PyCharm
<p>I have a function that has a parameter injected by a decorator like so:</p> <pre><code>def decorator(func): @wraps(func) def wrapper(*args, **kwargs): return func('hello', *args, **kwargs) return wrapper @decorator def func(hello_string: str, a: int, b: int): &lt;some code&gt; </code></pre> <p>and you call the function like this:</p> <pre><code>func(1,2) </code></pre> <p>I want PyCharm not to show me the existence of hello_string as a parameter; I want to show the user just the parameters that they need to put in the function when calling it (a and b).</p> <p><a href="https://i.sstatic.net/2TfT7zM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2TfT7zM6.png" alt="enter image description here" /></a></p> <p>Is there a way to do it, or are decorators not a good solution for this type of injection? I tried using context managers inside the function as well, but they cause the whole function to indent, and using multiple context managers makes the method look messy, while using multiple decorators is clean.</p> <p>Thanks</p>
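One workaround is to publish a trimmed `__signature__` on the wrapper: `inspect.signature()` (which IDEs and `help()` typically consult) honours that attribute, so the injected first parameter disappears from introspection. Whether a given PyCharm build picks it up over its own static analysis is not guaranteed, but it reliably fixes the runtime-visible signature. A sketch:

```python
import functools
import inspect

def decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func('hello', *args, **kwargs)
    # advertise a signature without the injected first parameter;
    # setting __signature__ also stops inspect from unwrapping to func
    sig = inspect.signature(func)
    wrapper.__signature__ = sig.replace(
        parameters=list(sig.parameters.values())[1:]
    )
    return wrapper

@decorator
def func(hello_string: str, a: int, b: int):
    return f"{hello_string}:{a}:{b}"
```

After this, `inspect.signature(func)` reports only `(a: int, b: int)` while `func(1, 2)` still receives the injected string.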
<python><pycharm><decorator>
2024-07-01 13:59:17
1
537
INTODAN
78,692,458
7,563,454
Extract the closest two numbers that multiply to create a given number
<p>I made a CPU based raytracer in PyGame which uses a tile per thread to render each section of the screen. Currently I divide the screen vertically between lines, this doesn't give me the best distribution: I want to divide threads in even boxes covering both the X and Y directions. For example: If my resolution is <code>x = 640, y = 320</code> and I have 4 threads, I want a list of 4 boxes representing tile boundaries in the form <code>(x_min, y_min, x_max, y_max)</code>, in this case the result being <code>[(0, 0, 320, 160), (320, 0, 640, 160), (0, 160, 320, 320), (320, 160, 640, 320)]</code>.</p> <p>Problem is I don't see how to automatically divide the number of threads into a 2D grid: I want to extract the closest two whole numbers that multiply to match the thread setting. If this number can't be divided evenly, jump to the closest one that can... for instance no two integers can multiply to create 7, use 6 or 8 instead. I tried <code>math.sqrt</code> but it only works for perfectly divisible numbers like 16, even when rounding that it won't give accurate results for values like 32. What is the simplest solution?</p> <p>Examples: <code>4 = 2 * 2</code>, <code>6 = 2 * 3</code>, <code>8 = 2 * 4</code>, <code>9 = 3 * 3</code>, <code>16 = 4 * 4</code>, <code>24 = 4 * 6</code>, <code>32 = 4 * 8</code>, <code>64 = 8 * 8</code>.</p>
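One possible scheme (the fallback rule for prime-ish counts is an assumption matching the "7 → use 6 or 8" example, not the only reasonable choice): walk divisors downward from `isqrt(n)` to get the most balanced factor pair, and if the count only factors as 1×n, try the two neighbours and keep whichever gives the squarer grid. The integer tile arithmetic below also handles resolutions that don't divide evenly.

```python
import math

def factor_pair(n):
    """Most balanced (a, b) with a <= b and a * b == n."""
    a = next(d for d in range(math.isqrt(n), 0, -1) if n % d == 0)
    return a, n // a

def grid_shape(threads):
    """Factor pair for the tile grid; counts like 7 that only factor as
    1*n fall back to whichever neighbour yields a squarer grid."""
    a, b = factor_pair(threads)
    if a == 1 and threads > 3:
        for n in (threads - 1, threads + 1):
            na, nb = factor_pair(n)
            if nb - na < b - a:  # smaller spread = squarer grid
                a, b = na, nb
    return a, b

def tile_boxes(width, height, threads):
    """(x_min, y_min, x_max, y_max) per tile; the larger factor goes to
    the longer screen axis."""
    a, b = grid_shape(threads)
    cols, rows = (b, a) if width >= height else (a, b)
    return [(width * c // cols, height * r // rows,
             width * (c + 1) // cols, height * (r + 1) // rows)
            for r in range(rows) for c in range(cols)]
```

`math.sqrt` failed for counts like 32 because rounding the root doesn't land on a divisor; scanning down from `isqrt(n)` always finds the balanced pair (32 → 4×8, 24 → 4×6).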
<python><algorithm><math><2d>
2024-07-01 13:14:32
3
1,161
MirceaKitsune
78,692,403
108,390
ElementTree.find() seems to miss recently appended elements
<p>I can't wrap my head around this one. Why is find() not finding the appended element when it can find the exact same element when it is loaded from an XML string?</p> <p>The following function creates an element and adds subelements to it.</p> <pre><code>def create_element(): d_004 = ET.Element(&quot;marcx:datafield&quot;, {&quot;ind1&quot;: &quot;0&quot;, &quot;ind2&quot;: &quot;0&quot;, &quot;tag&quot;: &quot;004&quot;}) d_004_a = ET.Element(&quot;marcx:subfield&quot;, {&quot;code&quot;: &quot;a&quot;}) d_004_a.text = &quot;e&quot; d_004.append(d_004_a) d_004_r = ET.Element(&quot;marcx:subfield&quot;, {&quot;code&quot;: &quot;r&quot;}) d_004_r.text = &quot;n&quot; d_004.append(d_004_r) return d_004 </code></pre> <p>When I call that and then try to find() it, it does not find it, so another element is appended:</p> <pre><code>def test_append_twice(): namespaces = {&quot;marcx&quot;: &quot;info:lc/xmlns/marcxchange-v1&quot;} my_element = ET.Element(&quot;MyElement&quot;) my_element.append(copy.deepcopy(d_004)) added_element = my_element.find('./marcx:datafield[@tag=&quot;004&quot;]', namespaces) if not added_element: my_element.append(create_element()) assert len(list(my_element.iter())) == 4 # Returns 7 instead of 4 </code></pre> <p>I wrote a test (below) for the same thing when I read the thing from a string. Now find() works as expected.</p> <pre><code>def test_find_from_string(): namespaces = {&quot;marcx&quot;: &quot;info:lc/xmlns/marcxchange-v1&quot;} element_str= &quot;&quot;&quot;&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;MyElement&gt; &lt;marcx:datafield xmlns:marcx=&quot;info:lc/xmlns/marcxchange-v1&quot; ind1=&quot;0&quot; ind2=&quot;0&quot; tag=&quot;004&quot;&gt; &lt;marcx:subfield code=&quot;a&quot;&gt;e&lt;/marcx:subfield&gt; &lt;marcx:subfield code=&quot;r&quot;&gt;n&lt;/marcx:subfield&gt; &lt;/marcx:datafield&gt; &lt;/MyElement&gt;&quot;&quot;&quot; my_element= ET.ElementTree(ET.fromstring(element_str)) added_element = my_element.find('./marcx:datafield[@tag=&quot;004&quot;]', namespaces) if not added_element: my_element.append(create_element()) assert len(list(my_element.iter())) == 4 </code></pre> <p>What is going on? What am I missing? Does it have to do with namespaces somehow?</p>
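It does come down to namespaces: `ET.Element("marcx:datafield")` stores that literal string (colon and all) as the tag, while a namespace-aware `find()` searches for the Clark-notation tag `{info:lc/xmlns/marcxchange-v1}datafield`. Parsing from a string resolves the prefix, which is why that path works. Building the elements in Clark notation fixes the manual path; a sketch:

```python
import xml.etree.ElementTree as ET

MARCX = "info:lc/xmlns/marcxchange-v1"
namespaces = {"marcx": MARCX}

def create_element():
    # {uri}tag (Clark notation) is what a namespace-aware find() matches;
    # the literal string "marcx:datafield" never will
    d_004 = ET.Element(f"{{{MARCX}}}datafield",
                       {"ind1": "0", "ind2": "0", "tag": "004"})
    ET.SubElement(d_004, f"{{{MARCX}}}subfield", {"code": "a"}).text = "e"
    ET.SubElement(d_004, f"{{{MARCX}}}subfield", {"code": "r"}).text = "n"
    return d_004

my_element = ET.Element("MyElement")
my_element.append(create_element())
found = my_element.find('./marcx:datafield[@tag="004"]', namespaces)

# Second pitfall: an Element is falsy when it has no children, so guard
# with `is None` rather than `if not found:`
if found is None:
    my_element.append(create_element())
```

With both fixes, the element is found on the second pass and nothing is appended twice.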
<python><elementtree>
2024-07-01 13:02:35
1
1,393
Fontanka16