| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars) |
|---|---|---|---|---|---|---|---|---|
78,510,478
| 673,642
|
Can regular python packages contain namespace packages?
|
<p>I have read <a href="https://peps.python.org/pep-0420/" rel="nofollow noreferrer">https://peps.python.org/pep-0420/</a> but it does not explicitly declare that namespace packages are strictly only allowed on top-level. All examples show namespace packages as top-level packages.</p>
<p>Would this work (package1 and package2 are distributed in separate packages)?</p>
<pre><code>package1/
+-- foo/
+-- __init__.py
+-- bar/
+-- baz.py
package2/
+-- foo/
+-- bar/
+-- zoq.py
</code></pre>
<p>In the above example, "foo" is a regular package and "foo/bar" is a namespace package. I assume that the answer should be "no" since "foo" would have 2 different roles, but I could be wrong.</p>
<p>(A reason for this question would be if <code>foo/__init__.py</code> is in extensive use, and converting it to a namespace package would cause a major headache.)</p>
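A quick experiment one could run (a sketch, not part of the question; directory names copied from the example above, behaviour per my reading of PEP 420, which gives regular packages precedence over namespace portions):

```python
import importlib
import os
import sys
import tempfile

# Recreate the question's layout in a temporary directory
root = tempfile.mkdtemp()
for sub in ("package1/foo/bar", "package2/foo/bar"):
    os.makedirs(os.path.join(root, *sub.split("/")))
open(os.path.join(root, "package1", "foo", "__init__.py"), "w").close()
open(os.path.join(root, "package1", "foo", "bar", "baz.py"), "w").close()
open(os.path.join(root, "package2", "foo", "bar", "zoq.py"), "w").close()

sys.path[:0] = [os.path.join(root, "package1"), os.path.join(root, "package2")]

import foo  # the regular package in package1 wins over any namespace portion
print(foo.__path__)  # only package1/foo

importlib.import_module("foo.bar.baz")      # works: found under package1/foo/bar
try:
    importlib.import_module("foo.bar.zoq")  # package2/foo/bar is shadowed
    print("zoq importable")
except ModuleNotFoundError:
    print("zoq not importable")
```

If this reading is right, `foo/bar` does behave as a namespace package, but only within `foo.__path__`, so `package2`'s portion never contributes.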
|
<python><package><namespaces>
|
2024-05-21 08:14:35
| 1
| 383
|
Gunnar
|
78,510,155
| 11,922,765
|
Python Dataframe avoid non-NaN values dropping during <> operations
|
<p><strong>My code:</strong></p>
<pre><code>xdf = pd.DataFrame(data={'A':[-10,np.nan,-2.2],'B':[np.nan,2,1.5],'C':[3,1,-0.3]},index=['2023-05-13 08:40:00','2023-05-13 08:41:00','2023-05-13 08:42:00'])
xdf =
A B C
2023-05-13 08:40:00 -10.0 NaN 3.0
2023-05-13 08:41:00 NaN 2.0 1.0
2023-05-13 08:42:00 -2.2 1.5 -0.3
</code></pre>
<p>Consider only values below 4.0 and above -4.0 in each row of the dataframe</p>
<pre><code>print(xdf[((xdf<4.0).all(axis=1))&((xdf>-4.0).all(axis=1))])
</code></pre>
<p><strong>Present output:</strong></p>
<pre><code> A B C
2023-05-13 08:42:00 -2.2 1.5 -0.3
</code></pre>
<p><strong>Expected output:</strong> My code above drops a row whenever any column holds a NaN, even though the other columns satisfy the filter condition. Instead, I want NaN cells to be ignored, so a row is kept when all of its non-NaN values pass the <> comparison.</p>
<pre><code> A B C
2023-05-13 08:41:00 NaN 2.0 1.0
2023-05-13 08:42:00 -2.2 1.5 -0.3
</code></pre>
<p><strong>Edit:</strong></p>
<p><strong>One working solution:</strong></p>
<pre><code>print(xdf[((xdf.fillna(True)<4.0).all(axis=1))&((xdf.fillna(True)>-4.0).all(axis=1))])
</code></pre>
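An alternative sketch (not from the post) that avoids `fillna` entirely: build the boolean mask once and OR it with `isna()`, so NaN cells count as passing:

```python
import numpy as np
import pandas as pd

xdf = pd.DataFrame(
    data={'A': [-10, np.nan, -2.2], 'B': [np.nan, 2, 1.5], 'C': [3, 1, -0.3]},
    index=['2023-05-13 08:40:00', '2023-05-13 08:41:00', '2023-05-13 08:42:00'])

# a cell passes if it lies in (-4, 4) OR is NaN; a row is kept
# only when every cell passes
mask = ((xdf < 4.0) & (xdf > -4.0)) | xdf.isna()
result = xdf[mask.all(axis=1)]
print(result)
```

This keeps the 08:41:00 row (its NaN is ignored) while still dropping 08:40:00, whose -10.0 genuinely fails the filter.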
|
<python><pandas><dataframe><numpy><logical-operators>
|
2024-05-21 07:09:33
| 1
| 4,702
|
Mainland
|
78,510,083
| 671,013
|
Hint IDE with column names (and types) of a pandas dataframe
|
<p>Assume you have a <code>pandas</code> dataframe <code>df</code> that has the column <code>foo</code>. Then, in <code>Jupyter</code> when you type <code>df.fo<TAB></code> the column name will be completed.
This works when <code>df</code> is in memory.
However, if <code>df</code> is not in memory then you can't TAB complete the column name.
Is there a way to TAB complete the column name in an IDE when <code>df</code> is not in memory?</p>
<p>More specifically, assume that I develop a function. I can give type hint to <code>df</code> so when inside the function I can get completions related to <code>pandas</code>.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
def my_function(foo: pd.DataFrame, bar: pd.DataFrame):
foo.joi<TAB>
</code></pre>
<p>Here, <code>foo.join</code> will be completed and in general I can see the available methods for <code>foo</code>.</p>
<p>My question is, what about the columns of the dataframe (and their types)? Is there a way to get completions for the columns of the dataframe when <code>df</code> is not in memory? I have in mind something like type hinting that hints the IDE about the columns of the dataframe.
Similar to how you can hint a dictionary with <code>Dict[str, Dict[int, str]]</code> and then get completions for its keys.</p>
<p>A bonus question is, how can this be done with <code>PySpark</code> dataframes?</p>
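One workaround sketch (an assumption, not an established pandas feature): declare a `DataFrame` subclass whose class-level annotations name the expected columns. The annotations exist only for the type checker/IDE; at runtime, attribute access falls through to pandas' normal column lookup:

```python
import pandas as pd

class UsersFrame(pd.DataFrame):
    # Annotations only: nothing is assigned, so runtime behaviour is
    # unchanged; an IDE/type checker reads these for completion.
    foo: pd.Series
    bar: pd.Series

def my_function(df: UsersFrame) -> int:
    # an IDE can now complete df.foo from the annotation
    return int(df.foo.sum())

# at runtime, any DataFrame with the right columns still works
print(my_function(pd.DataFrame({"foo": [1, 2], "bar": [3, 4]})))
```

Libraries such as pandera take this idea further with validated schema models, though how much column completion an IDE derives from them varies.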
|
<python><pandas><dataframe>
|
2024-05-21 06:50:58
| 0
| 13,161
|
Dror
|
78,509,902
| 5,091,720
|
pandas quantile vs the Excel equivalent calculation difference
|
<p>There is probably a way to do this the problem is I don't know it. I have data sets that are often between 80 to 120 values long. I am trying to compute the 90% value for each separate data set. I was testing this out in excel and python and realized that there is a difference with the results when I thought I would get the same thing. This difference shows up for me when there are a certain number of data points such as 81, 91, 101, or 111.</p>
<p>I sorted my data smallest to largest. Example data set might be something like:</p>
<pre><code>Samples = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.15, 0.25, 0.25, 0.3, 0.44, 0.45,
0.47, 0.54, 0.58, 0.6, 0.68, 0.69, 0.74, 0.82, 0.84, 1.1, 1.1, 1.2, 1.2, 1.5,
1.7, 1.8, 1.9, 1.9, 2.3, 2.6, 3.5, 4.5, 4.8]
</code></pre>
<p><strong>My conclusion is that Excel produces the right value.</strong> The Excel calculation was given to me by someone else. The calculation:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Description</th>
<th>Calculation</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td>Samples</td>
<td>=COUNT(Samples)</td>
<td>81</td>
</tr>
<tr>
<td>0.9 * N samples</td>
<td>= 0.9 * COUNT(Samples)</td>
<td>72.9</td>
</tr>
<tr>
<td><strong>Lower</strong></td>
<td>= SMALL(Samples, ROUNDDOWN( K4, 0) )</td>
<td><strong>1.50</strong></td>
</tr>
</tbody>
</table></div>
<p>So I was told to use Excel, but come on! Python can do the same stuff, right? Here is my Python pandas code:</p>
<pre><code>import pandas as pd
# use the "Samples" list from above
myS = pd.Series(Samples)
q_lower = myS.quantile(q=0.9, interpolation='lower')
print(q_lower)
</code></pre>
<p>Pandas results = <strong>1.7</strong></p>
<p>Okay, so I thought maybe numpy (this code is just a guess):</p>
<pre><code>import numpy as np
myindex = int(np.ceil(0.9 * len(Samples))) - 1
print(myS.iloc[myindex])
</code></pre>
<p>numpy result <strong>1.7</strong></p>
<p>So why can't I get the same results with Excel and Python? I prefer Python if it can work.</p>
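The Excel formula above is not PERCENTILE; it is literally "the ROUNDDOWN(0.9·N)-th smallest value". A sketch that reproduces that rule directly (hedged: this mirrors the SMALL/ROUNDDOWN recipe shown in the table, and on the 81-value data above it should give 1.5):

```python
import math

def excel_style_quantile(values, q):
    """Mimic = SMALL(values, ROUNDDOWN(q * COUNT(values), 0))."""
    ordered = sorted(values)
    k = math.floor(q * len(ordered))   # Excel's 1-based rank
    return ordered[k - 1]              # convert to 0-based indexing

# small check on data where the answer is easy to see:
# 0.9 * 10 = 9, so the 9th smallest of 1..10 is 9
print(excel_style_quantile(range(1, 11), 0.9))
```

By contrast, pandas' `interpolation='lower'` evaluates the quantile at position (N-1)·q, which for N=81 is exactly 72 (0-based), one rank higher than Excel's floor(0.9·N)=72 as a 1-based rank (index 71). That off-by-one in convention is the whole 1.5-vs-1.7 difference.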
|
<python><arrays><pandas><excel><numpy>
|
2024-05-21 06:05:51
| 1
| 2,363
|
Shane S
|
78,509,755
| 10,452,700
|
How can I filter and retrieve specific records from big data efficiently using Python/PySpark in Google Colab?
|
<p>I'm struggling with a data engineering problem:</p>
<h3><strong>Dataset Characteristics</strong></h3>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>public dataset</th>
<th style="text-align: center;">duration</th>
<th style="text-align: center;">Total #VMs</th>
<th style="text-align: center;">Dataset size</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/Azure/AzurePublicDataset/blob/master/AzurePublicDatasetV2.md" rel="nofollow noreferrer">AzurePublicDatasetV2</a></td>
<td style="text-align: center;">30 consecutive days</td>
<td style="text-align: center;">2,695,548 (~2.6M )</td>
<td style="text-align: center;">235GB (156GB compressed) [198 files]</td>
</tr>
</tbody>
</table></div>
<p><img src="https://i.imgur.com/q9jWDE9.png" alt="img" /></p>
<h3><strong>Situation</strong></h3>
<p>I need to read 195 <code>gzip</code> files and filter records for some <code>vmid</code>s of interest from time-series data. Because of how the data was collected (the earliest records for all <code>vmid</code>s sit in the early files and the latest records in the last files), I must read every file and filter on the <code>vmid_lists</code> of interest. The problem is that I'm doing the analysis in <a href="https://colab.research.google.com/" rel="nofollow noreferrer">Google Colab</a>, which has limited compute resources: it crashes if I naively open each file, decompress it, and concat the dataframes with <a href="/questions/tagged/pandas" class="s-tag post-tag" title="show questions tagged 'pandas'" aria-label="show questions tagged 'pandas'" rel="tag" aria-labelledby="tag-pandas-tooltip-container" data-tag-menu-origin="Unknown">pandas</a>. That approach needs more resources than I currently have access to.</p>
<ul>
<li>each file is around ~800MB and contains 1M rows.</li>
</ul>
<pre class="lang-none prettyprint-override"><code>| timestamp | vmid | mincpu | maxcpu | avgcpu |
|:-----------:|:-----------------------------------------------------------------|---------:|---------:|---------:|
| 0 | yNf/R3X8fyXkOJm3ihXQcT0F52a8cDWPPRzTT6QFW8N+1QPfeKR5//6xyX0VYn7X | 19.8984 | 24.9964 | 22.6307 |
</code></pre>
<p>To the best of my knowledge, there are newer packages for reading/loading <a href="/questions/tagged/bigdata" class="s-tag post-tag" title="show questions tagged 'bigdata'" aria-label="show questions tagged 'bigdata'" rel="tag" aria-labelledby="tag-bigdata-tooltip-container" data-tag-menu-origin="Unknown">bigdata</a> (besides Spark):</p>
<ul>
<li><a href="https://www.dask.org/" rel="nofollow noreferrer">dask</a></li>
<li><a href="https://pola.rs/" rel="nofollow noreferrer">polars</a></li>
</ul>
<p>Let's say I want to efficiently collect and filter the records of a few <code>vmid</code>s (e.g. 4 out of ~2.6M) that exist in the list. Of course, the best would be:</p>
<ul>
<li>to select those that have 30 days data continuously or</li>
<li>to collect all <code>vmid</code> data individually and store them accordingly [e.g parquet files ]</li>
</ul>
<h3><strong>try-outs</strong></h3>
<p>What I have tried so far is below which leads to the Out-of-Memory (OOM) error or crashes notebook:</p>
<pre class="lang-py prettyprint-override"><code>import dask.dataframe as dd
dfs = []
for i in range(1,196):
print(i)
df = dd.read_csv(f"https://azurecloudpublicdataset2.z19.web.core.windows.net/azurepublicdatasetv2/trace_data/vm_cpu_readings/vm_cpu_readings-file-{i}-of-195.csv.gz", blocksize=None)
df.columns = ['timestamp', 'vmid', 'mincpu', 'maxcpu', 'avgcpu']
dfs.append(df)
# Combine dataframes
combined_df = dd.concat(dfs)
# Select top 20 vmid
#top_vmid_counts = combined_df["vmid"].value_counts().head(20) # when this not feasible
# How to select continuous time data for selected vmids ---> vmid_lists = ...
vmid_lists = ["yNf/R3X8fyXkOJm3ihXQcT0F52a8cDWPPRzTT6QFW8N+1QPfeKR5//6xyX0VYn7X", #30 days data continuously
"4WstS6Ub3GzHun4Mzb6BxLldKvkEkws2SZ9tbBV3kfLzOd+QRVETcgqLjtc3mCbD",
"5f2jDjOhz6v00WonXOAuZW0uPO4OXjf5t64xYvOefcKwb4v7mOQtOZEVebAbiQq7",
"E3fjqJ4h2SLfvLl9EV6/w9uc8osF0dw9dENCHteoNRLZTp500ezV9RPfyeMdOKfu",
]
top_vmid = combined_df[combined_df["vmid"].isin(vmid_lists)]
# Compute the result
result=top_vmid.compute()
</code></pre>
<h3><strong>Assumption</strong></h3>
<p>I think considering the situation I explained, the proposed solution should have:</p>
<ul>
<li>read all <code>csv.gz</code> files 1 by 1 till the last file</li>
<li>filter (if you use PySpark, it's better to cache the table with <code>.cache()</code>)</li>
<li>concat dataframe(s)</li>
<li>store under the name of each vmids [e.g <code>.parquet</code> or <code>.csv</code> ]</li>
<li><strong>delete</strong> the <code>csv.gz</code> to avoid crashing</li>
</ul>
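A minimal pandas-only sketch of those steps (assumptions: the URL pattern from the try-out above, and that the files have no header row; chunked reading keeps only one slice of one file in memory at a time):

```python
import pandas as pd

COLS = ['timestamp', 'vmid', 'mincpu', 'maxcpu', 'avgcpu']

def filter_file(path_or_url, vmids, chunksize=200_000):
    """Stream one csv.gz and keep only rows whose vmid is in `vmids`.

    Assumes the files have no header row (hence names=/header=None);
    adjust if the real files differ.
    """
    hits = [chunk[chunk['vmid'].isin(vmids)]
            for chunk in pd.read_csv(path_or_url, names=COLS, header=None,
                                     chunksize=chunksize)]
    return pd.concat(hits, ignore_index=True)

# Driver sketch: one file at a time, matches appended to disk.
# (Hypothetical URL pattern copied from the try-out above.)
# for i in range(1, 196):
#     url = ("https://azurecloudpublicdataset2.z19.web.core.windows.net/"
#            "azurepublicdatasetv2/trace_data/vm_cpu_readings/"
#            f"vm_cpu_readings-file-{i}-of-195.csv.gz")
#     filter_file(url, vmid_lists).to_csv(f"hits-{i}.csv", index=False)
```

Because compression is inferred from the `.csv.gz` extension, each file is decompressed on the fly and discarded once its chunks are consumed, which matches the "read 1 by 1, filter, store, delete" plan above.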
<h3><strong>Question/Challenge</strong></h3>
<p>So what is the best practice for doing this task efficiently during the read/load phase, so the records of interest can be retrieved easily in Google Colab with the default set-up? I'm looking for a smart way to solve the problem without a change in <a href="https://colab.research.google.com/signup?utm_source=dialog&utm_medium=link&utm_campaign=settings_page" rel="nofollow noreferrer">Colab plan</a>.</p>
<p>Any help will be highly appreciated</p>
<hr />
<p>Colab notebook if one is interested to experiment: <a href="https://colab.research.google.com/drive/1z8KiU7G4bnf78oO3pgKIiOjxmeK_Fpb1?usp=sharing" rel="nofollow noreferrer"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" /></a></p>
<p>Potentially related posts I have found:</p>
<ul>
<li><a href="https://stackoverflow.com/q/35731855/10452700">More efficient way to select partly records from a big file in Python</a></li>
<li><a href="https://stackoverflow.com/q/76178807/10452700">How can I efficiently filter a PySpark data frame with conditions listed in the dictionary?</a></li>
<li><a href="https://stackoverflow.com/q/71726242/10452700">Efficient storage and retrieval for heirarichal data in python</a></li>
<li><a href="https://stackoverflow.com/q/73145670/10452700">how filter data where there is just one word</a></li>
<li><a href="https://stackoverflow.com/q/62855643/10452700">Make piece of code efficient for big data</a></li>
<li><a href="https://stackoverflow.com/q/78510492/10452700">Elegant way to enable random access by "month" in parquet file</a></li>
</ul>
|
<python><pyspark><google-colaboratory><dask><python-polars>
|
2024-05-21 05:14:59
| 2
| 2,056
|
Mario
|
78,509,742
| 12,291,425
|
Cannot get Azure subscriptions using Azure Python SDK
|
<p>I can get Azure subscriptons list <a href="https://stackoverflow.com/q/78505840/12291425">using REST API</a>.</p>
<p>However, when I'm switching to <a href="https://learn.microsoft.com/en-us/azure/developer/python/sdk/azure-sdk-overview" rel="nofollow noreferrer">Azure Python SDK</a>, there seems to be some problems.</p>
<p>This is the code so far:</p>
<pre class="lang-py prettyprint-override"><code>from azure.identity.aio import ClientSecretCredential
from azure.mgmt.resource import SubscriptionClient
import json
data = json.load(open("parameters.json"))
credential = ClientSecretCredential(
tenant_id=data["tenant"],
client_id=data["client_id"],
client_secret=data["client_secret"],
)
subs = SubscriptionClient(credential=credential)
l = list(subs.subscriptions.list())
print(l)
</code></pre>
<p>I use an additional <code>list</code> in the penultimate line because <code>subs.subscriptions.list()</code> returns an iterator. Despite that, the code seems pretty straightforward.</p>
<p>However, this code gives the following error:</p>
<pre><code>Traceback (most recent call last):
File "c:\Users\azureuser\Documents\GitHub\vmss-scripts\vm_create.py", line 14, in <module>
l = list(subs.subscriptions.list())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\azureuser\scoop\apps\python\current\Lib\site-packages\azure\core\paging.py", line 123, in __next__
return next(self._page_iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\azureuser\scoop\apps\python\current\Lib\site-packages\azure\core\paging.py", line 75, in __next__
self._response = self._get_next(self.continuation_token)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\azureuser\scoop\apps\python\current\Lib\site-packages\azure\mgmt\resource\subscriptions\v2022_12_01\operations\_operations.py", line 526, in get_next
pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\azureuser\scoop\apps\python\current\Lib\site-packages\azure\core\pipeline\_base.py", line 230, in run
return first_node.send(pipeline_request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\azureuser\scoop\apps\python\current\Lib\site-packages\azure\core\pipeline\_base.py", line 86, in send
response = self.next.send(request)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\azureuser\scoop\apps\python\current\Lib\site-packages\azure\core\pipeline\_base.py", line 86, in send
response = self.next.send(request)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\azureuser\scoop\apps\python\current\Lib\site-packages\azure\core\pipeline\_base.py", line 86, in send
response = self.next.send(request)
^^^^^^^^^^^^^^^^^^^^^^^
[Previous line repeated 2 more times]
File "C:\Users\azureuser\scoop\apps\python\current\Lib\site-packages\azure\mgmt\core\policies\_base.py", line 46, in send
response = self.next.send(request)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\azureuser\scoop\apps\python\current\Lib\site-packages\azure\core\pipeline\policies\_redirect.py", line 197, in send
response = self.next.send(request)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\azureuser\scoop\apps\python\current\Lib\site-packages\azure\core\pipeline\policies\_retry.py", line 531, in send
response = self.next.send(request)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\azureuser\scoop\apps\python\current\Lib\site-packages\azure\core\pipeline\policies\_authentication.py", line 124, in send
self.on_request(request)
File "C:\Users\azureuser\scoop\apps\python\current\Lib\site-packages\azure\core\pipeline\policies\_authentication.py", line 100, in on_request
self._update_headers(request.http_request.headers, self._token.token)
^^^^^^^^^^^^^^^^^
AttributeError: 'coroutine' object has no attribute 'token'
sys:1: RuntimeWarning: coroutine 'GetTokenMixin.get_token' was never awaited
</code></pre>
<p>I don't know what was missing, and the error didn't give much information.</p>
<p>It seems it was related with <code>token</code>. I can get a token with the following method:</p>
<pre class="lang-py prettyprint-override"><code>async def print_token():
token = await credential.get_token("https://management.azure.com/.default")
print(token.token)
await credential.close()
asyncio.run(print_token())
</code></pre>
<p>But it requires <code>asyncio</code> to run, which isn't compatible with my code. And where should I put the token?</p>
<p>I've investigated the source code of <code>Azure CLI</code>. It seems it gets subscriptions using the same method:
<a href="https://github.com/Azure/azure-cli/blob/f369cead2604e37480611b0cc269fee615956ea2/src/azure-cli-core/azure/cli/core/_profile.py#L835" rel="nofollow noreferrer">https://github.com/Azure/azure-cli/blob/f369cead2604e37480611b0cc269fee615956ea2/src/azure-cli-core/azure/cli/core/_profile.py#L835</a></p>
<p>The client was acquired from the function below, and the type was</p>
<p><a href="https://github.com/Azure/azure-cli/blob/f369cead2604e37480611b0cc269fee615956ea2/src/azure-cli-core/azure/cli/core/profiles/_shared.py#L60" rel="nofollow noreferrer">https://github.com/Azure/azure-cli/blob/f369cead2604e37480611b0cc269fee615956ea2/src/azure-cli-core/azure/cli/core/profiles/_shared.py#L60</a></p>
<p>Which is essentially the <code>SubscriptionClient</code>.</p>
|
<python><azure><azure-management-api><azure-python-sdk>
|
2024-05-21 05:11:26
| 2
| 558
|
SodaCris
|
78,509,356
| 93,910
|
Best practice to open files for use in a gui
|
<p>My application opens a video file, plays frames in a GUI, then closes it. The logic is like this</p>
<pre><code>import cv2
try:
self.vid = cv2.VideoCapture(local_file)
self.play_frame()
finally:
    pass  # vid.release()
</code></pre>
<p>But the problem is that the logic for <code>play_frame()</code> is asynchronous, e.g.</p>
<pre><code>def play_frame(self):
ok, frame = self.vid.read()
self.tkframe['image'] = ... # display frame
if ok: # next frame
tkframe.after( 33, lambda:self.play_frame() )
# else: vid.release() ?
</code></pre>
<p>And my question is: <strong>is there a safe way to make sure the video gets closed</strong>? Clearly <code>try...finally</code> is not going to work here, as it would close the video after the first frame. And I don't see how I can make a <code>with</code> statement work.</p>
<p>I am currently using <code>else: vid.release()</code> when the last frame is read. However, the file keeps getting left open, for example when an exception occurs somewhere else in the application, or the app closes prematurely.</p>
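One pattern worth sketching (an assumption about what would fit this app, not an established answer to the question): wrap the capture in an owner object and register a `weakref.finalize` callback, which runs when the object is garbage-collected or at interpreter shutdown, even if an exception skips the explicit close:

```python
import weakref

class Player:
    """Owns a capture-like object (e.g. cv2.VideoCapture) and makes sure
    its release() runs exactly once, even on abnormal exits."""
    def __init__(self, vid):
        self.vid = vid
        # runs release() when the Player is collected or at shutdown
        self._finalizer = weakref.finalize(self, vid.release)

    def close(self):
        # explicit close; the finalizer is idempotent, so a later GC /
        # shutdown call is a harmless no-op
        self._finalizer()
```

The `else: vid.release()` on the last frame would become `self.close()`; the finalizer is only the safety net for the exception and premature-exit cases, and in an interactive console it fires without having to restart Python.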
<h1>Edit</h1>
<p>in answer to @john gordon's comment</p>
<p>I am running the app from an interactive console -- I set a variable for the video filename, call the player, and it exits back to the console.</p>
<p>I then run another function that overwrites the video, and this function fails with "This file is already open in another application, python.exe".</p>
<p>If I close or restart the python console, it closes the file, but then I lose all my variables.</p>
<p>Anyhow this is about <em>best practices</em> for ensuring the video gets closed.</p>
|
<python><try-catch><with-statement><fclose><file-in-use>
|
2024-05-21 01:56:45
| 0
| 7,056
|
Sanjay Manohar
|
78,509,264
| 2,813,606
|
How to remove everything in a string before first occurrence of pattern (Python)
|
<p>I have a string that looks like the following:</p>
<pre><code>sample_string = 'Hello, my name is Bob.\nI like sparkling water.\nMy favorite flavor is mango.\n Goodbye.'
</code></pre>
<p>I want to do two things:</p>
<ol>
<li>Edit the string in a way that removes all characters before the first occurrence of '\n'</li>
<li>Remove all occurrences of '\n'</li>
</ol>
<p>I've got #2 sorted out just fine:</p>
<pre><code>sample_string.replace('\n',' ')
</code></pre>
<p>But I'm not sure how to structure a regex to pinpoint the first \n and remove all characters before it.</p>
<p>The final string should look like:</p>
<pre><code>final_string = 'I like sparkling water. My favorite flavor is mango. Goodbye.'
</code></pre>
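A sketch of both steps with `re` (note the second pattern also swallows the space that follows a newline, which the expected output implies):

```python
import re

sample_string = ('Hello, my name is Bob.\nI like sparkling water.\n'
                 'My favorite flavor is mango.\n Goodbye.')

# 1. drop everything up to and including the first newline;
#    '.' does not match '\n', so the non-greedy match stops there
trimmed = re.sub(r'^.*?\n', '', sample_string, count=1)

# 2. turn each remaining newline (plus any spaces right after it)
#    into a single space
final_string = re.sub(r'\n\s*', ' ', trimmed)
print(final_string)
```

Without a regex, `sample_string.split('\n', 1)[1]` achieves step 1, though plain `.replace('\n', ' ')` for step 2 would leave a double space before "Goodbye.".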
|
<python><regex><string>
|
2024-05-21 00:59:24
| 1
| 921
|
user2813606
|
78,509,126
| 1,322,962
|
Pip download `Cannot Install` Due To Conflict While Showing Exact Version Match?
|
<h1>Description</h1>
<p>I'm attempting to run <code>pip download</code> in order to bundle wheels into a zip file to upload to a service without internet access. I need to download all of my dependencies from PyPI and then upload them to the service for the service to run.</p>
<p>While running <code>pip download</code> I hit this error, which makes very little sense to me because the requirement and the constraint match exactly on the version, particularly since the constraint uses <code>==</code>.</p>
<p>Here's the <a href="https://raw.githubusercontent.com/apache/airflow/constraints-2.8.1/constraints-3.11.txt" rel="nofollow noreferrer">constraint file used</a></p>
<h1>Error</h1>
<pre><code> python -m pip download -r requirements/requirements.txt -d plugins
Looking in links: /usr/local/airflow/plugins
WARNING: Location '/usr/local/airflow/plugins' is ignored: it is either a non-existing path or lacks a specific scheme.
ERROR: Cannot install apache-airflow-providers-amazon==8.16.0 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested apache-airflow-providers-amazon==8.16.0
The user requested (constraint) apache-airflow-providers-amazon==8.16.0
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
</code></pre>
<h1>requirements.txt</h1>
<pre><code>--constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.8.1/constraints-3.11.txt"
apache-airflow-providers-amazon==8.16.0
apache-airflow-providers-postgres==5.10.0
# https://seasensor.atlassian.net/browse/SP-4980
--find-links /usr/local/airflow/plugins --no-index
# TODO: Remove these dependencies when we've moved to remote operators. These
# are all Task Code Requirements and not Airflow Requirements
# NOTE: Any requirement found both in https://github.com/iocurrents/data-pipeline/blob/main/scripts/requirements.txt
# and in the constraints file automatically default to the constrains file
# version.
botocore==1.34.67
iniconfig==2.0.0
jmespath==1.0.1
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
markdown-it-py==3.0.0
mdurl==0.1.2
numpy==1.24.4 # Downgraded from 1.24.4
packaging==24.0
pandas==2.1.4 # Downgraded from 2.2.1
pluggy==1.3.0 # Downgrade from 1.4.0
psycopg2==2.9.9
pygments==2.17.2
pytest==7.4.4 # Downgraded from 8.1.1
python-dateutil==2.8.2 # Downgraded from 2.9.0.post0
pytz==2023.3.1.1 # Downgraded from 2024.1
referencing==0.32.1 # Downgraded from 0.34.0
rich==13.7.0 # Downgraded from 13.7.1
rpds-py==0.17.1 # Downgraded from 0.18.0
s3transfer==0.8.2 # Downgraded from 0.10.1
six==1.16.0
tzdata==2023.4 # Downgraded from 2024.1
urllib3==2.0.7 # Upgraded from 1.25.4
</code></pre>
<h1>Environment Information</h1>
<pre><code>python --version
Python 3.11.9
python -m pip --version
pip 24.0 from /Users/alexlordthorsen/.venvs/package_test/lib/python3.11/site-packages/pip (python 3.11)
</code></pre>
<h1>Removing The Version Pin Doesn't Work</h1>
<pre><code>python -m pip download -r requirements/requirements.txt -d plugins
Looking in links: /usr/local/airflow/plugins
WARNING: Location '/usr/local/airflow/plugins' is ignored: it is either a non-existing path or lacks a specific scheme.
ERROR: Cannot install apache-airflow-providers-amazon because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested apache-airflow-providers-amazon
The user requested (constraint) apache-airflow-providers-amazon==8.16.0
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
</code></pre>
<h1>Edit:</h1>
<p>Commenting out the line</p>
<pre><code>--find-links /usr/local/airflow/plugins --no-index
</code></pre>
<p>seems to have removed the issue in terms of being able to download requirements but it also means that my target system will not find the bundled zip file that contains all of the wheels.</p>
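A hedged guess at a workable split, following on from the edit above: keep `--no-index`/`--find-links` out of the requirements file used for the online `pip download`, and apply those flags only at install time on the offline host (the filename below is hypothetical):

```
# requirements-download.txt  (used online with `pip download`)
--constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.8.1/constraints-3.11.txt"
apache-airflow-providers-amazon==8.16.0
apache-airflow-providers-postgres==5.10.0
# ... remaining pinned requirements as above, unchanged ...

# at install time on the target, point pip at the bundled wheels instead:
#   python -m pip install --no-index --find-links /usr/local/airflow/plugins -r requirements-download.txt
```

This way the download step can reach PyPI while the target system still resolves everything from the bundled zip, which seems to be what the two flags were meant for in the first place.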
|
<python><python-3.x><pip>
|
2024-05-20 23:36:11
| 0
| 8,598
|
AlexLordThorsen
|
78,509,109
| 6,622,697
|
Does popen().readline block until all output is read?
|
<p>I'm running an asynchronous long-running process with <code>popen</code>. If I'm interested in getting all the data written to stdout, can I just do</p>
<pre><code>cmd = popen(...)
for line in cmd.stdout.readline()
... do something with line
</code></pre>
<p>I'm wondering if I need to test <code>cmd.poll()</code> at any point, or can I just assume that once the process is done, <code>readline</code> will return EOF?</p>
<h1>Update</h1>
<p>Just to clarify, here is the code I'm currently using</p>
<pre><code>while True:
    line = cmd.stdout.readline()
    print(line)
    if not line:
        break
    stdout.append(line)
</code></pre>
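Assuming `subprocess.Popen` with a piped stdout, here is a minimal self-contained check: iterating the stream (or calling `readline()` in a loop) ends at EOF once the child closes its output, with no `poll()` needed just to drain it:

```python
import subprocess
import sys

# child process that prints two lines and exits
cmd = subprocess.Popen(
    [sys.executable, "-c", "print('a'); print('b')"],
    stdout=subprocess.PIPE, text=True,
)
# iteration stops at EOF, i.e. once the child has closed stdout
lines = [line.rstrip("\n") for line in cmd.stdout]
cmd.wait()  # reap the process; poll() is only for non-blocking status checks
print(lines)
```

`readline()` returning an empty string is the EOF signal, which is exactly what the `if not line: break` in the update relies on.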
|
<python><python-3.x><popen>
|
2024-05-20 23:26:22
| 1
| 1,348
|
Peter Kronenberg
|
78,509,092
| 6,622,697
|
How to serialize subprocess.popen()
|
<p>This is a follow-on to <a href="https://stackoverflow.com/questions/78507222/return-output-of-subprocess-popen-in-webserver?noredirect=1#comment138405770_78507222">Return output of subprocess.popen() in webserver</a>. I have a long-running process that is kicked off by a webserver call (running Django).
I'm using <code>subprocess.Popen()</code> to spawn the process. I figured out how to save the stdout and retrieve it later. But since this is a long-running process, I want the client to be able to cancel it. To do that, I need the <code>Popen</code> object instance, but it is not serializable, so I can't store it in a database or anywhere else.</p>
<p>Is there any other solution other that just storing it locally in a map in <code>views.py</code>?</p>
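One common workaround (a sketch, not necessarily the best fit for multi-worker Django deployments): persist the child's <code>pid</code>, which is just an integer, and cancel later by signalling that pid:

```python
import os
import signal
import subprocess
import sys

# spawn a long-running child; in the view you would store proc.pid
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(3600)"])
saved_pid = proc.pid  # an int: trivially storable in a DB, cache, or session

# later, e.g. in a "cancel" endpoint, terminate by pid instead of by object
os.kill(saved_pid, signal.SIGTERM)
proc.wait()
```

Caveats: pids can be reused after a process exits, so storing the start time alongside the pid (and checking it before killing) is safer; a task queue such as Celery solves this class of problem more robustly.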
|
<python><python-3.x><django><popen>
|
2024-05-20 23:19:52
| 1
| 1,348
|
Peter Kronenberg
|
78,508,878
| 3,628,240
|
Web Scraping Javascript Page with Selenium and Python
|
<p>I'm looking to scrape a list from a website, where you have to select "Yes", check the "Show by State" box, click the "Submit and Find Doctor", and then select a state from the dropdown, and note the doctor's name.</p>
<p>Website:</p>
<pre><code>https://www.inspiresleep.com/en-us/find-a-doctor/
</code></pre>
<p>I'm getting stuck on the "Show by State" button. With this code:</p>
<pre><code><input id="show-by-state-checkbox" type="checkbox">
<span class="checkbox"></span>
<span class="label-text">Show by State</span>
</code></pre>
<p>This is what I have so far:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver import ActionChains
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
import time
import requests
doctor_dict = {}
#configure webdriver
options = webdriver.ChromeOptions()
driver = webdriver.Chrome(options = options)
driver.get("https://www.inspiresleep.com/en-us/find-a-doctor/")
time.sleep(3)
driver.find_element(By.XPATH,"//button[text()='Yes']").click()
time.sleep(5)
driver.find_element(By.CLASS_NAME,"value = 'checkbox'").click()
time.sleep(2)
driver.find_element(By.XPATH,"//button[text()='Submit & Find a Doctor']")
</code></pre>
<p>Could someone help me with the error in selecting the checkbox and then clicking "Submit & Find a Doctor"? Is it possible to select a state from the dropdown after this page? Alternatively, is there an API or something that could be used so as not to have to use Selenium?</p>
|
<python><selenium-webdriver><web-scraping>
|
2024-05-20 21:48:20
| 2
| 927
|
user3628240
|
78,508,806
| 234,146
|
Controlling column order of Numpy txt output
|
<p>I would like to control the print order of column data in a printout from a NumPy table (structured array). The native <code>savetxt</code> function will only print fields in the order my <code>_table_type</code> was defined. Is there a variant that will allow specifying the column order?</p>
<p>I know I can print the values field by field if I need to. I'm looking for something easier.</p>
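For a structured array, one sketch (an assumption about the setup, since the question's `_table_type` isn't shown) is to stack the fields in the desired order and hand that 2-D array to `savetxt`:

```python
import io
import numpy as np

# hypothetical structured array standing in for the question's table
arr = np.array([(1, 2.0, 3.0), (4, 5.0, 6.0)],
               dtype=[('a', 'i4'), ('b', 'f8'), ('c', 'f8')])

cols = ['c', 'a', 'b']                       # desired output order
data = np.column_stack([arr[name] for name in cols])

buf = io.StringIO()                          # or a real file path
np.savetxt(buf, data, fmt='%g', header=' '.join(cols))
print(buf.getvalue())
```

Note that `column_stack` upcasts everything to a common dtype, so with mixed int/float fields each column can no longer keep its own format unless you also pass a per-column `fmt` list.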
<p>Thanks</p>
|
<python><numpy><printing>
|
2024-05-20 21:14:26
| 1
| 1,358
|
Max Yaffe
|
78,508,795
| 398,670
|
Is there a way to override packages and versions with pipx install, e.g. psycopg2 -> psycopg2-binary?
|
<p>Say I need to install some Python project 'foo'.</p>
<p>It has a <code>requirements.txt</code> or <code>setup.py</code> or <code>setup.cfg</code> or <code>pyproject.toml</code> or <code>omgwtf.python</code> that lists <code>psycopg2</code> as a dependency.</p>
<p>But on my install target I don't have a C toolchain or libpq headers, and it's not particularly feasible to install one, so I want to install <code>foo</code> using <code>psycopg2-binary</code> instead of <code>psycopg2</code>.</p>
<p>Is there any way to do this with <code>pipx install git+https://web.site/foo</code> w/o cloning the sources and locally patching the Python dependencies - which might express this dependency in any of several inscrutable and overlapping forms due to the currently perl-esque TMTOWTDI nature of Python builds and dependency management?</p>
<p>It looks like <code>pipx</code> has <code>--pip-args</code>, which can be used to pass <code>--prefer-binary</code> or <code>--only-binary=psycopg2</code>, but this fails because <code>psycopg2</code> itself has no binary distribution; the wheels live in a different pip package. Is this a legacy packaging-style issue with <code>psycopg2</code>:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement psycopg2
ERROR: No matching distribution found for psycopg2
</code></pre>
<p>For now I'm working around this by cloning the sources and patching the <code>setup.py</code>, but it's fragile as the patch breaks whenever other dependencies change. A simple <code>sed</code> would work but risks changing things it shouldn't. All in all it's just ugly. There must be a better way?</p>
|
<python><pip><psycopg2><pipx>
|
2024-05-20 21:09:35
| 0
| 328,701
|
Craig Ringer
|
78,508,758
| 12,011,902
|
What is the Rust reqwest equivalent of the following Python requests code?
|
<p>I am trying to do a Basic Authentication GET, ignoring certificate validation, using <code>reqwest</code>.</p>
<p>I have following Python <code>requests</code> code, which works:</p>
<pre class="lang-py prettyprint-override"><code>import requests
from requests.auth import HTTPBasicAuth
session = requests.sessions.Session()
session.auth = HTTPBasicAuth("username", "password")
session.verify = False
response = session.get("http://myurl.com/myapi")
</code></pre>
<p>I want to do the same thing using Rust <code>reqwest</code>.
So far, I have the following:</p>
<pre class="lang-rust prettyprint-override"><code>use reqwest;
let url = "http://myurl.com/myapi";
let response = reqwest::Client::builder()
.danger_accept_invalid_certs(true)
.build()
.unwrap()
.get(&url)
.basic_auth("username", Some("password"))
.send()
.await?;
</code></pre>
<p>The Python <code>requests</code> call works as expected. However, I get the following response from Rust:</p>
<pre class="lang-bash prettyprint-override"><code>Error: reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("myurl.com")), port: None, path: "myapi", query: None, fragment: None }, source: Error { kind: Connect, source: Some("unsuccessful tunnel") } }
</code></pre>
<p>If I need to produce more code (I can't provide the actual URL), please let me know.</p>
|
<python><rust><python-requests><reqwest>
|
2024-05-20 20:59:30
| 1
| 658
|
trozzel
|
78,508,499
| 1,591,219
|
Split a graph so each type of node is only appearing once in in each partition with as little cuts as possible
|
<p>I have a table in BigQuery that represents the edges of a graph. Each node in the graph has a type. Each edge has a created_at property. Now I want to cut the graph into subcomponents. No subcomponent has more than one node of a selected type (A). I want to do this with as few edge cuts as possible. Prefer to cut younger edges rather than older ones.</p>
<ol>
<li>What is an efficient way to do this?</li>
<li>Can this be done in SQL via recursion?</li>
<li>What is the type of algorithm that I need to use?</li>
</ol>
<p>This is an example table structure.</p>
<p>In the result, no subcomponent may contain more than one node of type A.</p>
<p><a href="https://i.sstatic.net/UskflhED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UskflhED.png" alt="Visualization of an example graph" /></a></p>
<h2>Nodes:</h2>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">node_id</th>
<th style="text-align: left;">node_type</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">A</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">B</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;">A</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: left;">C</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: left;">A</td>
</tr>
<tr>
<td style="text-align: left;">6</td>
<td style="text-align: left;">B</td>
</tr>
</tbody>
</table></div>
<h2>Edges</h2>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>edge_id</th>
<th>source_node</th>
<th>target_node</th>
<th>created_at</th>
<th>source_type</th>
<th>target_type</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>4</td>
<td>2024-05-01 20:43:00</td>
<td>A</td>
<td>C</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>6</td>
<td>2024-05-02 06:43:00</td>
<td>B</td>
<td>B</td>
</tr>
<tr>
<td>3</td>
<td>3</td>
<td>6</td>
<td>2024-05-03 08:43:00</td>
<td>A</td>
<td>B</td>
</tr>
<tr>
<td>4</td>
<td>6</td>
<td>4</td>
<td>2024-05-04 19:43:00</td>
<td>B</td>
<td>C</td>
</tr>
<tr>
<td>5</td>
<td>4</td>
<td>5</td>
<td>2024-05-05 21:43:00</td>
<td>C</td>
<td>A</td>
</tr>
<tr>
<td>reversed edges from aboveβ¦</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>6</td>
<td>4</td>
<td>1</td>
<td>2024-05-01 20:43:00</td>
<td>C</td>
<td>A</td>
</tr>
<tr>
<td>7</td>
<td>6</td>
<td>2</td>
<td>2024-05-02 06:43:00</td>
<td>B</td>
<td>B</td>
</tr>
<tr>
<td>8</td>
<td>6</td>
<td>3</td>
<td>2024-05-03 08:43:00</td>
<td>B</td>
<td>A</td>
</tr>
<tr>
<td>9</td>
<td>4</td>
<td>6</td>
<td>2024-05-04 19:43:00</td>
<td>C</td>
<td>B</td>
</tr>
<tr>
<td>10</td>
<td>5</td>
<td>4</td>
<td>2024-05-05 21:43:00</td>
<td>A</td>
<td>C</td>
</tr>
</tbody>
</table></div>
<p>This is what I currently do to get the component IDs. Where I'm currently stuck is how to cut these components into subcomponents.</p>
<pre class="lang-sql prettyprint-override"><code>with recursive
connected_components as (
select
source_node as node_id,
source_node as front,
target_node,
source_type,
target_type,
1 as depth,
[source_node] as visited_nodes
from edges
union all
select
cc.node_id,
e.source_node,
e.target_node as front,
e.source_type,
e.target_type,
cc.depth + 1 as depth,
array_concat(cc.visited_nodes, [e.target_node]) as visited_nodes
from edges as e
inner join connected_components as cc on e.source_node = cc.target_node
where cc.depth < 50 and not e.target_node in unnest(cc.visited_nodes)
),
components as (
select node_id, min(front) as component_id from connected_components group by 1
)
select * from components
</code></pre>
<p>As a result, I would want to have a similar lookup table but as a lookup for the subcomponent ID per node.</p>
<p>If this is not possible via SQL, it can also be solved via Python, with the component lookup table then reimported into BigQuery.</p>
<p>So with the data provided above, I would expect the following table:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>id</th>
<th>component_id</th>
<th>component_id_from_pic (for reference only)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>c2</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>c2</td>
</tr>
<tr>
<td>5</td>
<td>5</td>
<td>c1</td>
</tr>
<tr>
<td>6</td>
<td>2</td>
<td>c3</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>c3</td>
</tr>
<tr>
<td>3</td>
<td>2</td>
<td>c3</td>
</tr>
</tbody>
</table></div>
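In Python, one possible sketch (the greedy strategy is my own suggestion and is not guaranteed to be globally optimal): process the edges oldest-first with a union-find, merge two components only when the merged component would still contain at most one type-A node, and cut the edge otherwise. On the example data above it reproduces the expected partition:

```python
# Greedy "Kruskal-style" split: keep old edges, cut an edge whenever keeping
# it would put two type-A nodes into the same component. Not guaranteed to be
# globally optimal, but it honours the cut-younger-edges-first preference.
nodes = {1: "A", 2: "B", 3: "A", 4: "C", 5: "A", 6: "B"}
edges = [  # (source, target, created_at) -- ISO strings sort chronologically
    (1, 4, "2024-05-01 20:43:00"),
    (2, 6, "2024-05-02 06:43:00"),
    (3, 6, "2024-05-03 08:43:00"),
    (6, 4, "2024-05-04 19:43:00"),
    (4, 5, "2024-05-05 21:43:00"),
]

parent = {n: n for n in nodes}
a_count = {n: int(t == "A") for n, t in nodes.items()}  # type-A nodes per root

def find(n):
    while parent[n] != n:
        parent[n] = parent[parent[n]]  # path halving
        n = parent[n]
    return n

cut = []
for u, v, ts in sorted(edges, key=lambda e: e[2]):  # oldest first
    ru, rv = find(u), find(v)
    if ru == rv:
        continue
    if a_count[ru] + a_count[rv] <= 1:
        parent[ru] = rv
        a_count[rv] += a_count[ru]
    else:
        cut.append((u, v, ts))  # younger conflicting edge gets cut

components = {n: find(n) for n in nodes}
print(components)  # nodes 1,4 together; 2,3,6 together; 5 alone
```

The resulting `components` dict is exactly the lookup table to load back into BigQuery.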
|
<python><graph><networkx><graph-theory><connected-components>
|
2024-05-20 19:43:20
| 1
| 618
|
Benedikt
|
78,508,467
| 7,547,525
|
Stitch pictures from rotating camera
|
<p>I am writing a homography by rotation project by following the example here: <a href="https://docs.opencv.org/4.4.0/d9/dab/tutorial_homography.html#tutorial_homography_Demo5" rel="nofollow noreferrer">https://docs.opencv.org/4.4.0/d9/dab/tutorial_homography.html#tutorial_homography_Demo5</a></p>
<p>For my project, I capture two photos using an XR environment. For each photo, I have 1.) a rotation quaternion given to me by the device, and 2.) a 4x4 camera intrinsics projection matrix of the XR scene. For my purposes, I am assuming the camera position (translation) does not change.</p>
<p>When I run my script, the stitch fails.
<a href="https://i.sstatic.net/77mY82eK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/77mY82eK.png" alt="enter image description here" /></a></p>
<p>Can someone show me where I am going wrong? I believe this is not working due to a lack of understanding of matrices, or improper conversion of camera projection matrix to camera intrinsics.</p>
<p>Script:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Python 2/3 compatibility
from __future__ import print_function
import numpy as np
import cv2 as cv
def quaternion_to_rotation_matrix(q):
x, y, z, w = q['x'], q['y'], q['z'], q['w']
return np.array([
[1 - 2*(y**2 + z**2), 2*(x*y - z*w), 2*(x*z + y*w)],
[2*(x*y + z*w), 1 - 2*(x**2 + z**2), 2*(y*z - x*w)],
[2*(x*z - y*w), 2*(y*z + x*w), 1 - 2*(x**2 + y**2)]
])
def basicPanoramaStitching(img1Path, img2Path):
img1 = cv.imread(cv.samples.findFile(img1Path))
img2 = cv.imread(cv.samples.findFile(img2Path))
if img1 is None or img2 is None:
print("Error loading images.")
return
# Rotation quaternions from deviceOrientation event
q1 = {'w': -0.7968594431877136, 'x': 0.0034535229206085205, 'y': 0.6041417717933655, 'z': -0.004005712922662497} # Quaternion for camera 1
q2 = {'w': -0.6669896245002747, 'x': 0.0010529130231589079, 'y': -0.7450388669967651, 'z': 0.00638939393684268} # Quaternion for camera 2
# Position data of camera
pos1 = {'x': 0, 'y': 2, 'z': 0}
pos2 = {'x': 0, 'y': 2, 'z': 0}
# Convert quaternion to rotation matrix
R1 = quaternion_to_rotation_matrix(q1)
R2 = quaternion_to_rotation_matrix(q2)
# Print rotation matrices
print("R1:\n", R1)
print("R2:\n", R2)
# Construct transformation matrices
c1Mo = np.eye(4)
c2Mo = np.eye(4)
c1Mo[0:3, 0:3] = R1
c2Mo[0:3, 0:3] = R2
c1Mo[0:3, 3] = [pos1['x'], pos1['y'], pos1['z']]
c2Mo[0:3, 3] = [pos2['x'], pos2['y'], pos2['z']]
# Raw intrinsics from the device (16-element column-major array).
# This is a 16 dimensional column-major 4x4 projection matrix that gives
# the scene camera the same field of view as the rendered camera feed.
raw_intrinsics = [2.5618553161621094, 0, 0, 0,
0, 1.4930813312530518, 0, 0,
0, 0, -1.0000009536743164, -1,
0, 0, -0.010000004433095455, 0]
# Convert to a 4x4 row-major matrix
intrinsics_4x4 = np.array(raw_intrinsics).reshape((4, 4)).T
# Extract the 3x3 camera intrinsic matrix directly from the 4x4 matrix
cameraMatrix = intrinsics_4x4[:3, :3]
# Since this is a projection matrix, adjust cx and cy based on the third column if necessary
fx, fy = cameraMatrix[0, 0], cameraMatrix[1, 1]
cx, cy = cameraMatrix[0, 2], cameraMatrix[1, 2]
# Adjusting cameraMatrix to a proper intrinsic format if needed
cameraMatrix = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float32)
print("Camera Matrix:\n", cameraMatrix)
# Compute rotation displacement
R2_transpose = R2.transpose()
R_2to1 = np.dot(R1, R2_transpose)
# Print the difference in rotation matrices
print("R_2to1 (Difference in rotation matrices):\n", R_2to1)
# Compute homography
H = cameraMatrix.dot(R_2to1).dot(np.linalg.inv(cameraMatrix))
H = H / H[2, 2]
print("Homography:\n", H)
# Apply the homography to the second image to visualize the transformation
transformed_img2 = cv.warpPerspective(img2, H, (img2.shape[1] * 2, img2.shape[0]))
# Visualize the transformed second image
cv.imshow("Transformed Image 2", transformed_img2)
# Stitch images
img_stitch = cv.warpPerspective(img2, H, (img2.shape[1] * 2, img2.shape[0]))
img_stitch[0:img1.shape[0], 0:img1.shape[1]] = img1
img_space = np.zeros((img1.shape[0], 50, 3), dtype=np.uint8)
img_compare = cv.hconcat([img1, img_space, img2])
cv.imshow("Final", img_compare)
cv.imshow("Panorama", img_stitch)
cv.waitKey(0)
def main():
import argparse
parser = argparse.ArgumentParser(description="Code for homography tutorial. Example 5: basic panorama stitching from a rotating camera.")
parser.add_argument("-I1", "--image1", help="path to first image", default="my-capture-135347.jpg")
parser.add_argument("-I2", "--image2", help="path to second image", default="my-capture-135408.jpg")
args = parser.parse_args()
print("Panorama Stitching Started")
basicPanoramaStitching(args.image1, args.image2)
print("Panorama Stitching Completed Successfully")
if __name__ == '__main__':
main()
</code></pre>
<p>Debugging outputs:</p>
<pre><code>R1:
[[ 2.69993348e-01 -2.21114543e-03 -9.62859819e-01]
[ 1.05568153e-02 9.99944055e-01 6.63907698e-04]
[ 9.62804484e-01 -1.03439817e-02 2.70001586e-01]]
R2:
[[-0.11024748 0.0069544 0.99387984]
[-0.01009224 0.99991613 -0.00811613]
[-0.99385293 -0.01092526 -0.11016804]]
Camera Matrix:
[[2.5618553 0. 0. ]
[0. 1.4930813 0. ]
[0. 0. 1. ]]
R_2to1 (Difference in rotation matrices):
[[-0.98674843 0.0028789 -0.16223314]
[ 0.00644999 0.99974826 -0.02148971]
[ 0.16213043 -0.02225134 -0.9865186 ]]
Homography:
[[ 1.000233 -0.00500717 0.42129751]
[-0.00381051 -1.01341045 0.03252436]
[-0.06415118 0.01510662 1. ]]
</code></pre>
<p>Input Photos:</p>
<p>my-capture-135347.jpg:
<a href="https://i.sstatic.net/7oLOlFXe.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7oLOlFXe.jpg" alt="my-capture-135347.jpg" /></a></p>
<p>my-capture-135408.jpg:
<a href="https://i.sstatic.net/Gp32eDQE.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gp32eDQE.jpg" alt="my-capture-135408.jpg" /></a></p>
|
<python><opencv><computer-vision>
|
2024-05-20 19:33:28
| 0
| 761
|
Clayton Rothschild
|
78,508,381
| 525,916
|
Pivot dataframe values and its unit
|
<p>I have the input dataframe in the following format:</p>
<pre><code>id city state date param value unit
1 Phoenix AZ 4-21-2024 temp 100 F
2 Phoenix AZ 4-21-2024 prec 0 mm
3 Phoenix AZ 4-21-2024 wind 2 mph
4 Phoenix AZ 4-20-2024 temp 101 F
5 Phoenix AZ 4-20-2024 prec 0 NaN
6 Phoenix AZ 4-20-2024 wind 4 mph
7 Seattle WA 4-20-2024 temp 82 F
8 Seattle WA 4-20-2024 prec 3 mm
9 Seattle WA 4-20-2024 wind 5 mph
</code></pre>
<p>I want this data pivoted into this format:</p>
<pre><code>id city state date temp prec wind temp_unit prec_unit wind_unit
1 Phoenix AZ 4-21-2024 100 0 2 F mm mph
2 Phoenix AZ 4-20-2024 101 0 4 F mph
3 Seattle WA 4-20-2024 82 3 5 F mm mph
</code></pre>
<p>How do I pivot the dataframe on param and get the value and its corresponding unit?</p>
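A hedged sketch of one way to do this (the column order and `_unit` suffix are my own choices): pivot both `value` and `unit` at once, then flatten the resulting `(values, param)` MultiIndex:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "city":  ["Phoenix"] * 6 + ["Seattle"] * 3,
    "state": ["AZ"] * 6 + ["WA"] * 3,
    "date":  ["4-21-2024"] * 3 + ["4-20-2024"] * 6,
    "param": ["temp", "prec", "wind"] * 3,
    "value": [100, 0, 2, 101, 0, 4, 82, 3, 5],
    "unit":  ["F", "mm", "mph", "F", np.nan, "mph", "F", "mm", "mph"],
})

wide = df.pivot(index=["city", "state", "date"],
                columns="param", values=["value", "unit"])
# Flatten ('value', 'temp') -> 'temp' and ('unit', 'temp') -> 'temp_unit'
wide.columns = [p if v == "value" else f"{p}_unit" for v, p in wide.columns]
wide = wide.reset_index()[["city", "state", "date",
                           "temp", "prec", "wind",
                           "temp_unit", "prec_unit", "wind_unit"]]
print(wide)
```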
|
<python><pandas>
|
2024-05-20 19:09:52
| 1
| 4,099
|
Shankze
|
78,508,149
| 5,924,264
|
Does equality check for floats in python check against machine epsilon tolerances?
|
<p>I'm a bit confused about the precision on floats in python and equality checks.</p>
<p>e.g.,</p>
<pre><code>15 + 1e-16 == 15
</code></pre>
<p>evaluates to true b/c 1e-16 is < machine epsilon for fp64.</p>
<p>and</p>
<pre><code>15 + 1e-15 == 15
</code></pre>
<p>evaluates false because 1e-15 > machine epsilon for fp64.</p>
<p>But when I do <code>sys.getsizeof(15 + 1e-16)</code>, I get 24 bytes, i.e. fp192; if that were true, <code>15 + 1e-16 == 15</code> would evaluate false, since machine epsilon for fp192 is surely << 1e-16.</p>
<p>I always thought <code>==</code> is checking whether the LHS is within <code>+/-</code> of machine epsilon for the respective type.</p>
<p>To add to my confusion, I checked:</p>
<pre><code>1e-323 == 0 # evaluates false
1e-324 == 0 # evaluates true
sys.getsizeof(1e-323) # gives 24 bytes
</code></pre>
<p>I don't understand the bound on this.</p>
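For reference, `==` on floats compares the exact stored values with no epsilon tolerance, and `sys.getsizeof` reports the size of the Python object (header included), not the precision of the format. The exact stored values can be inspected with `decimal`; a quick sketch:

```python
import decimal

# 15 + 1e-16 rounds back to exactly 15.0 during the addition itself, so the
# equality holds before == ever runs; == is an exact comparison.
print(decimal.Decimal(15 + 1e-16))   # 15
print(decimal.Decimal(15 + 1e-15))   # 15.0000000000000017763568394...

# 1e-324 is below half the smallest subnormal double (~4.94e-324), so the
# literal itself underflows to exactly 0.0 at parse time.
print(decimal.Decimal(1e-324))       # 0
print(decimal.Decimal(1e-323))       # ~9.88e-324, a real subnormal
```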
|
<python><floating-point><precision>
|
2024-05-20 18:01:08
| 1
| 2,502
|
roulette01
|
78,507,738
| 14,179,793
|
How to pass arguments containing parentheses to aws ecs execute-command using python subprocess.run?
|
<p><strong>Goal</strong>: Use the aws cli in a <code>subprocess.run</code> call to execute a command in a container of a running task.<br />
<strong>Example Usage</strong>:</p>
<pre><code>import json
import subprocess

task_arn = 'a_task_arn'
arg = event = {'something': 'that contains ( )'}

# Attempt 1: json string
arg = json.dumps(event)
subprocess_cmd = f'aws ecs execute-command --command "python -m task.function {arg}" --interactive --task {task_arn} --container Container'
subprocess.run(subprocess_cmd, shell=True)

# Attempt 2: encoded json string
arg = json.dumps(event).encode('utf-8')
subprocess_cmd = f'aws ecs execute-command --command "python -m task.function {arg}" --interactive --task {task_arn} --container Container'
subprocess.run(subprocess_cmd, shell=True)
</code></pre>
<p><strong>Encountered Error</strong>: <code>/bin/sh: 1: Syntax error: "(" unexpected</code></p>
<p>I am trying to use the cli through a <code>subprocess.run</code> call instead of using the boto3 execute_command because</p>
<ul>
<li>it doesn't appear that there is an easy way to read the websocket boto3 returns: <a href="https://github.com/boto/boto3/issues/3496" rel="nofollow noreferrer">https://github.com/boto/boto3/issues/3496</a>,</li>
<li>as indicated in this SO answer, it only returns a very small portion of the websocket message: <a href="https://stackoverflow.com/questions/70367030/how-can-i-get-output-from-boto3-ecs-execute-command/70586300#70586300">How can I get output from boto3 ecs execute_command?</a></li>
</ul>
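For what it's worth, the <code>( )</code> syntax error comes from a shell seeing the unquoted JSON, either the local one (<code>shell=True</code>) or the remote shell that ECS exec uses to run <code>--command</code> inside the container. A hedged sketch (the task ARN, container name, and module path are placeholders from the question) that quotes the payload for the remote shell and avoids the local shell entirely:

```python
import json
import shlex

event = {"something": "that contains ( )"}
task_arn = "a_task_arn"  # placeholder

# Quote the JSON once, for the remote shell that runs --command
# inside the container.
remote_cmd = f"python -m task.function {shlex.quote(json.dumps(event))}"

# Pass the local command as an argv list: no shell=True, so the local
# shell never sees the parentheses at all.
argv = [
    "aws", "ecs", "execute-command",
    "--interactive",
    "--task", task_arn,
    "--container", "Container",
    "--command", remote_cmd,
]
# subprocess.run(argv)  # uncomment to actually invoke the CLI
print(remote_cmd)
```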
|
<python><amazon-ecs><aws-cli>
|
2024-05-20 16:27:06
| 1
| 898
|
Cogito Ergo Sum
|
78,507,669
| 4,483,861
|
Why is floor division inconsistent with division follow by floor, e.g. 1//0.2 = 4.0
|
<p>The operator <code>//</code> for integer division is called "floor division" in the python documentation, but comparing to other floor implementations there is a discrepancy, for example:</p>
<pre><code>>>> numpy.floor(1/0.2)
5.0
>>> float(math.floor(1/0.2))
5.0
>>> 1//0.2
4.0
</code></pre>
<p>Quite unexpected. Investigating a little further we can see that</p>
<pre><code>>>> numpy.floor(1/0.20000000000000001)
5.0
>>> numpy.floor(1/0.2000000000000001)
4.0
</code></pre>
<p>while</p>
<pre><code>>>> 1//0.19999999999999999
5.0
>>> 1//0.199999999999999999
4.0
</code></pre>
<p>It's standard CPython from anaconda. Maybe it could be an implementation detail like doing <code>if a<b</code> vs <code>if not a>b</code>, which wouldn't matter much for floats, but when the focus is integers, it makes a difference.</p>
<p>What's the reason for this discrepancy, and isn't it at least close to being a bug?</p>
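For reference, the two results can be checked against the exact binary value of the float literal <code>0.2</code> using <code>fractions</code>; a quick sketch:

```python
import math
from fractions import Fraction

b = Fraction(0.2)          # the exact binary value stored for 0.2
print(b > Fraction(1, 5))  # True: stored 0.2 is slightly *above* 1/5

exact_quotient = Fraction(1) / b
print(exact_quotient < 5)          # True: the true quotient is just under 5
print(math.floor(exact_quotient))  # 4  -- the mathematically floored quotient
print(1 / 0.2)                     # 5.0 -- the float division rounds the
                                   #        quotient up to exactly 5.0 first
```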
|
<python><integer-division><floor><floor-division>
|
2024-05-20 16:14:02
| 0
| 2,649
|
Jonatan ΓstrΓΆm
|
78,507,605
| 1,145,666
|
When I read() a new frame from a cv2.VideoCapture, I get an old frame
|
<p>When I wait a little between two calls of <code>read()</code> from a OpenCV <code>VideoCapture</code> device, I still seem to get an "old" frame, instead of what is seen by the camera at that point:</p>
<pre><code>import cv2, time
vid_capture = cv2.VideoCapture(2)
ret, frame = vid_capture.read()
cv2.imshow("frame 1", frame)
time.sleep(10)
ret, frame2 = vid_capture.read()
cv2.imshow("frame 2", frame2)
while True:
ret, frame = vid_capture.read()
cv2.imshow("feed", frame)
if cv2.waitKey(100) & 0xFF == ord('q'):
break
</code></pre>
<p>The code above shows two frames and the feed. The two frames are the same, while they shouldn't be (I moved the camera to make sure).</p>
<p>Does OpenCV do some buffering, and how can I wait a little before fetching a new frame?</p>
|
<python><opencv><webcam>
|
2024-05-20 15:59:13
| 0
| 33,757
|
Bart Friederichs
|
78,507,580
| 10,946,777
|
Python Marshmallow set different fields as 'required' for each schema variant of the same schema
|
<p>I have the following schema:</p>
<pre><code>class MySchema(Schema):
id = fields.Str()
name = fields.Str()
value = fields.Str()
description = fields.Str()
</code></pre>
<p>I want to validate some input data with Flask for different types of requests.</p>
<p>For GET requests, I want the id field to be required (<code>id = fields.Str(required=True)</code>).</p>
<p>For POST requests, I do not need the id to be present, but the name, value and description fields to have <code>required=True</code>. I know how to exclude the id when validating the input data: <code>schema = MySchema(exclude=["id"])</code>.</p>
<p>For PUT requests, I need id and at least one of name, value or description.</p>
<p>I could just have 3 different schemas, but I would like to avoid that as much as possible.</p>
<p>I was thinking something along the lines of <code>schema = MySchema(require=["id"])</code>, where I can dynamically set the required fields when I load the schema.</p>
|
<python><schema><flask-restful><marshmallow>
|
2024-05-20 15:52:49
| 1
| 307
|
Sederfo
|
78,507,475
| 13,150,380
|
Caching yarn and python pip in google cloud build
|
<p>Currently, building with Cloud Build takes around 10 minutes (5 minutes frontend, 5 minutes backend). Is there any way I can cache the yarn install and pip install so the build time is cut significantly?
This is my current config for Cloud Build:</p>
<pre class="lang-yaml prettyprint-override"><code>steps:
- name: "node:18.17.1"
entrypoint: bash
args:
- "-c"
- |
yarn install
yarn run create-app-yaml
yarn build
env:
- redacted
- name: "python:3.10.11"
entrypoint: bash
args:
- "-c"
- |
python -m pip install -r requirements.txt
python ./manage.py collectstatic --noinput
- name: "gcr.io/cloud-builders/gcloud"
args: ["app", "deploy"]
timeout: "1600s"
</code></pre>
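Since Cloud Build only persists <code>/workspace</code> between steps, one common pattern is to restore and save tarballs of <code>node_modules</code> and a pip cache directory to GCS around the install steps. A hedged sketch (the bucket name is a placeholder, and the existing pip step would need <code>--cache-dir .pip-cache</code> to use the restored cache):

```yaml
steps:
  # Restore caches from GCS ("|| true" so the first build still succeeds)
  - name: "gcr.io/cloud-builders/gsutil"
    entrypoint: bash
    args:
      - "-c"
      - |
        gsutil cp gs://my-build-cache/node_modules.tar.gz . && tar xzf node_modules.tar.gz || true
        gsutil cp gs://my-build-cache/pip-cache.tar.gz . && tar xzf pip-cache.tar.gz || true
  # ... the existing yarn / pip / deploy steps go here, with pip invoked as:
  #     python -m pip install --cache-dir .pip-cache -r requirements.txt
  # Save caches for the next build
  - name: "gcr.io/cloud-builders/gsutil"
    entrypoint: bash
    args:
      - "-c"
      - |
        tar czf node_modules.tar.gz node_modules && gsutil cp node_modules.tar.gz gs://my-build-cache/
        tar czf pip-cache.tar.gz .pip-cache && gsutil cp pip-cache.tar.gz gs://my-build-cache/
```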
|
<python><google-cloud-platform><pip><yarnpkg><google-cloud-build>
|
2024-05-20 15:29:53
| 1
| 599
|
BlueBeret
|
78,507,453
| 5,525,901
|
Python threads behaviour on ctrl+c during join
|
<p>I'm struggling to understand what happens to other threads when ctrl+c interrupts the main thread while it's waiting for the other thread to finish.</p>
<p>Here's a small snippet of code to illustrate an example.</p>
<pre class="lang-py prettyprint-override"><code>import time, threading
def worker():
time.sleep(5)
print("Thread finished")
t = threading.Thread(target=worker)
t.start()
try:
t.join()
except KeyboardInterrupt:
print("Interrupted")
</code></pre>
<p>With this code, if I hit ctrl+c before the 5s are up, I expect the main thread to have a KeyboardInterrupt raised within the <code>join</code>, then it would get caught by the <code>except</code> block, printing "Interrupted".</p>
<p>Then the main thread would reach the end of its code and exit, but python should not shut down yet until the thread completes, since it's not a daemon thread. The thread should then print "Thread finished", and only then the python process should exit.</p>
<p>This is in line with my observations using python 3.6 and 3.7, however when I try this on Linux with python 3.10.12, 3.11.3, 3.12.0, and 3.12.3, it prints "Interrupted" and the process immediately exits.</p>
<p>What seems even stranger to me is that if I replace the <code>t.join()</code> with a <code>time.sleep(100000)</code> then it does wait for the thread to finish before exiting... It seems as if the thread is being abruptly aborted if the ctrl+c happens during the <code>join</code>, but not otherwise...</p>
<p>I couldn't find any explanation for this in the docs. The only relevant information I found was this, which seems to me like the newer python versions are misbehaving.</p>
<blockquote>
<p>The entire Python program exits when no alive non-daemon threads are left.</p>
</blockquote>
<p>Is this actually just a bug or is there a real explanation that I don't know as to why it behaves this way? If so, is there any way to have the documented behaviour in 3.12?</p>
<p>UPDATE: On Windows, as Booboo pointed out, the <code>KeyboardInterrupt</code> will not even be raised until the <code>.join()</code> is done.</p>
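One workaround that restores the documented wait on the newer versions is to join again explicitly inside the handler, rather than relying on the interpreter's implicit wait for non-daemon threads; a sketch (with a much shorter sleep, purely for illustration):

```python
import threading
import time

def worker():
    time.sleep(0.2)
    print("Thread finished")

t = threading.Thread(target=worker)
t.start()
try:
    t.join()
except KeyboardInterrupt:
    print("Interrupted")
    t.join()  # explicitly wait for the worker before the main thread exits
```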
|
<python><multithreading>
|
2024-05-20 15:24:00
| 1
| 1,752
|
Abraham Murciano Benzadon
|
78,507,425
| 4,442,337
|
Django: overriding ForeignKey limit_choices_to in subclasses?
|
<p>Following the current documentation <a href="https://docs.djangoproject.com/en/5.0/ref/contrib/contenttypes/#generic-relations" rel="nofollow noreferrer">https://docs.djangoproject.com/en/5.0/ref/contrib/contenttypes/#generic-relations</a> I created a <code>GenericReferModel</code>, which is an abstract class that defines a generic relation towards one or more models. So far, so good. Now I'd like to limit the <code>ContentType</code> choices by exploiting the <code>limit_choices_to</code> attribute of <code>models.ForeignKey</code>.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models
from django.utils.translation import gettext_lazy as _
class GenericReferModel(models.Model):
refer_type_choices: models.Q | dict[str, Any] | None = None
# https://docs.djangoproject.com/en/5.0/ref/contrib/contenttypes/#generic-relations
refer_type = models.ForeignKey(
ContentType,
on_delete=models.CASCADE,
null=True,
blank=True,
limit_choices_to=refer_type_choices,
verbose_name=_("refer type"),
)
refer_id = models.PositiveIntegerField(
_("refer id"),
null=True,
blank=True,
)
refer = GenericForeignKey("refer_type", "refer_id")
class Meta:
abstract = True
indexes = [
models.Index(fields=["refer_type", "refer_id"]),
]
class Chat(GenericReferModel):
nome = models.CharField(_("nome"), max_length=255)
refer_type_choices = (
models.Q(app_label="core", model="studio")
)
class Meta(GenericReferModel.Meta):
verbose_name = _("chat")
verbose_name_plural = _("chat")
</code></pre>
<p>Apparently, overriding <code>refer_type_choices</code> does not work: the abstract class's default value is always used. Is there a way to achieve dynamic assignment of the choices in each subclass?</p>
|
<python><django>
|
2024-05-20 15:18:58
| 1
| 2,191
|
browser-bug
|
78,507,373
| 6,619,548
|
How might my API be receiving an incorrectly formatted ISO 8601 timestamp
|
<p>I have an iOS/Android app sending data to the Python backend.</p>
<p>In both apps, the user selects a date and time from a picker, and it is sent to the server:</p>
<p>Android:</p>
<pre><code>SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
String formattedDate = simpleDateFormat.format(date);
URL url = new URL("https://example.com/api/test?distance=" + distance + "&date=" + formattedDate);
</code></pre>
<p>iOS:</p>
<pre><code>let formatter = DateFormatter()
formatter.dateFormat = "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
let formattedDate = formatter.string(from: date)
let url = URL(string: "https://example.com/api/test?distance=\(distance)&date=\(formattedDate)")!
</code></pre>
<p>On the server, this is interpreted as follows:</p>
<pre><code>date = request.GET.get('date', None)
if date:
journey_date = datetime.fromisoformat(date)
</code></pre>
<p>I've just had a weird error come through, where the server is receiving the date in an unexpected format:</p>
<pre><code>ValueError at /api/test/
Invalid isoformat string: '2024-05-20T3:38:12.711\u202fPMZ'
</code></pre>
<p>Basically, the time looks like it is in AM/PM format rather than 24-hour.</p>
<p>In all the testing I've done so far, I've never come across this. Assuming the user has not sent a manual request to the endpoint (e.g. via Postman), and that they have just used the app UI, how might the date be received in this format when the specified format was <code>yyyy-MM-dd'T'HH:mm:ss.SSS'Z'</code>?</p>
<p>Update:</p>
<p>The user agent suggests it's coming from the iOS app.</p>
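On the server side, a defensive fallback that normalises the narrow no-break space (U+202F, which recent iOS versions insert before AM/PM) and also accepts the 12-hour form could look like this sketch (names are mine):

```python
from datetime import datetime

def parse_client_date(raw):
    """Try the expected ISO form first; fall back to the 12-hour variant
    seen in the error, normalising U+202F to a plain space."""
    cleaned = raw.replace("\u202f", " ").rstrip("Z")
    for fmt in ("%Y-%m-%dT%H:%M:%S.%f", "%Y-%m-%dT%I:%M:%S.%f %p"):
        try:
            return datetime.strptime(cleaned, fmt)
        except ValueError:
            pass
    raise ValueError(f"Unrecognised date format: {raw!r}")
```

This only papers over the symptom; the actual fix likely belongs in the iOS formatter configuration.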
|
<python><android><ios><datetime><simpledateformat>
|
2024-05-20 15:08:58
| 0
| 1,594
|
alstr
|
78,507,260
| 3,103,399
|
How to use jsonb_to_record and x() with sqlalchemy
|
<p>I am trying to make the following SQLAlchemy ORM query work:</p>
<pre class="lang-py prettyprint-override"><code>query = (
sqlalchemy.select(
cls.tasks_table.id.label("task_id"),
cls.tasks_table.doc.label("task_doc"),
cls.table.id.label("job_id"),
cls.table.doc.label("job_doc"),
)
.select_from(
func.jsonb_to_record(cls.tasks_table.doc).alias("x(job_id uuid)"),
)
.join(cls.table, func.jsonb_to_record(cls.tasks_table.doc), 'job_id' == cls.table.id)
)
</code></pre>
<p>yet this generates the following SQL query :</p>
<pre class="lang-sql prettyprint-override"><code>SELECT public.tasks.id AS task_id, public.tasks.doc AS task_doc, public.jobs.id AS job_id, public.jobs.doc AS job_doc
FROM public.tasks, jsonb_to_record(public.tasks.doc) AS "x(job_id uuid)"
JOIN public.jobs ON public.jobs.id = :id_1
</code></pre>
<p>The problem here is the quoting around <code>x(job_id uuid)</code>; without it, the SQL query would be correct.</p>
<p>Any idea how to pass the <code>x(job_id uuid)</code> part while using <code>jsonb_to_record</code> ?</p>
<p>My goal is to use this column for joining another table later in the <code>join_from</code> clause.</p>
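If I'm reading this right, the quoting happens because <code>alias("x(job_id uuid)")</code> treats the whole string as one identifier. SQLAlchemy 1.4+ has first-class support for this via <code>table_valued()</code> plus <code>render_derived(with_types=True)</code>, which emits the <code>AS x(job_id UUID)</code> column-definition list unquoted. A sketch with guessed table definitions:

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

metadata = sa.MetaData()
tasks = sa.Table("tasks", metadata,
                 sa.Column("id", postgresql.UUID, primary_key=True),
                 sa.Column("doc", postgresql.JSONB))
jobs = sa.Table("jobs", metadata,
                sa.Column("id", postgresql.UUID, primary_key=True),
                sa.Column("doc", postgresql.JSONB))

# Derive a typed record column from the jsonb document.
x = (
    sa.func.jsonb_to_record(tasks.c.doc)
    .table_valued(sa.column("job_id", postgresql.UUID))
    .render_derived(name="x", with_types=True)
)

stmt = (
    sa.select(tasks.c.id.label("task_id"), jobs.c.id.label("job_id"))
    .select_from(tasks)
    .join(x, sa.true())  # PostgreSQL allows FROM-clause functions to
                         # reference columns of earlier FROM items
    .join(jobs, jobs.c.id == x.c.job_id)
)
print(stmt.compile(dialect=postgresql.dialect()))
```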
|
<python><postgresql><sqlalchemy>
|
2024-05-20 14:45:01
| 1
| 5,386
|
jony89
|
78,507,236
| 2,502,331
|
Token (Access) Errors Connecting to MS SQL Server From DataBricks Python Notebooks Via PySPark JDBC Driver Using an Azure Service Principal and MSAL
|
<p>How do I resolve token (Active Directory access) errors when connecting to MS SQL Server from DataBricks Python notebooks via PySPark JDBC driver using an Azure/DataBricks Service Principal and MSAL (Microsoft Authentication Library)? I have tried both the PySpark JDBC driver package (current Maven coordinates: com.databricks:databricks-jdbc:2.6.25) and the Microsoft SQL Connector (current Maven coordinates: com.microsoft.azure:spark-mssql-connector_2.12:1.1.0) and MSAL. It works with ADAL but I get token errors with either of these libraries. What is the correct way to make this connection?</p>
|
<python><pyspark><jdbc><databricks><azure-ad-msal>
|
2024-05-20 14:40:09
| 1
| 978
|
StephenDonaldHuffPhD
|
78,507,222
| 6,622,697
|
Return output of subprocess.popen() in webserver
|
<p>I have a web server in Python using Django. I want to be able to kick off a long-running asynchronous subprocess and then have the client poll with a GET or POST and receive stdout as well as other information. For each iteration, the server will return the lines of stdout that it has so far. When the process ends, additional information will be returned.
How can I save the instance of <code>cmd = subprocess.Popen()</code> so that on subsequent GET/POST requests, it can make the appropriate calls (e.g. <code>cmd.poll</code>, <code>cmd.stdout.readline()</code>, etc.)?</p>
<p>I've tried using Django's session manager, but the popen object is not Json serializable</p>
<h1>Update</h1>
<p>As suggested in the comments, instead of trying to save the popen object, I just continue the thread after the request has returned, and I save the stdout lines in the session.</p>
<p>But I'm having a problem updating the session object after the request has already returned. It appears to be getting updated but upon the next request, the session object does not have any of the entries that were added after the request was completed.</p>
<p>Here is the code (<code>views.py</code>). I'm using
<code>SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'</code> to specify cookie-based sessions.</p>
<pre><code>import threading
from os.path import expanduser
from rest_framework.decorators import api_view
from rest_framework.response import Response
import subprocess
# Separate thread to continue running our external task
def monitor_task(cmd, session):
print('running thread')
session['stdout'] = []
while True:
line = cmd.stdout.readline()
print('line: ', line.replace('\n', ''))
if not line:
print('breaking')
break
session['stdout'].append(line.replace('\n', ''))
print(session['stdout'])
print('task is ending')
@api_view(['GET'])
def run_ngen(request):
if request.method == 'GET':
cmd = subprocess.Popen([expanduser("~/testSpawn.sh"), "10"], shell=False, bufsize=-1,
encoding='utf-8',
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
x = threading.Thread(target=monitor_task, args=(cmd, request.session,))
x.start()
return Response('request started')
@api_view(['GET'])
def get_output(request):
    # Get the output that was put into the session by monitor_task
print('session: ', request.session.items())
stdout = request.session['stdout']
print('There are', len(stdout), 'lines')
print(stdout)
return Response(stdout)
</code></pre>
<p>The external program I'm running is</p>
<pre><code>#!/bin/bash
sleep=5
if [ -n "$1" ]
then
sleep=$1
fi
echo line1
echo line2
echo an error 1>&2
echo Sleeping for $sleep seconds
sleep $sleep
echo line3
echo another error 1>&2
echo line4
</code></pre>
<p>The problem I'm seeing is that any lines added to the session <em>after</em> the initial return of the request do not appear in the session later.</p>
<p>The thread that runs the external task seems to work fine and adds all 5 lines to the session
<a href="https://i.sstatic.net/pzIUTzsf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pzIUTzsf.png" alt="enter image description here" /></a></p>
<p>However, when I hit the <code>get_output</code> endpoint that is supposed to return the contents of stdout, it only shows the first 3 lines (before the sleep)</p>
<p><a href="https://i.sstatic.net/cWkbFidg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWkbFidg.png" alt="enter image description here" /></a></p>
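The likely cause is that cookie-based sessions are serialized into the response at the moment it is sent, so writes made by the background thread afterwards never reach the client. One hedged sketch (my own names; single-process dev server only — with multiple workers you'd want the database, a cache, or Celery instead) keeps the output in a process-local registry keyed by a task id that the client polls with:

```python
import subprocess
import threading
import uuid

# Process-local registry of running tasks; survives between requests only
# as long as there is a single server process.
TASKS = {}

def _monitor(task_id, proc):
    """Collect stdout lines as they appear, then record the exit code."""
    for line in proc.stdout:
        TASKS[task_id]["stdout"].append(line.rstrip("\n"))
    proc.wait()
    TASKS[task_id]["returncode"] = proc.returncode

def start_task(argv):
    """Spawn argv, collect its stdout in the background, return a task id."""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, encoding="utf-8")
    task_id = uuid.uuid4().hex
    thread = threading.Thread(target=_monitor, args=(task_id, proc), daemon=True)
    TASKS[task_id] = {"stdout": [], "returncode": None, "thread": thread}
    thread.start()
    return task_id

def get_output(task_id):
    """What the polling endpoint would return for this task id."""
    task = TASKS[task_id]
    return {"stdout": list(task["stdout"]), "returncode": task["returncode"]}
```

The views would then store only the (JSON-serializable) task id in the session or hand it to the client directly.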
|
<python><python-3.x><django><webserver><popen>
|
2024-05-20 14:38:35
| 1
| 1,348
|
Peter Kronenberg
|
78,507,209
| 785,349
|
Error Building 'pycapnp' for AppScale on Ubuntu 18.04 with Python 2.7
|
<p>As a Java Developer transitioning into the Python ecosystem, I've noticed a difference in how dependencies are handled between the two languages. In Java, project dependencies remain intact and the project builds reliably over time.</p>
<p>However, my experience with Python has shown it to be more fragile in this aspect. For instance, I've encountered issues with a <a href="https://github.com/AppScale/gts" rel="nofollow noreferrer">Python-based platform</a> that previously worked well before the deprecation of Python 2.7.</p>
<pre class="lang-bash prettyprint-override"><code>curl -Lo bootstrap.sh https://raw.githubusercontent.com/AppScale/gts/master/bootstrap.sh
bash bootstrap.sh
</code></pre>
<p>Running this now throws cascading errors.</p>
<p>I had to manually install the Cython module this time, whereas in the past, it was not necessary as the build didn't fail. The issue now is with the required module 'pycapnp' for <a href="https://github.com/AppScale/gts" rel="nofollow noreferrer">AppScale</a>, which I have to build myself, but it throws an error.</p>
<pre><code>root@appscale1:~# git clone https://github.com/capnproto/pycapnp.git
root@appscale1:~/pycapnp# pip install .
Processing /root/pycapnp
Complete output from command python setup.py egg_info:
Error compiling Cython file:
------------------------------------------------------------
...
DynamicValue.Reader new_server(InterfaceSchema&, PyObject *)
Capability.Client server_to_client(InterfaceSchema&, PyObject *)
PyPromise convert_to_pypromise(RemotePromise)
PyPromise convert_to_pypromise(VoidPromise)
VoidPromise taskToPromise(Own[PyRefCounter] coroutine, PyObject* callback)
void allowCancellation(CallContext context) except +reraise_kj_exception nogil
^
------------------------------------------------------------
</code></pre>
<p>My machine is running Ubuntu 18.04 with Python 2.7 installed.</p>
|
<python><python-2.7>
|
2024-05-20 14:35:48
| 0
| 35,657
|
quarks
|
78,507,093
| 20,920,790
|
How to make ID with specified length with hashlib?
|
<p>I need unique IDs to use as primary keys in a database and for joining tables.
How can I make an ID with the length I need?
For example, my string:</p>
<pre><code>'827263877-13969916-1800-1-0.0'
</code></pre>
<p>I use this code to make ID:</p>
<pre><code>def make_unique_id(*args):
data = '-'.join(str(args))
id = hashlib.md5(data.encode('ascii')).hexdigest()
id_int = int(id, 32)
return id_int
</code></pre>
<p>Result:</p>
<pre><code>test_id_digest: '051fbd93807b21c48f7a579dbd48e13c'
test_id_final_int: 6810946243257077380510446529265852732
</code></pre>
<p>How can I generate an integer ID that is 10 digits long?</p>
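<p>A hash digest can be folded into a fixed number of decimal digits by interpreting the hex digest as a base-16 integer and taking it modulo <code>10**n</code>. Note this shrinks the key space to only <code>10**10</code> distinct values, which raises the collision probability — an assumption you would have to accept for a primary key. A minimal sketch (the function name is illustrative):</p>

```python
import hashlib

def make_short_id(*args, digits=10):
    # Join the arguments themselves (not the characters of their repr) with '-'
    data = "-".join(str(a) for a in args)
    digest = hashlib.md5(data.encode("utf-8")).hexdigest()
    # Interpret the hex digest as base-16 and keep `digits` decimal digits
    return int(digest, 16) % 10 ** digits

uid = make_short_id(827263877, 13969916, 1800, 1, 0.0)
```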
|
<python><hashlib>
|
2024-05-20 14:11:36
| 1
| 402
|
John Doe
|
78,506,950
| 11,439,134
|
Access row item of Gtk.ListView
|
<p>I need to add a custom class to some rows in a GtkListView widget so that I can change their background colour. Looking on the inspector, ListItem rows are instances of GtkListItemWidget, but I can't find anything in the docs or online regarding how to access these widgets.</p>
<p>I am using Gtk 4. Would anyone have any insights on how to access ListItem rows?</p>
|
<python><listview><gtk><gtk4>
|
2024-05-20 13:42:44
| 1
| 1,058
|
Andereoo
|
78,506,927
| 525,865
|
Getting all the links that are "stored" in the page: I investigated the page thoroughly
|
<p>How can I get all the links that are "stored" in the page?</p>
<p><a href="https://www.wohnungsbaugenossenschaften.de/gaestewohnung-finden/teilnehmende-genossenschaften" rel="nofollow noreferrer">https://www.wohnungsbaugenossenschaften.de/gaestewohnung-finden/teilnehmende-genossenschaften</a></p>
<p>I tried to search in the basic text of the page.</p>
<p>I investigated the page thoroughly and saw a listing of approx. 100 links, but did not manage to extract them all at once.</p>
<p><strong>Update</strong>: I tried out this one with BS4:</p>
<pre><code>import requests
from bs4 import BeautifulSoup, SoupStrainer
source_code = requests.get('https://www.wohnungsbaugenossenschaften.de/gaestewohnung-finden/teilnehmende-genossenschaften')
soup = BeautifulSoup(source_code.content, 'lxml')
links = []
for link in soup.find_all('a'):
links.append(str(link))
</code></pre>
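<p>For completeness: extracting the URLs themselves needs <code>link.get('href')</code> rather than <code>str(link)</code>. A dependency-free sketch of href extraction using only the standard library is shown below on a small inline HTML string (a hypothetical example, not the target page). Note that if the ~100 links on that page are injected by JavaScript, no static parser will see them in the fetched HTML, and a browser-driven tool such as Selenium would be needed — that assumption is worth checking first:</p>

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

html = '<p><a href="/a">A</a> <a href="https://example.com/b">B</a> <a>no href</a></p>'
parser = LinkCollector()
parser.feed(html)
```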
|
<python><html><web-scraping>
|
2024-05-20 13:37:30
| 1
| 1,223
|
zero
|
78,506,798
| 4,948,165
|
Keep/Build a map when doing multiple pandas groupby operations
|
<p>Imagine a process, where we do several pandas groupbys.</p>
<p>We start with a df like so:</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(1)
df = pd.DataFrame({
'id': np.arange(10),
'a': np.random.randint(1, 10, 10),
'b': np.random.randint(1, 10, 10),
'c': np.random.randint(1, 10, 10)
})
df
Out[23]:
id a b c
0 0 2 8 2
1 1 8 8 9
2 2 7 2 9
3 3 3 8 4
4 4 5 1 9
5 5 6 7 8
6 6 3 8 4
7 7 5 7 7
8 8 3 2 6
9 9 5 1 2
</code></pre>
<p>We perform a groupby 'a' (The 'max' agg is just an example)</p>
<pre><code>new_df = df.groupby('a').agg('max').reset_index()
Out[25]:
a id b c
0 2 0 8 2
1 3 8 8 6
2 5 9 7 9
3 6 5 7 8
4 7 2 2 9
5 8 1 8 9
</code></pre>
<p>And I want to keep track of which group each original id belongs to.
For example:</p>
<pre><code>id 0 belongs to a = 2,
1 to 8,
2 to 7,
(3, 6, 8) belongs to 3
etc..
</code></pre>
<p>Afterward we perform another groupby:</p>
<pre><code>new_df.groupby('b').agg('max').reset_index()
Out[28]:
b a id c
0 2 7 2 9
1 7 6 9 9
2 8 8 8 9
</code></pre>
<p>Now we have a continued mapping,
Where</p>
<pre><code>group a (2, 3, 8) belongs to group 8 (of b)
(5, 6) = 7
7 = 2
</code></pre>
<p>And this result in a long map where the original id:</p>
<pre><code>0 => a = 2 => b = 8 (where b = 8 is the final group that interests me)
1 => a = 8 => b = 8
2 => a = 7 => b = 2
</code></pre>
<p>And so on..</p>
<p>Now I do this in order to reduce a lot of entities in my data so that I can group them in the same bucket somehow. And I need to map them from their original id to a new id that results from many iterations of groupby.</p>
<p>In the end I want to see something like so</p>
<pre><code>Out[32]:
id grp
0 0 8
1 1 8
2 2 2
3 3 8
4 4 7
5 5 7
6 6 8
7 7 7
8 8 8
9 9 7
</code></pre>
<p>again, because id=0 went to a=2 and a=2 went to b=8..</p>
<p>Any solution or suggestion will be most welcome. Even carrying the values in a column dedicated to this, with each group by.
And when we aggregate, we can do a set addition...</p>
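<p>One way to carry the mapping is to record, at each groupby step, which group key every row went to, and then compose those per-step maps at the end. A small sketch on a hand-made frame (values chosen for readability rather than taken from the seeded example; the aggregation is <code>'max'</code> as in the question):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "id": [0, 1, 2, 3],
    "a":  [2, 8, 7, 2],
    "b":  [8, 8, 2, 8],
    "c":  [2, 9, 9, 4],
})

# Step 1: remember which 'a' group each id went to, then aggregate
id_to_a = df.set_index("id")["a"]
g1 = df.groupby("a", as_index=False).agg("max")

# Step 2: remember which 'b' group each 'a' group went to, then aggregate
a_to_b = g1.set_index("a")["b"]
g2 = g1.groupby("b", as_index=False).agg("max")

# Compose the per-step maps: id -> a -> b
final = id_to_a.map(a_to_b).rename("grp").reset_index()
```

<p>Each additional groupby just adds one more <code>.map(...)</code> in the composition, so the chain stays cheap even over many iterations.</p>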
|
<python><pandas><group-by>
|
2024-05-20 13:08:12
| 2
| 3,238
|
Eran Moshe
|
78,506,774
| 11,233,365
|
Rewriting URL request function to satisfy GitHub CodeQL server side request forgery (SSRF) warning
|
<p>I'm working on a function that returns a HTTP response from <a href="https://pypi.org/simple/" rel="nofollow noreferrer">https://pypi.org/simple/</a> when Python's <code>pip</code> installer requests it for a package. When pushing my code onto GitHub, the CodeQL checks warn of the risk of server side request forgery (SSRF), and asks me to create validation checks for the "user-defined input" (which is <code>pip</code>, in this case).</p>
<p>I have already made many attempts at validating the URL to satisfy this SSRF warning, but GitHub CodeQL has not accepted any of them so far. How can I rewrite the following to satisfy GitHub CodeQL's requirements for guarding against SSRF?</p>
<p><strong>The relevant block of code:</strong></p>
<pre class="lang-py prettyprint-override"><code>import requests
from fastapi import APIRouter, Response
pypi = APIRouter(prefix="/pypi", tags=["bootstrap"])
@pypi.get("/{package}/", response_class=Response)
def get_pypi_package_downloads_list(package: str) -> Response:
"""
Obtain list of all package downloads from PyPI via the simple API (PEP 503).
"""
url = f"https://pypi.org/simple/{package}"
full_path_response = requests.get(url)
</code></pre>
<p>The following is a non-exhaustive overview of attempts I've tried in order to satisfy that SSRF warning. However, none of them have worked for me.</p>
<pre class="lang-py prettyprint-override"><code># Attempt 1
# Check that it's a PyPI URL
url = f"https://pypi.org/simple/{package}"
if "pypi" in url:
full_path_response = requests.get(url)
else:
raise ValueError("This is not a valid package")
# Attempt 2
# Validate that package name is alphanumeric (allow _ and -)
if package.replace("_", "").replace("-", "").isalnum():
url = f"https://pypi.org/simple/{package}"
full_path_response = requests.get(url)
else:
raise ValueError("This is not a valid package")
# Attempt 3
# Check that it's a valid connection
with requests.get("https://pypi.org/simple/{package}") as http_response:
if http_response.status_code == 200:
full_path_response = http_response
else:
raise ValueError("This is not a valid package")
# Attempt 4
# Tried using RegEx matching to validate package name
if re.match(r"^[a-z0-9\_\-]+$", package):
full_path_response = requests.get(f"https://pypi.org/simple/{package}")
else:
raise ValueError("This is not a valid package")
# Attempt 5
# Use urllib.parse.urlparse to parse and validate the url
def validate_url(url: str) -> bool:
parsed_url = urlparse(url)
if parsed_url.scheme == "https" and parsed_url.hostname == "pypi.org":
return True
else:
return False
def validate_package(package: str) -> bool:
if package.replace("_", "").replace("-", "").isalnum():
return True
else:
return False
# Validate package and URL
if validate_package(package) and validate_url(f"https://pypi.org/simple/{package}"):
full_path_response = requests.get(
f"https://pypi.org/simple/{package}"
) # Get response from PyPI
else:
raise ValueError("This is not a valid package")
# Attempt 6
# Using a Pydantic model
from pydantic import BaseModel, HttpUrl, ValidationError
class UrlValidator(BaseModel):
url: HttpUrl
def validate(url: str):
try:
UrlValidator(url=url)
except ValidationError:
log.error(f"{url} was not a valid URL")
return False
else:
log.info(f"{url} was a valid URL")
return True
# Attempt at URL validation to satisfy GitHub CodeQL requirements
url = f"https://pypi.org/simple/{package}"
if validate(url):
full_path_response = requests.get(url)
# Attempt 7
# Encoding string before injection
from urllib.parse import quote_plus
def _validate_package_name(package: str) -> bool:
# Check that it only contains alphanumerics, "_", or "-", and isn't excessively long
if re.match(r"^[a-z0-9\-\_]+$", package):
return True
else:
return False
def _get_full_path_response(package: str) -> requests.Response:
# Sanitise string
package_clean = quote_plus(package)
print(f"Cleaned package: {package_clean}")
# Validation checks
if _validate_package_name(package_clean):
url = f"https://pypi.org/simple/{package_clean}"
print(f"URL: {url}")
return requests.get(url)
else:
raise ValueError(f"{package_clean} is not a valid package name")
full_path_response = _get_full_path_response(package)
# Attempt 8
# The nuclear option of maintaining a list of approved packages
approved_packages: list = [pkg.lower() for pkg in approved_packages] # List of package names from running `conda env list`
# Validate package and URL
if package.lower() in approved_packages:
url = f"https://pypi.org/simple/{package}"
full_path_response = requests.get(url)
else:
raise ValueError(f"{package} is not a valid package name")
</code></pre>
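<p>One detail worth noting about attempt 8: the package name is still interpolated into the URL string, so the taint path CodeQL tracks from user input to <code>requests.get</code> stays alive. A pattern that breaks that flow entirely is to look up a fully constant URL from an allow-list, so the user value never appears in the request URL at all. A sketch (the dict contents and function name are hypothetical; whether your CodeQL configuration accepts this should be confirmed):</p>

```python
# Allow-list of fully constant URLs: the user-supplied `package`
# never flows into the string passed to the HTTP client.
APPROVED_PACKAGE_URLS = {
    "requests": "https://pypi.org/simple/requests/",
    "numpy": "https://pypi.org/simple/numpy/",
}

def resolve_package_url(package: str) -> str:
    url = APPROVED_PACKAGE_URLS.get(package.lower())
    if url is None:
        raise ValueError(f"{package} is not an approved package")
    return url

# The caller would then do: requests.get(resolve_package_url(package))
```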
<p>Thanks!</p>
|
<python><github><codeql><ssrf>
|
2024-05-20 13:03:29
| 1
| 301
|
TheEponymousProgrammer
|
78,506,517
| 7,123,797
|
What thing is responsible for the explicit line joining?
|
<p>It seems that the Python tokenizer isn't responsible for the explicit line joining. I mean if we write the following code in file <code>script.py</code>:</p>
<pre><code>"one \
two"
</code></pre>
<p>and then type <code>python -m tokenize script.py</code> in the command prompt, we will got the following table:</p>
<pre><code>0,0-0,0: ENCODING 'utf-8'
1,0-2,4: STRING '"one \\\ntwo"'
2,4-2,5: NEWLINE '\n'
3,0-3,0: ENDMARKER ''
</code></pre>
<p>This means that the second token contains the string <code>'"one \\\ntwo"'</code>. But I expected it to be the string <code>'"one two"'</code> instead. So what thing actually handles the explicit line joining in Python? Maybe the Python parser does it?<br />
Or maybe there is some separate "evaluation stage" between the tokenization and parsing stages, where each token string is transformed into a more convenient representation for parsing (this stage is responsible for explicit line joining)?</p>
<p>Wikipedia has some words about the <a href="https://en.wikipedia.org/wiki/Lexical_analysis#Evaluator" rel="nofollow noreferrer">evaluation stage</a> in the general programming context (i.e. it doesn't talk specifically about Python), but this description is not very clear for me, and I couldn't come up with any more examples (besides the explicit line joining in string literal) when such a potential evaluation is necessary in Python. And the existence of such evaluation stage would mean that the parser doesn't get its input directly from the tokenizer...</p>
<p>I also tested how the Python tokenizer treats the explicit line joining outside the string literals. I entered</p>
<pre><code>2 + \
3
</code></pre>
<p>And got the following tokens:</p>
<pre><code>0,0-0,0: ENCODING 'utf-8'
1,0-1,1: NUMBER '2'
1,2-1,3: OP '+'
2,0-2,1: NUMBER '3'
2,1-2,2: NEWLINE '\n'
3,0-3,0: ENDMARKER ''
</code></pre>
<p>In this case the tokenizer simply removed the <code>\</code> symbol, as I expected.</p>
|
<python><language-lawyer><tokenize>
|
2024-05-20 12:12:15
| 0
| 355
|
Rodvi
|
78,506,404
| 1,937,473
|
Pandas apply with "expand result" results in NaN columns when using loc but not [ ]
|
<p>I am having some issues around setting new column values that are produced by a pandas <code>apply</code> function. If you scroll to the bottom I've provided a MRE you should be able to copy-paste and run.</p>
<h2>Explanation</h2>
<p>Let's assume we have a DataFrame <code>df</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({"a":["ABC", "DEF", "GHI"], "b":[1, 2, 3]})
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">a</th>
<th style="text-align: right;">b</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">ABC</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">DEF</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">GHI</td>
<td style="text-align: right;">3</td>
</tr>
</tbody>
</table></div>
<p>The specific data is irrelevant. Let's assume I have another function <code>some_function()</code> that operates on both columns, performs some logic, and returns a <code>tuple</code> of two values that I wish to make into columns.</p>
<p>The logic inside <code>some_function()</code> is irrelevant again, in this example it appends the value of <code>b</code> to an upper and lowercase <code>a</code> value.</p>
<pre class="lang-py prettyprint-override"><code>def some_func(param_a: str, param_b: int) -> tuple[str, str]:
output_a = param_a + " " + str(param_b)
output_b = param_a.lower() + " " + str(param_b)
return output_a, output_b
</code></pre>
<p>If you use <code>[]</code> indexing to insert the columns, then you get:</p>
<pre class="lang-py prettyprint-override"><code>df[["output_a", "output_b"]] = df.apply(
lambda row:
some_func(row["a"], row["b"]),
axis=1,
result_type="expand"
)
</code></pre>
<p>Which results in <code>print(df)</code> giving this (the desired output):</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">a</th>
<th style="text-align: right;">b</th>
<th style="text-align: left;">output_a</th>
<th style="text-align: left;">output_b</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">ABC</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">ABC 1</td>
<td style="text-align: left;">abc 1</td>
</tr>
<tr>
<td style="text-align: left;">DEF</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">DEF 2</td>
<td style="text-align: left;">def 2</td>
</tr>
<tr>
<td style="text-align: left;">GHI</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">GHI 3</td>
<td style="text-align: left;">ghi 3</td>
</tr>
</tbody>
</table></div>
<p>However, if you use the <code>.loc</code> operator instead like so:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[:, ["output_a", "output_b"]] = df.apply(
lambda row:
some_func(row["a"], row["b"]),
axis=1,
result_type="expand"
)
</code></pre>
<p>Then <code>print(df)</code> gives:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">a</th>
<th style="text-align: right;">b</th>
<th style="text-align: right;">output_a</th>
<th style="text-align: right;">output_b</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">ABC</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: left;">DEF</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: left;">GHI</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
</tbody>
</table></div>
<h2>Question</h2>
<p>Is this behaviour expected, or is this a bug? Alternatively, is there something that I am doing wrong?</p>
<h2>MRE</h2>
<p>You should be able to copy this and run it locally. You will need <code>pandas==2.2.1</code>.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({"a":["ABC", "DEF", "GHI"], "b":[1, 2, 3]})
df2 = df.copy()
def some_func(param_a: str, param_b: int) -> tuple[str, str]:
output_a = param_a + " " + str(param_b)
output_b = param_a.lower() + " " + str(param_b)
return output_a, output_b
df[["output_a", "output_b"]] = df.apply(lambda row: some_func(row["a"], row["b"]), axis=1, result_type="expand")
df2.loc[:, ["output_a", "output_b"]] = df.apply(lambda row: some_func(row["a"], row["b"]), axis=1, result_type="expand")
print("DF 1 with [ ]")
print(df)
print()
print("DF 2 with .loc")
print(df2)
</code></pre>
<p>Has anyone experienced this issue before, or found some way around it? Alternatively, what is the correct way of accomplishing this?</p>
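<p>A plausible explanation (worth verifying against the pandas indexing docs) is that <code>.loc</code>-based setting aligns a DataFrame right-hand side on column <em>labels</em>: <code>apply(..., result_type="expand")</code> returns columns named <code>0</code> and <code>1</code>, which match neither <code>output_a</code> nor <code>output_b</code>, so alignment fills NaN; <code>[]</code> assignment takes the values positionally instead. A sketch showing the label mismatch and one workaround:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": ["ABC", "DEF", "GHI"], "b": [1, 2, 3]})

def some_func(param_a: str, param_b: int):
    return param_a + " " + str(param_b), param_a.lower() + " " + str(param_b)

res = df.apply(lambda row: some_func(row["a"], row["b"]), axis=1, result_type="expand")
cols_before = list(res.columns)  # integer labels, not the target names

# Workaround: name the result's columns before assigning, so the
# labels line up whether assignment aligns or is positional
res.columns = ["output_a", "output_b"]
df[["output_a", "output_b"]] = res
```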
|
<python><python-3.x><pandas><dataframe>
|
2024-05-20 11:43:56
| 0
| 333
|
DLJ
|
78,506,400
| 3,156,085
|
Annotating a decorator with generic Protocols causes "arg-type" (incompatible type) error involving `Never`
|
<p>I'm trying to implement and annotate a decorator for functions without keyword arguments (but with variable number of arguments) using <a href="https://docs.python.org/3/library/typing.html#typing.Protocol" rel="nofollow noreferrer"><code>typing</code>'s <code>Protocol</code></a> as <a href="https://docs.python.org/3/library/typing.html#user-defined-generic-types" rel="nofollow noreferrer">user-defined <code>Generic</code> types</a>.</p>
<p>But MyPy issues me the following errors (repeated in the MRE above the lines causing it):</p>
<ul>
<li><p><code>error: Argument 1 to "my_decorator" has incompatible type "Callable[[], int]"; expected "FunctionToDecorate[Never, Never]" [arg-type]</code>.</p>
</li>
<li><p><code>error: Argument 1 to "my_decorator" has incompatible type "Callable[[int], int]"; expected "FunctionToDecorate[Never, Never]" [arg-type]</code>.</p>
</li>
<li><p><code>error: Argument 1 to "my_decorator" has incompatible type "Callable[[int, Any], Any]"; expected "FunctionToDecorate[Never, Never]" [arg-type]</code>.</p>
</li>
</ul>
<p>What is it caused by and how to solve it?</p>
<p>Is it caused by the way I use <code>ParamSpec</code> in my <code>Protocol</code>s?</p>
<p>Why does it involve <code>Never</code> and why are the received types not consistent with the expected <code>FunctionToDecorate[Never, Never]</code>?</p>
<p><strong>MRE (<a href="https://mypy-play.net/?mypy=latest&python=3.10&gist=0fd7c7644f2d8a6386b939b30b58f323" rel="nofollow noreferrer">MyPy playground here</a>):</strong></p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3.10
from typing import Protocol, ParamSpec, Generic, TypeVar
T = TypeVar("T", covariant=True)
P = ParamSpec("P")
# Callable types (as Protocols):
class FunctionToDecorate(Generic[T, P], Protocol):
def __call__(self, *args: P.args) -> T: ...
class DecoratedFunction(Generic[T, P], Protocol):
def __call__(self, *args: P.args) -> T: ...
# The decorator
def my_decorator(f: FunctionToDecorate[T, P]) -> DecoratedFunction[T, P]:
def my_decorated_function(*args) -> T:
return f(*args)
return my_decorated_function
# error: Argument 1 to "my_decorator" has incompatible type "Callable[[], int]"; expected "FunctionToDecorate[Never, Never]" [arg-type]
@my_decorator
def get_val() -> int:
return 1
# error: Argument 1 to "my_decorator" has incompatible type "Callable[[int], int]"; expected "FunctionToDecorate[Never, Never]" [arg-type]
@my_decorator
def double_val(val: int) -> int:
return val * 2
# error: Argument 1 to "my_decorator" has incompatible type "Callable[[int, Any], Any]"; expected "FunctionToDecorate[Never, Never]" [arg-type]
@my_decorator
def get_val_with_kwarg(val: int, kw=None):
return val
def main():
print("get_val()`s return value:", get_val())
print("double_val()`s return value:", double_val(2))
print("get_val_with_kwarg()`s return value:", get_val_with_kwarg(3))
if __name__ == "__main__":
main()
</code></pre>
<p>The output:</p>
<pre><code>get_val()`s return value: 1
double_val()`s return value: 4
get_val_with_kwarg()`s return value: 3
</code></pre>
<hr />
<p><strong>NB:</strong></p>
<p>The use of <code>Protocol</code> is justified by the complexity of the involved signatures <code>Callable</code> <a href="https://docs.python.org/3/library/typing.html#annotating-callable-objects" rel="nofollow noreferrer">can't represent</a>. The above MRE is simply what my problem seems to boil down to when using these tools.</p>
<p><strong>NB-2:</strong></p>
<p>I'm also aware of the simpler syntax introduced by Python 3.12 for generic types. The use of <code>TypeVar</code> is justified by a backward compatibility constraint to Python 3.10.</p>
|
<python><mypy><python-typing><python-3.10>
|
2024-05-20 11:42:12
| 0
| 15,848
|
vmonteco
|
78,506,301
| 6,010,635
|
Select multi-index when one subindex obeys condition
|
<p>If I build a dataframe like this</p>
<pre><code>arrays = [
np.array(["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"]),
np.array(["one", "two", "one", "two", "one", "two", "one", "two"])]
df = pd.DataFrame(np.random.randn(8, 4), index=arrays)
0 1 2 3
bar one -0.424972 0.567020 0.276232 -1.087401
two -0.673690 0.113648 -1.478427 0.524988
baz one 0.404705 0.577046 1.715002 -1.039268
two -0.370647 -1.157892 1.344312 0.844885
foo one 1.075770 -0.109050 1.643563 -1.469388
two 0.357021 -0.674600 -1.776904 -0.968914
qux one -1.294524 0.413738 0.276662 -0.472035
two -0.013960 -0.362543 -0.006154 -0.923061
</code></pre>
<p>I want to select the entire multi-index when one part of the multi-index (a subindex) obeys a condition. In the example above, I want to select the full multi-index when the values of <code>two</code> in column <code>2</code> are less than 0, so I want</p>
<pre><code> 0 1 2 3
bar one -0.424972 0.567020 0.276232 -1.087401
two -0.673690 0.113648 -1.478427 0.524988
foo one 1.075770 -0.109050 1.643563 -1.469388
two 0.357021 -0.674600 -1.776904 -0.968914
qux one -1.294524 0.413738 0.276662 -0.472035
two -0.013960 -0.362543 -0.006154 -0.923061
</code></pre>
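<p>One approach: take a cross-section of the <code>two</code> rows, evaluate the condition there, and then keep every row whose first-level label passed. A sketch on a small deterministic frame (hand-made values, since the question uses random data):</p>

```python
import pandas as pd

idx = pd.MultiIndex.from_product([["bar", "baz"], ["one", "two"]])
df = pd.DataFrame(
    [[0.1, 0.2, 0.3, 0.4],
     [0.5, 0.6, -1.4, 0.8],   # bar/two: column 2 negative -> keep all of bar
     [0.9, 1.0, 1.1, 1.2],
     [1.3, 1.4, 1.5, 1.6]],   # baz/two: column 2 positive -> drop all of baz
    index=idx,
)

# Rows at sub-index 'two' where column 2 is below 0
hits = df.xs("two", level=1)[2] < 0

# Keep every row whose first-level label had a hit
result = df[df.index.get_level_values(0).isin(hits[hits].index)]
```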
|
<python><pandas><multi-index>
|
2024-05-20 11:19:37
| 1
| 1,297
|
David
|
78,506,138
| 2,338,792
|
How to convert odd number of hex digits to bytes without losing its leading zeros
|
<p>How can I convert, in Python, an odd number of hex digits to bytes without losing the leading zeros?</p>
<p>For example, the string '0001000' should be converted to the bytes 0001000.</p>
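<p><code>bytes.fromhex()</code> requires an even digit count, so an odd-length string has to be left-padded with one <code>'0'</code> first — which preserves the numeric value and the existing leading zeros. This assumes the intended interpretation is "prepend a zero nibble"; if the extra nibble should instead be right-padded, adjust accordingly:</p>

```python
def hex_to_bytes(s: str) -> bytes:
    # bytes.fromhex needs an even digit count; left-pad odd-length input
    if len(s) % 2:
        s = "0" + s
    return bytes.fromhex(s)

result = hex_to_bytes("0001000")
```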
|
<python>
|
2024-05-20 10:44:46
| 1
| 2,354
|
DavidS
|
78,506,114
| 2,487,835
|
SpaCy transformer NER training – zero loss on transformer, not trained
|
<p>I am training a spaCy pipeline with <code>['transformer', 'ner']</code> components. The ner component trains well, but the transformer is stuck at 0 loss and, I assume, is not training.</p>
<p>Here is my config:</p>
<pre class="lang-ini prettyprint-override"><code>[paths]
vectors = "en_core_web_trf"
init_tok2vec = null
train = "/home/sxdadmin/spacy/input/train.spacy"
dev = "/home/sxdadmin/spacy/input/dev.spacy"
[system]
gpu_allocator = "pytorch"
seed = 0
[nlp]
lang = "en"
pipeline = ["transformer", "ner"]
batch_size = 512
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
vectors = {"@vectors":"spacy.Vectors.v1"}
######################################################################
[components]
######################################################################
[components.transformer]
factory = "transformer"
max_batch_items = 4096
[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v1"
name = "bert-base-cased"
tokenizer_config = {"use_fast": true}
[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.doc_spans.v1"
[components.transformer.set_extra_annotations]
@annotation_setters = "spacy-transformers.null_annotation_setter.v1"
######################################################################
[components.ner]
factory = "ner"
incorrect_spans_key = null
moves = null
scorer = {"@scorers":"spacy.ner_scorer.v1"}
update_with_oracle_cut_size = 100
[components.ner.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
hidden_width = 64
maxout_pieces = 2
use_upper = true
nO = null
######################################################################
[corpora]
######################################################################
[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 3000
gold_preproc = false
limit = 0
augmenter = null
[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 3000
gold_preproc = false
limit = 0
augmenter = null
######################################################################
[training]
######################################################################
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
seed = 0
gpu_allocator = "pytorch"
dropout = 0.1
accumulate_gradient = 1
patience = 1600
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = []
annotating_components = []
before_to_disk = null
before_update = null
######################################################################
[training.batcher]
@batchers = "spacy.batch_by_words.v1"
discard_oversize = false
tolerance = 0.2
get_length = null
[training.batcher.size]
@schedules = "compounding.v1"
start = 64
stop = 512
compound = 1.001
t = 0.0
######################################################################
[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false
[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001
learn_rate = 0.001
[training.score_weights]
ents_f = 1.0
ents_p = 0.0
ents_r = 0.0
ents_per_type = null
######################################################################
[pretraining]
######################################################################
[initialize]
vectors = "en_core_web_lg"
init_tok2vec = null
vocab_data = null
lookups = null
before_init = null
after_init = null
[initialize.components]
[initialize.components.transformer]
[initialize.tokenizer]
</code></pre>
<p>and the output:</p>
<p><a href="https://i.sstatic.net/MWlXbDpB.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MWlXbDpB.jpg" alt="enter image description here" /></a></p>
<p>All warnings are addressed; BERT's well-known max_length of 512 tokens is handled by text segmentation. The data was previously tested on a <code>[tok2vec, ner]</code> setup.</p>
<p>Please help.</p>
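<p>One common cause of a flat-zero transformer loss in this kind of config is that the ner model has no listener connecting it to the transformer component, so no gradient ever flows back into it. The listener block typically sits under <code>[components.ner.model]</code>; a sketch of what it would look like is below (architecture and layer names follow the spacy-transformers documentation — verify them against your installed version):</p>

```ini
[components.ner.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0

[components.ner.model.tok2vec.pooling]
@layers = "reduce_mean.v1"
```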
|
<python><machine-learning><spacy><spacy-3><spacy-transformers>
|
2024-05-20 10:39:58
| 1
| 3,020
|
Lex Podgorny
|
78,506,106
| 11,562,537
|
How to set the same width to the ttk.Combobox and ttk.Entry widgets?
|
<p>I have to put three widgets (ttk.Combobox, ttk.Entry and ttk.Combobox) in series with the same width. Below an example:</p>
<pre><code>import tkinter as tk
from tkinter import ttk
def configure_column_widths(frame, num_columns):
for i in range(num_columns):
frame.grid_columnconfigure(i, weight=1)
root = tk.Tk()
root.title("Tkinter Example")
main_frame = ttk.Frame(root, padding="10")
main_frame.grid(row=0, column=0, sticky=(tk.W, tk.E, tk.N, tk.S))
configure_column_widths(main_frame, 3)
combobox1 = ttk.Combobox(main_frame)
combobox1.grid(row=0, column=0, padx=5, pady=5, sticky=(tk.W, tk.E))
entry = ttk.Entry(main_frame)
entry.grid(row=0, column=1, padx=5, pady=5, sticky=(tk.W, tk.E))
combobox2 = ttk.Combobox(main_frame)
combobox2.grid(row=0, column=2, padx=5, pady=5, sticky=(tk.W, tk.E))
root.grid_rowconfigure(0, weight=1)
root.grid_columnconfigure(0, weight=1)
main_frame.grid_rowconfigure(0, weight=1)
root.mainloop()
</code></pre>
<p>The ttk.Entry one doesn't have the same width than the other ones. How can I fix this issue?</p>
<p><a href="https://i.sstatic.net/0ke0GWkC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0ke0GWkC.png" alt="enter image description here" /></a></p>
|
<python><tkinter>
|
2024-05-20 10:38:33
| 1
| 918
|
TurboC
|
78,505,970
| 24,758,287
|
Python - Subtract Date by First Entry?
|
<p>I'm using polars library in python to manipulate some dataframe.</p>
<p>I'm trying to do the following:</p>
<p>For some dataframe:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Person..</th>
<th>Fight with</th>
<th>On</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>B</td>
<td>3 Jan</td>
</tr>
<tr>
<td>A</td>
<td>C</td>
<td>4 Jan</td>
</tr>
<tr>
<td>A</td>
<td>D</td>
<td>5 Jan</td>
</tr>
<tr>
<td>A</td>
<td>E</td>
<td>5 Jan</td>
</tr>
<tr>
<td>A</td>
<td>B</td>
<td>10 Jan</td>
</tr>
<tr>
<td>A</td>
<td>B</td>
<td>20 Jan</td>
</tr>
<tr>
<td>A</td>
<td>C</td>
<td>20 Jan</td>
</tr>
</tbody>
</table></div>
<p>I want to return the "distance" between the current fighter-<strong>pair</strong> and the first fight they had, such that:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Person..</th>
<th>Fight with</th>
<th>On</th>
<th>Distance</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>B</td>
<td>3 Jan</td>
<td>0 Days</td>
</tr>
<tr>
<td>A</td>
<td>C</td>
<td>4 Jan</td>
<td>0 Days</td>
</tr>
<tr>
<td>A</td>
<td>D</td>
<td>5 Jan</td>
<td>0 Days</td>
</tr>
<tr>
<td>A</td>
<td>E</td>
<td>5 Jan</td>
<td>0 Days</td>
</tr>
<tr>
<td>A</td>
<td>B</td>
<td>10 Jan</td>
<td>7 Days (i.e. 10 Jan - 3 Jan); (CurrentDate - ABFirstFight)</td>
</tr>
<tr>
<td>A</td>
<td>B</td>
<td>20 Jan</td>
<td>17 Days (i.e. 20 Jan - 3 Jan); (CurrentDate - ABFirstFight)</td>
</tr>
<tr>
<td>A</td>
<td>C</td>
<td>20 Jan</td>
<td>16 Days (i.e. 20 Jan - 4 Jan); (CurrentDate - ACFirstFight)</td>
</tr>
</tbody>
</table></div>
<p><strong>What I've tried:</strong></p>
<ol>
<li>polars "first" function: Only returned the head of the dataframe</li>
<li>polars "first" function with some combinations of "over"/"group_by"/"rolling" functions: Returned some numbers, but I can't make sense of why the output was that way</li>
</ol>
<p>Does anyone have any advice on how to attempt this?</p>
<p>I think I might need to use some combination of "group_by" or "over", "first", and perhaps "sub" (to subtract two dates?), but I'm not sure how to proceed.
The hardest part for me is to try to extract the first entry of a given group (e.g. first date entry of the A-B pair, or the A-C pair, etc.)</p>
|
<python><dataframe><datetime><grouping><python-polars>
|
2024-05-20 10:13:21
| 2
| 301
|
user24758287
|
78,505,858
| 12,415,855
|
Copy cells with text and font-color with openpyxl?
|
<p>I am trying to copy some cells with colored fonts from one worksheet to another using openpyxl.
Cell A1 is colored as a whole, and cell A2 is rich-text colored (so only a part of the cell is colored).</p>
<p><a href="https://i.sstatic.net/Zq7oRumS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zq7oRumS.png" alt="enter image description here" /></a></p>
<p>Using the following code:</p>
<pre><code>import openpyxl as ox
import os
import sys
path = os.path.abspath(os.path.dirname(sys.argv[0]))
fn = os.path.join(path, "test.xlsx")
wb = ox.load_workbook(fn, rich_text=True)
ws = wb["input"]
wsOut = wb["output"]
cell1 = ws["A1"]
cell1NEW = wsOut["A1"]
cell1NEW.value = cell1.value
cell1 = ws["A2"]
cell1NEW = wsOut["A2"]
cell1NEW.value = cell1.value
wb.save("test.xlsx")
</code></pre>
<p>But when I run the code, only cell A2 is copied correctly.
For cell A1, only the text is copied, without the coloring/style.</p>
<p><a href="https://i.sstatic.net/2HzGUjM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2HzGUjM6.png" alt="enter image description here" /></a></p>
<p>How can I copy the coloring (the style) for both cells?</p>
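<p>For the plain-colored cell I suspect the style objects have to be copied explicitly, e.g. with <code>copy</code> - this sketch works for a whole-cell font, though I don't know how it interacts with rich text:</p>
<pre><code>from copy import copy

import openpyxl as ox
from openpyxl.styles import Font

wb = ox.Workbook()
ws = wb.active
ws["A1"] = "colored text"
ws["A1"].font = Font(color="FFFF0000")  # red font on the whole cell

# copy the value AND a fresh copy of the font object (styles do not come along with .value)
ws["A2"].value = ws["A1"].value
ws["A2"].font = copy(ws["A1"].font)
</code></pre>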
|
<python><openpyxl>
|
2024-05-20 09:53:15
| 1
| 1,515
|
Rapid1898
|
78,505,840
| 12,291,425
|
Cannot get Azure subscriptions using `msal` and requests in python through REST API
|
<p>I can get subscriptions using a token captured from the browser:
<a href="https://i.sstatic.net/ex72jBvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ex72jBvI.png" alt="REST API result" /></a></p>
<p>Then I switch to Python and <code>msal</code>.
I use the following code:</p>
<pre class="lang-py prettyprint-override"><code>import msal
import requests
import sys
import json
data = json.load(open("parameters.json"))
config = {
"authority": f"https://login.microsoftonline.com/{data['tenant']}",
"client_id": data["client_id"],
"client_secret": data["client_secret"],
"scopes": [
"https://management.azure.com/.default",
]
}
app = msal.ConfidentialClientApplication(
data["client_id"],
authority=f"https://login.microsoftonline.com/{data['tenant']}",
client_credential=data["client_secret"],
)
result = app.acquire_token_for_client(scopes=[
"https://management.azure.com/.default",
])
if "access_token" in result:
print("success")
else:
print(result.get("error"))
print(result.get("error_description"))
print(result.get("correlation_id"))
sys.exit(-1)
headers = {
"Authorization": f"Bearer {result['access_token']}"
}
response = requests.get(
headers=headers, url="https://management.azure.com/subscriptions?api-version=2019-08-01")
if response.status_code == 200:
print(response.content)
else:
print(response.status_code)
</code></pre>
<p>However, I cannot get desired result:</p>
<pre><code>success
b'{"value":[],"count":{"type":"Total","value":0}}'
</code></pre>
<p>The permissions are already granted:
<a href="https://i.sstatic.net/vtii3eo7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vtii3eo7.png" alt="Permissions: user_impersonation" /></a></p>
<p>I tried to add "https://management.azure.com//user_impersonation" to scope, but it fails with error:</p>
<pre><code>AADSTS1002012: The provided value for scope https://management.azure.com//user_impersonation is not valid. Client credential flows must have a scope value with /.default suffixed to the resource identifier (application ID URI).
</code></pre>
<p>I read about the "OBO" flow, but my app isn't supposed to need user interaction.</p>
<p>Am I missing something in the auth flow or the permissions?</p>
|
<python><azure><azure-ad-msal><azure-rbac>
|
2024-05-20 09:47:54
| 1
| 558
|
SodaCris
|
78,505,497
| 8,205,554
|
How to define new values and sort by those values when combining a list with itself?
|
<p>I have a pandas dataframe like follows,</p>
<pre><code>+------------+-------------------+---------+--------+----------+
| process_no | process_durations | columns | orders | customer |
+------------+-------------------+---------+--------+----------+
| 0 | 3 | [0] | [3109] | [0] |
+------------+-------------------+---------+--------+----------+
| 1 | 100 | [11] | [5855] | [0] |
+------------+-------------------+---------+--------+----------+
| 2 | 81 | [8] | [5304] | [0] |
+------------+-------------------+---------+--------+----------+
</code></pre>
<p>I want to combine this dataframe with itself, then get the total length for <code>orders</code>, the number of unique values for <code>columns</code>, and the list of unique values for <code>customers</code>. Then I want to sort descending by the <code>orders</code> length and ascending by the <code>columns</code> length. I should also keep the original version of the combination. For this, I did the following,</p>
<pre class="lang-py prettyprint-override"><code>from itertools import combinations
from operator import itemgetter
data = pd.DataFrame({
'process_no': [0, 1, 2],
'process_durations': [3, 100, 81],
'columns': [[0], [11], [8]],
'orders': [[3109], [5855], [5304]],
'customer': [[0], [0], [0]]
})
vals = data.values.tolist()
cross_combine = list(combinations(vals, r=2))
sorted_cross_combine = sorted(
[
(
x,
-(len(x[0][3]) + len(x[1][3])),
len(set(x[0][2] + x[1][2])),
list(set(x[0][4] + x[1][4]))
)
for x in cross_combine
],
key=itemgetter(1, 2)
)
print(sorted_cross_combine)
[(([0, 3, [0], [3109], [0]], [1, 100, [11], [5855], [0]]), -2, 2, [0]),
(([0, 3, [0], [3109], [0]], [2, 81, [8], [5304], [0]]), -2, 2, [0]),
(([1, 100, [11], [5855], [0]], [2, 81, [8], [5304], [0]]), -2, 2, [0])]
</code></pre>
<p>And here is the my example output if you wanna choose do it with pandas,</p>
<pre><code>+-----------------------------+-----------------------------+-------------+--------------+-----------+
| x1 | x2 |order_length |column_length | customers |
+-----------------------------+-----------------------------+-------------+--------------+-----------+
| [0, 3, [0], [3109], [0]] | [1, 100, [11], [5855], [0]] | -2 | 2 | [0] |
+-----------------------------+-----------------------------+-------------+--------------+-----------+
| [0, 3, [0], [3109], [0]] | [2, 81, [8], [5304], [0]] | -2 | 2 | [0] |
+-----------------------------+-----------------------------+-------------+--------------+-----------+
| [1, 100, [11], [5855], [0]] | [2, 81, [8], [5304], [0]] | -2 | 2 | [0] |
+-----------------------------+-----------------------------+-------------+--------------+-----------+
</code></pre>
<p>If you look at columns x1 and x2,</p>
<pre><code>orders => [3109] and [5855]
columns => [0] and [11]
customer => [0] and [0]
</code></pre>
<pre><code>order_length => -len([3109] + [5855])
column_length => len(set([0] + [1]))
customers => list(set([0] + [0]))
</code></pre>
<p>What I want to ask is,</p>
<p>Can I do this while the combinations are being generated, if that would be more efficient? I know there is no such parameter, but I imagine something like this,</p>
<pre class="lang-py prettyprint-override"><code>def calc(x, y):
return (
x + y,
-(len(x[3]) + len(y[3])),
len(set(x[2] + y[2])),
list(set(x[4] + y[4]))
)
cross_combine = list(combinations(vals, r=2, func=calc))
</code></pre>
<p>Or is there a way I can make the whole process more effective? The process of creating the <code>sorted_cross_combine</code> value takes about 20 seconds for a <code>vals</code> list with approximately 6500 elements.</p>
<p>You can download example data from the <a href="https://easyupload.io/rkdbro" rel="nofollow noreferrer">link</a>. You need to cast data type as follows,</p>
<pre class="lang-py prettyprint-override"><code>import ast
data = pd.read_csv('a.csv')
for col in ['columns', 'orders', 'customer']:
data[col] = data[col].apply(ast.literal_eval)
</code></pre>
<p>Thanks in advance.</p>
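<p>One shape I have considered for the imaginary <code>func=</code> argument is <code>itertools.starmap</code>, which at least applies the function while the pairs are generated - though I don't know whether it is actually faster:</p>
<pre class="lang-py prettyprint-override"><code>from itertools import combinations, starmap
from operator import itemgetter

def calc(x, y):
    # same stats as in sorted_cross_combine above
    return (
        (x, y),
        -(len(x[3]) + len(y[3])),
        len(set(x[2] + y[2])),
        list(set(x[4] + y[4])),
    )

vals = [
    [0, 3, [0], [3109], [0]],
    [1, 100, [11], [5855], [0]],
    [2, 81, [8], [5304], [0]],
]

# stats are computed lazily as the pairs are produced, then sorted once
sorted_cross_combine = sorted(starmap(calc, combinations(vals, 2)), key=itemgetter(1, 2))
</code></pre>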
|
<python><pandas>
|
2024-05-20 08:30:46
| 2
| 2,633
|
E. Zeytinci
|
78,505,478
| 3,182,496
|
How to run a function multiple times with different return values?
|
<p>I have a test that intermittently fails during automated builds so I want to add some retries to my tool. However, I'm not sure how to force the tool to fail and verify that it has retried before I get the usual satisfactory outcome.</p>
<p>The program I am testing is written in Python, but some functionality is provided by the <a href="https://github.com/jblindsay/whitebox-tools" rel="nofollow noreferrer">Whitebox Tools</a> Rust library, via a <a href="https://github.com/opengeos/whitebox-python/tree/master" rel="nofollow noreferrer">Python wrapper</a>.</p>
<p>The function that calls Whitebox Tools looks like this:</p>
<pre><code>def run_wbt(func, *args, err_type=None, err_msg=None, **kwargs):
path_args = [args[1]] if isinstance(args[1], str) else args[1]
default_err = f"An error has occurred with the {func.__name__} Whitebox tool."
for retry in range(3):
rc = func(*args, **kwargs)
if rc == 0:
break
if rc != 0 or not all(os.path.exists(path) for path in path_args):
if err_type:
raise err_type(err_msg or default_err)
raise DatasetError(default_err)
</code></pre>
<p>You call it by passing in a Whitebox function and other parameters. We do this at least 5 times during a normal run of the tool. For example:</p>
<pre><code> run_wbt(
wbt.breach_depressions_least_cost,
dem_in_path,
dem_fix_path,
dist=dist,
max_cost=max_cost,
min_dist=min_dist,
fill=fill,
)
</code></pre>
<p>As the <a href="https://github.com/opengeos/whitebox-python/blob/master/whitebox/whitebox_tools.py#L4898" rel="nofollow noreferrer">Whitebox tools function</a> I am calling is third-party, I can't make any changes to it. It absorbs any errors and simply returns an exit code.</p>
<p>What I have tried to do is to use the <code>pytest-mock</code> <code>mocker</code> fixture to replace the return code with two bad codes followed by one good.</p>
<pre><code> # TODO actually run the function on the 3rd attempt
mock_bdlc = mocker.MagicMock(
spec=WhiteboxTools.breach_depressions_least_cost, side_effect=[1, 1, 0]
)
mocker.patch("whitebox.WhiteboxTools.breach_depressions_least_cost", mock_bdlc)
</code></pre>
<p>This does give me the exit codes I want, but I still need the Whitebox tool to run, to produce the outputs I am expecting. There are other Whitebox calls after this one and they need the outputs of this function. It's ok if the function actually runs on each retry, it will just overwrite the outputs on the second and third runs.</p>
<p>All I've managed to do so far is either mock the function and return different exit codes or just run the original function. I can't figure out how to do both.</p>
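<p>The closest pattern I have sketched so far is a wrapper that still calls the real function but forces the return codes (<code>make_flaky</code> is my own helper, not part of <code>pytest-mock</code>):</p>
<pre><code>def make_flaky(real_func, codes):
    """Call real_func for its side effects, but force the listed exit codes."""
    it = iter(codes)

    def wrapper(*args, **kwargs):
        real_func(*args, **kwargs)  # the real tool still runs and writes its outputs
        return next(it, 0)          # forced code; falls back to 0 once exhausted

    return wrapper

# stand-in for the Whitebox call, just to show the behaviour
calls = []
fake_tool = make_flaky(calls.append, [1, 1, 0])
results = [fake_tool("run-%d" % i) for i in range(3)]
</code></pre>
<p>I imagine then patching with something like <code>mocker.patch.object(WhiteboxTools, "breach_depressions_least_cost", make_flaky(WhiteboxTools.breach_depressions_least_cost, [1, 1, 0]))</code>, but I'm not certain this plays nicely with the bound method.</p>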
|
<python><unit-testing><mocking><python-unittest.mock>
|
2024-05-20 08:26:06
| 2
| 1,278
|
jon_two
|
78,505,470
| 9,542,989
|
skip_on_failure Flag with Pydantic V2?
|
<p>The <code>root_validator</code> that was available with Pydantic V1 accepted a flag called <code>skip_on_failure</code>, which controlled whether the root validator still ran when earlier field validation had failed.</p>
<p>Is there a way to achieve using the <code>model_validator</code> offered by V2? I cannot figure it out from the docs.</p>
|
<python><pydantic><pydantic-v2>
|
2024-05-20 08:24:08
| 0
| 2,115
|
Minura Punchihewa
|
78,505,410
| 8,972,207
|
Fivetran REST API get all connectors
|
<p>I'm currently working on a project where I need to list all connectors across multiple groups in Fivetran. I've managed to retrieve all the groups successfully using the REST API, but I'm having trouble listing all the connectors.</p>
<p>This is how I get all groups:</p>
<pre><code>import requests
import json
url = "https://api.fivetran.com/v1/groups"
headers = {
"Accept": "application/json;version=2",
"Authorization": "Basic yourkeyetc"
}
response = requests.request("GET", url, headers=headers)
parsed = json.loads(response.text)
print(json.dumps(parsed, indent=4))
</code></pre>
<p>But when I try to access <code>https://api.fivetran.com/v1/connectors</code> I get a <code>405</code> because that specific URL only allows POST? Any help is appreciated.</p>
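<p>My current guess is that connectors can only be listed per group, i.e. an endpoint shaped like the one below (the URL pattern is an assumption on my part):</p>
<pre><code>def group_connectors_url(group_id):
    # guessed shape: connectors nested under a group rather than a global /v1/connectors
    return "https://api.fivetran.com/v1/groups/%s/connectors" % group_id

urls = [group_connectors_url(g) for g in ("group_a", "group_b")]
</code></pre>
<p>I would then loop over the group ids returned by the first call and GET each of these URLs with the same headers - but I'm not sure that's the intended usage.</p>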
|
<python><rest><fivetran>
|
2024-05-20 08:09:28
| 0
| 573
|
DarknessPlusPlus
|
78,505,110
| 3,662,734
|
Using register_hook() to freeze a portion of weight tensor does not speed up training
|
<p>I am doing a sort of transfer learning where I want to freeze a part of the weights tensor and train the other, as explained in <a href="https://stackoverflow.com/questions/78488981/set-a-part-of-weight-tensor-to-requires-grad-true-and-keep-rest-of-values-to-r/78489275?noredirect=1#comment138396465_78489275">my previous question</a></p>
<p>For this, I am using <code>register_hook()</code> to set the gradients of a portion of the weights to zero. I checked that the gradients are indeed set to zero; however, this does not accelerate training. It seems that computation still happens for the weights whose gradients were zeroed, and I don't really understand whether <code>register_hook()</code> sets the gradients to zero after computing them, or computes the new weights with gradients = 0. Is there any way to get behavior similar to <code>requires_grad=False</code> and speed up training?</p>
<p>Here is a simple example:</p>
<pre><code>import torch
import torch.nn as nn
import torch.optim as optim
import time
from torch.utils.data import Dataset, DataLoader
torch.manual_seed(42)
class example_net(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(example_net, self).__init__()
self.fc1 = nn.Linear(input_size, hidden_size)
self.fc2 = nn.Linear(hidden_size, hidden_size)
self.fc3 = nn.Linear(hidden_size, output_size)
def forward(self, x):
x = torch.relu(self.fc1(x))
x = torch.relu(self.fc2(x))
x = self.fc3(x)
return x
# dataset example:
class example_dataset(Dataset):
def __init__(self, input_size, num_samples):
self.input_size = input_size
self.num_samples = num_samples
self.data = torch.randn(num_samples, input_size)
self.targets = torch.randint(0, 10, (num_samples,))
def __len__(self):
return self.num_samples
def __getitem__(self, idx):
return self.data[idx], self.targets[idx]
input_size = 4096
batch_size = 64
num_samples = 10000
dataset = example_dataset(input_size, num_samples)
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
criterion = nn.CrossEntropyLoss()
def train_model(net, dataloader, use_hook=False, num_epochs=5):
optimizer = optim.SGD(net.parameters(), lr=0.01)
start_time = time.time()
if use_hook:
def hook_fn(grad):
grad = grad.clone()
grad[:, 10:] = 0 # Zero out all but the first 10 columns of the gradient
return grad
# Register the hook
hook_handle = net.fc1.weight.register_hook(hook_fn)
for epoch in range(num_epochs):
for inputs, targets in dataloader:
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, targets)
loss.backward()
#print("after hook:", net.fc1.weight.grad)
optimizer.step()
if use_hook:
hook_handle.remove()
end_time = time.time()
return end_time - start_time
num_epochs = 1
# train model without hook
net_without_hook = example_net(input_size, 4096, 10)
time_without_hook = train_model(net_without_hook, dataloader, use_hook=False,
num_epochs=num_epochs)
print(f"Total training time without hook: {time_without_hook:.4f} seconds")
# train model with hook
net_with_hook = example_net(input_size, 4096, 10)
time_with_hook = train_model(net_with_hook, dataloader, use_hook=True,
num_epochs=num_epochs)
print(f"Total training time with hook: {time_with_hook:.4f} seconds")
</code></pre>
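<p>One workaround I have considered (my own construction, not a library feature) is to split <code>fc1</code> into a trainable and a genuinely frozen <code>nn.Linear</code> and sum their outputs, so autograd never computes gradients for the frozen part at all:</p>
<pre><code>import torch
import torch.nn as nn

class SplitLinear(nn.Module):
    """First k input features feed a trainable Linear; the rest feed a frozen one."""

    def __init__(self, in_features, out_features, k):
        super().__init__()
        self.k = k
        self.trainable = nn.Linear(k, out_features)
        self.frozen = nn.Linear(in_features - k, out_features, bias=False)
        self.frozen.weight.requires_grad_(False)  # real requires_grad=False, no grad computed

    def forward(self, x):
        return self.trainable(x[:, :self.k]) + self.frozen(x[:, self.k:])

layer = SplitLinear(64, 32, 10)
y = layer(torch.randn(2, 64))
</code></pre>
<p>This should match a single <code>Linear</code> split column-wise (only the trainable half carries the bias), but I haven't benchmarked whether it is actually faster than the hook.</p>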
|
<python><deep-learning><pytorch><freeze>
|
2024-05-20 06:58:47
| 1
| 579
|
Emily
|
78,505,010
| 12,959,994
|
how to properly frame websocket messages in python socket library
|
<p>I'm building a WebSocket server in Python using <code>socket</code>; it's an embedded Python environment, so I can't install any packages. I've gotten as far as handling the handshake and establishing a connection, and I can send data back and forth between my server and client (a React app). However, some of the payloads are too large; batching them up worked, but was too slow, so I compressed the data with zlib. Now the client reports an invalid WebSocket frame. I have condensed the code as much as possible into a server that will run and demonstrate the problem.</p>
<p>I will be honest, I started writing this myself but have used AI for some parts of it, particularly the websocket handshake and framing the message, which I don't fully understand (hence the question), and AI can only get you so far.</p>
<p>So here is my server code, this runs in python 3.9 - 3.11, haven't tried other versions.</p>
<pre><code>import socket
import struct
import base64
import hashlib
import zlib
import logging
import json
from threading import Thread
class WebSocketServer(Thread):
def __init__(self):
Thread.__init__(self)
self.connection = None
self.logger = logging.getLogger('WebSocketServer')
def run(self):
try:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', 5558))
self.logger.info("Server started, waiting for connections...")
sock.listen(1)
while True:
connection, _ = sock.accept()
if connection:
self.connection = connection
self.logger.info("Client connected")
Thread(target=self.handle_connection, args=[connection]).start()
except Exception as e:
self.logger.error(f'Run error: {e}')
finally:
if self.connection:
self.connection.close()
self.logger.info('Server socket closed')
def handle_connection(self, connection):
try:
if self.perform_handshake(connection):
while True:
msg = self.receive_message(connection)
if msg:
self.logger.info(f'Received message: {msg}')
# Echo messages back
self.send_message(json.dumps(msg))
else:
break
except Exception as e:
self.logger.error(f'Connection error: {e}')
finally:
connection.close()
self.logger.info('Connection closed')
def perform_handshake(self, connection):
try:
self.logger.info("Performing handshake...")
request = connection.recv(1024).decode('utf-8')
self.logger.info(f"Handshake request: {request}")
headers = self.parse_headers(request)
websocket_key = headers['Sec-WebSocket-Key']
websocket_accept = self.generate_accept_key(websocket_key)
response = (
'HTTP/1.1 101 Switching Protocols\r\n'
'Upgrade: websocket\r\n'
'Connection: Upgrade\r\n'
f'Sec-WebSocket-Accept: {websocket_accept}\r\n\r\n'
)
connection.send(response.encode('utf-8'))
self.logger.info("Handshake response sent")
return True
except Exception as e:
self.logger.error(f'Handshake error: {e}')
return False
def parse_headers(self, request):
headers = {}
lines = request.split('\r\n')
for line in lines[1:]:
if line:
key, value = line.split(': ', 1)
headers[key] = value
return headers
def generate_accept_key(self, websocket_key):
magic_string = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
accept_key = base64.b64encode(hashlib.sha1((websocket_key + magic_string).encode()).digest()).decode('utf-8')
return accept_key
def receive_message(self, connection):
try:
data = connection.recv(1024)
if not data:
return None
byte1, byte2 = struct.unpack('BB', data[:2])
fin = byte1 & 0b10000000
opcode = byte1 & 0b00001111
masked = byte2 & 0b10000000
payload_length = byte2 & 0b01111111
if masked != 0b10000000:
self.logger.error('Client data must be masked')
return None
if payload_length == 126:
extended_payload_length = data[2:4]
payload_length = int.from_bytes(extended_payload_length, byteorder='big')
masking_key = data[4:8]
payload_data = data[8:]
elif payload_length == 127:
extended_payload_length = data[2:10]
payload_length = int.from_bytes(extended_payload_length, byteorder='big')
masking_key = data[10:14]
payload_data = data[14:]
else:
masking_key = data[2:6]
payload_data = data[6:]
decoded_bytes = bytearray()
for i in range(payload_length):
decoded_bytes.append(payload_data[i] ^ masking_key[i % 4])
if opcode == 0x1: # Text frame
return decoded_bytes.decode('utf-8')
elif opcode == 0x8: # Connection close frame
self.logger.info('Connection closed by client')
return None
else:
self.logger.error(f'Unsupported frame type: {opcode}')
return None
except Exception as e:
self.logger.error(f'Error receiving message: {e}')
return None
def send_message(self, message):
try:
if self.connection and isinstance(message, str):
# Compress the message using zlib
compressed_message = zlib.compress(message.encode('utf-8'))
# Determine chunk size based on network conditions
max_chunk_size = 1024 # Adjust as needed
# Split the compressed message into smaller chunks
chunks = [compressed_message[i:i+max_chunk_size] for i in range(0, len(compressed_message), max_chunk_size)]
for chunk in chunks:
frame = bytearray()
frame.append(0b10000001) # Text frame opcode
length = len(chunk)
if length <= 125:
frame.append(length)
elif length <= 65535:
frame.append(126)
frame.extend(struct.pack('!H', length))
else:
frame.append(127)
frame.extend(struct.pack('!Q', length))
# Append the chunk to the frame
frame.extend(chunk)
# Send the framed chunk
self.connection.sendall(frame)
else:
self.logger.error("Connection closed or invalid message")
except Exception as e:
self.logger.error(f'Error sending message: {e}')
# Configure logging
logging.basicConfig(level=logging.INFO)
# Create an instance of WebSocketServer
server = WebSocketServer()
# Start the server
server.start()
</code></pre>
<p>And to test it I'm just using wscat from the terminal</p>
<pre><code>wscat -c ws://127.0.0.1:5558
</code></pre>
<p>Then I type any message, and the response I get is:</p>
<p><code>error: Invalid WebSocket frame: invalid UTF-8 sequence</code></p>
<ol>
<li>Why am I seeing this error?</li>
<li>How should I be framing these messages?</li>
<li>Is there a more efficient way of doing this?</li>
</ol>
<p>For context, the payload I'm sending is an array of arrays of integers; the data is MIDI SysEx, so each array starts with a 0xF0 byte and ends with a 0xF7 byte. Because of this, the messages get sent really quickly, which is what was causing the problem. The first message is usually quite large and gets batched up, so it could contain thousands of SysEx arrays.</p>
<p>I have sent this data over similar web sockets before so I know it is possible, but I've never had to write them from scratch before.</p>
<p>Any help here would be greatly appreciated.</p>
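<p>My current suspicion is that the zlib output is binary, not UTF-8, so the frames probably need the binary opcode (0x2) instead of the text opcode (0x1) I'm sending now. A sketch of what I mean for a single unmasked server-to-client frame:</p>
<pre><code>import struct
import zlib

def frame_binary(payload):
    """Build one unmasked server-to-client frame carrying binary data."""
    frame = bytearray([0b10000010])  # FIN=1, opcode=0x2 (binary), not 0x1 (text)
    n = len(payload)
    if n <= 125:
        frame.append(n)
    elif n <= 65535:
        frame.append(126)
        frame.extend(struct.pack('!H', n))
    else:
        frame.append(127)
        frame.extend(struct.pack('!Q', n))
    frame.extend(payload)
    return bytes(frame)

compressed = zlib.compress(b'[[240, 1, 2, 247]]')
frame = frame_binary(compressed)
</code></pre>
<p>I'm also unsure about my chunking: each chunk currently goes out as a complete frame (FIN=1), rather than one message fragmented with continuation frames - could that be part of the problem too?</p>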
|
<python><multithreading><sockets><websocket>
|
2024-05-20 06:30:31
| 1
| 652
|
jbflow
|
78,504,983
| 1,039,860
|
how to create a version.py file that is updated each time any file is pushed via git
|
<p>I am hoping to have git create a version.py file that looks like this (where the version is incremented each commit):</p>
<pre><code># This file is created by the pre-push script
class Version:
comment = "git commit comment"
hash = "some git hash"
version = "0.8.9"
</code></pre>
<p>I have tried this:</p>
<pre><code>#!/usr/bin/env /usr/bin/python
import os
import subprocess
import re
import sys
commit_msg_file = sys.argv[1]
with open(commit_msg_file, 'r') as file:
commit_msg = file.read().strip()
version_file = os.path.abspath('version.py')
hashed_code = subprocess.check_output(['git', 'rev-parse', 'HEAD']).strip().decode('utf-8')
if os.path.exists(version_file):
print(f'Reading previous {version_file}')
with open(version_file, 'r') as f:
content = f.read()
major, minor, patch = map(int, re.search(r'version = "(\d+)\.(\d+)\.(\d+)"', content).groups())
patch += 1
else:
print(f'Creating new {version_file}')
major, minor, patch = 0, 0, 1
print(f'Writing contents of {version_file} with "{commit_msg}"')
with open(version_file, 'w') as f:
f.write(f'''# This file is created by the pre-push script
class Version:
comment = "{commit_msg}"
hash = "{hashed_code}"
version = "{major}.{minor}.{patch}"
if __name__ == "__main__":
print(Version.version)
''')
f.close()
print(f'adding {version_file}')
subprocess.call(['git', 'add', version_file])
# subprocess.call(['git', 'commit', '-m', comment])
</code></pre>
<p>I have tried this in the pre-push and prepare-commit-msg hooks, to no avail. Is there a better way, or a way to fix what I have?
Thanks in advance.</p>
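<p>An alternative I'm weighing up is not committing the file at all, and instead deriving the version at build time from git itself, roughly:</p>
<pre><code>import subprocess

def git_version(default="0.0.0"):
    """Best-effort version string from git; falls back when git is unavailable."""
    try:
        out = subprocess.check_output(["git", "describe", "--tags", "--always", "--dirty"])
        return out.decode("utf-8").strip()
    except (subprocess.CalledProcessError, OSError):
        return default

version = git_version()
</code></pre>
<p>That avoids committing from inside a hook entirely, but it only gives nice x.y.z strings if I tag releases, so I'm not sure it fits my workflow.</p>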
|
<python><git>
|
2024-05-20 06:25:39
| 3
| 1,116
|
jordanthompson
|
78,504,872
| 9,846,650
|
Issue accessing deeply nested attributes in Django Rest Framework serializer's validate method
|
<p>I'm facing an issue while trying to access deeply nested attributes within the <code>validate</code> method of a Django Rest Framework serializer. Here's a simplified version of my serializer:</p>
<pre class="lang-py prettyprint-override"><code>from rest_framework import serializers
from myapp.models import User
class YourSerializer(serializers.ModelSerializer):
def validate(self, attrs):
if hasattr(self.parent.instance, 'vehicle'):
if hasattr(self.parent.instance.vehicle, 'engine'):
if hasattr(self.parent.instance.vehicle.engine, 'fuel'):
fuel_type = self.parent.instance.vehicle.engine.fuel.diesel
if fuel_type:
pass
else:
raise serializers.ValidationError("Fuel type is not available.")
else:
raise serializers.ValidationError("Fuel attribute is not available.")
else:
raise serializers.ValidationError("Engine attribute is not available.")
else:
raise serializers.ValidationError("Vehicle attribute is not available.")
return attrs
</code></pre>
<p>In the <code>validate</code> method, I'm attempting to access deeply nested attributes (<code>vehicle.engine.fuel.diesel</code>) of the parent instance.
The problem is that the Engine model is only created after the user performs certain operations, and the same applies to Fuel and Diesel. When many people work on the codebase, it is hard to tell whether these models have been created yet. Normally <code>getattr()</code> and <code>hasattr()</code> are used, but chaining a lot of them makes the code look messy.</p>
<p>I tried static tools like mypy and pylint; they help to a certain extent, but TypeError and NoneType issues still pop up sometimes, and I'm not sure how to fix these.</p>
<p>Thank you for your help!</p>
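<p>The best I have come up with myself is a small helper like the one below (my own utility, not a DRF feature) - but maybe there is a more idiomatic way:</p>
<pre class="lang-py prettyprint-override"><code>def deep_getattr(obj, dotted_path, default=None):
    """Walk 'a.b.c' on obj, returning default as soon as any link is missing or None."""
    for name in dotted_path.split("."):
        obj = getattr(obj, name, None)
        if obj is None:
            return default
    return obj

# toy stand-ins for the related models
class Fuel:
    diesel = True

class Engine:
    fuel = Fuel()

class Vehicle:
    engine = Engine()

class Instance:
    vehicle = Vehicle()

fuel_type = deep_getattr(Instance(), "vehicle.engine.fuel.diesel")
missing = deep_getattr(Instance(), "vehicle.engine.fuel.octane")
</code></pre>
<p>The validate method would then collapse to a single <code>deep_getattr(self.parent.instance, "vehicle.engine.fuel.diesel")</code> check, but mypy still can't see through the string path.</p>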
|
<python><python-3.x><django><django-rest-framework>
|
2024-05-20 05:52:13
| 1
| 712
|
mad_lad
|
78,504,721
| 13,285,583
|
Why torch.autograd.Function backward pass a grad_output?
|
<p>My goal is to understand <code>torch.autograd.Function</code>. The problem is that I don't understand why <code>backward</code> needs <code>grad_output</code>.</p>
<p>What I've tried:</p>
<ol>
<li><p>Read <a href="https://pytorch.org/tutorials/beginner/pytorch_with_examples.html" rel="nofollow noreferrer">Learning PyTorch with Examples</a></p>
<p>The Learning PyTorch with Examples teach how to define autograd functions. I understand for <code>LegendrePolynomial3</code>, the <code>forward</code> should be <code>Β½ * (5xΒ³ - 3x)</code>. However, I don't understand why the <code>backward</code> need <code>grad_output</code>.</p>
<pre><code>class LegendrePolynomial3(torch.autograd.Function):
"""
We can implement our own custom autograd Functions by subclassing
torch.autograd.Function and implementing the forward and backward passes
which operate on Tensors.
"""
@staticmethod
def forward(ctx, input):
"""
In the forward pass we receive a Tensor containing the input and return
a Tensor containing the output. ctx is a context object that can be used
to stash information for backward computation. You can cache arbitrary
objects for use in the backward pass using the ctx.save_for_backward method.
"""
ctx.save_for_backward(input)
# Β½ * (5xΒ³ - 3x)
return 0.5 * (5 * input ** 3 - 3 * input)
@staticmethod
def backward(ctx, grad_output):
"""
In the backward pass we receive a Tensor containing the gradient of the loss
with respect to the output, and we need to compute the gradient of the loss
with respect to the input.
"""
input, = ctx.saved_tensors
# d/dx Β½ * (5xΒ³ - 3x)
# d/dx (Β½ * 5xΒ³) - (Β½ * 3x)
# (3 * Β½ * 5xΒ²) - (1 * Β½ * 3)
# 1.5 * (5xΒ² - 1)
return grad_output * 1.5 * (5 * input ** 2 - 1)
</code></pre>
</li>
</ol>
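<p>What I have tried so far to convince myself: if the custom function is <em>not</em> the last operation, the chain rule needs the gradient flowing in from downstream, and (as I understand it) that is exactly what <code>grad_output</code> carries. A minimal check:</p>
<pre><code>import torch

class Cube(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x ** 3

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        # chain rule: dL/dx = dL/dy * dy/dx, and grad_output is dL/dy
        return grad_output * 3 * x ** 2

x = torch.tensor(2.0, requires_grad=True)
loss = 5 * Cube.apply(x)  # the downstream op scales the output, so dL/dy = 5
loss.backward()
# without grad_output, the factor 5 from downstream would be lost:
# dL/dx = 5 * 3 * 2**2 = 60
</code></pre>
<p>Is this the right mental model?</p>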
|
<python><pytorch>
|
2024-05-20 04:58:49
| 1
| 2,173
|
Jason Rich Darmawan
|
78,504,499
| 11,942,776
|
Pyinstaller cannot detect C-module in package
|
<p>I'm using Pyinstaller to create a Windows executable from Python code.
My Python script depends on a package called MT5Manager, which is compiled from C.</p>
<p>I can use <code>python main.py</code> to run the script without error, but when I try running the .exe file, I get an error that says:</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 5, in <module>
manager = MT5Manager.ManagerAPI()
^^^^^^^^^^^^^^^^^^^^^^^
SystemError: <class 'MT5Manager.ManagerAPI'> returned NULL without setting an exception
[6680] Failed to execute script 'main' due to unhandled exception!
</code></pre>
<p>Here is what I did, step by step:</p>
<ol>
<li>Create an virtualenv</li>
</ol>
<pre><code>python -m venv .venv
.venv\Scripts\activate
</code></pre>
<ol start="2">
<li>Install package from pypi:</li>
</ol>
<pre><code>pip install MT5Manager pyinstaller
</code></pre>
<ol start="3">
<li>create a <code>main.py</code> with content as follows:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import numpy # I use this because I don't want to use --hidden-import in pyinstaller command
import MT5Manager
print("Prepair")
manager = MT5Manager.ManagerAPI()
print("OK.")
</code></pre>
<ol start="4">
<li>create an executable file:</li>
</ol>
<pre><code>pyinstaller -F main.py --paths .venv\Lib\site-packages
</code></pre>
<p>I'm using PyInstaller version 6.6.0 on Windows 11, using Python 3.12.3 provided by <code>pyenv-win</code>.</p>
<p>What is wrong, and how can I fix it?</p>
|
<python><pyinstaller>
|
2024-05-20 03:07:35
| 0
| 331
|
Nguyễn Đức Huy
|
78,504,247
| 5,286,435
|
Weird Vim Python auto-indent issue
|
<p>When editing Python files, I am running into a weird auto-indent issue. No matter what I do, when I open a new line, the cursor always ends up in the middle of the terminal. I expect it to be aligned with either the start of the previous line, or one indent over if it's opening a new control block. For example:</p>
<pre><code> x = y
|
# or like this
if x == y:
|
# pipe '|' indicates where I'd expect cursor to be because of autoindent.
</code></pre>
<p>However, it always looks like this:</p>
<pre><code> hello, world
|
or
if x == y:
|
</code></pre>
<p>My terminal is 100 characters wide and it always puts the cursor at position 54 when I open a new line. Any idea why this would be happening? I've pasted my Vim settings below. This started happening recently, and it's driving me nuts. Any help is really appreciated. Thanks.</p>
<pre><code> autoindent exrc modelines=0 smartcase wildignore=*.pyc
backup filetype=python ruler softtabstop=2 window=0
cmdheight=2 formatoptions=cqt scroll=18 suffixesadd=.py wrapmargin=5
comments=b:#,fb:- helplang=en scrolloff=2 syntax=python t_8u=
cscopetag hidden secure tabstop=2
cscopetagorder=1 ignorecase shiftwidth=2 textwidth=100
cscopeverbose incsearch showcmd ttymouse=sgr
expandtab laststatus=2 showmatch updatetime=300
backspace=indent,eol,start
backupdir=~/.vim/tmp
cinkeys=0{,0},0),0],:,!^F,o,O,e
commentstring=# %s
completeopt=menuone,menu,longest,preview
cscopeprg=/usr/local/bin/cscope
define=^\s*\(def\|class\)
directory=~/.vim/tmp
fileencoding=utf-8
fileencodings=ucs-bom,utf-8,default,latin1
include=^\s*\(from\|import\)
includeexpr=substitute(substitute(substitute(v:fname,b:grandparent_match,b:grandparent_sub,''),b:p
arent_match,b:parent_sub,''),b:child_match,b:child_sub,'g')
indentexpr=python#GetIndent(v:lnum)
indentkeys=0{,0},0),0],:,!^F,o,O,e,<:>,=elif,=except
keywordprg=python3 -m pydoc
omnifunc=syntaxcomplete#Complete
</code></pre>
|
<python><vim>
|
2024-05-20 00:20:10
| 0
| 372
|
Dnj Abc
|
78,503,955
| 3,168,356
|
Understanding the shape of np.apply_along_axis output
|
<p>I have a question regarding output dimensions of <code>np.apply_along_axis</code>.</p>
<p>In the first case values of <code>x</code> for the lambda function are <code>[1 2]</code>, <code>[1 3]</code>, <code>[1 4]</code> and they have shape <code>(2,)</code> since we split across <code>axis=0</code>. Each output of the lambda function has shape <code>(2,2)</code>:</p>
<pre><code>import numpy as np
out = np.apply_along_axis(
func1d=lambda x: x + np.asarray([[1,2],[3,4]]),
axis=0,
arr=np.asarray([[1,1,1],[2,3,4]])
)
print(out.shape)
# (2, 2, 3)
print(out)
# [[[2 2 2]
# [4 5 6]]
# [[4 4 4]
# [6 7 8]]]
</code></pre>
<p>In the second case values of <code>x</code> for the lambda function are <code>[1]</code>, <code>[2]</code>, <code>[3]</code> and they have shape <code>(1,)</code> since we split across <code>axis=1</code>. Each output of the lambda function has shape <code>(2,2)</code>:</p>
<pre><code>import numpy as np
out = np.apply_along_axis(
func1d=lambda x: x + np.asarray([[1,2],[3,4]]),
axis=1,
arr=np.asarray([[1,2,3]]).T
)
print(out.shape)
# (3, 2, 2)
print(out)
# [[[2 3]
# [4 5]]
# [[3 4]
# [5 6]]
# [[4 5]
# [6 7]]]
</code></pre>
<p>Can you please explain why in the first case the output has shape <code>(2,2,3)</code> and in the second case its shape is <code>(3,2,2)</code> given that the output from lambda function has the same shape?</p>
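<p>If I'm reading the behavior right, the rule seems to be that <code>apply_along_axis</code> removes the iterated axis and splices the shape of <code>func1d</code>'s output back in at that same position. A quick sketch of that rule (the helper name is mine):</p>

```python
import numpy as np

def expected_shape(arr_shape, axis, func_out_shape):
    # Drop the iterated axis, then splice the shape of func1d's
    # output back in at the same position.
    rest = list(arr_shape)
    del rest[axis]
    return tuple(rest[:axis]) + tuple(func_out_shape) + tuple(rest[axis:])

print(expected_shape((2, 3), 0, (2, 2)))  # (2, 2, 3)
print(expected_shape((3, 1), 1, (2, 2)))  # (3, 2, 2)
```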
|
<python><numpy>
|
2024-05-19 21:20:27
| 2
| 3,169
|
Konstantin
|
78,503,848
| 116
|
How can I force devtools.pprint to not colorize its output?
|
<p>I have some code that uses <a href="https://python-devtools.helpmanual.io/usage/#prettier-print" rel="nofollow noreferrer"><code>devtools.pprint</code></a>:</p>
<pre><code>import devtools
devtools.pprint(my_dictionary)
</code></pre>
<p>How can I indicate to <code>pprint</code> that I do not want colorized output? Setting <a href="https://no-color.org" rel="nofollow noreferrer"><code>NO_COLOR</code></a> does not seem to have an effect:</p>
<pre><code>NO_COLOR=1 ./my_program.py
# (produces colorized output)
</code></pre>
|
<python>
|
2024-05-19 20:29:30
| 1
| 305,996
|
Mark Harrison
|
78,503,790
| 2,813,606
|
How to remove string + n preceding characters from pandas dataframe
|
<p>I have a dataframe that looks like the following:</p>
<pre><code>Group Attribute Text
1 A 'The ball is red456placeholder'
1 A 'I like pizza985placeholder.'
2 A 'Fire bad 231placeholder'
2 B 'Sparkling water makes me happy777placeholder'
</code></pre>
<p>I want to clean the text column: remove each occurrence of the word "placeholder" together with the three digits immediately preceding it.</p>
<p>I understand how to remove the placeholder text:</p>
<pre><code>df['Text'] = df['Text'].str.replace('placeholder','')
</code></pre>
<p>Then I'm left with the column still having the digits on the end of each string. I've tried:</p>
<pre><code>df['Text'] = df['Text'].str.replace(r'^\d+\s+', '', regex=True)
</code></pre>
<p>This doesn't seem to have any effect. Is there a better way to do this?</p>
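<p>For what it's worth, a single pass that targets the digits immediately before the literal word (assuming they always directly precede it) would be my next attempt:</p>

```python
import pandas as pd

df = pd.DataFrame({'Text': ['The ball is red456placeholder',
                            'I like pizza985placeholder.']})
# Remove the word together with the digits right before it, in one replace
df['Text'] = df['Text'].str.replace(r'\d+placeholder', '', regex=True)
print(df['Text'].tolist())  # ['The ball is red', 'I like pizza.']
```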
|
<python><pandas><substring>
|
2024-05-19 20:03:52
| 2
| 921
|
user2813606
|
78,503,756
| 1,314,901
|
How do I type generic callables with default value in Python?
|
<p>I have the following function:</p>
<pre><code>T = TypeVar("T")
def handle_func(x: int, func: Callable[[int], T]) -> T:
return func(x)
</code></pre>
<p>and I can use it like so:</p>
<pre><code>handle_func(1, lambda x: x + 1)
handle_func(1, lambda x: x == 1)
</code></pre>
<p>When I run mypy, all is good.</p>
<p>But how can I add a default value to <code>func</code>?</p>
<p>When I do this:</p>
<pre><code>def handle_func(x: int, func: Callable[[int], T] = lambda x: x + 1) -> T:
return func(x)
</code></pre>
<p>and run mypy I get:</p>
<pre><code>error: Incompatible default for argument "func" (default has type "Callable[[int], int]", argument has type "Callable[[int], T]") [assignment]
error: Incompatible return value type (got "int", expected "T") [return-value]
</code></pre>
<p>How can I type this? What am I understanding incorrectly?</p>
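<p>One pattern I've seen for this (not sure it's the canonical fix) is to use <code>@overload</code>, so the no-argument call is typed as returning <code>int</code> while the explicit-callback call stays generic:</p>

```python
from typing import Callable, TypeVar, overload

T = TypeVar("T")

@overload
def handle_func(x: int) -> int: ...
@overload
def handle_func(x: int, func: Callable[[int], T]) -> T: ...
def handle_func(x, func=lambda x: x + 1):
    # The untyped implementation is allowed; the overloads carry the types.
    return func(x)

print(handle_func(1))                    # 2
print(handle_func(1, lambda x: x == 1))  # True
```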
|
<python><mypy><python-typing>
|
2024-05-19 19:51:10
| 2
| 2,060
|
Meir
|
78,503,335
| 7,658,051
|
psycopg2.errors.UndefinedTable: relation "mydjangoapp_mymodel" does not exist
|
<p>I am managing a django app built by third parts.</p>
<p>I have configured in settings.py the connection to a new db</p>
<pre><code>'default': { # changed
'ENGINE': 'django.contrib.gis.db.backends.postgis',
'NAME': 'waterwatch',
'USER': 'waterwatch_main',
'PASSWORD': '****',
'HOST': 'localhost',
}
</code></pre>
<p>Now I want to populate the new database with the table in my models.py, that is</p>
<pre><code>class WaterConsumption(models.Model):
Id = models.IntegerField(primary_key=True)
Suburb = models.CharField(max_length=100)
NoOfSingleResProp = models.IntegerField()
AvgMonthlyKL = models.IntegerField()
AvgMonthlyKLPredicted = models.IntegerField()
PredictionAccuracy = models.IntegerField()
Month = models.CharField(max_length=50)
Year = models.IntegerField()
DateTime = models.DateTimeField()
geom = models.PointField()
def __str__(self):
return self.Suburb
class Meta:
verbose_name_plural = 'WaterConsumption'
</code></pre>
<p>so I run</p>
<pre><code>python manage.py makemigrations
</code></pre>
<p>but I get</p>
<pre><code>django.db.utils.ProgrammingError: relation "waterwatchapp_waterconsumption" does not exist
</code></pre>
<p>Well... I guess that is obvious: I am actually trying to create new tables in my new database.</p>
<p>I guess something is wrong with the migrations.</p>
<p>so I have dropped and recreated the database, deleted all files in</p>
<ul>
<li><p><code>waterwatchapp/migrations</code> , except <code>__init__.py</code></p>
</li>
<li><p><code>waterwatchapp/__pycache__</code></p>
</li>
<li><p><code>waterwatch/__pycache__</code></p>
</li>
</ul>
<p>But when I run <code>python manage.py makemigrations</code> or <code>python manage.py migrate</code> again, I still get the same error.</p>
<p>Why doesn't Django create the tables anew?</p>
<p>It seems that Django is still somehow keeping track of what was in the database before.</p>
<p>How can I remove all the migration history from before and make Django apply the migrations to a new database? (I don't care about the stored data.)</p>
<p>Complete traceback:</p>
<pre><code>Watching for file changes with StatReloader
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.UndefinedTable: relation "waterwatchapp_waterconsumption" does not exist
LINE 1: UPDATE "waterwatchapp_waterconsumption" SET "Suburb" = 'ATHL...
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/core/management/commands/runserver.py", line 115, in inner_run
autoreload.raise_last_exception()
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/core/management/__init__.py", line 381, in execute
autoreload.check_errors(django.setup)()
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/apps/registry.py", line 122, in populate
app_config.ready()
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/contrib/admin/apps.py", line 27, in ready
self.module.autodiscover()
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/contrib/admin/__init__.py", line 24, in autodiscover
autodiscover_modules('admin', register_to=site)
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/utils/module_loading.py", line 57, in autodiscover_modules
import_module('%s.%s' % (app_config.name, module_to_search))
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/waterwatchapp/admin.py", line 44, in <module>
).save()
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/models/base.py", line 743, in save
self.save_base(using=using, force_insert=force_insert,
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/models/base.py", line 780, in save_base
updated = self._save_table(
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/models/base.py", line 862, in _save_table
updated = self._do_update(base_qs, using, pk_val, values, update_fields,
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/models/base.py", line 916, in _do_update
return filtered._update(values) > 0
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/models/query.py", line 809, in _update
return query.get_compiler(self.db).execute_sql(CURSOR)
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 1591, in execute_sql
cursor = super().execute_sql(result_type)
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 1202, in execute_sql
cursor.execute(sql, params)
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 99, in execute
return super().execute(sql, params)
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 80, in _execute
with self.db.wrap_database_errors:
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/tommaso/tommaso03/coding_projects/corsi_udemy/create-smart-maps-in-python-and-leaflet/waterwatch/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "waterwatchapp_waterconsumption" does not exist
LINE 1: UPDATE "waterwatchapp_waterconsumption" SET "Suburb" = 'ATHL...
</code></pre>
|
<python><django><django-migrations>
|
2024-05-19 17:02:43
| 1
| 4,389
|
Tms91
|
78,503,334
| 19,659,857
|
The splash screen isn't centering exactly according to the screen size
|
<p>I'm using Custom Tkinter to create a splash screen. I wanted to center the splash screen both horizontally and vertically. I tried the following code, but it didn't work. The splash screen is not exactly centered. I've also attached a screenshot. Please check it and let me know your suggestions. Thanks!</p>
<pre><code> splash_screen = CTK.CTk()
splash_screen.overrideredirect(True)
screen_width = splash_screen.winfo_screenwidth()
screen_height = splash_screen.winfo_screenheight()
splash_width = int(0.29 * screen_width)
splash_height = int(0.31 * screen_height)
splash_image = CTK.CTkImage(light_image=Image.open('splash_image.jpg'),size =(splash_width,splash_height))
    splash_image_label = CTK.CTkLabel(splash_screen,image=splash_image,text='')
    splash_image_label.pack(fill=CTK.BOTH, expand=True)
x = (screen_width//2) - (splash_width//2)
y = (screen_height//2) - (splash_height//2)
geometry = f"{int(splash_width)}x{int(splash_height)}+{int(x)}+{int(y)}"
splash_screen.geometry(geometry)
</code></pre>
<p><a href="https://i.sstatic.net/nSnfmFAP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSnfmFAP.png" alt="Image" /></a></p>
|
<python><tkinter><customtkinter>
|
2024-05-19 17:02:21
| 0
| 339
|
user8
|
78,503,301
| 11,032,005
|
Why the sync function is behaving like async function and vice-versa?
|
<p>I'm trying to understand the difference between sync and async functions using Python's FastAPI. In the following code, I have written a sync endpoint with a 10-second delay. When I call this API twice simultaneously, both calls get their response after about the same time (roughly 10 seconds). For example, if I call the API once and then again 3 seconds later, I would expect the first request to take about 10.2 s and the second about 13.2 s, but both requests take the same time (around 10.2 s).</p>
<pre><code>from fastapi import FastAPI
import time
import random
import asyncio
app = FastAPI()
@app.get("/")
def roots(req_id):
print("Waiting for Sleep by", req_id)
    time.sleep(10)  # blocking sleep for 10 seconds
print("Sleep completed for ", req_id)
return "Hello"
# Sync func -> sequential, blocking
</code></pre>
<p>The fascinating thing that confuses me more is that if I put the <code>async</code> keyword in front of the function, the two responses have different times. Could somebody please explain why these two scenarios have different time frames and, moreover, why the sync API appears to behave asynchronously and vice versa?</p>
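<p>To illustrate the concurrency I expected, here is a plain-asyncio sketch (no FastAPI) showing that two coroutines awaiting a non-blocking sleep finish together, whereas a blocking <code>time.sleep</code> inside a coroutine would serialize them:</p>

```python
import asyncio
import time

async def handler(req_id):
    await asyncio.sleep(1)  # non-blocking: yields control to the event loop
    return f"done {req_id}"

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(handler(1), handler(2))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results)  # ['done 1', 'done 2']
print(round(elapsed))  # both sleeps overlap, so total is ~1 s, not 2 s
```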
|
<python><asynchronous><synchronization><fastapi>
|
2024-05-19 16:52:21
| 0
| 489
|
sachin murali
|
78,503,297
| 11,304,830
|
Unable to tag the right elements to scrape a website using Selenium in Python
|
<p>I am trying to scrape <a href="https://www.facile.it/energia-dual-fuel/preventivo.html" rel="nofollow noreferrer">this</a> website, which displays utility bill information once details have been entered. The website requires me to insert some info as well as click on some options.</p>
<p>I have been reading about Selenium and how to implement the "fill in the details" and "click the button" steps. However, I found it hard to understand which selectors to take from the website's HTML to make the Python code work.</p>
<p>I am basically stuck at the very beginning:</p>
<pre><code>url = 'https://www.facile.it/energia-dual-fuel/preventivo.html'
# Your information
name = 'Mario Rossi' # random name
email = 'test@live.it' # random email
phone = '3333333333' # random number
zip_code = '20019' # some random postcode for Milan
# Initialize the Chrome WebDriver
driver = webdriver.Chrome()
# Open the website
driver.get(url)
</code></pre>
<p>So far so good; from here onwards I find it very hard to make things work. I tried to retrieve elements by <code>Id</code> and <code>NAME</code>, but it didn't really work. I also inspected the HTML of the website but didn't really understand what to take from it.</p>
<pre><code>name_field = driver.find_element(By.NAME, 'Nome e Cognome')
NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":"[name="Nome e Cognome"]"}
(Session info: chrome=125.0.6422.61); For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception
name_field.send_keys(name)
# Also tried something like this (but it didn't work either):
name_field = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.NAME, 'Nome e Cognome')))
</code></pre>
<p>It would be helpful to have an example of what to do with the "click" option and with the "fill the form" option so that I understand better how to do it.</p>
<p>Can anyone help me with this?</p>
|
<python><selenium-webdriver><web-scraping><selenium-chromedriver>
|
2024-05-19 16:50:33
| 2
| 1,623
|
Rollo99
|
78,503,255
| 10,200,497
|
How can I get the left edge as the label of pandas.cut?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'high': [110, 110, 101, 101, 115, 300],
}
)
</code></pre>
<p>And this is the expected output. Column <code>bin</code> should be created:</p>
<pre><code> high bin
0 110 105.0
1 110 105.0
2 101 100.0
3 101 100.0
4 115 111.0
5 300 220.0
</code></pre>
<p>Basically <code>bin</code> is created by using <code>pd.cut</code>:</p>
<pre><code>import numpy as np
evaluation_bins = [100, 105, 111, 120, 220, np.inf]
df['bin'] = pd.cut(df['high'], bins=evaluation_bins, include_lowest=True, right=False)
</code></pre>
<p>This gives me the category itself but I want the left edge as the output.</p>
<p>Honestly, there was not much I could try. I can get the <code>dtypes</code> of <code>df</code> with <code>df.dtypes</code>, but I don't know how to continue.</p>
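<p>To make the goal concrete, I suspect that passing the left edges as <code>labels</code> should produce the column I want (sketch of what I'm after):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'high': [110, 110, 101, 101, 115, 300]})
evaluation_bins = [100, 105, 111, 120, 220, np.inf]

# Label each interval with its own left edge
df['bin'] = pd.cut(
    df['high'], bins=evaluation_bins,
    labels=evaluation_bins[:-1],
    include_lowest=True, right=False,
).astype(float)
print(df['bin'].tolist())  # [105.0, 105.0, 100.0, 100.0, 111.0, 220.0]
```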
|
<python><pandas><dataframe>
|
2024-05-19 16:37:20
| 3
| 2,679
|
AmirX
|
78,503,222
| 8,948,544
|
counting null values in sqlalchemy returns 0
|
<p>I am trying to get the unique values and their counts from an SQLite table. This is the python statement:</p>
<pre class="lang-py prettyprint-override"><code>results = db.session.query(MyTable.rating, db.func.count(MyTable.rating)).group_by(MyTable.rating).all()
counts = {str(row[0]): row[1] for row in results}
</code></pre>
<p>The ratings column is an integer column with a value ranging from 1 to 10, the value is null if no value is set. The above code returns</p>
<pre class="lang-json prettyprint-override"><code>{
"1": 3,
...
"10": 4,
"None": 0
}
</code></pre>
<p>The problem is that "None" should not be 0.</p>
<p>I tried a different script and still got 0</p>
<pre class="lang-py prettyprint-override"><code>nonecount = db.session.query(db.func.count(MyTable.rating)).filter(MyTable.rating.is_(None)).all()
</code></pre>
<p>I tested this with an SQL query and was able to get a correct value:</p>
<pre class="lang-sql prettyprint-override"><code>select count(*)
from MyTable
where rating is NULL
</code></pre>
|
<python><sqlite><sqlalchemy>
|
2024-05-19 16:24:40
| 0
| 331
|
Karthik Sankaran
|
78,503,184
| 1,028,270
|
Is there a way to prevent pydantic from calling my function used to find config files on import?
|
<p>My class looks something like this:</p>
<pre><code>class MyConfig(BaseSettings):
model_config = SettingsConfigDict(
env_file=get_configs(),
)
</code></pre>
<p>I didn't realize that my <code>get_configs()</code> seems to be getting called on import. I discovered this when I had a bug in that function and pytest collection was failing.</p>
<p>Is this a python behavior or something specific to how the <code>model_config</code> attribute of the <code>BaseSettings</code> class works? It doesn't make sense to me because I would expect my <code>get_configs()</code> function to only get called when the code that is actually instantiating <code>MyConfig</code> is executed.</p>
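<p>My current understanding is that this is plain Python class-body semantics rather than anything pydantic-specific: the class body runs when the module is imported, so any call in it executes then. A pydantic-free sketch:</p>

```python
calls = []

def get_configs():
    calls.append("called")
    return ".env"

class MyConfig:
    # The class body executes as soon as the `class` statement runs,
    # i.e. at import time, before any instance is created.
    env_file = get_configs()

print(calls)  # ['called'] even though MyConfig() was never instantiated
```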
|
<python><pydantic><pydantic-settings>
|
2024-05-19 16:10:52
| 1
| 32,280
|
red888
|
78,503,121
| 3,404,480
|
matplotlib.pyplot not closing window on Mac
|
<p><code>matplotlib</code> is not closing its window after the plot is disposed of. It is still running in the background.</p>
<pre class="lang-py prettyprint-override"><code> # python code
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(getXData(), getYData(), label="Line 1")
plt.savefig("plot.png", bbox_inches='tight')
plt.clf()
plt.close("all")
...
...
# long code that will run for hours
</code></pre>
<p><a href="https://i.sstatic.net/bmvtiWBU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bmvtiWBU.png" alt="matplotlib running after close" /></a></p>
<p>matplotlib running after closing</p>
<p>The question <a href="https://stackoverflow.com/questions/20401057/matplotlib-close-does-not-close-the-window">here</a> is different: I followed the accepted answer, but it does not work in my case. The issue is completely different, as my window is destroyed and only an icon is left on the taskbar.</p>
<p>Can you suggest what changes I need to make in the code to completely close the window?</p>
<p><em>This issue might exist on MacBook only</em></p>
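<p>Since I only ever save the figure and never call <code>plt.show()</code>, I'm wondering whether forcing a non-interactive backend would avoid creating a window at all (sketch of my guess, untested on macOS):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: no GUI window is ever created
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4], label="Line 1")
fig.savefig("plot.png", bbox_inches="tight")
plt.close(fig)  # frees the figure; nothing is left running
```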
|
<python><matplotlib><apple-m1>
|
2024-05-19 15:47:32
| 0
| 388
|
Prashant
|
78,503,039
| 10,906,068
|
How to properly create Python regex for allowing alphanumeric characters, commas, dots, spaces
|
<p>I am working with Python, trying to allow only alphanumeric characters, commas, dots, and spaces.</p>
<p>When I run the code below, it throws a syntax error:</p>
<pre><code>strx ='test, 22222 @% te-st test test'
result = strx.replace(/[^A-Za-z0-9, .]/g, '')
print(result)
</code></pre>
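<p>The <code>/.../g</code> literal is JavaScript syntax, which is why Python reports a syntax error; the Python equivalent uses <code>re.sub</code> (sketch):</p>

```python
import re

strx = 'test, 22222 @% te-st test test'
# Delete every character that is not alphanumeric, comma, dot, or space
result = re.sub(r'[^A-Za-z0-9, .]', '', strx)
print(result)  # 'test, 22222  test test test'
```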
|
<python>
|
2024-05-19 15:14:45
| 1
| 2,498
|
Nancy Moore
|
78,502,976
| 3,565,923
|
Selenium select click intercepted. Select.select_by_value()
|
<p>I want to change the number of rows shown in a table by selecting a value in a <code>Select</code> element. The select is obscured by a Google ads iframe. I can click the iframe's "x" button, but the iframe is still there.</p>
<pre class="lang-py prettyprint-override"><code>change_rows_el = driver.find_element(By.CSS_SELECTOR, 'select[aria-label="rows per page"')
ActionChains(driver).scroll_to_element(change_rows_el)
select = Select(change_rows_el)
select.select_by_value("50")
</code></pre>
<p>I get</p>
<pre><code>selenium.common.exceptions.ElementClickInterceptedException: Message: Element <select> is not clickable at point (1067,951) because another element <iframe id="google_ads.." > obscures it
</code></pre>
<p>If I had a button I probably could do something like:</p>
<pre class="lang-py prettyprint-override"><code>element = driver.find_element(By.ID, "element-id")
driver.execute_script("arguments[0].click();", element)
</code></pre>
<p>Probably obscuring iframe wouldn't make a difference in this case.</p>
<p>Please notice that I try to use <code>scroll_to_element</code>, but it doesn't matter since select element is already located in a viewport.</p>
<p>Screenshot below (select is located under "Ads by Google"):</p>
<p><a href="https://i.sstatic.net/26s6IoiM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26s6IoiM.png" alt="This is how table looks like. Change row select is under google's iframe" /></a></p>
|
<python><selenium-webdriver>
|
2024-05-19 14:56:48
| 2
| 350
|
user3565923
|
78,502,965
| 11,441,069
|
OpenAI Assistants API: Is there a better way to wait for the assistant's response, and how do I display the assistant's answer incrementally?
|
<p>Based on the available information (unfortunately, ChatGPT was not too useful), I created the following code that allows me to interact with the OpenAI Assistants API.</p>
<p>However, I still don't like the <code>_wait_for_run_completion</code> method and the <code>while</code> loop. Is there a better way to handle this?</p>
<pre><code>import os
import openai
from dotenv import load_dotenv
import time
class OpenAIChatAssistant:
def __init__(self, assistant_id, model="gpt-4o"):
self.assistant_id = assistant_id
self.model = model
if self.model != "just_copy":
load_dotenv()
openai.api_key = os.environ.get("OPENAI_API_KEY")
self.client = openai.OpenAI()
self._create_new_thread()
print('new instance started')
def _create_new_thread(self):
self.thread = self.client.beta.threads.create()
self.thread_id = self.thread.id
print(self.thread_id)
def reset_thread(self):
if self.model != "just_copy":
self._create_new_thread()
def set_model(self, model_name):
self.model = model_name
if self.model != "just_copy" and not hasattr(self, 'client'):
load_dotenv()
openai.api_key = os.environ.get("OPENAI_API_KEY")
self.client = openai.OpenAI()
self._create_new_thread()
def send_message(self, message):
if self.model == "just_copy":
return message
self.client.beta.threads.messages.create(
thread_id=self.thread_id, role="user", content=message
)
run = self.client.beta.threads.runs.create(
thread_id=self.thread_id,
assistant_id=self.assistant_id,
model=self.model
)
return self._wait_for_run_completion(run.id)
def _wait_for_run_completion(self, run_id, sleep_interval=1):
counter = 1
while True:
try:
run = self.client.beta.threads.runs.retrieve(thread_id=self.thread_id, run_id=run_id)
if run.completed_at:
messages = self.client.beta.threads.messages.list(thread_id=self.thread_id)
last_message = messages.data[0]
response = last_message.content[0].text.value
print(f'hello {counter}')
return response
except Exception as e:
raise RuntimeError(f"An error occurred while retrieving answer: {e}")
counter += 1
time.sleep(sleep_interval)
</code></pre>
<p>That class can be used in the console app in this way:</p>
<pre><code>import os
from openai_chat_assistant import OpenAIChatAssistant
def main():
assistant_id = "asst_..."
chat_assistant = OpenAIChatAssistant(assistant_id)
while True:
question = input("Enter your question (or 'exit' to quit, 'clean' to reset): ")
if question.lower() == 'exit':
break
elif question.lower() == 'clean':
os.system('cls' if os.name == 'nt' else 'clear')
chat_assistant.reset_thread()
print("Console cleared and thread reset.")
else:
response = chat_assistant.send_message(question)
print(f"Assistant Response: {response}")
if __name__ == "__main__":
main()
</code></pre>
<p>Of course, the <code>assistant_id</code> is needed. I set it in the <code>.env</code> file, the same as the API key:</p>
<pre><code>OPENAI_API_KEY=sk-proj-...
</code></pre>
|
<python><python-3.x><openai-api><openai-assistants-api>
|
2024-05-19 14:53:00
| 2
| 509
|
Krzysztof Krysztofczyk
|
78,502,897
| 2,223,967
|
Reduce the sum of differences between adjacent array elements
|
<p>I came across a coding challenge on the internet; the question is listed below:</p>
<blockquote>
<p>Have the function FoodDistribution(arr) read the array of numbers
stored in arr which will represent the hunger level of different
people ranging from 0 to 5 (0 meaning not hungry at all, 5 meaning
very hungry). You will also have N sandwiches to give out which will
range from 1 to 20. The format of the array will be [N, h1, h2, h3,
...] where N represents the number of sandwiches you have and the rest
of the array will represent the hunger levels of different people.
Your goal is to minimize the hunger difference between each pair of
people in the array using the sandwiches you have available.</p>
<p>For example: if arr is [5, 3, 1, 2, 1], this means you have 5
sandwiches to give out. You can distribute them in the following order
to the people: 2, 0, 1, 0. Giving these sandwiches to the people their
hunger levels now become: [1, 1, 1, 1]. The difference between each
pair of people is now 0, the total is also 0, so your program should
return 0. Note: You may not have to give out all, or even any, of your
sandwiches to produce a minimized difference.</p>
<p>Another example: if arr is [4, 5, 2, 3, 1, 0] then you can distribute
the sandwiches in the following order: [3, 0, 1, 0, 0] which makes all
the hunger levels the following: [2, 2, 2, 1, 0]. The differences
between each pair of people is now: 0, 0, 1, 1 and so your program
should return the final minimized difference of 2.</p>
</blockquote>
<p>My first approach was to try to solve it greedily as the following:</p>
<ol>
<li>Loop until the sandwiches are zero</li>
<li>For each element in the array copy the array and remove one hunger at location i</li>
<li>Get the best combination that will give you the smallest hunger difference</li>
<li>Reduce the sandwiches by one and consider the local min as the new hunger array</li>
<li>Repeat until sandwiches are zero or the hunger difference is zero</li>
</ol>
<p>I thought that taking the local minimum would lead to the global minimum, which turned out to be wrong for the following test case: <code>[7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5]</code></p>
<pre><code>def FoodDistribution(arr):
sandwiches = arr[0]
hunger_levels = arr[1:]
# Function to calculate the total difference
def total_difference(hunger_levels):
return sum(abs(hunger_levels[i] - hunger_levels[i + 1]) for i in range(len(hunger_levels) - 1))
def reduce_combs(combs):
local_min = float('inf')
local_min_comb = None
for comb in combs:
current_difference = total_difference(comb)
if current_difference < local_min:
local_min = current_difference
local_min_comb = comb
return local_min_comb
# Function to distribute sandwiches
def distribute_sandwiches(sandwiches, hunger_levels):
global_min = total_difference(hunger_levels)
print(global_min)
while sandwiches > 0 and global_min > 0:
combs = []
for i in range(len(hunger_levels)):
comb = hunger_levels[:]
comb[i] -= 1
combs.append(comb)
local_min_comb = reduce_combs(combs)
x = total_difference(local_min_comb)
print( sandwiches, x, local_min_comb)
global_min = min(global_min, x)
hunger_levels = local_min_comb
sandwiches -= 1
return global_min
# Distribute sandwiches and calculate the minimized difference
global_min = distribute_sandwiches(sandwiches, hunger_levels)
return global_min
if __name__ == "__main__":
print(FoodDistribution([7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5]))
</code></pre>
<p>I changed my approach to brute force, planning to then use memoization to optimize the time complexity:</p>
<ol>
<li>Recurse until out of bounds or sandwiches are zero</li>
<li>For each location there are two options either to use a sandwich or ignore</li>
<li>When the option is to use a sandwich decrement sandwiches by one and stay at the same index.</li>
<li>When the option is to ignore increment the index by one.</li>
<li>Take the minimum between the two options and return it.</li>
</ol>
<p>The issue here is that I didn't know what to store in the memo, and storing the index and sandwiches is not enough. I am not sure if this problem has a better complexity than 2^(n+s). Is there a way to know whether dynamic programming or memoization is not the way to solve the problem, and in this case, can I improve the complexity with memoization, or does this problem need a different approach?</p>
<pre><code>def FoodDistribution(arr):
    sandwiches = arr[0]
    hunger_levels = arr[1:]
    # Distribute sandwiches and calculate the minimized difference
    global_min = solve(0, sandwiches, hunger_levels)
    return global_min

def solve(index, sandwiches, hunger_levels):
    if index >= len(hunger_levels) or sandwiches == 0:
        return total_difference(hunger_levels)
    # take a sandwich
    hunger_levels[index] += -1
    sandwiches += -1
    minTake = solve(index, sandwiches, hunger_levels)
    hunger_levels[index] += 1
    sandwiches += 1
    # don't take a sandwich
    dontTake = solve(index + 1, sandwiches, hunger_levels)
    return min(minTake, dontTake)

def total_difference(hunger_levels):
    return sum(abs(hunger_levels[i] - hunger_levels[i + 1]) for i in range(len(hunger_levels) - 1))

if __name__ == "__main__":
    print(FoodDistribution([7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5]))
</code></pre>
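One state that does make the recursion above memoizable (a sketch, not necessarily the intended solution) is the triple (index, sandwiches left, final value chosen for the previous person): the cost contributed at each position depends only on those three things, so with hunger values bounded by h this gives roughly n*s*h states:

```python
from functools import lru_cache

def food_distribution(arr):
    sandwiches, hunger = arr[0], tuple(arr[1:])

    @lru_cache(maxsize=None)
    def solve(index, left, prev):
        if index == len(hunger):
            return 0
        best = None
        # Try giving this person 0..left sandwiches; the cost added here
        # depends only on the previous person's *final* value.
        for give in range(left + 1):
            val = hunger[index] - give
            cost = 0 if prev is None else abs(prev - val)
            total = cost + solve(index + 1, left - give, val)
            if best is None or total < best:
                best = total
        return best

    return solve(0, sandwiches, None)

print(food_distribution([7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5]))
```

Because it explores every way of splitting the sandwiches across positions, this version is exact; for the example above it reproduces the optimum of 6 stated in the edit below.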
<p><strong>Edit:</strong> Multiple states will give you the optimal answer for the use case above</p>
<pre><code>sandwiches = 7
hunger = [5, 4, 3, 4, 5, 2, 3, 1, 4, 5]
optimal is 6
states as follow
[3, 3, 3, 3, 3, 2, 2, 1, 4, 5]
[4, 3, 3, 3, 3, 2, 2, 1, 4, 4]
[4, 4, 3, 3, 2, 2, 2, 1, 4, 4]
[4, 4, 3, 3, 3, 2, 1, 1, 4, 4]
[4, 4, 3, 3, 3, 2, 2, 1, 3, 4]
[4, 4, 3, 3, 3, 2, 2, 1, 4, 4]
[5, 4, 3, 3, 3, 2, 2, 1, 3, 3]
</code></pre>
<p><strong>Note:</strong> I accepted @Matt Timmermans' answer as it provides the best time complexity, n and n log n. But the two other answers are amazing and good for understanding how to implement the solution using dynamic programming or memoization. Personally I prefer the memoization version; its expected time complexity is s*n*h, where h is the max hunger level in the array.</p>
|
<python><algorithm><dynamic><greedy>
|
2024-05-19 14:28:17
| 4
| 980
|
Mohd Alomar
|
78,502,470
| 9,072,753
|
How to create HH:MM optimized regex?
|
<p>Given two times in the form <code>HH:MM</code>, I want to filter logs containing all hours between the given hours.</p>
<p>So for example, given 09:30 and 15:30, a simple regex <code>T(09:[345]|1[01234]|15:[123])</code> would suffice. There is the assumption that input is in correct format.</p>
<p>Creating a regex like, <code>T(09:30|09:31|09:32|.....|15:30)</code> is a trivial simple loop, but I wonder what transformations could I do to optimize such a regex and if such transformations are worth it.</p>
<p>I am writing in python and this is my current code. If it simplifies, I am open to any Unix tools.</p>
<pre><code>from dataclasses import dataclass
import datetime
import re

@dataclass
class TimeFilter:
    start: datetime.time
    stop: datetime.time

    def timergx(self):
        i = self.start.hour * 60 + self.start.minute
        stop = self.stop.hour * 60 + self.stop.minute
        return "T(" + "|".join(f"{x // 60:02d}:{x % 60:02d}" for x in range(i, stop)) + ")"

def HHMM2time(txt: str):
    return datetime.time(*[int(x) for x in txt.split(":")])

tf = TimeFilter(HHMM2time("9:30"), HHMM2time("15:30"))
assert re.match(tf.timergx(), "T10:30")
</code></pre>
<p>How to "generate" the regex so it is "faster"? Or, is the regex in the form <code>T(09:30|09:31|09:32|.....|15:30)</code> actually "faster" to process than any optimized form? If relevant, GO regex engine will be using the regex.</p>
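For comparison, here is one way to generate a more compact pattern by grouping whole hours and per-tens-digit minute ranges. This is only a sketch: the function names and the half-open start &lt;= t &lt; stop convention are my assumptions, chosen to mirror `range(i, stop)` in the code above. Whether it is actually faster than the flat alternation depends on the engine; Go's RE2 compiles both forms to an automaton, so the difference is mostly pattern size rather than matching speed.

```python
import re

def minute_range_regex(lo, hi):
    # Alternation matching two-digit minutes lo..hi inclusive,
    # grouped per tens digit (simple, not maximally compact).
    parts = []
    m = lo
    while m <= hi:
        tens = m // 10
        last = min(hi, tens * 10 + 9)
        if m == tens * 10 and last == tens * 10 + 9:
            parts.append(f"{tens}[0-9]")
        elif m == last:
            parts.append(f"{tens}{m % 10}")
        else:
            parts.append(f"{tens}[{m % 10}-{last % 10}]")
        m = last + 1
    return "|".join(parts)

def time_range_regex(sh, sm, eh, em):
    # Matches "THH:MM" with start <= time < end (assumes start < end).
    alts = []
    if sh == eh:
        alts.append(f"{sh:02d}:({minute_range_regex(sm, em - 1)})")
    else:
        alts.append(f"{sh:02d}:({minute_range_regex(sm, 59)})")
        for h in range(sh + 1, eh):
            alts.append(f"{h:02d}:[0-5][0-9]")  # full hours need no minute split
        if em > 0:
            alts.append(f"{eh:02d}:({minute_range_regex(0, em - 1)})")
    return "T(" + "|".join(alts) + ")"

print(time_range_regex(9, 30, 15, 30))
```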
|
<python><regex>
|
2024-05-19 11:47:45
| 1
| 145,478
|
KamilCuk
|
78,502,465
| 20,920,790
|
How to simulate INSERT INTO VALUES ON CONFLICT DO UPDATE SET in Clickhouse?
|
<p>I need to update a ClickHouse database from a Python script (I use clickhouse_connect), so I need to check for conflicts before inserting any data.</p>
<ol>
<li>In some tables I just need to add new data.
For new data I generate IDs manually, so I check the last ID in the database, create IDs for the new rows, and insert them into the database with clickhouse_connect.get_client().insert_df().</li>
<li>One table needs to be updated with auto-generated IDs: some rows need to be changed and some just inserted.</li>
</ol>
<p>First, I get the IDs that exist both in the new data and in the database, and select the rows that differ.
Those rows should be altered.
For IDs that do not exist in the database, I just insert the new data.</p>
<p>So what is the right way to update data in ClickHouse, and how can I do it with clickhouse_connect?
Is a mutation the best way?</p>
<pre><code>ALTER TABLE [<database>.]<table> UPDATE <column> = <expression> WHERE <filter_expr>
</code></pre>
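ClickHouse has no ON CONFLICT clause, and an `ALTER TABLE ... UPDATE` is an asynchronous mutation meant for occasional fixes rather than per-row upserts (a ReplacingMergeTree table that deduplicates rows by key on merge is usually the more idiomatic route). If you do go the mutation route, one option is to build the statement as a string and submit it; with clickhouse_connect this would presumably be `client.command(sql)`. A minimal sketch where the table and column names are placeholders, and values are assumed to be pre-escaped:

```python
def build_update_sql(table, assignments, id_column, ids):
    # assignments: dict of column -> literal/expression, already escaped
    set_clause = ", ".join(f"{col} = {val}" for col, val in assignments.items())
    id_list = ", ".join(str(i) for i in ids)
    return f"ALTER TABLE {table} UPDATE {set_clause} WHERE {id_column} IN ({id_list})"

sql = build_update_sql("db_name.my_table", {"name": "'new'"}, "id", [10, 11])
print(sql)
```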
|
<python><clickhouse>
|
2024-05-19 11:46:18
| 0
| 402
|
John Doe
|
78,502,213
| 6,946,110
|
Python 3.12 Sentry-sdk AttributeError: module 'collections' has no attribute 'MutableMapping'
|
<p>I tried to use the latest version of <code>sentry-sdk</code> with Python 3.12, but when I run my Django app, it shows the following error:</p>
<pre><code>AttributeError: module 'collections' has no attribute 'MutableMapping'
</code></pre>
<p>Full trace as follows:</p>
<pre><code>File "/usr/local/lib/python3.12/site-packages/sentry_sdk/__init__.py", line 1, in <module>
from sentry_sdk.hub import Hub, init
File "/usr/local/lib/python3.12/site-packages/sentry_sdk/hub.py", line 5, in <module>
from sentry_sdk.scope import Scope, _ScopeManager
File "/usr/local/lib/python3.12/site-packages/sentry_sdk/scope.py", line 11, in <module>
from sentry_sdk.attachments import Attachment
File "/usr/local/lib/python3.12/site-packages/sentry_sdk/attachments.py", line 5, in <module>
from sentry_sdk.envelope import Item, PayloadRef
File "/usr/local/lib/python3.12/site-packages/sentry_sdk/envelope.py", line 6, in <module>
from sentry_sdk.session import Session
File "/usr/local/lib/python3.12/site-packages/sentry_sdk/session.py", line 5, in <module>
from sentry_sdk.utils import format_timestamp
File "/usr/local/lib/python3.12/site-packages/sentry_sdk/utils.py", line 1302, in <module>
HAS_REAL_CONTEXTVARS, ContextVar = _get_contextvars()
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/sentry_sdk/utils.py", line 1272, in _get_contextvars
if not _is_contextvars_broken():
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/sentry_sdk/utils.py", line 1213, in _is_contextvars_broken
from eventlet.patcher import is_monkey_patched # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/eventlet/__init__.py", line 6, in <module>
from eventlet import convenience
File "/usr/local/lib/python3.12/site-packages/eventlet/convenience.py", line 7, in <module>
from eventlet.green import socket
File "/usr/local/lib/python3.12/site-packages/eventlet/green/socket.py", line 21, in <module>
from eventlet.support import greendns
File "/usr/local/lib/python3.12/site-packages/eventlet/support/greendns.py", line 78, in <module>
setattr(dns, pkg, import_patched('dns.' + pkg))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/eventlet/support/greendns.py", line 60, in import_patched
return patcher.import_patched(module_name, **modules)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/eventlet/patcher.py", line 132, in import_patched
return inject(
^^^^^^^
File "/usr/local/lib/python3.12/site-packages/eventlet/patcher.py", line 109, in inject
module = __import__(module_name, {}, {}, module_name.split('.')[:-1])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/dns/namedict.py", line 35, in <module>
class NameDict(collections.MutableMapping):
^^^^^^^^^^^^^^^^^^^^^^^^^^
</code></pre>
<p>AttributeError: module 'collections' has no attribute 'MutableMapping'</p>
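The failing import is not Sentry itself but an old dnspython (pulled in via eventlet) that still references `collections.MutableMapping`, an alias removed in Python 3.10; the real fix is upgrading eventlet and dnspython (or sentry-sdk, so it no longer imports eventlet). As a stopgap only, a compatibility shim placed before anything imports the old library restores the alias. This is a workaround, not a proper fix:

```python
import collections
import collections.abc

# Python 3.10 removed the collections.MutableMapping alias; restore it
# so legacy libraries that still reference it can import. Stopgap only.
if not hasattr(collections, "MutableMapping"):
    collections.MutableMapping = collections.abc.MutableMapping
```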
|
<python><django><sentry><python-3.12>
|
2024-05-19 10:11:58
| 0
| 1,553
|
msln
|
78,502,175
| 16,127,735
|
Search for text in a very large txt file (50+GB)
|
<p>I have a <code>hashes.txt</code> file that stores strings and their compressed SHA-256 hash values. Each line in the file is formatted as follows:</p>
<p><code><compressed_hash>:<original_string></code></p>
<p>The <code>compressed_hash</code> is created by taking the 6th, 13th, 20th, and 27th characters of the full SHA-256 hash. For example, the string <code>alon</code> when hashed: <code>5a24f03a01d5b10cab6124f3c0e7086994ac9c869fc8e76e1463458f829fc864</code> would be stored as:
<code>0db3:alon</code></p>
<p>I have a <code>search.py</code> script that works like this</p>
<p>For example, if the user inputs <code>5a24f03a01d5b10cab6124f3c0e7086994ac9c869fc8e76e1463458f829fc864</code> in <code>search.py</code> the script searches for its shortened form, <code>0db3</code> in <code>hashes.txt</code>.
If multiple matches are found, like:</p>
<pre><code>0db3:alon
0db3:apple
</code></pre>
<p>The script rehashes the matches (<code>alon</code>, <code>apple</code>) to get their full SHA-256 hash, and if there is a match (eg. <code>alon</code> when fully hashed matches the user input (<code>5a24f03a01d5b10cab6124f3c0e7086994ac9c869fc8e76e1463458f829fc864</code>), the script prints the string (<code>alon</code>)</p>
<p>The problem with this script is that it, the search usually takes around 1 hour, and my <code>hashes.txt</code> is 54GB. Here is the <code>search.py</code>:</p>
<pre><code>import hashlib
import mmap

def compress_hash(hash_value):
    return hash_value[6] + hash_value[13] + hash_value[20] + hash_value[27]

def search_compressed_hash(hash_input, compressed_file):
    compressed_input = compress_hash(hash_input)
    potential_matches = []
    with open(compressed_file, "r+b") as file:
        # Memory-map the file, size 0 means the whole file
        mmapped_file = mmap.mmap(file.fileno(), 0)
        # Read through the memory-mapped file line by line
        for line in iter(mmapped_file.readline, b""):
            line = line.decode().strip()
            parts = line.split(":", 1)  # Split only on the first colon
            if len(parts) == 2:  # Ensure there are exactly two parts
                compressed_hash, string = parts
                if compressed_hash == compressed_input:
                    potential_matches.append(string)
        mmapped_file.close()
    return potential_matches

def verify_full_hash(potential_matches, hash_input):
    for string in potential_matches:
        if hashlib.sha256(string.encode()).hexdigest() == hash_input:
            return string
    return None

if __name__ == "__main__":
    while True:
        hash_input = input("Enter the hash (or type 'exit' to quit): ")
        if hash_input.lower() == 'exit':
            break
        potential_matches = search_compressed_hash(hash_input, "hashes.txt")
        found_string = verify_full_hash(potential_matches, hash_input)
        if found_string:
            print(f"Corresponding string: {found_string}")
        else:
            print("String not found for the given hash.")
</code></pre>
<p>And, if it helps, here's the <code>hash.py</code> script that actually generates the strings and hashes and puts them in <code>hashes.txt</code></p>
<pre><code>import hashlib
import sys
import time

# Set the interval for saving progress (in seconds)
SAVE_INTERVAL = 60  # Save progress every minute
BUFFER_SIZE = 1000000  # Number of hashes to buffer before writing to file

def generate_hash(string):
    return hashlib.sha256(string.encode()).hexdigest()

def compress_hash(hash_value):
    return hash_value[6] + hash_value[13] + hash_value[20] + hash_value[27]

def write_hashes_to_file(start_length):
    buffer = []  # Buffer to store generated hashes
    last_save_time = time.time()  # Store the last save time
    for generated_string in generate_strings_and_hashes(start_length):
        full_hash = generate_hash(generated_string)
        compressed_hash = compress_hash(full_hash)
        buffer.append((compressed_hash, generated_string))
        if len(buffer) >= BUFFER_SIZE:
            save_buffer_to_file(buffer)
            buffer = []  # Clear the buffer after writing to file
        # Check if it's time to save progress
        if time.time() - last_save_time >= SAVE_INTERVAL:
            print("Saving progress...")
            save_buffer_to_file(buffer)  # Save any remaining hashes in buffer
            buffer = []  # Clear buffer after saving
            last_save_time = time.time()
    # Save any remaining hashes in buffer
    if buffer:
        save_buffer_to_file(buffer)

def save_buffer_to_file(buffer):
    with open("hashes.txt", "a") as file_hashes:
        file_hashes.writelines(f"{compressed_hash}:{generated_string}\n" for compressed_hash, generated_string in buffer)

def generate_strings_and_hashes(start_length):
    for length in range(start_length, sys.maxsize):  # Use sys.maxsize to simulate infinity
        current_string = [' '] * length  # Initialize with spaces
        while True:
            yield ''.join(current_string)
            if current_string == ['z'] * length:  # Stop when all characters reach 'z'
                break
            current_string = increment_string(current_string)

def increment_string(string_list):
    index = len(string_list) - 1
    while index >= 0:
        if string_list[index] == 'z':
            string_list[index] = ' '
            index -= 1
        else:
            string_list[index] = chr(ord(string_list[index]) + 1)
            break
    if index < 0:
        string_list.insert(0, ' ')
    return string_list

def load_progress():
    # You may not need this function anymore
    return 1  # Just return a default value

if __name__ == "__main__":
    write_hashes_to_file(load_progress())
</code></pre>
<p>My OS is Windows 10.</p>
|
<python><hash><sha256>
|
2024-05-19 09:55:06
| 1
| 1,958
|
Alon Alush
|
78,502,149
| 13,329,963
|
User defined logvars for Django application deployed on uWSGI server
|
<p>I have deployed a Django application on a uWSGI server, which has the following middleware</p>
<pre><code># myapp/middleware.py
from django.utils.deprecation import MiddlewareMixin

class LogUserMiddleware(MiddlewareMixin):
    def process_request(self, request):
        request.META['REMOTE_USER'] = 'test'
</code></pre>
<p>The middleware is added in the settings.py file as well</p>
<pre><code># myapp/settings.py
MIDDLEWARE = [
    ...
    'myapp.middleware.LogUserMiddleware',
    ...
]
</code></pre>
<p>Then in the uWSGI config file also the logvar is used</p>
<pre><code>...
log-format = %(ltime) %(addr){uWSGI} : %(method) %(uri) %(status) %(REMOTE_USER)
...
</code></pre>
<p>But still the logs show as</p>
<pre><code>19/May/2024:09:01:49 +0000 127.0.0.1{uWSGI} : GET /health_check 200 -
</code></pre>
<p>Basically the logvar is getting replaced with <strong>-</strong> instead of showing <strong>test</strong></p>
<p>Is there any extra config to be done?</p>
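Setting <code>request.META</code> only changes Django's view of the request; uWSGI's <code>%(REMOTE_USER)</code> logvar is filled from the original request environment, which is likely why it stays <code>-</code>. uWSGI exposes <code>uwsgi.set_logvar()</code> (importable only inside a uWSGI worker) for registering per-request log variables, paired with <code>%(yourvar)</code> in <code>log-format</code>. A sketch using a plain-callable middleware; the logvar name <code>remoteuser</code> is my choice:

```python
# myapp/middleware.py -- sketch; pair with `log-format = ... %(remoteuser)`
try:
    import uwsgi  # importable only when running under uWSGI
except ImportError:
    uwsgi = None

class LogUserMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if uwsgi is not None:
            # Register a custom log variable for this request
            uwsgi.set_logvar("remoteuser", "test")
        return self.get_response(request)
```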
|
<python><django><logging><uwsgi>
|
2024-05-19 09:43:48
| 1
| 2,818
|
Prateek Gupta
|
78,502,090
| 20,920,790
|
How to connect to Clickhouse with SSL-on in Python?
|
<p>I am trying to connect to a ClickHouse database in Python with pandahouse.
In DBeaver my connection settings are:</p>
<pre><code>host: **.***.**.***
port: 8443
database: db_name
user: user_name
password: ***
SSH: off
SSL: on
SSl mode: None
</code></pre>
<p>How do I add the SSL setting to the pandahouse connection?</p>
<p>I've also tried clickhouse_connect, but I don't get how to add SSL to the connection.</p>
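With clickhouse_connect, TLS is controlled by the <code>secure</code>/<code>verify</code> arguments to <code>get_client</code>. Mapping the DBeaver settings is my reading: "SSL: on" becomes <code>secure=True</code>, and "SSL mode: None" becomes <code>verify=False</code> (skip certificate validation). A sketch that just assembles the keyword arguments; you would pass them as <code>clickhouse_connect.get_client(**kwargs)</code>:

```python
def clickhouse_client_kwargs(host, user, password, database):
    # secure=True turns on TLS (DBeaver's "SSL: on");
    # verify=False skips certificate validation (DBeaver's "SSL mode: None").
    return dict(
        host=host,
        port=8443,
        username=user,
        password=password,
        database=database,
        secure=True,
        verify=False,
    )

# client = clickhouse_connect.get_client(**clickhouse_client_kwargs(...))
```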
|
<python><clickhouse>
|
2024-05-19 09:17:37
| 2
| 402
|
John Doe
|
78,502,050
| 10,570,372
|
How to type hint numpy arrays that accept any types of numpy floats?
|
<p>As per the question, I want to define a numpy type that accepts any numpy float (i.e. <code>np.float64</code>, <code>np.float32</code>, etc.). Currently I am doing the below; is there a way to bound the <code>NDArray</code> such that any subclass of <code>np.floating</code> is allowed?</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any

from numpy.typing import NBitBase, NDArray
import numpy as np

any_np_float: NDArray[Any]
</code></pre>
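For what it's worth, a sketch of how this can be expressed (my suggestion, not something from the question): parameterizing <code>NDArray</code> with <code>np.floating[Any]</code> accepts any float precision while, for type checkers, rejecting e.g. integer arrays:

```python
from typing import Any

import numpy as np
from numpy.typing import NDArray

# Any float precision: float16, float32, float64, ...
AnyFloatArray = NDArray[np.floating[Any]]

def halve(x: AnyFloatArray) -> AnyFloatArray:
    return x / 2

print(halve(np.array([1.0, 2.0], dtype=np.float32)))
```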
|
<python><numpy><python-typing>
|
2024-05-19 08:56:07
| 1
| 1,043
|
ilovewt
|
78,501,972
| 5,277,235
|
Iteratively source a value from a dictionary
|
<p>I have a data set where I know a limited number of values and based on other values in a data set I can figure out what the subsequent values are. I could do this with a huge if function, but I know long ago someone (in a job with code I no longer have access to) showed me how to source these values from a dictionary using lambda and apply.</p>
<p>I need to create this iteratively because whilst I can figure out a value post "Five_Zero", after that I have nothing to go on and need the prior value to know the next.</p>
<p>This is what my starting data looks like.<br />
<a href="https://i.sstatic.net/b1aItGUr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b1aItGUr.png" alt="enter image description here" /></a></p>
<p>I need to use a combination of WonLost, OnServeCorrected, and Score_Shift and source that from a list of possibles and return a value. This returns an error. I do not use lambda a lot and do not understand it well - I have not been able to find examples of how to replicate this.</p>
<p>This is the error I've received:</p>
<pre><code>Traceback (most recent call last):
  File "mappingapplying.py", line 30, in
    touch3["Score"][i] = touch3.var.apply(lambda x:x[d])
AttributeError: 'function' object has no attribute 'apply'
</code></pre>
<p>Indenting may be off as the code didn't copy cleanly.</p>
<pre><code>import pandas as pd
import numpy as np

data = [[1, "Won", 1, "Five", "Zero"],
        [2, "Lost", 1, "", ""],
        [3, "Lost", 1, "", ""],
        [4, "Lost", 0, "Five", "Zero"],
        [5, "Lost", 0, "", ""],
        [6, "Won", 0, "", ""]]

touch3 = pd.DataFrame(data, columns=["Seconds", "WonLost", "OnServeCorrected", "S_Score", "R_Score"])

touch3["Score"] = touch3.S_Score + "_" + touch3.R_Score
touch3["Score_Shift"] = touch3.Score.shift(1)

d = [['Won_0_Five_Zero', "Five_Five"],
     ['Won_1_Five_Zero', "Three_Zero"],
     ['Lost_0_Five_Zero', "Three_Zero"],
     ['Lost_1_Five_Zero', "Five_Five"],
     ['Won_0_Five_Five', "Five_Three"],
     ['Lost_0_Five_Five', "Three_Five"],
     ['Won_1_Five_Five', "Three_Five"],
     ['Lost_1_Five_Five', "Five_Three"]]

for i in range(len(touch3.Seconds)):
    if touch3.Score[i] not in (["Five_Zero", "Zero_Five"]):
        touch3["var"] = touch3.WonLost[i] + "_" + str(touch3.OnServeCorrected[i]) + "_" + touch3.Score_Shift[i]
        touch3["Score"][i] = touch3.var.apply(lambda x: x[d])
    else:
        touch3.Score[i] = touch3.Score[i]
</code></pre>
<p>Expected output should be:
<a href="https://i.sstatic.net/z1XAbVQ5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z1XAbVQ5.png" alt="enter image description here" /></a></p>
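For what it's worth, a sketch of the dict-lookup pattern (my suggestion, so the exact output may not match the image for keys missing from <code>d</code>): turn the list of pairs into a dict, then loop once, carrying the previously *computed* score forward; a vectorized <code>apply</code> cannot do this because each row depends on the prior computed value. Note also that <code>touch3.var</code> resolves to the DataFrame's built-in <code>.var()</code> method, which is why the error mentions a 'function' object; the column must be accessed as <code>touch3["var"]</code>.

```python
import pandas as pd

data = [[1, "Won", 1, "Five", "Zero"],
        [2, "Lost", 1, "", ""],
        [3, "Lost", 1, "", ""],
        [4, "Lost", 0, "Five", "Zero"],
        [5, "Lost", 0, "", ""],
        [6, "Won", 0, "", ""]]
touch3 = pd.DataFrame(data, columns=["Seconds", "WonLost", "OnServeCorrected", "S_Score", "R_Score"])
touch3["Score"] = touch3.S_Score + "_" + touch3.R_Score

# List of pairs -> dict, so each lookup is a single O(1) step.
lookup = dict([
    ['Won_0_Five_Zero', "Five_Five"],
    ['Won_1_Five_Zero', "Three_Zero"],
    ['Lost_0_Five_Zero', "Three_Zero"],
    ['Lost_1_Five_Zero', "Five_Five"],
    ['Won_0_Five_Five', "Five_Three"],
    ['Lost_0_Five_Five', "Three_Five"],
    ['Won_1_Five_Five', "Three_Five"],
    ['Lost_1_Five_Five', "Five_Three"],
])

scores = []
prev = None
for won, serve, score in zip(touch3.WonLost, touch3.OnServeCorrected, touch3.Score):
    if score in ("Five_Zero", "Zero_Five"):
        prev = score  # anchor value, keep as-is
    else:
        # carry the computed previous score forward; fall back to the
        # raw value when the key is absent from the table
        prev = lookup.get(f"{won}_{serve}_{prev}", score)
    scores.append(prev)

touch3["Score"] = scores
print(touch3["Score"].tolist())
```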
|
<python><pandas><dictionary><lambda><apply>
|
2024-05-19 08:16:28
| 1
| 607
|
James Oliver
|
78,501,884
| 17,718,669
|
Is there a better way to update all relation rows with `ForegnKey` relation in `Django ORM`
|
<p>I have two models, one like this:</p>
<pre class="lang-py prettyprint-override"><code>class PhysicalSensor(models.Model):
    name = models.CharField(unique=True, max_length=255)

    def __str__(self) -> str:
        return self.name

class Sensor(models.Model):
    physical_sensor = models.ForeignKey(PhysicalSensor, on_delete=models.RESTRICT, null=True, blank=True)
</code></pre>
<p>So, when I add a <code>PhysicalSensor</code> record, I want to attach the given <code>Sensor</code> records to it.
Right now I handle this in my <code>Serializer</code> class:</p>
<pre class="lang-py prettyprint-override"><code>class PhysicalSensorSerializer(ModelSerializer):
    sensors = SensorSerialiazer(required=False, many=True)

    class Meta:
        fields = ("__all__")
        model = PhysicalSensor

    def create(self, validated_data):
        sensors = validated_data.pop("sensors")
        ph_sensor = super().create(validated_data)
        for sensor in sensors:
            s = Sensor.objects.get(sensor["id"])
            s.physical_sensor = ph_sensor
            s.save()
</code></pre>
<p>and for editing a <code>PhysicalSensor</code>, I use the same thing in the <code>.update()</code> method of my <code>Serializer</code>.</p>
<p>Is there any better way of doing this, or a best practice for it?</p>
|
<python><django><django-models><django-rest-framework><django-serializer>
|
2024-05-19 07:33:56
| 1
| 326
|
parsariyahi
|
78,501,868
| 1,867,328
|
Define data type in creating custom function
|
<p>I have below custom function</p>
<pre><code>import pandas as pd

def MyFn(DF: pd.DataFrame) -> float:
    return DF['Col_A'].values[1] - DF['Col_B'].values[1]
</code></pre>
<p>However, I want to force the user to supply a DataFrame with two columns named <code>'Col_A'</code> and <code>'Col_B'</code>.</p>
<p>Any insight into how I can do this would be much appreciated.</p>
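Python's type hints are not enforced at runtime, so the usual approach is an explicit check at the top of the function that fails fast with a clear message. A sketch (libraries such as pandera can declare DataFrame schemas more formally, if you prefer a declarative route):

```python
import pandas as pd

REQUIRED_COLUMNS = {"Col_A", "Col_B"}

def MyFn(DF: pd.DataFrame) -> float:
    # Runtime guard: the type hint alone cannot constrain column names.
    missing = REQUIRED_COLUMNS - set(DF.columns)
    if missing:
        raise ValueError(f"DataFrame is missing required columns: {sorted(missing)}")
    return DF["Col_A"].values[1] - DF["Col_B"].values[1]

print(MyFn(pd.DataFrame({"Col_A": [1, 5], "Col_B": [2, 3]})))
```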
|
<python><pandas>
|
2024-05-19 07:25:17
| 2
| 3,832
|
Bogaso
|
78,501,768
| 7,290,715
|
Generate a SQL string containing UNION and JOIN recursively
|
<p>I am trying to generate a SQL string recursively that would contain UNION and JOIN using Python.</p>
<p>Below are the conditions:</p>
<ol>
<li><p>The number of columns is determined by a variable. If v = 2 the number of columns would be 3, with the first one always fixed. If v = 3 the number of columns would be 4, again with the first one always fixed.</p>
</li>
<li><p>In each UNION block the number of joins is the number of columns minus 1, and this increases in each subsequent UNION block until the loop reaches v-1.</p>
</li>
<li><p>Number of UNION : v-1</p>
</li>
<li><p>Number of columns that would take <code>NULL</code> value (i.e. <code>null as</code>) in each UNION block : v-1. And this would keep on decreasing like v-2,v-3 in each subsequent UNION block till it reaches 0</p>
</li>
</ol>
<p>With the above two conditions let me show the desired output.</p>
<pre><code>v = 2
tables = ['t1','t2'] #<---number of tables would be fixed
</code></pre>
<p>Now the desired output would be the sql string as :</p>
<pre><code>select
a.col_1 #<---This would be fixed
,a0.col_2
,null as col_3
from t1 a
left join t2 a0
on a.col_1 = a0.col_1 and a0.level = 1
union all
select
a.col_1
,a0.col_2
,a1.col_3 #<----the null is being replaced by actual column col_3
from table1 a
left join t2 a0
on a.col_1=a0.col_1 and a0.level = 1
left join t2 a1 #<----please observe that same table t1 is being aliased differently
on a.col_1 = a1.col_1 and a1.level = 2
</code></pre>
<p>Now what I have worked:</p>
<p><strong>Approach Updated</strong></p>
<pre><code>tables = ['t1','t2']
lvl = 3
std_cols = ['parent','node']  #<-- These two are also fixed

cols_select = ", ".join(['NULL AS col_{}'.format(i+1) for i in range(1,lvl)])

## Generating columns dynamically within UNION block ##
cols = ''
for i in range(lvl):
    if i > 0:
        cols = cols + 'a{}.{}'.format(i+1, std_cols[1]) + ' AS col_{},'.format(i+1)
    else:
        cols = ',' + cols
print(cols[:-1])

## Generating 'join' dynamically within UNION block
join_s = ''
for i in range(lvl):
    if i > 0:
        join_s = join_s + ' left join {} a{}'.format(tables[1], i+1) + ' on c.{}= a{}.{} and a{}.level={}'.format(std_cols[0], i+1, std_cols[0], i+1, i+1)
print(join_s)

# Combining #
sql_i = ''
sql_n = ''
for i in range(lvl):
    if i == 0:
        sql_i = sql_i + 'select c.{},'.format(std_cols[0]) + 'r.{} AS col_{},'.format(std_cols[1], i+1) + cols_select + ' from {} c'.format(tables[0]) +\
            ' left join {} r'.format(tables[1]) + ' on c.{}=r.{} and r.level={}'.format(std_cols[0], std_cols[0], i+1)
    elif i > 0:
        sql_n = sql_n + 'select c.{}'.format(std_cols[0]) + cols[:-1] + ' from {} '.format(tables[0]) +\
            join_s
    else:
        sql_n = sql_n + 'wip
</code></pre>
<p>But this approach is not working.</p>
<ol>
<li><p>Join and Column part is not happening recursively. E.g. if <code>lvl = 3</code>, the joining string should be like below:</p>
<p><code>a1.col_1,NULL AS col_2,NULL as col_3 a1.col_1,a2.col_2, NULL as col_3 a1.col_1,a2.col_2,a3.col_3</code></p>
</li>
<li><p>Not able to include <code>UNION</code> part</p>
</li>
</ol>
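For comparison, here is a sketch that generates the whole string without recursion (the names and the exact SELECT-list formatting are my assumptions, reverse-engineered from the v = 2 example): block k, for k = 1..v, selects the first k+1 real columns, NULL-fills the rest, performs k self-joins on the second table, and the blocks are glued together with union all:

```python
def build_sql(v, tables):
    # Block k (1..v) exposes the first k+1 real columns, NULL-fills the
    # rest, and self-joins tables[1] k times; blocks joined by UNION ALL.
    blocks = []
    for k in range(1, v + 1):
        cols = ["a.col_1"]
        for j in range(1, v + 1):
            cols.append(f"a{j - 1}.col_{j + 1}" if j <= k else f"null as col_{j + 1}")
        joins = [
            f"left join {tables[1]} a{j} on a.col_1 = a{j}.col_1 and a{j}.level = {j + 1}"
            for j in range(k)
        ]
        blocks.append(
            "select " + ", ".join(cols) + f" from {tables[0]} a " + " ".join(joins)
        )
    return "\nunion all\n".join(blocks)

print(build_sql(2, ['t1', 't2']))
```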
|
<python><sql>
|
2024-05-19 06:35:08
| 0
| 1,259
|
pythondumb
|
78,501,737
| 1,361,802
|
How do I get the theme color in python-pptx?
|
<p>Simply put, I want to get an RGB value from a <code>ColorFormat</code> object, ie <code>presentation.slides[0].shapes[0].text_frame.paragraphs.runs[0].color</code>.</p>
<p>If <code>color.type == MSO_COLOR_TYPE.SCHEME</code> then it means the color is based off one of the slide deck's preset color schemes.</p>
<p>Attempting to do <code>color.rgb</code> yields</p>
<blockquote>
<p>AttributeError: no .rgb property on color type '_SchemeColor'</p>
</blockquote>
<p>I want to get the RGB for a <code>ColorFormat</code> of type <code>SCHEME</code>.</p>
|
<python><lxml><python-pptx>
|
2024-05-19 06:15:17
| 1
| 8,643
|
wonton
|
78,501,376
| 9,761,768
|
Python: How to define a type annotation so that the resulting object has both attributes and keys with the same name?
|
<p>I would like to annotate an object <code>obj</code> so that the type checker (or the language server protocol) understand that it has both some attributes and keys with same name, giving me the correct intellisense.</p>
<p>For example:<br />
If <code>obj.foo</code> is a string and <code>obj.bar</code> is an integer: Then, <code>obj["foo"]</code> is a string and <code>obj["bar"]</code> is an integer.</p>
<p>And I would like to get the appropriate intellisense for both attributes/keys when I type <code>obj.</code> or <code>obj["]</code>.</p>
<p>I'm using VS Code, with pylance/pyright</p>
<p>Below is what I got so far:</p>
<p>(Obs: the <code>try/except</code> block is needed because there is a run time error when base classes for <code>TypedDict</code> are not <code>TypedDict</code>)</p>
<pre class="lang-py prettyprint-override"><code>myobj = ... # this object comes from other context
from typing import Protocol, TypedDict, cast
class MyDict(TypedDict):
foo: str
bar: int
class MyProtocol(Protocol):
foo: str
bar: int
try:
class MyClass(MyDict, MyProtocol): ...
myobj = cast(MyClass, myobj)
except TypeError:
pass
</code></pre>
<p><a href="https://i.sstatic.net/WiJFFkYw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WiJFFkYw.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/oJze82wA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJze82wA.png" alt="enter image description here" /></a></p>
<p>Current use case for this: <a href="https://docs.streamlit.io/develop/concepts/architecture/session-state" rel="nofollow noreferrer">streamlit session state</a></p>
<p>If there is a solution for this use case, I would like to know.</p>
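At runtime the two access styles can be bridged with a small wrapper whose <code>__getitem__</code>/<code>__setitem__</code> delegate to attributes. A sketch, with a caveat: pyright will give attribute completions from the annotations, but it will only type <code>obj["foo"]</code> precisely per literal key if you add <code>@overload</code> signatures; as far as I know there is no general way today to make one class do both automatically:

```python
class Record:
    """Sketch: attributes that are also reachable as string keys."""
    foo: str
    bar: int

    def __init__(self, foo: str, bar: int) -> None:
        self.foo = foo
        self.bar = bar

    def __getitem__(self, key: str) -> object:
        return getattr(self, key)

    def __setitem__(self, key: str, value: object) -> None:
        setattr(self, key, value)

r = Record("hello", 1)
print(r.foo, r["bar"])
```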
|
<python><visual-studio-code><python-typing><pyright>
|
2024-05-19 01:20:29
| 1
| 753
|
Diogo
|
78,501,281
| 2,593,878
|
Change axis ticks with shared axes
|
<p>When I create a figure with shared axes, by default the tick labels go on the leftmost and bottom-most axes.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(2,2, sharex=True)
ax[0,0].plot(np.arange(10))
ax[0,1].plot(np.arange(10))
ax[1,0].plot(np.arange(10))
ax[1,1].axis('off')
ax[0,0].set_ylabel('y1')
ax[1,0].set_ylabel('y2')
ax[1,0].set_xlabel('x1')
ax[0,1].set_xlabel('x2')
</code></pre>
<p>Now, suppose I turn off one of the axes that has tick labels (e.g. bottom-right). Is there a way to put the tick labels back onto the newly bottom-most axis? E.g. in this figure, I would like the tick labels to appear on <code>ax[0,1]</code> right above the label <code>x2</code></p>
<p><a href="https://i.sstatic.net/2fJCKtHM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fJCKtHM.png" alt="enter image description here" /></a></p>
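With shared axes the label visibility is controlled per axis by the tick parameters, so re-enabling <code>labelbottom</code> on the axis that is now bottom-most should do it. A sketch (the Agg backend is used only so the snippet runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(2, 2, sharex=True)
ax[0, 0].plot(np.arange(10))
ax[0, 1].plot(np.arange(10))
ax[1, 0].plot(np.arange(10))
ax[1, 1].axis('off')

# Re-enable the x tick labels on the axis that is now bottom-most
ax[0, 1].xaxis.set_tick_params(labelbottom=True)
fig.canvas.draw()
```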
|
<python><matplotlib><xticks><multiple-axes>
|
2024-05-18 23:50:30
| 0
| 7,392
|
dkv
|
78,501,274
| 11,959,501
|
How to design sub frames separately and add them up in main application
|
<p>I'm trying to design and create my python application with <code>Qt Designer (using PyQt5)</code>.</p>
<p>My Application happens to have lots of buttons and "sub windows" (frames basically) which is quite frustrating to design all at once.</p>
<p>I am wondering if there's a way to design a Frame or a Widget separately and then kind of "Import" it to the main application window?</p>
<p>I couldn't find any and would love to hear if someone knows a way.</p>
|
<python><python-3.x><pyqt5><qt-designer>
|
2024-05-18 23:48:25
| 1
| 577
|
Jhon Margalit
|
78,501,237
| 6,573,259
|
How to have multiple python multi-process save to a variable
|
<p>I'm working on doing some image processing on a very large image, 25,088 by 36,864 pixels. Since the image is so large, I do the processing in 256x256 pixel 'tiles'. I noticed in my Windows Task Manager that neither my CPU, RAM, GPU, nor SSD reaches 50% utilization when running my function. This led me to believe that there is performance I can squeeze out somehow.</p>
<pre><code>def processImage(self, img, tileSize = 256, numberOfThreads = 8): # a function within a class
    height, width, depth = img.shape
    print(height, width, depth, img.dtype)
    # create a duplicate but empty matrix same as the img
    processedImage = np.zeros((height, width, 3), dtype=np.uint8)
    # calculate left and top offsets
    leftExcessPixels = int((width % tileSize) / 2)
    topExcessPixels = int((height % tileSize) / 2)
    # calculate the number of tile columns (X) and rows (Y)
    XNumberOfTiles = int(width / tileSize)
    YNumberOfTiles = int(height / tileSize)

    for y in range(YNumberOfTiles):
        for x in range(XNumberOfTiles):
            XStart = (leftExcessPixels + (tileSize * x))
            YStart = (topExcessPixels + (tileSize * y))
            XEnd = XStart + tileSize
            YEnd = YStart + tileSize
            croppedImage = img[YStart:YEnd, XStart:XEnd]
            print('Y: ' + str(y) + ' X: ' + str(x), end=" ")
            # process the cropped image and store it at the same location in the empty image
            processedImage[YStart:YEnd, XStart:XEnd] = self.doSomeImageProcessing(croppedImage)
</code></pre>
<p>Multithreading seems like the solution: parallelize the processing of the tiles. Since the tiles are independent of each other, there should be no problem working on multiple tiles at the same time. What I'm not sure how to do is this: the matrix returned by <code>self.doSomeImageProcessing(croppedImage)</code> should be placed back at the same coordinates, but in a different variable named <code>processedImage</code>. I'm worried that Python might not like multiple threads all trying to write to the <code>processedImage</code> variable at once; any ideas on how to approach it?</p>
<p>EDIT::</p>
<p>As @tijko mentioned, multithreading is not the answer here; multiprocessing is. Here is a standalone sample for testing:</p>
<pre><code>from multiprocessing import Process, Value, Array
from time import monotonic
import numpy as np

class myClass():
    def doSomeImageProcessing(self, npRGBImage):
        # Dummy image processing, just set all values to 255 or make the image white
        print('I have been called')
        a = npRGBImage
        for i in range(255):
            a[:] = i + 1
        return a

    def processImage(self, tileSize = 256):
        # create a large dummy image
        img = np.zeros((25088, 36864, 3), dtype=np.uint8)
        height, width, depth = img.shape
        print(height, width, depth, img.dtype)
        processedImage = np.zeros((height, width, depth), dtype=np.uint8)
        leftExcessPixels = int((width % tileSize) / 2)
        topExcessPixels = int((height % tileSize) / 2)
        XNumberOfTiles = int(width / tileSize)
        YNumberOfTiles = int(height / tileSize)
        for y in range(YNumberOfTiles):
            for x in range(XNumberOfTiles):
                XStart = (leftExcessPixels + (tileSize * x))
                YStart = (topExcessPixels + (tileSize * y))
                XEnd = XStart + tileSize
                YEnd = YStart + tileSize
                croppedImage = img[YStart:YEnd, XStart:XEnd]
                print('Y: ' + str(y) + ' X: ' + str(x), end=" ")
                # Recreate the full image using the processed tiles
                # Original Approach
                # Run time 61.375 seconds
                processedImage[YStart:YEnd, XStart:XEnd] = self.doSomeImageProcessing(croppedImage)
        # check if all indexes were set to 255
        mean = np.mean(processedImage)
        if mean == 255:
            print('Image Processing successful: ', mean)
        else:
            print('Image Processing failed: ', mean)

if __name__ == "__main__":
    x = myClass()
    start_time = monotonic()
    x.processImage()
    print(f"Run time {monotonic() - start_time} seconds")
<p>Output:</p>
<pre><code>Image Processing successful: 255.0
Run time 61.360000000015134 seconds
</code></pre>
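Because the tiles are independent, a <code>multiprocessing.Pool</code> where each worker returns its tile together with its coordinates sidesteps the shared-write worry entirely: only the parent process ever writes into the result array. A sketch with a tiny dummy image (the tile size and the "set everything to 255" step are placeholders for the real processing):

```python
import numpy as np
from multiprocessing import Pool

def process_tile(args):
    # Worker: gets (coords, tile), returns (coords, processed tile) so the
    # parent knows where to place the result. No shared state is written.
    coords, tile = args
    out = tile.copy()
    out[:] = 255  # dummy "image processing"
    return coords, out

def process_image(img, tile_size=4, workers=2):
    h, w, _ = img.shape
    result = np.zeros_like(img)
    jobs = []
    for ys in range(0, h, tile_size):
        for xs in range(0, w, tile_size):
            coords = (ys, ys + tile_size, xs, xs + tile_size)
            jobs.append((coords, img[coords[0]:coords[1], coords[2]:coords[3]]))
    with Pool(workers) as pool:
        for (ys, ye, xs, xe), tile in pool.imap_unordered(process_tile, jobs):
            result[ys:ye, xs:xe] = tile  # only the parent writes here
    return result

if __name__ == "__main__":
    img = np.zeros((8, 8, 3), dtype=np.uint8)
    print(process_image(img).mean())
```

Note that <code>process_tile</code> must be a module-level function so it can be pickled, and the driver code needs the <code>if __name__ == "__main__":</code> guard on Windows.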
|
<python><python-3.x><multiprocessing><python-multiprocessing><python-3.12>
|
2024-05-18 23:24:37
| 4
| 752
|
Jake quin
|
78,501,129
| 1,545,014
|
Railroad diagrams in Pyparsing: How about Forward() declarations? Rule renaming?
|
<p>I'm using pyparsing 3.0.9, python 3.9.16, and I'm trying to write a grammar for a (sub-)set of YAML. Not so much for the produced parser, as for the railroad diagrams. The actual state of the program is shown below.</p>
<ul>
<li><p>The grammar (<a href="https://pyyaml.org/wiki/LibYAML" rel="nofollow noreferrer">defined here</a>), as expected, has recursion (<code>mapping</code>s can contain <code>mapping</code>s). However, I can't seem to find how (or where) to set the name so it appears correctly in the diagram. Setting it in the Forward() declaration, or in the actual declaration? Any combination I tried produces output errors.</p>
</li>
<li><p>If I declare rules which derive from common 'ancestor', I have to declare them with a copy() from that ancestor, else <code>set_name()</code> fails except for the last one. This seems logical, except it doesn't seem to work always.</p>
</li>
<li><p>Some parts of the diagrams seem to be incorrect (not corresponding to the definition). Example: The <code>node</code> definition produces <code>alias</code> twice at the start.</p>
</li>
</ul>
<p>Can someone point me in the right direction?</p>
<p>My code:</p>
<pre><code>import pyparsing as pp
def make_parser():
mapping = pp.Forward().set_name('mapping')
label = pp.Word(pp.alphanums + '-_')
true_false = pp.one_of('yes no true false').set_name('true_false')
anchor = label.copy().set_name('anchor')
tag = label.copy().set_name('tag')
alias = label.copy().set_name('alias')
key_value = (
(pp.Keyword('yaml-scalar-event') +
(pp.Keyword('yaml-scalar-event') ^ mapping))
).set_name('key_value')
mapping = (
pp.Keyword('yaml-mapping-start-event') +
pp.ZeroOrMore(key_value) +
pp.Keyword('yaml-mapping-end-event')
)
sequence = (
anchor ^
tag
).set_name('sequence')
scalar = (
alias ^
tag ^
('plain_implicit' + true_false) ^
('quoted_implicit' + true_false) ^
mapping
).set_name('scalar')
node = (
alias ^
scalar ^
sequence ^
mapping
).set_name('node')
document = (
pp.Keyword('yaml-document-start-event') +
pp.ZeroOrMore(node) +
pp.Keyword('yaml-document-end-event')
).set_name('document')
stream = (
pp.Keyword('yaml-stream-start-event') +
pp.ZeroOrMore(document) +
pp.Keyword('yaml-stream-end-event')
).set_name('stream')
return stream
def test_parser():
parser = make_parser()
parser.create_diagram('yaml_grammar.html',
vertical = 2)
def main(args):
parser = make_parser()
parser.create_diagram('yaml_grammar.html', vertical = 2)
if __name__ == '__main__':
import sys
sys.exit(main(sys.argv))
</code></pre>
<p>Which produces the following output:</p>
<p><a href="https://i.sstatic.net/vG8LIo7C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vG8LIo7C.png" alt="enter image description here" /></a></p>
|
<python><pyparsing>
|
2024-05-18 22:12:05
| 1
| 5,472
|
jcoppens
|
78,500,891
| 5,686,623
|
Unable to configure logging for Scrapy
|
<p>Based on the scrapy <a href="https://docs.scrapy.org/en/latest/topics/logging.html#logging-settings" rel="nofollow noreferrer">documentation</a> I configured some logging settings:</p>
<pre><code>LOG_LEVEL = 'INFO'
LOG_FORMAT = '%(asctime)s [%(name)s] [%(levelname)s] %(message)s'
DATE_FORMAT = '%Y-%m-%d %H:%M:%S'
</code></pre>
<p>It looks like it does not work.</p>
<p>What I would expect is that there will be only INFO logs, with the defined LOG and DATE format, but it does not work; this is an example of some logs:</p>
<pre><code>DEBUG:scrapy.utils.log:Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
DEBUG:scrapy.utils.log:Using asyncio event loop: asyncio.unix_events._UnixSelectorEventLoop
DEBUG:spidername:URL: https://www.somepage.com/123123123
</code></pre>
<p>I also tried <code>LOG_ENABLED = False</code>, but it has no effect at all.</p>
<p>I tried this:</p>
<pre><code>logging.getLogger('scrapy').setLevel(logging.INFO)
logging.getLogger('asyncio').setLevel(logging.INFO)
</code></pre>
<p>But it also does not work; here is an example of the logs:</p>
<pre><code>DEBUG:scrapy.utils.log:Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
DEBUG:scrapy.utils.log:Using asyncio event loop: asyncio.unix_events._UnixSelectorEventLoop
</code></pre>
<p>Only way to change "something" is this:</p>
<pre class="lang-py prettyprint-override"><code>logging.basicConfig(
format=LOG_FORMAT,
level=logging.INFO,
datefmt='%Y-%m-%d %H:%M:%S',
)
</code></pre>
<p>This changed only LOG and DATE format, but there are still DEBUG logs. If I want to configure Spider log level, I have to do it in spider with <code>self.logger.setLevel(LOG_LEVEL)</code>.</p>
<p>Why do the settings work only partially, and how can I fix this? I do not want to configure multiple loggers in multiple places. It looks like standard Python logging best practice is not working here.</p>
<p>Scrapy version: 2.11.2</p>
<p><strong>EDIT1:</strong></p>
<p>I configured this settings in module settings:</p>
<pre><code>settings
├── __init__.py
├── local.py    # this file is empty
├── settings.py # scrapy setting file
├── myapp.py    # app config
└── ua.py       # UA config
</code></pre>
<p><code>__init__.py</code> looks like this:</p>
<pre class="lang-py prettyprint-override"><code>from .settings import * # noqa
from .myapp import * # noqa
# Import local config if exists
try:
from .local import * # noqa
except ModuleNotFoundError:
pass
except Exception:
raise
</code></pre>
<p>Other settings works, e.g. logging to file, or configuration for my app or even User Agent config. Just some configuration from settings.py works and some not.</p>
<p>Spider is run with command <code>scrapy crawl myspider</code></p>
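Independent of Scrapy's own settings plumbing, the filtering behaviour itself can be reproduced with the stdlib alone; a sketch (logger names copied from the output above, message text abbreviated) showing DEBUG records being dropped once the root level is INFO, while the handler applies the LOG/DATE format:

```python
import io
import logging

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter(
    '%(asctime)s [%(name)s] [%(levelname)s] %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
))

root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)   # DEBUG records are filtered out here

# Stand-ins for the loggers seen in the output above
logging.getLogger('scrapy.utils.log').debug('Using reactor: ...')        # suppressed
logging.getLogger('spidername').info('URL: https://www.somepage.com/123')

print(buf.getvalue())
```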
|
<python><logging><scrapy>
|
2024-05-18 20:07:50
| 0
| 1,777
|
dorinand
|
78,500,757
| 8,372,455
|
Embedding Python in Rust to call external Python libraries
|
<p>I'm trying to learn how to embed Python into a Rust application. For learning purposes, I want to create a Rust script/app that runs a forever loop. This loop sleeps for a set interval, and upon waking, it uses the Python requests library to fetch the current time from an internet time server. While this isn't a practical application, my goal is to understand how to call external Python libraries from Rust.</p>
<p>My ultimate goal is to see if I can integrate a Python BACnet stack into a Rust application.</p>
<p>My Setup</p>
<ul>
<li>main.rs</li>
</ul>
<pre><code>use pyo3::prelude::*;
use pyo3::types::IntoPyDict;
use std::thread;
use std::time::Duration;
fn main() -> PyResult<()> {
// Safely acquire the GIL and run the Python code
Python::with_gil(|py| {
// Print the version of Python being used
py.run("import sys; print('Python version:', sys.version)", None, None)?;
// Import the requests library in Python
let requests = py.import("requests")?;
loop {
// Sleep for 10 seconds
thread::sleep(Duration::from_secs(10));
// Execute the Python code to get the current time from a time server
let locals = [("requests", requests)].into_py_dict(py);
let time_response: String = py.eval(
r#"
import requests
response = requests.get('http://worldtimeapi.org/api/timezone/Etc/UTC')
response.json()['datetime']
"#,
None,
Some(locals)
)?.extract()?;
// Print the time received from the server
println!("Current UTC Time: {}", time_response);
}
Ok(())
})
}
</code></pre>
<ul>
<li>Cargo.toml</li>
</ul>
<pre><code>[package]
name = "rust_python_time_fetcher"
version = "0.1.0"
edition = "2021"
[dependencies]
pyo3 = { version = "0.21.2", features = ["extension-module"] }
[build-dependencies]
pyo3-build-config = "0.21.2"
</code></pre>
<ul>
<li>build_with_python.sh</li>
</ul>
<pre><code>#!/bin/bash
# Activate the virtual environment
source env/bin/activate
# Get the path to the Python interpreter
PYTHON=$(which python3)
# Get the Python version
PYTHON_VERSION=$($PYTHON -c "import sys; print(f'{sys.version_info.major}.{sys.version_info.minor}')")
# Set the path to the Python interpreter
export PYO3_PYTHON="$PYTHON"
# Set the paths for Python libraries and include files
export LD_LIBRARY_PATH="$($PYTHON -c "import sysconfig; print(sysconfig.get_config_var('LIBDIR'))"):$LD_LIBRARY_PATH"
export LIBRARY_PATH="$($PYTHON -c "import sysconfig; print(sysconfig.get_config_var('LIBDIR'))"):$LIBRARY_PATH"
export PYTHONPATH="$($PYTHON -c "import site; print(site.getsitepackages()[0])"):$PYTHONPATH"
# Include and lib directories might vary based on how Python was installed or the distro specifics
export CFLAGS="$($PYTHON -c "import sysconfig; print('-I' + sysconfig.get_paths()['include'])")"
export LDFLAGS="$($PYTHON -c "import sysconfig; print('-L' + sysconfig.get_config_var('LIBDIR'))")"
# Now try running Cargo build again
cargo build
# Print Python version and path
$PYTHON --version
which $PYTHON
# Print Python includes and libs
$PYTHON-config --includes
$PYTHON-config --libs
</code></pre>
<p><strong>The problem</strong></p>
<p>When I try to build the project with cargo build, I encounter the following error:</p>
<pre><code>> cargo build
Compiling pyo3-build-config v0.21.2
error: failed to run custom build command for `pyo3-build-config v0.21.2`
Caused by:
process didn't exit successfully: `C:\Users\bbartling\Desktop\rust_python_time\target\debug\build\pyo3-build-config-6b34c9835096c15d\build-script-build` (exit code: 1)
--- stdout
cargo:rerun-if-env-changed=PYO3_CONFIG_FILE
cargo:rerun-if-env-changed=PYO3_NO_PYTHON
cargo:rerun-if-env-changed=PYO3_ENVIRONMENT_SIGNATURE
cargo:rerun-if-env-changed=PYO3_PYTHON
cargo:rerun-if-env-changed=VIRTUAL_ENV
cargo:rerun-if-env-changed=CONDA_PREFIX
cargo:rerun-if-env-changed=PATH
--- stderr
error: no Python 3.x interpreter found
</code></pre>
<p><strong>What I've tried</strong></p>
<p>I created a <code>build_with_python.sh</code> script, run on a Raspberry Pi, to set up the environment correctly, including activating a virtual environment and setting the necessary paths for Python libraries and include files. However, I'm still facing the same error.</p>
<p><strong>Questions</strong></p>
<ul>
<li>How can I correctly configure my Rust project to recognize and use the Python interpreter and libraries from my virtual environment?</li>
<li>Are there any additional steps I need to take to ensure that the pyo3 crate can find and use the Python interpreter?</li>
</ul>
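For anyone comparing notes, one commonly reported cause of this exact symptom, stated as an assumption about this setup rather than a definitive diagnosis: PyO3's <code>extension-module</code> feature is meant for building Python extension modules and turns off linking against <code>libpython</code>, which embedding (a Rust binary that calls into Python) requires. A sketch of a <code>Cargo.toml</code> for the embedding case, using the documented <code>auto-initialize</code> feature so the interpreter is started on first use:

```toml
[package]
name = "rust_python_time_fetcher"
version = "0.1.0"
edition = "2021"

[dependencies]
# No "extension-module": that feature is for building Python extension
# modules and omits the libpython linkage that embedding requires.
# "auto-initialize" initializes the interpreter on the first Python::with_gil.
pyo3 = { version = "0.21.2", features = ["auto-initialize"] }
```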
|
<python><rust>
|
2024-05-18 19:12:30
| 0
| 3,564
|
bbartling
|
78,500,589
| 9,642,042
|
Is there a machines module in Anaconda Spyder or an alternative?
|
<p>I am creating a new class and saved the code to a .py file.
I read that I could import that class from such a file using the machines module.
I am using Anaconda Spyder and it seems there is no such module. Moreover, the author of the article where I read this seems to be using 'MicroPython'.</p>
<p>see code below. Notice that 'some_tests' is the name of my .py file containing the class code - 'test'</p>
<p>thank you.</p>
<pre><code>from machines.some_tests import test
</code></pre>
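A hedged aside that may explain the confusion: <code>machine</code> (singular) is MicroPython's hardware-access module, not a general import mechanism, so on desktop CPython (which Anaconda/Spyder uses) a plain import is enough. The sketch below creates a throwaway <code>some_tests.py</code> in a temp folder purely so it is self-contained; the class body is hypothetical:

```python
import pathlib
import sys
import tempfile

# Create a stand-in for the user's some_tests.py in a temp folder
root = pathlib.Path(tempfile.mkdtemp())
(root / "some_tests.py").write_text(
    "class test:\n"
    "    def ping(self):\n"
    "        return 'pong'\n"
)

sys.path.append(str(root))   # make the folder importable
from some_tests import test  # no 'machines.' prefix needed on CPython
print(test().ping())
```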
|
<python><anaconda><spyder>
|
2024-05-18 18:08:10
| 1
| 301
|
Eyal Marom
|
78,500,471
| 5,024,699
|
Reading small files is too slow
|
<p>I have 3 million small JPEG images (256x256 resolution). When I read them in Python using the PIL library (with the PIL.Image.open method), the reading time is highly variable: it can be as low as 1 ms and as high as 500 ms, with an average of about 20 ms.
After I read a set of files, rereading them is very fast for all files (about 1 ms), probably because of filesystem caching.</p>
<p>How can I read these files with the best reading time corresponding to my HDD specs ?</p>
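One mitigation that is often tried for exactly this pattern, offered as a sketch rather than a guaranteed fix for HDD seek latency: issue many reads concurrently so the OS and disk scheduler can reorder the seeks. The generated files below are tiny stand-ins for the real JPEGs, and in practice <code>PIL.Image.open</code> would replace the raw byte read:

```python
import concurrent.futures
import pathlib
import tempfile

# Create a few stand-in "jpeg" files (just the 2-byte JPEG magic + padding)
root = pathlib.Path(tempfile.mkdtemp())
paths = []
for i in range(8):
    p = root / f"img_{i}.jpg"
    p.write_bytes(b"\xff\xd8" + bytes(64))
    paths.append(p)

def read_file(p):
    return p.read_bytes()  # PIL.Image.open(p) would go here instead

# Many in-flight requests let the OS batch and reorder the seeks
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as ex:
    blobs = list(ex.map(read_file, paths))

print(len(blobs))
```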
|
<python><image><performance><python-imaging-library><filesystems>
|
2024-05-18 17:15:11
| 0
| 1,538
|
Rodolphe LAMPE
|
78,500,243
| 7,123,797
|
Lexeme evaluation in Python
|
<p><a href="https://en.wikipedia.org/wiki/Lexical_analysis" rel="nofollow noreferrer">Wikipedia</a> makes a clear distinction between the concept of a lexeme and the concept of a token:</p>
<blockquote>
<p>Lexing can be divided into two stages: the scanning, which segments the input string into syntactic units called lexemes and categorizes these into token classes; and the evaluating, which converts lexemes into processed values.</p>
</blockquote>
<blockquote>
<p>A lexeme, however, is only a string of characters known to be of a certain kind (e.g., a string literal, a sequence of letters). In order to construct a token, the lexical analyzer needs a second stage, the evaluator, which goes over the characters of the lexeme to produce a value. The lexeme's type combined with its value is what properly constitutes a token, which can be given to a parser.</p>
</blockquote>
<p>As I understand it, this means that a token is the result of a mapping &lt;category, lexeme&gt; to &lt;category, value&gt;, where the category in the Python case belongs to the set {identifier, keyword, literal, operator, delimiter, NEWLINE, INDENT, DEDENT}.</p>
<p>I want to understand better what the evaluation of a lexeme means in the case of the lexeme of category <code>literal</code>. Can we see the result of such evaluation by simply typing the lexeme in the Python REPL?</p>
<p>For example, if I type the following string literal lexeme</p>
<pre><code>>>> '''some
... text'''
</code></pre>
<p>I get output <code>'some\n   text'</code> - can we call this string the value of the above lexeme (note the partial stripping of the quotes and insertion of the <code>\n</code> symbol)?</p>
<p>And if I type the following numeric literal lexeme</p>
<pre><code>>>> 0b101
</code></pre>
<p>I get output <code>5</code> - can we call this number the value of the above lexeme?</p>
|
<python><token>
|
2024-05-18 15:50:13
| 2
| 355
|
Rodvi
|
78,500,230
| 7,195,666
|
Elasticsearch bulk API: how do I know which error "belongs" to which document
|
<p>I want to use <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html</a></p>
<p>with <code>raise_on_error=False</code>.</p>
<p>When some of the documents given to the bulk operation succeed and some do not, how can I link the errors to the documents that I passed in as input?</p>
<p>I intend to use this via python elasticsearch library.</p>
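For anyone with the same question: the bulk response's <code>items</code> array contains one entry per action, in the same order as the request, so zipping inputs with items links each error back to its document (supplying your own <code>_id</code> makes the link explicit). The sketch below uses a mocked response; the action and error values are hypothetical, and the real call would be <code>elasticsearch.helpers.bulk(es, actions, raise_on_error=False)</code>:

```python
# Hypothetical bulk request/response pair
actions = [
    {"_id": "doc-1", "field": "ok"},
    {"_id": "doc-2", "field": None},
]
mock_response = {
    "errors": True,
    "items": [  # one entry per action, in request order
        {"index": {"_id": "doc-1", "status": 201}},
        {"index": {"_id": "doc-2", "status": 400,
                   "error": {"type": "mapper_parsing_exception"}}},
    ],
}

# Link each error to the document that produced it by position
failed = []
for action, item in zip(actions, mock_response["items"]):
    result = item["index"]
    if result["status"] >= 300:
        failed.append((action, result["error"]["type"]))

print(failed)
```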
|
<python><elasticsearch>
|
2024-05-18 15:45:36
| 1
| 2,271
|
Vulwsztyn
|
78,500,180
| 5,722,359
|
How to resolve these make install errors for Python 3.11.9?
|
<p>I am trying to configure and "make install" Python 3.11.9. However, it just quits without building a <code>lib</code> folder, and the <code>bin</code> folder is empty. This was done on Ubuntu 22.04, which comes with Python 3.10.12 installed. Below are just a few of the errors.</p>
<p>Commands run from the debug folder:</p>
<pre><code>../configure --prefix=/usr --with-ensurepip=install --enable-optimizations
make -j$(nproc) DESTDIR=/home/user/project/myapp.AppDir install
</code></pre>
<p>Error 1: A sample. There are many and they all appear red in color in PyCharm.</p>
<pre><code>/usr/bin/ld: Python/errors.o: in function `_PyErr_Fetch':
/home/user/project/src/Python-3.11.9/debug/../Python/errors.c:428: undefined reference to `__gcov_indirect_call'
/usr/bin/ld: /home/user/project/src/Python-3.11.9/debug/../Python/errors.c:428: undefined reference to `__gcov_indirect_call_profiler_v4'
/usr/bin/ld: /home/user/project/src/Python-3.11.9/debug/../Python/errors.c:428: undefined reference to `__gcov_time_profiler_counter'
/usr/bin/ld: /home/user/project/src/Python-3.11.9/debug/../Python/errors.c:428: undefined reference to `__gcov_time_profiler_counter'
/usr/bin/ld: Python/errors.o: in function `PyErr_GivenExceptionMatches':
/home/user/project/src/Python-3.11.9/debug/../Python/errors.c:255: undefined reference to `__gcov_indirect_call'
/usr/bin/ld: /home/user/project/src/Python-3.11.9/debug/../Python/errors.c:255: undefined reference to `__gcov_indirect_call_profiler_v4'
/usr/bin/ld: /home/user/project/src/Python-3.11.9/debug/../Python/errors.c:255: undefined reference to `__gcov_time_profiler_counter'
/usr/bin/ld: /home/user/project/src/Python-3.11.9/debug/../Python/errors.c:255: undefined reference to `__gcov_time_profiler_counter'
/usr/bin/ld: Python/errors.o: in function `PyErr_ExceptionMatches':
/home/user/project/src/Python-3.11.9/debug/../Python/errors.c:294: undefined reference to `__gcov_indirect_call'
/usr/bin/ld: /home/user/project/src/Python-3.11.9/debug/../Python/errors.c:294: undefined reference to `__gcov_indirect_call_profiler_v4'
/usr/bin/ld: /home/user/project/src/Python-3.11.9/debug/../Python/errors.c:294: undefined reference to `__gcov_time_profiler_counter'
/usr/bin/ld: /home/user/project/src/Python-3.11.9/debug/../Python/errors.c:294: undefined reference to `__gcov_time_profiler_counter'
</code></pre>
<p>Error 2:</p>
<pre><code>/usr/bin/ld: warning: creating DT_TEXTREL in a PIE
collect2: error: ld returned 1 exit status
make: *** [Makefile:1203: Programs/_freeze_module] Error 1
</code></pre>
<p>Error 3: This was the last message that appeared before termination.</p>
<pre><code># This is an expensive target to build and it does not have proper
# makefile dependency information. So, we create a "stamp" file
# to record its completion and avoid re-running it.
</code></pre>
|
<python><ubuntu-22.04>
|
2024-05-18 15:25:21
| 1
| 8,499
|
Sun Bear
|
78,500,158
| 11,648,332
|
How to Implement Cell Execution Control and JavaScript Alerts in Azure Databricks Notebooks?
|
<p>I am trying to replicate some functionalities in Azure Databricks notebooks that I previously used in Jupyter notebooks, specifically related to controlling the visibility of notebook cells, showing JavaScript alerts, and executing cells within the notebook. Below are three code snippets that work perfectly in Jupyter but not in Databricks:</p>
<p>Example 1:</p>
<pre><code>from IPython.display import HTML, Javascript, display as IPyDisplay, clear_output
HTML('''
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script>
function code_toggle(){
if (code_shown){
$('div.input').hide('500');
$('#toggleButton').val('Show Code')
}
else {
$('div.input').show('500');
$('#toggleButton').val('Hide Code')
}
code_shown = !code_shown
}
$(document).ready(function(){
code_shown=false;
$('div.input').hide()
});
</script>
<form action="javascript:code_toggle()">
<input type="submit" id="toggleButton" value="Show Code">
</form>
''')
</code></pre>
<p>Example 2:</p>
<pre><code>from IPython.display import display, Javascript
display(Javascript("""
alert('Hello, this is a JavaScript alert!');
"""))
</code></pre>
<p>Example 3:</p>
<pre><code>display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1, IPython.notebook.ncells())'))
</code></pre>
<p>These snippets help in toggling the visibility of code cells and in displaying JavaScript alerts in Jupyter. However, when I try to implement similar functionalities in Azure Databricks, they do not work because Databricks doesn't seem to support the same level of integration with JavaScript and HTML that Jupyter does.</p>
<p><strong>Question:</strong>
Is there any alternative or method to implement such functionalities in Azure Databricks? Specifically, I am looking for a way to:</p>
<ul>
<li>Control the visibility of cells in a Databricks notebook.</li>
<li>Execute cells by cell_id within the same notebook.</li>
<li>Trigger JavaScript alerts or other client-side JavaScript functionalities within the Databricks environment.</li>
</ul>
<p>Any help or guidance on how to achieve this in Azure Databricks would be greatly appreciated!</p>
|
<javascript><python><jupyter-notebook><azure-databricks>
|
2024-05-18 15:16:09
| 1
| 447
|
9879ypxkj
|
78,500,113
| 6,999,569
|
Generating character ranges in python
|
<p>I want to "translate" integer values in the range from 0 to 26 to characters 'A' to 'Z', but that seems harder in python than in C++, where it is allowed to add integer values to char variables.</p>
<p>The other approach, generating an array of consecutive characters from a range, is apparently also not possible;<br />
I know that I could create an "ABC...XYZ" string for that purpose, but I don't want to do that.</p>
<p><strong>Question.</strong><br />
how can character ranges à la</p>
<pre><code>ABC = [c for c in range('A','Z')]+['Z']
</code></pre>
<p>be generated, resp. emulated in python?</p>
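A sketch of the usual CPython idioms, assuming plain ASCII letters: <code>ord</code>/<code>chr</code> convert between characters and code points (playing the role of C++'s char arithmetic), and <code>string.ascii_uppercase</code> already holds the full range:

```python
import string

# range() only accepts integers, so map characters through ord()/chr()
ABC = [chr(c) for c in range(ord('A'), ord('Z') + 1)]
print(ABC[0], ABC[-1], len(ABC))            # A Z 26

# Translating 0..25 to 'A'..'Z' directly:
print(chr(ord('A') + 3))                    # D

# The precomputed constant, without writing "ABC...XYZ" by hand:
print(list(string.ascii_uppercase) == ABC)  # True
```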
|
<python>
|
2024-05-18 15:00:01
| 1
| 846
|
Manfred Weis
|
78,500,055
| 1,967,110
|
VSCode shows imported function as not defined, but the script runs correctly
|
<p>I'm trying to import functions located in a Python file outside of my Visual Studio Code workspace. Let's say my structure is:</p>
<pre><code>folder/
├── workspace/
│   └── main.py
│
└── python_scripts/
    └── utils
        ├── __init__.py
        └── functions.py
</code></pre>
<p>And let's say from main.py I want to be able to call the list_files() function present in functions.py (of course I could put this file in my current workspace, but these are utility functions I want to share across many different projects). After a lot of searching and tweaking, I was able to make it work using:</p>
<pre><code>#main.py
sys.path.append('C:\\folder\\python_scripts')
from utils import *
list_files()
</code></pre>
<p>This script runs without any issues (inside <code>__init__.py</code> there is also some code to add all functions from functions.py to the global namespace). However, Visual Studio Code underlines the function <code>list_files()</code> with a warning saying "list_files" is not defined.</p>
<p>Any idea why, and does it mean I messed up (again!) with the import process, even though the script runs?</p>
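For comparison, a runnable sketch of the explicit-import variant; the package below is a temporary stand-in for <code>python_scripts</code>, so the names are hypothetical. At runtime both the wildcard and the explicit form work; the editor-side warning is a separate issue, and Pylance typically needs the folder listed in <code>python.analysis.extraPaths</code> and resolves explicitly imported names more reliably than <code>import *</code> (an assumption about the Pylance setup, not something the snippet can demonstrate):

```python
import pathlib
import sys
import tempfile

# Temporary stand-in for C:\folder\python_scripts (hypothetical layout)
root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "myutils"
pkg.mkdir()
(pkg / "__init__.py").write_text("from .functions import list_files\n")
(pkg / "functions.py").write_text(
    "def list_files():\n    return ['a.txt', 'b.txt']\n"
)

sys.path.append(str(root))
from myutils import list_files  # explicit name: static analysis can resolve this
print(list_files())
```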
|
<python><visual-studio-code><pylance>
|
2024-05-18 14:36:43
| 1
| 889
|
Sulli
|