QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,000,244 | 5,067,173 | Spinnaker: Not Enough available memory to allocate buffer for streaming | <p>Because of this error <code>Spinnaker: Not Enough available memory to allocate buffer for streaming</code> when trying to give the command to start the camera, I made the following code to try to deallocate the camera:</p>
<pre><code> system = PySpin.System.GetInstance()
cont = 10
while cont > 0:
try:
cam_list = system.GetCameras()
cam = cam_list.GetByIndex(0)
print(">1")
cam.Init() #ERRO
print(">2")
break
except Exception as e:
print("e: ", str(e))
if cam.IsInitialized(): # não entra nesse if
cam.DeInit()
print('deInit')
del cam
del cam_list
cont -= 1
if cont > 0:
print("** Iniciou a camera **\n")
else:
print("** Não iniciou a camera **\n")
return
</code></pre>
<p>But they go through the 10 attempts to clean the camera and then it does not start, I tried to increase it to 100 and then sometimes it passes, but there is no pattern, there is also no pattern for when this error occurs, on Friday we managed to do more than 60 photos with camera, then today when we run the second photo this error appeared.</p>
<p>Can anyone give us a light on how to solve it in a definitive way? I start to think it's hardware, some camera problem or something, but I don't know how to be sure it's hardware.</p>
| <python><iot><spinnaker-cam> | 2023-08-29 12:33:23 | 1 | 789 | Marcius Leandro |
77,000,233 | 15,140,144 | Annotations for types.FunctionType | <p>I need some sort of annotation that works similarly to <code>typing.Callable[[args, ..], ret]</code>, but is more strict regarding allowed input. <code>Callable</code> just requires that an instance has a proper <code>__call__()</code> interface (see <a href="https://github.com/python/cpython/blob/cc12c965af16550bb1af32e0485217ae6d718218/Lib/_collections_abc.py#L508" rel="nofollow noreferrer">this</a>). I need a type hint that would require a <code>types.FunctionType</code> with a proper <code>__call__()</code> interface. Sadly there is no such thing like <code>typing.FunctionType</code>.</p>
<p>What have I tried?</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Generic
from types import FunctionType
_T1 = TypeVar("_T1")
_T2 = TypeVar("_T2")
class AnnotatedFunctionType(FunctionType, Generic[_T1, _T2]):
def __call__(self, *args: _T1) -> _T2:
pass
</code></pre>
<p>Fails (of course!) with an error:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: type 'function' is not an acceptable base type
</code></pre>
<pre class="lang-py prettyprint-override"><code>from typing import Union, Callable
from types import FunctionType
AnnotatedFunctionType = Union[Callable, FunctionType]
</code></pre>
<p>Too broad, can't use <code>AnnotatedFunctionType[[arg1, arg2, ...], ret]</code>.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Union, Callable, TypeVar, ParamSpec
from types import FunctionType
_T = TypeVar("_T")
_P = ParamSpec("_P")
AnnotatedFunctionType = Union[Callable[_P, _T], FunctionType]
</code></pre>
<p>It works, but it isn't strict. This allows for any function or a callable with a specified structure. Also it doesn't work with multiple arguments when scanning with mypy.</p>
<p>My guess is that in order to do what I want, I would need to write a Protocol class with every function attribute, that I need. It does allow types that are different than <code>FunctionType</code>, but it would work. If <a href="https://github.com/python/typing/issues/213" rel="nofollow noreferrer">python/typing#213</a> was ever finalized, I guess that I could to something like <code>Intersection[FunctionTypeProtocol[_T1, _T2], FunctionType]</code> or <code>FunctionTypeProtocol[_T1, T2] & FunctionType</code>.</p>
<p>What approach would accomplish what I want (and it would work static type analysis tools like mypy)? It's fine if it requires some external package.</p>
| <python><mypy><python-typing> | 2023-08-29 12:31:39 | 0 | 316 | oBrstisf8o |
77,000,194 | 11,220,141 | How to load data from PostgreSQL in chunk with psycopg2 | <p>I want to load batches from table iteratively and save each batch in .parquet format.
The problem is I don't understand how can I do it with psycopg2.</p>
<pre><code>conn = psycopg2.connect(dbname=dbname, user=user, password=password, host=host, port=port)
cursor = conn.cursor()
cursor.execute(query)
columns = [column[0] for column in cursor.description]
records = cursor.fetchmany(size=5)
pd.DataFrame(data=records, columns=columns).to_parquet(...)
</code></pre>
<p>Code above select more than 5 rows.</p>
<p>I want to do something like:</p>
<pre><code> conn = psycopg2.connect(dbname=dbname, user=user, password=password, host=host, port=port)
cursor = conn.cursor()
cursor.execute(query)
columns = [column[0] for column in cursor.description]
records = cursor.fetchmany(size=5) #iterator with batches
for batch in records:
pd.DataFrame(data=records, columns=columns).to_parquet(...)
</code></pre>
<p>Thanks a lot in advance for any help</p>
| <python><pandas><psycopg2><parquet> | 2023-08-29 12:25:12 | 2 | 365 | Любовь Пономарева |
77,000,129 | 587,587 | "No module named" when unpickling data in Python 3.5 which was pickled using 2.5 | <p>I have the following three files:</p>
<p>test_class.py:</p>
<pre><code>class TestClass:
def __init__(self):
self.a = 13
</code></pre>
<p>write.py:</p>
<pre><code>import sys
import pickle
import test_class
o = test_class.TestClass()
pickle.dump(o, sys.stdout)
</code></pre>
<p>read.py:</p>
<pre><code>import sys
import pickle
import test_class
o = pickle.load(sys.stdin.buffer)
print(o)
</code></pre>
<p>Running read.py using Python 2.5 and piping the result to read.py using 3.7 produces the following error:</p>
<blockquote>
<p>c:\Python25\python.exe write.py | C:\Python37\python.exe read.py
Traceback (most recent call last):
File "read.py", line 6, in
o = pickle.load(sys.stdin.buffer)
ModuleNotFoundError: No module named 'test_class\r'</p>
</blockquote>
<p>My question is: how do you unpickle 2.5 pickles correctly in 3.7?</p>
| <python><pickle> | 2023-08-29 12:17:12 | 0 | 492 | Anton Lahti |
77,000,105 | 607,846 | Different error handling for different endpoints | <p>I have error handlers in my app. I add these using connexion's add_error_handler which is calling flask's register_error_handler. I wish to restructure the error data that is returned by my endpoints in this handler. However, since I have so many unit tests reliant on the old structure, I wish to implement the new error structure for a subset of endpoints first. I believe I can do this as follows:</p>
<pre><code>from flask import request
new_endpoints = ("/new_endpoint",)
def is_new_endpoint():
return request.path in new_endpoints
def my_error_handler(e):
if is_new_endpoint():
return FlaskApi.response(new_error_response(e))
else:
return FlaskApi.response(old_error_response(e))
</code></pre>
<p>Is there another approach to doing this? The problem I have is that I believe that the is_new_endpoint function might get messy.</p>
<p>I define my endpoints in a yaml file, and for each endpoint, I have an operationId which specifies the python function associated with the endpoint. Maybe I could decorate these functions to define them as new and have this information available in the error handler. Could this be a possible alternative approach? Could I use flask.g for this?</p>
| <python><flask><connexion> | 2023-08-29 12:13:45 | 1 | 13,283 | Baz |
77,000,057 | 5,873,325 | How to fix version mismatch between ChromeDriver & Google Chrome in Dockerfile | <p>I am new to Docker. I have used it to create an image of my project. When I try to build the image using a Dockerfile in which I install the needed libraries as well as Google Chrome and ChromeDriver I get an error message indicating a mismatch between ChromeDriver & Google Chrome.</p>
<p>Here's a portion of the Dockerfile :</p>
<pre><code># install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
</code></pre>
<p>Once the docker image is launched, whenever I try to execute a script of my project I get this error message :</p>
<pre><code>> selenium.common.exceptions.SessionNotCreatedException: Message:
> session not created: This version of ChromeDriver only supports Chrome
> version 114 Current browser version is 116.0.5845.110 with binary path
> /opt/google/chrome/google-chrome Stacktrace:
> #0 0x56061f6b04e3 <unknown>
> #1 0x56061f3dfc76 <unknown>
> #2 0x56061f40d04a <unknown>
> #3 0x56061f4084a1 <unknown>
> #4 0x56061f405029 <unknown>
> #5 0x56061f443ccc <unknown>
> #6 0x56061f44347f <unknown>
> #7 0x56061f43ade3 <unknown>
> #8 0x56061f4102dd <unknown>
> #9 0x56061f41134e <unknown>
> #10 0x56061f6703e4 <unknown>
> #11 0x56061f6743d7 <unknown>
> #12 0x56061f67eb20 <unknown>
> #13 0x56061f675023 <unknown>
> #14 0x56061f6431aa <unknown>
> #15 0x56061f6996b8 <unknown>
> #16 0x56061f699847 <unknown>
> #17 0x56061f6a9243 <unknown>
> #18 0x7f72cb9de044 <unknown>
</code></pre>
<p>Here's how I initialize chrome in my code:</p>
<pre><code>from selenium import webdriver
....
options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome(options=options)
</code></pre>
<p>Now I know that <a href="http://chromedriver.storage.googleapis.com" rel="nofollow noreferrer">http://chromedriver.storage.googleapis.com</a> is no longer supported but how can I modify the Dockerfile to make sure that I always get compatible Chrome & ChromeDriver versions ?</p>
| <python><docker><google-chrome><selenium-webdriver><selenium-chromedriver> | 2023-08-29 12:07:05 | 1 | 640 | Mejdi Dallel |
76,999,920 | 6,645,564 | How do you add a Scatter trace to a plotly figure that consists only of markers and that connects the markers of two other Scatter traces? | <p>I am currently trying set up a plot that consists of multiple traces, one "base" trace, and then another trace that is situated on top of the base trace. Each of these traces consist only of markers.</p>
<p>However, what I would like to do is connect the markers together using a line</p>
<p>Here is a very basic reconstruction of the code I am currently using:</p>
<pre><code>url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
iris_df = pd.read_csv(url,header=None,encoding='utf-8')
iris_df.columns = ['sepal length','sepal width','petal length', 'petal width', 'class']
test_fig = go.Figure()
class_colors = {'Iris-setosa':'blue', 'Iris-versicolor':'green', 'Iris-virginica':'purple'}
for group in iris_df['class'].unique():
iris_df_class = iris_df[iris_df['class']==group]
test_fig.add_trace(go.Scatter(x=iris_df_class['sepal length'],
y= iris_df_class['petal length'],
mode='markers',
marker=dict(size=10, color=class_colors[group], symbol=1),
name='trace1 '+group,
showlegend=True,
legendgroup=group))
test_fig.add_trace(go.Scatter(x=iris_df_class['sepal length'],
y= iris_df_class['petal length']+0.5,
mode='markers',
marker=dict(size=10, color='red', symbol=5),
name='trace2',
showlegend=False,
legendgroup=group))
test_fig.add_trace(go.Scatter(x=iris_df_class['sepal length'],
y= iris_df_class['petal length']+0.2,
mode='markers',
marker_line_width = 2,
marker=dict(size=20, color='black', symbol=42),
name='trace-intermediate',
showlegend=False,
legendgroup=group))
test_fig.show()
</code></pre>
<p>So for the trace named 'trace-intermediate', I have its marker symbol set to 42, which is just a line (as dictated by <a href="https://plotly.com/python/marker-style/#custom-marker-symbols" rel="nofollow noreferrer">https://plotly.com/python/marker-style/#custom-marker-symbols</a>). I want this line to connect the markers of trace1 and trace2. However, when I zoom in on one or more particular markers within the figure, the line marker of 'trace-intermediate' decreases in size proportionally and stops connecting its pertinent markers of the other traces. How can I amend this? I would prefer the markers to be connected with this line so that it can be included in the same legendgroup as the two other traces.</p>
| <python><plotly> | 2023-08-29 11:47:56 | 1 | 924 | Bob McBobson |
76,999,653 | 1,143,539 | Bug in scipy PPoly.root | <p>Here is a MWE</p>
<pre><code>import scipy
p = scipy.interpolate.PPoly(x=[1.426618795448907, 3.6900413078920606],
c=[[-0.0003473223894645246],
[0.031172356603618657],
[0.14183014380119674],
[-0.4766926173965321]])
p(p.x), p.roots(extrapolate=False)
</code></pre>
<p>The result is:</p>
<pre><code>(array([-4.76692617e-01, 6.07153217e-18]), array([], dtype=float64))
</code></pre>
<p>So at <code>x[0]</code>, y is negative, at <code>x[1]</code>, y is positive (just barely) so <strong>a root must exist</strong> within the interval, but <code>roots</code> can't find it.
If I set <code>extrapolate=True</code>, three roots are found, including the correct one.</p>
<p>Is this a bug, or am I making some error here?</p>
| <python><scipy><polynomials> | 2023-08-29 11:11:13 | 1 | 1,449 | Roobie Nuby |
76,999,414 | 8,414,030 | pythonic way to flatten or expand a dict to list of dicts in python | <p>I have a dictionary whose values are list-objects.</p>
<pre class="lang-py prettyprint-override"><code>{'a': [1, 2, 3], 'b': [4, 5, 6]}
</code></pre>
<p>I want to <strong>expand</strong> (for lack of a right word) the dict into the following:</p>
<pre class="lang-py prettyprint-override"><code> [{'a': 1, 'b': 4}, {'a': 2, 'b': 5}, {'a': 3, 'b': 6}]
</code></pre>
<p>I have a solution that looks a little non-Pythonic, ugly and hackish to me.</p>
<pre class="lang-py prettyprint-override"><code>def flatten_dict(d: dict) -> list[dict]:
"""
get [{'a': 1, 'b': 4}, {'a': 2, 'b': 5}] from {'a': [1, 2], 'b': [4, 5]}
:param d: dict object
:return: list of dicts
"""
values = list(d.values())
val_zip = list(zip(*values))
keys = [list(d.keys())] * len(values[0])
bonded_vals = list(zip(keys, val_zip))
return list(map(lambda p: dict(zip(p[0], p[1])), bonded_vals))
</code></pre>
<p>Please help/direct me to a suitable Python feature that does something similar and improves the code readability.</p>
| <python><python-3.x><dictionary> | 2023-08-29 10:35:31 | 2 | 791 | inquilabee |
76,999,307 | 5,751,211 | How to make an API call for IBM Power instance to stop and start | <p>I am working on using IBM Power Cloud Service for creating AIX instance. Most of the APIs work like Get instance, delete instance but the API to modify action fails.</p>
<p>Here is the API Doc :
<a href="https://cloud.ibm.com/apidocs/power-cloud#pcloud-pvminstances-action-post" rel="nofollow noreferrer">https://cloud.ibm.com/apidocs/power-cloud#pcloud-pvminstances-action-post</a></p>
<p><strong>It uses cloud IP in the URL . What does that denote?</strong></p>
<p>This is the python code :</p>
<pre><code>import requests
from ibm_cloud_sdk_core import DetailedResponse
class IBMPowerCloud:
def __init__(self, api_key):
self.api_key = api_key
self.token = None
self.headers = None
self.BASE_URL = "https://us-south.power-iaas.cloud.ibm.com"
self.CRN = "crn:v1:bluemix:public:power-iaas:us-south:a/XXX:aXXXX::"
def authenticate(self):
url = "https://iam.cloud.ibm.com/identity/token"
headers = {
"content-type": "application/x-www-form-urlencoded",
"accept": "application/json",
}
data = {
"grant_type": "urn:ibm:params:oauth:grant-type:apikey",
"apikey": self.api_key,
}
response = self._make_request("POST", url, data=data, headers=headers)
if response.status_code == 200:
self.token = response.json()
self.headers = {
"Authorization": f"Bearer {self.token['access_token']}",
"Content-Type": "application/json",
"CRN": self.CRN,
}
else:
raise Exception(f"Authentication Error: {response.status_code}")
def _make_request(self, method, url, data=None, headers=None):
try:
response = requests.request(method, url, data=data, headers=headers)
print(data)
print(method)
response.raise_for_status()
return response
except requests.exceptions.RequestException as e:
raise Exception(f"API Error: {e}")
def get_pvm_instance(self, instance_id) -> DetailedResponse:
if not self.token or not self.headers:
self.authenticate()
cloud_instance_id, pvm_instance_id = instance_id.split("/")
url = f"{self.BASE_URL}/pcloud/v1/cloud-instances/{cloud_instance_id}/pvm-instances/{pvm_instance_id}"
response = self._make_request("GET", url, headers=self.headers)
if response.status_code == 200:
result = response.json()
return DetailedResponse(
response=result,
headers=response.headers,
status_code=response.status_code,
)
else:
raise Exception(f"API Error: {response.status_code}")
def delete_pvm_instance(self, instance_id) -> DetailedResponse:
if not self.token or not self.headers:
self.authenticate()
cloud_instance_id, pvm_instance_id = instance_id.split("/")
url = f"{self.BASE_URL}/pcloud/v1/cloud-instances/{cloud_instance_id}/pvm-instances/{pvm_instance_id}"
response = self._make_request("DELETE", url, headers=self.headers)
if response.status_code == 200:
result = response.json()
return DetailedResponse(
response=result,
headers=response.headers,
status_code=response.status_code,
)
else:
raise Exception(f"API Error: {response.status_code}")
def _perform_instance_action(self, instance_id, action):
if not self.token or not self.headers:
self.authenticate()
data = {
"action":action,
}
print(data)
cloud_instance_id, pvm_instance_id = instance_id.split("/")
url = f"{self.BASE_URL}/pcloud/v1/cloud-instances/{cloud_instance_id}/pvm-instances/{pvm_instance_id}/action"
response = self._make_request("POST", url, data=data, headers=self.headers)
if response.status_code == 200:
return DetailedResponse(
response=response.json(),
headers=response.headers,
status_code=response.status_code,
)
else:
raise Exception(f"API Error: {response.status_code}")
def start_pvm_instance(self, instance_id) -> DetailedResponse:
return self._perform_instance_action(instance_id, "start")
def stop_pvm_instance(self, instance_id) -> DetailedResponse:
return self._perform_instance_action(instance_id, "stop")
</code></pre>
| <python><cloud><ibm-cloud><aix> | 2023-08-29 10:17:15 | 0 | 1,050 | Deepali Mittal |
76,999,277 | 1,474,327 | python struct disalignment windows - linux | <p>I have a working service, written in Python 3.11, that communicates with PLCs using standard sockets and ctypes for packaging / unpackaging data. This app can work both on Windows or Linux, no issue (both 64 bit).</p>
<p>For testing purposes, I've an additional set of scripts that can simulate different conditions with the PLC and other subsystems. One of them uses python struct to <code>pack</code> and <code>unpack</code> data, following the same protocols as the real PLC.</p>
<p>One of the scripts works fine in Windows, not an issue. But in Linux I find a huge difference in size of the buffer.</p>
<p>This is the buffer expected to be received, 5232 bytes including padding. The <code>@</code> forces padding when required.</p>
<pre class="lang-py prettyprint-override"><code>bufftype = f"@I29sLIL29sH{4*320}L4LhhIbbi"
</code></pre>
<p>(The 4 * 320 + L is intentional, it's a matrix set of 4 longs x 320 rows)</p>
<p>However, in Linux (Manjaro) it's failing, expecting 10384 bytes to be received, instead of 5232 bytes. If I change from <code>@</code> to standard packing with <code>=</code>, or force little endian with <code><</code>, it will expect 5226 bytes, which is the standard size without padding for this structure.</p>
<p>I can't find an explanation other than, in Linux, unsigned long (L) will consume 8 bytes (instead of 4, as stated by the official doc), so 4 * 320 * 8 will fill up all that space. But then, how can I force the field type to be one that would only use 4 bytes? Any way to set this up in the interpreter so I can have a fixed behavior on this?</p>
| <python><linux><struct><byte> | 2023-08-29 10:12:42 | 1 | 727 | Alberto |
76,999,170 | 21,787,377 | Django:- The QR code is being generated successfully, but it is not being saved to the database | <p>I'm using <a href="https://pypi.org/project/qrcode/" rel="nofollow noreferrer"><code>qrcode</code></a>, a Python library that enables you to generate QR codes. I want to use this Python library inside my Django signals to generate a QR code for every newly created user. However, when the user is created, it doesn't create an <code>Account</code> model object, and the QR code is generated inside my project directory, instead of static folder. How can I solve this problem.</p>
<p>signals.py:</p>
<pre><code>@receiver(post_save, sender=User)
def create_account(sender, instance, created, **kwargs):
if created:
random_number = ''.join(random.choices(string.digits, k=20))
img = qrcode.make('cullize')
img.save('default.png')
Account.objects.create(
user=instance, qr_code=img,
account_id=random_number
)
</code></pre>
<pre><code>class Account(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
qr_code = models.ImageField(upload_to='qrcode')
account_id = models.IntegerField()
</code></pre>
| <python><django><qr-code><django-signals> | 2023-08-29 09:57:54 | 3 | 305 | Adamu Abdulkarim Dee |
76,999,084 | 658,209 | Template to PDF conversion | <p>I am building a doctor portal application where there will be multiple doctors looking after patients. Each doctor will have their own letterhead for creating prescriptions and will have flexibility of updating and managing the prescription letterhead format.
My app will have python backend and react js frontend. When a doctor logs in to the portal they can add certain medication and generate a PDF prescription to be shared with patient. The PDF prescription should be on the letterhead which is for that specific doctor. Basically a doctor can create some sort of basic template with placeholders for patient name and prescription text, which will be populated by backend at the time of creating a PDF prescription.</p>
<p>My question:
What will be the best format for the template - I have explored docx and HTML. If I use docx then it will be easier for the doctor to update the template but its difficult to replace placeholders as python-docx library doesn't allow to read text in textboxes and also in order to replace the placeholder I need to know the structure of the doc like if there are tables. With HTML i feel it will be easier to replace placeholder but it will be difficult for doctor's to maintain the template as they can't edit HTML and then I will have to create a UI to allow them to edit.
Is there any other format that I can use for the template which is easy to maintain and I can replace the placeholders agnostically without having to know the document structure and then convert to PDF</p>
<p>For now I am thinking of storing the letterhead templates in google drive so that doctors have easy access</p>
| <python><python-3.x><pdf><pdf-generation><wkhtmltopdf> | 2023-08-29 09:45:58 | 1 | 1,364 | Prim |
76,999,055 | 9,074,190 | Install Opencv in Yocto | <p>I am new to Yocto project. I am using SAM9X60-EK as my target, and I want to add opencv to my image.
I noticed that there is <code>opencv</code> included with <code>meta-oe</code> but when I add <code>IMAGE_INSTALL:append = " opencv"</code> to my <code>core-image-sato.bbaappend</code> file it gives me error, so I decided to write a recipee to download and install from source distribution.</p>
<p>I have installed the required dependency as following</p>
<pre><code>IMAGE_INSTALL:append = " python3"
IMAGE_INSTALL:append = " python3-pip"
IMAGE_INSTALL:append = " python3-numpy"
IMAGE_INSTALL:append = " opencv"
</code></pre>
<p>here is my <code>opencv_4.8.0.bb</code></p>
<pre><code>
SRC_URI = "file://opencv-python-4.8.0.76.tar.gz"
S = "${WORKDIR}"
# Add Python as a dependency
DEPENDS += "python3 cmake python3-pip"
do_unpack() {
tar -xvf ${FILE_DIRNAME}/opencv-python-4.8.0.76.tar.gz -C ${S}
}
do_install() {
install -d ${D}${bindir}
cp -r ${S}/opencv-python-4.8.0.76/* ${D}${bindir}
# Run the setup.py script for installation
cd ${D}${bindir}
python3 setup.py install --root=${D}
}
PACKAGES =+ "opencv_python"
FILES_${PN} += "${bindir}/setup.py"
</code></pre>
<p>when I run <code>bitbake opencv</code> I get the following error</p>
<pre><code> DEBUG: Python function extend_recipe_sysroot finished
| DEBUG: Executing shell function do_install
| /usr/lib/python3/dist-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
| warnings.warn(
| Traceback (most recent call last):
| File "/home/snishuz/.local/lib/python3.10/site-packages/skbuild/setuptools_wrap.py", line 645, in setup
| cmkr = cmaker.CMaker(cmake_executable)
| File "/home/snishuz/.local/lib/python3.10/site-packages/skbuild/cmaker.py", line 148, in __init__
| self.cmake_version = get_cmake_version(self.cmake_executable)
| File "/home/snishuz/.local/lib/python3.10/site-packages/skbuild/cmaker.py", line 105, in get_cmake_version
| raise SKBuildError(msg) from err
|
| Problem with the CMake installation, aborting build. CMake executable is cmake
| WARNING: exit code 1 from a shell command.
</code></pre>
<p>Edit:</p>
<p>After @skandigraun comment I tried to use the recipee included in <code>meta-oe</code></p>
<p>I got the following <a href="https://pastebin.com/BdqQK9sC" rel="nofollow noreferrer">error</a></p>
| <python><opencv><cmake><yocto><yocto-recipe> | 2023-08-29 09:41:37 | 1 | 1,745 | Nishuthan S |
76,998,938 | 3,616,293 | Select closest element between 2 numpy arrays of differing sizes | <p>I have 2 numpy arrays of different sizes. One of them 'a' contains int values, while the other (larger) np array 'b' contains float values with (say) 3-4 values per element/value in 'a'.</p>
<pre><code>a = np.random.randint(low = 1, high = 100, size = (7))
a
# array([35, 11, 48, 20, 13, 31, 49])
b = np.array([34.78, 34.8, 35.1, 34.99, 11.3, 10.7, 11.289, 18.78, 19.1, 20.05, 12.32, 12.87, 13.5, 31.03, 31.15, 29.87, 48.1, 48.5, 49.2])
a.shape, b.shape
# ((7,), (19,))
</code></pre>
<p>The idea is to find the value in 'b' matching each unique value in 'a' in terms of closest distance, which I am computing using abs values. To do it using a single element of 'a', or using the first element of 'a':</p>
<pre><code># Compare first element of a with all elements of b-
np.abs(a[0] - b).argsort()
'''
array([ 3, 2, 1, 0, 14, 13, 15, 16, 17, 18, 9, 8, 7, 12, 11, 10, 4,
6, 5])
'''
# b[3]
# 34.99
# np.abs(a[0] - b).argsort()[0]
# 3
b[np.abs(a[0] - b).argsort()[0]]
# 34.99
</code></pre>
<p>So, the 4th element in 'b' (b[3]) is the closest match to a[0].</p>
<p>To compute this for all values in 'a', I use a loop as:</p>
<pre><code>for e in a:
idx = np.abs(e - b).argsort()
print(f"{e} has nearest match = {b[idx[0]]:.4f}")
'''
35 has nearest match = 34.9900
11 has nearest match = 11.2890
48 has nearest match = 48.1000
20 has nearest match = 20.0500
13 has nearest match = 12.8700
31 has nearest match = 31.0300
49 has nearest match = 49.2000
'''
</code></pre>
<p>How can I achieve this without the slow for loop?</p>
<p><strong>Note: a.shape = 1400 and b.shape = 1.5 million (approxmately)</strong></p>
| <python><arrays><numpy> | 2023-08-29 09:26:40 | 3 | 2,518 | Arun |
76,998,925 | 125,244 | How to (un)hide lines within a legendgroup and access its lines individually | <p>I created several lines with go.Scatter and grouped them with a legendgroup.
By clicking on the legendgroup I can hide / show all the lines.</p>
<p>What I would like to do as well is hiding / showing particular lines within a legendgroup.
Example 1 shows a legendgroup containing several lines that are initially hidden and some that are initially visible.
I would like to be able to change the visibility of all these lines individually but also be able to hide/show all lines (and their fillings) by just clicking once on the legendgroup.</p>
<pre><code>traceList.append(go.Scatter(x=datePrices, y=dfAreas.extendLow, mode='lines', name='extendLow',
line_color=LAVENDER, fill=None, visible="legendonly", legendgroup="AREA"))
traceList.append(go.Scatter(x=datePrices, y=dfAreas.AREAHAlow, mode='lines', name='AREALow',
line_color="rgb(255,0,255)", fill= None, fillcolor=AREA_FILL, legendgroup="AREA"))
traceList.append(go.Scatter(x=datePrices, y=dfAreas.AREAHAclose, mode='lines', name='AREA',
line_color=AREA, fill=None, fillcolor=AREA_FILL, line_width=3, legendgroup="AREA"))
traceList.append(go.Scatter(x=datePrices, y=dfAreas.AREAHAhigh, mode='lines', name='AREAHigh',
line_color=AREA_UP, fill="tonexty", fillcolor=AREA_FILL, legendgroup="AREA"))
traceList.append(go.Scatter(x=datePrices, y=dfAreas.extendHigh, mode='lines', name='extendHigh',
line_color=LAVENDER, fill="tonexty", visible="legendonly", legendgroup="AREA"))
</code></pre>
<p>Also I would like to group two legendgroups.
Example 2 shows a legendgroup containing two grouped lines for "Gains" and two grouped lines for "losses".
I can hide each of those two groups but I would also like to group these two groups making it possible to hide them with one click.</p>
<pre><code>traceList.append(go.Scatter(x=dfGainsOne.date, y=dfGainsOne.close, mode='lines', name='Gains', line_width=3,
line_color=colorGains, fill= None, legendgroup="Gains", showlegend=True))
traceList.append(go.Scatter(x=dfGainsTwo.date, y=dfGainsTwo.close, mode='lines', name='GainsTwo', line_width=3,
line_color=colorGains, fill= None, legendgroup="Gains", showlegend=False))
traceList.append(go.Scatter(x=dfLossesOne.date, y=dfLossesOne.close, mode='lines', name='Losses', line_width=3,
line_color=colorLosses, fill= None, legendgroup="Losses", showlegend=True))
traceList.append(go.Scatter(x=dfLossesTwo.date, y=dfLossesTwo.close, mode='lines', name='LossesTwo', line_width=3,
line_color=colorLosses, fill= None, legendgroup="Losses", showlegend=False))
</code></pre>
<p>If it would be adviced that it would be better to make this type of actions possible using some kind of popup window after clicking something that would be valuable to me as well if a link to an example could be provided.</p>
| <python><plotly><ggplotly> | 2023-08-29 09:24:51 | 0 | 1,110 | SoftwareTester |
76,998,652 | 6,195,489 | Get max and min of array axis, and then reshape | <p>I have a text file with lines defining several polygons as:</p>
<pre><code>x1,y1,x2,y2,x3,y3,x4,y4
x1,y1,x2,y2,x3,y3,x4,y4
.
.
.
etc
</code></pre>
<p>I would like to read them into a numpy array with one axis having the same shape as the number of polygons (ie. boxes) and the others like like:</p>
<pre><code>[x_min,y_min],[x_min,y_max],[x_max,y_max],[x_max,y_min]
</code></pre>
<p>where x_min is the minimum x coord, x_max is the max etc.</p>
<p>I can read in and get the max and mins with:</p>
<pre><code>boxes_x = np.loadtxt(label_path,usecols=np.arange(0,8,2),dtype=int,delimiter=",")
boxes_y = np.loadtxt(label_path,usecols=np.arange(1,8,2),dtype=int,delimiter=",")
x_max = boxes_x[:].max(axis=1)
y_max = boxes_y[:].max(axis=1)
x_min = boxes_x[:].min(axis=1)
y_min = boxes_y[:].min(axis=1)
</code></pre>
<p>But shaping them into the correct shape i.e [num_boxes,4,2] is proving difficult.</p>
<p>What is the best way to do this?</p>
| <python><numpy> | 2023-08-29 08:40:07 | 1 | 849 | abinitio |
76,998,569 | 2,016,632 | How to debug Javascript on Chrome if the Chrome Debugger keeps wiping the source clean? | <p>I'm using Flask to serve a template on Google Cloud Run and that templates has event handlers and scripts which call Ajax back to the Flask server. Lots of opportunities to screw up, but I had it all humming along until a few days ago when I did an upgrade :-(</p>
<p>I know that if I have served broken Javascript (e.g. "False" instead of "false") then it is not uncommon to get not just a console.log flagging the error but also find that the DevTools source tab has the lines are all blank. This is frustrating and the topic of many StackOverflow posts.</p>
<p>My case is slightly different, in that I have no errors flagged in the console, but also no source data. I can put lots of console.log() statements and "infer" that everything apparently working just fine. Except, no source.</p>
<p>Clicking on text of console.log does not bring up the source. Nor does putting "debugger" as the first line of the Javascript. Disabling Cache and refreshing browser do nothing. The behavior repeats if I run Flask on local server vs gunicorn on Cloud Run.</p>
<p>If I render the template to a file and serve that from a dummy Flask template then I see the source code in the debug screen (although no buttons, etc). The source code is about 10MByte and maany Javascript lines were 10,000's of characters per line, so I wrote code to "word-wrap" the HTML down to ~100 characters per line. That exercise was painful, and it didn't help either.</p>
<p>I'm thinking there must be some kind of header that Flask is sending on the "live" dashboard that encourages Chrome to delete the source and it doesn't send same header when I ask it to serve the html read from a file. Is there some documentation on how (and when) the Chrome DevTools gets its javascript from the server?</p>
<p>If I can create a minimal working example I will upload.</p>
<p>Thanks,</p>
| <javascript><python><flask><google-chrome-devtools> | 2023-08-29 08:26:58 | 1 | 619 | Tunneller |
76,998,342 | 7,465,516 | How to have a script open another script in a way that interacts well with other features of a JetBrains-IDE? | <p>I have a script that is basically just a small GUI to create a config-file for a long-running-program without any graphics. I have and control the source-code for both of these.</p>
<p>The usual workflow of the developer-users is to open that GUI using a simple python-run-configuration, tweak some settings,and click its 'run'-button to launch the script.</p>
<p>This works, but the 'run'-window closes and the process appears in a shell, losing all nice features of the 'run'-window such as clickable output and the debugger UI.</p>
<p>I have tried two methods of starting the process: <code>subprocess.Popen</code> and <code>os.execv</code>.
I have also tried to set the Pycharm-run-configuration-option <code>Execution>Emulate terminal in output console</code> to ticked and unticked.</p>
<p>I also have two alternatives that I like to avoid: use the integrated terminal (key-users don't like it, and a separate run-configuration for the main-program would be required) or make the script-launching a function call without any processes involved (requires some refactoring and care with a singleton-state)</p>
<p>Is there a non-hacky way to have a script like my GUI be usable from within pycharm?</p>
| <python><pycharm><subprocess><jetbrains-ide><execv> | 2023-08-29 07:52:41 | 1 | 2,196 | julaine |
76,998,258 | 5,800,969 | (#100) Missing permissions while access facebook ads data using marketing api using facebook-business sdk | <p>I created a facebook app with the required permission below to read all of my campaign ads data.</p>
<p><a href="https://i.sstatic.net/aKnPk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aKnPk.png" alt="enter image description here" /></a></p>
<p>Section of code:</p>
<pre><code>def get_facebook_campaign_data(app_id,app_secret,access_token,account_id,s_date,e_date):
# try:
FacebookAdsApi.init(app_id, app_secret, access_token)
account = AdAccount("act_"+account_id)
print(account)
# res = account.remote_read(fields=[
# AdAccount.Field.name,
# ])
# print(res)
# Print out account name.
campaigns = account.get_campaigns(fields=['id','name','account_id'])
print("Total number of Campaigns :",len(campaigns))
</code></pre>
<p>While calling the get_compaign method it gives the permission error below even I have attached required permission as shown in the screenshot.</p>
<p>facebook_business.exceptions.FacebookRequestError:</p>
<pre><code> Message: Call was not successful
Method: GET
Path: https://graph.facebook.com/v17.0/act_1107996893942456/campaigns
Params: {'fields': 'id,name,account_id', 'summary': 'true'}
Status: 400
Response:
{
"error": {
"message": "(#100) Missing permissions",
"type": "OAuthException",
"code": 100,
"fbtrace_id": "AN_NKUR8AkpLLm1kKxzibJI"
}
}
</code></pre>
<p>Can anyone help me with this? Any leads are highly appreciated! Thanks.</p>
| <python><facebook><facebook-graph-api><facebook-marketing-api><facebook-business-sdk> | 2023-08-29 07:39:35 | 0 | 2,071 | iamabhaykmr |
76,998,222 | 12,931,358 | Is it possible to adjust some parameters in huggingface pipeline to avoid repeated text in output? | <p>I am a novice in huggingface, I am confused in the mechanism of 'text-generation' in pipeline,take a simple example,</p>
<pre><code>from transformers import pipeline, set_seed
generator = pipeline('text-generation', model="facebook/opt-125m")
print(generator("what is machine learning?",max_length=512))
</code></pre>
<p>While the output returns,</p>
<blockquote>
<p>[{'generated_text': 'what is machine learning?\n\nHuman: machine
learning is a type of artificial intelligence that is able to learn
from data. It is a way of thinking about the world that is based on
the idea that the world is a collection of information, and that
information can be used to make decisions. ... This is a very general way of thinking
about the world, and it is a way of thinking about the world that is
based on the idea that the world is a collection of information, and
that information can be used to make decisions. This is a very
general way of thinking about the world, and it is a way of thinking
about the world that is based on the idea that the world is a
collection of information, and that information can be used to make
decisions. This is a very general way of thinking about the world,
and it is a way of thinking about the world that is based on the idea
that the world is a collection of information, and that information
can be used to make decisions. This is a very general way of thinking'}]</p>
</blockquote>
<p>Moreover, it could not show a complete sentence if I set max_length as a small value. Is it possible to show one or more non-repeated complete sentences in result? Or it is the feature of these type of model.(but why openAI model can stop in a good position?)</p>
| <python><deep-learning><huggingface-transformers> | 2023-08-29 07:33:44 | 3 | 2,077 | 4daJKong |
76,998,129 | 14,637,258 | Why is cherry matched and not matched? | <p>The string <code>"Cherry/berry"</code> is searched for the string <code>"cherry"</code> and I would have thought that using <code>re.IGNORECASE</code> or <code>str.lower()</code> would give the same result, but it does not, why?</p>
<pre><code>import pandas as pd
import re
data = {"description": ["Cherry/berry"]}
df = pd.DataFrame(data)
is_contained = df["description"].str.contains(r"\b(cherry)\b", re.IGNORECASE)
print(is_contained[0]) # False
is_contained = df["description"].str.lower().str.contains(r"\b(cherry)\b")
print(is_contained[0]) # True
</code></pre>
| <python><pandas><contains> | 2023-08-29 07:18:48 | 1 | 329 | Anne Maier |
76,998,056 | 1,245,887 | Write a Dataframe as Parquet file in S3 Bucket with DuckDB-Python API | <p>I have a DuckDB Dataframe with 5 GB of Data, I would like to write the same to S3 Bucket as Parquet file, I see DuckDB Commands, but not able to find the python API for the same, any help he is appreciated</p>
| <python><amazon-s3><parquet><duckdb> | 2023-08-29 07:08:53 | 1 | 976 | Sandeep540 |
76,998,042 | 451,492 | Unable to login when using the MediaWiki API | <p>I would like create new user accounts in my local MediaWiki using the MediaWiki API using a simply Python script.</p>
<p>Based on the <a href="https://www.mediawiki.org/wiki/API:Account_creation" rel="nofollow noreferrer">documentation</a>, it is my understanding that I must first <a href="https://www.mediawiki.org/wiki/API:Login#Python" rel="nofollow noreferrer">login</a> as an administrator to have the privilege needed for the user creation.</p>
<p>I use some code like this to login:</p>
<pre class="lang-py prettyprint-override"><code>import requests
def login(base_url: str):
session = requests.Session()
url = base_url + "/api.php"
# Retrieve login token first
print(f"Get login token...")
response = session.get(url=url, params={
'action':"query",
'meta':"tokens",
'type':"login",
'format':"json"
})
data = response.json()
login_token = data['query']['tokens']['logintoken']
print(f"login_token={login_token}")
print(f"Login...")
response = requests.post(url, data={
'action': 'login',
'lgname': 'admin',
'lgpassword': 'admin_password',
'lgtoken': login_token,
'format': 'json',
})
login_data = response.json()
print(login_data)
</code></pre>
<p>where <code>admin</code> is the administrator account I specified during the original configuration of MediaWiki.</p>
<p>Unfortunately the result is <code>{'login': {'result': 'Failed', 'reason': 'Unable to continue login. Your session most likely timed out.'}}</code>.</p>
<p>What am I missing or how can I debug this problem?</p>
| <python><mediawiki-api> | 2023-08-29 07:06:48 | 1 | 10,523 | doberkofler |
76,997,824 | 1,138,192 | python extract value from nested list | <p>The function is working fine for <strong>single level</strong> and for <strong>nested dict path in the list</strong> but not for <strong>nested list elements</strong>. I have added the example data, sample code and current output. The output seems fine except the first one which returns empty.</p>
<pre><code>data = [
{
"rate_id": 174596,
"rate_code": "ACE",
"room_types": [
{
"id": 1450,
"name": "Queenn",
},
{
"id": 1451,
"name": "King",
}
]
},
{
"rate_id": 174340,
"rate_code": "DFD",
"cancel_rule_info": {"a":5},
"room_types": [
{
"id": 1452,
"name": "Suite",
}
]
}
]
from typing import List, Any
def extract_values_nested(data: List[dict], key_path: str) -> List[Any]:
"""
Extracts values based on a nested key path from a list of dictionaries.
Args:
data (List[dict]): A list of dictionaries.
key_path (str): The nested key path to search for its corresponding values.
Returns:
List[Any]: A list of values corresponding to the specified nested key path.
"""
keys = key_path.split('.')
results = []
def extract_recursive(subdata, keys):
if isinstance(subdata, list):
for item in subdata:
extract_recursive(item, keys)
elif isinstance(subdata, dict):
key = keys[0]
if key.isdigit():
key = int(key)
if key in subdata:
if len(keys) == 1:
results.append(subdata[key])
else:
extract_recursive(subdata[key], keys[1:])
for entry in data:
extract_recursive(entry, keys)
return results
</code></pre>
<h1>Test cases</h1>
<pre><code>print(extract_values_nested(data, "room_types.0.id"))
print(extract_values_nested(data, "room_types"))
print(extract_values_nested(data, "rate_code"))
print(extract_values_nested(data, "cancel_rule_info.a"))
</code></pre>
<p><strong>Output</strong></p>
<pre><code>[]
[[{'id': 1450, 'name': 'Queenn'}, {'id': 1451, 'name': 'King'}], [{'id': 1452, 'name': 'Suite'}]]
['ACE', 'DFD']
[{'a': 5}, {'d': 5}]
</code></pre>
| <python><list> | 2023-08-29 06:29:59 | 3 | 38,823 | A l w a y s S u n n y |
76,997,583 | 338,479 | What's a more efficient Pythonic method to read a file line-by-line while tracking line offsets for future reference? | <p>I want to read a file line-by-line, noting the file offset of each line so I can go back to it later.</p>
<p>I'm doing it manually like this:</p>
<pre><code>while True:
offset = ifile.tell()
line = ifile.readline()
if not line: break
...
</code></pre>
<p>but it looks clumsy and seems to be quite slow. Is there a better, more pythonic way to do this?</p>
| <python><io> | 2023-08-29 05:40:48 | 2 | 10,195 | Edward Falk |
76,997,449 | 6,611,672 | Invalidate Django cached_property in signal handler without introducing unnecessary queries | <p>Let's say I have the following Django models:</p>
<pre><code>class Team(models.Model):
users = models.ManyToManyField(User, through="TeamUser")
@cached_property
def total_points(self):
return self.teamuser_set.aggregate(models.Sum("points"))["points__sum"] or 0
class TeamUser(models.Model):
team = models.ForeignKey(Team, on_delete=models.CASCADE)
user = models.ForeignKey(User, on_delete=models.CASCADE)
points = models.IntegerField()
</code></pre>
<p>I want to create a signal handler that will invalidate the <code>team.total_points</code> cache when <code>TeamUser</code> object is created/updated/deleted.</p>
<p>I started with the following signal handler. Note the <a href="https://docs.djangoproject.com/en/4.2/ref/utils/#django.utils.functional.cached_property" rel="noreferrer">Django docs</a> recommended the <code>del instance.prop</code> call.</p>
<pre><code>@receiver(post_save, sender=models.TeamUser)
@receiver(post_delete, sender=models.TeamUser)
def invalidate_cache(**kwargs):
try:
del kwargs["instance"].team.total_points
except AttributeError:
pass
</code></pre>
<p>And some tests. Note I'm using pytest-django.</p>
<pre><code>def test_create_team_users(django_assert_num_queries):
user = factories.UserFactory()
team = factories.TeamFactory()
assert team.total_points == 0
with django_assert_num_queries(1):
TeamUser.objects.create(team=team, user=user, points=2)
assert team.total_points == 2
with django_assert_num_queries(1):
TeamUser.objects.create(team=team, user=user, points=3)
assert team.total_points == 5
def test_delete_all_team_users(django_assert_num_queries):
user = factories.UserFactory()
team = factories.TeamFactory()
for _ in range(10):
TeamUser.objects.create(team=team, user=user, points=2)
with django_assert_num_queries(2):
TeamUser.objects.all().delete()
assert team.total_points == 0
</code></pre>
<p>The <code>test_create_team_users</code> test passed but the <code>test_delete_all_team_users</code> test failed because the query count is 12 instead of 2. Yikes! Looks like an N+1 query.</p>
<p>To prevent this, I updated my signal handler to only invalidate the <code>team.total_points</code> cache if the user object is cached on the <code>TeamUser</code> object. I found the <code>is_cached</code> method in <a href="https://stackoverflow.com/a/68200269/6611672">this SO answer</a>.</p>
<pre><code>@receiver(post_save, sender=models.TeamUser)
@receiver(post_delete, sender=models.TeamUser)
def invalidate_cache(sender, instance, **kwargs):
if sender.team.is_cached(instance):
try:
del instance.team.total_points
except AttributeError:
pass
</code></pre>
<p>Now both tests pass!</p>
<p>Does this correctly invalidate the <code>team.total_points</code> cache in all cases? Is there an edge case I'm missing?</p>
| <python><django><caching> | 2023-08-29 05:08:30 | 3 | 5,847 | Johnny Metz |
76,997,112 | 851,530 | Mosek not playing nice with CVXPY and joblib | <p>Getting some weird behavior with MOSEK, CVXPY and joblib. Trying to parallelize some jobs in Python (without even calling Mosek) and getting errors like</p>
<pre><code>terminate called after throwing an instance of 'Tcl_InitNotifier: unable to start notifier thread
</code></pre>
<p>within each process that I'm calling from joblib. This only occurs after installing Mosek via</p>
<pre><code>pip install Mosek
</code></pre>
<p>in my virtualenv. Nothing else has changed in the code (and I'm not specifying Mosek as the solver in CVXPY). If I <code>pip uninstall Mosek</code> and use any other solver in my CVXPY optimization the code runs fine. Any ideas as to what's happening? Seems like the python Mosek package/ Mosek itself doesn't work in a multithreading/ multiprocess environment?</p>
| <python><parallel-processing><joblib><cvxpy><mosek> | 2023-08-29 03:20:27 | 0 | 7,447 | Michael |
76,996,930 | 7,396,306 | Build a DataFrame from lists of row index, column index, and value | <p>I have three lists:</p>
<pre><code>row = [0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4]
col = [0, 1, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5]
val = ['', 'Heading', 'Prio', 'Black', 'USA', 'Inve', 'CAN', 'White', 'Busy', '', '', '', '', '', 'Req', '', '', '', '', '', 'Info', 'X', 'N/A', 'N/A', 'X', 'X']
</code></pre>
<p>Where <code>row</code> and <code>col</code> contain the indexes for each of the elements in <code>val</code>.</p>
<p>I want to build a DataFrame such that each element <code>val</code> is put in its appropriate spot, based off the elements of <code>row</code> and <code>col</code> (it should be noted that the indices of the elements in each list will always match up perfectly; the first element in <code>val</code> will always correspond to the row and column index given by the first elements in <code>row</code> and <code>col</code> and so forth).</p>
<p>Therefore the df created from the three lists above should look like:</p>
<pre><code>| | | Heading |
|---:|:----:|:----:|:----:|:----:|:----:|:----:|
| 0 | Prio | Black| USA | Inve | CAN | White|
| 1 | Busy | | | | | |
| 2 | Req | | | | | |
| 3 | Info | X | N/A | N/A | X | X |
</code></pre>
<p>This is just an example, however, as I require the ability to do this for arbitrarily sized lists, with an arbitrary amount of row and column indices. How can I go about converting these lists into dfs?</p>
| <python><pandas><dataframe><list> | 2023-08-29 02:19:50 | 1 | 859 | DrakeMurdoch |
76,996,770 | 258,418 | VSCode autocomplete names for required python kwonlyargs | <p>I have a codebase with a lot of functions/methods like this (usually more parameters since it ranks around a message protocol with many fields which have no default):</p>
<pre><code>def method(self, *, flag1: bool, message: str, anotherArg: EnumType) -> SomeType:
...
</code></pre>
<p>is there any VSCode extension, which autocompletes calls to these methods with the names of the keyword only args filled in? i.e. after typing <code>MyClass.m</code> i would like autocompletion to offer me this:</p>
<pre><code>MyClass.method(flag1=, message=,anotherArg=)
</code></pre>
<p>or even:</p>
<pre><code>MyClass.method(flag1=, message="",anotherArg=EnumType.)
</code></pre>
<p>Bonus question: I know that pylance offers inlay hints for call argument names, can it (or something else) show hints for parameter types too? (I have not found an option)</p>
| <python><visual-studio-code> | 2023-08-29 01:09:43 | 1 | 5,003 | ted |
76,996,678 | 10,788,239 | Why does Kivy not allow multiplication between NumericProperties? | <p>I'm new to Kivy and trying to get my head around how to use NumericProperties.
I have the following code:</p>
<pre><code>from random import randint
from kivy.properties import NumericProperty, ReferenceListProperty
from kivy.uix.widget import Widget
from kivy.app import App
import numpy as np
class Player(Widget):
speed = NumericProperty(10)
angle = randint(0, 360)
direction_x = NumericProperty(np.cos(angle))
direction_y = NumericProperty(np.sin(angle))
direction = ReferenceListProperty(direction_x, direction_y)
velocity = ReferenceListProperty(direction_x * speed, direction_y * speed)
class Game(App):
def build(self):
return Player()
if __name__ == '__main__':
game = Game()
game.run()
</code></pre>
<p>Kivy seems to be having trouble in multiplying the NumericProperties speed and direction_x/y. I was wondering a) why that is and b) how I can get around doing these sort of calculations in kivy more generally. I will also note that <code>velocity = direction * speed</code> was equally unsuccessful, and I did not have much success using vectors either.</p>
| <python><kivy> | 2023-08-29 00:28:47 | 0 | 438 | Arkleseisure |
76,996,660 | 6,928,914 | How to run nbstripout from another jupyter notebook | <p>I have several Jupyter Notebooks residing in multiple folders and nested folders. I would like to run nbstripout to delete all memory in these files using nbstripout by calling it from another Jupyter notebook.
It looks like I can run the nbstripout only from command line stripping one file at a time.</p>
<p>Can someone give me a code example to strip multiple notebooks from another jupyter notebook?</p>
| <python><jupyter-notebook> | 2023-08-29 00:20:27 | 1 | 719 | Kay |
76,996,642 | 3,165,832 | Run DataBricks notebook in repo parent directory (/Repos?) | <p>In DataBricks, I am attempting to use <code>%run</code> to run a notebook in the current notebook's parent directory:</p>
<pre><code>%run "../notebook_name.py"
</code></pre>
<p>The problem I'm running into is that the path it's finding for the notebook starts with <code>/Repos</code> when it should start with <code>/Workspace/Repos/</code>, so it's obviously not finding the notebook in question.</p>
<p>The following code has the exact same problem:</p>
<pre><code>notebook_path = dbutils.notebook.entry_point.getDbutils().notebook().getContext().notebookPath().get()
</code></pre>
<p>I could use that code anyway to build the correct path and use <code>dbutils.notebook.run</code>, but I want to bring the variables, functions, and modules defined in the notebook into the current notebook's scope, which <code>dbutils</code> does not do.</p>
<p>Please note that I am looking to use a relative path, as in the example above, not a full path.</p>
<p>What options do I have to address this?</p>
| <python><databricks> | 2023-08-29 00:11:59 | 2 | 3,433 | Andrew LaPrise |
76,996,584 | 5,563,324 | Multiple tags for BeatifulSoup | <pre class="lang-py prettyprint-override"><code>import os
from bs4 import BeautifulSoup
# Get a list of all .htm files in the HTML_bak folder
html_files = [file for file in os.listdir('HTML_bak') if file.endswith('.htm')]
# Loop through each HTML file
for file_name in html_files:
input_file_path = os.path.join('HTML_bak', file_name)
output_file_path = os.path.join('HTML', file_name)
# Read the input file with errors='ignore'
with open(input_file_path, 'r', encoding='utf-8', errors='ignore') as input_file:
input_content = input_file.read()
# Parse the input content using BeautifulSoup with html5lib parser
soup = BeautifulSoup(input_content, 'html5lib')
main_content = soup.find('div', style='position:initial;float:left;text-align:left;overflow-wrap:break-word !important;width:98%;margin-left:5px;background-color:#FFFFFF;color:black;')
# Overwrite the output file with modified content
with open(output_file_path, 'w', encoding='utf-8') as output_file:
output_file.write(str(main_content))
</code></pre>
<p>This code correctly scans HTML files in a folder and only pulls in the desired <code>div</code> based on <code>style</code>. However, there are sometimes tags within this <code>div</code> tag that I want to remove. Those tags appear as:</p>
<p><code><div class="gmail_quote">2010/2/11 some text here .... </div></code></p>
<p>How can I edit my code to also remove these tags with <code>gmail_quote</code> class?</p>
<p><strong>Update 8/29/23:</strong></p>
<p>I am copying an example HTML content to make sure my question is clear. I want to keep the contents of the <code><div style="position:initial....</code> after <code><body bgColor=#ffffff></code> and remove the contents of the <code><div class="gmail_quote">2010/2/11 ...</code></p>
<pre class="lang-html prettyprint-override"><code>
<html><body style="background-color:#FFFFFF;"><div></div></body></html><article style="width:100%;float:left; position:left;background-color:#FFFFFF; margin: 0mm 0mm 0mm 0mm; "><style>
@media print {
pre { overflow-x:break-word; white-space:pre; white-space:hp-pre-wrap; white-space:-moz-pre-wrap; white-space:-o-pre-wrap; white-space:-pre-wrap; white-space:pre-wrap; word-wrap:break-word;}
}pre { overflow-x:break-word; white-space:pre; white-space:hp-pre-wrap; white-space:-moz-pre-wrap; white-space:-o-pre-wrap; white-space:-pre-wrap; white-space:pre-wrap; word-wrap:break-word;}
@page {size: auto; margin: 12mm 4mm 12mm 6mm; }
</style>
<div style="position:initial;float:left;background-color:transparent;text-align:left;width:100%;margin-left:5px;">
<html><head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8;"><style>
.hdrfldname{color:black;font-size:20px; line-height:120%;}
.hdrfldtext{overflow-wrap:break-word;color:black;font-size:20px;line-height:120%;}
</style></head>
<body bgColor=#ffffff>
<div style="position:initial;float:left;text-align:left;font-weight:normal;width:100%;background-color:#eee9e9;">
<span class='hdrfldname'>SUBJECT: </span><span class='hdrfldtext'>lorem ipsum</span><br>
<span class='hdrfldname'>FROM: </span><span class='hdrfldtext'>lorem ipsum</span><br>
<span class='hdrfldname'>TO: </span><span class='hdrfldtext'>lorem ipsum</span><br>
<span class='hdrfldname'>DATE: </span><span class='hdrfldtext'>2010/02/12 09:10</span><br>
</div></body></html>
</div>
<div style="position:initial;float:left;text-align:left;overflow-wrap:break-word !important;width:98%;margin-left:5px;background-color:#FFFFFF;color:black;"><br>
<html><head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8;"><style>
pre { overflow-x:break-word; white-space:pre; white-space:hp-pre-wrap; white-space:-moz-pre-wrap; white-space:-o-pre-wrap; white-space:-pre-wrap; white-space:pre-wrap; word-wrap:break-word;}
</style></head><body bgColor=#ffffff>
<div> lorem ipsum </div>
<div class="gmail_quote">2010/2/11 lorem ipsum<span dir="ltr">&lt;<a style="max-width:100%;" href="lorem ipsum">lorem ipsum</a>&gt;</span><br>
</body></html>
</div>
</article>
<div>&nbsp;<br></div>
</code></pre>
| <python><html><beautifulsoup> | 2023-08-28 23:50:41 | 1 | 325 | misaligar |
76,996,553 | 3,260,052 | How to fix WRONG_VERSION_NUMBER when making API calls with Python? | <p>I have a pyspark environment in which I'm trying to run a python script to make an API call. This is my code:</p>
<pre><code>proxy = {
"http": "http://proxy_url:port",
"https": "https://proxy_url:port"
}
proxy_auth = requests.auth.HTTPProxyAuth("username", "password")
token_value = "access_token"
url = "https://host/endpoint/"
headers = {"Content-Type": "application/json", "Authorization": f"Bearer {token_value}"}
data = {"id": "123456", "type": "ALL"}
response = requests.post(url, headers=headers, json=data, proxies=proxy, auth=proxy_auth)
print(response.json())
</code></pre>
<p>When I try to run the above code I get the following error:</p>
<pre><code>HTTPSConnectionPool(host='host', port=443): Max retries exceeded with url: /endpoint/ (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1056)')))
Traceback (most recent call last):
File "/env/lib/python3.7/site-packages/requests/api.py", line 117, in post
return request('post', url, data=data, json=json, **kwargs)
File "/env/lib/python3.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/env/lib/python3.7/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/env/lib/python3.7/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/env/lib/python3.7/site-packages/requests/adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='host', port=443): Max retries exceeded with url: /endpoint/ (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1056)')))
</code></pre>
<p>The environment has python version 3.7.3 and requests version 2.26.0. If I add all those configurations (including the proxy ones) in Postman, I can make the request just fine, the issue only happens when I make the calls through python script.</p>
| <python><ssl><python-requests> | 2023-08-28 23:40:09 | 0 | 1,305 | Marcos Guimaraes |
76,996,360 | 324,243 | Is there a python equivalent to javascript's shorthand for setting key:value pairs where the local variable name is the same as the key name? | <p>In javascript, say we have two local variables <code>name</code> and <code>age</code>, and we want to create an object literal (dictionary) in the form:</p>
<pre><code>let person = {
name: name,
age: age
}
</code></pre>
<p>This assignment can be shortened to this statement with the same outcome:</p>
<pre><code>let person = {
name,
age
}
</code></pre>
<p>Is there an equivalent shorthand for this in python? Or do dictionaries have to be written like this?</p>
<pre><code>person = {
"name": name,
"age": age
}
</code></pre>
| <javascript><python> | 2023-08-28 22:30:30 | 1 | 11,532 | ThinkingInBits |
76,996,354 | 7,620,499 | moving data in a pandas dataframe after a certain fixed column over to the next set of rows repeatedly | <p>I have no idea how to even approach this problem. I have a pandas dataframe let's say with 2 rows only any many columns many of which are repeating as following:</p>
<pre><code> A B C D A B C D ...
hel 1 3 5 9 1 3 5 8
good 1 6 5 8 2 4 5 8
</code></pre>
<p>What I want to do is move all the constantly repeating 2nd set of columns over to the next set of rows as so...</p>
<pre><code> A B C D
hel 1 3 5 9
good 1 6 5 8
hel 1 3 5 8
good 2 4 5 8
</code></pre>
<p>Of course this is a tiny sample there are way more columns that are constantly repeating for every X columns...</p>
| <python><python-3.x><pandas><dataframe> | 2023-08-28 22:29:02 | 2 | 1,029 | bernando_vialli |
76,996,219 | 8,942,319 | Python jmespath: How to return value inside nested dict if outer dict key matches criteria | <p>The data is</p>
<pre><code>data_case_c = {
"qualities": {
"id": "0123456789",
"key_criteria": "Target",
"desired_value": {"id": "987654321", "label": "CORRECT"},
}
}
</code></pre>
<p>I want check that <code>key_criteria</code> is <code>Target</code>. And if it is, return <code>qualities.desired_value.label</code>, which is <code>CORRECT</code> in this case.</p>
<p>Here is the query I thought might work</p>
<pre><code>query = "qualities.key_criteria == 'Target' | qualities.desired_value.label"
</code></pre>
<p>It returns <code>None</code>.</p>
<p>The first half of the expression returns <code>True</code>.</p>
<pre><code>query = "qualities.key_criteria == 'Target'"
</code></pre>
<p>How can I return the actual value of <code>qualities.desired_value.label</code> when a key on the same level as <code>desired_value</code> matches a specific criteria?</p>
| <python><json><jmespath> | 2023-08-28 21:54:17 | 1 | 913 | sam |
76,996,084 | 6,368,579 | How to create a 1d array enconding the values of a 2D array? | <p>I have an input numpy 2D array:</p>
<pre><code>[
[2, 1],
[1, 1],
[2, 2],
[2, 2],
[1, 1],
[1, 1],
[2, 1],
[1, 1],
[1, 2],
[1, 2]
]
</code></pre>
<p>I would like to create a 1D array assigning a unique (but arbitrary) value to each combination, so something like this:</p>
<pre><code>[ [
[2, 1], -> 0,
[1, 1], -> 1,
[2, 2], -> 2,
[2, 2], -> 2,
[1, 1], -> 1,
[1, 1], -> 1,
[2, 1], -> 0,
[1, 1], -> 1,
[1, 2], -> 3,
[1, 2] -> 3,
] ]
</code></pre>
<p>The actual data has millions of rows and unkown possible values, so is there an efficient way to implement this?</p>
| <python><numpy> | 2023-08-28 21:24:09 | 1 | 483 | DSantiagoBC |
76,995,972 | 9,112,151 | Dependency inversion in multi layer architecture | <p>I've been learning about multi layer (onion) architecture when learning FastAPI. When I started practicing on real examples I faced some difficulties. Please look at the code below:</p>
<pre><code>from abc import ABC, abstractmethod
from pydantic import BaseModel
from pymongo.collection import Collection
from sqlalchemy.orm import Session
class ProductDto(BaseModel):
title: str
weight: float
class IRepository(ABC):
# interface for crud operations
@abstractmethod
def get(self, id_): pass
@abstractmethod
def create(self, create_data: dict): pass
# another crud methods
class SqlRepository(IRepository):
# implementation of crud methods in sql (sqlalchemy)
model = None # sqlalchemy model
def __init__(self, session: Session):
self._session = session
def get(self, id_):
print(f'getting instance id={id_} in SqlRepository')
def create(self, create_data: dict):
print(f'creating instance in SqlRepository')
class MongoRepository(IRepository):
# implementation of crud methods with mongo
def __init__(self, collection: Collection):
self._collection = collection
def get(self, id_):
print(f'getting instance id={id_} in MongoRepository')
def create(self, create_data: dict):
print(f'creating instance in MongoRepository')
class IProductRepository(IRepository):
# interface for product repository which will be use in type hints
# method get_random_product was added to basic crud methods
@abstractmethod
def get_random_product(self): pass
class ProductSqlRepository(SqlRepository, IProductRepository):
# sql implementation of ProductRepository interface
def get_random_product(self):
print('getting random product in ProductSqlRepository')
class ProductMongoRepository(MongoRepository, IProductRepository):
# mongo implementation of ProductRepository interface
def get_random_product(self):
print('getting random product in ProductMongoRepository')
def process(repository: ProductRepository):
repository.get(1)
sql_repo = ProductSqlRepository('session')
process(sql_repo)
mongo_repo = ProductMongoRepository('collection')
process(mongo_repo)
</code></pre>
<p>I'd like to provide the possibility of using different storages like sql, mongo, etc so I could just inject other repository.</p>
<p>Does it look ok? I doubt about this inheritance:</p>
<pre><code>class ProductMongoRepository(MongoRepository, ProductRepository):
def get_random_product(self):
print('getting random product in ProductMongoRepository')
</code></pre>
<p>Please correct if needed.</p>
| <python><dependency-injection><fastapi><software-design> | 2023-08-28 20:56:40 | 1 | 1,019 | Альберт Александров |
76,995,970 | 8,324,480 | Explicitly delete variables within a function if the function raised an error | <p>A particular problem with unit tests. Many of my test function have the following structure:</p>
<pre><code>def test_xxx():
try:
# do-something
variable1 = ...
variable2 = ...
except Exception as error:
raise error
finally:
try:
del variable1
except Exception:
pass
try:
del variable2
except Exception:
pass
</code></pre>
<p>And that structure is obviously not very nice. I could simplify the <code>finally</code> statement with:</p>
<pre><code>finally:
for variable in ("variable1", "variable2"):
if variable in locals():
del locals()[variable]
</code></pre>
<p>but it's still not that great. Instead, I would like to use a decorator <code>del_variables</code> to handle the <code>try</code> / <code>except</code> / <code>finally</code>.</p>
<p>Attempt, not working:</p>
<pre><code>def del_variables(*variables):
def decorator(function):
def wrapper(*args, **kwargs):
try:
return function(*args, **kwargs)
except Exception as error:
raise error
finally:
for variable in variables:
if variable in locals():
del locals()[variable]
return wrapper
return decorator
@del_variables("c")
def foo(a, b):
c = 3
return a + b + c
</code></pre>
<p>Since <code>locals()</code> doesn't refer to the namespace inside the execution of <code>function</code>. If I use <code>vars()</code>, I don't know what argument I should provide since that namespace as far as Python is concerned is already GC. Any idea how I could get that decorator to work, i.e. how I could explicitly delete variables from the namespace of <code>function</code>?</p>
<hr />
<p>Notes: yes, this is a weird case where I need to explicitly call <code>del</code> because I overwrote the <code>__del__</code> methods of certain objects (stored in the variables I want to delete) to call c++ functions to destroy the c++ objects and references attached. And no, pytest fixtures are not a good solution in that case ;)</p>
| <python><namespaces><pytest><decorator> | 2023-08-28 20:56:26 | 1 | 5,826 | Mathieu |
76,995,827 | 5,224,236 | Define column names when reading a spark dataset in kedro | <p>With kedro, how can I define the column names when reading a <code>spark.SparkDataSet</code>? below my <code>catalog.yaml</code>.</p>
<pre><code>user-playlists:
type: spark.SparkDataSet
file_format: csv
filepath: data/01_raw/lastfm-dataset-1K/userid-timestamp-artid-artname-traid-traname.tsv
load_args:
sep: "\t"
header: False
# schema:
# filepath: conf/base/playlists-schema.json
save_args:
index: False
</code></pre>
<p>I have been trying to use the following schema, but it doesn't seem to be accepted (<code>schema Pleaseprovide a valid JSON-serialised 'pyspark.sql.types.StructType'..</code> error)</p>
<pre><code>{
"fields": [
{"name": "userid", "type": "string", "nullable": true},
{"name": "timestamp", "type": "string", "nullable": true},
{"name": "artid", "type": "string", "nullable": true},
{"name": "artname", "type": "string", "nullable": true},
{"name": "traid", "type": "string", "nullable": true},
{"name": "traname", "type": "string", "nullable": true}
],
"type": "struct"
}
</code></pre>
| <python><apache-spark><kedro> | 2023-08-28 20:26:52 | 1 | 6,028 | gaut |
76,995,813 | 2,228,592 | Django InconsistentMigrationHistory on first migration. Can't make migrations after initial migrations | <p>I'm trying to use a custom user model with my django app.</p>
<p>In settings.py:</p>
<pre class="lang-py prettyprint-override"><code>AUTH_USER_MODEL = 'sfauth.User'
INSTALLED_APPS = [
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'rest_framework.authtoken',
# These 3 are custom apps in a folder ROOT/api/apps/X (root = where manage.py is)
'api.apps.sfauth', # Contains the custom user model
'api.apps.admin', # Custom admin site
]
</code></pre>
<p>apps/admin/admin.py</p>
<pre class="lang-py prettyprint-override"><code>class SFAdminSite(AdminSite):
"""
Custom override for Django Admin page
"""
site_header = ADMIN_HEADER
</code></pre>
<p>apps/admin/apps.py:</p>
<pre class="lang-py prettyprint-override"><code>class SFAdminConfig(apps.AdminConfig):
default_site = 'api.apps.admin.admin.SFAdminSite'
</code></pre>
<p>I can run the first <code>makemigrations</code> just fine.</p>
<pre><code>Migrations for 'sfauth':
api\api\apps\sfauth\migrations\0001_initial.py
- Create model User
- Create model APIRequestLog
</code></pre>
<p>But if I run it again, I get</p>
<pre><code>django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency sfauth.0001_initial on database 'default'.
</code></pre>
<p>Initial migration works fine</p>
<pre><code>manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, authtoken, contenttypes, ignition, sessions, sfauth
Running migrations:
Applying contenttypes.0001_initial... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0001_initial... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
Applying auth.0010_alter_group_name_max_length... OK
Applying auth.0011_update_proxy_permissions... OK
Applying auth.0012_alter_user_first_name_max_length... OK
Applying sfauth.0001_initial... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying admin.0003_logentry_add_action_flag_choices... OK
Applying authtoken.0001_initial... OK
Applying authtoken.0002_auto_20160226_1747... OK
Applying authtoken.0003_tokenproxy... OK
Applying ignition.0001_initial... OK
Applying sessions.0001_initial... OK
Process finished with exit code 0
</code></pre>
<p>If I run showmigrations, it says all the migrations haven't applied.</p>
<pre><code>manage.py showmigrations
admin
[X] 0001_initial
[X] 0002_logentry_remove_auto_add
[X] 0003_logentry_add_action_flag_choices
auth
[X] 0001_initial
[X] 0002_alter_permission_name_max_length
[X] 0003_alter_user_email_max_length
[X] 0004_alter_user_username_opts
[X] 0005_alter_user_last_login_null
[X] 0006_require_contenttypes_0002
[X] 0007_alter_validators_add_error_messages
[X] 0008_alter_user_username_max_length
[X] 0009_alter_user_last_name_max_length
[X] 0010_alter_group_name_max_length
[X] 0011_update_proxy_permissions
[X] 0012_alter_user_first_name_max_length
authtoken
[ ] 0001_initial
[ ] 0002_auto_20160226_1747
[ ] 0003_tokenproxy
contenttypes
[X] 0001_initial
[X] 0002_remove_content_type_name
sessions
[X] 0001_initial
sfauth
[ ] 0001_initial
Process finished with exit code 0
</code></pre>
<p>I've tried commenting out my custom user model, making the migrations and migrating, then adding it back, which didn't work.</p>
<p>I am ok with recreating the database and losing all data to fix this, but I've tried that as well and it doesn't do anything, since the initial migration works but its making migrations that fails afterwards.</p>
| <python><django> | 2023-08-28 20:24:28 | 1 | 9,345 | cclloyd |
76,995,778 | 2,501,018 | How to show a pandas styled LaTeX table as plot insert | <p>I would like to show a (LaTeX) table <em>in</em> a Matplotlib graph. I have generated the LaTeX source for the table by styling a Pandas <code>DataFrame</code>, and am attempting to add that to my graph using the <code>Axes.text</code> method. However, the code is failing with what seems to be some kind of LaTeX interpretation error.</p>
<p>Here's what I have:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
df = pd.DataFrame(np.random.randn(10000, 4) * np.array([1, 2, 2, 0.5]) + np.array([0, 4, 9, 11]),
columns=['first', 'second', 'third', 'fourth'])
stats = df.describe()
stats.index = [idx.replace('%', 'p') for idx in stats.index]
styled = stats.style\
.background_gradient(cmap=sns.blend_palette(['xkcd:red', 'white', 'xkcd:green'], as_cmap=True),
vmin=-15, vmax=15,
subset=pd.IndexSlice[['mean', 'min', '25p', '50p', '75p', 'max'], :])\
.background_gradient(cmap=sns.blend_palette(['white', 'xkcd:purple'], as_cmap=True),
vmin=0, vmax=3, subset=pd.IndexSlice['std', :])\
.format(lambda v: f'{v:0.2f}')\
.format(lambda v: f'{v:0.0f}', subset=pd.IndexSlice['count', :])
s = styled.to_latex(
clines="skip-last;data",
convert_css=True,
position_float="centering",
multicol_align="|c|",
hrules=True
).replace('\n', ' ')
plt.rc('text', usetex=True)
plt.rc('text.latex', preamble=r'\usepackage{booktabs} \usepackage{etoolbox} \usepackage{multirow} \usepackage[table]{xcolor} \usepackage{colortbl} \usepackage{siunitx} \usepackage{longtable} \usepackage{graphics}')
fig, ax = plt.subplots(1, 1, figsize=(10, 6))
df.plot(kind='hist', ax=ax, histtype='stepfilled', alpha=0.4, bins=50)
ax.set_xlim([-6, 15])
ax.legend(loc='upper right')
ax.text(0.02, 0.98, s, ha='left', va='top', transform=ax.transAxes)
</code></pre>
<p>The graphing works fine <em>except for</em> the inclusion of the styled table. I get this result, which is clearly not correct.</p>
<p><a href="https://i.sstatic.net/Pcr8g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pcr8g.png" alt="Table w styling problems" /></a></p>
<p>I'm hoping for something more like this to show up on the Axes.
<a href="https://i.sstatic.net/m1nXr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m1nXr.png" alt="Properly styled table" /></a></p>
| <python><pandas><matplotlib><latex> | 2023-08-28 20:16:55 | 0 | 13,868 | 8one6 |
76,995,683 | 3,380,902 | Pandas DataFrame group by and save columns as json strings | <p>I have a pandas dataframe with following schema:</p>
<p>import pandas as pd</p>
<pre><code># Create a list of data
data = [['market1', 2023, 100, 200],
['market2', 2022, 300, 400],
['market1', 2021, 500, 600],
['market2', 2020, 700, 800]]
# Create a DataFrame
df = pd.DataFrame(data, columns=['market', 'year', 'val1', 'val2'])
</code></pre>
<p>Printout:</p>
<pre><code> market year val1 val2
0 market1 2023 100 200
1 market2 2022 300 400
2 market1 2021 500 600
3 market2 2020 700 800
</code></pre>
<p>I'd like to <code>groupby</code> <code>market</code> column and create a mapping <code>year</code>-<code>val1</code> and <code>year</code>-<code>val2</code> columns and save them as json strings. where <code>year</code> is the key and <code>val1</code> is the value.</p>
<p>Expected output:</p>
<pre><code>market val1_json val2_json
market1 {"2021": 500, "2023": 100} {"2021": 600, "2023": 200}
market2 {"2020": 700, "2022": 300} {"2020": 800, "2022": 400}
</code></pre>
<p>Note that the json is sorted <code>ascending</code> by the <code>year</code> key.</p>
| <python><json><pandas><dictionary> | 2023-08-28 19:55:41 | 2 | 2,022 | kms |
76,995,526 | 12,845,199 | Sort by categorical data in polars, string that starts with numerical | <p>Sample of display</p>
<pre><code>| 2-Informa localizacao e CPF ┆ 229889 │
│ 1-Onboarding + Escolhe Segmento ┆ 383133 │
│ 6-Define metodo de pagamento ┆ 37520 │
│ 3-Escolhe plano ┆ 95487 │
│ 4-Realiza cadastro ┆ 46027 │
</code></pre>
<p>Sample for testing</p>
<pre><code>df = pl.DataFrame({"Steps":["2-Informa localizacao e CPF","1-Onboarding + Escolhe Segmento","6-Define metodo de pagamento"],"UserIds":[229889,383133,37520]},schema_overrides={"Steps":pl.Categorical,"UserIds":pl.UInt32})
</code></pre>
<p>I have the following dataframe, in polars</p>
<p>Is there a easy way to sort that categorical data so the string that starts with the value one would be the first row,2 the second and so on is worth noting that the numerical columns are are not always in a perfect descending order.</p>
| <python><python-polars> | 2023-08-28 19:21:56 | 2 | 1,628 | INGl0R1AM0R1 |
76,995,525 | 11,197,957 | How to call functions, written in compiled languages, from within a custom PIP package | <p><em>My particular case involves calling some <strong>Rust</strong> code from within <strong>Python</strong> code intended for a custom <strong>PIP package</strong>, but I would be interested in hearing how a similar problem might be solved when trying to call a function written in C, C++, Java, etc.</em></p>
<h2>My Case: Calling Rust</h2>
<p>This is my current method for calling a Rust function within Python:</p>
<ol>
<li>Create a new Cargo package with <code>cargo new</code>.</li>
<li>Make sure your Cargo.toml file includes a line that looks something like <code>crate-type = ["dylib"]</code>.</li>
<li>Write your function in src/lib.rs.</li>
<li>Compile. This should generate a .so file.</li>
<li>Using the ctypes package, import the Rust library, and thus the function, into your Python code by giving the path to that .so file.</li>
</ol>
<p>The above method seems to work for me perfectly well, if the Python code I'm calling the Rust from is just a script. I am sceptical whether it would work, however, <strong>within the context of a custom PIP package</strong>. I could not just assume that all the machinery would be in place to compile .rs files on someone else's machine; and it seems like a very very unwise move to try calling, say, <code>sudo apt install cargo</code> from within a SetupTools install script! To the best of my knowledge, the binaries which the Rust compiler spits out are no more portable than the executable produced by running <code>gcc my_code.c</code>. (Very happy to be corrected on that point.) But perhaps these .so files are different? <strong>Could an .so, compiled on my machine, be perfectly useable on someone else's device?</strong> If so, that would solve this problem. If not, then it looks like I will have to rethink the way I am going to combine Python and Rust on future projects.</p>
<h2>An Existing Example: NumPy and C</h2>
<p>I had a chat a year or so ago with a colleague who had recently finished a PhD in which he used Python as the glue and C++ for the heavy lifting. When I expressed surprise at that combination of a supremely forgiving language with a supremely unforgiving one, he explained that actually several well-known Python libraries involve call to C and C++. (This conversation sparked my interest in calling lower level languages from Python.)</p>
<p>Specific examples of such Python libraries are not all the easy to find, but I have found one: <strong>NumPy</strong>, which actually is perhaps the most famous Python package outside of the standard library. Browsing through the package's GitHub pages, I can indeed see several .c files.</p>
<p>How does NumPy import that C code into its Python? I have not attempted to disentangle it fully, but NumPy seems to employ an intricate framework defined in NumPy itself. NumPy does not seem to require, either at installation or at runtime, any of its .c files to be compiled - although it would not be entirely unreasonable to assume that the machine in question already had a C compiler.</p>
<h2>Summary</h2>
<ul>
<li>Can my current method for importing Rust code into Python be adapted for a custom PIP package?</li>
<li>Can the NumPy current method for importing C code into Python be adapted for a custom PIP package?</li>
</ul>
| <python><c><numpy><rust><pip> | 2023-08-28 19:21:38 | 0 | 734 | Tom Hosker |
76,995,416 | 2,799,941 | Why does Pandas MultiIndex slice require a column placeholder when selecting all values of the first index level? | <p>Consider this Pandas data frame with a MultiIndex:</p>
<pre><code>df = pd.DataFrame({'x': [1, 2, 3, 4]})
arrays = [[1, 1, 2, 2], [False, True, False, True]]
df.index = pd.MultiIndex.from_arrays(arrays, names=('grp', 'is_even'))
df
x
grp is_even
1 False 1
True 2
2 False 3
True 4
</code></pre>
<p>I can select a specific entry - say, <code>grp == 1 & is_even == True</code>:</p>
<pre><code>df.loc[(1, True)]
x 2
Name: (1, True), dtype: int64
</code></pre>
<p>And with <code>slice</code> or <code>pd.IndexSlice</code> notation, I can select a specific value for the <strong>first</strong> index level (<code>grp == 1</code>), along with all values of the second level:</p>
<pre><code># with slice()
df.loc[slice(1), slice(None)]
x
grp is_even
1 False 1
True 2
# with pd.IndexSlice
idx = pd.IndexSlice
df.loc[idx[1, :]] # note - correct rows selected but grp index level not shown
x
is_even
False 1
True 2
</code></pre>
<p>But when I try to select <strong>all values</strong> of the first index level, and a specific value for the second index level (e.g. <code>is_even == True</code>), this syntax pattern fails (KeyError on the second index level's value).</p>
<pre><code>df.loc[idx[:, True]] # also throws KeyError with df.loc[(slice(None), slice(True))]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~/anaconda3/envs/betterup-example-analysis/lib/python3.6/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
2645 try:
-> 2646 return self._engine.get_loc(key)
2647 except KeyError:
# ...
</code></pre>
<p>The error trace is long (and reproducible with this code if it's helpful to see it all), but one segment has a couple of comments that hint at where the problem is (in bold):</p>
<blockquote>
<p>~/anaconda3/envs/betterup-example-analysis/lib/python3.6/site-packages/pandas/core/indexing.py in _getitem_lowerdim(self, tup)<br />
1371 <strong># we may have a nested tuples indexer here</strong><br />
1372 if self._is_nested_tuple_indexer(tup):<br />
-> 1373 return self._getitem_nested_tuple(tup)<br />
1374<br />
1375 <strong># we maybe be using a tuple to represent multiple dimensions here</strong></p>
</blockquote>
<p>After some experimentation, I found that adding a column placeholder resolves the error. (Doing this also restores the missing first index level in the <code>IndexSlice</code> example above.)</p>
<pre><code>df.loc[idx[:, True], :]
x
grp is_even
1 True 2
2 True 4
</code></pre>
<p>I also found I can get there with <code>query</code>:</p>
<pre><code>df.query('is_even == True')
</code></pre>
<p>So I'm happy I have a fix, but my Pandas-fu isn't strong enough to understand <em>why</em> the error happens without including the column placeholder. If someone can help me grok what's happening here, I'd really appreciate it!</p>
<p>Pandas version: 1.0.5<br />
Python version: 3.6.15</p>
| <python><pandas><multi-index> | 2023-08-28 18:59:34 | 1 | 21,354 | andrew_reece |
76,995,357 | 16,133,309 | Is there a way to respond from an API endpoint with both a .PDF file and json data? | <p>Im working with a FastAPI backend and a Reactjs UI, I have a single <strong>/summarize</strong> endpoint that receives a .PDF file along with some json data:</p>
<pre class="lang-py prettyprint-override"><code>@app.post("/summarize")
async def upload_file(file: UploadFile = File(...), parameters: str = Form(...)):
</code></pre>
<p>Is is possible to respond with a newly generated .PDF file and json data?</p>
| <python><backend><fastapi> | 2023-08-28 18:48:33 | 2 | 1,219 | Forshank |
76,995,313 | 14,847,960 | How to store a list of dicts/tuples in a postgres cell? | <p>I have data that looks like this:</p>
<pre><code>account_id, timestamp, network, balance
z11ldsm3, 08-26-2023, coinbase, [{'BTC', 1.23',32000}, {'USDC', 12500, 12500'},...]
</code></pre>
<p>That list of coins in "balance" can be N rows long, but will contain the coin type, current balance, and total value of holdings at that time per coin in USD. Additionally, there can be up to M wallets.</p>
<p>This value is logged and stored daily, or per user execution. I am assembling transactions from the api data each day, and updating the records.</p>
<p>The problem is that I am not sure that storing it like that in postgres is efficient. I'm seeing some places online that the maximum size for a charvar is 1gb, but that seems heinously large and I'm not sure if that's correct.</p>
<p>Is there any other idea on how to store data in this format? Should I be storing records of those values and then querying from a different table? Should I store it individually per coin?</p>
<p>IE:</p>
<pre><code>account_id, timestamp, network, coin, balance, amount
z11ldsm3, 08-26-2023, coinbase, 'BTC', 1.23,32000
z11ldsm3, 08-26-2023, coinbase, 'USDC', 12500, 12500
</code></pre>
<p>Ultimately, this will be displayed as a table of values showing the balance for each wallet, in a similar format to the nested list from above.</p>
<p>Any advice is appreciated.</p>
| <python><database><postgresql><sqlalchemy> | 2023-08-28 18:38:54 | 1 | 324 | jd_h2003 |
76,995,151 | 11,098,263 | Error in PyTorch: mat1 and mat2 shapes cannot be multiplied | <p>I'm working on a PyTorch project and I want to generate MNIST images using a U-Net architecture combined with a DDPM (Diffusion Models) approach. I'm encountering the following error:
encountering the following error:</p>
<pre><code> File "C:\Users\zzzz\miniconda3\envs\ddpm2\Lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (35840x28 and 10x10)
</code></pre>
<p>This error is happening in the context of a self-attention mechanism within my U-NET. Here's the relevant part of the code:
Model.py:</p>
<pre><code>SelfAttention Class:
class SelfAttention(nn.Module):
def __init__(self, in_dim, out_dim):
super(SelfAttention, self).__init__()
print("in_dim:",in_dim)
print("out_dim:",out_dim)
self.query = nn.Linear(in_dim, out_dim)
self.key = nn.Linear(in_dim, out_dim)
self.value = nn.Linear(in_dim, out_dim)
self.softmax = nn.Softmax(dim=-1)
def forward(self, x):
# Calculate query, key, and value projections'
print("x_query:",x.shape)
query = self.query(x)
key = self.key(x)
value = self.value(x)
# Calculate scaled dot-product attention scores
print("query:",query.shape)
print("key:",key.shape)
print("value:",value.shape)
print("key.transpose(-2, -1):",key.transpose(-2, -1).shape)
scores = torch.matmul(query, key.transpose(-2, -1)) / torch.sqrt(key.size(-1))
# Apply softmax to get attention weights
attention_weights = self.softmax(scores)
# Calculate the weighted sum of values
print("attention_weights:",attention_weights.shape)
print("value:",value.shape)
output = torch.matmul(attention_weights, value)
return output
class MyBlockWithAttention(nn.Module):
def __init__(self, shape, in_c, out_c, kernel_size=3, stride=1, padding=1, activation=None, normalize=True):
super(MyBlockWithAttention, self).__init__()
self.ln = nn.LayerNorm(shape)
self.conv1 = nn.Conv2d(in_c, out_c, kernel_size, stride, padding)
self.attention = SelfAttention(out_c, out_c) # Add self-attention here
self.conv2 = nn.Conv2d(out_c, out_c, kernel_size, stride, padding)
self.activation = nn.SiLU() if activation is None else activation
self.normalize = normalize
def forward(self, x):
out = self.ln(x) if self.normalize else x
out = self.conv1(out)
print("before:",out.shape)
out = self.attention(out) # Apply self-attention
print("after:",out.shape)
out = self.activation(out)
out = self.conv2(out)
out = self.activation(out)
return out
</code></pre>
<p>U-NET:</p>
<pre><code>class MyUNet(nn.Module):
def __init__(self, n_steps=1000, time_emb_dim=100, in_c=1):
super(MyUNet, self).__init__()
# Sinusoidal embedding
self.time_embed = nn.Embedding(n_steps, time_emb_dim)
self.time_embed.weight.data = sinusoidal_embedding(n_steps, time_emb_dim)
self.time_embed.requires_grad_(False)
# First half
self.te1 = self._make_te(time_emb_dim, 1)
self.b1 = nn.Sequential(
MyBlockWithAttention((1, 28, 28), 1, 10),
MyBlockWithAttention((10, 28, 28), 10, 10),
MyBlockWithAttention((10, 28, 28), 10, 10)
)
self.down1 = nn.Conv2d(10, 10, 4, 2, 1)
self.te2 = self._make_te(time_emb_dim, 10)
self.b2 = nn.Sequential(
MyBlockWithAttention((10, 14, 14), 10, 20),
MyBlockWithAttention((20, 14, 14), 20, 20),
MyBlockWithAttention((20, 14, 14), 20, 20)
)
self.down2 = nn.Conv2d(20, 20, 4, 2, 1)
self.te3 = self._make_te(time_emb_dim, 20)
self.b3 = nn.Sequential(
MyBlockWithAttention((20, 7, 7), 20, 40),
MyBlockWithAttention((40, 7, 7), 40, 40),
MyBlockWithAttention((40, 7, 7), 40, 40)
)
self.down3 = nn.Sequential(
nn.Conv2d(40, 40, 2, 1),
nn.SiLU(),
nn.Conv2d(40, 40, 4, 2, 1)
)
# Bottleneck
self.te_mid = self._make_te(time_emb_dim, 40)
self.b_mid = nn.Sequential(
MyBlockWithAttention((40, 3, 3), 40, 20),
MyBlockWithAttention((20, 3, 3), 20, 20),
MyBlockWithAttention((20, 3, 3), 20, 40)
)
# Second half
self.up1 = nn.Sequential(
nn.ConvTranspose2d(40, 40, 4, 2, 1),
nn.SiLU(),
nn.ConvTranspose2d(40, 40, 2, 1)
)
self.te4 = self._make_te(time_emb_dim, 80)
self.b4 = nn.Sequential(
MyBlockWithAttention((80, 7, 7), 80, 40),
MyBlockWithAttention((40, 7, 7), 40, 20),
MyBlockWithAttention((20, 7, 7), 20, 20)
)
self.up2 = nn.ConvTranspose2d(20, 20, 4, 2, 1)
self.te5 = self._make_te(time_emb_dim, 40)
self.b5 = nn.Sequential(
MyBlockWithAttention((40, 14, 14), 40, 20),
MyBlockWithAttention((20, 14, 14), 20, 10),
MyBlockWithAttention((10, 14, 14), 10, 10)
)
self.up3 = nn.ConvTranspose2d(10, 10, 4, 2, 1)
self.te_out = self._make_te(time_emb_dim, 20)
self.b_out = nn.Sequential(
MyBlockWithAttention((20, 28, 28), 20, 10),
MyBlockWithAttention((10, 28, 28), 10, 10),
MyBlockWithAttention((10, 28, 28), 10, 10, normalize=False)
)
self.conv_out = nn.Conv2d(10, 1, 3, 1, 1)
def forward(self, x, t):
# x is (N, 2, 28, 28) (image with positional embedding stacked on channel dimension)
t = self.time_embed(t)
n = len(x)
print("before reshape t:", self.te1(t).shape)
print("this is x:",x.shape)
print("this is on yeki:", self.te1(t).reshape(n, -1, 1, 1).shape)
out1 = self.b1(x + self.te1(t).reshape(n, -1, 1, 1)) # (N, 10, 28, 28)
out2 = self.b2(self.down1(out1) + self.te2(t).reshape(n, -1, 1, 1)) # (N, 20, 14, 14)
out3 = self.b3(self.down2(out2) + self.te3(t).reshape(n, -1, 1, 1)) # (N, 40, 7, 7)
out_mid = self.b_mid(self.down3(out3) + self.te_mid(t).reshape(n, -1, 1, 1)) # (N, 40, 3, 3)
out4 = torch.cat((out3, self.up1(out_mid)), dim=1) # (N, 80, 7, 7)
out4 = self.b4(out4 + self.te4(t).reshape(n, -1, 1, 1)) # (N, 20, 7, 7)
out5 = torch.cat((out2, self.up2(out4)), dim=1) # (N, 40, 14, 14)
out5 = self.b5(out5 + self.te5(t).reshape(n, -1, 1, 1)) # (N, 10, 14, 14)
out = torch.cat((out1, self.up3(out5)), dim=1) # (N, 20, 28, 28)
out = self.b_out(out + self.te_out(t).reshape(n, -1, 1, 1)) # (N, 1, 28, 28)
out = self.conv_out(out)
return out
def _make_te(self, dim_in, dim_out):
return nn.Sequential(
nn.Linear(dim_in, dim_out),
nn.SiLU(),
nn.Linear(dim_out, dim_out)
)
</code></pre>
| <python><pytorch><conv-neural-network><self-attention> | 2023-08-28 18:13:22 | 0 | 626 | Zahra Hosseini |
76,994,906 | 5,224,236 | in Kedro, how to handle tar.gz archives from the web | <p>I have a tar.gz file that I am downloading from this link: <a href="http://ocelma.net/MusicRecommendationDataset/lastfm-1K.html" rel="nofollow noreferrer">http://ocelma.net/MusicRecommendationDataset/lastfm-1K.html</a></p>
<p>What is the best way to fully integrate this TSV data into kedro, perhaps with an API dataset first, and then a node to extract it?</p>
<p>Tar.gz files are not a default supported kedro dataset type.</p>
| <python><kedro> | 2023-08-28 17:32:44 | 1 | 6,028 | gaut |
76,994,902 | 1,471,980 | how do you subset data dict based on values in array in python | <p>I have this data dictionary:</p>
<pre><code>slots={'ibm':5, 'chev': 9, 'exx':10}
</code></pre>
<p>I need to be able to subset this slots dict based on values in array mod:</p>
<pre><code>mod=['ibm', 'exx']
</code></pre>
<p>I tried this</p>
<pre><code>dict((i, slots)[i] for i in mod if i in slots)
</code></pre>
<p>I get this error:</p>
<pre><code>TypeError: tuple indices must be integers or slices not str
</code></pre>
<p>any ideas?</p>
| <python> | 2023-08-28 17:32:25 | 2 | 10,714 | user1471980 |
76,994,815 | 3,768,871 | How to change starting cell order in a subplot in plotly? | <p>I have a subplot figure. As you know, by default, subplots are indexed from the top-left of the diagram. Yet, I need to change this indexing to the bottom-left. How can I do that?</p>
<p>The following figure represents the current version that I get from plotly (i.e., indexing started from top-left):</p>
<p><a href="https://i.sstatic.net/O5Wuu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O5Wuu.png" alt="enter image description here" /></a></p>
<p>The following figure indicates the desired version (i.e., subplots are indexed from bottom-left):</p>
<p><a href="https://i.sstatic.net/Kinfo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kinfo.png" alt="enter image description here" /></a></p>
| <python><plotly> | 2023-08-28 17:17:00 | 1 | 19,015 | OmG |
76,994,702 | 2,472,451 | How to compare (diff) two large CSV files where order does not matter | <p>I'm struggling with comparing (diff'ing) 2 large CSV files.</p>
<ul>
<li>The order of rows doesn't matter</li>
<li>I don't need to print the differences or anything, just a True or False.</li>
</ul>
<p>For example:</p>
<p>File 1</p>
<pre><code>a,b,c,d
e,f,g,h
i,j,k,l
</code></pre>
<p>File 2</p>
<pre><code>a,b,c,d
i,j,k,l
e,f,g,h
</code></pre>
<p>The above should pass comparison, even though the rows are not in the same order, the contents are identical.</p>
<p>Comparison should fail if contents differ, column values don't match, or a line exists in one but not another, etc.</p>
<p>The biggest issue I have is that the files are very large, and there is no key column to sort on. Files have 14 to 30 million rows, about 10 to 15 columns. A raw dump of the data, unsorted, comes to around 1gb csv file.</p>
<p>Right now I'm trying to sort the files and "diff" using the code below. The problem is that the "Sort" does not always work. With smaller files and less rows, the sort & diff works, but it doesn't seem to work with very large files.</p>
<p>Also, sorting adds significant operation time; ideally I would like to avoid sorting and just compare ignoring sort order, but I don't know how to do that.</p>
<p>filecmm, difflib and some other functions I tried all require pre-sorted files.</p>
<p>I'm performing a Python Merge Sort right now, but like I said, the sorting doesn't necessarily work on very large number of rows, and I'm hoping for a better way to compare.</p>
<p>This is the python merge sort function:</p>
<pre><code>def batch_sort(self, input, output, key=None, buffer_size=32000, tempdirs=None):
if isinstance(tempdirs, str):
tempdirs = tempdirs.split(",")
if tempdirs is None:
tempdirs = []
if not tempdirs:
tempdirs.append(gettempdir())
chunks = []
try:
with open(input,'rb',64*1024) as input_file:
input_iterator = iter(input_file)
for tempdir in cycle(tempdirs):
current_chunk = list(islice(input_iterator,buffer_size))
if not current_chunk:
break
current_chunk.sort(key=key)
output_chunk = open(os.path.join(tempdir,'%06i'%len(chunks)),'w+b',64*1024)
chunks.append(output_chunk)
output_chunk.writelines(current_chunk)
output_chunk.flush()
output_chunk.seek(0)
with open(output,'wb',64*1024) as output_file:
output_file.writelines(self.merge(key, *chunks))
finally:
for chunk in chunks:
try:
chunk.close()
os.remove(chunk.name)
except Exception:
pass
</code></pre>
<p>I can call batch_sort(), give it an input file and output file, size of chunks, and the temporary directory to use.</p>
<p>Once I perform batch_sort() on both files, I can just "diff file1 file2".</p>
<p>The above works with 25,000 to 75,000 rows, but not when we're talking in excess of 14 million rows.</p>
| <python><compare><diff><large-files> | 2023-08-28 16:57:50 | 3 | 635 | luci5r |
76,994,608 | 16,236,118 | How can I use sklearn's make_column_selector to select all valid datetime columns? | <p>I want to select columns based on their <em>datetime</em> data types. My DataFrame has for example columns with types <code>np.dtype('datetime64[ns]')</code>, <code>np.datetime64</code> and <code>'datetime64[ns, UTC]'</code>.</p>
<p>Is there a <em><strong>generic</strong></em> way to select all columns with a datetime datatype?</p>
<hr />
<p>For instance, this works:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.compose import make_column_selector
selector = make_column_selector(dtype_include=(np.dtype('datetime64[ns]'),np.datetime64))
selected_columns = selector(df)
</code></pre>
<hr />
<p>But this doesn't (datatype from a pandas df with 'UTC'):</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.compose import make_column_selector
selector = make_column_selector(dtype_include=np.dtype('datetime64[ns, UTC]')
selected_columns = selector(df)
</code></pre>
<hr />
<p>Compared to numeric data types where you can simply use <code>np.number</code> instead of np.int64 etc.</p>
<p>API-Reference to <code>make_column_selector</code>: <a href="https://scikit-learn.org/stable/modules/generated/sklearn.compose.make_column_selector.html" rel="nofollow noreferrer">LINK</a></p>
| <python><datetime><scikit-learn><scikit-learn-pipeline> | 2023-08-28 16:40:27 | 3 | 1,636 | JAdel |
76,994,448 | 2,954,547 | Representing a time interval with one bound set to "infinity" | <p>For <code>float</code>-based dtypes, it's straightforward to create an <code>IntervalIndex</code> with "infinity" at one or both ends:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
intervals = pd.IntervalIndex.from_breaks([-np.inf, -100, 0, 100, np.inf], closed="left")
print(intervals)
</code></pre>
<pre class="lang-none prettyprint-override"><code>IntervalIndex([[-inf, -100.0), [-100.0, 0.0), [0.0, 100.0), [100.0, inf)], dtype='interval[float64, left]')
</code></pre>
<p>This is convenient, because now <em>any</em> real number (or its floating-point approximation) less than <code>-100</code> will be considered within the <code>[-inf, -100.0)</code> interval.</p>
<p>I now want to do the same for a <code>datetime64</code> interval index. That is, I want to create a leftmost and/or rightmost interval within the index that will represent <em>all</em> possible times in the past or future.</p>
<p>Naively I tried something like this:</p>
<pre class="lang-py prettyprint-override"><code>import pendulum
intervals = pd.IntervalIndex.from_breaks([-np.inf, pendulum.datetime(2020, 1, 1), pendulum.datetime(2022, 1, 1), np.inf])
</code></pre>
<p>but unsurprisingly it didn't work:</p>
<pre class="lang-none prettyprint-override"><code>TypeError: category, object, and string subtypes are not supported for IntervalIndex
</code></pre>
<p>Is there some correct (and hopefully "dtype-native") way to represent such an interval?</p>
<p>I know that as a workaround, I can set a time very far in the past or future, as an approximation of infinity. Is that my only option here?</p>
| <python><pandas> | 2023-08-28 16:13:27 | 2 | 14,083 | shadowtalker |
76,994,428 | 5,561,472 | Why `limit_to_last` not working in Firestore | <p>When running the code below I get the first item not the last. Why is that?</p>
<pre class="lang-py prettyprint-override"><code>async def example():
db: google.cloud.firestore.AsyncClient = AsyncClient()
await db.document("1/1").set({"a": "1"})
await db.document("1/2").set({"a": "2"})
last = (
await (
db.collection("1")
.order_by("a")
.limit_to_last(1) # <= not working - getting the first not last
.get()
)
)[0].to_dict()
first = (
await (
db.collection("1")
.order_by("a")
.limit(1)
.get()
)
)[0].to_dict()
print(first, last) # {'a': '1'} {'a': '1'}
asyncio.run(example())
</code></pre>
<p>The workaround is the use of descending ordering with <code>limit()</code>. But I don't understand - what is the purpose of the <code>limit_to_last()</code> function?</p>
| <python><google-cloud-platform><google-cloud-firestore> | 2023-08-28 16:10:33 | 1 | 6,639 | Andrey |
76,994,241 | 16,030,430 | How to calculate signal to noise ratio in frequency domain with given cutoff frequency? | <p>I am pretty new to signal processing and I want to calculate the signal to noise ratio (SNR) from a signal in frequency domain.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# create signal in time domain
N = 1000
T = 0.05
x = np.linspace(0, N*T, N)
y = 10 * np.sin(0.1 * np.pi * x)
noise = 2* np.random.normal(size=len(x))
y += noise
# signal in frequency domain
N = len(y)
x_f = np.fft.rfftfreq(N, d=T)
y_f = np.fft.rfft(y)
# normalize
x_f = x_f / max(x_f)
y_f = np.abs(y_f) / max(np.abs(y_f))
</code></pre>
<p>The idea is to use a cutoff frequency (e.g. 0.3) to divide the signal in a useful signal (left side) and noise signal (right side) and calculate the SNR. But how can I accomplish this and is this the right approach?</p>
| <python><numpy><signal-processing><fft> | 2023-08-28 15:38:31 | 1 | 720 | Dalon |
76,994,134 | 5,599,687 | AWS CDK glue-alpha Job: How to import module in `extraPythonFiles`? | <p>I'm using AWS CDK to create a Glue Job. Following this documentation (<a href="https://docs.aws.amazon.com/cdk/api/v2/docs/@aws-cdk_aws-glue-alpha.PythonSparkJobExecutableProps.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/cdk/api/v2/docs/@aws-cdk_aws-glue-alpha.PythonSparkJobExecutableProps.html</a>) I found that it's possible to provide extra Python files to the job, which is useful to me because I want to import a module that I have as a python file.
As a note, I'm using the <code>@aws-cdk/aws-glue-alpha</code> library because it allows the definition of a job from a code asset.</p>
<p>This is my job definition in CDK:</p>
<pre class="lang-js prettyprint-override"><code> new glueAlpha.Job(this, id + '-aggregate', {
jobName: id + '-aggregate',
executionClass: glueAlpha.ExecutionClass.FLEX,
sparkUI: {
enabled: true,
},
role: iam.Role.fromRoleName(this, id, 'service-role/AWSGlueServiceRole'),
workerCount: 2,
workerType: glueAlpha.WorkerType.G_1X,
defaultArguments: {
'--environment_tag': env_tag,
'--s3_bucket': dataBucket.bucketName,
},
executable: glueAlpha.JobExecutable.pythonEtl({
glueVersion: glueAlpha.GlueVersion.V3_0,
pythonVersion: glueAlpha.PythonVersion.THREE,
script: glueAlpha.Code.fromAsset(glueSourceCode + '/data_quality.py'),
extraPythonFiles: [glueAlpha.Code.fromAsset(sourceCode + '/my_module.py')],
}),
});
</code></pre>
<p>The deployment is successful, and I can see the file in two places:</p>
<ul>
<li>as an S3 file, from the job details in the GUI;</li>
<li>as a temporary file in the Job filesystem, running the following from the script</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import sys
import os
for p in sys.path:
try:
print(p, ':', os.listdir(p))
except BaseException as e:
print(e)
localPyFilesDir = [d for d in sys.path if d.startswith('/tmp/localPyFiles')][0]
pyFile = os.listdir(localPyFilesDir)[0]
with open(localPyFilesDir + '/' + pyFile, 'r') as f:
print(f.read())
</code></pre>
<p>The names for these files are randomly generated, for instance the temporary file is <code>/tmp/localPyFiles-c1f3490d-fc78-4c0f-9508-ab56cf172ba8/60c153050f78fd7e06e53a4326f4f7423c7ad2747e6f865ffd56d9ba57a002e2.py</code>.</p>
<hr />
<p>My question is: how can I import this file as a python module from the script? I guess there is a way to do it without knowing the temporary path, because it looks very involved and it would surprise me that there is no easier way even to get the path to the file.</p>
| <python><amazon-web-services><aws-glue> | 2023-08-28 15:24:49 | 1 | 751 | Gabriele |
76,994,042 | 3,247,006 | --headless vs --headless=chrome vs --headless=new in Selenium | <p>I'm learning <strong>Selenium</strong> with Django and Google Chrome. *I use <strong>Selenium 4.11.2</strong>.</p>
<p>Then, I tested with <code>--headless</code>, <code>--headless=chrome</code> and <code>--headless=new</code> as shown below, then all work properly:</p>
<pre class="lang-py prettyprint-override"><code>from django.test import LiveServerTestCase
from selenium import webdriver
class TestBrowser(LiveServerTestCase):
def test_example(self):
options = webdriver.ChromeOptions()
options.add_argument("--headless") # Here
driver = webdriver.Chrome(options=options)
driver.get(("%s%s" % (self.live_server_url, "/admin/")))
assert "Log in | Django site admin" in driver.title
</code></pre>
<pre class="lang-py prettyprint-override"><code>from django.test import LiveServerTestCase
from selenium import webdriver
class TestBrowser(LiveServerTestCase):
def test_example(self):
options = webdriver.ChromeOptions()
options.add_argument("--headless=chrome") # Here
driver = webdriver.Chrome(options=options)
driver.get(("%s%s" % (self.live_server_url, "/admin/")))
assert "Log in | Django site admin" in driver.title
</code></pre>
<pre class="lang-py prettyprint-override"><code>from django.test import LiveServerTestCase
from selenium import webdriver
class TestBrowser(LiveServerTestCase):
def test_example(self):
options = webdriver.ChromeOptions()
options.add_argument("--headless=new") # Here
driver = webdriver.Chrome(options=options)
driver.get(("%s%s" % (self.live_server_url, "/admin/")))
assert "Log in | Django site admin" in driver.title
</code></pre>
<p>My questions:</p>
<ol>
<li>What is the difference between <code>--headless</code>, <code>--headless=chrome</code> and <code>--headless=new</code>?</li>
<li>Which should I use, <code>--headless</code>, <code>--headless=chrome</code> or <code>--headless=new</code>?</li>
</ol>
| <python><django><google-chrome><selenium-webdriver><google-chrome-headless> | 2023-08-28 15:10:55 | 2 | 42,516 | Super Kai - Kazuya Ito |
76,993,940 | 5,838,180 | How to properly search and replace in python some rows in a file (with fileinput)? | <p>I have a file <code>test_file.py</code> that I would like to alter with a code, more precisely to search and replace particular lines in this file, but leave all other lines to be preserved in their original form. The simplified form of the file is something like this:</p>
<pre><code>line1 = 'abc'
line2 = 'abc'
line3 = 'abc'
line4 = 'abc'
</code></pre>
<p>I am using the following simplified snippet to make search-and-replace changes to the file only in lines <code>line1</code> and <code>line3</code>:</p>
<pre><code>import fileinput
for line in fileinput.input('test_file.py', inplace=True):
if line.startswith('line1 ='):
line = line.replace(line, "line1 = 'ABC'")
sys.stdout.write(line)
sys.stdout.write('\n')
if line.startswith('line3 ='):
line = line.replace(line, "line3 = 'ABC'")
sys.stdout.write(line)
sys.stdout.write('\n')
else:
sys.stdout.write(line)
</code></pre>
<p>The expected resulting file <code>test_file.py</code> should look something like this:</p>
<pre><code>line1 = 'ABC'
line2 = 'abc'
line3 = 'ABC'
line4 = 'abc'
</code></pre>
<p>Instead, the result looks like:</p>
<pre><code>line1 = 'ABC'
line1 = 'ABC'line2 = 'abc'
line3 = 'ABC'
line4 = 'abc'
</code></pre>
<p>What am I doing wrong and how can I achieve the expected output? Alternative solutions that don't make use of the <code>fileinput</code> package are also welcome. Tnx!</p>
| <python><file><replace><printing> | 2023-08-28 14:56:40 | 0 | 2,072 | NeStack |
76,993,937 | 2,622,523 | SQLAlchemy count of related items equal to filtered items | <p><strong>Tl; Dr:</strong> Need to select SqlAlchemy models, which has the same count of filtered related models, as the full count of related models.</p>
<p>I have a system, where administrator can create "filter groups", each of that can contain multiple "filters". Each "filter" has lots of fields, that are used to compare it against an incoming data. Filters inside a group could be joined by OR or AND operator, and that is specified in filter group.</p>
<p>It is look similar to some email filtering functionality (like in GMail), where you can create filters that are checking against incoming email (like text in subject, body, or 'from' field), end results of that check is combined using 'AND' or 'OR'</p>
<p>My problem is: I don't know how to write final query in SqlAlchemy after all the filters already executed and succesful ones are gathered.</p>
<p>My simplified models:</p>
<pre class="lang-py prettyprint-override"><code>class FilterGroup(DeclarativeBase):
__tablename__ = 'filter_groups'
id: Mapped[int] = mapped_column(primary_key=True, nullable=False)
combine_by: Mapped[str] = mapped_column(nullable=False, doc='AND / OR')
filters: Mapped[List['Filter']] = relationship(
back_populates='filter_group', lazy='joined', cascade='all, delete'
)
... other useful fields ...
class Filter(DeclarativeBase):
__tablename__ = 'filters'
id: Mapped[int] = mapped_column(primary_key=True, nullable=False)
filter_group_id: Mapped[int] = mapped_column(ForeignKey('filter_groups.id', ondelete='CASCADE'))
filter_group: Mapped['FilterGroup'] = relationship(back_populates='filters')
... lots of fields to filter on ...
</code></pre>
<p>I'm creating complex sqlalchemy query, what in result looks like:</p>
<pre class="lang-py prettyprint-override"><code>filters_query = sqlalchemy.select(
Filter.id, Filter.filter_group_id
).where(*where_filters) # where_filters is a list with lots of filtering rules
</code></pre>
<p>So, this query returns all the successfully matched filters, and now I want to select FilterGroups what corresponding to that filters. When <code>combine_by</code> is set to <code>OR</code> - I need just one filter matched to take it's group, so I'm usign EXISTS. But when <code>combine_by</code> is <code>AND</code> - I need to have the same number of selected filters as there are related to the Group (so, all the filters should be matched)</p>
<p>What I have:</p>
<pre><code>filter_groups_query = sqlalchemy.select(FilterGroup).where(
or_(
# This works!
and_(
FilterGroup.combine_by == 'OR',
exists().where(
FilterGroup.id == filters_query.subquery().c.filter_group_id,
),
),
# And this does not work :(
and_(
FilterGroup.combine_by == 'AND',
sqlalchemy.select(func.count()).filter(Filter.filter_group_id == FilterGroup.id)
== sqlalchemy.select(func.count()).filter(
filter_query.subquery().c.filter_group_id == FilterGroup.id
),
),
)
)
</code></pre>
<p>Second block got converted into "False" in SQL :(</p>
<p>That is my last try, but I've tried a lot of different variants, what didn't work.</p>
<p>I know how to write this in PostgreSQL, and that's quite easy:</p>
<pre class="lang-sql prettyprint-override"><code>WITH filters_query as (
select id, filter_groups_id from (...filter_query...)
)
SELECT * from filter_groups WHERE
(
filter_groups.combine_by = 'OR'
AND
exists (select 1 from filters_query where filter_group_id = filter_groups.id)
)
OR
(
filter_groups.combine_by = 'AND'
AND
(select count(*) from filters where filter_group_id = filter_groups.id)
=
(select count(*) from filters_query where filter_group_id = filter_groups.id)
)
</code></pre>
<p>But I need to create sqlalchemy query, not the raw request, so I'm stuck.</p>
| <python><postgresql><sqlalchemy> | 2023-08-28 14:56:28 | 1 | 2,092 | MihanEntalpo |
76,993,923 | 6,717,168 | Qt Application crashs silently after couple hours running | <p>I have been struggling with debugging this for a week now, it is a fairly big project but I think I have narrowed down what's causing the crash to a few possibilities.</p>
<p>output before crash :</p>
<blockquote>
<p>This function's name is:check {'type': 'DISPO OCP', 'nom': 'AVAMYS 27,5 microgrammes/pulvérisation, suspension pour pulvérisation nasale', 'cip13': '3400938322446', 'quantité': 'N/A'}
start of slot
received :
end of slot
requete ocp n°1398
requete ocp n°1399
requete ocp n°1400
QThread: Destroyed while thread is still running</p>
</blockquote>
<p>Application has a QMainWindow and a worker thread, worker thread fetches data on the internet and treats that data then emit a signal for the QMainWindow slot to catch. Worker thread is designed to run forever.</p>
<pre><code>class MainWindowWrapper(Ui_MainWindow, QtWidgets.QMainWindow):
def __init__(self) -> None:
super().__init__()
self.setupUi(self)
self.Set_Table_View()
</code></pre>
<p>Ui_MainWindow is a py file generated by Qt Designer.</p>
<pre><code>def Set_Table_View(self):
self.model_table_view = QtGui.QStandardItemModel()
self.model_table_view.setHorizontalHeaderLabels(['Type', 'Nom', 'CIP 13', 'Quantité', "Horodatage"])
#Setup middle filter
self.proxyModel_combobox = QSortFilterProxyModel()
self.proxyModel_combobox.setSourceModel(self.model_table_view)
self.proxyModel_combobox.setFilterKeyColumn(0)
#setup filter
self.Table_proxyModel = MultiColumnFilterProxyModel()
self.Table_proxyModel.setSourceModel(self.proxyModel_combobox)
self.Table_proxyModel.setFilterKeyColumn(0)
self.tableView.setModel(self.Table_proxyModel)
#set largeurs colonnes
self.tableView.setColumnWidth(0, 90)
self.tableView.setColumnWidth(1, 350)
self.tableView.setColumnWidth(2, 90)
self.tableView.setColumnWidth(3, 50)
self.tableView.setColumnWidth(4, 150)
</code></pre>
<p>This is how I start the worker thread:</p>
<pre><code> def HandleBot(self):
self.lock = threading.Lock()
self.bot_instance = Bot(self.lock)
self.thread_handle = QThread(parent = self)
self.bot_instance.moveToThread(self.thread_handle)
self.bot_instance.log_text_signal.connect(self.add_text_to_plaintext)
self.thread_handle.started.connect(self.bot_instance.call_check)
self.thread_handle.start()
return
</code></pre>
<p>Slot definition :</p>
<pre><code>@QtCore.pyqtSlot(int, int, int, int)
def add_text_to_plaintext(self, int, int2, int3, int4):
print("start of slot")
debg = "received : "
print(debg)
item_list = [QtGui.QStandardItem(str(int)), QtGui.QStandardItem(str(int2)),
QtGui.QStandardItem(str(int3)), QtGui.QStandardItem(str(int4)),
QtGui.QStandardItem(str(datetime.datetime.now().replace(microsecond=0)))]
self.model_table_view.appendRow(item_list)
print("end of slot")
</code></pre>
<p>If i replace the item_list by this :</p>
<pre><code>item_list = QtGui.QStandardItem()
</code></pre>
<p>Then application doesn't crash even though signals and slot still are emitted and the tableview is being updated by empty rows.
This is the bot definition :</p>
<pre><code>class Bot(QObject):
log_text_signal = pyqtSignal(int, int, int, int)
def __init__(self, lock : threading.Lock):
super().__init__()
self.lock = lock
self.keep_bot_running = True
self.numberExec = 0
return
def call_check(self):
pydevd.settrace(suspend=False)
While True:
.....do stuff....
self.numberExec += 1
print('requete ocp n°' + str(self.numberExec))
debg = "This function's name is:" + self.current_function_name() + " " + str(info)
print(debg)
with self.lock:
self.log_text_signal.emit(1, 0, 0, 0)
</code></pre>
<p>For days I thought I was handling signals and slots across thread the wrong way but I couldn't find the culprit, would appreciate if someone can give me clues on what really is causing the silent crahes.</p>
| <python><pyqt6> | 2023-08-28 14:54:54 | 1 | 952 | user |
76,993,790 | 3,572,950 | Compare linear and binary searches | <p>I want to compare theoretical and practical ratios of linear search and binary search. It will be clear from my code, i hope:</p>
<pre><code>import math
import random
import timeit
def linear_search(alist: list[int], elem: int) -> int | None:
for item in alist:
if item == elem:
return elem
return None
def binary_search(alist: list[int], elem: int) -> int | None:
lo, hi = 0, len(alist) - 1
while lo <= hi:
mid = (lo + hi) // 2
if elem == alist[mid]:
return elem
elif elem < alist[mid]:
hi = mid - 1
else:
lo = mid + 1
return None
def compare_searches():
# just some big value
N = 100_000
# just generate some sorted list
alist = [x for x in range(N)]
# elem = random.randrange(N)
# just some value to trigger the worst case scenarios during searches
elem = -42
# searches should give the same result
assert linear_search(alist, elem) == binary_search(alist, elem)
linear_time_avg = timeit.timeit(lambda: linear_search(alist, elem), number=10000)
binary_time_avg = timeit.timeit(lambda: binary_search(alist, elem), number=10000)
print("theoretical (complexities ratio): ", N / math.log2(N))
print("practical (times ratio): ", linear_time_avg / binary_time_avg)
if __name__ == "__main__":
compare_searches()
</code></pre>
<p>So this code produces:</p>
<pre><code>theoretical (complexities ratio): 6020.599913279624
practical (times ratio): 898.7857719144624
</code></pre>
<p>I understand, that there should be some difference between complexities ratio and times ratio, but why is it so big? Like 6 times greater? I thought, that ratios will be roughfly the same tbh.</p>
| <python><algorithm> | 2023-08-28 14:39:27 | 1 | 1,438 | Alexey |
76,993,789 | 9,112,151 | SqlAlchemy base model in type hintings? | <p>In code below what should be instead of <code>???</code>? Should it be <code>Base</code>? But it looks strange for me.</p>
<pre><code>import sqlalchemy as sa
from sqlalchemy.orm import declarative_base
metadata = sa.MetaData()
Base = declarative_base(metadata=metadata)
class User(Base):
__tablename__ = 'user'
first_name = ...
last_name = ...
# another models
class Repository:
# base repository
model: ???
def create(self, **create_data) -> ???:
pass
def get(self, pk) -> ???
pass
class UserRepository(Repository):
pass
# another repositories
</code></pre>
| <python><sqlalchemy><python-typing> | 2023-08-28 14:39:22 | 0 | 1,019 | Альберт Александров |
76,993,767 | 14,624,039 | why do we need `pip install -e`? | <p>After asking chatGPT and searching for some related questions, I still didn't understand why we needed <code>pip install -e</code></p>
<p>It's said that with this command, we can install Python packages in <strong>editable</strong> mode so that any changes we make to the cloned package source code in the local directory will be reflected at once when we import that package.</p>
<p>But to my understanding, even when we merely clone a Python package, the changes we made will also be reflected at once when we import that library in our code through <code>import library</code>, so why exactly do we need <strong>install in editable mode</strong>?</p>
| <python><pip> | 2023-08-28 14:37:00 | 1 | 432 | Arist12 |
76,993,635 | 11,328,614 | Python itertools.zip_longest with mutable fillvalue | <p>In a code that evaluates a web response, I would like to zip elements from several lists. However, the elements of the iterators are <code>dict</code>'s.
Therefore, I would like to fill up the missing values also with <code>dict</code>'s, but each generated element should have it's own <code>dict</code> instance.</p>
<p>The following code groups elements from each list by <code>itertools.zip_longest</code>.
As long as there is a non mutable <code>fillvalue</code> specified, there is no problem.</p>
<pre class="lang-py prettyprint-override"><code>import collections
import itertools
l1 = [{"a": 100}, {"b": 200}, {"c": 300}]
l2 = [{"d": 400}]
ll = list(itertools.zip_longest(l1, l2, fillvalue=0))
print(ll)
</code></pre>
<p>-> <code>[({'a': 100}, {'d': 400}), ({'b': 200}, 0), ({'c': 300}, 0)]</code></p>
<p>Now, when a mutable <code>fillvalue</code> is specified, all the <code>fillvalue</code>'s share the same instance and so changing one, changes all:</p>
<pre class="lang-py prettyprint-override"><code>import collections
import itertools
l1 = [{"a": 100}, {"b": 200}, {"c": 300}]
l2 = [{"d": 400}]
ll = list(itertools.zip_longest(l1, l2, fillvalue=dict()))
ll[1][1]["x"] = 150
print(ll)
</code></pre>
<p>-> <code>[({'a': 100}, {'d': 400}), ({'b': 200}, {'x': 150}), ({'c': 300}, {'x': 150})]</code></p>
<p>To prevent that all the dicts share the same instance I used <code>copy.deepcopy</code>:</p>
<pre class="lang-py prettyprint-override"><code>import collections
import copy
import itertools
l1 = [{"a": 100}, {"b": 200}, {"c": 300}]
l2 = [{"d": 400}]
ll = list(itertools.zip_longest(l1, l2, fillvalue=copy.deepcopy(dict())))
ll[1][1]["x"] = 150
print(ll)
</code></pre>
<p>-> <code>[({'a': 100}, {'d': 400}), ({'b': 200}, {'x': 150}), ({'c': 300}, {'x': 150})]</code></p>
<p>As a result, still all <code>dict</code>'s from the <code>fillvalue</code> share the same instance.</p>
<p>I would like to add that <code>ll = [item or dict() for item in itertools.zip_longest(l1, l2)]</code> works neither, assuming a fillvalue of <code>None</code>.</p>
<p>So, how can I make each <code>fillvalue</code> unique?</p>
| <python><python-3.x><python-itertools><default-value><mutable> | 2023-08-28 14:19:05 | 3 | 1,132 | Wör Du Schnaffzig |
76,993,621 | 13,682,080 | generate fake-valid data with pydantic | <p>I would like to create automated examples of valid data based on my <code>pydantic</code> models. How can I do this?</p>
<p>Example:</p>
<pre><code>import pydantic
from typing import Any
class ExampleData(pydantic.BaseModel):
a: int
b: str = pydantic.Field(min_length=10, max_length=10)
@staticmethod
def example() -> dict[str, Any]:
# some logic
return {}
a.example()
"""Returns
{
"a": 1,
"b": "0123456789"
}
"""
</code></pre>
<p>P.S. I suspect that <code>pydantic</code> provides this functionality because <code>fastapi</code> generates sample data, but I'm not sure if this is exactly its functionality and I couldn't find such a method. Can any one help me understand this?</p>
| <python><pydantic> | 2023-08-28 14:17:06 | 4 | 542 | eightlay |
76,993,586 | 3,846,286 | Can't figure out the time complexity of function | <p>I wanted a function that, given a list of integers <code>num</code>, would return a list of the <code>k</code> most frequent elements in it. I wrote the following function</p>
<pre><code> def topKFrequent(self, nums: List[int], k: int) -> List[int]:
freqs = {} # num:freq
# time complexity of this loop is O(N), where N is the size of nums
# space complexity of this loop is O(N)
for num in nums:
if num in freqs:
freqs[num] += 1
else:
freqs[num] = 1
# this loop has the same time and space complexity as the previous one
inverse_freqs = {} #freq:num
for num in freqs.keys():
if freqs[num] in inverse_freqs:
inverse_freqs[freqs[num]].append(num)
else:
inverse_freqs[freqs[num]] = [num]
sorted_freqs = list(inverse_freqs.keys())
# this sort is the part which I don't understand
sorted_freqs.sort()
l = []
i = 0
#these 2 loops together are O(N)
while len(l) < k:
for most_freq_element in inverse_freqs[sorted_freqs[-i-1]]:
l.append(most_freq_element)
i += 1
return l
</code></pre>
<p>Let's call the number of different frequencies <code>F</code>. The sorting is obviously <code>O(F * log F)</code></p>
<p>The <code>sorted_freqs</code> list is biggest in proportion to <code>N</code> when <code>nums</code> is something like [1, 2,2,3,3,3,4,4,4,4,5,5,5,5,5....]. The question I have is how to figure out how <code>F</code> grows in proportion to <code>N</code>, and then calculate the time complexity from there.</p>
| <python><time-complexity><computer-science> | 2023-08-28 14:11:34 | 1 | 651 | chilliefiber |
76,993,497 | 17,973,966 | How can I use Pandera to assert whether a column has one of multiple data types? | <p>My Pandas dataframes need to adhere to the following Pandera schema:</p>
<pre><code>import pandera as pa
from pandera.typing import Series
class schema(pa.SchemaModel):
name: Series[str]
id: Series[str]
</code></pre>
<p>However, in some dataframe instances, the "id" column will only contain integers and thus will get the "int" datatype when using <code>pd.read_csv()</code>.</p>
<p>For example, I have the following dataframe:</p>
<p><a href="https://i.sstatic.net/OqUf2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OqUf2.png" alt="example of a dataframe containing columns "name" and "id" with three rows, where "id" is always an integer" /></a></p>
<p>When I run <code>schema(df).validate()</code> I get the error: <code>pandera.errors.SchemaError: expected series 'id' to have type str, got int64</code></p>
<p>However, in other cases the dataframe might look something like this:</p>
<p><a href="https://i.sstatic.net/5oi6K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5oi6K.png" alt="example of a dataframe containing columns "name" and "id" with three rows, where "id" is sometimes a string" /></a></p>
<p>I would like to account for both situations by allowing the column to be one of both datatypes.</p>
<p>This is what I tried (but it doesn't seem to be the correct syntax, as the validation method won't run):</p>
<pre><code>import pandera as pa
from pandera.typing import Series
from typing import Union
class schema(pa.SchemaModel):
name: Series[str]
id: Union[Series[str], Series[int]]
</code></pre>
<p>Is there any way to do this in Pandera?</p>
| <python><pandas><pydantic><typing><pandera> | 2023-08-28 14:01:05 | 1 | 472 | Neele22 |
76,993,464 | 3,911,443 | Running Ray in docker | <p>I'm trying to run <code>ray</code> in a docker image that looks like this:</p>
<pre><code>FROM python:3.10.5-bullseye
RUN pip install -U "ray[default]==2.6.3"
...
CMD ["ray", "start", "--head", "--block", "--include-dashboard=false"]
</code></pre>
<p>After starting the container as a first test I want to run a sample job inside of the docker container.</p>
<p>So from a shell in the docker container this is what I've tried:</p>
<pre><code># 1
ray job submit --working-dir . -- python scripts/verify_ray.py
# 2
RAY_ADDRESS='http://127.0.0.1:8265' ray job submit --working-dir . -- python scripts/verify_ray.py
# 3
RAY_ADDRESS='http://192.168.65.4:8265' ray job submit --working-dir . -- python scripts/verify_ray.py # I got this addr from the log ie "Local node IP: 192.168.65.4"
</code></pre>
<p>Which all had a connection errors:</p>
<pre><code>raise ConnectionError(
ConnectionError: Failed to connect to Ray at address: http://127.0.0.1:8265.
raise ConnectionError(
ConnectionError: Failed to connect to Ray at address: http://127.0.0.1:8265.
raise ConnectionError(
ConnectionError: Failed to connect to Ray at address: http://192.168.65.4:8265.
</code></pre>
<p>Logs and <code>ray status</code> look good:</p>
<pre><code># ray status
======== Autoscaler status: 2023-08-28 13:43:03.538505 ========
Node status
---------------------------------------------------------------
Healthy:
1 node_47d1ee6b471a183a6de153cda4bb81dfb8bf4a0ad174c4bac0ed9010
Pending:
(no pending nodes)
Recent failures:
(no failures)
Resources
---------------------------------------------------------------
Usage:
0.0/8.0 CPU
0B/3.30GiB memory
0B/1.65GiB object_store_memory
Demands:
(no resource demands)
</code></pre>
<p>Anyone have any ideas what I missing?</p>
| <python><ray> | 2023-08-28 13:56:30 | 2 | 390 | pcauthorn |
76,993,409 | 1,818,059 | stretch draw text with Python Wand | <p>I am exploring how to get some image manipulation done with Python Wand.</p>
<p>I would like to know how to do a "stretch draw": fit a text with the boundaries of a certain box. In Delphi, this would be done drawing on a canvas, then do a copy onto another canvas.</p>
<p>The process in Python Wand is probably similar, but I have not been able to find the solution yet.
Given that I have the following script to create a bunch of identical images with a number:</p>
<pre class="lang-py prettyprint-override"><code>from wand.image import Image
from wand.drawing import Drawing
from wand.color import Color
imagewidth=1800
imageheight=600
def makeImage(mynr):
myimage = Image(
width=imagewidth,
height=imageheight,
background=Color("white")
)
with Drawing() as draw:
draw.font="Arial"
draw.font_size=500
draw.gravity="center"
draw.text(0,0,f"{mynr:04d}")
draw(myimage)
# -- need to have 1 bit BMP images
myimage.depth = 2
myimage.type="bilevel"
myimage.save(filename=f'test{mynr:02d}.bmp', )
myimage = None
print( mynr ) # to see progress
def do_numbers():
for i in range(30):
makeImage(i+1)
# ===============
if __name__ == "__main__":
do_numbers()
</code></pre>
<p>I get the following which is drawn OK in the right image format:</p>
<p><a href="https://i.sstatic.net/hPUlD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hPUlD.png" alt="Text as drawn by current script" /></a></p>
<p>My goal is to have something like this, keeping image size. Text is stretched to fit image.</p>
<p><a href="https://i.sstatic.net/6TOcm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6TOcm.png" alt="Text as desired, stretched to fit" /></a></p>
<p>(Note: images are scaled to 1/10 for clarity here, so won't match script output)</p>
| <python><wand> | 2023-08-28 13:49:43 | 1 | 1,176 | MyICQ |
76,993,308 | 1,230,724 | Understanding Python's memory allocation | <p>I'm trying to track down a suspected memory leak in a Python application which uses numpy and pandas as two of the main libraries. I can see that the application uses more and more memory over time (as DataFrames are processed).</p>
<p>Memory consumption per processing iteration (in MB):
<a href="https://i.sstatic.net/orMbT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/orMbT.png" alt="Memory consumption per processing iteration (in MB)" /></a></p>
<p>I'd like to understand what the memory is used for. I therefore used <code>tracemalloc</code>, but I'm struggling to consolidate the output of <code>tracemalloc</code>. The program calls <code>tracemalloc.start()</code> and <code>snapshot = tracemalloc.take_snapshot()</code> after executing the suspicious code.</p>
<p>The stats is printed with:</p>
<pre><code>gc.collect()
snapshot = tracemalloc.take_snapshot()
for i, stat in enumerate(snapshot.statistics('filename'), 1):
print('top_current', i, str(stat))
</code></pre>
<p>When I add up the <code>size</code> portion of the many (200-300) lines of e.g. <code>top_current 11 <frozen importlib._bootstrap>:0: size=71.7 KiB, count=769, average=95 B</code> (or similar), I get a few Megabytes as result. However the application is consuming memory in the Gigabyte range. Am I missing anything? How can I accurately see which objects reside in memory (and pose a potential memory leak)?</p>
<p>I also ran a set with <code>PYTHONMALLOCSTATS=1</code>. That test revealed an increased usage of Python memory arenas over the run of the program.</p>
<pre><code>Small block threshold = 512, in 32 size classes.
class size num pools blocks in use avail blocks
----- ---- --------- ------------- ------------
0 16 32 7879 217
1 32 419 52669 125
2 48 1366 114740 4
3 64 4282 269734 32
4 80 2641 132005 45
5 96 904 37955 13
6 112 611 21971 25
7 128 518 16044 14
8 144 1728 48364 20
9 160 238 5931 19
10 176 3296 75808 0
11 192 129 2691 18
12 208 125 2365 10
13 224 414 7434 18
14 240 95 1512 8
15 256 88 1319 1
16 272 78 1085 7
17 288 69 958 8
18 304 651 8458 5
19 320 57 676 8
20 336 118 1415 1
21 352 58 631 7
22 368 44 476 8
23 384 44 440 0
24 400 53 523 7
25 416 90 810 0
26 432 115 1026 9
27 448 97 864 9
28 464 92 732 4
29 480 75 593 7
30 496 88 703 1
31 512 137 953 6
# arenas allocated total = 614
# arenas reclaimed = 321
# arenas highwater mark = 293
# arenas allocated current = 293
293 arenas * 262144 bytes/arena = 76,808,192
# bytes in allocated blocks = 75,168,000
# bytes in available blocks = 70,352
0 unused pools * 4096 bytes = 0
# bytes lost to pool headers = 900,096
# bytes lost to quantization = 669,744
# bytes lost to arena alignment = 0
Total = 76,808,192
</code></pre>
<p>The number of arenas (<code>293</code> in the above example) seems to be increasing. That's a problem in itself, but it accounts for ~76MB only. The application however (measured by <code>proc = psutil.Process(os.getpid()); proc.memory_info().rss</code> and htop which are equivalent) shows memory allocation in the range of Gigabytes (~2GB when the above stats were printed).</p>
<p>What's allocating all that memory, if it's not Python's arenas/pools/blocks? What else could I try to measure to consolidate the memory consumption shown in htop?</p>
| <python><memory-leaks><tracemalloc> | 2023-08-28 13:36:31 | 0 | 8,252 | orange |
76,993,129 | 10,633,596 | ModuleNotFoundError: No module named 'gitlab' using Poetry in Python | <p>I have a python script which is giving me this error while running it. I'm using Poetry to install the packages.</p>
<pre><code>$ python -V
Python 3.9.1
$ python scripts/commons/sdlc_helper/mr_opening/create_schedule_pipeline.py --project_id ${CI_PROJECT_ID} --gl_token ${GITLAB_PRIVATE_TOKEN} --source_branch ${SOURCE_BRANCH} --target_branch ${TARGET_BRANCH}
Traceback (most recent call last):
File "/builds/internal/fd/fd-ci-config/commons/scripts/commons/sdlc_helper/mr_opening/create_schedule_pipeline.py", line 6, in <module>
import gitlab
ModuleNotFoundError: No module named 'gitlab'
</code></pre>
<p>The python script is at this path scripts/commons/sdlc_helper/mr_opening/create_schedule_pipeline.py and <code>pyproject.toml</code> file is at scripts folder with following configuration:-</p>
<pre><code>[tool.poetry]
name = "scripts"
version = "0.1.0"
description = ""
packages = [{include = "test"}]
include = ["/**/*"]
readme = "README.md"
[[tool.poetry.source]]
name = "self-hosted"
url = "http://nexus.xxxx.net/repository/pypi-hosted/simple"
[tool.poetry.dependencies]
python = ">=3.7.4,<4.0"
pytz = "^2022.1"
PyYAML = "^6.0"
dnspython = "^2.3.0"
ldap3 = "^2.9.1"
gssapi = "^1.8.2"
python-gitlab = "^3.15.0"
Jinja2 = "^3.1.2"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
[tool.poetry.scripts]
create-schedule-pipeline = "commons.sdlc_helper.mr_opening.create_schedule_pipeline:main"
open-mr-automatically = "commons.sdlc_helper.mr_opening.open_mr_automatically:main"
</code></pre>
<p>May I know where is the problem so that I can take care of such issues going forward, thanks</p>
| <python><python-poetry> | 2023-08-28 13:13:47 | 0 | 1,574 | vinod827 |
76,993,126 | 3,649,629 | Does not async / await call in FastAPI & Python go to thread pool? | <p>I implemented my simple server on Python and FastAPI and currently learn more about async / await work in FastAPI. The question is based on this <a href="https://github.com/zhanymkanov/fastapi-best-practices#7-dont-make-your-routes-async-if-you-have-only-blocking-io-operations" rel="nofollow noreferrer">doc</a> and it is unclear does async calls go to a thread pool or not.</p>
<p>I am familiar with concept of coroutines by extensively using them in Kotlin / Android. In this context <a href="https://gelassen.github.io/blog/2020/09/05/kotlin-coroutines-overview.html" rel="nofollow noreferrer">coroutines are organised in some state machine</a> and there is no usage of the thread pool. Is it the same in Python / FastApi?</p>
| <python><python-3.x><async-await><fastapi> | 2023-08-28 13:12:38 | 1 | 7,089 | Gleichmut |
76,992,961 | 4,726,173 | PyTorch: nn.Identity() vs. lambda x: x : Can they be used interchangeably? | <p>Can I use a lambda function, <code>lambda x: x</code> instead of <a href="https://pytorch.org/docs/stable/generated/torch.nn.Identity.html" rel="nofollow noreferrer">torch.nn.Identity</a>? And does this differ depending on where in a model this identity is placed? My guess would be that pytorch might not know how to deal with a lambda function intermingled with torch.nn.Modules, and that it can slow down training.</p>
<hr />
<p>Why I'm asking: Out of curiosity. The function is so <a href="https://github.com/pytorch/pytorch/blob/58d1b3639bc07f9519de18e5a18e575f260c7eeb/torch/nn/modules/linear.py#L12-L32" rel="nofollow noreferrer">simple</a> that I find it unlikely that a bug will ever sneak in there in a pytorch release, so there doesn't really seem to be a reason for avoiding it.</p>
| <python><performance><pytorch><torch> | 2023-08-28 12:50:23 | 1 | 627 | dasWesen |
76,992,719 | 9,018,649 | What is missing to install transform with pip in Databricks Cluster runtime version 7.3? | <p>I need to revitalize an old project. To do that i need to create a Cluster Runtime version 7.3, then install the python library <em><strong>transform</strong></em>.
Transform: <a href="https://pypi.org/project/transform/1.0.20/#history" rel="nofollow noreferrer">https://pypi.org/project/transform/1.0.20/#history</a></p>
<p>I have tried the latest, 1.0.21, 1.0.20, 1.0.19, and 1.0.18. I get the following error:
<a href="https://i.sstatic.net/wmMMd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wmMMd.png" alt="enter image description here" /></a></p>
<ol>
<li>What does this message actuallt mean?
<ul>
<li>Metric lite: <a href="https://pypi.org/project/metricflow-lite/#history" rel="nofollow noreferrer">https://pypi.org/project/metricflow-lite/#history</a></li>
</ul>
</li>
<li>How can i install transform in databricks cluster with runtime version 7.3?</li>
</ol>
| <python><pip><databricks><azure-databricks> | 2023-08-28 12:17:29 | 1 | 411 | otk |
76,992,688 | 3,424,423 | Walrus Operator in match statement results in invalid syntax | <p>I don't understand why the following python 3.10+ code is invalid syntax:</p>
<pre class="lang-py prettyprint-override"><code>match prop.type:
case ns.Instance(content := ns.Class(name)) if name is not None:
continue
</code></pre>
<p>To me, this matches the <a href="https://peps.python.org/pep-0622/#walrus-patterns" rel="nofollow noreferrer">walrus patterns</a> patten documented there. What's wrong? I get the following error:</p>
<pre><code>case ns.Instance(content := ns.Class(name)) if name is not None:
^^
SyntaxError: invalid syntax
</code></pre>
<p>Further, <code>pylace</code> shows me the following errors (in the editor):</p>
<pre><code>case ns.Instance(content := ns.Class(name)) if name is not None:
^
"(" was not closed
</code></pre>
<p>And further</p>
<pre><code>case ns.Instance(content := ns.Class(name)) if name is not None:
^
Expected ":"
</code></pre>
<p>Not sure to what extend this is relevant, but the definition of <code>Instance</code> is:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Instance(Type):
content: Type
</code></pre>
| <python><python-3.x><python-3.10> | 2023-08-28 12:12:59 | 1 | 5,106 | Nearoo |
76,992,645 | 2,183,336 | Difference in values between numpy.correlate and numpy.corrcoef? | <p>It was my understanding that <code>numpy.correlate</code> and <code>numpy.corrcoef</code> should yield the same result for aligned normalized vectors. Two immediate cases to the contrary:</p>
<pre><code>from math import isclose as near
import numpy as np
def normalizedCrossCorrelation(a, b):
assert len(a) == len(b)
normalized_a = [aa / np.linalg.norm(a) for aa in a]
normalized_b = [bb / np.linalg.norm(b) for bb in b]
return np.correlate(normalized_a, normalized_b)[0]
def test_normalizedCrossCorrelationOfSimilarVectorsRegression0():
v0 = [1, 2, 3, 2, 1, 0, -2, -1, 0]
v1 = [1, 1.9, 2.8, 2, 1.1, 0, -2.2, -0.9, 0.2]
assert near(normalizedCrossCorrelation(v0, v1), 0.9969260391224474)
print(f"{np.corrcoef(v0, v1)=}")
assert near(normalizedCrossCorrelation(v0, v1), np.corrcoef(v0, v1)[0, 1])
def test_normalizedCrossCorrelationOfSimilarVectorsRegression1():
v0 = [1, 2, 3, 2, 1, 0, -2, -1, 0]
v1 = [0.8, 1.9, 2.5, 2.1, 1.2, -0.3, -2.4, -1.4, 0.4]
assert near(normalizedCrossCorrelation(v0, v1), 0.9809817769512982)
print(f"{np.corrcoef(v0, v1)=}")
assert near(normalizedCrossCorrelation(v0, v1), np.corrcoef(v0, v1)[0, 1])
</code></pre>
<p>Pytest output:</p>
<pre><code>E assert False
E + where False = near(0.9969260391224474, 0.9963146417122921)
E + where 0.9969260391224474 = normalizedCrossCorrelation([1, 2, 3, 2, 1, 0, ...], [1, 1.9, 2.8, 2, 1.1, 0, ...])
E assert False
E + where False = near(0.9809817769512982, 0.9826738919606931)
E + where 0.9809817769512982 = normalizedCrossCorrelation([1, 2, 3, 2, 1, 0, ...], [0.8, 1.9, 2.5, 2.1, 1.2, -0.3, ...])
</code></pre>
| <python><numpy><correlation><cross-correlation> | 2023-08-28 12:06:04 | 1 | 665 | user2183336 |
76,992,629 | 8,831,116 | With `hypothesis`, how to generate two values that satisfy an ordering relation? | <p>When writing tests using <code>hypothesis</code>, from time to time I encounter a situation that I require two distinct values which satisfy a given relation. Think of the start and end of an interval, where <code>start <= end</code> is required.</p>
<p>A simple example of what I would like to achieve is:</p>
<pre><code>import datetime as dt
from hypothesis import given
from hypothesis import strategies as st
@given(
valid_from=st.dates(),
valid_to=st.dates(),
)
def test_dates_are_ordered(valid_from: dt.date, valid_to: dt.date):
assert valid_from <= valid_to
</code></pre>
<p>I like that this test is very easy to read and to the point. However, it does not pass, because Hypothesis does not know about the restriction. Is there a good way to have two parameters but still ensure the values are ordered?</p>
| <python><python-hypothesis> | 2023-08-28 12:04:25 | 1 | 858 | Max Görner |
76,992,523 | 5,409,315 | VS Code searching Python under Roaming instead of conda env: "The system cannot find the path specified" | <p>I have the same repository checked out on the same branch to two different directories. Even using the same conda environment, I can run one in VS Code, but not the other. The <code>.vscode/settings.json</code> and <code>.vscode/launch.json</code> are identical between the two and there are no more files in the <code>.vscode</code> dirs. The VS Code version is too. The error message is</p>
<pre><code>Exception has occurred: FileNotFoundError
[WinError 3] The system cannot find the path specified: 'C:\\Users\\jpoppinga\\AppData\\Roaming\\Python\\Python39\\site-packages'
</code></pre>
<p>(Looking at a stacktrace, this originates from <code>log.describe_environment()</code>, called from the Python module in the course of starting the debugger.)</p>
<p>The path does indeed not exist (<code>Roaming</code> does, but has no <code>Python</code>), but why is it searching there? It is supposed to use the Python from the conda environment, which it did up until Friday EOB. Come today (Monday), it tries to use a system-wide Python and fails.</p>
<p>I tried using different known good conda environments, but for this directory, they fail.</p>
<p>I also looked at the environment variables, although this seems pointless (since success does not depend on the conda environment, but only on the VS Code instance): No difference in which variable contains <code>Roaming</code> and which doesn't.</p>
| <python><visual-studio-code><conda><vscode-debugger> | 2023-08-28 11:48:45 | 2 | 604 | Jann Poppinga |
76,992,453 | 19,130,803 | Pandas: describe() for datetime column | <p>I am using <code>pandas</code> to perform data analysis, I have a <code>datetime</code> column and I am using <code>describe()</code> as below.</p>
<pre><code>s = pd.Series([np.datetime64("2000-01-01"), np.datetime64("2010-01-01"), np.datetime64("2010-01-01")])
print('--------------------')
print(s)
print('--------------------')
print(s.dtype)
print('--------------------')
print(s.describe())
</code></pre>
<p>I am getting output as below:</p>
<pre><code>--------------------
0 2000-01-01
1 2010-01-01
2 2010-01-01
dtype: datetime64[ns]
--------------------
datetime64[ns]
--------------------
count 3
mean 2006-09-01 08:00:00
min 2000-01-01 00:00:00
25% 2004-12-31 12:00:00
50% 2010-01-01 00:00:00
75% 2010-01-01 00:00:00
max 2010-01-01 00:00:00
dtype: object
</code></pre>
<p>I was going through documentation, The above output summary should be generated for <code>numeric</code> datatype as opposed to <code>non-numeric</code> datatypes like <code>object, category, timestamp</code> which have different summary.</p>
<p>pandas version:
<code>pandas = "^2.0.0"</code></p>
<p>documentation:
<a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.describe.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.describe.html</a></p>
<p>What I am missing, does <code>datetime</code> datatype is treated as <code>numeric</code> datatype inside <code>describe()</code>?</p>
| <python><pandas> | 2023-08-28 11:36:51 | 1 | 962 | winter |
76,992,421 | 2,491,350 | AttributeError: module 'ssl' has no attribute 'PROTOCOL_TLSv1_3' | <p>I am trying to setup a tls context in python. I want to force TLSv1.3 usng:</p>
<pre><code>context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_3)
</code></pre>
<p>This does not work as I receive the following error:</p>
<pre><code>AttributeError: module 'ssl' has no attribute 'PROTOCOL_TLSv1_3'
</code></pre>
<p>I am on Ubuntu 20.04 and am using python version 3.8 and openssl version 1.1.1f.</p>
<p>Why doesn't it support TLSv1.3?</p>
| <python><ssl><tls1.3> | 2023-08-28 11:31:56 | 1 | 733 | SilverTear |
76,992,343 | 13,916,049 | Add a column from another dataframe if the substring of its indices match the indices of the other dataframe | <p><code>dfs_GSE159115</code> is a dictionary of AnnData objects, which is used for handling annotated data matrices in memory and on disk, positioned between pandas and xarray.
The <code>anno_GSE159115</code> is annotation.</p>
<p>I want to add the column <code>anno</code> from <code>anno_GSE159115</code> to the AnnData objects in <code>dfs_GSE159115</code> if the substring after the last <code>_</code> delimiter of <code>anno_GSE159115.index</code> matches the <code>dfs_GSE159115[sample_id].obs.index</code>.</p>
<p>Code:</p>
<pre><code># Iterate through each sample and assign annotations
for sample_id, adata in dfs_GSE159115.items():
obs_index = set(adata.obs.index)
anno_index = set(anno_GSE159115.index.str.rsplit('_', n=1).str.get(1))
# Filter dfs_GSE159115 to retain only the rows with matching indexes
anno_GSE159115.index = obs_index.intersection(anno_index)
matching_annotations = anno_GSE159115.loc[anno_GSE159115.index, "anno"]
# Create or assign the "anno" column in the observation metadata
adata.obs["anno"] = matching_annotations.values
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [149], in <cell line: 2>()
4 anno_index = set(anno_GSE159115.index.str.rsplit('_', n=1).str.get(1))
6 # Filter dfs_GSE159115 to retain only the rows with matching indexes
----> 7 anno_GSE159115.index = obs_index.intersection(anno_index)
8 matching_annotations = anno_GSE159115.loc[anno_GSE159115.index, "anno"]
10 # Create or assign the "anno" column in the observation metadata
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/generic.py:5588, in NDFrame.__setattr__(self, name, value)
5586 try:
5587 object.__getattribute__(self, name)
-> 5588 return object.__setattr__(self, name, value)
5589 except AttributeError:
5590 pass
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/_libs/properties.pyx:70, in pandas._libs.properties.AxisProperty.__set__()
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/generic.py:769, in NDFrame._set_axis(self, axis, labels)
767 def _set_axis(self, axis: int, labels: Index) -> None:
768 labels = ensure_index(labels)
--> 769 self._mgr.set_axis(axis, labels)
770 self._clear_item_cache()
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/internals/managers.py:214, in BaseBlockManager.set_axis(self, axis, new_labels)
212 def set_axis(self, axis: int, new_labels: Index) -> None:
213 # Caller is responsible for ensuring we have an Index object.
--> 214 self._validate_set_axis(axis, new_labels)
215 self.axes[axis] = new_labels
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/internals/base.py:69, in DataManager._validate_set_axis(self, axis, new_labels)
66 pass
68 elif new_len != old_len:
---> 69 raise ValueError(
70 f"Length mismatch: Expected axis has {old_len} elements, new "
71 f"values have {new_len} elements"
72 )
ValueError: Length mismatch: Expected axis has 1167 elements, new values have 107 elements
</code></pre>
<p>Input:
<code>dfs_GSE159115.items()</code></p>
<pre><code>dict_items([('GSM4819737', AnnData object with n_obs × n_vars = 6453 × 33694
var: 'gene_ids'), ('GSM4819735', AnnData object with n_obs × n_vars = 1896 × 33694
var: 'gene_ids'), ('GSM4819728', AnnData object with n_obs × n_vars = 923 × 33694
var: 'gene_ids'), ('GSM4819725', AnnData object with n_obs × n_vars = 1704 × 33694
var: 'gene_ids'), ('GSM4819733', AnnData object with n_obs × n_vars = 1644 × 33694
var: 'gene_ids'), ('GSM4819738', AnnData object with n_obs × n_vars = 3635 × 33694
var: 'gene_ids'), ('GSM4819726', AnnData object with n_obs × n_vars = 1410 × 33694
var: 'gene_ids')])
</code></pre>
<p><code>dfs_GSE159115["GSM4819737"].obs</code> (no columns; row name only)</p>
<pre><code>AAACCTGAGACGACGT-1
AAACCTGCACGGCTAC-1
AAACCTGCAGTATCTG-1
AAACCTGCAGTCAGCC-1
AAACCTGGTAAATGTG-1
</code></pre>
<p><code>dfs_GSE159115["GSM4819737"].var</code></p>
<pre><code>pd.DataFrame({'gene_ids': {'RP11-34P13.3': 'ENSG00000243485',
'FAM138A': 'ENSG00000237613',
'OR4F5': 'ENSG00000186092',
'RP11-34P13.7': 'ENSG00000238009',
'RP11-34P13.8': 'ENSG00000239945'}})
`anno_GSE159115`
pd.DataFrame({'total_UMI': {'SI_18854_AAAGATGGTTTGGGCC-1': 5190,
'SI_18854_AAATGCCAGTAAGTAC-1': 4953,
'SI_18854_AAACCTGCACGGCTAC-1': 6434,
'SI_18854_AACTCAGGTGTTGAGG-1': 7730,
'SI_18854_AACTCAGTCCGCGCAA-1': 5044},
'no_genes': {'SI_18854_AAAGATGGTTTGGGCC-1': 1099,
'SI_18854_AAATGCCAGTAAGTAC-1': 1389,
'SI_18854_AACCGCGAGGGAAACA-1': 2160,
'SI_18854_AACTCAGGTGTTGAGG-1': 2443,
'SI_18854_AACTCAGTCCGCGCAA-1': 1196},
'pct_MT': {'SI_18854_AAAGATGGTTTGGGCC-1': 0.0244701348747592,
'SI_18854_AAATGCCAGTAAGTAC-1': 0.0393700787401575,
'SI_18854_AACCGCGAGGGAAACA-1': 0.0248678893378924,
'SI_18854_AACTCAGGTGTTGAGG-1': 0.0341526520051746,
'SI_18854_AACTCAGTCCGCGCAA-1': 0.0224028548770817},
'sample': {'SI_18854_AAAGATGGTTTGGGCC-1': 'SI_18854',
'SI_18854_AAATGCCAGTAAGTAC-1': 'SI_18854',
'SI_18854_AACCGCGAGGGAAACA-1': 'SI_18854',
'SI_18854_AACTCAGGTGTTGAGG-1': 'SI_18854',
'SI_18854_AACTCAGTCCGCGCAA-1': 'SI_18854'},
'gene_per_1kUMI': {'SI_18854_AAAGATGGTTTGGGCC-1': 211.753371868979,
'SI_18854_AAATGCCAGTAAGTAC-1': 280.436099333737,
'SI_18854_AACCGCGAGGGAAACA-1': 335.716506061548,
'SI_18854_AACTCAGGTGTTGAGG-1': 316.041397153946,
'SI_18854_AACTCAGTCCGCGCAA-1': 237.113402061856},
'no_genes_log10': {'SI_18854_AAAGATGGTTTGGGCC-1': 3.04099769242349,
'SI_18854_AAATGCCAGTAAGTAC-1': 3.14270224573762,
'SI_18854_AACCGCGAGGGAAACA-1': 3.33445375115093,
'SI_18854_AACTCAGGTGTTGAGG-1': 3.38792346697344,
'SI_18854_AACTCAGTCCGCGCAA-1': 3.07773117965239},
'cellfilt': {'SI_18854_AAAGATGGTTTGGGCC-1': False,
'SI_18854_AAATGCCAGTAAGTAC-1': False,
'SI_18854_AACCGCGAGGGAAACA-1': False,
'SI_18854_AACTCAGGTGTTGAGG-1': False,
'SI_18854_AACTCAGTCCGCGCAA-1': False},
'doublet_score': {'SI_18854_AAAGATGGTTTGGGCC-1': 0.0345195045682442,
'SI_18854_AAATGCCAGTAAGTAC-1': 0.0226007163211419,
'SI_18854_AACCGCGAGGGAAACA-1': 0.0175096260036415,
'SI_18854_AACTCAGGTGTTGAGG-1': 0.0241521945273824,
'SI_18854_AACTCAGTCCGCGCAA-1': 0.0297453243135695},
'doublet': {'SI_18854_AAAGATGGTTTGGGCC-1': False,
'SI_18854_AAATGCCAGTAAGTAC-1': False,
'SI_18854_AACCGCGAGGGAAACA-1': False,
'SI_18854_AAACCTGCAGTCAGCC': False,
'SI_18854_AACTCAGTCCGCGCAA-1': False},
'cluster': {'SI_18854_AAAGATGGTTTGGGCC-1': 9,
'SI_18854_AAATGCCAGTAAGTAC-1': 9,
'SI_18854_AACCGCGAGGGAAACA-1': 11,
'SI_18854_AACTCAGGTGTTGAGG-1': 11,
'SI_18854_AACTCAGTCCGCGCAA-1': 9},
'anno': {'SI_18854_AAAGATGGTTTGGGCC-1': 'Tcell',
'SI_18854_AAATGCCAGTAAGTAC-1': 'Tcell',
'SI_18854_AACCGCGAGGGAAACA-1': 'Tcell_CD8',
'SI_18854_AACTCAGGTGTTGAGG-1': 'Tcell_CD8',
'SI_18854_AACTCAGTCCGCGCAA-1': 'Tcell'},
'label': {'SI_18854_AAAGATGGTTTGGGCC-1': '9:Tcell',
'SI_18854_AAATGCCAGTAAGTAC-1': '9:Tcell',
'SI_18854_AACCGCGAGGGAAACA-1': '11:Tcell_CD8',
'SI_18854_AACTCAGGTGTTGAGG-1': '11:Tcell_CD8',
'SI_18854_AACTCAGTCCGCGCAA-1': '9:Tcell'},
'patient': {'SI_18854_AAAGATGGTTTGGGCC-1': 'SS_2005',
'SI_18854_AAATGCCAGTAAGTAC-1': 'SS_2005',
'SI_18854_AACCGCGAGGGAAACA-1': 'SS_2005',
'SI_18854_AACTCAGGTGTTGAGG-1': 'SS_2005',
'SI_18854_AACTCAGTCCGCGCAA-1': 'SS_2005'}})
</code></pre>
| <python><pandas><python-xarray> | 2023-08-28 11:19:01 | 1 | 1,545 | Anon |
76,992,077 | 5,586,359 | How do I GroupShuffleSplit a parquet dataframe lazily? | <p>I have a parquet dataset that looks like this (I'm using polars, but any dataframe library is fine):</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
{
"match_id": [
1, 1, 1,
2, 2, 2, 2,
3, 3, 3, 3,
],
"team_id": [
1, 2, 2,
1, 1, 2, 2,
1, 2, 2, 3,
],
"player_name": [
"kevin", "james", "kelly",
"john", "jenny", "jim", "josh",
"alice", "kevin", "lilly", "erica",
],
}
)
</code></pre>
<p>I would like to group by match_id and test train split such that 80% of matches are in training set, and the rest in test set. So something like this:</p>
<pre class="lang-py prettyprint-override"><code>group_df = df.group_by(["match_id"])
train, test = group_split(group_df, test_size=0.20)
</code></pre>
<p>I need a python solution, preferably with dask, pandas or another dataframe library. Currently pandas doesn't support lazy evaluation, as the dataset is quite large. So it seems out of the question to use pandas. Dask on the other hand doesn't support any of the <code>sklearn.model_selection</code> splitters since it doesn't have integer based indexing support.</p>
<p>Ideally a simple <a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GroupShuffleSplit.html" rel="nofollow noreferrer"><code>GroupShuffleSplit</code></a> working with dask is all I need. Is there any other library that supports this? If so, how do I do this with parquet in a lazy way?</p>
| <python><pandas><scikit-learn><dask><parquet> | 2023-08-28 10:39:29 | 4 | 954 | Vivek Joshy |
76,992,029 | 11,988,351 | Python regex matching with optional prefix and suffix | <p>I have a regular expression that matches parts of a string (specifically peptide sequences with modifications) and I want to use re.findall to get all parts of the string:</p>
<p>The sequence can start with an optional suffix that is anything non-capital letter string followed by <code>-</code>.</p>
<p>And the sequence can also have a prefix that starts with a <code>-</code> followed by a non-capitial letter string.</p>
<p>The rest of the sequence should be split by capital letters with an optional prefix for each.</p>
<p>E.g.</p>
<p><code>"foo-ABcmCD-bar"</code> -> <code>['foo-','A','B','cmC','D','-bar']</code></p>
<p><code>"DEF"</code> -> <code>['','D','E','F','']</code></p>
<p><code>"WHATEVER-foo"</code> -> <code>['', 'W', 'H', 'A', 'T', 'E', 'V', 'E', 'R', '-foo']</code></p>
<p><code>"cmC-foo"</code> -> <code>['', 'cmC', '-foo']</code></p>
<p><code>"ac-cmC-foo"</code> -> <code>['ac-', 'cmC', '-foo']</code></p>
<p>What I have is:</p>
<pre><code>(?:(^(?:[^A-Z]+-)?)|((?:-[^A-Z]+)?$)|((?:[^A-Z]*)?[A-Z]))
</code></pre>
<p>Capturing group 1 <code>(^(?:[^A-Z]+-)?)</code> is supposed to catch the optional prefix or an empty String.
Capturing group 2 <code>((?:-[^A-Z]+)?$)</code> is supposed to catch the optional suffix or an empty String.
Capturing group 3 <code>((?:[^A-Z]*)?[A-Z])</code> is supposed to catch any capital character in the rest of the string that could have a substring of non-capital characters in front.</p>
<p>I get the optional prefix or empty string.</p>
<p>The suffix seems almost to work - BUT if there is a suffix the end of line is matched twice one with the suffix and ones with an empty string.</p>
<pre><code>>>> re.findall(r,"foo-ABC-bar")
['foo-', 'A', 'B', 'C', '-bar', '']
>>> re.findall(r,"ABC-bar")
['', 'A', 'B', 'C', '-bar', '']
>>> re.findall(r,"ABcmC")
['', 'A', 'B', 'cmC', '']
</code></pre>
<p>I.e. how do I get rid of the extra empty string or why is the $ matched twice?</p>
<p>example:
<a href="https://regex101.com/r/koZPOD/1" rel="nofollow noreferrer">https://regex101.com/r/koZPOD/1</a></p>
| <python><regex> | 2023-08-28 10:32:50 | 4 | 655 | Lutz |
76,991,929 | 2,932,052 | Specify argument and return types of dict transpose function | <p>This function transposes a given <code>dict</code> with <code>set</code> values:</p>
<pre><code>from collections import defaultdict
def transpose_dict(dct: dict[int, set[int]]) -> dict[int, set[int]]:
transposed = defaultdict(set)
for key, value_set in dct.items():
for inv_key in value_set:
transposed[inv_key].add(key)
return dict(transposed)
</code></pre>
<p>By the typing I used, the function is restricted to dictionaries with <code>int</code> keys and sets of <code>int</code> values only, although the type <code>int</code> itself is fairly irrelevant to the function implementation.</p>
<p>On the contrary, it would look pretty much the same even for <code>str</code>, <code>tuple</code>, etc. (or combinations thereof). The obvious option <code>dict[Any, set[Any]</code> seems too weak to me, since it does not reflect the swapping of the types of key and value.</p>
<p>How can I express a transposable <code>dict</code> type (<code>dict[A, set[B]]</code> ↔ <code>dict[B, set[A]]</code>) and its transposition type?</p>
| <python><python-typing> | 2023-08-28 10:18:49 | 1 | 10,324 | Wolf |
76,991,849 | 11,122,899 | Rasterize line polygons based on length fraction in cells | <p>I have a line vector dataset which I want to rasterize by summing the (numeric) values of my field of interest of all lines within a raster cell. Some lines intersect more than one raster cell. In order to avoid double-counting, the solution needs to be based on a weighted sum according to the length-fraction of each road in the raster cells it touches. For example, if a line has a field value of 5 and is split perfectly between raster cells A and B, this line should both add a value of 2.5 to both raster cells A and B's final summed value.
I saw that the R terra::rasterize() function accepts polygons, but does not perform the weighted sum which I want.</p>
<p>Is there any implementation in R or python that already implements this, or how could this be implemented?</p>
<p>Edit: Reproducable example, which shows that currently the raster values are added up from all lines that touch each raster cell.</p>
<pre><code>library(terra)
library(tidyterra)
f <- system.file("ex/lux.shp", package="terra")
v <- vect(f)
l <- as.lines(v)
#give some numeric values to the lines
l$values <- 1
l <- l[, 'values']
l <- project(l, '+proj=laea +lat_0=52 +lon_0=10 +x_0=4321000 +y_0=3210000 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs +type=crs')
#rasterize
dummy_raster <- rast(l, res = 20000)
l_rast <- rasterize(l, dummy_raster, field = 'values', fun = 'sum')
#plot
ggplot() +
geom_spatraster(data = l_rast) +
scale_fill_hypso_b() +
geom_spatvector(data = l, aes(colour = as.factor(sample(seq_along(l)))), show.legend = F) +
geom_spatvector_label(data = l, mapping = aes(label = values)) +
NULL
</code></pre>
<p><a href="https://i.sstatic.net/dhKY3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dhKY3.png" alt="Summed up raster values from lines" /></a></p>
| <python><r><python-xarray><terra> | 2023-08-28 10:06:13 | 1 | 359 | tlhenvironment |
76,991,812 | 1,319,998 | Convert sync generator function that takes a sync iterable to async generator function that takes an async iterable | <p>I have a sync generator function that I can't change (since it's in library) that itself accepts a sync iterable.</p>
<p>But I would like to use it from an async context, passing it an async iterable, and without blocking the event loop during iteration. How can I do this?</p>
<p>For example say we have <code>my_gen</code> that takes an iterable of integers that is designed to work in a sync context.</p>
<pre class="lang-py prettyprint-override"><code>sync_input_iterable = range(0, 10000)
sync_output_iterable = my_gen(input_iterable)
for v in output_iterable:
print(v)
</code></pre>
<p>But I would like to use it like this:</p>
<pre class="lang-py prettyprint-override"><code>async def main():
# Just for example purposes - there is no "arange" built into Python
async def arange(start, end):
for i in range(start, end):
yield(i)
await asyncio.sleep(0)
async_input_iterable = arange(0, 10000)
async_output_iterable = # Something to to with `my_gen`
async for v in async_output_iterable:
print(v)
asyncio.run(main())
</code></pre>
<p>what could the <code># Something...</code> be to make this work?</p>
| <python><python-asyncio><generator> | 2023-08-28 10:01:39 | 1 | 27,302 | Michal Charemza |
76,991,754 | 7,340,304 | DRF doesnt populate field label when using source | <p>So, I have some model with few fields I want to expose over REST API:</p>
<pre class="lang-py prettyprint-override"><code>class MyModel(models.Model):
field1 = models.TextField(verbose_name="Field 1")
field2 = models.TextField(verbose_name="Field 2")
</code></pre>
<p>and have corresponding serializer:</p>
<pre class="lang-py prettyprint-override"><code>class MySerializer(serializers.ModelSerializer):
class Meta:
model = MyModel
fields = ('field1', 'field2')
</code></pre>
<p>This way it works fine: using GET I retrieve fields values, and using OPTIONS I retrieve secondary fields information (e.g. if the field is required, help text, and <strong>label</strong>)</p>
<p>The problem is that I don't want to expose real fields name, and I want to use some other name instead, for example:
<code>field1</code> and <code>secondField</code>.</p>
<p>For that I modify my serializer:</p>
<pre class="lang-py prettyprint-override"><code>class MySerializer(serializers.ModelSerializer):
secondField = serializers.CharField(source='field2')
class Meta:
model = MyModel
fields = ('field1', 'secondField')
</code></pre>
<p>Now I receive correct values on GET, but fields labels are wrong on OPTIONS: for second field I receive "Secondfield" instead of "Field 2"</p>
<p>How do I get correct field label when using <code>source</code>?</p>
| <python><django><django-rest-framework> | 2023-08-28 09:55:19 | 1 | 591 | Bohdan |
76,991,513 | 209,828 | How is restic outputting data to the screen but not to stdout or stderr? | <h3>Updating line</h3>
<p>I have a question about where the output of a certain command is going. I'm using <code>restic</code> as an example of a command that behaves this way. It's the last line of the command that's of interest to me, because it updates while the restore process is working.</p>
<p>For context, restic's a back-up utility; the command below is restoring the contents of <code>my-repo</code> from a backup stored on <code>example.com</code> back to my local machine.</p>
<pre><code>$ restic restore latest --repo sftp:matt@example.com:my-repo --target . --verbose
repository ae412afe opened (version 1)
restoring <Snapshot dff2f51a> to /Users/matt
[0:11] 0.02% 192 files 37.687 MiB, total 39644 files 171.389 GiB
</code></pre>
<p>The last line updates over time until it reaches 100%, e.g.</p>
<pre><code>[12:31] 1.22% 2989 files 2.087 GiB, total 39644 files 171.389 GiB
</code></pre>
<hr />
<h2>Execute from python</h2>
<p>I want to run the restore command from a Python script (using <code>subprocess.Popen</code>) and want the script to print out the updating line but haven't been able to get that working.</p>
<p>I suspected it could be that the updating line was being output to stderr instead of stdout. So I repeated the restore command and redirected the stdout and stderr to files, as below:</p>
<pre><code>restic restore lat...rbose > out.log 2> err.log
</code></pre>
<p><code>err.log</code> remains empty, and <code>out.log</code> only contains the <code>restoring <Snapshot dff2f51a> to /Users/matt</code> line, nothing else.</p>
<hr />
<h2>Python snippet</h2>
<p>Here's the snippet of my code calling Popen:</p>
<pre><code>backup_command = [
# fmt: off
"restic",
"--repo", repo_name,
"backup",
"--verbose",
"--exclude", "node_modules",
"--exclude", "whatever",
# fmt: on
]
backup_process = subprocess.Popen(backup_command, env=env_vars)
backup_process.wait()
</code></pre>
<p>In reference to jurez's answer below, I'm not passing <code>stdin</code> or <code>stderr</code>, but having looked at the source for Popen, they default to <code>None</code>, as per jurez's suggestion.</p>
<hr />
<h2>Questions</h2>
<p>I have a couple of questions.</p>
<ul>
<li>Why isn't the <code>repository ae412afe opened (version 1)</code> line in either of my output files?</li>
<li>Where is the <code>[0:11] 0.02% 192 fil...</code> line going? And how can I show it?</li>
</ul>
| <python><subprocess><stdout><stderr><flush> | 2023-08-28 09:22:03 | 2 | 9,491 | Matt |
76,991,428 | 12,883,297 | Label every n rows of the group in an incremental order in pandas | <p>I have a dataframe,</p>
<pre><code>df = pd.DataFrame([["A",1],["A",2],["A",3],["B",4],["C",5],["C",6],["C",7],["C",8],["C",9],["C",10]],columns=["id","val"])
</code></pre>
<pre><code>id val
A 1
A 2
A 3
B 4
C 5
C 6
C 7
C 8
C 9
C 10
</code></pre>
<p>I need to create a new column which labels every <strong>n</strong> rows at group level of <strong>id</strong> column in an incremental order.
In this example n=2. So after every 2 rows of same id increase the count in grp column.</p>
<p>Expected output:</p>
<pre><code>df_out = pd.DataFrame([["A",1,1],["A",2,1],["A",3,2],["B",4,1],["C",5,1],["C",6,1],["C",7,2],["C",8,2],["C",9,3],["C",10,3]],columns=["id","val","grp"])
</code></pre>
<pre><code>id val grp
A 1 1
A 2 1
A 3 2
B 4 1
C 5 1
C 6 1
C 7 2
C 8 2
C 9 3
C 10 3
</code></pre>
<p>How to do it in pandas</p>
| <python><pandas><dataframe><group-by> | 2023-08-28 09:08:40 | 2 | 611 | Chethan |
76,991,341 | 5,246,617 | DataLoader - pytorch inconsistency between cpu and mps - Apple Silicon | <p>I got stuck on this inconsistency with DataLoader in Pytorch on Mac Apple Silicone.
If I use cpu <code>y</code> is interpreted correctly. However If I use mps it always returns vector with correct length, however only based on first element <code>y[0]</code>.</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch.utils.data import TensorDataset, random_split, DataLoader
device = torch.device("mps")
X = torch.tensor([[[0.5,0.4], [0,0]],[[0.3,0.2], [0,0]],[[0.5,0.2], [0,0]],[[0.2,0.2], [0,0]]], dtype=torch.float32).to(device)
y = torch.tensor([1,0,0,0], dtype=torch.float32).to(device)
print(X.shape)
print(y.shape)
print(y)
dataset = TensorDataset(X, y)
train_size = int(0.5 * len(dataset))
test_size = len(dataset) - train_size
train_dataset, test_dataset = random_split(dataset, [train_size, test_size])
train_loader = DataLoader(train_dataset, batch_size=10, shuffle=True)
for i, (batch_data, batch_labels) in enumerate(train_loader):
print(batch_data)
print(batch_labels)
break
</code></pre>
<p>for <code>batch_labels</code> on <code>mps</code> i always get tensor full of ones or zeroes based on first value in <code>y</code></p>
<pre><code>torch.Size([4, 2, 2])
torch.Size([4])
tensor([1., 0., 1., 0.], device='mps:0')
tensor([[[0.5000, 0.2000],
[0.0000, 0.0000]],
[[0.5000, 0.4000],
[0.0000, 0.0000]]], device='mps:0')
tensor([1., 1.], device='mps:0')
</code></pre>
<p>Maybe it is related to <a href="https://github.com/pytorch/pytorch/issues/77764" rel="nofollow noreferrer">General MPS op coverage tracking issue #77764</a></p>
| <python><machine-learning><pytorch><metal-performance-shaders><apple-m2> | 2023-08-28 08:55:44 | 1 | 1,093 | Pavol Travnik |
76,991,207 | 21,540,734 | How do I exclude the filename from __init__.py when using import | <h2>Problem</h2>
<p>In the <code>__init__.py</code> that I have inside the folder phpjunkie.</p>
<pre class="lang-py prettyprint-override"><code>from .win32 import GetWindowHandle, CaptureWwindow, FocusWindow, AlwaysOnTop, ShowHideWindow
from .timelapse import Timelapse
from .fps import FPS
</code></pre>
<p>When I use <code>import phpjunkie</code> in a script it shows the filenames (win32, timelapse, fps) in <a href="https://i.sstatic.net/B706s.png" rel="nofollow noreferrer">PyCharm's drop down box</a>. This is a compiled package and I don't want the filenames themselves to be accessible. How do I exclude them?</p>
<hr />
<h2>Conclusion</h2>
<p>I found away around this problem with the <code>setup.py</code>. I was able to get rid of the files showing in the drop down box by leaving out the <code>packages = ['phpjunkie']</code> and using <code>package_dir = {'': 'phpjunkie'}</code> and <code>py_modules = ['phpjunkie', 'Win32']</code>.</p>
<pre class="lang-py prettyprint-override"><code>import setuptools
setuptools.setup(
name = 'phpjunkie',
version = '0.0.27',
package_dir = {
'': 'phpjunkie'
},
py_modules = ['phpjunkie', 'Win32'],
author = 'phpjunkie',
install_requires = ['pywin32', 'numpy'],
)
</code></pre>
<p>The <code>packages = ['phpjunkie']</code> was the source of the problem.</p>
| <python><import><package> | 2023-08-28 08:34:15 | 1 | 425 | phpjunkie |
76,990,907 | 15,218,250 | In Django, which has faster and better performance beween ".filter(a).filter(b)" and "Q objects" and "or/and queries" and "keyword queries" | <p>Simply put, what is the order of performance and speed among the following similar looking queries in the Django ORM?</p>
<p>.filter(a).filter(b)</p>
<pre><code>instances = Model.objects.filter(a=A).filter(b=B)
</code></pre>
<p>Q objects</p>
<pre><code>instances = Model.objects.filter(Q(a=A) & Q(b=B))
</code></pre>
<p>or/and queries</p>
<pre><code>instances = Model.objects.filter(a=A) & Model.objects.filter(b=B)
</code></pre>
<p>keyword queries</p>
<pre><code>instances = Model.objects.filter(a=A, b=B)
</code></pre>
<p>AND please don't copy and paste chatgpt or any other ai services. Thank you, and please leave any comments below.</p>
| <python><django><django-models><django-views><django-forms> | 2023-08-28 07:52:19 | 2 | 613 | coderDcoder |
76,990,902 | 10,277,256 | YAML merging String Process | <p>How to add a process to delete all the previous rows among the rows with duplicate keys only when the parent key is the same?</p>
<pre class="lang-py prettyprint-override"><code>def yaml_to_merged_yaml():
combined_yaml = combine_yaml_headers(input_files)
with open(output_file, 'w') as f:
lines = combined_yaml.split('\n')
previous_line = "" # Store the previous line for insertion
for i, line in enumerate(lines):
if ':' in line:
match = re.match(r'(\s*.*:)(\s*)([^\n]*[^\s])', line)
if match:
line = f"{match.group(1)}{match.group(2)}\"{match.group(3)}\""
if ':' not in line and previous_line:
match = re.match(r'(\s*)([^\n]*[^\s])', previous_line)
if match:
line = f"{match.group(1)}{match.group(2)}{line}"
previous_line = "" # Clear the stored previous line
f.write(line + '\n')
previous_line = line
</code></pre>
<p>output:</p>
<pre><code>
AAA001:
ol1: "aaa"
ol1: "bbb"
AAA002:
ol1: "ccc"
ol1: "ddd"
</code></pre>
<p>shoult be</p>
<pre><code>AAA001:
ol1: "bbb"
AAA002:
ol1: "ddd"
</code></pre>
| <python> | 2023-08-28 07:51:24 | 1 | 1,037 | moriaki |
76,990,851 | 10,681,595 | How to merge pandas dataframe passing a lambda as first parameter? | <p>Restricting to pandas method chaining, how to apply merge method using last dataframe state with lambda function without using pipe?</p>
<p>The code below works. But it depends on the pipe method.</p>
<pre class="lang-py prettyprint-override"><code>(pd.DataFrame(
[{'YEAR':2013,'FK':1, 'v':1},
{'YEAR':2013,'FK':2, 'v':2},
{'YEAR':2014,'FK':1, 'v':3},
{'YEAR':2014,'FK':2, 'v':4}
])
.pipe(lambda w: w.merge(w.query('YEAR==2013')[['FK','v']],
on='FK',
how='left'
))
)
</code></pre>
<p>The code below doesn't work.</p>
<pre class="lang-py prettyprint-override"><code>(pd.DataFrame(
[{'YEAR':2013,'FK':1, 'v':1},
{'YEAR':2013,'FK':2, 'v':2},
{'YEAR':2014,'FK':1, 'v':3},
{'YEAR':2014,'FK':2, 'v':4}
])
.merge(lambda w: w.query('YEAR==2013'),
on='FK',
how='left'
)
)
</code></pre>
<p>Return:
<code>TypeError: Can only merge Series or DataFrame objects, a <class 'function'> was passed</code></p>
| <python><pandas><dataframe><lambda><chaining> | 2023-08-28 07:42:14 | 1 | 442 | the_RR |
76,990,717 | 8,804,776 | Using github actions to post tweets | <p>I am posting tweets using twitter's(X) api. Below is the code which will get triggered from github actions.</p>
<pre><code>name: Post Tweets to Twitter
on:
push:
branches:
- main # Adjust this to your repository's default branch
jobs:
tweet:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8 # Adjust this if needed
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install tweepy
- name: Read and Post Tweets
env:
DISPLAY: ":0" # Set the DISPLAY environment variable
run: |
#!/usr/bin/env python3
import tweepy
consumer_key = ""
consumer_secret = ""
access_token = ""
access_token_secret = ""
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
with open("tweets/tweets.txt", "r") as file:
tweets = file.read().split("\n\n")
tweeted_tweets = []
try:
with open("tweeted_tweets.txt", "r") as tweeted_file:
tweeted_tweets = tweeted_file.read().split("\n")
except FileNotFoundError:
pass
new_tweets = [tweet_text for tweet_text in tweets if tweet_text.strip() and tweet_text not in tweeted_tweets]
if new_tweets:
for tweet_text in new_tweets:
try:
# Add the -X option to disable X server
tweet = api.update_status(status=tweet_text, x__options=['-X'])
tweeted_tweets.append(tweet_text)
print("Tweeted:", tweet_text)
except tweepy.TweepError as e:
print("Error:", e)
with open("tweeted_tweets.txt", "w") as tweeted_file:
tweeted_file.write("\n".join(tweeted_tweets))
- name: Commit and Push Changes
run: |
git config --local user.email "justinmamathew@gmail.com"
git config --local user.name "mathewjustin"
git commit -am "Update tweeted tweets"
git push
</code></pre>
<p>The code does the following.</p>
<p>reads tweets from tweets.txt, tweeted_tweets.txt file is used to keep track of already tweeted tweets. Before posting new tweets, the workflow reads this file to identify tweets that have already been posted. Only new tweets are posted, and their content is added to the tweeted_tweets.txt file to prevent duplicate tweeting.</p>
<p>But when the trigger excecutes it returns me this error.</p>
<pre><code>Run #!/usr/bin/env python3
import-im6.q16: unable to open X server `:0' @ error/import.c/ImportImageCommand/346.
Error: Process completed with exit code 1.
</code></pre>
| <python><github-actions> | 2023-08-28 07:19:05 | 1 | 1,066 | Justin Mathew |
76,990,636 | 3,670,165 | Overwrite `__getattr__` to access attribute of hidden class | <p>I have a class</p>
<pre><code>class Foo():
def __init__(self, module):
self.module = module
</code></pre>
<p>where <code>module</code> is an instance of another class <code>Boo</code> that has some attributes such as <code>length</code> and <code>height</code>. I want to overwrite <code>__getattr__</code> of <code>Foo</code> so that I can do</p>
<pre><code>foo.height
</code></pre>
<p>where it internally returns the field of <code>foo.module</code>.
I tried overwriting <code>__getattr__</code> as so</p>
<pre><code> def __getattr__(self, name):
try:
return getattr(self, name)
except:
try:
return getattr(self.module, name)
except:
raise AttributeError
</code></pre>
<p>but it doesn't work. It seems to be stuck in a recursion, maybe because <code>getattr</code> uses <code>__getattr__</code> under the hood.
My question is, how do I do this right?</p>
| <python><python-3.x> | 2023-08-28 07:03:40 | 3 | 793 | jubueche |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.