| QuestionId (int64, 74.8M-79.8M) | UserId (int64, 56-29.4M) | QuestionTitle (string, 15-150 chars) | QuestionBody (string, 40-40.3k chars) | Tags (string, 8-101 chars) | CreationDate (stringdate, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0-44) | UserExpertiseLevel (int64, 301-888k) | UserDisplayName (string, 3-30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
77,606,361
| 12,281,892
|
Legend in matplotlib jumps around with minimal change to bbox_to_anchor parameters
|
<p>I have an issue in matplotlib that drives me crazy. I want to position the legend at an exact spot inside the plot, in this case the top-left corner (matching the corner of the plot). I do not want to rely on the <code>loc</code> parameter alone, as it doesn't give me the exact location I want, so I use <code>bbox_to_anchor</code>. The problem is that a minimal change to the <code>x, y</code> values makes the legend jump all over the place. Why is this happening, and how can I fix it?</p>
<p>For instance with <code>bbox_to_anchor=(0.12,.9)</code>:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.arange(-2*np.pi, 2*np.pi, 0.1)
fig, ax = plt.subplots(1,1)
ax.plot(x, np.sin(x), label='Sine')
ax.plot(x, np.arctan(x), label='Inverse tan')
ax.legend(bbox_to_anchor=(0.12,.9))
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/sobR0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sobR0.png" alt="enter image description here" /></a></p>
<p>and with <code>bbox_to_anchor=(0.13,.9)</code></p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.arange(-2*np.pi, 2*np.pi, 0.1)
fig, ax = plt.subplots(1,1)
ax.plot(x, np.sin(x), label='Sine')
ax.plot(x, np.arctan(x), label='Inverse tan')
ax.legend(bbox_to_anchor=(0.13,.9))
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/WiSgg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WiSgg.png" alt="enter image description here" /></a></p>
<p>And finally with <code>bbox_to_anchor=(0.13,.8)</code>:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.arange(-2*np.pi, 2*np.pi, 0.1)
fig, ax = plt.subplots(1,1)
ax.plot(x, np.sin(x), label='Sine')
ax.plot(x, np.arctan(x), label='Inverse tan')
ax.legend(bbox_to_anchor=(0.13,.8))
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/NGiVA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NGiVA.png" alt="enter image description here" /></a></p>
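<p>A likely explanation (hedged, not stated in the question): when <code>bbox_to_anchor</code> is a 2-tuple it is a point in axes coordinates, and <code>loc</code> names which corner of the legend gets pinned to that point. With the default <code>loc='best'</code>, matplotlib re-chooses that corner as the anchor moves to avoid overlapping the data, which makes the legend jump. Passing an explicit <code>loc</code> pins a fixed corner; a minimal sketch:</p>

```python
# Sketch of the likely fix: give ax.legend an explicit `loc` so matplotlib
# stops auto-choosing ("best") which corner of the legend to pin to the anchor.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted runs
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(-2 * np.pi, 2 * np.pi, 0.1)
fig, ax = plt.subplots(1, 1)
ax.plot(x, np.sin(x), label='Sine')
ax.plot(x, np.arctan(x), label='Inverse tan')
# loc='upper left' with anchor (0, 1) puts the legend's upper-left corner
# exactly at the top-left corner of the axes.
leg = ax.legend(loc='upper left', bbox_to_anchor=(0, 1))
fig.savefig('legend.png')
```

<p>With <code>loc</code> fixed, small changes to the anchor coordinates then move the legend smoothly instead of flipping it between corners.</p>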
<p>Thanks!</p>
|
<python><matplotlib><legend>
|
2023-12-05 12:55:36
| 1
| 2,550
|
My Work
|
77,606,224
| 5,238,639
|
Snowflake: vectorized Python UDTFs with a pandas DataFrame as input with a variable number of columns
|
<p>I have a use case where I need to create a vectorized UDTF over a pandas DataFrame. This DataFrame can have different columns from time to time, as it is preprocessed data.</p>
<p>I was looking at the example in <a href="https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-tabular-vectorized#example-calculate-the-summary-statistic-for-each-column-in-the-partition" rel="nofollow noreferrer">https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-tabular-vectorized#example-calculate-the-summary-statistic-for-each-column-in-the-partition</a></p>
<p>Here the input variables are explicitly declared, as in summary_stats(id varchar, col1 float, col2 float, col3 float, col4 float, col5 float).</p>
<p>Is there a way to handle the situation where the input DataFrame has a variable number of columns, possibly of different data types? How can the function above be modified when the input columns are not known beforehand?</p>
<p>Thanks in advance.</p>
|
<python><snowflake-cloud-data-platform><user-defined-functions>
|
2023-12-05 12:33:43
| 1
| 4,503
|
prashanth
|
77,606,194
| 521,347
|
Snyk reporting vulnerabilities in Apache-Beam 2.52.0
|
<p>We have Snyk integrated with our Python repository to identify vulnerabilities in the libraries we use. I am trying to add a dependency on apache-beam 2.52.0 (the latest version) to the pyproject.toml file. However, Snyk reports a vulnerability during the build process in pyarrow 11.0.0, which Apache Beam uses internally. This is also causing the build to fail.</p>
<pre><code>Pin pyarrow@11.0.0 to pyarrow@14.0.1 to fix
✗ Deserialization of Untrusted Data (new) [Critical Severity][https://security.snyk.io/vuln/SNYK-PYTHON-PYARROW-6052811] in pyarrow@11.0.0
introduced by apache-beam@2.52.0 > pyarrow@11.0.0
</code></pre>
<p>I tried going back to Apache Beam 2.44.0, which uses pyarrow 9 internally, but the same vulnerability is reported for all versions. Is there any workaround for this?
(I might not be able to disable Snyk or add any exclusions.)</p>
|
<python><google-cloud-dataflow><apache-beam><snyk>
|
2023-12-05 12:29:22
| 1
| 1,780
|
Sumit Desai
|
77,605,960
| 19,369,310
|
Comparing value of one column with the next row in another column in pandas dataframe
|
<p>I have the following large dataset recording the result of a math competition among students in descending order of date: So for example, student 1 comes third in Race 1 while student 3 won Race 2, etc.</p>
<pre><code>Race_ID Date adv C_k
1 1/1/2023 2.5 2.7
1 1/1/2023 1.4 2.6
1 1/1/2023 1.3 1.9
1 1/1/2023 1.1 1.2
2 11/9/2022 1.4 1.1
2 11/9/2022 1.3 1.2
2 11/9/2022 1.0 0.4
3 17/4/2022 0.9 0.2
3 17/4/2022 0.8 0.4
3 17/4/2022 0.7 0.5
3 17/4/2022 0.6 0.2
3 17/4/2022 0.5 0.4
</code></pre>
<p>The data is grouped by <code>Race_ID</code>, and within each group the values in <code>adv</code> are sorted in descending order. For each group, I want to define a value</p>
<p>t = min{n | adv_(n+1) &lt;= C_n}, and let C_t be the corresponding value in the C_k column.
In words, I want to create a new column <code>C_t</code> for each group, whose value equals the first C_k such that C_k is greater than or equal to the <code>adv</code> value in the next row. If no such value exists, set C_t = 1 (for example, in Race_ID = 3).</p>
<p>So the desired column looks like this:</p>
<pre><code>Race_ID Date adv C_k C_t
1 1/1/2023 2.5 2.7 1.9
1 1/1/2023 1.4 2.6 1.9
1 1/1/2023 1.3 1.9 1.9
1 1/1/2023 1.1 1.2 1.9
2 11/9/2022 1.4 1.1 1.2
2 11/9/2022 1.3 1.2 1.2
2 11/9/2022 1.0 0.4 1.2
3 17/4/2022 0.9 0.2 1.0
3 17/4/2022 0.8 0.4 1.0
3 17/4/2022 0.7 0.5 1.0
3 17/4/2022 0.6 0.2 1.0
3 17/4/2022 0.5 0.4 1.0
</code></pre>
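<p>One pandas sketch (hedged: this is an interpretation, since the sample output matches taking C_k at the <em>last</em> row whose C_k is &gt;= the next row's adv, even though the text says "first"; the condition below is an assumption chosen to reproduce the table above):</p>

```python
# Per group: find rows where the next row's adv is <= this row's C_k,
# take C_k at the last such row (this reproduces the sample output;
# taking the first would give 2.7 for Race 1), else fall back to 1.0.
import pandas as pd

df = pd.DataFrame({
    'Race_ID': [1] * 4 + [2] * 3 + [3] * 5,
    'adv': [2.5, 1.4, 1.3, 1.1, 1.4, 1.3, 1.0, 0.9, 0.8, 0.7, 0.6, 0.5],
    'C_k': [2.7, 2.6, 1.9, 1.2, 1.1, 1.2, 0.4, 0.2, 0.4, 0.5, 0.2, 0.4],
})

def c_t(g):
    # condition: the next row's adv is <= this row's C_k
    ok = g['adv'].shift(-1) <= g['C_k']
    return g.loc[ok, 'C_k'].iloc[-1] if ok.any() else 1.0

per_group = df.groupby('Race_ID').apply(c_t)   # one scalar per group
df['C_t'] = df['Race_ID'].map(per_group)       # broadcast back to rows
```

<p>This yields 1.9, 1.2 and 1.0 for the three groups, matching the desired column.</p>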
<p>Thanks in advance.</p>
|
<python><python-3.x><pandas><dataframe><group-by>
|
2023-12-05 11:50:06
| 1
| 449
|
Apook
|
77,605,928
| 3,745,149
|
TypeError: cannot pickle '_thread.lock' object when wrapping Pytorch Distributed Data Parallel in a class
|
<p>Here is a minimal code:</p>
<pre><code>import threading
import torch

class DistributedMission:
    def __init__(self):
        self.lock = threading.Lock()
        self.world_size = 8

    def start(self):
        torch.multiprocessing.spawn(self.worker, args=(), nprocs=self.world_size, join=True)

    def worker(self, rank: int):
        with self.lock:
            print(f'{rank} working...')
            print(f'{rank} done.')

if __name__ == '__main__':
    mission = DistributedMission()
    mission.start()
    print('All Done.')
</code></pre>
<p>It fails at the line <code>torch.multiprocessing.spawn(self.worker, args=(), nprocs=self.world_size, join=True)</code> with: TypeError: cannot pickle '_thread.lock' object</p>
<p>If I remove <code>self.lock</code>, the error disappears.</p>
<pre><code>import threading
import torch

class DistributedMission:
    def __init__(self):
        self.world_size = 8

    def start(self):
        torch.multiprocessing.spawn(self.worker, args=(), nprocs=self.world_size, join=True)

    def worker(self, rank: int):
        print(f'{rank} working...')
        print(f'{rank} done.')

if __name__ == '__main__':
    mission = DistributedMission()
    mission.start()
    print('All Done.')
</code></pre>
<p>Or removing the wrapper class, the code also runs well:</p>
<pre><code>import threading
import torch

lock = threading.Lock()
world_size = 8

def start():
    torch.multiprocessing.spawn(worker, args=(), nprocs=world_size, join=True)

def worker(rank: int):
    with lock:
        print(f'{rank} working...')
        print(f'{rank} done.')

if __name__ == '__main__':
    start()
    print('All Done.')
</code></pre>
<p>Why does this happen? Why does the class containing the function passed to <code>torch.multiprocessing.spawn()</code> matter?</p>
<p>I really need the wrapper class, and I also need a lock so that the processes do not run in parallel for part of my code.</p>
|
<python><multithreading><pytorch><spawn><multi-gpu>
|
2023-12-05 11:46:02
| 0
| 770
|
landings
|
77,605,926
| 9,196,760
|
Does NestJS provide anything that runs after saving, updating or deleting, like Django signals do?
|
<p>I am running a project with NestJS and Prisma ORM.
Suppose I am creating a post record like the following:</p>
<pre><code>// Query the database to create the post
try {
  post = await this.prisma.post.create({
    data: {
      uuid: uuidv4(),
      author: createPostDto.author,
      categoryId: postCategory.id,
      title: createPostDto.title,
      content: createPostDto.content,
      createdAt: new Date(),
      updatedAt: new Date(),
    }
  })
} catch (err) {
  this.logger.error(err);
  throw new InternalServerErrorException("Failed to create the post");
}
</code></pre>
<p>After creating a record I want to run some particular code; suppose I want to send a notification to admins by calling a <code>sendNotification()</code> method. But I don't want to call this method from inside an API handler.</p>
<p>I know that Django signals provide a similar feature that can be used to run code after creating, updating, or deleting a row. But I don't know what the appropriate equivalent is in NestJS.</p>
|
<python><django><nestjs><prisma><django-signals>
|
2023-12-05 11:45:38
| 1
| 452
|
Barun Bhattacharjee
|
77,605,641
| 963,165
|
How do I Configure and Initialize Python.NET in a C# Console App on Linux containerized with Docker?
|
<p>I am trying to run a C#/.NET console application that uses the Python.NET library to interpret Python scripts at runtime. I want this app to run on Linux with a Docker container.</p>
<p>The problem I am having is that Python.NET requires you to initialize the <code>Runtime.PythonDLL</code> property with the path to the Python library binary.</p>
<p>The answer on <a href="https://stackoverflow.com/questions/69670715/how-to-set-runtime-pythondll-for-pythonnet-for-console-wpf-net-application-p">this post</a> makes me suspect I am looking for a file named e.g. <code>libpython3.12.so</code>.</p>
<p>I have created a dockerfile (see below) that installs the <code>python:3</code> image to a virtual environment and copies the result into my final environment. But I do not see any file named like <code>libpython3.12.so</code>. If I try to provide the <code>/app/python/bin/python</code> executable (which <em>does</em> exist), Python.NET throws an exception (see below) that the file does not exist.</p>
<p>I can't find an explicit guide to how this should be done, does one exist?</p>
<p>How should I configure my dockerfile, and what is the path to the <code>libpython*.so</code> file that I should assign to <code>Runtime.PythonDLL</code>?</p>
<hr />
<p>The .dockerfile:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY ["Farpoint.Service/Farpoint.Service.csproj", "Farpoint.Service/"]
RUN dotnet restore "Farpoint.Service/Farpoint.Service.csproj"
COPY . .
WORKDIR "/src/Farpoint.Service"
RUN dotnet build "Farpoint.Service.csproj" -c Release -o /app/build
# BEGIN Prepare python env
FROM python:3 as python
RUN python -m venv /venv
COPY --from=build /app/build/python/requirements.txt .
RUN /venv/bin/python -m pip install -r requirements.txt
# END Prepare python env
FROM build AS publish
RUN dotnet publish "Farpoint.Service.csproj" -c Release -o /app/publish /p:UseAppHost=false
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
# BEGIN Copy python env
COPY --from=python /venv /app/python
# END Copy python env
ENTRYPOINT ["dotnet", "Farpoint.Service.dll"]
</code></pre>
<hr />
<p>The exception:</p>
<blockquote>
<p>DllNotFoundException: Could not load /app/python/bin/python with flags RTLD_NOW | RTLD_GLOBAL: /app/python/bin/python: cannot open shared object file: No such file or directory</p>
</blockquote>
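<p>A possible approach (hedged, an assumption rather than something stated in the question): a virtual environment does not contain libpython, so copying only <code>/venv</code> leaves the shared library behind in the <code>python:3</code> stage. The official images typically build CPython with <code>--enable-shared</code>, placing a file like <code>libpython3.12.so.1.0</code> under <code>/usr/local/lib</code>, so that directory (or the whole <code>/usr/local</code>) would also need to be copied into the final stage. The interpreter itself can report the library name and directory:</p>

```shell
# Ask CPython where its shared library lives.  LDLIBRARY is the library
# file name (e.g. libpython3.12.so.1.0, or a .a for static builds) and
# LIBDIR its directory; their join is the path for Runtime.PythonDLL.
python3 -c "import sysconfig, os; print(os.path.join(sysconfig.get_config_var('LIBDIR') or '', sysconfig.get_config_var('LDLIBRARY') or ''))"
```

<p>Running this inside the <code>python:3</code> build stage would show the exact path to copy and to assign to <code>Runtime.PythonDLL</code> (the file name depends on the Python version in the image).</p>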
|
<python><c#><linux><docker><python.net>
|
2023-12-05 10:55:59
| 0
| 552
|
Jim Noble
|
77,605,480
| 3,973,269
|
Color around the figure in python plot (matplotlib)
|
<p>I have a matplotlib 3D bar plot, and within the grid I understand there is a background color. However, that color seems to extend outside the grid as well, forming a gray-ish square. I would like to get rid of that, since it doesn't make sense and looks odd with the text around it.
Especially on the right side, the numbers 40, 60 and 80 sit on the border. I know I could rotate the view to another angle, but I would rather not have this gray-ish field around the figure at all.</p>
<p>The figure:</p>
<p><a href="https://i.sstatic.net/zR7Nm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zR7Nm.png" alt="enter image description here" /></a></p>
<p>The code:</p>
<pre><code>plt.figure()
ax = plt.axes(projection='3d')
ax.bar3d(xpos,ypos,zpos,dx,dy,dz, alpha=0.4)
</code></pre>
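<p>A hedged sketch (API names per recent matplotlib; older versions spell the axis objects <code>ax.w_xaxis</code> etc.): the gray area is usually a combination of the three 3D panes and the axes background patch, and both can be made transparent. The bar data below is made up for the example, since <code>xpos</code> etc. are not given in the question.</p>

```python
# Hide the gray panes and the axes background rectangle of a 3D axes.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted runs
import matplotlib.pyplot as plt
import numpy as np

# stand-in data (the question's xpos/ypos/... are not shown)
xpos, ypos, zpos = np.zeros(3), np.arange(3), np.zeros(3)
dx = dy = np.ones(3)
dz = np.array([1.0, 2.0, 3.0])

fig = plt.figure()
ax = plt.axes(projection='3d')
ax.bar3d(xpos, ypos, zpos, dx, dy, dz, alpha=0.4)
ax.patch.set_alpha(0)                    # the rectangle behind the 3D box
for axis in (ax.xaxis, ax.yaxis, ax.zaxis):
    axis.set_pane_color((1, 1, 1, 0))    # fully transparent panes
fig.savefig('bars.png')
```

<p>If only the area outside the grid should go while the panes stay gray, dropping the loop and keeping <code>ax.patch.set_alpha(0)</code> may already be enough.</p>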
|
<python><matplotlib><plot><3d>
|
2023-12-05 10:30:08
| 1
| 569
|
Mart
|
77,605,418
| 13,919,925
|
How to get JWK URL for epic fhir resources?
|
<p>I'm using the open.epic documentation (<a href="https://fhir.epic.com/Documentation?docId=oauth2" rel="nofollow noreferrer">https://fhir.epic.com/Documentation?docId=oauth2</a>).
I'm able to create an application.
But at the bottom it asks me to upload a public key and also to add a JWK URL (which is not required); for accessing the public key, I feel I need to add this for authenticating the user. <a href="https://i.sstatic.net/yXx2g.png" rel="nofollow noreferrer">Here is the image reference.</a></p>
<p>I don't know where I will get this JWK URL from.
Much of the documentation says to hit</p>
<pre><code>GET https://fhir.epic.com/interconnect-fhir-oauth/api/FHIR/R4/.well-known/smart-configuration HTTP/1.1
Accept: application/json
</code></pre>
<p>the above endpoint to get the JWK URL, but there is nothing related to it in the response.
Or do I need to create an endpoint in my project and then pass its URL here? If so, what should that API look like?
I'm using djangorestframework. If anyone can help me with this, that would be great.</p>
|
<python><hl7-fhir><jwk><smart-on-fhir>
|
2023-12-05 10:21:50
| 1
| 302
|
sandeepsinghnegi
|
77,605,362
| 12,415,855
|
How to get HTML when requesting this website?
|
<p>I try to parse the HTML content using the following code:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = "https://www.thespruce.com/christmas-village-display-ideas-8407777"
page = requests.get(url)
soup = BeautifulSoup (page.content, 'lxml')
print(soup.prettify())
</code></pre>
<p>But I only get the following response:</p>
<pre><code><html>
<body>
<p>
Signal - Not Acceptable
</p>
</body>
</html>
</code></pre>
<p>Is it somehow possible to get the page-content using <code>requests</code>?</p>
<p>(I need this without using <code>selenium</code>)</p>
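<p>A hedged sketch (an assumption, not a guaranteed fix): the "Not Acceptable" page is typically a bot block triggered by requests' default <code>User-Agent</code>, so sending browser-like headers sometimes returns the real HTML. The site can still block by other means (TLS or JavaScript fingerprinting), in which case headers alone will not help.</p>

```python
# Build a requests session with browser-like headers; sites that only
# inspect the User-Agent / Accept headers may then serve the real page.
import requests

def make_browser_session() -> requests.Session:
    s = requests.Session()
    s.headers.update({
        "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                       "AppleWebKit/537.36 (KHTML, like Gecko) "
                       "Chrome/120.0 Safari/537.36"),
        "Accept": ("text/html,application/xhtml+xml,application/xml;"
                   "q=0.9,*/*;q=0.8"),
        "Accept-Language": "en-US,en;q=0.9",
    })
    return s

# usage (network call, not executed here):
# page = make_browser_session().get("https://www.thespruce.com/christmas-village-display-ideas-8407777")
# soup = BeautifulSoup(page.content, 'lxml')
```

<p>If this still returns the block page, the site is filtering on more than headers, and a real browser (or an API the site offers) would be needed.</p>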
|
<python><web-scraping><beautifulsoup><python-requests>
|
2023-12-05 10:13:45
| 1
| 1,515
|
Rapid1898
|
77,605,326
| 5,763,590
|
Streamlit pdf file cannot be opened after downloading with st.download_button
|
<p>I want to download a pdf on button click in my streamlit application.
I am using <a href="https://docs.streamlit.io/library/api-reference/widgets/st.download_button" rel="nofollow noreferrer"><code>st.download_button</code></a>, like this:</p>
<pre><code>st.download_button(
    label="Download",
    key="download_button",
    on_click=None,  # You can specify a callback function if needed
    file_name="MyDocument.pdf",
    data="MyDocumentSource.pdf",
    help="Click to download.",
)
</code></pre>
<p>The pdf file is in the same directory as the python script, and I can open the pdf file as such. However, after downloading (which also works), I cannot open the downloaded pdf file and I get: "file could not be opened due to unsupported datatype".</p>
<p>What am I doing wrong?</p>
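<p>The likely bug (hedged): <code>data</code> is the <em>content</em> to serve, not a path, so passing <code>"MyDocumentSource.pdf"</code> makes the button download a tiny text file containing that literal string, which no PDF viewer can open. Reading the file's bytes first fixes it; a stdlib sketch of the distinction (the stand-in file contents below are made up for the demo):</p>

```python
# data="MyDocumentSource.pdf" would serve the 20-byte string itself;
# the fix is to read the file's bytes and pass those instead.
from pathlib import Path

src = Path("MyDocumentSource.pdf")
src.write_bytes(b"%PDF-1.4 minimal stand-in")  # stand-in for the real PDF

pdf_bytes = src.read_bytes()  # <-- pass this as data=
# st.download_button(label="Download", data=pdf_bytes,
#                    file_name="MyDocument.pdf", mime="application/pdf")
```

<p>Setting <code>mime="application/pdf"</code> as well tells the browser what kind of file it is receiving.</p>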
|
<python><pdf><download><streamlit>
|
2023-12-05 10:06:12
| 1
| 10,287
|
mrk
|
77,605,270
| 7,425,726
|
Find the closest distance between points following a set of LineStrings
|
<p>I have two datasets of Point coordinates (representing electrical installations) and would like to find the closest match between the two datasets following a given set of LineStrings (representing the cables). As an example let's take the following situation:</p>
<pre><code>points1 = gpd.GeoDataFrame({'geometry' : [Point(0,1), Point(3,3)]})
points3 = gpd.GeoDataFrame({'geometry' : [Point(1,0), Point(4,4)]})
lines = gpd.GeoDataFrame({'geometry' : [LineString([Point(0,0),Point(0,4)]),LineString([Point(0,4),Point(4,4)]), LineString([Point(2,4),Point(2,0)]),LineString([Point(1,0),Point(3,0), Point(3,3)])]})
</code></pre>
<p><a href="https://i.sstatic.net/phxiV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/phxiV.png" alt="enter image description here" /></a></p>
<p>Here we can see that red point (0,1) is closest to green point (4,4) when following the lines (distance 7) and red point (3,3) is closest to green point (1,0).</p>
<p>How can I automatically calculate the closest green point for each red point? The lines can sometimes be split into smaller segments like in the example and not all red/green points are necessarily on the endpoint of one of the linestrings.</p>
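<p>A minimal pure-Python sketch of the graph approach (real data would usually use shapely for snapping/splitting plus networkx for the shortest paths): split each cable segment at any point lying on it, build a weighted graph, then run Dijkstra from each red point to every green point. The sketch assumes points lie exactly on the segments and skips splitting at crossings that are not shared vertices, which real data would also need.</p>

```python
# Build a graph from the example's line segments, inserting the red and
# green points as nodes where they lie on a segment, then find the
# nearest green point to each red point along the network.
import heapq
from math import hypot

segments = [((0, 0), (0, 4)), ((0, 4), (4, 4)), ((2, 4), (2, 0)),
            ((1, 0), (3, 0)), ((3, 0), (3, 3))]
reds = [(0, 1), (3, 3)]
greens = [(1, 0), (4, 4)]

def on_segment(p, a, b, tol=1e-9):
    # collinear with a-b and between its endpoints
    cross = (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])
    if abs(cross) > tol:
        return False
    dot = (p[0]-a[0])*(b[0]-a[0]) + (p[1]-a[1])*(b[1]-a[1])
    return 0 <= dot <= (b[0]-a[0])**2 + (b[1]-a[1])**2

graph = {}
def add_edge(u, v):
    w = hypot(u[0]-v[0], u[1]-v[1])
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

for a, b in segments:
    pts = [a, b] + [p for p in reds + greens if on_segment(p, a, b)]
    pts = sorted(set(pts), key=lambda p: hypot(p[0]-a[0], p[1]-a[1]))
    for u, v in zip(pts, pts[1:]):   # chain consecutive points on the segment
        add_edge(u, v)

def dijkstra(src):
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist  # network distance to every reachable node

for r in reds:
    d = dijkstra(r)
    best = min(greens, key=lambda g: d.get(g, float('inf')))
    print(r, '->', best, d.get(best, float('inf')))
```

<p>For the example this reports (0, 1) -&gt; (4, 4) at distance 7 and (3, 3) -&gt; (1, 0) at distance 5, matching the figure; unreachable greens come back as infinity.</p>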
|
<python><geopandas><shapely>
|
2023-12-05 09:57:38
| 2
| 1,734
|
pieterbons
|
77,605,261
| 14,430,730
|
What if a Typer command returns a value
|
<p>I'm using <a href="https://typer.tiangolo.com/" rel="noreferrer">Typer</a> to implement a CLI tool. For testing purposes, I'd like some commands to return a value, but I'm not sure whether that's safe. I didn't find anything related in its <a href="https://typer.tiangolo.com/" rel="noreferrer">user guide</a>.</p>
<p>As Typer uses <a href="https://click.palletsprojects.com/en/8.1.x/" rel="noreferrer">Click</a>, I found the following statements in <a href="https://click.palletsprojects.com/en/8.1.x/commands/#:%7E:text=When%20a%20Click%20script%20is%20invoked%20as%20command%20line%20application%20(through%20BaseCommand.main())%20the%20return%20value%20is%20ignored" rel="noreferrer">Click user guide</a>:</p>
<blockquote>
<p>When a Click script is invoked as command line application (through BaseCommand.main()) the return value is ignored unless the standalone_mode is disabled in which case it’s bubbled through.</p>
</blockquote>
<p>Does this behavior also apply to Typer?</p>
<hr />
<p>As a minimum example, I'd like to do change my original code</p>
<pre><code>@app.command()
def my_command():
    do_something()
</code></pre>
<p>to</p>
<pre><code>@app.command()
def my_command():
    result = do_something()
    return result
</code></pre>
<p>where <code>result</code> is only for testing so that I can test <code>my_command</code> with:</p>
<pre><code>from typer.testing import CliRunner
from myapp import app

def test_my_command():
    result = CliRunner().invoke(app)
    assert result.return_value == expected_value
</code></pre>
<p>My question is whether the change (adding <code>return result</code>) will influence the behaviour seen by users.</p>
|
<python><typer>
|
2023-12-05 09:55:42
| 2
| 336
|
XuanInsr
|
77,605,089
| 8,087,322
|
__init__ in overridden classmethod
|
<p>I have a small class hierarchy with similar methods and <code>__init__()</code>, but slightly different class <code>read()</code> methods. Specifically, the child class needs to prepare the file name a bit before reading (but the reading itself is the same):</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
    def __init__(self, pars):
        self.pars = pars

    @classmethod
    def read(cls, fname):
        pars = some_method_to_read_the_file(fname)
        return cls(pars)

class Bar(Foo):
    @classmethod
    def read(cls, fname):
        fname0 = somehow_prepare_filename(fname)
        return cls.__base__.read(fname0)
</code></pre>
<p>The problem here is that <code>Bar.read(…)</code> returns a <strong>Foo</strong> object, not a <strong>Bar</strong> object. How can I change the code so that an object of the correct class is returned?</p>
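<p>The fix: calling <code>super().read(...)</code> inside the classmethod keeps <code>cls</code> bound to the subclass, so <code>Foo.read</code>'s <code>cls(pars)</code> builds a <strong>Bar</strong>; <code>cls.__base__.read(...)</code> re-binds <code>cls</code> to <code>Foo</code>, which is what causes the bug. A runnable sketch (the two helper functions are stand-ins, since the real ones are not shown):</p>

```python
# super().read(...) in a classmethod preserves cls, so the parent's
# `return cls(pars)` constructs the subclass.
def some_method_to_read_the_file(fname):   # stand-in for the real reader
    return {'fname': fname}

def somehow_prepare_filename(fname):       # stand-in for the real prep
    return fname + '.dat'

class Foo:
    def __init__(self, pars):
        self.pars = pars

    @classmethod
    def read(cls, fname):
        pars = some_method_to_read_the_file(fname)
        return cls(pars)                   # cls stays Bar when called via Bar

class Bar(Foo):
    @classmethod
    def read(cls, fname):
        fname0 = somehow_prepare_filename(fname)
        return super().read(fname0)        # not cls.__base__.read(...)

obj = Bar.read('x')                        # a Bar, not a Foo
```

<p><code>super()</code> with no arguments works inside classmethods in Python 3 and forwards the current <code>cls</code> automatically.</p>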
|
<python><class-method>
|
2023-12-05 09:31:03
| 1
| 593
|
olebole
|
77,605,033
| 9,827,719
|
Python 3.12 does not support the package cloud-sql-python-connector on Windows 11
|
<p>I have upgraded from Python 3.11 to Python 3.12. After upgrading, I am unable to install the package "cloud-sql-python-connector".</p>
<p>My colleague confirmed that he can install it on Python 3.11, but not 3.12.</p>
<p><strong>requirements.txt</strong></p>
<pre><code>cloud-sql-python-connector
flask
flask-cors
google-api-python-client
google-cloud-secret-manager
jira
matplotlib
numpy
PyYAML
python-pptx
pg8000
sqlalchemy
python-dateutil
Pillow
requests
</code></pre>
<p><strong>Command output:</strong></p>
<pre><code>Collecting cloud-sql-python-connector
Using cached cloud_sql_python_connector-1.4.3-py2.py3-none-any.whl.metadata (24 kB)
Collecting aiohttp (from cloud-sql-python-connector)
Using cached aiohttp-3.9.1-cp312-cp312-win_amd64.whl.metadata (7.6 kB)
Collecting cryptography>=38.0.3 (from cloud-sql-python-connector)
Using cached cryptography-41.0.7-cp37-abi3-win_amd64.whl.metadata (5.3 kB)
Requirement already satisfied: Requests in c:\users\admin\code\jira-statistics-as-powerpoint\venv\lib\site-packages (from cloud-sql-python-connector) (2.31.0)
Requirement already satisfied: google-auth in c:\users\admin\code\jira-statistics-as-powerpoint\venv\lib\site-packages (from cloud-sql-python-connector) (2.24.0)
Collecting cffi>=1.12 (from cryptography>=38.0.3->cloud-sql-python-connector)
Using cached cffi-1.16.0-cp312-cp312-win_amd64.whl.metadata (1.5 kB)
Collecting attrs>=17.3.0 (from aiohttp->cloud-sql-python-connector)
Using cached attrs-23.1.0-py3-none-any.whl (61 kB)
Collecting multidict<7.0,>=4.5 (from aiohttp->cloud-sql-python-connector)
Using cached multidict-6.0.4.tar.gz (51 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting yarl<2.0,>=1.0 (from aiohttp->cloud-sql-python-connector)
Using cached yarl-1.9.3-cp312-cp312-win_amd64.whl.metadata (29 kB)
Collecting frozenlist>=1.1.1 (from aiohttp->cloud-sql-python-connector)
Using cached frozenlist-1.4.0.tar.gz (90 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting aiosignal>=1.1.2 (from aiohttp->cloud-sql-python-connector)
Using cached aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Requirement already satisfied: cachetools<6.0,>=2.0.0 in c:\users\admin\code\jira-statistics-as-powerpoint\venv\lib\site-packages (from google-auth->cloud-sql-python-connector) (5.3.2)
Requirement already satisfied: pyasn1-modules>=0.2.1 in c:\users\admin\code\jira-statistics-as-powerpoint\venv\lib\site-packages (from google-auth->cloud-sql-python-connector) (0.3.0)
Requirement already satisfied: rsa<5,>=3.1.4 in c:\users\admin\code\jira-statistics-as-powerpoint\venv\lib\site-packages (from google-auth->cloud-sql-python-connector) (4.9)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\admin\code\jira-statistics-as-powerpoint\venv\lib\site-packages (from Requests->cloud-sql-python-connector) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in c:\users\admin\code\jira-statistics-as-powerpoint\venv\lib\site-packages (from Requests->cloud-sql-python-connector) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\admin\code\jira-statistics-as-powerpoint\venv\lib\site-packages (from Requests->cloud-sql-python-connector) (2.1.0)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\admin\code\jira-statistics-as-powerpoint\venv\lib\site-packages (from Requests->cloud-sql-python-connector) (2023.11.17)
Collecting pycparser (from cffi>=1.12->cryptography>=38.0.3->cloud-sql-python-connector)
Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Requirement already satisfied: pyasn1<0.6.0,>=0.4.6 in c:\users\admin\code\jira-statistics-as-powerpoint\venv\lib\site-packages (from pyasn1-modules>=0.2.1->google-auth->cloud-sql-python-connector) (0.5.1)
Using cached cloud_sql_python_connector-1.4.3-py2.py3-none-any.whl (36 kB)
Using cached cryptography-41.0.7-cp37-abi3-win_amd64.whl (2.7 MB)
Using cached aiohttp-3.9.1-cp312-cp312-win_amd64.whl (362 kB)
Using cached cffi-1.16.0-cp312-cp312-win_amd64.whl (181 kB)
Using cached yarl-1.9.3-cp312-cp312-win_amd64.whl (75 kB)
Building wheels for collected packages: frozenlist, multidict
Building wheel for frozenlist (pyproject.toml): started
Building wheel for frozenlist (pyproject.toml): finished with status 'error'
Building wheel for multidict (pyproject.toml): started
Building wheel for multidict (pyproject.toml): finished with status 'error'
Failed to build frozenlist multidict
error: subprocess-exited-with-error
Building wheel for frozenlist (pyproject.toml) did not run successfully.
exit code: 1
[33 lines of output]
*********************
* Accelerated build *
*********************
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-312
creating build\lib.win-amd64-cpython-312\frozenlist
copying frozenlist\__init__.py -> build\lib.win-amd64-cpython-312\frozenlist
running egg_info
writing frozenlist.egg-info\PKG-INFO
writing dependency_links to frozenlist.egg-info\dependency_links.txt
writing top-level names to frozenlist.egg-info\top_level.txt
reading manifest file 'frozenlist.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '*.lib' found anywhere in distribution
warning: no previously-included files matching '*.dll' found anywhere in distribution
warning: no previously-included files matching '*.a' found anywhere in distribution
warning: no previously-included files matching '*.obj' found anywhere in distribution
warning: no previously-included files found matching 'frozenlist\*.html'
no previously-included directories found matching 'docs\_build'
adding license file 'LICENSE'
writing manifest file 'frozenlist.egg-info\SOURCES.txt'
copying frozenlist\__init__.pyi -> build\lib.win-amd64-cpython-312\frozenlist
copying frozenlist\_frozenlist.pyx -> build\lib.win-amd64-cpython-312\frozenlist
copying frozenlist\py.typed -> build\lib.win-amd64-cpython-312\frozenlist
running build_ext
building 'frozenlist._frozenlist' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for frozenlist
error: subprocess-exited-with-error
Building wheel for multidict (pyproject.toml) did not run successfully.
exit code: 1
[74 lines of output]
*********************
* Accelerated build *
*********************
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-312
creating build\lib.win-amd64-cpython-312\multidict
copying multidict\_abc.py -> build\lib.win-amd64-cpython-312\multidict
copying multidict\_compat.py -> build\lib.win-amd64-cpython-312\multidict
copying multidict\_multidict_base.py -> build\lib.win-amd64-cpython-312\multidict
copying multidict\_multidict_py.py -> build\lib.win-amd64-cpython-312\multidict
copying multidict\__init__.py -> build\lib.win-amd64-cpython-312\multidict
running egg_info
writing multidict.egg-info\PKG-INFO
writing dependency_links to multidict.egg-info\dependency_links.txt
writing top-level names to multidict.egg-info\top_level.txt
reading manifest file 'multidict.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files found matching 'multidict\_multidict.html'
warning: no previously-included files found matching 'multidict\*.so'
warning: no previously-included files found matching 'multidict\*.pyd'
warning: no previously-included files found matching 'multidict\*.pyd'
no previously-included directories found matching 'docs\_build'
adding license file 'LICENSE'
writing manifest file 'multidict.egg-info\SOURCES.txt'
C:\Users\admin\AppData\Local\Temp\pip-build-env-pk6a7buh\overlay\Lib\site-packages\setuptools\command\build_py.py:207: _Warning: Package 'multidict._multilib' is absent from the `packages` configuration.
!!
********************************************************************************
############################
# Package would be ignored #
############################
Python recognizes 'multidict._multilib' as an importable package[^1],
but it is absent from setuptools' `packages` configuration.
This leads to an ambiguous overall configuration. If you want to distribute this
package, please make sure that 'multidict._multilib' is explicitly added
to the `packages` configuration field.
Alternatively, you can also rely on setuptools' discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
If you don't want 'multidict._multilib' to be distributed and are
already explicitly excluding 'multidict._multilib' via
`find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
you can try to use `exclude_package_data`, or `include-package-data=False` in
combination with a more fine grained `package-data` configuration.
You can read more about "package data files" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/datafiles.html
[^1]: For Python, any directory (with suitable naming) can be imported,
even if it does not contain any `.py` files.
On the other hand, currently there is no concept of package data
directory, all directories are treated like packages.
********************************************************************************
!!
check.warn(importable)
copying multidict\__init__.pyi -> build\lib.win-amd64-cpython-312\multidict
copying multidict\py.typed -> build\lib.win-amd64-cpython-312\multidict
running build_ext
building 'multidict._multidict' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for multidict
ERROR: Could not build wheels for frozenlist, multidict, which is required to install pyproject.toml-based projects
</code></pre>
<p>Are there any solutions on how to install "cloud-sql-python-connector" with Python 3.12?</p>
|
<python>
|
2023-12-05 09:21:31
| 1
| 1,400
|
Europa
|
77,604,957
| 22,538,132
|
How to measure inner dimensions of noisy bin box using histogram
|
<p>I have a bin box whose orientation is considered to be zero in all directions, and I want to measure the inner dimensions of the box using histograms of x, y, and z.</p>
<p><a href="https://i.sstatic.net/KjtQK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KjtQK.png" alt="box" /></a></p>
<p><a href="https://i.sstatic.net/kSQU1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kSQU1.png" alt="histogram" /></a></p>
<p>The x and y inner dimensions are the distances spanned by the inner (almost) flat histogram region between the wall peaks, while the z inner dimension is the distance between the lowest flat point and the maximum z.</p>
<p>How can I measure the inner dimensions of the bin box? Thanks in advance.</p>
<p>Here is the <a href="https://drive.google.com/file/d/1eS15ERIaa7MtiT4a9OsBlITyU_rq1ZGI/view?usp=sharing" rel="nofollow noreferrer">npy file</a> of shape (N, 3) of N points coordinates X, Y, Z.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
bin_pts = np.load('bin_box.npy')
axes = plt.subplots(1, 3, figsize=(12, 4))[1]
axes[0].hist(bin_pts[:, 0], bins=200, edgecolor='red', alpha=0.5)
axes[0].set_title('Histogram X')
axes[0].set_xlabel('Value')
axes[0].set_ylabel('Frequency')
axes[1].hist(bin_pts[:, 1], bins=200, edgecolor='green', alpha=0.5)
axes[1].set_title('Histogram Y')
axes[1].set_xlabel('Value')
axes[1].set_ylabel('Frequency')
axes[2].hist(bin_pts[:, 2], bins=200, edgecolor='blue', alpha=0.5)
axes[2].set_title('Histogram Z')
axes[2].set_xlabel('Value')
axes[2].set_ylabel('Frequency')
plt.tight_layout()
plt.show()
</code></pre>
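<p>A minimal sketch of the peak-based measurement described above (it does not use the linked .npy file; a synthetic 1-D signal stands in, and the <code>wall_frac</code> threshold is an assumption to tune against real noise):</p>

```python
import numpy as np

def inner_extent(values, bins=200, wall_frac=0.5):
    # Walls of the bin show up as the tallest histogram peaks; treat any
    # bin above wall_frac * max count as a "wall" bin (wall_frac is an
    # assumed threshold -- tune it for real, noisy scans).
    counts, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    wall = np.flatnonzero(counts > wall_frac * counts.max())
    # Split wall bins into contiguous groups: first group = left wall,
    # last group = right wall.
    groups = np.split(wall, np.flatnonzero(np.diff(wall) > 1) + 1)
    left, right = groups[0], groups[-1]
    # Inner size: inner edge of the left wall to inner edge of the right wall.
    return centers[right[0]] - centers[left[-1]]

# Synthetic 1-D stand-in: walls at 0 and 10, a sparse flat floor between.
vals = np.concatenate([np.zeros(1000), np.full(1000, 10.0),
                       np.linspace(0.1, 9.9, 200)])
print(inner_extent(vals))  # close to 10
```

<p>For z, the same grouping idea applies between the floor peak (the lowest flat region) and the maximum z; <code>scipy.signal.find_peaks</code> is a more robust alternative for noisy histograms.</p>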
|
<python><numpy><histogram><measure>
|
2023-12-05 09:04:36
| 1
| 304
|
bhomaidan90
|
77,604,930
| 9,974,205
|
Problem specifying an objective function in optimization problem
|
<p>I have the following optimization problem written in Python</p>
<pre><code>from pulp import LpProblem, LpVariable, lpSum, LpMaximize
prob = LpProblem("grouping", LpMaximize)  # problem object (name hypothetical; omitted from the original snippet)
# Person i in group j
x = {(i, j): LpVariable(cat="Binary", name=f"x_{i}_{j}") for i in result_sum.index for j in result_sum.index}
# group j exists
y = {j: LpVariable(cat="Binary", name=f"y_{j}") for j in result_sum.index}
# objective function, maximize the punctuation of the people within each group
for i in result_sum.index:
for j in result_sum.index:
for k in result_sum.index:
if i != j:
prob += punctuation.loc[i, j] * x[i, k] * x[j, k]
# Each person belongs to one group
for i in result_sum.index:
prob += lpSum(x[i, j] for j in result_sum.index) == 1
# Each group has between 2 and 3 members
for j in result_sum.index:
prob += lpSum(x[i, j] for i in result_sum.index) >= 2 * y[j]
prob += lpSum(x[i, j] for i in result_sum.index) <= 3 * y[j]
# Solve the problem
prob.solve()
</code></pre>
<p>If I run this code I get the error</p>
<pre><code>line 91, in <module>
prob += result_sum.loc[i, j] * x[i, k] * x[j, k]
raise TypeError("Non-constant expressions cannot be multiplied")
TypeError: Non-constant expressions cannot be multiplied
</code></pre>
<p>Can someone please help me solve this problem?</p>
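<p>For context, PuLP only accepts linear expressions, which is why multiplying two decision variables raises this error. The standard workaround is to linearize each product of binaries with an auxiliary binary variable. A quick stdlib check (no PuLP required) that the three linearization constraints pin the auxiliary variable to exactly the product:</p>

```python
# The product of two binaries x*y can be replaced by a new binary z with
# three linear constraints:  z <= x,  z <= y,  z >= x + y - 1.
# Check that those constraints pin z to exactly x*y for every binary pair:
for x in (0, 1):
    for y in (0, 1):
        feasible = [z for z in (0, 1)
                    if z <= x and z <= y and z >= x + y - 1]
        assert feasible == [x * y]
print("linearization is exact")
```

<p>In the model above that would mean (keeping the question's names) introducing binary variables <code>z[i, j, k]</code>, building the objective as a single <code>lpSum</code> over <code>punctuation.loc[i, j] * z[i, j, k]</code>, and adding the three constraints per triple.</p>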
|
<python><matrix><optimization><compiler-errors><pulp>
|
2023-12-05 08:59:58
| 1
| 503
|
slow_learner
|
77,604,929
| 832,490
|
How to call asynchronous functions in cascade?
|
<p>Given the following code, I would like to know if there is a way to reduce it to a single line without using parentheses to resolve the promise.</p>
<pre><code>context = await browser.new_context(
viewport={
"width": 1600,
"height": 1200,
},
device_scale_factor=2,
)
page = await context.new_page()
</code></pre>
<p>Something like this:</p>
<pre><code>page = await pipe(browser.new_context(viewport={"width": 1600, "height": 1200}, device_scale_factor=2), lambda c: c.new_page())
</code></pre>
<p>Or this:</p>
<pre><code>result = await some_async_function().then(another_async_function).then(yet_another_async_function)
</code></pre>
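<p>For reference, a minimal <code>pipe</code> helper like the one sketched above is only a few lines of stdlib asyncio (the name <code>pipe</code> and the toy steps are hypothetical, not a Playwright API):</p>

```python
import asyncio

async def pipe(value, *steps):
    # Await the initial awaitable, then feed the result through each
    # async step in order (hypothetical helper, not a Playwright API).
    result = await value
    for step in steps:
        result = await step(result)
    return result

async def demo():
    async def start():
        return 2

    async def double(x):
        return x * 2

    # e.g.: page = await pipe(browser.new_context(...), lambda c: c.new_page())
    return await pipe(start(), double, double)

print(asyncio.run(demo()))  # 8
```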
|
<python><python-asyncio>
|
2023-12-05 08:59:41
| 1
| 1,009
|
Rodrigo
|
77,604,841
| 4,451,521
|
Finding equal intervals for two divided ranges
|
<p>I have been scratching my head trying several approaches without success.
I have four values <code>min1, max1, min2, max2</code>, and the condition is that <code>min2 > max1</code>.</p>
<p>These values can be negative and can be floats.</p>
<p>I want to subdivide these into subintervals with the conditions:</p>
<ol>
<li>the subintervals have to be of the same length</li>
<li>They should be contained in the original intervals</li>
</ol>
<p>I tried</p>
<pre><code>import numpy as np
# Define the intervals
min1, max1 = 0, 6
min2, max2 = 8, 17
# Check if min2 is greater than max1
if min2 <= max1:
raise ValueError("Invalid intervals: min2 must be greater than max1.")
# Specify the number of intervals
num_intervals = 5 # Adjust as needed
# Calculate the length of each interval
total_length = max2 - min1
interval_length = total_length / num_intervals
# Generate array with intervals
intervals = np.linspace(min1, max2, num=num_intervals + 1)
print("Original intervals:")
print(f"Interval 1: [{min1}, {max1}]")
print(f"Interval 2: [{min2}, {max2}]")
print("\nDivided intervals:")
for i in range(num_intervals):
start_point = intervals[i]
end_point = start_point + interval_length
print(f"Interval {i + 1}: [{start_point}, {end_point}]")
</code></pre>
<p>but this is obviously incorrect since the results</p>
<pre><code>Original intervals:
Interval 1: [0, 6]
Interval 2: [8, 17]
Divided intervals:
Interval 1: [0.0, 3.4]
Interval 2: [3.4, 6.8]
Interval 3: [6.8, 10.2]
Interval 4: [10.2, 13.6]
Interval 5: [13.6, 17.0]
</code></pre>
<p>We can see that interval 2's max is greater than max1, and also that interval 3's min is less than min2.</p>
<p>in reality this should have been for example</p>
<pre><code>Interval 1: [0.0, 3.0]
Interval 2: [3.0, 6.0]
Interval 3: [8.0, 11.0]
Interval 4: [11.0, 14.0]
Interval 5: [14.0, 17.0]
</code></pre>
<p>(Note that this is not the only solution; it could also have been</p>
<pre><code>Interval 1: [0.0, 1.5]
Interval 2: [1.5, 3.0]
Interval 3: [3.0, 4.5]
Interval 4: [4.5, 6.0]
Interval 5: [8.0, 9.5]
Interval 6: [9.5, 11.0]
Interval 7: [11.0, 12.5]
Interval 8: [12.5, 14.0]
Interval 9: [14.0, 15.5]
Interval 10: [15.5, 17.0]
</code></pre>
<p>It seems simple but I am stuck.</p>
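<p>One way to formalize this: the subinterval length must divide both spans exactly, so the largest valid length is the rational gcd of <code>max1 - min1</code> and <code>max2 - min2</code>. A sketch with exact arithmetic (the function name is made up):</p>

```python
from fractions import Fraction
from math import gcd

def equal_subintervals(min1, max1, min2, max2):
    # Work in exact rationals so float inputs don't accumulate error.
    a1, b1 = Fraction(min1).limit_denominator(), Fraction(max1).limit_denominator()
    a2, b2 = Fraction(min2).limit_denominator(), Fraction(max2).limit_denominator()
    len1, len2 = b1 - a1, b2 - a2
    # Largest step that divides both lengths exactly: the rational gcd.
    d = len1.denominator * len2.denominator // gcd(len1.denominator, len2.denominator)
    step = Fraction(gcd(int(len1 * d), int(len2 * d)), d)
    out = []
    for lo, hi in ((a1, b1), (a2, b2)):
        while lo < hi:
            out.append((float(lo), float(lo + step)))
            lo += step
    return out

print(equal_subintervals(0, 6, 8, 17))
# [(0.0, 3.0), (3.0, 6.0), (8.0, 11.0), (11.0, 14.0), (14.0, 17.0)]
```

<p>Dividing <code>step</code> by any positive integer yields the finer solutions, e.g. <code>step / 2</code> gives the 10-interval variant shown above.</p>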
|
<python><numpy>
|
2023-12-05 08:45:12
| 0
| 10,576
|
KansaiRobot
|
77,604,743
| 7,158,719
|
Is there a way to access pysparks executors and send jobs to them manually via Jupyter or Zeppelin notebook?
|
<p>Probably just a pipe dream, but is there a way to access PySpark's executors and send jobs to them manually in a Jupyter or Zeppelin notebook?</p>
<p>It probably goes against all PySpark conventions as well, but the idea is to access a running EMR cluster's executors (workers) and send them Python scripts to run. Sort of like Python's multiprocessing, where the pool is instead the executors themselves, and you just feed them a map or a list of the Python scripts' paths + arguments, or a function.</p>
<pre><code>pyspark_executors = sc.getExecutors()
def square(number):
return number ** 2
numbers = [1, 2, 3, 4, 5]
with pyspark_executors.Pool() as pool:
results = pool.map(square, numbers)
print(results)
</code></pre>
|
<python><apache-spark><pyspark><jupyter-notebook>
|
2023-12-05 08:29:15
| 2
| 547
|
Ranald Fong
|
77,604,519
| 7,988,415
|
How to reload a class imported from a module in python console?
|
<p>The way I imported it:</p>
<pre class="lang-py prettyprint-override"><code>from my_package.my_module import My_Class
</code></pre>
<p>Here, <code>my_package</code> is a subdirectory in my current project with an empty <code>__init__.py</code> in it.</p>
<p>Then I edited <code>My_Class</code> and wanted to reload it without restarting the python console. How do I do this?</p>
<p>None of <code>importlib.reload(My_Class)</code>, <code>importlib.reload(my_package.my_module)</code> or <code>importlib.reload(my_module)</code> works because <code>My_Class</code> is not a module and I didn't import a module.</p>
<p>Edit:
Since two comments mentioned this: I tried re-importing using the same command. It didn't work - still the "old" class gets imported. I even tried deleting the module file entirely, and it could still be re-imported in the same Python console with no problem.</p>
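<p>For what it's worth, only modules can be reloaded, so the usual pattern is: import the module object, reload it, then rebind the class name. A self-contained sketch (the temporary module file only exists to make the snippet runnable; in the question's case it would be <code>import my_package.my_module</code>, then <code>importlib.reload(my_package.my_module)</code>, then <code>My_Class = my_package.my_module.My_Class</code>):</p>

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True          # avoid a stale .pyc masking the edit
pkg_dir = tempfile.mkdtemp()
mod_path = pathlib.Path(pkg_dir) / "my_module.py"
mod_path.write_text("class My_Class:\n    VERSION = 1\n")
sys.path.insert(0, pkg_dir)

import my_module                        # reload() needs the *module* object
from my_module import My_Class
assert My_Class.VERSION == 1

mod_path.write_text("class My_Class:\n    VERSION = 2\n")
importlib.reload(my_module)             # re-executes the module's source
My_Class = my_module.My_Class           # rebind: the old name still points at the old class
print(My_Class.VERSION)  # 2
```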
|
<python><class><import><module>
|
2023-12-05 07:37:59
| 0
| 1,054
|
Alex
|
77,604,473
| 12,875,947
|
Ubuntu is not serving Static files from Django project. Getting Permission issue
|
<p>[Updated status]: This has been resolved by updating the permissions of the root folder. Updating the user in nginx.conf could be another option, but I have not tried that one.</p>
<p>===== Actual Question============</p>
<p>I have a django project deployed on Ubuntu 23.04 and using nginx and gunicorn to serve the website. Static setting is defined as follows:</p>
<pre><code>STATIC_DIR=os.path.join(BASE_DIR,'static')
STATIC_URL = '/static/'
if DEBUG:
STATICFILES_DIRS = [
STATIC_DIR,
]
else:
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.StaticFilesStorage'
</code></pre>
<p>We have Debug = False in Production.</p>
<p>Gunicorn service</p>
<pre><code>[Unit]
Description=gunicorn daemon
Requires=opinionsdeal.socket
After=network.target
[Service]
User=panelviewpoint
Group=www-data
WorkingDirectory=/home/panelviewpoint/opinionsdealnew
ExecStart=/home/panelviewpoint/opinionsdealnew/venv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/opinionsdeal.sock \
opinions_deal.wsgi:application
[Install]
WantedBy=multi-user.target
</code></pre>
<p>nginx file</p>
<pre><code>server {
server_name opinionsdeal.com www.opinionsdeal.com;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/panelviewpoint/opinionsdealnew;
}
location / {
include proxy_params;
proxy_pass http://unix:/run/opinionsdeal.sock;
}
}
</code></pre>
<p>SSL lines have been removed from nginx file in this code only but it is already there in server. I am using Django 4.2 and python 3.11</p>
<p>We are seeing that the website's HTML pages are being served, but static files like JS, CSS, and images are not.</p>
<p>I have deployed similar projects in the past on Ubuntu 20 and they worked well, but this time it is not working. I have given permission on the static folder to all users.</p>
<p>Please suggest if there is any solution.</p>
|
<python><django><ubuntu><nginx><gunicorn>
|
2023-12-05 07:28:15
| 1
| 1,886
|
Narendra Vishwakarma
|
77,604,377
| 2,587,904
|
Harmonizing data spread over multiple rows with duplicate values for some keys
|
<p>I am facing data spread over multiple rows.</p>
<p>Some rows hold duplicate data for a key, some hold semantically similar data, and others hold fresh new data (sometimes only for different keys, but sometimes also for the same key).</p>
<p>What could be a good strategy to make this data accessible? I want to have one row per key.</p>
<p>I am trying to work with data about companies. It is from ORBIS.
For one primary key multiple rows hold values that describe it.</p>
<p>In some cases it is about NACE codes and classifications in other cases (like this one outlined below) it is about ID numbers (tax and others).</p>
<p>What is a great way to harmonize the data (in a somewhat generic way)?</p>
<p>I have explored:</p>
<ul>
<li>collect_list(struct(for each non-grouping-key-column))</li>
<li>array_compact(collect_set) (for-each-non-grouping-key-column)</li>
<li>combining the first aggregation with a UDF that JSONifies the data and tries to match similar values. However, it only keeps the first occurrence of a key (and I sometimes need multiple).</li>
</ul>
<p>You can find the code below for a reproducible example.</p>
<p>The structure looks like this in pandas:</p>
<pre><code>import pandas as pd
d = pd.DataFrame(
[
{
"bv_d_id_number": "XX00000000000",
"national_id_number": "XX00000000000",
"national_id_label": "European VAT number",
"national_id_type": "European VAT number",
"trade_register_number": None,
"vat_per_tax_number": None,
"european_vat_number": None,
"lei_legal_entity_identifier": None,
"statistical_number": None,
"other_company_id_number": None,
"ip_identification_number": None,
"ip_identification_label": None,
"_13": None,
"ticker_symbol": None,
"isin_number": None
},
{
"bv_d_id_number": "XX00000000000",
"national_id_number": "00000000000",
"national_id_label": "TIN",
"national_id_type": "TIN",
"trade_register_number": None,
"vat_per_tax_number": None,
"european_vat_number": None,
"lei_legal_entity_identifier": None,
"statistical_number": None,
"other_company_id_number": None,
"ip_identification_number": None,
"ip_identification_label": None,
"_13": None,
"ticker_symbol": None,
"isin_number": None
},
{
"bv_d_id_number": "XX00000000000",
"national_id_number": "AA0000000",
"national_id_label": "CCIAA number",
"national_id_type": "Trade register number",
"trade_register_number": "AA0000000",
"vat_per_tax_number": "00000000000",
"european_vat_number": "XX00000000000",
"lei_legal_entity_identifier": None,
"statistical_number": None,
"other_company_id_number": None,
"ip_identification_number": None,
"ip_identification_label": None,
"_13": None,
"ticker_symbol": None,
"isin_number": None
},
{
"bv_d_id_number": "XX00000000000",
"national_id_number": "00000000000",
"national_id_label": "Codice fiscale",
"national_id_type": "VAT/Tax number",
"trade_register_number": None,
"vat_per_tax_number": "00000000000",
"european_vat_number": None,
"lei_legal_entity_identifier": None,
"statistical_number": None,
"other_company_id_number": None,
"ip_identification_number": None,
"ip_identification_label": None,
"_13": None,
"ticker_symbol": None,
"isin_number": None
},
{
"bv_d_id_number": "XX00000000000",
"national_id_number": "00000000000",
"national_id_label": "Partita IVA",
"national_id_type": "VAT/Tax number",
"trade_register_number": None,
"vat_per_tax_number": None,
"european_vat_number": None,
"lei_legal_entity_identifier": None,
"statistical_number": None,
"other_company_id_number": None,
"ip_identification_number": None,
"ip_identification_label": None,
"_13": None,
"ticker_symbol": None,
"isin_number": None
}
]
)
</code></pre>
<p>The cleanup functions are below:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql import DataFrame
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType  # used by harmonize_udf below
import json  # used by harmonize_data below
spark = SparkSession.builder.appName("orbis").master("local[2]").getOrCreate()
sdf = spark.createDataFrame(d.fillna(-1).fillna('NULL'))
collect_as_struct_list(sdf, grouping_keys=["bv_d_id_number"], mode="context").show()
root
|-- bv_d_id_number: string (nullable = true)
|-- column_values: array (nullable = false)
| |-- element: struct (containsNull = false)
| | |-- national_id_number: string (nullable = true)
| | |-- national_id_label: string (nullable = true)
| | |-- national_id_type: string (nullable = true)
| | |-- trade_register_number: string (nullable = true)
| | |-- vat_per_tax_number: string (nullable = true)
| | |-- european_vat_number: string (nullable = true)
| | |-- lei_legal_entity_identifier: string (nullable = true)
| | |-- statistical_number: string (nullable = true)
| | |-- other_company_id_number: string (nullable = true)
| | |-- ip_identification_number: string (nullable = true)
| | |-- ip_identification_label: string (nullable = true)
| | |-- _13: boolean (nullable = true)
| | |-- ticker_symbol: string (nullable = true)
| | |-- isin_number: string (nullable = true)
collect_as_struct_list(sdf, grouping_keys=["bv_d_id_number"], mode="individual_set").show()
root
|-- bv_d_id_number: string (nullable = true)
|-- national_id_number: array (nullable = false)
| |-- element: string (containsNull = false)
|-- national_id_label: array (nullable = false)
| |-- element: string (containsNull = false)
|-- national_id_type: array (nullable = false)
| |-- element: string (containsNull = false)
|-- trade_register_number: array (nullable = false)
| |-- element: string (containsNull = false)
|-- vat_per_tax_number: array (nullable = false)
| |-- element: string (containsNull = false)
|-- european_vat_number: array (nullable = false)
| |-- element: string (containsNull = false)
|-- lei_legal_entity_identifier: array (nullable = false)
| |-- element: string (containsNull = false)
|-- statistical_number: array (nullable = false)
| |-- element: string (containsNull = false)
|-- other_company_id_number: array (nullable = false)
| |-- element: string (containsNull = false)
|-- ip_identification_number: array (nullable = false)
| |-- element: string (containsNull = false)
|-- ip_identification_label: array (nullable = false)
| |-- element: string (containsNull = false)
|-- _13: array (nullable = false)
| |-- element: boolean (containsNull = false)
|-- ticker_symbol: array (nullable = false)
| |-- element: string (containsNull = false)
|-- isin_number: array (nullable = false)
| |-- element: string (containsNull = false)
def collect_as_struct_list(
df: DataFrame,
grouping_keys: list,
included_columns: list = None,
struct_name: str = "column_values",
mode="context",
) -> DataFrame:
if not isinstance(grouping_keys, list):
raise ValueError("grouping_keys must be a list of column names")
if not all(key in df.columns for key in grouping_keys):
raise ValueError("All keys in grouping_keys must be present in the DataFrame")
if included_columns is None:
included_columns = [
col_name for col_name in df.columns if col_name not in grouping_keys
]
elif not isinstance(included_columns, list) or not all(
col_name in df.columns for col_name in included_columns
):
raise ValueError(
"included_columns must be a list of valid column names from the DataFrame"
)
if mode == "context":
struct_cols = F.struct(*included_columns)
aggregated_df = df.groupBy(*grouping_keys).agg(
F.collect_list(struct_cols).alias(struct_name)
)
if mode == "individual_set":
aggregations = [
F.array_compact(F.collect_set(col)).alias(col) for col in included_columns
]
aggregated_df = df.groupBy(*grouping_keys).agg(*aggregations)
return aggregated_df
def harmonize_data(column_values):
# Initialize a dictionary to store unique values for each key
condensed_info = {}
for row in column_values:
# Convert each Row to a dictionary
entry = row.asDict()
for key, value in entry.items():
if value: # Check if the value is not None/Null
if key in condensed_info:
# If the key already exists, append the value to the list if it's unique
if value not in condensed_info[key]:
condensed_info[key].append(value)
else:
# Initialize a new list for this key
condensed_info[key] = [value]
# Optionally: Convert lists with a single value back to just the value
for key in condensed_info:
if len(condensed_info[key]) == 1:
condensed_info[key] = condensed_info[key][0]
return json.dumps(condensed_info)
harmonize_udf = udf(harmonize_data, StringType())
dd = collect_as_struct_list(sdf, grouping_keys=["bv_d_id_number"], mode="context")
result_df = dd.withColumn("harmonized_info", harmonize_udf(dd["column_values"]))
</code></pre>
<p>The particular version of spark is 3.5</p>
|
<python><apache-spark><nested><primary-key><data-cleaning>
|
2023-12-05 07:07:51
| 1
| 17,894
|
Georg Heiler
|
77,604,022
| 13,803,549
|
Is it possible to have a conditional statement distinguish which model a Python variable is coming from?
|
<p>In my Django view I have 3 models, and I want to execute some code conditionally based on which model a variable comes from.</p>
<p>I have tried...</p>
<pre class="lang-py prettyprint-override"><code> if (type(variable)) is ModelA:
do this...
</code></pre>
<p>If I do <code>print(type(variable))</code> I get <code>&lt;class 'app.models.ModelA'&gt;</code> on the console, as expected.
But the conditional never kicks in. I'm not sure what I am missing.</p>
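<p>For reference, the usual cause of this symptom is that <code>type(variable) is ModelA</code> compares exact class objects, which fails for subclasses, proxy models, or the same class imported under a different module path; <code>isinstance</code> is the idiomatic check. A minimal illustration with plain classes:</p>

```python
class Base:
    pass

class Child(Base):
    pass

obj = Child()
print(type(obj) is Base)       # False: `is` compares the exact class object
print(isinstance(obj, Base))   # True: isinstance respects inheritance
```

<p>In the view, that would be <code>if isinstance(variable, ModelA): ...</code>.</p>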
|
<python><sql><django>
|
2023-12-05 05:39:03
| 1
| 526
|
Ryan Thomas
|
77,603,720
| 9,536,653
|
How can I retrieve the main body of a question using the StackExchange API? I can only retrieve the title of a question
|
<pre><code>import requests, openai
# Set your Stack Exchange API key
stack_exchange_api_key = 'your_stack_exchange_api_key'
# Set your OpenAI API key
openai.api_key = 'your_openai_api_key'
# Step 1: Retrieve new unanswered questions with the Python tag from Stack Exchange API
# Set parameters for Stack Exchange API
stack_exchange_endpoint = 'https://api.stackexchange.com/2.3/questions'
stack_exchange_params = {
'site': 'stackoverflow',
'key': stack_exchange_api_key,
'order': 'desc',
'sort': 'creation',
'tagged': 'python',
'answers': 0, # Filter for unanswered questions
}
# Make the API request to Stack Exchange
stack_exchange_response = requests.get(stack_exchange_endpoint, params=stack_exchange_params)
# Check if the request was successful (status code 200)
if stack_exchange_response.status_code == 200:
# Parse the response JSON
stack_exchange_data = stack_exchange_response.json()
# Step 2: Ask ChatGPT for answers to each question
</code></pre>
<p>How can I retrieve the main body of the question? I cannot see any key in the JSON response that corresponds to the main part of the question; I can only get the title. Do I need to use the question_id and make some other request?</p>
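<p>For context, the Stack Exchange API omits <code>body</code> from question objects by default; per its filter documentation, passing the built-in filter <code>withbody</code> adds the field, so no second request per question_id should be needed. The parameter dict above with that one addition:</p>

```python
stack_exchange_params = {
    'site': 'stackoverflow',
    'key': 'your_stack_exchange_api_key',
    'order': 'desc',
    'sort': 'creation',
    'tagged': 'python',
    'answers': 0,
    'filter': 'withbody',  # built-in filter: adds the `body` field to each item
}
# Each item in stack_exchange_data['items'] should then expose both
# item['title'] and item['body'].
print(stack_exchange_params['filter'])
```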
|
<python><json><python-requests><openai-api>
|
2023-12-05 03:52:24
| 1
| 566
|
Derek M. D. Chan
|
77,603,409
| 13,882,618
|
How can I run an action that applies to every exception in Python?
|
<p>I wrote something similar to the following code, and the same call (process_error) is duplicated in every exception handler.</p>
<pre><code>try:
do_something()
...
except KeyError:
process_error()
...
except ValueError:
process_error()
...
except:
process_error()
...
</code></pre>
<p>So I changed the code like this.</p>
<pre><code>try:
try:
do_something()
...
except:
process_error()
raise
except KeyError:
...
except ValueError:
...
except:
...
</code></pre>
<p>I feel like this is very ugly. Is there any better structure than this?</p>
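<p>For reference, one flatter alternative is a single <code>except Exception as exc</code> that runs the shared step once and then dispatches on the exception type (note that a bare <code>except Exception</code> does not catch <code>KeyboardInterrupt</code>/<code>SystemExit</code>, unlike a truly bare <code>except</code>). A runnable sketch with toy handlers:</p>

```python
log = []

def process_error():
    log.append("shared")            # the step shared by every handler

def do_something():
    raise ValueError("boom")        # toy failure

try:
    do_something()
except Exception as exc:
    process_error()                 # runs once for any exception
    if isinstance(exc, KeyError):
        log.append("key")           # per-type handling goes here
    elif isinstance(exc, ValueError):
        log.append("value")
    else:
        log.append("other")

print(log)  # ['shared', 'value']
```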
|
<python><exception><code-duplication>
|
2023-12-05 01:45:21
| 0
| 401
|
jolim
|
77,603,135
| 3,170,530
|
How can I roll a matrix in Tensorflow but set a different roll value for each index?
|
<p>Suppose I have the following matrix in tensorflow:</p>
<pre><code>[[0, 1, 2],
[3, 4, 5],
[6, 7, 8]]
</code></pre>
<p>I would like to roll the matrix by <code>[0, 1, 2]</code> along axis 1, meaning the result matrix should be:</p>
<pre><code>[[0, 1, 2], # row 0 rolled by 0 - unchanged
[5, 3, 4], # row 1 rolled by 1
[7, 8, 6]] # row 2 rolled by 2
</code></pre>
<p>Is this possible in tensorflow? How to best achieve it?</p>
<p>I have attempted to do this with tf.roll, but tf.roll does not allow varying roll amounts for different indices, forcing the entire matrix to roll by the same amount.</p>
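<p>For reference, a per-row roll is just a gather with precomputed indices: column <code>j</code> of a row shifted by <code>s</code> comes from column <code>(j - s) mod n</code>. The sketch below uses NumPy so it runs anywhere; in TensorFlow the same <code>cols</code> index tensor should work with <code>tf.gather(m, cols, batch_dims=1)</code> (stated from memory, worth verifying):</p>

```python
import numpy as np

m = np.arange(9).reshape(3, 3)
shifts = np.array([0, 1, 2])   # roll amount per row
n = m.shape[1]
# Column j of the rolled row comes from column (j - shift) mod n.
cols = (np.arange(n)[None, :] - shifts[:, None]) % n
rolled = np.take_along_axis(m, cols, axis=1)
print(rolled)
# [[0 1 2]
#  [5 3 4]
#  [7 8 6]]
```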
|
<python><tensorflow><matrix>
|
2023-12-04 23:47:31
| 0
| 448
|
user3170530
|
77,603,017
| 252,464
|
How can I extract certain qualified nodes from a yaml document and add to a new document using PyYAML?
|
<p>I'm new with both Python and YAML (great with Javascript and JSON) and I need to construct a subset of a YAML file. In particular, I want to generate a .yaml file that can be used to create a new deployment in openshift/kubernetes, gleaning the needed information from running pods/deployments. The YAML file will have separate sections for PVCs, Service, and Deployment.</p>
<p>I need to eliminate things like status, and a lot of the metadata that is generated when new pvcs, services, and deployments are created. The steps I want to use are:</p>
<ul>
<li>get a list of PVC claims for a running deployment</li>
<li>for each of these, generate the barebones YAML to create these PVCs</li>
<li>get the yaml for the service currently attached to the target deployment</li>
<li>generate the barebones YAML to create this service</li>
<li>get the yaml for the target deployment</li>
<li>generate the barebones YAML to create this deployment</li>
</ul>
<p>It seems like an easy thing to do, but with my scarce experience with both Python and YAML, I can't seem to get it right. I can't even traverse all of the nodes of the YAML document and print out the qualified node names.</p>
<p>Edit: I know which oc/kubectl commands I need to get the YAML, I just need some help with taking only certain sections from those files and generating a new document by inserting the extracted sections.</p>
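<p>For reference, one simple approach is to <code>safe_load</code> the manifest into plain dicts, delete the generated fields, and <code>safe_dump</code> the rest; no node-level traversal is needed. A sketch with a made-up minimal Service manifest (the exact fields to strip depend on your cluster):</p>

```python
import yaml

MANIFEST = """\
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  uid: "1234"
  creationTimestamp: "2023-12-04"
spec:
  ports:
  - port: 80
status:
  loadBalancer: {}
"""

doc = yaml.safe_load(MANIFEST)
doc.pop("status", None)                      # drop runtime status entirely
for key in ("uid", "creationTimestamp", "resourceVersion", "managedFields"):
    doc.get("metadata", {}).pop(key, None)   # drop generated metadata
print(yaml.safe_dump(doc, sort_keys=False))
```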
|
<python><kubernetes><openshift><pyyaml>
|
2023-12-04 23:06:53
| 1
| 1,969
|
svenyonson
|
77,603,014
| 4,431,535
|
How do I use mypy to annotate a numpy array containing a custom dtype?
|
<p>I have a custom numpy dtype that will be used in numpy arrays. How should I annotate the array variable in a function declaration in order to capture the type constraint? I'm guessing I need to better annotate the dtype declaration itself, but am not sure how.</p>
<p>As an example, please consider this script:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import numpy.typing as npt
MY_DTYPE = np.dtype([("major", "u2"), ("minor", "u2"), ("patch", "u2")])
my_array = np.array([(1, 2, 3), (4, 5, 6)], dtype=MY_DTYPE)
def print_entries(array_var: npt.NDArray[MY_DTYPE]) -> None: # <- mypy gets angry here
"""Print array entries."""
for row in array_var:
print(row)
</code></pre>
<p><code>mypy</code> gets angry at the <code>npt.NDArray[MY_DTYPE]</code> and points me to the <a href="https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases" rel="nofollow noreferrer">variables vs. type aliases</a> section of the common mistakes. But the <code>TypeAlias</code> and <code>Type</code> annotations don't seem to be solving the problem. How do I annotate this?</p>
<p>Thanks much for your help!</p>
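<p>For what it's worth, one workaround that satisfies mypy at the array level: elements of a structured array are <code>np.void</code> scalars, so <code>npt.NDArray[np.void]</code> is currently the closest expressible static type (the per-field layout is not captured):</p>

```python
import numpy as np
import numpy.typing as npt

MY_DTYPE = np.dtype([("major", "u2"), ("minor", "u2"), ("patch", "u2")])

# Elements of a structured array are np.void scalars, so this is the
# closest static type mypy accepts; the field names are not checked.
VersionArray = npt.NDArray[np.void]

def print_entries(array_var: VersionArray) -> None:
    """Print array entries."""
    for row in array_var:
        print(row)

print_entries(np.array([(1, 2, 3), (4, 5, 6)], dtype=MY_DTYPE))
```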
|
<python><arrays><numpy><mypy><dtype>
|
2023-12-04 23:05:51
| 0
| 514
|
pml
|
77,602,906
| 11,850,322
|
Jupyter run all - then pause for choice to continue
|
<p>I want to add some Dropdown widgets to my Jupyter notebook, then Run All. The program should pause at the Dropdown widgets and wait for me to choose and confirm before continuing to run.</p>
<p>I want to do this rather than keep pressing Shift+Enter.</p>
<p>The structure is something like this:</p>
<p><code>cell1</code></p>
<p><code>cell2</code></p>
<pre><code>afill = widgets.Dropdown(
options=['sector 1',
'sector 2'],
value='sector 1',
description='Choose sector:')
weight = widgets.Dropdown(
options=['none',
'weight'],
value='none',
description='Choose weight/not:')
display(afill, weight)
confirm = None
while confirm != 'y':
confirm = input("""Select Dropdown and Type "y": """)
</code></pre>
<p><code>cell 3</code></p>
<p><code>cell 4</code></p>
<p>I tried several workarounds, including a <code>while</code> loop + <code>input</code>, but so far without success.</p>
<p>Any suggestions?
Thanks</p>
|
<python><python-3.x><jupyter-notebook>
|
2023-12-04 22:32:08
| 0
| 1,093
|
PTQuoc
|
77,602,904
| 7,563,454
|
Python: Import script in a way that doesn't require writing its name before the class name
|
<p>I have a python file called <code>lib.py</code> which contains custom types I use in my project, for example an x + y + z vector registered as <code>vec3</code>:</p>
<pre><code>class vec3:
def __init__(self, x: float, y: float, z: float):
self.x = x
self.y = y
self.z = z
</code></pre>
<p>Whenever I use it I of course use <code>import lib</code> at the beginning of my other script. What I don't like is having to always call it as <code>lib.vec3</code> instead of just <code>vec3</code> the same way I would builtin types like <code>float</code>. Is there a way to write the import command to treat classes as if they're defined locally so I can use only the class name?</p>
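<p>For reference, the from-import form binds the class name directly, i.e. <code>from lib import vec3</code> (or <code>from my_package.lib import vec3</code> if it lives in a package). The same form with a stdlib module, so the snippet runs as-is:</p>

```python
# With the question's layout this would be:
#     from lib import vec3
#     v = vec3(1.0, 2.0, 3.0)
# The same form with a stdlib module, so this snippet is runnable:
from fractions import Fraction

half = Fraction(1, 2)  # no `fractions.` prefix needed
print(half)  # 1/2
```

<p><code>from lib import vec3 as v3</code> additionally lets you rename on import.</p>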
|
<python><python-3.x>
|
2023-12-04 22:31:57
| 1
| 1,161
|
MirceaKitsune
|
77,602,885
| 1,519,058
|
Merge dataframe using common keys
|
<p>I've been trying to merge two dataframes so that:</p>
<ul>
<li>data with common keys will get updated => new column added with the additional value</li>
<li>data with no common keys will be added as new rows (see sample below)</li>
</ul>
<p>I have been testing with both <code>merge</code> and <code>concat</code>, but I have not found the right configuration for them yet...</p>
<p>Runnable example: <a href="https://onecompiler.com/python/3zvfatnj9" rel="nofollow noreferrer">https://onecompiler.com/python/3zvfatnj9</a></p>
<pre><code># df A
host val1 val2
0 aa 11 44
1 bb 22 55
2 cc 33 66
# df B
host val1 val3
0 aa 11 77
1 bb 22 88
2 dd 0 99
# df Expected after merge
host val1 val2 val3
0 aa 11 44.0 77.0
1 bb 22 55.0 88.0
2 cc 33 66.0 NaN
3 dd 0 NaN 99.0
</code></pre>
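<p>For reference, an outer merge on the shared key columns appears to produce exactly the expected frame (a sketch with the sample data above):</p>

```python
import pandas as pd

A = pd.DataFrame({"host": ["aa", "bb", "cc"],
                  "val1": [11, 22, 33],
                  "val2": [44, 55, 66]})
B = pd.DataFrame({"host": ["aa", "bb", "dd"],
                  "val1": [11, 22, 0],
                  "val3": [77, 88, 99]})

# Outer join on the shared keys: matching rows gain the new column,
# unmatched rows from either side are appended with NaN elsewhere.
merged = A.merge(B, on=["host", "val1"], how="outer")
print(merged)
```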
|
<python><python-3.x><pandas><dataframe><merge>
|
2023-12-04 22:26:06
| 3
| 4,963
|
Enissay
|
77,602,632
| 8,232,290
|
Why does 101010 return `True` when the expression `str(000) in str(101010)` is evaluated in Python?
|
<p>I'm working on <a href="https://leetcode.com/problems/largest-3-same-digit-number-in-string/?envType=daily-question&envId=2023-12-04" rel="nofollow noreferrer">this LeetCode problem here</a>, and I'm implementing a brute force approach as well as an iterative approach. The iterative approach works fine, but my brute force approach seems to have hit an edge case. I want to determine whether a string has 3 consecutive numbers and return the highest 3 consecutive numbers in it. My code is as follows:</p>
<pre><code>class Solution:
def largestGoodInteger(self, num: str) -> str:
if str(999) in num:
return "999"
elif str(888) in num:
return "888"
elif str(777) in num:
return "777"
elif str(666) in num:
return "666"
elif str(555) in num:
return "555"
elif str(444) in num:
return "444"
elif str(333) in num:
return "333"
elif str(222) in num:
return "222"
elif str(111) in num:
return "111"
elif str(000) in num:
return "000"
else:
return ""
</code></pre>
<p>For some reason, the string <code>101010</code> returns <code>"000"</code> even though <code>000</code> is not sequential in the string. I figured that the <code>in</code> operator may be evaluating the string <code>000</code> as an array and iterating over each element in the string, but the operator does not appear to do that <a href="https://python-reference.readthedocs.io/en/latest/docs/operators/in.html" rel="nofollow noreferrer">according to the docs</a>. As such, why does <code>000</code> get returned instead of <code>""</code>?</p>
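<p>For context, the behavior can be reproduced directly: a run of zeros is a legal integer literal in Python 3 (unlike other leading-zero decimals such as <code>012</code>, which are a SyntaxError) and evaluates to <code>0</code>, so <code>str(000)</code> is the one-character string <code>"0"</code>:</p>

```python
print(000)                    # 0 -- a run of zeros is just the int 0
print(str(000))               # '0'
print(str(000) in "101010")   # True: a single-character substring test
print("000" in "101010")      # False: the string literal keeps all three zeros
```

<p>The other branches only worked by coincidence, since <code>str(999)</code> really is <code>"999"</code>; using string literals like <code>"000"</code> everywhere avoids the trap.</p>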
|
<python><in-operator>
|
2023-12-04 21:22:19
| 3
| 503
|
AndreasKralj
|
77,602,598
| 19,130,803
|
python: return list from generator function
|
<p>I have a simple function that I am trying to convert into a generator that returns batched output. Here is the code:</p>
<pre><code># Simple function; works, but too slow once the list gets big
def compute_add():
data = range(5)
cases = list(itertools.permutations(data, 2))
print(f"{cases=}")
result = []
for x, y in cases:
ans = x + y
result.append(ans)
return result
report = compute_add()
print(f"{report=}")
</code></pre>
<p>So in my first attempt I tried to convert it into a generator that returns a single output:</p>
<pre><code># Generator single value and working
def compute_add_generator():
data = range(5)
cases = list(itertools.permutations(data, 2))
print(f"{cases=}")
for x, y in cases:
ans = x + y
yield ans
report = []
for res in compute_add_generator():
report.append(res)
print(f"{report=}")
</code></pre>
<p>In the second attempt I tried to convert it into a generator that returns batched output (a list):</p>
<pre><code># Generator batch output and not working
def compute_add_generator_batch(batch_size):
data = range(5)
cases = list(itertools.permutations(data, 2))
print(f"{cases=}")
res = []
for x, y in cases:
ans = x + y
if len(res) != batch_size:
res.append(ans)
continue
yield res
res = []
batch_size=3 # it can vary
for res in compute_add_generator_batch(batch_size):
print(f"{res=}")
</code></pre>
<p>It is giving me the wrong output, as it always skips the 4th result:</p>
<pre><code>cases=[(0, 1), (0, 2), (0, 3), (0, 4), (1, 0), (1, 2), (1, 3), (1, 4), (2, 0), (2, 1), (2, 3), (2, 4), (3, 0), (3, 1), (3, 2), (3, 4), (4, 0), (4, 1), (4, 2), (4, 3)]
res=[1, 2, 3]
res=[1, 3, 4]
res=[2, 3, 5]
res=[3, 4, 5]
res=[4, 5, 6]
</code></pre>
<p>Expected output</p>
<pre><code>res=[1, 2, 3]
res=[4, 1, 3]
res=[4, 5, 2]
res=[3, 5, 6]
res=[3, 4, 5]
res=[7, 4, 5]
res=[6, 7]
</code></pre>
<p>I tried storing the 4th result with <code>res = [ans]</code>, but the output is still wrong, as it misses the last row.</p>
<p>What I am missing?</p>
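<p>For reference, a minimal sketch of a batching generator that appends every value before checking the batch size and flushes the trailing partial batch; it reproduces the expected output above (names mirror the question's code):</p>

```python
import itertools

def compute_add_generator_batch(batch_size):
    # Same permutation cases as the original function
    cases = itertools.permutations(range(5), 2)
    batch = []
    for x, y in cases:
        batch.append(x + y)        # always keep the current value
        if len(batch) == batch_size:
            yield batch            # emit a full batch...
            batch = []             # ...then start a new one
    if batch:                      # flush the final, possibly short batch
        yield batch

batches = list(compute_add_generator_batch(3))
```

<p>The original version loses the value computed on the iteration that triggers the <code>yield</code>, and never emits whatever is left in <code>res</code> once the loop ends.</p>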
|
<python>
|
2023-12-04 21:14:31
| 2
| 962
|
winter
|
77,602,568
| 8,372,455
|
compile Python code to WebAssembly for non-web environments
|
<p>I'm currently exploring whether I can use a Python-based BACnet stack, valued for its reliability and effectiveness, in conjunction with Rust's WebAssembly (WASM) capabilities, particularly in a non-web-browser environment for IoT purposes, <a href="https://webassembly.org/docs/non-web/" rel="nofollow noreferrer">as mentioned in this article</a> on non-web embeddings. The main goal is to utilize Rust for specific tasks that benefit from WASM's performance, while maintaining the existing Python code for the BACnet stack. I believe <a href="https://pyodide.org/en/stable/" rel="nofollow noreferrer">pyodide</a> is browser-only WASM for Python, and using it for BACnet may not be feasible, since BACnet depends on UDP communication, which my research suggests is blocked by browsers for security reasons.</p>
<p><strong>Approach Overview:</strong>
Python BACnet Stack: Continue using the existing Python codebase for the BACnet stack.</p>
<p><strong>Rust for WebAssembly:</strong> Utilize Rust for parts of the application that could benefit from WebAssembly's performance, such as data processing or interfacing with other systems.</p>
<p><strong>Integration Strategy:</strong> Create a mechanism for Python and Rust to communicate effectively, potentially through a data exchange format like JSON or Protobuf.</p>
<p><strong>Non-Web Browser WASM Environment:</strong> Adapt Rust components for compatibility with non-web WASM environments, understanding their specific APIs and limitations.</p>
<p><strong>Challenges and Questions:</strong>
Python in Non-Web WASM: Running Python in a non-web WASM environment is proving to be a challenge. Pyodide is primarily designed for web browsers, and I'm looking for tools or methods to compile Python code to WebAssembly that can work in non-web environments. Does anyone have experience or insights on accomplishing this?</p>
<p><strong>Inter-language Communication:</strong> What are the best practices for data exchange between Python and Rust in this context? Any recommendations for efficient serialization and deserialization methods that work well with Python and WebAssembly?</p>
<p><strong>Performance Considerations:</strong> Ensuring that the Rust-compiled WebAssembly modules are optimized for performance in a non-web environment. What are some key performance metrics to consider, and how can I benchmark these in a non-web WASM setting?</p>
<p>Thanks for any tips... not a lot of wisdom here but curious to explore more.</p>
|
<python><rust><iot><webassembly><bacnet>
|
2023-12-04 21:08:02
| 0
| 3,564
|
bbartling
|
77,602,463
| 885,650
|
pip install can't find a package during build even though it's installed
|
<p>During pip install of a local package, which uses custom code for building, I am getting the following error:</p>
<pre><code>python3 -m pip install -vvv .
...
Created temporary directory: /tmp/pip-unpack-g_w7p_yj
Building wheels for collected packages: mypackagename
Created temporary directory: /tmp/pip-wheel-2p3g08e0
Destination directory: /tmp/pip-wheel-2p3g08e0
Running command Building wheel for mypackagename (pyproject.toml)
running bdist_wheel
running build
Traceback (most recent call last):
File "/home/user/.local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/home/user/.local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 251, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/setuptools/build_meta.py", line 404, in build_wheel
return self._build_with_temp_dir(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/setuptools/build_meta.py", line 389, in _build_with_temp_dir
self.run_setup()
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/setuptools/build_meta.py", line 480, in run_setup
super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/setuptools/build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 37, in <module>
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/setuptools/__init__.py", line 103, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/setuptools/dist.py", line 963, in run_command
super().run_command(command)
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/wheel/bdist_wheel.py", line 368, in run
self.run_command("build")
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/setuptools/dist.py", line 963, in run_command
super().run_command(command)
File "/tmp/pip-build-env-8htpge2f/overlay/local/lib/python3.11/dist-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "<string>", line 25, in run
File "/myfolder/mypackagename/mycustominstaller/__init__.py", line 19, in <module>
import numpy as np
ModuleNotFoundError: No module named 'numpy'
error: subprocess-exited-with-error
× Building wheel for mypackagename (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /usr/bin/python3 /home/user/.local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /tmp/tmpivmip1g1
cwd: /myfolder/mypackagename
Building wheel for mypackagename (pyproject.toml) ... error
ERROR: Failed building wheel for mypackagename
Failed to build mypackagename
ERROR: Could not build wheels for mypackagename, which is required to install pyproject.toml-based projects
Exception information:
Traceback (most recent call last):
File "/home/user/.local/lib/python3.11/site-packages/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
status = run_func(*args)
^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/pip/_internal/cli/req_command.py", line 245, in wrapper
return func(self, options, args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/pip/_internal/commands/install.py", line 429, in run
raise InstallationError(
pip._internal.exceptions.InstallationError: Could not build wheels for mypackagename, which is required to install pyproject.toml-based projects
Remote version of pip: 23.3.1
Local version of pip: 23.3.1
Was pip installed by pip? True
Removed build tracker: '/tmp/pip-build-tracker-thq7346g'
</code></pre>
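<p>For context, the traceback shows the custom build code importing <code>numpy</code> inside pip's isolated build environment, which contains only the packages declared under <code>[build-system]</code>. A hedged sketch of declaring the build-time dependency in <code>pyproject.toml</code> (the exact package list is an assumption about this project):</p>

```toml
[build-system]
# Anything imported by setup.py or custom build code must be listed here,
# because pip builds in an isolated environment containing only these packages.
requires = ["setuptools", "wheel", "numpy"]
build-backend = "setuptools.build_meta"
```

<p>Alternatively, <code>pip install --no-build-isolation .</code> builds against the already-installed environment.</p>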
|
<python><python-3.x><pip>
|
2023-12-04 20:46:35
| 1
| 2,721
|
j13r
|
77,602,318
| 2,031,442
|
How can I use python and selectolax to extract multiple CSS tags with the same name
|
<p>I'm trying to learn basic web scraping and have come across an issue I can't figure out. Basically I found a site that lists a retail price and a sale price, but both have the class "price".</p>
<p>Looking for some pointers to get me back on track. Thanks.</p>
<p>What I would like to do is extract the data with results as follows:</p>
<pre><code>{'model': <'model_number'>, 'retail_price': <'xxx.xx'>, 'sale_price': <'xxx.xx'>}
</code></pre>
<p>I can get the first price, which is the retail price since it comes first, but I'm having trouble figuring out how to extract the sale price.</p>
<p>Sample of what I'm attempting to extract:</p>
<p><a href="https://i.sstatic.net/SftBp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SftBp.png" alt="enter image description here" /></a></p>
<p>Code so far to get the model and retail price:</p>
<pre><code>import httpx
from selectolax.parser import HTMLParser
url = "https://hvacdirect.com/ductless-mini-splits/single-zone-ductless-mini-splits/filter/wall-mounted/aciq.html"
headers = {"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36 OPR/104.0.0.0"}
#Get page html
resp = httpx.get(url, headers=headers)
html = HTMLParser(resp.text)
#Get product list
products = html.css("div.products.wrapper.grid.products-grid ol li")
for product in products:
    item = {
        "name": product.css_first(".product-item-sku").text().strip(),
        "retail_price": product.css_first(".price").text().strip(),
    }
    print(item)
</code></pre>
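<p>With selectolax, <code>product.css(".price")</code> returns every matching node, so the second match can be indexed as the sale price. Since that library may not be installed everywhere, here is a stdlib-only sketch of the same idea — collect all nodes carrying the class, then index — using an invented HTML snippet:</p>

```python
from html.parser import HTMLParser

class PriceCollector(HTMLParser):
    """Collect the text of every node whose class list contains 'price'."""

    def __init__(self):
        super().__init__()
        self._depth = 0       # > 0 while inside a price node
        self.prices = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if "price" in classes:
            self._depth += 1
            self.prices.append("")
        elif self._depth:
            self._depth += 1  # a tag nested inside a price node

    def handle_endtag(self, tag):
        if self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth:
            self.prices[-1] += data.strip()

# Invented snippet: the first .price is retail, the second is the sale price
snippet = '<span class="price">$1,301.00</span><span class="price">$1,105.85</span>'
collector = PriceCollector()
collector.feed(snippet)
retail_price, sale_price = collector.prices
```

<p>(Void tags such as <code>&lt;br&gt;</code> inside a price node would confuse this depth counter; a real scraper is better off indexing the list that selectolax's <code>css()</code> returns.)</p>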
|
<python><web-scraping><beautifulsoup>
|
2023-12-04 20:14:06
| 1
| 685
|
digital_alchemy
|
77,602,160
| 3,458,788
|
Using map with pd.DataFrame
|
<p>I am writing a function that I would like to return a <code>pd.DataFrame()</code>, and I would then like to use this function with <code>map</code>. I would then like to join these DataFrames into one big DataFrame. Here's an example (note that this is not the actual problem I'm trying to solve, just a toy problem):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
df = pd.DataFrame({
    'x': np.random.chisquare(df=2, size=10),
    'y': np.random.chisquare(df=5, size=10)
})

def z_score(x):
    z = (np.mean(df[x]) - df[x]) / np.std(df[x])
    mean_ctr = np.mean(df[x]) - df[x]
    out = pd.DataFrame({
        'z': z,
        'mean_ctr': mean_ctr,
        'var': x
    })
    return out

list(map(z_score, ['x', 'y']))
pd.DataFrame(map(z_score, ['x', 'y'])) ## doesn't work
</code></pre>
<p>I can get my desired output if I run this function twice within <code>pd.concat()</code></p>
<pre><code>pd.concat([z_score('x'),
z_score('y')])
</code></pre>
<p>Is this possible using the <code>map</code> function?</p>
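<p>For what it's worth, <code>pd.concat</code> accepts any iterable of DataFrames, including a <code>map</code> object, so the two-call version generalises directly. A sketch of the toy example with a seeded generator so it is reproducible (the seeding is an addition, not part of the question):</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    'x': rng.chisquare(df=2, size=10),
    'y': rng.chisquare(df=5, size=10),
})

def z_score(x):
    # Per-column z-scores and mean-centred values, as in the question
    z = (np.mean(df[x]) - df[x]) / np.std(df[x])
    return pd.DataFrame({
        'z': z,
        'mean_ctr': np.mean(df[x]) - df[x],
        'var': x,
    })

# pd.concat consumes the map object directly -- no intermediate list needed
result = pd.concat(map(z_score, ['x', 'y']))
```

<p><code>pd.DataFrame(map(...))</code> fails because the DataFrame constructor does not concatenate frames; <code>concat</code> is the tool for that.</p>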
|
<python><pandas><dataframe>
|
2023-12-04 19:41:38
| 0
| 515
|
cdd
|
77,601,975
| 2,990,266
|
How to make Pandas bar char add X axis labels to transposed dataframe?
|
<p>I have a bar chart that I am creating with Pandas. The plot displays the wrong label on the X axis. How do I get the chart to display the expected labels?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from pathlib import Path

metrics_1_csv = Path("/tmp/metrics_2.csv")
print("reading", metrics_1_csv)
df = pd.read_csv(metrics_1_csv)
df = df.groupby(["transaction"]).sum(numeric_only=True)
df = df.sort_values(by=["duration"], ascending=False)
ax = df.transpose().plot.bar(
    stacked=False,
    title="Duration by Transaction",
    figsize=(8, 4))
ax.legend(loc='right')
</code></pre>
<p>This produces the following chart; note that the X label is just "duration", but I'd expect it to say "ddd bbb ccc aaa eee".
<a href="https://i.sstatic.net/bnZ7m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bnZ7m.png" alt="enter image description here" /></a></p>
<p>The data looks like this:</p>
<pre><code>transaction,duration
aaa,12451
aaa,8388
aaa,7267
aaa,14852
bbb,10752
bbb,29139
bbb,5694
bbb,7942
bbb,441
bbb,34491
bbb,25707
bbb,14609
ccc,8035
ccc,14772
ccc,8178
ccc,13672
ccc,7792
ccc,18970
ccc,9190
ccc,16020
ccc,5726
ccc,12720
ddd,366
ddd,29148
ddd,10511
ddd,12349
ddd,8560
ddd,2287
ddd,21380
ddd,26270
ddd,12041
ddd,11331
ddd,7277
eee,8854
eee,9090
</code></pre>
|
<python><pandas><charts>
|
2023-12-04 19:05:46
| 1
| 2,025
|
Aage Torleif
|
77,601,763
| 9,032,335
|
Error when reading a parquet file with polars which was saved with pandas
|
<p>I'd like to read a parquet file with polars (0.19.19) that was saved using pandas (2.1.3).</p>
<pre><code>test_df = pd.DataFrame({"a":[10,10,0,100,0]})
test_df["b"] = test_df.a.astype("category")
test_df.to_parquet("test_df.parquet")
test_pl_df = pl.read_parquet("test_df.parquet")
</code></pre>
<p>I get this error:</p>
<pre><code>polars.exceptions.ComputeError: only string-like values are supported in dictionaries
</code></pre>
<p><strong>How can I read the parquet file with polars?</strong></p>
<p>Reading with pandas first works, but seems rather ugly and does not allow lazy methods such as scan_parquet.</p>
<pre><code>test_pa_pl_df = pl.from_pandas(pd.read_parquet("test_df.parquet", dtype_backend="pyarrow"))
</code></pre>
|
<python><dataframe><parquet><python-polars>
|
2023-12-04 18:21:22
| 1
| 723
|
ivegotaquestion
|
77,601,287
| 4,465,454
|
"No parameter named table" in SQLModel
|
<p>The SQLModel <a href="https://sqlmodel.tiangolo.com/tutorial/insert/" rel="nofollow noreferrer">documentation</a> tells us that we should define a mapped table as follows:</p>
<pre><code>from typing import Optional
from sqlmodel import Field, SQLModel
class Hero(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
    secret_name: str
    age: Optional[int] = None
</code></pre>
<p>Since the latest version 0.0.14, pylance is giving me the warning: "No parameter called 'table'".</p>
<p><a href="https://i.sstatic.net/UV6B0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UV6B0.png" alt="enter image description here" /></a></p>
<p>Any ideas on how to resolve this for now? The code still runs if I use <code># type: ignore</code>, but this seems non-ideal in the long run.</p>
|
<python><sqlalchemy><pydantic><sqlmodel>
|
2023-12-04 17:07:10
| 1
| 1,642
|
Martin Reindl
|
77,601,218
| 12,596,824
|
Llama Index custom embeddings - difference between getting text embeddings vs query embeddings?
|
<p>I'm looking here at the Llama index documentation to create custom embeddings: <a href="https://docs.llamaindex.ai/en/stable/examples/embeddings/custom_embeddings.html" rel="nofollow noreferrer">https://docs.llamaindex.ai/en/stable/examples/embeddings/custom_embeddings.html</a></p>
<p>I'm confused by the documentation - what's the difference between the two methods <code>_get_query_embedding()</code> and <code>_get_text_embedding()</code>? They look exactly the same to me.</p>
<pre><code>from typing import Any, List
from InstructorEmbedding import INSTRUCTOR
from llama_index.embeddings.base import BaseEmbedding
class InstructorEmbeddings(BaseEmbedding):
    def __init__(
        self,
        instructor_model_name: str = "hkunlp/instructor-large",
        instruction: str = "Represent the Computer Science documentation or question:",
        **kwargs: Any,
    ) -> None:
        self._model = INSTRUCTOR(instructor_model_name)
        self._instruction = instruction
        super().__init__(**kwargs)

    def _get_query_embedding(self, query: str) -> List[float]:
        embeddings = self._model.encode([[self._instruction, query]])
        return embeddings[0]

    def _get_text_embedding(self, text: str) -> List[float]:
        embeddings = self._model.encode([[self._instruction, text]])
        return embeddings[0]

    def _get_text_embeddings(self, texts: List[str]) -> List[List[float]]:
        embeddings = self._model.encode(
            [[self._instruction, text] for text in texts]
        )
        return embeddings
</code></pre>
|
<python><nlp><llama-index>
|
2023-12-04 16:56:57
| 1
| 1,937
|
Eisen
|
77,601,217
| 12,016,688
|
Shouldn't importing print_function change the keyword.kwlist in python2?
|
<p>In <code>python-2.7</code>, shouldn't the <code>keyword.kwlist</code> be updated after importing <code>print_function</code> from <code>__future__</code>?</p>
<pre class="lang-py prettyprint-override"><code>Python 2.7.18 (default, Feb 1 2021, 00:00:00)
[GCC 10.2.1 20201125 (Red Hat 10.2.1-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import keyword
>>> keyword.iskeyword('print')
True
>>> print = 3
File "<stdin>", line 1
print = 3
^
SyntaxError: invalid syntax
>>>
>>> from __future__ import print_function
>>>
>>> keyword.iskeyword('print')
True
>>> print = 3
>>>
>>> print
3
>>> keyword.kwlist
['and', 'as', 'assert', 'break', 'class', 'continue', 'def', 'del', 'elif', 'else', 'except', 'exec', 'finally', 'for', 'from', 'global', 'if', 'import', 'in', 'is', 'lambda', 'not', 'or', 'pass', 'print', 'raise', 'return', 'try', 'while', 'with', 'yield']
</code></pre>
<p>Is this a bug of some kind, or is it intentional?</p>
|
<python><python-2.x><keyword>
|
2023-12-04 16:56:40
| 0
| 2,470
|
Amir reza Riahi
|
77,601,164
| 9,974,205
|
Looking for ways of clustering data acording to distance
|
<p>I have a pandas dataframe defined as</p>
<pre><code> Alejandro Ana Beatriz Jose Juan Luz Maria Ruben
Alejandro 0.0 0.0 1000.0 0.0 1037.0 1014.0 100.0 0.0
Ana 0.0 0.0 15.0 0.0 100.0 0.0 16.0 1100.0
Beatriz 1000.0 15.0 0.0 100.0 1000.0 1100.0 15.0 0.0
Jose 0.0 0.0 100.0 0.0 0.0 100.0 1000.0 14.0
Juan 1037.0 100.0 1000.0 0.0 0.0 1014.0 0.0 100.0
Luz 1014.0 0.0 1100.0 100.0 1014.0 0.0 0.0 0.0
Maria 100.0 16.0 15.0 1000.0 0.0 0.0 0.0 0.0
Ruben 0.0 1100.0 0.0 14.0 100.0 0.0 0.0 0.0
</code></pre>
<p>This dataframe contains compatibility measurements.
I want to group these people into groups of two or three. This could be [Alejandro, Ana, Beatriz], [Jose, Juan, Luz], [Maria, Ruben].</p>
<p>To do that, I have to maximize the compatibility inside each group.</p>
<p>Can someone please point me towards a methodology?</p>
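<p>As one possible methodology at this scale: with only eight people, an exact brute-force search over all partitions into groups of two or three is feasible. A hedged pure-Python sketch (the compatibility matrix is passed as a dict of dicts, e.g. from <code>df.to_dict()</code>; for many more people a heuristic or an ILP solver would be needed instead):</p>

```python
from itertools import combinations

def best_grouping(people, score, sizes=(2, 3)):
    """Exhaustively partition `people` into groups of the allowed sizes,
    maximising the summed pairwise compatibility within each group.
    Returns (total_score, groups), or None if no valid partition exists."""
    people = tuple(people)
    if not people:
        return 0, []
    first, rest = people[0], people[1:]
    best = None
    for k in sizes:
        for others in combinations(rest, k - 1):
            group = (first,) + others
            g_score = sum(score[a][b] for a, b in combinations(group, 2))
            remaining = tuple(p for p in rest if p not in others)
            sub = best_grouping(remaining, score, sizes)
            if sub is None:
                continue  # leftover people cannot be partitioned
            total = g_score + sub[0]
            if best is None or total > best[0]:
                best = (total, [list(group)] + sub[1])
    return best

# Tiny demo with an invented 4-person compatibility matrix
demo = {
    'a': {'b': 10, 'c': 1, 'd': 1},
    'b': {'a': 10, 'c': 1, 'd': 1},
    'c': {'a': 1, 'b': 1, 'd': 10},
    'd': {'a': 1, 'b': 1, 'c': 10},
}
total, groups = best_grouping(['a', 'b', 'c', 'd'], demo)
```

<p>The search branches on which group the first remaining person joins, so each partition is enumerated exactly once.</p>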
|
<python><pandas><optimization><cluster-analysis><heuristics>
|
2023-12-04 16:47:42
| 1
| 503
|
slow_learner
|
77,600,799
| 3,336,423
|
Python how to run unit tests from different packages?
|
<p>Here is my folder structure:</p>
<pre><code>src/
src/package1
src/package1/package1
src/package1/package1/__init__.py
src/package1/package1/class_package1.py
src/package1/package1/test_package1.py
src/package2
src/package2/package2
src/package2/package2/__init__.py
src/package2/package2/class_package2.py
src/package2/package2/test_package2.py
</code></pre>
<p>This is a ROS2 package structure; I can't modify it.</p>
<ul>
<li>Running <code>python -m unittest</code> from <code>src/package1/package1</code> runs all
<code>package1</code> tests</li>
<li>Running <code>python -m unittest</code> from
<code>src/package2/package2</code> runs all <code>package2</code> tests</li>
<li>Running <code>python -m unittest discover package1/package1</code> from <code>src</code> runs all <code>package1</code>
tests</li>
<li>Running <code>python -m unittest discover package2/package2</code> from
<code>src</code> runs all <code>package2</code> tests</li>
</ul>
<p>However, is there a way to run all <code>package1</code> AND <code>package2</code> tests from <code>src</code> folder?</p>
<p>I suppose moving all tests to a new <code>test</code> package could make this work easily but I'd like to avoid that. I like tests to remain close to code itself.</p>
|
<python><python-unittest>
|
2023-12-04 15:49:28
| 0
| 21,904
|
jpo38
|
77,600,625
| 7,563,454
|
Python, tkinter: How do I snap the mouse pointer to the center of a window?
|
<p>I need to create a window in Python; the best way by far is tkinter, so I went with that. Mouse looking is going to be used: I need to make it so my mouse cursor is always snapped to the center of this window after calculating movement. I looked at every solution but can't get this seemingly simple thing to work. I prefer a way that doesn't require importing other libraries; doing it from tkinter itself seems most ideal... also I'm not creating any other windows or widgets, I only have my main window running. Of course I want it to work cross-platform, in my case Linux (KDE, Wayland). So far I tried 3 options; I'll share the relevant parts of my class, which is started with <code>window = Window()</code></p>
<pre><code>class Window:
    def __init__(self):
        self.root = tk.Tk()
        self.root.event_generate("<Motion>", warp = True, x = 50, y = 50)
        self.root.mainloop()
</code></pre>
<p>Result: Does nothing, mouse cursor is still free. In this case it's likely because the <code>event_generate</code> doesn't have a cause so it won't run each frame... as such I went ahead and connected it to a trigger, but that won't work either:</p>
<pre><code>class Window:
    def __init__(self):
        self.root = tk.Tk()
        self.root.bind("<KeyPress>", self.onKeyPress)
        self.root.mainloop()

    def onKeyPress(self, event):
        self.root.event_generate("<Motion>", warp = True, x = 50, y = 50)
</code></pre>
<p>Result: This should have snapped the mouse pointer when any key is pressed; a print confirms the <code>onKeyPress</code> handler runs, but the cursor position is not affected.</p>
<pre><code>class Window:
    def __init__(self):
        self.root = tk.Tk()
        self.root.bind("<Motion>", self.onMouseMove)
        self.root.mainloop()

    def onMouseMove(self, event):
        self.root.event_generate("<Motion>", warp = True, x = 50, y = 50)
</code></pre>
<p>Result: Crashes with an infinite loop error: <code>RecursionError: maximum recursion depth exceeded</code></p>
<p>Edit: Since my window will contain a <code>canvas</code> element spanning the full size, I attempted to use <code>icursor</code> on its widget as suggested by other answers. While I can draw on the canvas successfully, not even this will affect the mouse cursor when it hovers over the element... neither does running <code>self.canvas.focus()</code> or <code>self.canvas.focus_set()</code> anywhere.</p>
<pre><code>self.canvas = tk.Canvas(self.root, width = 400, height = 200)
self.canvas.icursor(50, 50)
self.canvas.pack()
</code></pre>
|
<python><python-3.x><tkinter><tk-toolkit>
|
2023-12-04 15:25:39
| 2
| 1,161
|
MirceaKitsune
|
77,600,509
| 1,014,217
|
validation error for RetrievalQA retriever value is not a valid dict (type=type_error.dict)
|
<p>I am trying to create a custom filtered retriever to solve:
<a href="https://github.com/langchain-ai/langchain/issues/14227" rel="nofollow noreferrer">https://github.com/langchain-ai/langchain/issues/14227</a></p>
<p>So my code is:</p>
<pre><code>class FilteredRetriever:
    def __init__(self, retriever, title):
        self.retriever = retriever
        self.title = title

    def retrieve(self, *args, **kwargs):
        results = self.retriever.retrieve(*args, **kwargs)
        return [doc for doc in results if doc['title'].startswith(self.title)]

filtered_retriever = FilteredRetriever(vector_store.as_retriever(), '25_1_0.pdf')

llm = AzureChatOpenAI(
    azure_deployment="chat",
    openai_api_version="2023-05-15",
)

retriever = vector_store.as_retriever(search_type="similarity", kwargs={"k": 3})

chain = RetrievalQA.from_chain_type(llm=llm,
                                    chain_type="stuff",
                                    retriever=filtered_retriever,
                                    return_source_documents=True)

result = chain({"query": 'Can Colleagues contact their managers??'})
for res in result['source_documents']:
    print(res.metadata['title'])
</code></pre>
<p>However I am getting this error:</p>
<pre><code>ValidationError: 1 validation error for RetrievalQA
retriever
value is not a valid dict (type=type_error.dict)
</code></pre>
<p>I'm not sure what I am missing here.</p>
|
<python><langchain>
|
2023-12-04 15:09:56
| 0
| 34,314
|
Luis Valencia
|
77,600,448
| 14,175,601
|
Efficient way to list parquet file partitions in python
|
<p>I have a partitioned parquet file, and I want to read each partition iteratively.</p>
<p>However, I want to get the list of partitions first.</p>
<p>In this example I would like to get the list <code>[1, 2]</code>:</p>
<pre><code>myparquet.parquet
- /partition_col=1/file1
- /partition_col=2/file2
</code></pre>
<p>For that, I might use pandas:</p>
<pre><code>pd.read_parquet("myparquet.parquet", columns=["partition_col"])["partition_col"].unique()
</code></pre>
<p>But that is slow and inefficient, as it creates a huge Series (as many rows as the whole dataset, on the order of tens of millions) just to enumerate the partitions (only about a hundred).</p>
<p>There should be a simpler and fast way to do this, similar to what you get in SQL with <code>SHOW PARTITIONS table</code>.</p>
|
<python><pandas><parquet><pyarrow>
|
2023-12-04 15:01:04
| 2
| 1,653
|
ronkov
|
77,600,270
| 4,451,521
|
Finding a point close enough to a point
|
<p>I have a problem with some peculiar characteristics.</p>
<p>I have:</p>
<ul>
<li><p>a point P (well in reality it is an array of points <code>ArrayP</code>, but the function I want to write is applied to each and every point, so it receives each time <em>one</em> point)</p>
</li>
<li><p>an array of other points (which form a trajectory) <code>Trajectory</code></p>
</li>
<li><p>also the index of the first point in the original array.</p>
</li>
</ul>
<p>Also the number of points in <code>Trajectory</code> is always bigger than the number of points in <code>ArrayP</code>.</p>
<p>I need to find the closest point in <code>Trajectory</code> to <code>P</code></p>
<p>For example
<a href="https://i.sstatic.net/0Ssfw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0Ssfw.png" alt="enter image description here" /></a></p>
<p>My problem is that I don't know the indexes in <code>Trajectory</code>, only the index in <code>ArrayP</code>.</p>
<p>Also the number of points in <code>Trajectory</code> is big so I prefer not to try all of them.</p>
<p>I have developed an algorithm that works well for some simple trajectories, but it still fails on somewhat more complicated trajectories. Rather than posting it here and asking for improvements, I would like to hear a fresh approach to solving this problem.</p>
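<p>One common approach, sketched with assumed names: since consecutive points of <code>ArrayP</code> tend to land near each other on <code>Trajectory</code>, search only a window of trajectory indices around the previous match and carry the found index forward as the hint for the next point (the window size is an assumption to tune; fall back to a full scan whenever the best index touches the window edge):</p>

```python
import math

def nearest_on_trajectory(p, trajectory, hint=0, window=50):
    """Index of the trajectory point closest to p, scanning only
    `window` points on each side of the hint index."""
    lo = max(0, hint - window)
    hi = min(len(trajectory), hint + window + 1)
    return min(range(lo, hi), key=lambda i: math.dist(p, trajectory[i]))
```

<p>This makes the cost per point proportional to the window size rather than to the full trajectory length.</p>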
|
<python><euclidean-distance><closest-points>
|
2023-12-04 14:36:09
| 1
| 10,576
|
KansaiRobot
|
77,600,048
| 13,641,064
|
Azure function - Logging to Azure blob with python
|
<p>I have an Azure Function where I have tried multiple ways to write/log my Python script's messages to an Azure blob.
However, I can't seem to make it work; even <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python?tabs=asgi%2Capplication-level&pivots=python-mode-decorators#logging" rel="nofollow noreferrer">the documentation</a> leads to a dead end.</p>
<p><a href="https://stackoverflow.com/questions/51766722/how-to-send-azure-function-logs-to-blob-storage">This question has been asked before</a>, but it's a bit outdated.</p>
<p>The <a href="https://pypi.org/project/azure-storage-logging/" rel="nofollow noreferrer">azure-storage-logging package</a> also seems deserted.</p>
<p>Here is some example code I use for testing different methods:</p>
<pre><code>import logging
import azure.functions as func
app = func.FunctionApp()
@app.schedule(schedule="*/30 * * * *", arg_name="myTimer", run_on_startup=True,
              use_monitor=False)
def func_demo_timer_trigger(myTimer: func.TimerRequest) -> None:
    if myTimer.past_due:
        logging.info('The timer is past due!')

    logging.info('Python timer trigger function executed.')
</code></pre>
<p>How would I get this log message into a blob?
Also, on every run the new log message should be appended to that same file.</p>
<p>(Application insights or related topics are not relevant).</p>
<p>Any tips and or small examples are welcome.</p>
|
<python><azure><logging><azure-functions>
|
2023-12-04 14:05:31
| 2
| 442
|
Collega
|
77,599,954
| 4,470,831
|
Django capture traces of Open Telemetry
|
<p>I am a newbie to OpenTelemetry and am trying to see how traces are captured for a Django app. I am assuming that OTel should be able to provide traces of all invoked functions for a particular Django request.
E.g. I have a very basic Django app with a views.py such as:</p>
<pre><code>from datetime import datetime

from django.http import HttpResponse

def log_console():
    print("Print in terminal")

def show_date(request):
    now = datetime.now()
    log_console()
    return HttpResponse(now.isoformat())
</code></pre>
<p>The url is configured to serve <code>show_date</code></p>
<p>My expectation is that when I configure <code>manage.py</code> with <code>DjangoInstrumentor().instrument()</code></p>
<p>In the console I should be able to see reference to the function <code>log_console</code> as it is invoked when serving the <code>show_date</code></p>
<p>But I can't see any reference to <code>log_console</code>; I'm not sure if this is a valid use case for tracing in OpenTelemetry.</p>
|
<python><django><open-telemetry><open-telemetry-collector>
|
2023-12-04 13:51:29
| 0
| 552
|
H D
|
77,599,953
| 1,613,749
|
Malformed node in ast.literal_eval call
|
<p>It is my understanding that <code>ast.literal_eval</code> works with literals, so why does this give me an error?</p>
<pre class="lang-py prettyprint-override"><code>import ast
ast.literal_eval("1+2")
</code></pre>
<p>The error:</p>
<pre><code>Traceback (most recent call last):
File "/snap/pycharm-community/355/plugins/python-ce/helpers/pydev/pydevd.py", line 1500, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/snap/pycharm-community/355/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/home-pc/.config/JetBrains/PyCharmCE2023.2/scratches/scratch.py", line 3, in <module>
ast.literal_eval("(1+2)")
File "/usr/lib/python3.10/ast.py", line 110, in literal_eval
return _convert(node_or_string)
File "/usr/lib/python3.10/ast.py", line 109, in _convert
return _convert_signed_num(node)
File "/usr/lib/python3.10/ast.py", line 83, in _convert_signed_num
return _convert_num(node)
File "/usr/lib/python3.10/ast.py", line 74, in _convert_num
_raise_malformed_node(node)
File "/usr/lib/python3.10/ast.py", line 71, in _raise_malformed_node
raise ValueError(msg + f': {node!r}')
ValueError: malformed node or string on line 1: <ast.BinOp object at 0x7f85e1b73a00>
python-BaseException
</code></pre>
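<p>This is expected behaviour rather than a bug: <code>ast.literal_eval</code> accepts only literal <em>displays</em> (signed numbers, strings, bytes, tuples, lists, dicts, sets, booleans and <code>None</code>), plus the real-plus-imaginary form used to spell complex numbers. <code>1+2</code> parses to a <code>BinOp</code> over two integers, which is rejected. A short demonstration:</p>

```python
import ast

# Literal displays evaluate fine:
ast.literal_eval("[1, 2, {'a': 3}]")
# The one arithmetic form accepted is real +/- imaginary, for complex literals:
ast.literal_eval("1+2j")

# Any other expression with operators raises ValueError:
try:
    ast.literal_eval("1+2")
    rejected = False
except ValueError:
    rejected = True
```

<p>For evaluating actual arithmetic, <code>literal_eval</code> is the wrong tool by design; it deliberately refuses anything that requires computation.</p>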
|
<python><python-3.x><eval>
|
2023-12-04 13:51:28
| 2
| 4,166
|
SamuelNLP
|
77,599,944
| 6,653,602
|
Speeding up INSERT Query of a Pandas Dataframe into MySQL Database
|
<p>I am using simple <code>to_sql</code> pandas method to save dataframe into MySQL database:</p>
<pre><code>engine = create_engine('mysql+mysqlconnector://xxxx:xxxxx@localhost:3306/db_qa')
df.to_sql('my_table', con=engine, if_exists='replace', index=False, chunksize=120, method='multi')
</code></pre>
<p>However, this way is not very fast. For a small dataframe of 10,000 rows and 42 columns it takes around 3 minutes to save. It should be much faster than that.</p>
<p>I am running a MySQL database on GCP (Cloud SQL instance) and also running a Cloud SQL proxy on a separate VM to allow connections to the SQL instance in the cloud.</p>
<p>I am trying to identify what can be bottlenecking the INSERT query. I already tried removing all <code>None</code> values from the dataframe but it did not speed up the query at all. I also experimented with chunksizes and found out that for my case 120 was the fastest. Bigger chunksizes were much slower.</p>
<p>I was also checking the usage of the SQL instance on GCP, but I could not find anything at peak usage or running out of resources at all.
Are there any other ways to speed this up using Python scripts or something else?</p>
|
<python><mysql><pandas><google-cloud-platform>
|
2023-12-04 13:50:09
| 1
| 3,918
|
Alex T
|
77,599,671
| 1,821,450
|
Is there a way to mass-identify typos in methods in VS Code?
|
<p>I noticed that, when typing a method from a different class in VS Code, if you type in an incorrect method name or a missing method, VS Code will highlight it in white. If the method is yellow that means that the method name is correct.</p>
<p>Is there any way to do this on a project scale so you can catch any errors all at once?</p>
<p>Would love an option to run on the project to catch all of these typos at once vs. having to spot them via color. VS Code can clearly do it as it's identifying them already just not sure how to do it on a mass scale.</p>
<p>Tried Googling and tried going through all the menus but no clues.</p>
<p>I can run tests or run the code to catch the error but that adds another potentially lengthy process when VS Code has already identified them and could instantly give me feedback.</p>
<p><a href="https://i.sstatic.net/aQdnI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aQdnI.png" alt="enter image description here" /></a></p>
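<p>For what it's worth, if the highlighting comes from Pylance, there is a setting that makes the analyzer check the whole workspace instead of only the open files — a sketch for <code>settings.json</code> (the setting name is an assumption; verify it against your Pylance version):</p>

```json
{
    "python.analysis.diagnosticMode": "workspace"
}
```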
|
<python><visual-studio-code>
|
2023-12-04 13:09:16
| 1
| 2,185
|
Dr. Chocolate
|
77,599,660
| 832,490
|
How to obtain a user's profile picture using only their @handler
|
<p>I have a Python bot for Telegram. The idea, to summarize, is that given a user input containing mentions, I would like to obtain either the photo of the mentioned user or their <code>user.id</code>. I have code that works for users without a handle, but I would like to find a way that works for all users.</p>
<pre><code>entities = message.parse_entities()
if not entities:
return
for entity in entities:
if entity.type not in {"mention", "text_mention"}:
continue
user = entity.user
if not user:
print(">>> user not found") # this only happen for who have @username
continue
print(">>> user", user.id, user.name) # this works for non @username
</code></pre>
|
<python><telegram><python-telegram-bot>
|
2023-12-04 13:07:24
| 1
| 1,009
|
Rodrigo
|
77,599,646
| 1,213,934
|
Field in Document possibly being a list or an object
|
<p>I have a legacy MongoDB collection with a particular field that can be null, a list (<code>[]</code>), or an object with a particular list of fields. I'm trying to implement the basic Document that allows this behavior.</p>
<p>I tried using this approach:</p>
<pre class="lang-py prettyprint-override"><code>class MyDBEntity(Document):
my_field = GenericEmbeddedDocumentField(
choices=(
MyParticularField,
List
),
default=None,
null=True
)
</code></pre>
<pre class="lang-py prettyprint-override"><code>class MyParticularField(EmbeddedDocument):
# some fields, etc.
</code></pre>
<p>Every time I try to use this approach, I get this error:</p>
<pre><code>File "path_to_venv/mongoengine/fields.py", line 837, in to_python
doc_cls = get_document(value["_cls"])
KeyError: '_cls'
</code></pre>
<p>My understanding is that MongoEngine should populate <code>_cls</code> field for me, however, it's not there.</p>
<p>If I use a standalone <code>EmbeddedDocumentField</code>, the solution works when the expected object is in entries in the collection:</p>
<pre class="lang-py prettyprint-override"><code>class MyDBEntity(Document):
my_field = EmbeddedDocumentField(MyParticularField, default=None, null=True)
</code></pre>
<p>Obviously, it fails when a list is present.</p>
<hr />
<p><strong>How should I proceed with this use case, expecting both a list and an object?</strong></p>
|
<python><mongoengine>
|
2023-12-04 13:05:23
| 1
| 3,452
|
logoff
|
77,599,639
| 13,550,050
|
Data dependencies and caching
|
<p>I have a complex Python function which computes a level based on various sub-functions and data dependencies.</p>
<p>What are the best ways to control the data dependencies and have reproducible calculations?</p>
<p>The data can come from external databases and data sources, so potentially it can change when the code is run multiple times.</p>
<p>Currently I am using the following approach: wrap all data access into an accessor class that caches all data that it requests. The accessor object is a global variable, hence at the end of the calculation I can look up the state of my cache.</p>
<p>This gives me reproducibility: if I want to rerun the calculation with the exact same data, I just force the accessor function to retrieve data directly from the cache.</p>
<p>One drawback is that this approach forces the use of a global variable.</p>
<p>An alternative would be to inject the data dependencies directly into the functions' parameters, but I don't know the dependencies or even their structure upfront, as branching in my code can depend on the data.</p>
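<p>One way to keep the cache-based reproducibility without the global is to inject the caching accessor object itself rather than the data, so the code can still discover dynamically which data it needs — a minimal sketch (all names hypothetical):</p>

```python
class CachingAccessor:
    """Wraps a fetch function and records every value it returns."""
    def __init__(self, fetch):
        self._fetch = fetch
        self.cache = {}

    def get(self, key):
        if key not in self.cache:
            self.cache[key] = self._fetch(key)
        return self.cache[key]


def compute_level(accessor):
    # The accessor is a parameter instead of a global; branching inside
    # can still request whatever data it turns out to need.
    a = accessor.get("a")
    return a * 2 if a > 0 else accessor.get("b")
```

<p>Replaying is then just constructing a new accessor whose fetch function reads from a previously recorded cache, e.g. <code>CachingAccessor(old.cache.__getitem__)</code>.</p>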
<p>Does anyone know better alternatives or references for that topic?</p>
<p>Thank you</p>
|
<python><dependency-injection><functional-programming><dependencies>
|
2023-12-04 13:03:31
| 1
| 369
|
crixus
|
77,599,318
| 3,155,966
|
Generate static/complete `.rst` files with autodoc
|
<p>Is there a way to make <code>sphinx-apidoc</code> (by using the <code>autodoc</code> extension) generate fully static <code>.rst</code> files with the complete documentation content (docstrings) directly embedded in them?</p>
<p>For example, given a <code>foo.py</code> module with one fuction:</p>
<pre class="lang-py prettyprint-override"><code># foo.py
def add_one(a: int) -> int:
"""Some documentation.
"""
return a + 1
</code></pre>
<p>currently the <code>foo.rst</code> file it produces is:</p>
<pre><code>foo module
==========
.. automodule:: foo
:members:
:undoc-members:
:show-inheritance:
</code></pre>
<p>Can we somehow get a complete static <code>.rst</code> so that e.g. you can copy the docs in a different directory, and run <code>make html</code> without the need to import the python code?</p>
<p>Thank you!</p>
|
<python><python-sphinx><autodoc><sphinx-apidoc>
|
2023-12-04 12:07:27
| 0
| 347
|
Low Yield Bond
|
77,599,292
| 9,020,273
|
How to get a list of mounted sub applications from a FastAPI app?
|
<p>I have a FastAPI app with a few mounted <a href="https://fastapi.tiangolo.com/advanced/sub-applications/" rel="nofollow noreferrer">sub applications</a>, and would like to know if there is a <code>app.subapps()</code> or similar method to give me the instances of sub applications mounted to the main app.</p>
<p>Thanks</p>
|
<python><fastapi><starlette>
|
2023-12-04 12:03:39
| 1
| 1,315
|
Jamie J
|
77,599,120
| 18,360,265
|
How to get api req as cURL and print on terminal using python?
|
<p>I am trying to scrape a website by intercepting HTTP network calls using the <code>playwright</code> Python module.</p>
<p>I want to get the cURL from network call.</p>
<p>Something like this is what we do manually to copy the cURL of an API call:</p>
<p><a href="https://i.sstatic.net/2ONDz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2ONDz.png" alt="enter image description here" /></a></p>
<p>How can I get the cURL text and print on terminal?</p>
<p>Here is my attempt.</p>
<pre class="lang-py prettyprint-override"><code>from playwright.sync_api import sync_playwright
def scrape_graphql_response():
with sync_playwright() as pw:
browser = pw.chromium.launch(
headless=False,
slow_mo=1000,
)
context = browser.new_context(
viewport={"width": 1920, "height": 1080}
)
page = context.new_page()
# Enable request interception
page.route('**/graphql', lambda route: route.continue_())
# Navigate to your website
page.goto('https://example.com') #sample url for website home page
# Wait for the GraphQL request to complete
with page.expect_request('**/graphql') as req:
# Access the GraphQL response
print(req.value)
# Close the browser
browser.close()
if __name__ == "__main__":
scrape_graphql_response()
</code></pre>
<p>I am getting this, which is not a cURL command:
<code><Request url='https://www.threads.net/api/graphql' method='POST'></code></p>
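<p>Playwright does not hand you cURL text directly; the request object exposes the pieces (<code>url</code>, <code>method</code>, <code>headers</code>, <code>post_data</code>), so one option is assembling the command yourself — a sketch of a helper (name and exact flags are my choice, not a Playwright API):</p>

```python
from shlex import quote

def to_curl(method, url, headers, post_data=None):
    """Build a copy-pasteable curl command from intercepted request parts."""
    parts = ["curl", quote(url), "-X", method]
    for name, value in headers.items():
        parts += ["-H", quote("%s: %s" % (name, value))]
    if post_data:
        parts += ["--data-raw", quote(post_data)]
    return " ".join(parts)
```

<p>Inside the interception handler this would be something like <code>print(to_curl(req.value.method, req.value.url, req.value.headers, req.value.post_data))</code>.</p>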
|
<python><web-scraping><playwright-python>
|
2023-12-04 11:38:06
| 1
| 409
|
Ashutosh Yadav
|
77,598,964
| 18,904,265
|
How do I effectively share the database engine between different instances of a DB class?
|
<p>Preamble: Not sure if this kind of question belongs on Stack Overflow or Software Engineering; please tell me if I'm in the wrong place.</p>
<h2>The issue</h2>
<p>I am working on a DB wrapper for a CLI app. For using that wrapper (a class), I want to instantiate it only once for each table I want to map. However, the version I am using at the moment has the issue that multiple engines get created per DB (one every time I instantiate the class).</p>
<p>That's my current class (A version with tests is at <a href="https://codereview.stackexchange.com/q/288196/278229">Code Review</a>):</p>
<pre class="lang-py prettyprint-override"><code>from sqlmodel import Session, SQLModel, create_engine, select
class DB:
def __init__(self, url: str, table: SQLModel, *, echo=False):
"""Database wrapper specific to the supplied database table.
url: URL of the database file.
table: Model of the table to which the operations should refer.
Must be a Subclass of SQLModel.
"""
self.url = url
self.table = table
self.engine = create_engine(url, echo=echo)
def create_metadata(self):
"""Creates metadata, call only once per database connection."""
SQLModel.metadata.create_all(self.engine)
def read_all(self):
"""Returns all rows of the table."""
with Session(self.engine) as session:
entries = session.exec(select(self.table)).all()
return entries
def read(self, _id):
"""Returns a row of the table."""
with Session(self.engine) as session:
entry = session.get(self.table, _id)
return entry
def add(self, **fields):
"""Adds a row to the table. Fields must map to the table definition."""
with Session(self.engine) as session:
entry = self.table(**fields)
session.add(entry)
session.commit()
def update(self, _id, **updates):
"""Updates a row of the table. Updates must map to the table definition."""
with Session(self.engine) as session:
entry = self.read(_id)
for key, val in updates.items():
setattr(entry, key, val)
session.add(entry)
session.commit()
def delete(self, _id):
"""Delete a row of the table."""
with Session(self.engine) as session:
entry = self.read(_id)
session.delete(entry)
session.commit()
</code></pre>
<p>The use would work like this:</p>
<pre class="lang-py prettyprint-override"><code>from db import DB
from models import Project, Account
URL = "sqlite:///database.db"
projects = DB(url=URL, table=Project)
accounts = DB(url=URL, table=Account)
projects.read_all()
accounts.read(4)
</code></pre>
<p>However, <code>projects</code> and <code>accounts</code> now use different engines. I am not that familiar with ORMs, but I could imagine that this is undesirable.</p>
<p>Is there a way to only create the engine once and keep the usage of the class the same, while still being able to connect to multiple DBs?</p>
<h2>Solution idea (only if the URL stays the same)</h2>
<p>I was thinking of something like this, which however only works when the url stays the same for all instances. This however already causes my tests to not pass anymore, since they use a temporary DB which is different for each test function. And if I ever wanted to connect to more than 1 DB, this wouldn't work anymore either.</p>
<p>Maybe there is something more robust than this?</p>
<pre class="lang-py prettyprint-override"><code>class DB:
engine = None
def __init__(self, url: str, table: SQLModel, *, echo=False):
self.url = url
self.table = table
if DB.engine is None:
DB.engine = create_engine(url, echo=echo)
def create_metadata(self):
SQLModel.metadata.create_all(self.engine)
</code></pre>
<h2>Possible alternative?</h2>
<p>The second solution I thought of is this, which however adds an extra step to getting the DB connection. It solves the issues of the solutions above though: I only create one engine per DB, while still being able to connect to multiple DBs. It also works for my tests. It complicates setup though when using the DB wrapper, I now need to keep track of the DB instances AND the engine instance.</p>
<pre class="lang-py prettyprint-override"><code>class Engine:
def __init__(self, url, *, echo=False):
self.url = url
self.engine = create_engine(url, echo=echo)
def create_metadata(self):
"""Creates metadata, call only once per database connection."""
SQLModel.metadata.create_all(self.engine)
class DB:
def __init__(self, table: SQLModel, engine):
self.table = table
self.engine = engine.engine
</code></pre>
<p>Usage:</p>
<pre class="lang-py prettyprint-override"><code>from db import Engine, DB
from models import Project, Account
URL = "sqlite:///database.db"
engine = Engine(URL)
projects = DB(table=Project, engine=engine)
accounts = DB(table=Account, engine=engine)
projects.read_all()
accounts.read(4)
</code></pre>
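<p>Another possible pattern: memoise engine creation per URL at module level, which keeps the <code>DB</code> usage unchanged while still creating exactly one engine per database. A sketch with a stand-in factory (swap <code>_make_engine</code> for <code>sqlalchemy.create_engine</code>; the stand-in only keeps the snippet self-contained):</p>

```python
from functools import lru_cache

def _make_engine(url, echo=False):
    # Stand-in for sqlalchemy.create_engine(url, echo=echo).
    return object()

@lru_cache(maxsize=None)
def get_engine(url, echo=False):
    """One engine per (url, echo) pair, shared by every DB instance."""
    return _make_engine(url, echo)
```

<p>Then <code>DB.__init__</code> would do <code>self.engine = get_engine(url, echo=echo)</code>; tests that use a temporary DB automatically get their own engine because the URL differs.</p>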
|
<python><sqlalchemy><orm><sqlmodel>
|
2023-12-04 11:12:23
| 1
| 465
|
Jan
|
77,598,864
| 243,031
|
Secure connection between django and mysql
|
<p>We have Django application and we are configuring <code>ssl</code> in the <code>DATABASES</code> settings in <code>settings.py</code>.</p>
<p><strong>settings.py</strong></p>
<pre><code>DATABASES['default']['ENGINE'] = 'django.db.backends.mysql'
DATABASES['default']['NAME'] = 'database'
DATABASES['default']['HOST'] = "v2_host"
DATABASES['default']['USER'] = "v2_user"
DATABASES['default']['PASSWORD'] = "password"
DATABASES['default']['CONN_MAX_AGE'] = 180
DATABASES['default']['OPTIONS'] = {'init_command': 'SET SESSION wait_timeout=300',
'ssl': {'cert': "/home/myuser/certpath",
'key': "/home/myuser/keypath",
}
}
</code></pre>
<p>We are able to connect to the database using this. The DB team recently enforced secure/encrypted connections from clients. Since then, we have started getting the error <code>Connections using insecure transport are prohibited while --require_secure_transport=ON.</code></p>
<p>We thought passing <code>ssl</code> meant a secure connection, but we are still getting the error. What do we have to pass to make sure the connection between Django and MySQL is secure?</p>
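<p>One thing worth checking: the mysqlclient <code>ssl</code> option dict also accepts a <code>'ca'</code> entry, and some drivers only negotiate TLS once the server CA is supplied. A hedged sketch (paths are placeholders for the Cloud SQL certificate bundle; whether this alone satisfies <code>require_secure_transport</code> depends on the driver and server setup):</p>

```python
# Sketch only: standalone so it can run; in settings.py DATABASES already exists.
DATABASES = {'default': {}}
DATABASES['default']['OPTIONS'] = {
    'init_command': 'SET SESSION wait_timeout=300',
    'ssl': {
        'ca': '/home/myuser/server-ca.pem',      # hypothetical path
        'cert': '/home/myuser/client-cert.pem',  # hypothetical path
        'key': '/home/myuser/client-key.pem',    # hypothetical path
    },
}
```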
|
<python><mysql><django><insecure-connection>
|
2023-12-04 10:52:24
| 0
| 21,411
|
NPatel
|
77,598,815
| 10,353,865
|
tf-neural network not working - pytorch does
|
<p>I have created a tiny dataset where an exact linear relationship holds. The code is as follows:</p>
<pre><code>import numpy as np
def gen_data(n, k):
np.random.seed(5711)
beta = np.random.uniform(0, 1, size=(k, 1))
print("beta is:", beta)
X = np.random.normal(size=(n, k))
y = X.dot(beta).reshape(-1, 1)
D = np.concatenate([X, y], axis=1)
return D.astype(np.float32)
</code></pre>
<p>I have fitted a PyTorch neural network with an SGD optimizer and MSE loss, and it converged approximately to the true values within 50 epochs at a learning rate of 1e-1.</p>
<p>I tried to set up exactly the same model in TensorFlow:</p>
<pre><code>import keras.layers
from sklearn.model_selection import train_test_split
from keras.models import Sequential
import tensorflow as tf
n = 10
k = 2
X = gen_data(n, k)
D_train, D_test = train_test_split(X, test_size=0.2)
X_train, y_train = D_train[:,:k], D_train[:,k:]
X_test, y_test = D_test[:,:k], D_test[:,k:]
model = Sequential([keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.SGD(lr=1e-1), loss=tf.keras.losses.mean_squared_error)
model.fit(X_train, y_train, batch_size=64, epochs=50)
</code></pre>
<p>When I call <code>model.get_weights()</code> it shows substantial differences from the true values, and the loss is still not even close to zero. I don't know why this model does not perform as well as the PyTorch model. Even disregarding the PyTorch model, shouldn't the network converge to the true values on this tiny toy dataset? What is my error in setting up the model?</p>
<p>EDIT: And here is my full PyTorch code for comparison:</p>
<pre><code>import torch
from torch.utils.data import DataLoader, Dataset, Sampler, SequentialSampler, RandomSampler
from torch import nn
from sklearn.model_selection import train_test_split
n = 10
k = 2
device = "cpu"
class Daten(Dataset):
def __init__(self, df):
self.df = df
self.ycol = df.shape[1] - 1
def __getitem__(self, index):
return self.df[index, :self.ycol], self.df[index, self.ycol:]
def __len__(self):
return self.df.shape[0]
def split_into(D, batch_size=64, **kwargs):
D_train, D_test = train_test_split(D, **kwargs)
df_train, df_test = Daten(D_train), Daten(D_test)
dl_train, dl_test = DataLoader(df_train, batch_size=batch_size), DataLoader(df_test, batch_size=batch_size)
return dl_train, dl_test
D = gen_data(n, k)
dl_train, dl_test = split_into(D, test_size=0.2)
class NeuralNetwork(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Sequential(
nn.Linear(k, 1)
)
def forward(self, x):
ypred = self.linear(x)
return ypred
model = NeuralNetwork().to(device)
print(model)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
X, y = X.to(device), y.to(device)
print(y.shape)
# Compute prediction error
pred = model(X)
loss = loss_fn(pred, y)
# Backpropagation
loss.backward()
optimizer.step()
optimizer.zero_grad()
if batch % 100 == 0:
loss, current = loss.item(), (batch + 1) * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
epochs = 50
for t in range(epochs):
print(f"Epoch {t + 1}\n-------------------------------")
train(dl_train, model, loss_fn, optimizer)
print("Done!")
</code></pre>
<p>EDIT:</p>
<p>I increased the number of epochs dramatically. After 1000 epochs we come close to the true values. Therefore my best guess for the discrepancy is that TensorFlow applies some non-optimal initialization?</p>
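<p>As a sanity check on the data itself: since <code>y = X @ beta</code> holds exactly, an ordinary least-squares fit recovers <code>beta</code> to machine precision, so any correctly trained linear layer should be able to get arbitrarily close too:</p>

```python
import numpy as np

def recover_beta(D, k):
    """Closed-form OLS fit on data laid out as [X | y], as gen_data returns."""
    X, y = D[:, :k], D[:, k:]
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta_hat
```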
|
<python><tensorflow><neural-network>
|
2023-12-04 10:46:42
| 1
| 702
|
P.Jo
|
77,598,668
| 12,415,863
|
convert list into list of lists: each sublist is a different length, and method of filling sublists is different
|
<p>input I have:</p>
<p><code>big_list = [1,2,3,4,5,6,7,8]</code></p>
<p>output I want:</p>
<p><code>list_of_lists = [[1],[2,5],[3,6,8],[4,7]]</code></p>
<p>I have manually defined the output I need as a list-of-lists as the following:</p>
<ul>
<li>1st sublist length = 1</li>
<li>2nd sublist length = 2</li>
<li>3rd sublist length = 3</li>
<li>4th sublist length = 2</li>
</ul>
<p>So the total number of elements in the output <code>list_of_lists</code> = 1+2+3+2 = 8. The same number of elements as in the input, <code>big_list</code>.</p>
<p>I want to fill the sublists by first filling the 1st element of each sublist, and then the 2nd element of each sublist, etc etc, so I get this:</p>
<p><code>list_of_lists = [[1],[2,5],[3,6,8],[4,7]]</code></p>
<p>To clarify, I know how to create a list of lists. And I've seen posts like <a href="https://stackoverflow.com/questions/312443/how-do-i-split-a-list-into-equally-sized-chunks">this</a>, and <a href="https://stackoverflow.com/questions/38619143/convert-python-sequence-to-numpy-array-filling-missing-values">this</a> and <a href="https://stackoverflow.com/questions/43146266/convert-list-of-lists-with-different-lengths-to-a-numpy-array">this</a>.</p>
<p>But these answers are either about creating a list-of-lists where the sublists are of equal lengths, or they deal with sublists of different lengths by creating sublists of the largest length and filling the smaller ones with a "filler" value. Additionally, most answers create the first sublist from the first n elements, the second sublist from the second n elements, etc.</p>
<p>Neither of these is what I want.</p>
<p>Does anyone know how to do this? Thanks in advance.</p>
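<p>To restate the fill rule as code: walk column by column, appending the next element of the flat list to every sublist that is still shorter than its target length — a sketch of the column-first fill described above (the function name is mine):</p>

```python
def fill_columnwise(flat, lengths):
    """Distribute `flat` column-first into sublists of the given lengths."""
    sublists = [[] for _ in lengths]
    it = iter(flat)
    for col in range(max(lengths)):
        for sub, n in zip(sublists, lengths):
            if col < n:          # this sublist still has room in this column
                sub.append(next(it))
    return sublists
```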
|
<python><list>
|
2023-12-04 10:23:10
| 1
| 301
|
abra
|
77,598,619
| 17,136,258
|
Create waterfall chart
|
<p>I have a problem. I want to create a waterfall chart, but unfortunately it does not look the way I want. For example, the numbers are not correctly placed and the text is missing at the end. Besides that, I want to add a connecting line so that it looks more like a waterfall.</p>
<p>How can I create the waterfall chart more what I want?</p>
<p><a href="https://i.sstatic.net/mVPpe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mVPpe.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
data = {
'Market': ['EU', 'EU', 'US', 'US', 'CN'],
'Vehicle ID': ['001-1234', '001-1254', '003-7485', '001-1232', '001-2138'],
'Status': ['Rec.', 'Rec.', 'Development', 'Stopped', 'Development']
}
df = pd.DataFrame(data)
filtered_df = df[df['Vehicle ID'].str.startswith('001')]
status_order = [
'Wachting', 'Stopped', 'Development', 'Rec.',
]
status_counts = filtered_df['Status'].value_counts().reindex(status_order, fill_value=0)
total_count = df.shape[0]
status_counts = pd.concat([status_counts, pd.Series({'Total': total_count})]).reindex(['Total'] + status_order)  # Series.append was removed in pandas 2.x
plt.bar(status_counts.index, status_counts)
plt.xlabel('Status')
plt.ylabel('Number of Vehicles')
plt.title('Distribution of Vehicle Status for vehicles starting with "001"')
for index, value in enumerate(status_counts):
plt.text(index, -0.1, str(value), ha='center')
plt.gca().axes.get_yaxis().set_visible(False)
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/yvbuH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yvbuH.png" alt="enter image description here" /></a></p>
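<p>Conceptually, a waterfall is just a bar chart in which each bar is lifted by the cumulative sum of the bars before it, e.g. passed to <code>plt.bar(..., bottom=offsets)</code>. A small sketch of computing those offsets (helper name is mine):</p>

```python
from itertools import accumulate

def waterfall_offsets(values):
    """Bottom offset for each bar: cumulative sum of all preceding values."""
    return [0] + list(accumulate(values))[:-1]
```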
|
<python><pandas><dataframe><matplotlib><seaborn>
|
2023-12-04 10:14:52
| 3
| 560
|
Test
|
77,598,509
| 12,978,930
|
JAX-metal error: 'anec.reshape' op failed: input tensor dimensions are not supported on ANEs
|
<p>I just tested <a href="https://developer.apple.com/metal/jax/" rel="nofollow noreferrer"><code>jax-metal</code></a> on my M1 Max MacBook Pro. However, I already get a warning when loading the MNIST dataset as follows.</p>
<pre class="lang-py prettyprint-override"><code>import gzip
import os
import struct
import urllib.request
import jax.numpy as jnp
def mnist():
url_dir = "https://storage.googleapis.com/cvdf-datasets/mnist"
target_dir = os.getcwd() + "/data/mnist"
# download images and labels into data folder
url = f"{url_dir}/train-images-idx3-ubyte.gz"
target = f"{target_dir}/train-images-idx3-ubyte.gz"
if not os.path.exists(target):
os.makedirs(target_dir, exist_ok=True)
urllib.request.urlretrieve(url, target)
print(f"Downloaded {url} to {target}")
# load images into memory
target = f"{target_dir}/train-images-idx3-ubyte.gz"
with gzip.open(target, "rb") as fh:
_, batch, rows, cols = struct.unpack(">IIII", fh.read(16))
shape = (batch, 1, rows, cols)
images = jnp.frombuffer(fh.read(), dtype=jnp.uint8).reshape(shape)
return images
if __name__ == "__main__":
images = mnist()
print(images.shape)
</code></pre>
<p>The warning states:</p>
<blockquote>
<p>loc("jit(reshape)/jit(main)/reshape[new_sizes=(60000, 1, 28, 28)
dimensions=None]"("train_stack.py":25:0)): error: 'anec.reshape' op
failed: input tensor dimensions are not supported on ANEs.</p>
</blockquote>
<p>Here, (60000, 1, 28, 28) is the shape of the images in the MNIST dataset. Still, the program finishes running successfully. Now, I am wondering about the following.</p>
<ul>
<li>What are the consequences of <code>anec.reshape</code> not being supported?</li>
<li>Can I avoid this warning (by side-stepping the <code>reshape</code> in some way) or do we just need to wait until <code>anec.reshape</code> is fully supported?</li>
</ul>
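<p>One possible side-step (untested on jax-metal, so treat it as an assumption): do the <code>frombuffer</code> and <code>reshape</code> on the host with NumPy, then hand the already-shaped array to JAX via <code>jnp.asarray(images)</code>, so no device-side reshape op is ever emitted:</p>

```python
import numpy as np

def load_images(raw, batch, rows, cols):
    """Decode MNIST image bytes and reshape entirely on the host with NumPy."""
    return np.frombuffer(raw, dtype=np.uint8).reshape(batch, 1, rows, cols)
```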
<p><strong>Extra information.</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>GPU</td>
<td>Apple M1 Max</td>
</tr>
<tr>
<td>Operating System</td>
<td>MacOS Sonoma v14.1</td>
</tr>
<tr>
<td><code>jax-metal</code></td>
<td>0.0.4</td>
</tr>
</tbody>
</table>
</div>
|
<python><tensorflow><metal><jax>
|
2023-12-04 09:55:03
| 0
| 12,603
|
Hericks
|
77,598,507
| 6,394,722
|
How to make ElementTree parse XML without changing namespace declarations?
|
<p>I have the following XML:</p>
<p><strong>data3.xml:</strong></p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<feed
xmlns="http://www.w3.org/2005/Atom">
<entry>
<content>
<ns2:executionresult
xmlns:ns2="http://jazz.net/xmlns/alm/qm/v0.1/"
xmlns:ns4="http://purl.org/dc/elements/1.1/">
<ns4:title>xxx</ns4:title>
</ns2:executionresult>
</content>
</entry>
</feed>
</code></pre>
<p>I have the following code:</p>
<p><strong>test3.py:</strong></p>
<pre><code>import xml.etree.ElementTree as ET
tree = ET.parse('data3.xml')
root = tree.getroot()
xml_str = ET.tostring(root).decode()
print(xml_str)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>$ python3 test3.py
<ns0:feed xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:ns0="http://www.w3.org/2005/Atom" xmlns:ns1="http://jazz.net/xmlns/alm/qm/v0.1/">
<ns0:entry>
<ns0:content>
<ns1:executionresult>
<dc:title>xxx</dc:title>
</ns1:executionresult>
</ns0:content>
</ns0:entry>
</ns0:feed>
</code></pre>
<p>I wonder why <code>ElementTree</code> automatically changes the namespace prefixes for me. What's the rule? And how can I avoid it?</p>
|
<python><xml><elementtree><xml-namespaces>
|
2023-12-04 09:54:37
| 1
| 32,101
|
atline
|
77,598,018
| 4,640,936
|
odoo 16 create a link that opens in set company
|
<p>What I am trying to do is create a link that will open in a set company. Right now, when I create the URL using <code>base_url</code> with a link to a record, it tries to open in the current company and so gives an access error.</p>
<p>Thank you for your suggestions.</p>
<pre><code> def make_record_link(self, record='', action='', menu=''):
link = ''
record = record or self
if record:
ir_param = self.env['ir.config_parameter'].sudo()
base_url = ir_param.get_param('web.base.url')
if base_url:
base_url += '/web#&model=%s&id=%s' % (record._name,record.id)
link = '<a href="'+base_url+'">'+record.name+'</a>'
return str(link)
</code></pre>
<p>I added more information for context: the function is based on the Odoo code, but I am not sure how to represent the company.</p>
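<p>A hedged sketch of the idea as a plain string builder — the <code>cids</code> fragment parameter that recent Odoo web clients appear to read for the active companies is an assumption; verify it against your Odoo 16 instance:</p>

```python
def make_company_record_link(base_url, model, rec_id, name, company_id):
    # NOTE: assumption — the web client selects the active companies from a
    # `cids` parameter in the URL fragment; check this in your Odoo 16 setup.
    url = '%s/web#model=%s&id=%s&cids=%s' % (base_url, model, rec_id, company_id)
    return '<a href="%s">%s</a>' % (url, name)
```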
|
<python><odoo><odoo-16>
|
2023-12-04 08:14:01
| 1
| 740
|
Moaz Mabrok
|
77,597,847
| 684,883
|
How to declare Enum and Custom classes in numba jitclass spec?
|
<p>I am trying out numba (v0.56.4). I have a Python enum <code>Color</code> that I want to declare in the <code>@jitclass</code> spec for class <code>Paint</code>. Can you please let me know the right way to declare it?</p>
<pre><code>from enum import Enum

from numba import types
from numba.experimental import jitclass

class Color(Enum):
    RED = 1
    BLUE = 2
    GREEN = 3

spec = [('name', types.string), ('color', Color)]

@jitclass(spec)
class Paint:
    def __init__(self, name, color):
        self.name = name
        self.color = color
</code></pre>
<p>When I directly give Enum Color in spec, I am getting error TypeError:</p>
<blockquote>
<p>spec values should be Numba type instances, got <enum 'Color'></p>
</blockquote>
<p>If I try to give int32 or int64 instead of enum Color, then I get error:</p>
<pre><code>('color', types.int64) # gives error
</code></pre>
<blockquote>
<p>Cannot cast Enum(Color) to int64.</p>
</blockquote>
<p>If I create a wrapper class for Color with @jitclass on ColorWrapper, I am getting error</p>
<blockquote>
<p>TypeError: spec values should be Numba type instances, got <class
'numba.experimental.jitclass.base.ColorWrapper'></p>
</blockquote>
<p>As per the numba documentation I can use enums in <code>njit</code> functions, but I could not find a way to declare Enum types and custom classes in another class's <code>@jitclass</code> spec.</p>
<p>Thanks</p>
|
<python><numba>
|
2023-12-04 07:41:06
| 1
| 1,355
|
Yogesh
|
77,597,489
| 1,080,517
|
Extract trigger words from safetensors file
|
<p>Many <code>.safetensors</code> files are LoRAs whose authors specify &quot;trigger words&quot;. I've been trying to find a way to extract them from the <code>.safetensors</code> file, but I cannot find them. They are not present in the metadata section, and according to the documentation I cannot think of any other place where they could be.</p>
<p>On the other hand, I know that those trigger words are working, because I am getting image generated when I'm providing them.</p>
<p>I thought that something like this would work:</p>
<pre><code>from safetensors import safe_open

tensors = {}
with safe_open("lora.safetensors", framework="pt") as f:
    print(f.metadata())
</code></pre>
<p>But it seems the access is more complicated. Any suggestions?</p>
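<p>One thing that works from first principles: the <code>.safetensors</code> layout is an 8-byte little-endian header length followed by a JSON header, with training metadata (if any) under the <code>__metadata__</code> key — kohya-style trainers often put tag/trigger information in keys like <code>ss_tag_frequency</code>, though that key name depends on how the LoRA was trained. A stdlib sketch reading the raw header:</p>

```python
import json
import struct

def read_safetensors_header(path):
    """Return the raw JSON header of a .safetensors file.

    The file starts with an 8-byte little-endian unsigned integer giving
    the header length, followed by that many bytes of JSON."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))
```

<p>Then <code>read_safetensors_header("lora.safetensors").get("__metadata__", {})</code> shows everything the file actually stores; if the trigger words are not there, they only exist on the model card, not in the file.</p>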
|
<python><safe-tensors>
|
2023-12-04 06:08:27
| 1
| 2,713
|
sebap123
|
77,597,141
| 17,800,932
|
MyPy not ignoring `valid-type` error code for type aliases
|
<p>Python 3.12 added support for type aliases as defined in <a href="https://peps.python.org/pep-0695/" rel="nofollow noreferrer">PEP 695</a>. MyPy does not support this, as discussed in <a href="https://github.com/python/mypy/issues/15238" rel="nofollow noreferrer">https://github.com/python/mypy/issues/15238</a>. However, any instance of a <code>type</code> statement results in a MyPy error. It isn't clear why this strategy was chosen, since lack of support of a valid feature should not be an error.</p>
<p>Since I am using Python 3.12 and the <code>type</code> statement, I want to disable this error. However, I am having trouble doing so.</p>
<p>I have tried the following ways:</p>
<ul>
<li>In <code>pyproject.toml</code>, I have:
<pre><code>[tool.mypy]
disable_error_code = 'valid-type'
</code></pre>
</li>
<li><code>type Number = int | float # type: ignore[valid-type]</code>. This even yields an <code>error: Unused "type: ignore" comment [unused-ignore]</code> when MyPy is run.</li>
<li><code>type Number = int | float # mypy: disable-error-code="valid-type"</code></li>
</ul>
<p>None of these work. In all three cases, I get the error:</p>
<pre><code>error: PEP 695 type aliases are not yet supported [valid-type]
</code></pre>
<p>What am I doing wrong? How do I get MyPy to ignore any PEP 695 error? I have read the documentation here: <a href="https://mypy.readthedocs.io/en/stable/error_codes.html#error-codes" rel="nofollow noreferrer">https://mypy.readthedocs.io/en/stable/error_codes.html#error-codes</a>.</p>
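<p>For what it's worth, newer mypy releases added experimental PEP 695 support behind an incomplete-feature flag — a hedged sketch for <code>pyproject.toml</code> (the feature name is my assumption based on mypy ≥ 1.11; verify it against the release notes of your version):</p>

```toml
[tool.mypy]
enable_incomplete_feature = ["NewGenericSyntax"]
```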
|
<python><mypy><python-typing><python-3.12>
|
2023-12-04 03:52:13
| 0
| 908
|
bmitc
|
77,596,665
| 5,775,965
|
Calling Python code in Swift Callback from Python called from Swift
|
<p>Pardon the confusing title but I am doing something truly cursed.</p>
<p>I have a large project written in Python that I want to write a GUI in Swift for. I'm able to use PythonKit to call my Python code from Swift just fine.</p>
<p>The only problem is that the Python program deals with laying out the UI elements since they form a directed graph and the layout algorithm is non-trivial.</p>
<p>The layout algorithm needs to know the size of each node in the graph to deal with overlapping nodes, but only the GUI knows what the rendered size of each node will be.</p>
<p>Therefore, I want to pass a callback written in Swift to the layout function that is written in Python.</p>
<p>I've pared down the code to be able to minimally reproduce the behavior.</p>
<pre><code>let nodeSizeFn: @convention(c) (UnsafeMutableRawPointer?) -> UnsafeMutableRawPointer? = { _ in
return PyInt_FromLong(30)
}
let nodeSizeFnPtr = unsafeBitCast(nodeSizeFn, to: UnsafeMutableRawPointer.self)
let nodeSizeFnAddr = UInt(bitPattern: nodeSizeFnPtr)
private let ctypes = Python.import("ctypes")
let sizefn = ctypes.CFUNCTYPE(ctypes.py_object)(nodeSizeFnAddr)
let (xs, ys, ws, hs) = graph.layout(sizefn).tuple4
</code></pre>
<p>This code segfaults in the call to <code>PyInt_FromLong</code> in the callback. From using LLDB, it seems that the <code>SMALL_INT</code> singleton/global object/static object (in CPython parlance) is null in the callback. I'm not sure how this is possible since PythonKit calls <code>Py_Initialize</code> and a similar call to <code>PyInt_FromLong</code> outside of the callback works fine.</p>
<p>Any and all help is appreciated. Thank you.</p>
|
<python><swift><ffi><swift-pythonkit>
|
2023-12-04 00:23:11
| 0
| 1,159
|
genghiskhan
|
77,596,649
| 6,379,348
|
Validation errors for LLMChain when use langchain/openai for text generation
|
<p>I just started learning LangChain. I was trying to use LangChain and OpenAI for text generation. Code below (I use Python 3.10):</p>
<pre><code>#openai==1.3.5
#langchain==0.0.345
from transformers import pipeline
from langchain import PromptTemplate, LLMChain, OpenAI
import os

os.environ['OPENAI_API_KEY'] = '*****'

def generate_story(scenario):
    template = '''
    You are a story teller. Please make a story based on the context below:

    CONTEXT: {scenario}
    STORY:
    '''
    prompt = PromptTemplate(template=template, input_variables=['scenario'])
    story_llm = LLMChain(LLM = OpenAI(
        model_name='gpt-3.5-turbo',
        temperature=1),
        prompt=prompt
    )
    story = story_llm.predict(scenario=scenario)
    print(story)
    return story

scenario = 'A person is skiing down a snowy slope.'
story = generate_story(scenario)
print(story)
</code></pre>
<p>However, I got errors like below:</p>
<pre><code>/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()

ValidationError: 2 validation errors for LLMChain
llm
  field required (type=value_error.missing)
LLM
  extra fields not permitted (type=value_error.extra)
</code></pre>
<p>Spent hours trying to find out why but could not figure that out. Does anyone know what went wrong? Thanks in advance.</p>
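<p>As an illustration of the mechanism (not library documentation): the two validation errors say that <code>llm</code> is missing while <code>LLM</code> is an unexpected extra field, i.e. the keyword was capitalized, and keyword names in Python are case-sensitive. A stdlib-only analogy with a hypothetical stand-in function:</p>

```python
def make_chain(llm, prompt):
    """Hypothetical stand-in for LLMChain(llm=..., prompt=...)."""
    return (llm, prompt)

try:
    make_chain(LLM="gpt-3.5-turbo", prompt="...")  # wrong case, like LLMChain(LLM=...)
except TypeError as err:
    print(type(err).__name__)  # TypeError

print(make_chain(llm="gpt-3.5-turbo", prompt="...")[0])  # gpt-3.5-turbo
```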
|
<python><openai-api><langchain><py-langchain>
|
2023-12-04 00:17:02
| 1
| 11,903
|
zesla
|
77,596,591
| 1,476,044
|
Is MLPClassifier Appropriate for Binary Classification?
|
<p>I've written a program that uses MLPClassifier to solve a binary classification problem. It sort of works but I am not convinced this is the right model to use.</p>
<p>I have 1300 hextuples of integers to put into one of two classes: class 0 and class 1. A potential issue is that in my training data, 98% are in class 0 so I would get a 98% accuracy from a "prediction function" that always returned "class 0" irrespective of the input.</p>
<p>Is there a machine-learning model that is designed for this type of problem?</p>
<p>==============================================================</p>
<p>TLDR?</p>
<p>My data looks like:</p>
<pre><code>X = array([[ 0, 11, 51, 13, 0, 9],
           [51, 13, 0, 9, 0, 11],
           [ 0, 8, 0, 10, 0, 13],
           ...,
           [ 0, 11, 61, 12, 0, 8],
           [ 0, 8, 0, 0, 60, 11],
           [30, 11, 0, 6, 0, 9]])
</code></pre>
<p>The target is y, a list of 1300 0s and 1s. I used MLPClassifier and got a prediction accuracy of 98%. This was when it occurred to me that by coincidence 98% of the tuples are in class 0, so I would get an accuracy of 98% if I didn't bother with any machine learning and instead guessed the class was always 0.</p>
<p>I checked the fit to see how it fared on just the class 1 tuples, and found that 82% of them are correctly predicted so the accuracy is 82% of 98% i.e. around 80% which I would like to improve, but how?</p>
<p>I have no idea how to change the parameters to MLPClassifier other than blindly increasing the size/number of layers, but it occurs to me that I may well be using entirely the wrong sort of model for a learning problem with a Yes/No classification. Also, the integers in the hextuples are not arbitrary and it occurred to me that this may also be relevant to the choice of model. In particular, three of the six inputs are always in the range 0 - 15, and the other three are 2-digit codes with three possibilities for the first digit and two for the second.</p>
<p>Any thoughts gratefully received.</p>
<p>Code:</p>
<pre><code>m = MLPClassifier(hidden_layer_sizes = (256, 128, 64), max_iter=10000) # Pasted from example I found somewhere on the net :(
_ = m.fit(X, y)
yhat = m.predict(X)
cm = confusion_matrix(y, yhat)
print( 'Accuracy = ', np.mean( y == yhat ) )
print( cm )
</code></pre>
<p>Output:</p>
<pre><code>Accuracy = 0.9816653934300993
[[1267   13]
 [  11   18]]
(pdb) class1 = [ i for i in range(len(y)) if y[i] == 1]
(pdb) z = m.predict(np.row_stack((X[q] for q in class1)))
(pdb) z
array([0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1])
(pdb) len(z), sum(z)
29, 24
</code></pre>
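<p>To make the accuracy-baseline concern concrete, here is a small numpy sketch (class sizes mirror the confusion matrix above; the labels themselves are made up). A degenerate classifier that always predicts class 0 scores about 98% accuracy while recalling none of class 1:</p>

```python
import numpy as np

y = np.array([0] * 1280 + [1] * 29)   # 1309 samples, roughly 2% in class 1
yhat = np.zeros_like(y)               # a "classifier" that always says class 0

accuracy = (y == yhat).mean()
recall_1 = yhat[y == 1].mean()        # fraction of class-1 samples caught
print(round(accuracy, 3), recall_1)   # 0.978 0.0
```

This is why per-class recall (or a confusion matrix) is a more informative score than raw accuracy on data this imbalanced.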
|
<python><machine-learning><scikit-learn>
|
2023-12-03 23:53:22
| 0
| 397
|
user1476044
|
77,596,485
| 13,497,079
|
'anyio' has no attribute 'start_blocking_portal'
|
<p>One of my FastAPI endpoints is supposed to return a CSV response.
When I try to test this endpoint, I get this error:</p>
<pre><code>self = <starlette.testclient.TestClient object at 0x7f834a5f6ec0>

    @contextlib.contextmanager
    def _portal_factory(
        self,
    ) -> typing.Generator[anyio.abc.BlockingPortal, None, None]:
        if self.portal is not None:
            yield self.portal
        else:
>           with anyio.start_blocking_portal(**self.async_backend) as portal:
E           AttributeError: module 'anyio' has no attribute 'start_blocking_portal'
</code></pre>
<p>This is how the test look like</p>
<pre><code>from starlette.testclient import TestClient


def test_endpoint(client: TestClient, file_regression):
    response = client.get("/bla")
    file_regression.check(response.content, extension=".csv", binary=True)
</code></pre>
<p>my requirements.txt</p>
<pre><code>black
fastapi==0.85.0
isort
pytest
pytest-regressions
requests
uvicorn
</code></pre>
|
<python><pytest><fastapi><starlette><anyio>
|
2023-12-03 23:01:12
| 1
| 534
|
Potis23
|
77,596,388
| 295,930
|
Loading a Python pickle slows down in a for loop
|
<p>I have a 25GB pickle of a dictionary of numpy arrays.
The dictionary looks like the following:</p>
<ul>
<li>668,956 key-value pairs.</li>
<li>The keys are strings. Example key:
<code>"109c3708-3b0c-4868-a647-b9feb306c886_1"</code></li>
<li>The values are numpy arrays of shape <code>200x23</code>, type <code>float64</code></li>
</ul>
<p>When I load the data using pickle repeatedly in a loop, the time to load slows down (see code and result below). What could be causing this?</p>
<p>Code:</p>
<pre><code>import pickle
import time


def load_pickle(file: int) -> dict:
    with open(f"D:/data/batched/{file}.pickle", "rb") as handle:
        return pickle.load(handle)


for i in range(0, 9):
    print(f"\nIteration {i}")

    start_time = time.time()
    file = None
    print(f"Unloaded file in {time.time() - start_time:.2f} seconds")

    start_time = time.time()
    file = load_pickle(0)
    print(f"Loaded file in {time.time() - start_time:.2f} seconds")
</code></pre>
<p>Result:</p>
<pre><code>Iteration 0
Unloaded file in 0.00 seconds
Loaded file in 18.80 seconds
Iteration 1
Unloaded file in 14.78 seconds
Loaded file in 30.51 seconds
Iteration 2
Unloaded file in 28.67 seconds
Loaded file in 30.21 seconds
Iteration 3
Unloaded file in 35.38 seconds
Loaded file in 40.25 seconds
Iteration 4
Unloaded file in 39.91 seconds
Loaded file in 41.24 seconds
Iteration 5
Unloaded file in 43.25 seconds
Loaded file in 45.57 seconds
Iteration 6
Unloaded file in 46.94 seconds
Loaded file in 48.19 seconds
Iteration 7
Unloaded file in 51.67 seconds
Loaded file in 51.32 seconds
Iteration 8
Unloaded file in 55.25 seconds
Loaded file in 56.11 seconds
</code></pre>
<p>Notes:</p>
<ul>
<li>During the processing of the loop, the RAM usage ramps down (I assume dereferencing the previous data in the <code>file</code> variable) before ramping up again. Both the unloading and loading parts seem to slow down over time. It surprises me how slowly the RAM decreases in the unloading part.</li>
<li>The total RAM usage it ramps up to stays about constant (it doesn't seem like there's a memory leak).</li>
<li>I've tried including <code>del file</code> and <code>gc.collect()</code> in the loop, but this doesn't speed anything up.</li>
<li>If I change <code>return pickle.load(handle)</code> to <code>return handle.read()</code>, the unload time is consistently 0.45s and load time is consistently 4.85s.</li>
<li>I'm using Python 3.9.13 on Windows with SSD storage (<code>Python 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:51:29) [MSC v.1929 64 bit (AMD64)]</code>).</li>
<li>I have 64GB RAM and don't seem to be maxing this out.</li>
<li>Why am I doing this? During training of an ML model, I have 10 files that are each 25GB big. I can't fit them all into memory simultaneously, so I have to load and unload them each epoch.</li>
</ul>
<p>Any ideas? I'd be willing to move away from using pickle too if there's an alternative that has similar read speed and doesn't suffer from the above problem (I'm not worried about compression).</p>
<p>Edit:
I've run the above loading and unloading loop for different sized pickles. Results below showing the relative change in speed over time. For anything above 3 GB, the unload time starts to significantly ramp.</p>
<p><a href="https://i.sstatic.net/4o2vU.png" rel="noreferrer"><img src="https://i.sstatic.net/4o2vU.png" alt="Unload time relative to first iteration" /></a>
<a href="https://i.sstatic.net/qj1yg.png" rel="noreferrer"><img src="https://i.sstatic.net/qj1yg.png" alt="Load time relative to first iteration" /></a></p>
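<p>One alternative worth benchmarking, sketched here with a tiny stand-in dataset and under the assumption that every value shares the 200x23 shape: stack the values into a single array saved with <code>np.save</code> plus a key index, so loading is one contiguous read instead of hundreds of thousands of separately allocated pickled objects:</p>

```python
import os
import tempfile

import numpy as np

keys = [f"key_{i}" for i in range(1000)]      # stand-in for the 668,956 keys
stacked = np.random.rand(len(keys), 200, 23)  # all values share one shape

out_path = os.path.join(tempfile.mkdtemp(), "batch.npy")
np.save(out_path, stacked)

loaded = np.load(out_path)                         # a single contiguous read
data = {k: loaded[i] for i, k in enumerate(keys)}  # views, not copies
print(data["key_0"].shape)  # (200, 23)
```

Because the dictionary values here are views into one array, unloading is a single large deallocation rather than hundreds of thousands of small ones.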
|
<python><pickle>
|
2023-12-03 22:25:49
| 1
| 486
|
The Salt
|
77,596,323
| 10,749,925
|
How do I run the following Python and Boto3 code?
|
<p>I'm trying to explore machine learning models from AWS. AWS recently released the Bedrock service, and I would like to set up an environment to use it. I would also need to interact with S3, Langchain and vector databases (e.g. Pinecone). There's a bit of an idea here on what I'd likely be playing with, and I'd try to follow this: <a href="https://github.com/generative-ai-on-aws/generative-ai-on-aws/blob/main/12_bedrock/01_bedrock_overview.ipynb" rel="nofollow noreferrer">EXAMPLE</a></p>
<p>I have VS Code on my PC, and I've seen a guide that details some of the steps to set this up with Python and Boto3: <a href="https://hands-on.cloud/install-boto3-python/" rel="nofollow noreferrer">GUIDE</a></p>
<p>The thing I see from the Example link is that I'm guessing a Jupyter notebook is used. Also (and please excuse my naivety), if I'm putting S3 files into a vector database, these services would <em><strong>surely</strong></em> need to be spun up (or set up) and paid for by me. Otherwise, where would this vector database be located, and where would Langchain run?</p>
<p>So for example running:</p>
<pre><code>%pip install --no-build-isolation --force-reinstall \
    "boto3>=1.28.57" \
    "awscli>=1.29.57" \
    "botocore>=1.31.57"

%pip install --quiet langchain==0.0.309
</code></pre>
<p>How and where would you type this to run?</p>
<p>After that, the vector database would just be queried via AWS Bedrock (which handles a lot of the underlying stuff). So I guess my other questions are about getting data into the vector database initially:</p>
<p>Do I need AWS SageMaker? I see in setting up SageMaker that you can specify EC2 or some compute power.
Is there a simple way to work with this Example without SageMaker, or is SageMaker the easiest to set up and utilise?</p>
<p>Or is the guide linked here enough?</p>
<p>Essentially, to work with the Example I linked, what should my setup be?</p>
|
<python><amazon-web-services><machine-learning><boto3><aws-cli>
|
2023-12-03 22:03:30
| 0
| 463
|
chai86
|
77,596,284
| 17,800,932
|
How do I document a type alias defined using `type` in Python?
|
<p>Python 3.12 has the <a href="https://docs.python.org/3/reference/simple_stmts.html#type" rel="noreferrer"><code>type</code></a> statement. How do I document the type alias properly?</p>
<p>I tried:</p>
<pre><code>type Number = int | float
"""Represents a scalar number that is either an integer or float"""
</code></pre>
<p>But this doesn't seem to associate the docstring with the alias. I tried putting the docstring after the <code>=</code> and before the alias definition, but that generated an error.</p>
|
<python><python-typing><docstring><type-alias><python-3.12>
|
2023-12-03 21:45:27
| 1
| 908
|
bmitc
|
77,596,283
| 15,804,190
|
python regex failing to find all matches
|
<p>My regex is failing to capture all groups when I run in python, and I'm at a loss why...</p>
<p>I'm trying to pull out the sequences of numbers.</p>
<pre class="lang-py prettyprint-override"><code>import re
my_string = '467..114..'
re.search(r'(?:(\d+)(?:\.*))*',my_string).groups()
# outputs ('114',)
</code></pre>
<p>I expect this to get the two groups, '467', and '114'.</p>
<p>To explain the expression (or what I'm thinking for it):</p>
<ol>
<li>I want to capture a series of digits, needs to have at least one but can have many - <code>(\d+)</code></li>
<li>This will be followed by zero or more periods/dots, which I don't want to capture - <code>(?:\.*)</code></li>
<li>This pattern of digits followed by dots could repeat, so the two above are wrapped together in another non-capturing group that can repeat 0-or-more times.</li>
</ol>
<p>I've gotten things to pull out either the first or second set of digits, but struggling to get both at once...</p>
<p>I do know I could use <code>re.findall()</code> to just get the numbers, but I want the span and I want to know why it isn't working...</p>
<p>Edit:</p>
<p>I've found that <code>re.finditer()</code> does indeed return Match objects, not just the list of strings like <code>re.findall()</code>. That seems like a reasonable way for me to actually go, but still curious about how to get it to work with search, which I feel like should be possible...</p>
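<p>For reference, the <code>re.finditer</code> route mentioned in the edit does yield full match objects, so both the text and the span are available per number:</p>

```python
import re

my_string = '467..114..'
matches = [(m.group(), m.span()) for m in re.finditer(r'\d+', my_string)]
print(matches)  # [('467', (0, 3)), ('114', (5, 8))]
```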
|
<python><regex><regex-group>
|
2023-12-03 21:45:27
| 2
| 3,163
|
scotscotmcc
|
77,596,091
| 4,980,705
|
Run df.merge, df.apply in batches for big dataframe
|
<p>I have a dataframe with around 500k rows and need to run several <code>df.merge</code>, <code>df.apply</code> and requests to the Google Maps API. After applying the changes on the columns I want to write to a <code>.csv</code> file.</p>
<p>I've tried to run the code several times but it crashes all the time and takes several hours/days. I believe it is because I'm making too many requests in a short time to the Google Maps API.</p>
<p>I've tested the code up to 1000 lines and it works.</p>
<p>How do I batch the code to run, let's say, 100 rows at a time, and write to the same <code>.csv</code>?</p>
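<p>A sketch of the chunked pattern (column names and the per-chunk transformation are made up; the real merges, <code>apply</code> calls and Google Maps requests would go where the comment is). Writing with <code>mode="a"</code> and a header only on the first chunk appends each batch to the same file, so a crash loses at most one batch:</p>

```python
import os

import pandas as pd

df = pd.DataFrame({"address": [f"addr {i}" for i in range(500)]})  # stand-in
out_path = "out.csv"
chunk_size = 100

if os.path.exists(out_path):
    os.remove(out_path)  # start fresh; appends would otherwise accumulate

for start in range(0, len(df), chunk_size):
    chunk = df.iloc[start:start + chunk_size].copy()
    # stand-in for the real merges / apply / Google Maps requests
    chunk["geocoded"] = chunk["address"].str.upper()
    chunk.to_csv(out_path, mode="a", header=(start == 0), index=False)

print(len(pd.read_csv(out_path)))  # 500
```

Adding a `time.sleep` between batches would also throttle the API requests if rate limiting is the cause of the crashes.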
|
<python><pandas><dataframe>
|
2023-12-03 20:38:35
| 1
| 717
|
peetman
|
77,595,969
| 4,348,400
|
How to provide a type hint for parameters whose arguments should be ordered and hashable?
|
<p>I want a type hint which indicates that the arguments are ordered and hashable.</p>
<pre class="lang-py prettyprint-override"><code>def foo(bar: OrderedHashable) -> None:
    ...
</code></pre>
<p>Here is my work in progress.</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Hashable
from typing import TypeVar

OrderedHashable = TypeVar('OrderedHashable', bound=Hashable)


def foo(bar: OrderedHashable) -> None:
    ...
</code></pre>
<p>So far this still says nothing about ordered except in the name of the type <code>OrderedHashable</code>. How do I write such a type so that it really indicates that such instances should have ordered dunder methods such as <code>__gt__</code> or <code>__lt__</code>.</p>
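<p>One direction to explore, as a sketch rather than a canonical recipe: a <code>typing.Protocol</code> that requires both <code>__hash__</code> and <code>__lt__</code> gives a structural bound covering "ordered and hashable":</p>

```python
from abc import abstractmethod
from typing import Any, Protocol, TypeVar


class OrderedHashable(Protocol):
    @abstractmethod
    def __hash__(self) -> int: ...

    @abstractmethod
    def __lt__(self, other: Any) -> bool: ...


T = TypeVar("T", bound=OrderedHashable)


def foo(bar: T) -> None:
    ...


foo(3)       # int is hashable and ordered, so a checker accepts it
foo("text")  # str likewise
```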
|
<python><python-typing>
|
2023-12-03 20:00:35
| 1
| 1,394
|
Galen
|
77,595,878
| 6,245,473
|
How to add price column to dataframe based on date column already present in yahooquery?
|
<p>The following code is being used to create the output below it:</p>
<pre><code>from yahooquery import Ticker
# Fetching data for AAPL
aapl = Ticker('AAPL')
types = ['asOfDate', 'TangibleBookValue', 'ShareIssued']
financial_data = aapl.get_financial_data(types, trailing=False)
# Dropping specific columns
columns_to_exclude = ['periodType', 'currencyCode']
financial_data.drop(columns=columns_to_exclude, inplace=True)
print(financial_data)
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/LYyo2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LYyo2.png" alt="enter image description here" /></a></p>
<p>I would like to add an additional column from the <a href="https://yahooquery.dpguthrie.com/guide/ticker/historical/" rel="nofollow noreferrer">history module</a> that grabs the <em>adjclose</em> price based on the corresponding asOfDate. So the output should be:</p>
<p><a href="https://i.sstatic.net/x3dZn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x3dZn.png" alt="enter image description here" /></a></p>
<p>Below is sample code that grabs price history data with the output below it:</p>
<pre><code>tickers = Ticker('aapl', asynchronous=True)
# Default period = ytd, interval = 1d
df = tickers.history(start='2019-01-01', end='2023-12-31')
df.head()
</code></pre>
<p><a href="https://i.sstatic.net/PClWy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PClWy.png" alt="enter image description here" /></a></p>
|
<python><pandas><dataframe><stock><yfinance>
|
2023-12-03 19:33:58
| 1
| 311
|
HTMLHelpMe
|
77,595,820
| 1,473,517
|
How to define types for a numba dict
|
<p>I want to convert a python dict to a numba dict for speed reasons. Here is my non-working MWE:</p>
<pre><code>import numba
import numpy as np

mat = np.random.normal(size=(3, 3))
pyth_dict = {"A": 2, "B": mat}


@numba.njit()
def numbaize(pyth_dict):
    numba_dict = {}
    for key in pyth_dict.keys():
        numba_dict[key] = pyth_dict[key]
    return numba_dict


numba_dict = numbaize(pyth_dict)
</code></pre>
<p>This doesn't work as I haven't specified the types.</p>
<p>My keys are strings and one of the keys has a corresponding integer value and the other a 2d numpy array as its value.</p>
<p>I should add that once the dict is made, it will never be modified. It is just used as a lookup table. So the types of the keys, their names and the types of the values are all fixed and known in advance.</p>
<p>How can I specify the types correctly?</p>
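<p>One caveat that may matter here: as far as I know, <code>numba.typed.Dict</code> requires a single value type, so a dict mixing an integer with a 2-D array cannot be typed directly. Since the keys are fixed and known in advance, a namedtuple (which <code>@njit</code> functions accept as an argument) is one way around it; a sketch:</p>

```python
from collections import namedtuple

import numpy as np

# Fields play the role of the fixed dict keys; each keeps its own type.
Lookup = namedtuple("Lookup", ["A", "B"])
table = Lookup(A=2, B=np.random.normal(size=(3, 3)))

print(table.A, table.B.shape)  # 2 (3, 3)
```

Field access (`table.A`) then replaces the string-key lookup inside the jitted code.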
|
<python><numba>
|
2023-12-03 19:19:42
| 0
| 21,513
|
Simd
|
77,595,758
| 3,861,775
|
Efficient copying of an ensemble in JAX
|
<p>I have an ensemble of models and want to assign the same parameters to each of the models. Both the models' parameters as well as the new parameters have the same underlying structure. Currently I use the following approach that uses a for-loop.</p>
<pre><code>import jax
import jax.numpy as jnp

model1 = [
    [jnp.asarray([1]), jnp.asarray([2, 3])],
    [jnp.asarray([4]), jnp.asarray([5, 6])],
]
model2 = [
    [jnp.asarray([2]), jnp.asarray([3, 4])],
    [jnp.asarray([5]), jnp.asarray([6, 7])],
]
models = [model1, model2]

params = [
    [jnp.asarray([3]), jnp.asarray([4, 5])],
    [jnp.asarray([6]), jnp.asarray([7, 8])],
]

models = [jax.tree_map(jnp.copy, params) for _ in range(len(models))]
</code></pre>
<p>Is there a more efficient way in JAX to assign the parameters from <code>params</code> to each model in <code>models</code>?</p>
|
<python><jax>
|
2023-12-03 19:00:01
| 1
| 3,656
|
Gilfoyle
|
77,595,614
| 1,798,704
|
Equivalent of C# or Python even rounding in JavaScript
|
<p>There is a discrepancy between how C# or Python rounds a decimal number and how Javascript handles it. I've already found this answer <a href="https://stackoverflow.com/questions/3108986/gaussian-bankers-rounding-in-javascript#comment135873096_49080858">banker-rounding</a>, but this solution does not work in my case and causes different results between server and client.</p>
<p>Suppose I have a variable <code>number</code> that its value is from <code>1.36500000000001</code> to <code>1.36500000000009</code>.</p>
<p>In C#:</p>
<pre class="lang-cs prettyprint-override"><code>// 1.36500000000001m
Math.Round(decimal_number, 2, MidpointRounding.ToEven)
// 1.37
</code></pre>
<p>In Python:</p>
<pre class="lang-py prettyprint-override"><code>Decimal(str(number)).quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN)
# 1.37
</code></pre>
<p>But in JavaScript, based on one of the functions from <a href="https://stackoverflow.com/questions/3108986/gaussian-bankers-rounding-in-javascript#comment135873096_49080858">banker-rounding</a>:</p>
<pre class="lang-js prettyprint-override"><code>function bankersRound(n, d=2) {
    var x = n * Math.pow(10, d);
    var r = Math.round(x);
    var br = Math.abs(x) % 1 === 0.5 ? (r % 2 === 0 ? r : r-1) : r;
    return br / Math.pow(10, d);
}

bankersRound(number, 2)
// 1.36
</code></pre>
<p>Has the community found a solution for Javascript? Is there any other implementation in Javascript?</p>
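<p>An observation rather than a full answer: the C# and Python versions round the exact decimal value, while the JS function first multiplies a binary float by 100, and that product is generally not exactly representable, so the midpoint test misfires. The decimal-side behavior in Python:</p>

```python
from decimal import Decimal, ROUND_HALF_EVEN

number = 1.36500000000001

# The binary product the JS code rounds; printing it shows it is not an
# exact decimal midpoint:
print(repr(number * 100))

# Rounding the decimal string instead, as the Python/C# versions do:
result = Decimal(str(number)).quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN)
print(result)  # 1.37
```

A JavaScript port would therefore need to work on the decimal string representation (or a decimal library) rather than on <code>n * 100</code>.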
|
<javascript><python><c#>
|
2023-12-03 18:19:38
| 1
| 589
|
Mohammad
|
77,595,297
| 451,878
|
Bidirectional many-to-many relation (on same table) with SQLAlchemy, Python and Pydantic (FastAPI)
|
<p>I wonder if it's possible to have a many-to-many relation <em>on the same table</em> without an extra table (with IDs)?</p>
<p>I basically have this one:</p>
<pre><code>from sqlalchemy import Column, Integer, String, ForeignKey, Table, text
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
import sqlalchemy
from sqlalchemy.orm import sessionmaker, relationship
import uuid
from typing import List
from sqlalchemy.dialects.postgresql import UUID

engine = sqlalchemy.create_engine("sqlite:///:memory:")
session = sessionmaker(bind=engine)()


class Base(DeclarativeBase):
    pass


association_table = Table(
    "association_table_user_to_prev_user",
    Base.metadata,
    Column("user_id", Integer, ForeignKey("user.id")),
    Column("prev_user_id", Integer, ForeignKey("user.prev_user_id")))


class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column(primary_key=True)
    username = Column(String)
    prev_user_id = mapped_column(ForeignKey('user.id'))
    prev_user: Mapped[List["User"]] = relationship(secondary=association_table, back_populates="prev_user")


user_a = User(username="Andrew")
print(f"user A={user_a}")
</code></pre>
<p>So, can I have the same relation, but in a "many to many" way? That is, <strong>without</strong> an extra table like prev_table(id, prev_id).</p>
<p>Can I create an invisible association_table ?</p>
<p>Thanks a lot</p>
<p>F.</p>
|
<python><sqlalchemy><pydantic><alembic>
|
2023-12-03 17:00:57
| 1
| 1,481
|
James
|
77,595,276
| 5,801,127
|
Bash script to execute python program if its not running
|
<p>I have the following bash script to check if my python script miner_nbeats.py is currently running and, if not, re-run it, because sometimes my python script gets killed due to an out-of-memory error. The bash script below works and executes the program at the beginning. However, I noticed that it's not really re-running the python script after it gets killed.</p>
<pre><code>PATH=/opt/conda/bin:/opt/conda/condabin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
while true; do
if /bin/pgrep -f "miner_nbeats.py" | grep -v $$ >/dev/null; then
echo "script running"
else
echo "script not running"
tmux new-session \; send-keys "source activate python310 && cd /home/putsncalls23/directory && python miner_nbeats.py” Enter
fi
sleep 300
done
</code></pre>
<p>So I checked the htop program afterwards when <code>miner_nbeats.py</code> had been killed, but the bash script isn't re-executing the python script since it is failing the first part of the if statement. Moreover, when I check <code>pgrep -f miner_nbeats.py</code>, it returns me the pid that's associated with the bash script (or the tmux terminal?)</p>
<p>Hence, I believe that the system is detecting the associated pid from running the bash script.</p>
<p>As an example, when I first run the bash script, it would help me start a new tmux window with the command entered, however, eventually my python script would get killed in the tmux window, and the bash script fails to re-execute the python script.</p>
<p>When I enter the <code>pgrep -f miner_nbeats.py</code> it would show a pid of 14826, which is the pid associated with the bash script in htop.</p>
<p>Also the reason I am using the tmux command is because I would like to have the python script running in the background even if my ssh session disconnects from the terminal to the linux server.</p>
<p>I am relatively new to bash scripting and would appreciate any help. Thank you</p>
<p><strong>-------EDIT-----------</strong></p>
<p>Thank you for the recommendation on systemd, this is something very new to me and after reading up on systemd, I figured to use something like this:</p>
<pre><code>[Unit]
Description=Mining service for nbeats
After=network.target
[Service]
Type=simple
User=putsncalls23
WorkingDirectory=/home/putsncalls23/directory
ExecStart=/opt/conda/envs/python310/bin/python miner_nbeats.py
Restart=always
RestartSec=300
[Install]
WantedBy=default.target
</code></pre>
<p>Would really appreciate if you could help verify, and it is very helpful for me to learn this new tool since I was stuck in the bash scripting world.</p>
|
<python><linux><bash><pid>
|
2023-12-03 16:54:02
| 1
| 1,011
|
PutsandCalls
|
77,595,180
| 1,682,470
|
List all files containing a string between two specific strings (not on the same line)
|
<p>I'd like to recursively find all <code>.md</code> files of the current directory that contain the “Narrow No-Break Space” <code>U+202F</code> Unicode character between the two strings <code>\begin{document}</code> and <code>\end{document}</code>, possibly (and in fact essentially) not on the same line as <code>U+202F</code>.</p>
<p>A great addition would be to replace such <code>U+202F</code>s by normal spaces.</p>
<p>I already found a way to extract text between <code>\begin{document}</code> and <code>\end{document}</code> with <a href="https://regex101.com/r/92HcFA/1" rel="nofollow noreferrer">a Python regexp</a> (which I find easier for multi-line substitutions). I tried to use it just to list files with this pattern (planning to afterwards chain with <code>grep</code> to at least get the files where this pattern contains <code>U+202F</code>), but my attempt with:</p>
<pre class="lang-py prettyprint-override"><code>import os
import re


def finds_files_whose_contents_match_a_regex(filename):
    textfile = open(filename, 'r')
    filetext = textfile.read()
    textfile.close()
    matches = re.findall("\\begin{document}\s*(.*?)\s*\\end{document}", filetext)


for root, dirs, files in os.walk("."):
    for filename in files:
        if filename.endswith(".md"):
            filename = os.path.join(root, filename)
            finds_files_whose_contents_match_a_regex(filename)
</code></pre>
<p>but I got unintelligible (for me) errors:</p>
<pre><code>Traceback (most recent call last):
  File "./test-bis.py", line 14, in <module>
    finds_files_whose_contents_match_a_regex(filename)
  File "./test-bis.py", line 8, in finds_files_whose_contents_match_a_regex
    matches = re.findall("\\begin{document}\s*(.*?)\s*\\end{document}", filetext)
  File "/usr/lib64/python3.10/re.py", line 240, in findall
    return _compile(pattern, flags).findall(string)
  File "/usr/lib64/python3.10/re.py", line 303, in _compile
    p = sre_compile.compile(pattern, flags)
  File "/usr/lib64/python3.10/sre_compile.py", line 788, in compile
    p = sre_parse.parse(p, flags)
  File "/usr/lib64/python3.10/sre_parse.py", line 955, in parse
    p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
  File "/usr/lib64/python3.10/sre_parse.py", line 444, in _parse_sub
    itemsappend(_parse(source, state, verbose, nested + 1,
  File "/usr/lib64/python3.10/sre_parse.py", line 526, in _parse
    code = _escape(source, this, state)
  File "/usr/lib64/python3.10/sre_parse.py", line 427, in _escape
    raise source.error("bad escape %s" % escape, len(escape))
re.error: bad escape \e at position 27
</code></pre>
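<p>For the traceback itself: in the non-raw pattern string, <code>"\\begin"</code> reaches the regex engine as <code>\b</code> (a word boundary) followed by <code>egin</code>, and <code>\end</code> contains the invalid escape <code>\e</code>, hence <code>bad escape \e at position 27</code>. A sketch with the backslashes doubled inside a raw string (braces escaped too, and <code>re.DOTALL</code> so <code>.</code> crosses newlines):</p>

```python
import re

text = r"\begin{document} hello world \end{document}"

# Doubled backslashes in a raw string match the literal \begin and \end
pattern = re.compile(r"\\begin\{document\}\s*(.*?)\s*\\end\{document\}", re.DOTALL)
print(pattern.search(text).group(1))  # hello world
```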
|
<python><regex>
|
2023-12-03 16:31:18
| 4
| 674
|
Denis Bitouzé
|
77,595,178
| 17,523
|
Setting node size in `pyvis`
|
<p>I use <a href="https://pyvis.readthedocs.io/en/latest/tutorial.html" rel="nofollow noreferrer">pyvis</a> to visualize graphs. Everything works well, except for the fact that the node labels are super small, and I need to perform extreme zoom-in in order to see them (see the screenshot)</p>
<p>Is there a way to make the node labels much larger?</p>
<p>Here's a minimal example:</p>
<pre><code>import networkx as nx
from pyvis.network import Network

GG = nx.gnp_random_graph(100, 0.02, directed=True)
for n in GG.nodes:
    GG.nodes[n]["label"] = f"Node {n:03d}"

g = Network(cdn_resources="in_line", notebook=True)
g.toggle_physics(True)
g.toggle_hide_edges_on_drag(True)
g.force_atlas_2based()
g.from_nx(GG)
g.show("graph.html")
!open graph.html
</code></pre>
<p>And here's the screenshot:
<a href="https://i.sstatic.net/AU1cM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AU1cM.png" alt="a screnshot" /></a></p>
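<p>A sketch of one thing to try, treating the option name as an assumption: vis.js accepts a per-node <code>font</code> option, and <code>from_nx</code> appears to forward node attributes, so setting a <code>font</code> attribute on the networkx graph before handing it to pyvis may enlarge the labels:</p>

```python
import networkx as nx

GG = nx.gnp_random_graph(100, 0.02, directed=True)
for n in GG.nodes:
    GG.nodes[n]["label"] = f"Node {n:03d}"
    GG.nodes[n]["font"] = {"size": 40}  # assumed vis.js font option

print(GG.nodes[0]["font"])  # {'size': 40}
```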
|
<python><visualization><pyvis>
|
2023-12-03 16:30:59
| 1
| 32,057
|
Boris Gorelik
|
77,595,150
| 22,407,544
|
Should I use chunks() or save() to save files in Django?
|
<p>I'm a little confused. When saving an uploaded file in Django should I save it using FileSystemStorage( fs.save() ) or should I use chunks? I just want to ensure that I use chunking to save files that are large so that they don't impact performance. But I'm also reading <a href="https://www.reddit.com/r/django/comments/189mhur/chunks_or_save/" rel="nofollow noreferrer">here</a> that files over 2.5MB are streamed to memory by default so does that make chunking obsolete?</p>
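<p>For intuition only, a stdlib stand-in rather than Django's actual implementation (from a reading of the source, <code>FileSystemStorage.save</code> appears to iterate <code>chunks()</code> internally already): the chunked pattern copies a bounded buffer at a time, so the whole upload never has to sit in memory at once:</p>

```python
import io

def copy_in_chunks(src, dst, chunk_size=64 * 1024):
    # Mirrors the for-chunk-in-uploaded_file.chunks() pattern from the docs
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)

src = io.BytesIO(b"x" * 200_000)  # pretend 200 kB upload
dst = io.BytesIO()
copy_in_chunks(src, dst)
print(len(dst.getvalue()))  # 200000
```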
|
<python><django><file>
|
2023-12-03 16:25:27
| 0
| 359
|
tthheemmaannii
|
77,595,129
| 18,494,333
|
Python importlib.resources.files() throws UnicodeDecodeError: invalid continuation byte
|
<p>I am creating a Python package that needs certain data files in order to work. I've been looking for a way to include these data files with the package installation. I found a way using <code>importlib.resources.files()</code>. However, I'm receiving an error when I try to decode the objects I am returned.</p>
<p>I've created a barebones example package. The package tree is as follows.</p>
<pre><code>.
├── package
│ ├── __init__.py
│ ├── one.ppn
│ └── two.rhn
├── pyproject.toml
└── setup.py
1 directory, 5 files
</code></pre>
<p>The entire point of this example package is to be able to access <code>one.ppn</code> and <code>two.rhn</code>. This is done by identifying absolute file paths and then saving them as constants to be imported. The code is located in <code>__init__.py</code>.</p>
<pre><code># package.__init__.py
from importlib.resources import files
PACKAGE_DATA = files('package')
KEYWORD_PATH = PACKAGE_DATA.joinpath('one.ppn')
print(PACKAGE_DATA)
print(KEYWORD_PATH)
CONTEXT_PATH = PACKAGE_DATA.joinpath('two.rhn').read_text()
</code></pre>
<p>I have created an editable install (<code>pip3 install -e ../Package</code>) in a seperate directory. If I then import <code>package</code>, I receive the following output.</p>
<pre><code>/home/millertime/Desktop/Package/package
/home/millertime/Desktop/Package/package/one.ppn
Traceback (most recent call last):
  File "/home/millertime/Desktop/Test/test.py", line 1, in <module>
    import package
  File "/home/millertime/Desktop/Package/package/__init__.py", line 11, in <module>
    CONTEXT_PATH = PACKAGE_DATA.joinpath('two.rhn').read_text()
  File "/usr/lib/python3.9/pathlib.py", line 1256, in read_text
    return f.read()
  File "/usr/lib/python3.9/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe0 in position 0: invalid continuation byte
</code></pre>
<p>You can see that <code>importlib</code> is functioning perfectly at first, and has correctly identified the absolute file paths to my data files. However, when I try to decode to a <code>str</code> that I can actually use, I receive a <code>UnicodeDecodeError</code>.</p>
<p>I'm not sure if my <code>pyproject.toml</code> file is relevant, so I'm going to include it here. The only part I could see contributing to the problem is <code>[tool.setuptools.package-data]</code>.</p>
<pre><code># pyproject.toml
[build-system]
requires = ["setuptools>=61.0.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "package"
version = "1.0.0"
[tool.setuptools]
packages = [
"package"
]
[tool.setuptools.package-data]
package = [
"one.ppn",
"two.rhn"
]
</code></pre>
<p>I researched other instances of this error and tried a couple of things to solve it.</p>
<ol>
<li>I attempted to create my own decoding method using a <code>with</code> statement and the <code>read_bytes()</code> method of the object, but received the same error.</li>
<li>I saw that many of the errors were related to the encoder, and thought that maybe I was using the wrong one (<code>utf-8</code>). I installed <code>chardet</code> to tell me what kind I should use, and received another error relating to being unable to decode due to an "invalid continuation byte".</li>
</ol>
<p>It seems to me that this is an internal problem with <code>importlib</code>. I don't see how it could be related to my data file types, given it's just a string representing a file path, not the actual data of the file.</p>
<p>I am currently using Python 3.9.2 on a Raspberry Pi 4. Thanks in advance.</p>
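<p>The <code>.ppn</code>/<code>.rhn</code> files appear to be binary model files, which suggests the failure is not in <code>importlib</code> at all: <code>read_text()</code> tries to UTF-8-decode binary data (note the <code>0xe0</code> byte in the traceback), while <code>read_bytes()</code>, or simply keeping the path, works. A reproduction with a throwaway file:</p>

```python
import pathlib
import tempfile

p = pathlib.Path(tempfile.mkdtemp()) / "two.rhn"
p.write_bytes(b"\xe0\x01\x02")  # starts with the byte from the traceback

print(p.read_bytes())  # fine: returns the raw bytes
try:
    p.read_text()      # UTF-8 decode of binary data
except UnicodeDecodeError as err:
    print(type(err).__name__)  # UnicodeDecodeError
```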
|
<python><python-unicode><python-importlib><data-files>
|
2023-12-03 16:21:53
| 0
| 387
|
MillerTime
|
77,594,973
| 1,497,139
|
Handling UnknownTimezoneWarning in Python's dateutil for Various Timezone Abbreviations
|
<p>I'm encountering multiple UnknownTimezoneWarning errors when parsing dates with Python's dateutil.parser. The warnings are for various timezone abbreviations like UT, PDT, EDT, EST, PST, CEDT, EET, EEST, CES, and MET. The warning suggests using the tzinfos argument for timezone-aware datetime parsing. How can I effectively address these warnings? Is there a comprehensive tzinfos dictionary available, or should I manually map these timezones? Additionally, why does this issue arise even though I'm using libraries like pendulum, which depends on pytz?</p>
<p>I tried to handle these warnings by creating a tzinfos dictionary, but it's incomplete and doesn't cover all cases. Here's what I've done so far in Python:</p>
<pre class="lang-py prettyprint-override"><code>import pendulum
# Dictionary to map timezone abbreviations to their UTC offsets
tzinfos = {
"UT": 0, "UTC": 0, "GMT": 0, # Universal Time Coordinated
"EST": -5*3600, "EDT": -4*3600, # Eastern Time
"CST": -6*3600, "CDT": -5*3600, # Central Time
"MST": -7*3600, "MDT": -6*3600, # Mountain Time
"PST": -8*3600, "PDT": -7*3600, # Pacific Time
"HST": -10*3600, "AKST": -9*3600, "AKDT": -8*3600, # Hawaii and Alaska Time
"CEDT": 2*3600, "EET": 2*3600, "EEST": 3*3600, # Central European and Eastern European Time
"CES": 1*3600, "MET": 1*3600 # Central European Summer Time and Middle European Time
}
def convert_dates_to_iso_with_pendulum(data):
for item in data:
# Parse the date using Pendulum with the tzinfos dictionary
parsed_date = pendulum.parse(item["date"], strict=False, tzinfos=tzinfos)
# Convert the date to ISO 8601 format
item["date"] = parsed_date.to_iso8601_string()
return data
</code></pre>
<p>This code attempts to map the known timezone abbreviations to their respective UTC offsets. I am looking for advice on improving this approach to cover more cases or handle it more effectively.</p>
<p>Is, for example, any of</p>
<ul>
<li><a href="https://stackoverflow.com/a/54629675/1497139">https://stackoverflow.com/a/54629675/1497139</a></li>
<li><a href="https://stackoverflow.com/a/4766400/1497139">https://stackoverflow.com/a/4766400/1497139</a>
comprehensive?</li>
</ul>
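<p>For what it's worth, the <code>tzinfos</code> mapping of second offsets is a <code>dateutil</code> feature (pendulum's own parser does not document such an argument), and <code>dateutil.parser.parse</code> accepts it directly. A small sketch, assuming <code>python-dateutil</code> is installed:</p>

```python
from datetime import timedelta
from dateutil import parser

# Map abbreviations to UTC offsets in seconds; dateutil accepts plain ints here.
# Caution: abbreviations are ambiguous (e.g. CST exists in several regions),
# so any such table is a policy choice, not a universal mapping.
tzinfos = {"EST": -5 * 3600, "PDT": -7 * 3600, "CEST": 2 * 3600}

dt = parser.parse("Mon, 04 Dec 2023 10:00:00 EST", tzinfos=tzinfos)
print(dt.isoformat())  # timezone-aware datetime, no UnknownTimezoneWarning
assert dt.utcoffset() == timedelta(hours=-5)
```

<p>Passing the same dictionary to <code>pendulum.parse</code>, as in the question's snippet, would presumably not have this effect, since pendulum does not delegate abbreviation resolution to dateutil.</p>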
|
<python><datetime><pendulum>
|
2023-12-03 15:39:24
| 1
| 15,707
|
Wolfgang Fahl
|
77,594,717
| 1,818,443
|
WeasyPrint Python module not found
|
<p>I installed WeasyPrint on MacOS Ventura 13.4.1 using Homebrew. It seems to have installed properly,</p>
<pre><code>which weasyprint
\usr\local\weasyprint
</code></pre>
<p>When I try to load the Python module however, I get the error,</p>
<pre><code>python3 weasytest.py
...
ModuleNotFoundError: No module named 'weasyprint'
</code></pre>
<p>Some clues:</p>
<ul>
<li>I'm using Python 3.11.6</li>
<li>Python is looking for modules in the following directories,</li>
</ul>
<pre><code>python3 -m site
sys.path = [ '/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python311.zip',
'/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11',
'/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/lib-dynload',
'/usr/local/lib/python3.11/site-packages',
'/usr/local/opt/python@3.11/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages',
]
</code></pre>
<ul>
<li>Homebrew looks to have placed the python module in <code>/usr/local/Cellar/weasyprint/60.1_1/libexec/lib/python3.12/site-packages/weasyprint</code></li>
</ul>
<p>I can copy the module and a few of the dependencies (e.g. cssselect2, html5lib, pyphen) to one of the places that Python checks, but then eventually I get a missing module that isn't in the Cellar directory (specifically fontTools). I've tried upgrading brew, as well as running brew doctor and cleanup. Is there a missing link somewhere that I need to create?</p>
|
<python><python-3.x><macos><macos-ventura><weasyprint>
|
2023-12-03 14:30:09
| 0
| 1,272
|
Lukas Bystricky
|
77,594,625
| 10,089,200
|
How can I fix my perceptron to recognize numbers?
|
<p>My exercise is to train 10 perceptrons to recognize numbers (0 - 9). Each perceptron should learn a single digit. As training data, I've created 30 images (5x7 bmp). 3 variants per digit.</p>
<p>I've got a perceptron class:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def unit_step_func(x):
return np.where(x > 0, 1, 0)
def sigmoid(x):
return 1 / (1 + np.exp(-x))
class Perceptron:
def __init__(self, learning_rate=0.01, n_iters=1000):
self.lr = learning_rate
self.n_iters = n_iters
self.activation_func = unit_step_func
self.weights = None
self.bias = None
#self.best_weights = None
#self.best_bias = None
#self.best_error = float('inf')
def fit(self, X, y):
n_samples, n_features = X.shape
self.weights = np.zeros(n_features)
self.bias = 0
#self.best_weights = self.weights.copy()
#self.best_bias = self.bias
for _ in range(self.n_iters):
for x_i, y_i in zip(X, y):
linear_output = np.dot(x_i, self.weights) + self.bias
y_predicted = self.activation_func(linear_output)
update = self.lr * (y_i - y_predicted)
self.weights += update * x_i
self.bias += update
#current_error = np.mean(np.abs(y - self.predict(X)))
#if current_error < self.best_error:
# self.best_weights = self.weights.copy()
# self.best_bias = self.bias
# self.best_error = current_error
def predict(self, X):
linear_output = np.dot(X, self.weights) + self.bias
y_predicted = self.activation_func(linear_output)
return y_predicted
</code></pre>
<p>I've tried both the <code>unit_step_func</code> and <code>sigmoid</code> activation functions, and a pocketing algorithm, to see if there's any difference. I'm a noob, so I'm not sure if this is even implemented correctly.</p>
<p>This is how I train these perceptrons:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from PIL import Image
from Perceptron import Perceptron
import os
def load_images_from_folder(folder, digit):
images = []
labels = []
for filename in os.listdir(folder):
img = Image.open(os.path.join(folder, filename))
if img is not None:
images.append(np.array(img).flatten())
label = 1 if filename.startswith(f"{digit}_") else 0
labels.append(label)
return np.array(images), np.array(labels)
digits_to_recognize = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
perceptrons = []
for digit_to_recognize in digits_to_recognize:
X, y = load_images_from_folder("data", digit_to_recognize)
p = Perceptron()
p.fit(X, y)
perceptrons.append(p)
</code></pre>
<p>in short:</p>
<p>training data filename is in the format <code>digit</code>_<code>variant</code>. As I said before, each digit has 3 variants,</p>
<p>so for digit <code>0</code> it is <code>0_0</code>, <code>0_1</code>, <code>0_2</code>,</p>
<p>for digit <code>1</code> it's: <code>1_0</code>, <code>1_1</code>, <code>1_2</code>,</p>
<p>and so on...</p>
<p><code>load_images_from_folder</code> function loads 30 images and checks the name. If <code>digit</code> part of the name is the same as <code>digit</code> input then it appends <code>1</code> in labels, so that the perceptron knows that it's the desired digit.</p>
<p>I know that it'd be better to load these images once and save them in some array of <code>tuples</code>, for example, but I don't care about the performance right now (I won't care later either).</p>
<p>for digit <code>0</code> labels array is <code>[1, 1, 1, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]</code></p>
<p>for digit <code>1</code> labels array is <code>[0,0,0, 1, 1, 1, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]</code></p>
<p>and so on...</p>
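<p>The one-vs-rest label vectors described above can be built directly from the digit prefix of each filename; a small numpy sketch (the filenames here are hypothetical stand-ins for the 30 training images):</p>

```python
import numpy as np

# Hypothetical training-file names in load order: 3 variants per digit.
filenames = [f"{d}_{v}.bmp" for d in range(10) for v in range(3)]
digits = np.array([int(name.split("_")[0]) for name in filenames])

def labels_for(target_digit):
    # 1 where the file belongs to the target digit, else 0 (one-vs-rest).
    return (digits == target_digit).astype(int)

print(labels_for(1)[:9])  # -> [0 0 0 1 1 1 0 0 0]
```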
<p>then I train 10 perceptrons using this data.</p>
<p>This exercise also requires some kind of GUI that allows me to draw a number. I've chosen <code>pygame</code>; I could have used <code>PyQt</code>, it doesn't really matter.</p>
<p>This is the code; you can skip it, it's not that important (except for the <code>on_rec_button</code> function, which I'll come back to):</p>
<pre class="lang-py prettyprint-override"><code>import pygame
import sys
pygame.init()
cols, rows = 5, 7
square_size = 50
width, height = cols * square_size, (rows + 2) * square_size
screen = pygame.display.set_mode((width, height))
pygame.display.set_caption("Zad1")
rec_button_color = (0, 255, 0)
rec_button_rect = pygame.Rect(0, rows * square_size, width, square_size)
clear_button_color = (255, 255, 0)
clear_button_rect = pygame.Rect(0, (rows + 1) * square_size + 1, width, square_size)
mouse_pressed = False
drawing_matrix = np.zeros((rows, cols), dtype=int)
def color_square(x, y):
col = x // square_size
row = y // square_size
if 0 <= row < rows and 0 <= col < cols:
drawing_matrix[row, col] = 1
def draw_button(color, rect):
pygame.draw.rect(screen, color, rect)
def on_rec_button():
np_array_representation = drawing_matrix.flatten()
for digit_to_recognize in digits_to_recognize:
p = perceptrons[digit_to_recognize]
predicted_number = p.predict(np_array_representation)
if predicted_number == digit_to_recognize:
print(f"Image has been recognized as number {digit_to_recognize}")
def on_clear_button():
drawing_matrix.fill(0)
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
elif event.type == pygame.MOUSEBUTTONDOWN and event.button == 3:
mouse_pressed = True
elif event.type == pygame.MOUSEBUTTONUP and event.button == 3:
mouse_pressed = False
elif event.type == pygame.MOUSEMOTION:
mouse_x, mouse_y = event.pos
if mouse_pressed:
color_square(mouse_x, mouse_y)
elif event.type == pygame.MOUSEBUTTONDOWN and event.button == 1:
if rec_button_rect.collidepoint(event.pos):
on_rec_button()
if clear_button_rect.collidepoint(event.pos):
on_clear_button()
for i in range(rows):
for j in range(cols):
if drawing_matrix[i, j] == 1:
pygame.draw.rect(screen, (255, 0, 0), (j * square_size, i * square_size, square_size, square_size))
else:
pygame.draw.rect(screen, (0, 0, 0), (j * square_size, i * square_size, square_size, square_size))
draw_button(rec_button_color, rec_button_rect)
draw_button(clear_button_color, clear_button_rect)
pygame.display.flip()
</code></pre>
<p>so, now that I run the app, draw the digit <code>3</code>, and click the green button that runs <code>on_rec_button</code> function, I expected to see <code>Image has been recognized as number 3</code>, but I get <code>Image has been recognized as number 0</code>.</p>
<p>This is what I draw:</p>
<p><a href="https://i.sstatic.net/gnWxU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gnWxU.png" alt="enter image description here" /></a></p>
<p>These are training data:</p>
<p><a href="https://i.sstatic.net/mHNtp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mHNtp.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/1oPnH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1oPnH.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/JqFbY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JqFbY.png" alt="enter image description here" /></a></p>
<p>These are very small because of the resolution <code>5x7</code> that was required in the exercise.</p>
<p>When I draw the digit <code>1</code> then I get 2 results:
<code>Image has been recognized as number 0</code>
<code>Image has been recognized as number 1</code></p>
<p><a href="https://i.sstatic.net/EIiXo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EIiXo.png" alt="enter image description here" /></a></p>
<p>What should I do to make it work the way I want? I don't expect this to work 100% accurately, but I guess it could be better.</p>
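<p>One observation (an assumption about a possible fix, not something the exercise specifies): because <code>predict</code> thresholds each score to 0/1, several perceptrons can fire at once and ties are unresolvable. A common alternative is to compare the raw linear scores across all ten classifiers and pick the argmax, sketched here with made-up trained parameters:</p>

```python
import numpy as np

# Hypothetical trained parameters for 3 one-vs-rest classifiers on 4 features.
weights = np.array([[0.2, -0.1, 0.4, 0.0],
                    [0.5, 0.3, -0.2, 0.1],
                    [-0.3, 0.2, 0.1, 0.6]])
biases = np.array([0.1, -0.2, 0.05])

def recognize(x):
    # Raw linear scores, one per classifier; no thresholding.
    scores = weights @ x + biases
    return int(np.argmax(scores))  # index of the most confident classifier

x = np.array([1.0, 0.0, 1.0, 0.0])
print(recognize(x))
```

<p>With the perceptron class above, the same idea would mean ranking <code>np.dot(X, p.weights) + p.bias</code> across the ten trained perceptrons in <code>on_rec_button</code> instead of checking each thresholded output separately.</p>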
|
<python><numpy><machine-learning><neural-network><perceptron>
|
2023-12-03 14:03:49
| 3
| 693
|
Shout
|
77,594,590
| 1,481,986
|
Streamlit controlling sidebar title
|
<p>I have a streamlit app with multiple pages which I run from <code>minimal_example.py</code></p>
<p>This is the file content -</p>
<pre><code>import streamlit as st
def main():
page_config = st.set_page_config(
page_title="my demo",
)
if __name__ == "__main__":
main()
</code></pre>
<p>The pages folder contains a few other files. It automatically creates a sidebar where the title is the name of the file (see attached image). Is there a way to control it?</p>
<p><a href="https://i.sstatic.net/Gv67m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gv67m.png" alt="enter image description here" /></a></p>
|
<python><streamlit>
|
2023-12-03 13:55:19
| 2
| 6,241
|
Tom Ron
|
77,594,060
| 20,920,790
|
Can't import Swifter for python
|
<p>I have Pandas 2.1.3 and Swifter 1.4.0 installed.
When I try to <code>import swifter</code> I get:</p>
<pre><code> AttributeError: module 'pandas.core.strings' has no attribute 'StringMethods'
</code></pre>
<p>What is going wrong?
Is this version of Pandas not compatible with Swifter?</p>
|
<python><pandas><swifter>
|
2023-12-03 11:24:51
| 1
| 402
|
John Doe
|
77,593,997
| 10,623,444
|
Efficiently compute item colaborating filtering similarity using numba, polars and numpy
|
<p><strong>Disclaimer</strong> The question is part of a thread including those two SO questions (<a href="https://stackoverflow.com/q/77567521/10623444">q1</a>, <a href="https://stackoverflow.com/a/77558612/10623444">q2</a>)</p>
<p>The data resemble movie ratings from the ratings.csv <a href="https://drive.google.com/file/d/1C5AkLrXjUu-BlhgzZdIIqZ-paavDuYkt/view?usp=sharing" rel="nofollow noreferrer">file</a> (~891mb) of ml-latest dataset.</p>
<p>Once I read the csv file with <code>polars</code> library like:</p>
<pre class="lang-py prettyprint-override"><code>movie_ratings = pl.read_csv(os.path.join(application_path + data_directory, "ratings.csv"))
</code></pre>
<p><em>Let's assume we want to compute the similarity between the movies seen by user=1 (for example, 62 movies) and the rest of the movies in the dataset. FYI, the dataset has ~83,000 movies, so for each other_movie (82,938 of them) we compute a similarity with each movie seen by user 1 (62 movies). The complexity is 62x82,938 iterations.</em></p>
<p>For this example the benchmarks reported are only for 400/82,938 <code>other_movies</code></p>
<p>To do so, I create two <code>polars</code> dataframes. One dataframe with the <code>other_movies</code> (~82,938 row) and a second dataframe with only the movies seen by the user (62 rows).</p>
<pre class="lang-py prettyprint-override"><code>user_ratings = movie_ratings.filter(pl.col("userId")==input_id) #input_id = 1 (data related to user 1)
user_rated_movies = list(user_ratings.select(pl.col("movieId")).to_numpy().ravel()) #movies seen by user1
potential_movies_to_recommend = list(
movie_ratings.select("movieId").filter( ~(pl.col("movieId").is_in(user_rated_movies)) ).unique().sort("movieId").to_numpy().ravel()
)
items_metadata = (
movie_ratings.filter(
~pl.col("movieId").is_in(user_rated_movies) #& pl.col("movieId").is_in(potential_movie_recommendations[:total_unseen_movies])
)
.group_by("movieId").agg(
users_seen_movie=pl.col("userId").unique(),
user_ratings=pl.col("rating")
)
)
target_items_metadata = (
movie_ratings.filter(
pl.col("movieId").is_in(user_rated_movies) #& pl.col("movieId").is_in(potential_movie_recommendations[:total_unseen_movies])
).group_by("movieId").agg(
users_seen_movie=pl.col("userId").unique(),
user_ratings=pl.col("rating")
)
)
</code></pre>
<p>The result are two <code>polars</code> dataframes with rows(movies) and columns(users seen the movies & the ratings from each user).</p>
<p><a href="https://i.sstatic.net/J7c4h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J7c4h.png" alt="enter image description here" /></a></p>
<p>The first dataframe contains only <code>other_movies</code> that we can potentially recommend to user1, since he/she has not seen them.</p>
<p>The second dataframe contains only the movies seen by the user.</p>
<p>Next my approach is to iterate over each row of the first dataframe by applying a UDF function.</p>
<pre class="lang-py prettyprint-override"><code>item_metadata_similarity = (
items_metadata.with_columns(
similarity_score=pl.struct(pl.all()).map_elements(
lambda row: item_compute_similarity_scoring_V2(row, similarity_metric, target_items_metadata),
return_dtype=pl.List(pl.List(pl.Float64)),
strategy="threading"
)
)
)
</code></pre>
<p>, where <code>item_compute_similarity_scoring_V2</code> is defined as:</p>
<pre class="lang-py prettyprint-override"><code>def item_compute_similarity_scoring_V2(
row,
target_movies_metadata:pl.DataFrame
):
users_item1 = np.asarray(row["users_seen_movie"])
ratings_item1 = np.asarray(row["user_ratings"])
computed_similarity: list=[]
for row2 in target_movies_metadata.iter_rows(named=True): #iter over each row from the second dataframe with the movies seen by the user.
users_item2=np.asarray(row2["users_seen_movie"])
ratings_item2=np.asarray(row2["user_ratings"])
r1, r2 = item_ratings(users_item1, ratings_item1, users_item2, ratings_item2)
if r1.shape[0] != 0 and r2.shape[0] != 0:
similarity_score = compute_similarity_score(r1, r2)
if similarity_score > 0.0: #filter out negative or zero similarity scores
computed_similarity.append((row2["movieId"], similarity_score))
most_similar_pairs = sorted(computed_similarity, key=lambda x: x[1], reverse=True)
return most_similar_pairs
</code></pre>
<p>, <code>item_ratings</code> & <code>compute_similarity_score</code> defined as</p>
<pre class="lang-py prettyprint-override"><code>def item_ratings(u1:np.ndarray, r1:np.ndarray, u2:np.ndarray, r2:np.ndarray) -> (np.ndarray, np.ndarray):
common_elements, indices1, indices2 = np.intersect1d(u1, u2, return_indices=True)
sr1 = r1[indices1]
sr2 = r2[indices2]
assert len(sr1)==len(sr2), "ratings don't have same lengths"
return sr1, sr2
@jit(nopython=True, parallel=True)
def compute_similarity_score(array1:np.ndarray, array2:np.ndarray) -> float:
assert(array1.shape[0] == array2.shape[0])
a1a2 = 0
a1a1 = 0
a2a2 = 0
for i in range(array1.shape[0]):
a1a2 += array1[i]*array2[i]
a1a1 += array1[i]*array1[i]
a2a2 += array2[i]*array2[i]
cos_theta = 1.0
if a1a1!=0 and a2a2!=0:
cos_theta = float(a1a2/np.sqrt(a1a1*a2a2))
return cos_theta
</code></pre>
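<p>The numba kernel above computes plain cosine similarity; a numpy-only equivalent is handy for sanity-checking it without JIT compilation (this is a verification sketch, not a proposed replacement):</p>

```python
import numpy as np

def cosine_np(a, b):
    # Same quantity as the loop-based kernel: a.b / (|a| * |b|),
    # returning 1.0 when either norm is zero, as in the original.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return 1.0
    return float(np.dot(a, b) / denom)

r1 = np.array([4.0, 3.0, 5.0])
r2 = np.array([4.0, 2.0, 5.0])
print(cosine_np(r1, r2))
```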
<p>The function basically, iterates over each row of the second dataframe and for each row computes the similarity between <code>other_movie</code> and the movie seen by the user. Thus, for
400 movies we do 400*62 iterations, generating 62 similarity scores per <code>other_movie</code>.</p>
<p>The result from each computation is an array with schema <code>[[1, 0.20], [110, 0.34]]...</code> (length 62 pairs per <code>other_movie</code>)</p>
<p><a href="https://i.sstatic.net/lkPBK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lkPBK.png" alt="enter image description here" /></a></p>
<blockquote>
<p>Benchmarks for 400 movies</p>
</blockquote>
<ol>
<li>INFO - Item-Item: Computed similarity scores for 400 movies in: 0:05:49.887032</li>
<li>~2 minutes.</li>
<li>~5gb of RAM used.</li>
</ol>
<p>I would like to identify how I can improve the computations by using native <code>polars</code> commands or by exploiting the <code>numba</code> framework for parallelism.</p>
<h4>Update - 2nd approach using <code>to_numpy()</code> operations without <code>iter_rows()</code> and <code>map_elements()</code></h4>
<pre class="lang-py prettyprint-override"><code>user_ratings = movie_ratings.filter(pl.col("userId")==input_id) #input_id = 1
user_rated_movies = user_ratings.select(pl.col("movieId")).to_numpy().ravel()
potential_movies_to_recommend = list(
movie_ratings.select("movieId").filter( ~(pl.col("movieId").is_in(user_rated_movies)) ).unique().sort("movieId").to_numpy().ravel()
)
items_metadata = (
movie_ratings.filter(
~pl.col("movieId").is_in(user_rated_movies)
)
)
# print(items_metadata.head(5))
target_items_metadata = (
movie_ratings.filter(
pl.col("movieId").is_in(user_rated_movies)
)
)
# print(target_items_metadata.head(5))
</code></pre>
<p>With this second approach <code>items_metadata</code> and <code>target_items_metadata</code> are two large polars tables.</p>
<p>Then my next step is to save both tables into <code>numpy.ndarrays</code> with the <code>to_numpy()</code> command.</p>
<pre class="lang-py prettyprint-override"><code>items_metadata_array = items_metadata.to_numpy()
target_items_metadata_array = target_items_metadata.to_numpy()
</code></pre>
<pre class="lang-py prettyprint-override"><code>computed_similarity_scores:dict = {}
for i, other_movie in enumerate(potential_movies_to_recommend[:400]): #take the first 400 unseen movies by user 1
mask = items_metadata_array[:, 1] == other_movie
other_movies_chunk = items_metadata_array[mask]
u1 = other_movies_chunk[:,0].astype(np.int32)
r1 = other_movies_chunk[:,2].astype(np.float32)
computed_similarity: list=[]
for i, user_movie in enumerate(user_rated_movies):
print(user_movie)
mask = target_items_metadata_array[:, 1] == user_movie
target_movie_chunk = target_items_metadata_array[mask]
u2 = target_movie_chunk[:,0].astype(np.int32)
r2 = target_movie_chunk[:,2].astype(np.float32)
common_r1, common_r2 = item_ratings(u1, r1, u2, r2)
if common_r1.shape[0] != 0 and common_r2.shape[0] != 0:
similarity_score = compute_similarity_score(common_r1, common_r2)
if similarity_score > 0.0:
computed_similarity.append((user_movie, similarity_score))
most_similar_pairs = sorted(computed_similarity, key=lambda x: x[1], reverse=True)[:k_similar_user]
computed_similarity_scores[str(other_movie)] = most_similar_pairs
</code></pre>
<blockquote>
<p>Benchmarks of the second approach (8.50 minutes > 6 minutes of the first approach)</p>
</blockquote>
<ul>
<li>Item-Item: Computed similarity scores for 400 movies in: 0:08:50.537102</li>
</ul>
<h4>Update - 3rd approach using <code>iter_rows()</code> operations</h4>
<p>In my third approach, I get better results than with the previous two methods, obtaining results in approximately 2 minutes for user 1 and 400 movies.</p>
<pre><code>items_metadata = (
movie_ratings.filter(
~pl.col("movieId").is_in(user_rated_movies)
)
.group_by("movieId").agg(
users_seen_movie=pl.col("userId").unique(),
user_ratings=pl.col("rating")
)
)
target_items_metadata = (
movie_ratings.filter(
pl.col("movieId").is_in(user_rated_movies)
).group_by("movieId").agg(
users_seen_movie=pl.col("userId").unique(),
user_ratings=pl.col("rating")
)
)
</code></pre>
<p><code>items_metadata</code> is the metadata of <code>other_movies</code> not seen by the user 1.</p>
<p><code>target_items_metadata</code> the metadata of the movies rated by user 1. By the term metadata I refer to the two aggregated <code>.agg()</code> columns, <code>users_seen_movie</code> and <code>user_ratings</code></p>
<p>Finally, I create two for loops using <code>iter_rows()</code> method from <code>polars</code></p>
<pre class="lang-py prettyprint-override"><code>def cosine_similarity_score(array1:np.ndarray, array2:np.ndarray) -> float:
assert(array1.shape[0] == array2.shape[0])
a1a2 = 0
a1a1 = 0
a2a2 = 0
for i in range(array1.shape[0]):
a1a2 += array1[i]*array2[i]
a1a1 += array1[i]*array1[i]
a2a2 += array2[i]*array2[i]
# cos_theta = 1.0
cos_theta = 0.0
if a1a1!=0 and a2a2!=0:
cos_theta = float(a1a2/np.sqrt(a1a1*a2a2))
return max(0.0, cos_theta)
for row1 in item_metadata.iter_rows():
computed_similarity: list= []
for row2 in target_items_metadata.iter_rows():
r1, r2 = item_ratings(np.asarray(row1[1]), np.asarray(row1[2]), np.asarray(row2[1]), np.asarray(row2[2]))
if r1.shape[0]!=0 and r2.shape[0]!=0:
similarity_score = cosine_similarity_score(r1, r2)
computed_similarity.append((row2[0], similarity_score if similarity_score > 0 else 0))
computed_similarity_scores[str(row1[0])] = sorted(computed_similarity, key=lambda x: x[1], reverse=True)[:k_similar_user]
</code></pre>
<blockquote>
<p>Benchmarks for 400 movies</p>
</blockquote>
<ol>
<li>INFO - Item-Item: Computed similarity scores for 400 movies in: 0:01:50</li>
<li>~2 minutes.</li>
<li>~4.5gb of RAM used.</li>
</ol>
|
<python><dataframe><numpy><numba><python-polars>
|
2023-12-03 11:09:06
| 1
| 1,589
|
NikSp
|
77,593,852
| 125,673
|
fpdf2 write - how to get new lines printed instead of '\n'?
|
<p>I want to print some text in my pdf as follows;</p>
<pre><code>text = self.dict_text_for_pdf[category]
self.pdf.write(_c.text_line_height, text)
</code></pre>
<p>Inside the text I have some line break characters, but instead of a new line being printed, I actually get "\n" printed. According to the documentation this should work: <a href="https://py-pdf.github.io/fpdf2/LineBreaks.html" rel="nofollow noreferrer">https://py-pdf.github.io/fpdf2/LineBreaks.html</a>. How do I fix this?</p>
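<p>A common cause of this symptom (an assumption on my part, since the source of <code>dict_text_for_pdf</code> isn't shown) is that the text contains the two-character escape sequence backslash-n rather than real newline characters — e.g. when it was loaded from JSON or written as a raw string. Normalizing before calling <code>write</code> would then restore the line breaks:</p>

```python
# Hypothetical text containing literal backslash-n sequences, not newlines.
text = "line one\\nline two\\nline three"

# Replace the two-character escape "\n" with an actual newline character.
normalized = text.replace("\\n", "\n")

print(normalized)  # fpdf2's write() honors real "\n" characters
```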
|
<python><fpdf>
|
2023-12-03 10:21:34
| 0
| 10,241
|
arame3333
|
77,593,648
| 14,037,975
|
How to split 100 million (18.5 GB) csv rows into 50k each small excel files
|
<p>I am trying to split a large CSV file into Excel files of 50k rows each. I tried using pandas, but unfortunately it crashes, so I switched to polars. It works, but the problem is that it only creates a single file of 100k rows. How can I generate 50k-row Excel files from a 100-million-row (18.5 GB) CSV file?</p>
<p>First I passed <code>next_batches(50000)</code>, but it still gave me only one Excel file. So I tried to <code>print</code> the <code>len</code> of the batches and it gave me one as output.</p>
<pre><code>
import os
import polars as pl
def split_csv_to_excel(csv_file_path, output_dir, batch_size=50000):
reader = pl.read_csv_batched(csv_file_path,batch_size=batch_size)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
batches = reader.next_batches(1)
for i, df in enumerate(batches):
output_file_path = os.path.join(output_dir, f"batch_{i}.xlsx")
df.write_excel(output_file_path)
split_csv_to_excel("inputpath","outputpath")
</code></pre>
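<p>Independent of the Excel-writing library, the batching loop has to keep asking the reader for batches until it is exhausted — the single <code>next_batches(1)</code> call above returns at most one batch. The loop pattern, sketched with the standard-library <code>csv</code> module (the polars analogue would repeatedly call <code>reader.next_batches(n)</code> until it stops returning batches):</p>

```python
import csv
import itertools
import os
import tempfile

def split_csv(csv_path, output_dir, batch_size=50000):
    os.makedirs(output_dir, exist_ok=True)
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        for i in itertools.count():
            # Pull the next chunk of up to batch_size rows from the stream.
            rows = list(itertools.islice(reader, batch_size))
            if not rows:  # reader exhausted: stop, don't write an empty file
                break
            out = os.path.join(output_dir, f"batch_{i}.csv")
            with open(out, "w", newline="") as g:
                w = csv.writer(g)
                w.writerow(header)
                w.writerows(rows)

# Tiny demo: 10 data rows split into chunks of 4 -> 3 output files.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "in.csv")
with open(src, "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["id", "value"])
    w.writerows([[i, i * i] for i in range(10)])
split_csv(src, os.path.join(tmp, "out"), batch_size=4)
print(sorted(os.listdir(os.path.join(tmp, "out"))))
```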
|
<python><python-polars>
|
2023-12-03 09:14:00
| 3
| 321
|
Indratej Reddy
|
77,593,276
| 5,165,649
|
pyspark to aggregate a column based on the column
|
<p>I have a dataframe like below
<a href="https://i.sstatic.net/7StDL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7StDL.png" alt="enter image description here" /></a></p>
<p>and I want like below <a href="https://i.sstatic.net/Td6Dp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Td6Dp.png" alt="enter image description here" /></a></p>
<p>where, based on the value mentioned in <code>var_col_name</code>, that column's values need to be checked: the rows should be aggregated only when those values are the same, and not when they differ.<br />
The code below aggregates irrespective of the values of the column mentioned in <code>var_col_name</code>.</p>
<pre><code> from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window
# Create a SparkSession
spark = SparkSession.builder.getOrCreate()
# Create the sample data
data = [
('a', 1, 'text1', 'col1', 'texts1', 'scj', 'dsiul'),
('a', 11, 'text1', 'col1', 'texts1', 'ftjjjjjjj', 'jhkl'),
('b', 2, 'bigger text', 'next_col', 'gfsajh', 'xcj', 'biggest text'),
('b', 21, 'bigger text', 'next_col', 'fghm', 'hjjkl', 'ghjljkk'),
('c', 3, 'soon', 'column', 'szjcj', 'sooner', 'sjdsk')
]
# Create the DataFrame
df = spark.createDataFrame(data, ['name', 'id', 'txt', 'var_col_name', 'col1', 'column', 'next_col'])
# Group by 'name' and the column specified in 'var_col_name', and collect the 'id' values into a set
grouped_df = df.groupby('name', 'var_col_name').agg(F.collect_set('id').alias('id_all'))
# Create a window partitioned by 'name' and the column specified in 'var_col_name'
window = Window.partitionBy('name', 'var_col_name')
# Join the original DataFrame with the grouped DataFrame and select the necessary columns
df_agg = df.join(grouped_df, ['name', 'var_col_name'], 'left').select(df['*'],
F.collect_set('id').over(window).alias('id_all'))
df_agg.show(truncate=False)
</code></pre>
<p>but the code generates aggregations of <code>id</code> for name <code>b</code> even though the value "biggest text" is not equal to "ghjljkk" (it should only aggregate if the text is the same)</p>
<p><a href="https://i.sstatic.net/kGSJY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kGSJY.png" alt="enter image description here" /></a></p>
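<p>The intended grouping — aggregating <code>id</code> only over rows that share both <code>name</code> and the value of the column named in <code>var_col_name</code> — can be checked in plain Python on the sample data before wiring it into Spark (a prototype of the logic, not a PySpark solution):</p>

```python
from collections import defaultdict

# Sample rows from the question.
cols = ["name", "id", "txt", "var_col_name", "col1", "column", "next_col"]
data = [
    ("a", 1, "text1", "col1", "texts1", "scj", "dsiul"),
    ("a", 11, "text1", "col1", "texts1", "ftjjjjjjj", "jhkl"),
    ("b", 2, "bigger text", "next_col", "gfsajh", "xcj", "biggest text"),
    ("b", 21, "bigger text", "next_col", "fghm", "hjjkl", "ghjljkk"),
    ("c", 3, "soon", "column", "szjcj", "sooner", "sjdsk"),
]
rows = [dict(zip(cols, r)) for r in data]

groups = defaultdict(set)
for row in rows:
    # Group key: name plus the *value* of the dynamically chosen column.
    key = (row["name"], row[row["var_col_name"]])
    groups[key].add(row["id"])

id_all = [sorted(groups[(r["name"], r[r["var_col_name"]])]) for r in rows]
print(id_all)  # the two 'b' rows stay separate, as desired
```

<p>In PySpark, the same key could presumably be built by materializing the looked-up value as a column (e.g. a chained <code>F.when</code> over the known column names) and including it in the <code>partitionBy</code>/<code>groupby</code> alongside <code>name</code>.</p>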
|
<python><pyspark>
|
2023-12-03 06:45:56
| 1
| 487
|
viji
|