| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,162,810
| 15,986,016
|
Discord Bot Basic Example Fails due to PrivilegedIntentsRequired(exc.shard_id) Error (Python)
|
<p>I know I'm doing something wrong on my end, because I copied the example code Discord gives us (the first code block on <a href="https://discordpy.readthedocs.io/en/stable/quickstart.html" rel="nofollow noreferrer">discord.py's quickstart page</a>), but I can't seem to find what's wrong.</p>
<p>The code I'm running (yes, I know an import shouldn't run code directly; I just wanted to test the most basic code, and I can't get it to work):</p>
<pre class="lang-py prettyprint-override"><code>import discord
from config import __BOTTOKEN__ as __TOKEN__
intents = discord.Intents.default()
intents.message_content = True # this is where the error happens even when the bot has admin privs
# if I remove this line, I won't be able to read messages from other users
# in more complex versions of this code
client = discord.Client(intents=intents)
@client.event
async def on_ready():
    print(f'We have logged in as {client.user}')

@client.event
async def on_message(message):
    if message.author == client.user:
        return
    if message.content.startswith('$hello'):
        await message.channel.send('Hello!')
client.run(__TOKEN__)
</code></pre>
<p>The error I'm getting:</p>
<pre><code>2024-03-14 18:34:51 INFO discord.client logging in using static token
Traceback (most recent call last):
File "/home/runner/Discord-bot/main.py", line 9, in <module>
main()
File "/home/runner/Discord-bot/main.py", line 4, in main
import example
File "/home/runner/Discord-bot/example.py", line 21, in <module>
client.run(__TOKEN__)
File "/home/runner/Discord-bot/.pythonlibs/lib/python3.10/site-packages/discord/client.py", line 860, in run
asyncio.run(runner())
File "/nix/store/8w6mm5q1n7i7cs1933im5vkbgvjlglfn-python3-3.10.13/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/nix/store/8w6mm5q1n7i7cs1933im5vkbgvjlglfn-python3-3.10.13/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/home/runner/Discord-bot/.pythonlibs/lib/python3.10/site-packages/discord/client.py", line 849, in runner
await self.start(token, reconnect=reconnect)
File "/home/runner/Discord-bot/.pythonlibs/lib/python3.10/site-packages/discord/client.py", line 778, in start
await self.connect(reconnect=reconnect)
File "/home/runner/Discord-bot/.pythonlibs/lib/python3.10/site-packages/discord/client.py", line 704, in connect
raise PrivilegedIntentsRequired(exc.shard_id) from None
discord.errors.PrivilegedIntentsRequired: Shard ID None is requesting privileged intents that have not been explicitly enabled in the developer portal. It is recommended to go to https://discord.com/developers/applications/ and explicitly enable the privileged intents within your application's page. If this is not possible, then consider disabling the privileged intents instead.
</code></pre>
<p>At first I thought maybe the privileges of my bot in <a href="https://discord.com/developers/applications" rel="nofollow noreferrer">discord's dev portal</a> needed to be higher, so I set it to admin because that would grant all privileges ... but I get the same message.</p>
<p>Any insights or suggestions would be much appreciated!</p>
<ul>
<li>Yes, I tried resetting all links and tokens after changing privileges, in case new privileges require a new token</li>
</ul>
|
<python><async-await><discord>
|
2024-03-14 18:53:21
| 1
| 438
|
Jacob Glik
|
78,162,685
| 10,105,454
|
Python requests not returning proper data
|
<p>I am working on a scraping project and have run into an issue. Using python requests I am able to log in to the website and grab the tokens to be sent afterwards; the problem comes after that. I send the proper tokens in a POST to grab the contents of a table (which should appear in response to the POST), but the POST returns an HTML page without the data.</p>
<p><a href="https://i.sstatic.net/VLAXq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VLAXq.png" alt="enter image description here" /></a>
Image of the response from the post on the web.</p>
<p><a href="https://i.sstatic.net/TVh3b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TVh3b.png" alt="enter image description here" /></a>
image of the response through python requests.</p>
<p>What could be causing this, and how can I harvest the data? I am doing the appropriate POST and supplying the cookies and credentials.</p>
<p>Edit with code:</p>
<pre class="lang-py prettyprint-override"><code>import requests
from bs4 import BeautifulSoup

def login():
    credentials = file_manager.file_manager.get_credentials()
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0',
        'Accept': 'text/html,application/xhtml+xml,application/xml;'
    }
    session = requests.session()
    session.headers.update(headers)

    # Fetch the login page and pull out the anti-forgery token
    r = session.get(credentials[0])
    soup = BeautifulSoup(r.text, 'html.parser')
    verificationToken = soup.find(attrs={'name': '__RequestVerificationToken'}).attrs['value']

    parameters = {
        "__RequestVerificationToken": str(verificationToken),
        "versao": "6",
        "Email": credentials[1],
        "Password": credentials[2],
        "ReturnUrl": ""
    }
    # Login forms usually expect the fields in the request body:
    # data=parameters (form-encoded), not params=parameters (query string)
    r = session.post(url, data=parameters, headers=headers)
    r = session.get(url, headers=headers)

    parameters = {
        parameter: parameter,
    }
    # The POST that should return the table data; pass the payload here too
    r = session.post(url, data=parameters, headers=headers)
    with open('response.html', 'w', encoding="utf-8") as f:
        f.write(r.text)
</code></pre>
|
<python><python-requests>
|
2024-03-14 18:26:18
| 1
| 312
|
Flari
|
78,162,654
| 14,534,480
|
Calculate the influence of various factors on the final change
|
<p>I have a dataframe with data about prices of premises. Example:</p>
<pre><code>df = pd.DataFrame({'num': [1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7],
'date': ['2024-01-01', '2024-01-01', '2024-01-01', '2024-01-01', '2024-01-01', '2024-01-01', '2024-01-01', '2024-01-02', '2024-01-02', '2024-01-02', '2024-01-02', '2024-01-02', '2024-01-02', '2024-01-02'],
'area': [100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100],
'price': [10000000, 10000000, 10000000, 10000000, 10000000, 12000000, 12000000, 11080000, 11090000, 10000000, 10000000, 10000000, 12000000, 12000000],
'price_yest': [10000000, 10000000, 10000000, 10000000, 10000000, 12000000, 12000000, 10000000, 10000000, 10000000, 10000000, 10000000, 12000000, 12000000],
'status': ['cur', 'cur', 'cur', 'cur', 'cur', 'nf_sale', 'nf_sale', 'cur', 'cur', 'cur', 'sold', 'sold', 'new', 'new']})
</code></pre>
<p>I need to calculate the price per square meter change from one day to the previous and the influence of each factor on this change.</p>
<p>Influencing factors:</p>
<ol>
<li>Price change by premises</li>
<li>Add new premises</li>
<li>Sale</li>
</ol>
<p>Rules for the calculation:</p>
<ul>
<li>To calculate the average price per square meter, only current (<code>cur</code>)
and new premises (<code>new</code>) are taken every day.</li>
<li>Premises that were
not for sale (<code>nf_sale</code>) can be added and then become (<code>new</code>)</li>
<li>Sold (<code>sold</code>) premises are not taken into account in the price calculation</li>
</ul>
<p>Desired result:</p>
<pre><code> date area price avg avg/avg_yest by_price_change by_new_premises by_sale
0 2024-01-01 500 50000000 100000.0 0.0000 0.00 0.00 0.0000
1 2024-01-02 500 56170000 112340.0 0.1234 0.05 0.04 0.0334
</code></pre>
<p>I will be grateful for any help!</p>
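<p>A minimal pandas sketch of the first step (the daily average itself, not the factor attribution), assuming the column names of the example dataframe above; the two averages it produces match the desired result:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'num': [1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7],
    'date': ['2024-01-01'] * 7 + ['2024-01-02'] * 7,
    'area': [100] * 14,
    'price': [10000000, 10000000, 10000000, 10000000, 10000000, 12000000, 12000000,
              11080000, 11090000, 10000000, 10000000, 10000000, 12000000, 12000000],
    'status': ['cur', 'cur', 'cur', 'cur', 'cur', 'nf_sale', 'nf_sale',
               'cur', 'cur', 'cur', 'sold', 'sold', 'new', 'new']})

# Only current (cur) and new (new) premises enter the average price per square meter
mask = df['status'].isin(['cur', 'new'])
daily = df[mask].groupby('date')[['price', 'area']].sum()
daily['avg'] = daily['price'] / daily['area']
daily['avg/avg_yest'] = daily['avg'].pct_change().fillna(0)
```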
|
<python><pandas><math>
|
2024-03-14 18:20:48
| 1
| 377
|
Kirill Kondratenko
|
78,162,635
| 10,322,652
|
Conventional commit type for library version bump
|
<p>I'm building a library in python and wanted to know which <code>type</code> I should use, following the <a href="https://www.conventionalcommits.org/en/v1.0.0/" rel="nofollow noreferrer">Conventional Commits</a> convention, whenever I bump the version of my library in <code>pyproject.toml</code>.</p>
<p>This is the diff summary:</p>
<pre class="lang-bash prettyprint-override"><code>diff --git a/pyproject.toml b/pyproject.toml
index a0353d2..d44b8e8 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "library-name-here"
-version = "0.3.1"
+version = "0.3.2"
description = "..."
</code></pre>
<p>I'm tempted to use something like this, but I'm not sure if it's OK.</p>
<pre><code>chore: Bump version to 0.3.2
</code></pre>
|
<python><python-poetry><conventional-commits>
|
2024-03-14 18:16:38
| 1
| 1,536
|
Cheche
|
78,162,619
| 2,986,153
|
How to prevent html code appearing above pandas tables in quarto gfm reports
|
<p>When I display a pandas table in a quarto gfm report, I see html code above the table when I view the report on github. How can I prevent this?</p>
<p><a href="https://i.sstatic.net/7jFxe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7jFxe.png" alt="enter image description here" /></a></p>
<h1>Code from Qmd file that created the above report</h1>
<pre><code>---
title: 'HTML junk'
author: 'Joseph Powers'
date: 2024-03-14
format: gfm
---
```{python}
import numpy as np
import pandas as pd
```
# Notice the html code above the table
```{python}
N = int(5e3)
TRIALS = int(1)
pd.DataFrame(
{
"A": np.random.binomial(TRIALS, 0.65, N),
"B": np.random.binomial(TRIALS, 0.65, N),
"C": np.random.binomial(TRIALS, 0.65, N),
"D": np.random.binomial(TRIALS, 0.67, N)
}
)
```
</code></pre>
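<p>One hedged workaround sketch: gfm keeps the DataFrame's rich HTML repr, so rendering the frame as plain text (for example with <code>to_string()</code>) avoids the stray HTML, at the cost of the styled table:</p>

```python
import numpy as np
import pandas as pd

N = int(5e3)
TRIALS = int(1)
df = pd.DataFrame(
    {
        "A": np.random.binomial(TRIALS, 0.65, N),
        "B": np.random.binomial(TRIALS, 0.65, N),
    }
)

text = df.head().to_string()  # plain-text table, no HTML wrapper in the output
print(text)
```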
|
<python><pandas><quarto>
|
2024-03-14 18:12:39
| 1
| 3,836
|
Joe
|
78,162,601
| 1,030,287
|
matplotlib 3D plot axes aspect ratio
|
<p>I am plotting some 3D data, and the plot always comes out as a 3D cube. That makes sense, but I'd like one axis to be visually longer than the others (a different aspect ratio), giving a 3D box rather than a cube.</p>
<p>For example, the plot below is fully functional - how can I make the y-axis longer relative to the others (visually)?</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
N = 1000
bm = pd.DataFrame(
index=pd.bdate_range(start='2012-01-01', periods=N, freq='B'),
data={x: np.random.randn(N) for x in range(1, 11)}
)
# Simulate some zeros
bm = pd.DataFrame(index=bm.index, columns=bm.columns, data=np.where(np.abs(bm.values) < 0.02, 0, bm.values))
# Set zeros to Nan so that I don't plot them
bm = bm.replace({0: np.nan})
# unstack dataframe
flat_bm = bm.reset_index(drop=True).unstack() # DROP DATES AND REPLACE WITH INT
x = flat_bm.index.get_level_values(0)
y = flat_bm.index.get_level_values(1)
z = flat_bm.values
# Set up plot
fig = plt.figure(figsize = (15,10))
ax = plt.axes(projection ='3d')
# plotting
ax.scatter(x, y, z, '.', c=flat_bm.values, cmap='Reds')
</code></pre>
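<p>One way to do this (a sketch, assuming matplotlib 3.3 or newer): <code>Axes3D.set_box_aspect</code> accepts an <code>(x, y, z)</code> tuple, so doubling the y entry stretches that axis visually:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so the sketch runs without a display
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(15, 10))
ax = fig.add_subplot(projection='3d')
ax.scatter([0, 1, 2], [0, 5, 10], [0, 1, 2], c='r')
ax.set_box_aspect((1, 2, 1))  # make the y-axis twice as long as x and z
```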
|
<python><matplotlib>
|
2024-03-14 18:09:49
| 1
| 12,343
|
s5s
|
78,162,484
| 10,710,625
|
Merge two data frames based on values of columns
|
<p>I have these two data frames</p>
<pre><code>data1 = {'ID': [385908, 385909, 757947, 757946],
'A': ['LH', 'LH', 'LH', 'LH'],
'F': [646, 646, 646, 646],
'Orig': ['FRA', 'FRA', 'NQZ', 'NQZ'],
'Dest': ['NQZ', 'NQZ', 'ALA', 'ALA'],
'DayU': [1, 6, 1, 6],
'DepU': [650, 650, 1130, 1130]}
data2 = {'A': ['LH', 'LH', 'LH', 'LH', 'LH', 'LH'],
'F': [646, 646, 646, 646, 646, 646],
'Orig2': ['FRA', 'FRA', 'FRA', 'FRA', 'NQZ', 'NQZ'],
'Dest2': ['ALA', 'ALA', 'NQZ', 'NQZ', 'ALA', 'ALA'],
'DayL': [1, 6, 1, 6, 2, 7],
'DepL': [710, 710, 710, 710, 50, 50],
'IDL1': [385908, 385909, 385908, 385909, 757947, 757946],
'IDL2': [757947, 757946, -1, -1, -1, -1]}
df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
</code></pre>
<p>I would like to add columns to df1 from df2 based on the IDL1, IDL2 and ID values.</p>
<p>Where ID=IDL1 or ID=IDL2, I would like to add Orig2 and Dest2 into df1.
This is the expected output (in this example the first two rows match on ID=IDL1 and the last two rows on ID=IDL2):</p>
<pre><code>data3 = {'ID': [385908, 385909, 757947, 757946],
         'A': ['LH', 'LH', 'LH', 'LH'],
         'F': [646, 646, 646, 646],
         'Orig': ['FRA', 'FRA', 'NQZ', 'NQZ'],
         'Dest': ['NQZ', 'NQZ', 'ALA', 'ALA'],
         'DayU': [1, 6, 1, 6],
         'DepU': [650, 650, 1130, 1130],
         'Orig2': ['FRA', 'FRA', 'FRA', 'FRA'],
         'Dest2': ['ALA', 'ALA', 'ALA', 'ALA'],
         'DayL': [1, 6, 1, 6]}
</code></pre>
<p>This is a simplified example; I would like a general solution. Thank you.</p>
<p>Edit: I want to merge df1 and df2 such that an ID row in df1 matches a df2 row where ID equals either IDL1 or IDL2, and the combination of Orig and Dest from df1 differs from the combination of Orig2 and Dest2.
For ID 757947 (Orig=NQZ, Dest=ALA) there are multiple matches: IDL1=757947 with (Orig2=NQZ, Dest2=ALA) and IDL2=757947 with (Orig2=FRA, Dest2=ALA). In this case I keep the match where Orig2+Dest2 is not equal to Orig+Dest.</p>
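<p>A possible sketch of the rule described above (one approach among several): melt the two ID columns of df2 into long form, merge on ID, then drop matches whose Orig2/Dest2 pair equals the row's own Orig/Dest:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'ID': [385908, 385909, 757947, 757946],
                    'A': ['LH'] * 4, 'F': [646] * 4,
                    'Orig': ['FRA', 'FRA', 'NQZ', 'NQZ'],
                    'Dest': ['NQZ', 'NQZ', 'ALA', 'ALA'],
                    'DayU': [1, 6, 1, 6], 'DepU': [650, 650, 1130, 1130]})
df2 = pd.DataFrame({'A': ['LH'] * 6, 'F': [646] * 6,
                    'Orig2': ['FRA', 'FRA', 'FRA', 'FRA', 'NQZ', 'NQZ'],
                    'Dest2': ['ALA', 'ALA', 'NQZ', 'NQZ', 'ALA', 'ALA'],
                    'DayL': [1, 6, 1, 6, 2, 7], 'DepL': [710, 710, 710, 710, 50, 50],
                    'IDL1': [385908, 385909, 385908, 385909, 757947, 757946],
                    'IDL2': [757947, 757946, -1, -1, -1, -1]})

# One row per (IDL1 or IDL2) value, so ID can be matched in a single merge
long = df2.melt(id_vars=['A', 'F', 'Orig2', 'Dest2', 'DayL', 'DepL'],
                value_vars=['IDL1', 'IDL2'], value_name='ID').drop(columns='variable')
long = long[long['ID'] != -1]

merged = df1.merge(long, on=['A', 'F', 'ID'])
# Keep only matches whose route differs from the row's own route
out = merged[(merged['Orig'] + merged['Dest'])
             != (merged['Orig2'] + merged['Dest2'])].reset_index(drop=True)
```

<p>On the example data this yields the four rows of the expected output, with Orig2=FRA, Dest2=ALA and DayL equal to [1, 6, 1, 6].</p>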
|
<python><pandas><dataframe>
|
2024-03-14 17:48:20
| 1
| 739
|
the phoenix
|
78,162,451
| 13,491,504
|
How do you square a Vector in a Python calculation
|
<p>Let's say you have this code with a vector and want to square the vector:</p>
<pre><code>import numpy as np
import sympy as sp
a, b, c, f, g = sp.symbols('a b c f g')
M = np.array([a, b, c])
</code></pre>
<p>Now you have a formula in which you need to square this vector like this:</p>
<pre><code>B = f * M**2 * g
</code></pre>
<p>How do you do this? <code>np.vectorize(sp.simplify)(M**2)</code> doesn't seem to work.</p>
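<p>One possible reading, sketched with sympy's own <code>Matrix</code> type rather than a numpy object array (assuming "square the vector" means the dot product of <code>M</code> with itself, which yields a scalar):</p>

```python
import sympy as sp

a, b, c, f, g = sp.symbols('a b c f g')
M = sp.Matrix([a, b, c])

# M.dot(M) == a**2 + b**2 + c**2, a scalar, so it multiplies cleanly with f and g
B = f * M.dot(M) * g
```

<p>If instead you want the elementwise square, <code>M.multiply_elementwise(M)</code> keeps it a vector.</p>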
|
<python><numpy><math><sympy>
|
2024-03-14 17:42:46
| 2
| 637
|
Mo711
|
78,162,405
| 11,024,270
|
How to use Numba CUDA JIT decorator?
|
<p>I've followed this tutorial to use Numba CUDA JIT decorator: <a href="https://www.youtube.com/watch?v=-lcWV4wkHsk&t=510s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=-lcWV4wkHsk&t=510s</a>.</p>
<p>Here is my Python code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from timeit import default_timer as timer
from numba import cuda, jit
# This function will run on a CPU
def fill_array_with_cpu(a):
    for k in range(100000000):
        a[k] += 1

# This function will run on a CPU with @jit
@jit
def fill_array_with_cpu_jit(a):
    for k in range(100000000):
        a[k] += 1

# This function will run on a GPU
@jit(target_backend='cuda')
def fill_array_with_gpu(a):
    for k in range(100000000):
        a[k] += 1

# Main
a = np.ones(100000000, dtype=np.float64)

for i in range(3):
    start = timer()
    fill_array_with_cpu(a)
    print("On a CPU:", timer() - start)

for i in range(3):
    start = timer()
    fill_array_with_cpu_jit(a)
    print("On a CPU with @jit:", timer() - start)

for i in range(3):
    start = timer()
    fill_array_with_gpu(a)
    print("On a GPU:", timer() - start)
</code></pre>
<p>And here is the prompt output:</p>
<pre><code>On a CPU: 24.228116830999852
On a CPU: 24.90354355699992
On a CPU: 24.277727688999903
On a CPU with @jit: 0.2590671719999591
On a CPU with @jit: 0.09131158500008496
On a CPU with @jit: 0.09054700799993043
On a GPU: 0.13547917200003212
On a GPU: 0.0922475330000907
On a GPU: 0.08995077999998102
</code></pre>
<p>Using the <code>@jit</code> decorator greatly increases the processing speed. However, it is unclear to me whether the <code>@jit(target_backend='cuda')</code> decorator actually makes the function run on the GPU. Its processing times are similar to those of the function with plain <code>@jit</code>, so I suspect <code>@jit(target_backend='cuda')</code> does not use the GPU. In fact, I've tried this code on a machine with no NVIDIA GPU and got the same result, without any warning or error.</p>
<p>How to make it run on my GPU? I have a GeForce GT 730M.</p>
|
<python><gpu><numba>
|
2024-03-14 17:33:03
| 1
| 432
|
TVG
|
78,162,356
| 6,470,174
|
How can I apply image registration to images and their annotations with opencv?
|
<p>I wish to propagate polygon labels from a source image to a target image. The target image is just the source image, but slightly translated. I found <a href="https://www.geeksforgeeks.org/image-registration-using-opencv-python/" rel="nofollow noreferrer">this code snippet</a> that allows me to register a source image to a target image. If you write it as a function, it becomes:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import cv2
def register_images(
    align: np.ndarray,
    reference: np.ndarray,
):
    """
    Registers two RGB images with each other.

    Args:
        align: Image to be aligned.
        reference: Reference image to be used for alignment.

    Returns:
        Registered image and transformation matrix.
    """
    # Convert to grayscale if needed
    _align = align.copy()
    _reference = reference.copy()
    if _align.shape[-1] == 3:
        _align = cv2.cvtColor(_align, cv2.COLOR_RGB2GRAY)
    if _reference.shape[-1] == 3:
        _reference = cv2.cvtColor(_reference, cv2.COLOR_RGB2GRAY)
    height, width = _reference.shape

    # Create an ORB detector with 500 features
    orb_detector = cv2.ORB_create(500)

    # Find the keypoints and descriptors
    # The first arg is the image, second arg is the mask (not required in this case)
    kp1, d1 = orb_detector.detectAndCompute(_align, None)
    kp2, d2 = orb_detector.detectAndCompute(_reference, None)

    # Match features between the two images
    # We create a Brute Force matcher with Hamming distance as measurement mode
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Match the two sets of descriptors
    matches = list(matcher.match(d1, d2))

    # Sort matches by Hamming distance and keep the top 90% of them
    matches.sort(key=lambda x: x.distance)
    matches = matches[:int(len(matches) * 0.9)]
    no_of_matches = len(matches)

    # Define empty matrices of shape no_of_matches * 2
    p1 = np.zeros((no_of_matches, 2))
    p2 = np.zeros((no_of_matches, 2))
    for i in range(len(matches)):
        p1[i, :] = kp1[matches[i].queryIdx].pt
        p2[i, :] = kp2[matches[i].trainIdx].pt

    # Find the homography matrix and use it to transform the colored image wrt the reference
    homography, mask = cv2.findHomography(p1, p2, cv2.RANSAC)
    transformed_img = cv2.warpPerspective(align, homography, (width, height))
    return transformed_img, homography
</code></pre>
<p>Now I can access the transformed image and the homography matrix used for aligning the two images. What I don't understand is how to apply the same transformation to the polygons and bounding boxes used to annotate the image.</p>
<p>In particular, annotations are in COCO format, which means you can access coordinates as follows:</p>
<pre class="lang-py prettyprint-override"><code>x0, y0, width, height = bounding_box
</code></pre>
<p>And annotations are a list of polygon coordinates:</p>
<pre class="lang-py prettyprint-override"><code>segmentations = [poly1, poly2, poly3, ...]  # segmentations are a list of polygons
for poly in segmentations:
    x_coords = poly[0::2]  # x coordinates are the integer values at even indices in the poly list
    y_coords = poly[1::2]  # y coordinates are the integer values at odd indices in the poly list
</code></pre>
<p>Once I access the x and y coordinates, how can I apply the homography matrix?</p>
|
<python><opencv><transformation>
|
2024-03-14 17:23:53
| 1
| 965
|
Gabriele
|
78,162,249
| 1,585,017
|
Null value set to True, but still it violates not-null constraint
|
<p>I am trying to set the sidebar_id value as null but I get an IntegrityError error:</p>
<pre><code>IntegrityError at /admin/qa/howquestion/112/change/
null value in column "sidebar_id" violates not-null constraint
DETAIL: Failing row contains (112, <p>prettyprint a JSON file<br></p>, To pretty print a JSON file in Python, you can use the json m...).
</code></pre>
<p>That's strange because null is set to True already:</p>
<pre><code>sidebar = models.ForeignKey(Sidebar, on_delete=models.SET_NULL, related_name='qa_sidebars',null=True, blank=True)
</code></pre>
<p>And I have applied migrations too.</p>
<p>Please note that the error does not occur in development, where an SQLite3 database is used. It only happens in deployment, which uses PostgreSQL.</p>
<p>I have run out of options. I have searched everywhere and still don't understand this. Can anyone help?</p>
<p><strong>Edit 1:</strong> I just checked migration files to make sure the migration was applied. This file suggests the migration was applied, I <em>think</em>:</p>
<pre><code># Generated by Django 3.2 on 2024-03-14 16:48
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    dependencies = [
        ('marketing', '0001_initial'),
        ('qa', '0022_howquestion_no_code'),
    ]

    operations = [
        migrations.AlterField(
            model_name='howquestion',
            name='sidebar',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='qa_sidebars', to='marketing.sidebar'),
        ),
    ]
</code></pre>
<p><strong>Edit 2:</strong><br />
I ran the <code>showmigrations</code> command and there is an X next to the migration above:</p>
<pre><code>qa
[X] 0023_alter_howquestion_sidebar
</code></pre>
<p>So, that migration has already been applied with migrate.</p>
<p>I also just checked the database table directly with <code>\d qa_howquestion</code> and found that the <code>sidebar_id</code> column's Nullable attribute is still <code>not null</code>. So Django shows that the migration which alters the field to allow nulls has been applied, but the database says otherwise! How so?
[1]: <a href="https://i.sstatic.net/v7UO7.png" rel="nofollow noreferrer">https://i.sstatic.net/v7UO7.png</a></p>
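<p>If the schema really is out of sync with Django's migration state, one hedged option (a sketch only; the table and column names are taken from the error message above, and you should back up the database first) is to drop the constraint manually so the table matches the model:</p>

```sql
-- run in psql against the production database
ALTER TABLE qa_howquestion ALTER COLUMN sidebar_id DROP NOT NULL;
```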
|
<python><django><postgresql>
|
2024-03-14 17:06:34
| 1
| 8,152
|
multigoodverse
|
78,162,230
| 11,628,437
|
Why do python projects have the following structure?
|
<p>I am working on creating my own installable python package using <code>setup.py</code>. While going over different repositories, I find the following structure -</p>
<pre class="lang-none prettyprint-override"><code>abc-def
|-abc_def
|-setup.py
</code></pre>
<p>Here, <code>setup.py</code> has the function <code>setup</code> with the following contents -</p>
<pre class="lang-py prettyprint-override"><code>setup(
    name="abc_def",
)
</code></pre>
<p>My question is: does the parent directory need to share its name with the child directory, just swapping the <code>-</code> for a <code>_</code>? I ask especially since python doesn't accept the <code>-</code> symbol in identifiers. For instance, if I write <code>import abc-def</code> anywhere in the code, I get the following error:</p>
<pre><code>>>> import abc-def
File "<stdin>", line 1
import abc-def
^
SyntaxError: invalid syntax
</code></pre>
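<p>A short sketch of the underlying rule: the distribution name (what you <code>pip install</code>, set via <code>setup(name=...)</code>) may contain hyphens, while the import package (the child directory) must be a valid python identifier, hence the underscore. The two names do not have to match at all; mirroring them with <code>-</code> swapped for <code>_</code> is just a common convention:</p>

```python
# Distribution (PyPI) name vs import package name
dist_name = "abc-def"                      # installable name: hyphens allowed
import_name = dist_name.replace("-", "_")  # importable name: must be an identifier

assert import_name.isidentifier()          # 'abc_def' can follow `import`
assert not dist_name.isidentifier()        # 'abc-def' cannot, hence the SyntaxError
```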
|
<python><python-packaging>
|
2024-03-14 17:02:22
| 2
| 1,851
|
desert_ranger
|
78,162,225
| 3,595,231
|
How to extract the html link from a html page in python?
|
<p>From this python code,</p>
<pre><code>...
resp = logout_session.get(logout_url, headers=headers, verify=False, allow_redirects=False)
soup = BeautifulSoup(resp.content, "html.parser")
print(soup.prettify())
</code></pre>
<p>I was able to make an API call, and the response content is this:</p>
<pre><code><!DOCTYPE html>
<html>
<head>...</head>
<body>
<div class="container">
<div class="title logo" id="header">
<img alt="" id="business-logo-login" src="/customviews/image/business_logo:f0a067275aba3c71c62cffa2f50ac69c/"/>
</div>
<div class="input-group alert alert-success text-center" id="title" role="alert">
Successfully signed out
</div>
<div class="input-group alert text-center">
<a href="/saml-idp/portal/">
Login again
</a>
</div>
<div>
<p>
You will be redirected to https://idpftc.business.com/saml/Gy736KPK3v1aWDPECRZKAn/proxy_logout/ after 5 seconds ...
</p>
<script language="javascript" nonce="">
window.onload = window.setTimeout(function() {
window.location.replace("https://idpftc.business.com/saml/Gy736KPK3v1aWDPECRZKAn/proxy_logout/?SAMLResponse=3VjJkuNIjv2VtKijLJObJIphlWnGfd93Xtoo7vsukvr6ZkRU");}, 5000);
</script>
</div>
</div>
</body>
</html>
</code></pre>
<p>Now I want to extract the html link:</p>
<pre><code>https://idpftc.business.com/saml/Gy736KPK3v1aWDPECRZKAn/proxy_logout/?SAMLResponse=3VjJkuNIjv2VtKijLJObJIphlWnGfd93Xtoo7vsukvr6ZkRU
</code></pre>
<p>from this content. Does anyone know how to do it in python?</p>
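<p>One hedged approach: the URL only exists inside the inline script, so pull the script text with BeautifulSoup and extract the argument of <code>window.location.replace</code> with a regex. The HTML below is a shortened copy of the response above, with the SAMLResponse value truncated:</p>

```python
import re
from bs4 import BeautifulSoup

html = '''<script language="javascript" nonce="">
window.onload = window.setTimeout(function() {
window.location.replace("https://idpftc.business.com/saml/Gy736KPK3v1aWDPECRZKAn/proxy_logout/?SAMLResponse=3VjJ");}, 5000);
</script>'''

soup = BeautifulSoup(html, "html.parser")
script_text = soup.find("script").string  # the inline javascript as text
match = re.search(r'window\.location\.replace\("([^"]+)"\)', script_text)
url = match.group(1)
```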
|
<python><beautifulsoup><urlparse>
|
2024-03-14 17:01:59
| 2
| 765
|
user3595231
|
78,162,180
| 12,040,751
|
Generate triangular matrix of cumulative products efficiently
|
<p>Take a 1D vector, for example <code>[a b c d]</code>.</p>
<p>Then build the following matrix</p>
<pre><code>a 0 0 0
ab b 0 0
abc bc c 0
abcd bcd cd d
</code></pre>
<p>The code I have so far does the job, but it's ugly and has a for loop that should be unnecessary.</p>
<pre><code>import numpy as np
v = np.array([1, 2, 3])
n = len(v)
matrix = np.zeros((n, n))
for i in range(n):
    matrix[i, :i+1] = np.flip(np.cumprod(np.flip(v[:i+1])))
print(matrix)
# [[1. 0. 0.]
# [2. 2. 0.]
# [6. 6. 3.]]
</code></pre>
<p>How can I vectorise it?</p>
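<p>One vectorised sketch (assuming <code>v</code> contains no zeros, since it relies on division): every entry is a ratio of two cumulative products, <code>matrix[i, j] = cp[i] / cp[j-1]</code>, which an outer division plus <code>np.tril</code> produces in one shot:</p>

```python
import numpy as np

v = np.array([1, 2, 3])
cp = np.cumprod(v)                      # [a, ab, abc, ...]
denom = np.concatenate(([1], cp[:-1]))  # product of everything before column j
matrix = np.tril(cp[:, None] / denom[None, :])
```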
|
<python><numpy>
|
2024-03-14 16:55:15
| 4
| 1,569
|
edd313
|
78,162,074
| 525,865
|
iterate over 10 k pages & fetch data, parse: European Volunteering-Services: tiny scraper that collects opportunities from EU-Site
|
<p>I am looking for a public list of volunteering services in Europe. I don't need full addresses, just the name and the website. I am thinking of data (XML, CSV ...) with these fields: name, country, and some additional fields would be nice, with one record per country of presence. <strong>btw:</strong> the european volunteering services are great options for the youth.</p>
<p>Well, I have found a great page that is very comprehensive; I want to gather data on the <strong>european volunteering services</strong> that are hosted on a European site:</p>
<p>see: <a href="https://youth.europa.eu/go-abroad/volunteering/opportunities_en" rel="nofollow noreferrer">https://youth.europa.eu/go-abroad/volunteering/opportunities_en</a></p>
<p>@HedgeHog showed me the right approach and how to find the correct selectors
in this thread: <a href="https://stackoverflow.com/questions/78154925/beatuifulsoup-iterate-over-10-k-pages-fetch-data-parse-european-volunteering">BeatuifulSoup iterate over 10 k pages & fetch data, parse: European Volunteering-Services: a tiny scraper that collects opportunities from EU-Site</a></p>
<pre><code># Extracting relevant data
title = soup.h1.get_text(', ',strip=True)
location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ',strip=True)
start_date,end_date = (e.get_text(strip=True)for e in soup.select('span.extra strong')[-2:])
</code></pre>
<p>but we have got several hundred volunteering opportunities there - which are stored in sites like the following:</p>
<pre><code> https://youth.europa.eu/solidarity/placement/39020_en
https://youth.europa.eu/solidarity/placement/38993_en
https://youth.europa.eu/solidarity/placement/38973_en
https://youth.europa.eu/solidarity/placement/38972_en
https://youth.europa.eu/solidarity/placement/38850_en
https://youth.europa.eu/solidarity/placement/38633_en
</code></pre>
<p><strong>idea:</strong></p>
<p>I think it would be awesome to gather the data - i.e. with a scraper that is based on <code>BS4</code> and <code>requests</code> - parsing the data and subsequently printing the data in a <code>dataframe</code></p>
<p>Well - I think that we could iterate over all the urls:</p>
<pre><code>placement/39020_en
placement/38993_en
placement/38973_en
placement/38850_en
</code></pre>
<p><strong>idea</strong>: I think we can iterate over the IDs from zero to 100,000 to fetch all the results that are stored as placements. But this idea is not backed by code yet; in other words, at the moment I do not have an idea how to iterate over such a great range.</p>
<p>At the moment I think - it is a basic approach to start with this:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
# Function to generate placement URLs based on a range of IDs
def generate_urls(start_id, end_id):
    base_url = "https://youth.europa.eu/solidarity/placement/"
    urls = [base_url + str(id) + "_en" for id in range(start_id, end_id + 1)]
    return urls

# Function to scrape data from a single URL
def scrape_data(url):
    response = requests.get(url)
    if response.status_code == 200:
        soup = BeautifulSoup(response.content, 'html.parser')
        title = soup.h1.get_text(', ', strip=True)
        location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ', strip=True)
        start_date, end_date = (e.get_text(strip=True) for e in soup.select('span.extra strong')[-2:])
        website_tag = soup.find("a", class_="btn__link--website")
        website = website_tag.get("href") if website_tag else None
        return {
            "Title": title,
            "Location": location,
            "Start Date": start_date,
            "End Date": end_date,
            "Website": website,
            "URL": url
        }
    else:
        print(f"Failed to fetch data from {url}. Status code: {response.status_code}")
        return None

# Set the range of placement IDs we want to scrape
start_id = 1
end_id = 100000

# Generate placement URLs
urls = generate_urls(start_id, end_id)

# Scrape data from all URLs
data = []
for url in urls:
    placement_data = scrape_data(url)
    if placement_data:
        data.append(placement_data)

# Convert data to DataFrame
df = pd.DataFrame(data)

# Print DataFrame
print(df)
</code></pre>
<p>which gives me back the following</p>
<pre><code> Failed to fetch data from https://youth.europa.eu/solidarity/placement/154_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/156_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/157_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/159_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/161_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/162_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/163_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/165_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/166_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/169_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/170_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/171_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/173_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/174_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/176_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/177_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/178_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/179_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/180_en. Status code: 404
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-5-d6272ee535ef> in <cell line: 42>()
41 data = []
42 for url in urls:
---> 43 placement_data = scrape_data(url)
44 if placement_data:
45 data.append(placement_data)
<ipython-input-5-d6272ee535ef> in scrape_data(url)
16 title = soup.h1.get_text(', ', strip=True)
17 location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ', strip=True)
---> 18 start_date, end_date = (e.get_text(strip=True) for e in soup.select('span.extra strong')[-2:])
19 website_tag = soup.find("a", class_="btn__link--website")
20 website = website_tag.get("href") if website_tag else None
ValueError: not enough values to unpack (expected 2, got 0)
</code></pre>
<p>Any ideas?</p>
<p>See the base URL: <a href="https://youth.europa.eu/go-abroad/volunteering/opportunities_en" rel="nofollow noreferrer">https://youth.europa.eu/go-abroad/volunteering/opportunities_en</a></p>
|
<python><pandas><dataframe><web-scraping><beautifulsoup>
|
2024-03-14 16:37:12
| 2
| 1,223
|
zero
|
78,161,984
| 3,623,537
|
typing for rare case fallback None value
|
<p>While trying to avoid typing issues, I often run into the same problem.</p>
<p>E.g. I have a function <code>x</code> that very rarely returns <code>None</code>; all other times it returns <code>int</code>.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Union

def x(i: int) -> Union[int, None]:
    if i == 0:
        return
    return i

def test(i: int):
    a = x(i)
    # typing issue: *= not supported for types int | None and int
    a *= 25
</code></pre>
<p><code>x</code> is used all over the codebase, and most of the time <code>i</code> has already been checked, so it is <em>known</em> that <code>x(i)</code> will return <code>int</code> and not <code>None</code>.
Using it as <code>int</code> right away still creates typing warnings, e.g. you can't multiply a possibly-<code>None</code> value.</p>
<p>What's best practice for that case?</p>
<p>Ideas I considered:</p>
<ol>
<li>There is no real sense to check it for <code>None</code> with <code>if a is None: return</code> as it's already <em>known</em>.</li>
<li><code>a *= 25 # type: ignore</code> will make <code>a</code> an <code>Unknown</code> type.</li>
<li><code>a = x(i) # type: int</code> will make the warning go away. But will create a new warning "int | None cannot be assigned to int"</li>
<li><code>a = cast(int, x(i))</code>, haven't tested it much yet.</li>
</ol>
<p>I usually end up changing the return type of <code>x</code> to just <code>int</code>, adding <code># type: ignore</code> on the <code>return</code>, and mentioning in the docstring that it can return <code>None</code>; this helps avoid contaminating the entire codebase with type warnings. Is this the best approach?</p>
<pre class="lang-py prettyprint-override"><code>def x(i: int) -> int:
    """might also return `None`"""
    if i == 0:
        return  # type: ignore
    return i
</code></pre>
|
<python><python-typing>
|
2024-03-14 16:22:12
| 1
| 469
|
FamousSnake
|
78,161,902
| 1,592,380
|
geodataframe is not defined
|
<p><a href="https://i.sstatic.net/ypPg2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ypPg2.png" alt="enter image description here" /></a></p>
<p>I'm working with Jupyter, ipywidgets and ipyleaflet, trying to draw polygons on a map and save them to a GeoDataFrame. I have the following in a notebook cell:</p>
<pre><code>zoom = 15

from ipywidgets import Output, interact
import ipywidgets
from __future__ import print_function
import ipyleaflet
import geopandas as gpd
import pandas as pd
# from shapely.geometry import Point, LineString, Polygon
from shapely import geometry

from ipyleaflet import (
    Map,
    Marker,
    TileLayer, ImageOverlay,
    Polyline, Polygon, Rectangle, Circle, CircleMarker,
    GeoJSON,
    DrawControl)

# Create an Output widget
out = Output()

# Define a function to handle interactions
def handle_interaction(change):
    with out:
        print(change)

c = ipywidgets.Box()
c.children = [m]

# keep track of rectangles and polygons drawn on map:
def clear_m():
    global rects, polys
    rects = set()
    polys = set()

clear_m()

rect_color = '#a52a2a'
poly_color = '#00F'

myDrawControl = DrawControl(
    rectangle={'shapeOptions': {'color': rect_color}},
    polygon={'shapeOptions': {'color': poly_color}})  # ,polyline=None)

def handle_draw(self, action, geo_json):
    global rects, polys
    polygon = []
    for coords in geo_json['geometry']['coordinates'][0][:-1][:]:
        handle_interaction(coords)
        polygon.append(tuple(coords))
    polygon = tuple(polygon)
    handle_interaction(polygon)
    if geo_json['properties']['style']['color'] == '#00F':  # poly
        if action == 'created':
            handle_interaction(" in here")
            polys.add(polygon)
            polygon_geom = geometry.Polygon(polygon)
            # Create GeoDataFrame if it doesn't exist
            if gdf2 is None:
                gdf2 = gpd.GeoDataFrame(geometry=[polygon_geom])
            else:
                gdf2 = gdf2.append({'geometry': polygon_geom}, ignore_index=True)
        elif action == 'deleted':
            polys.discard(polygon)
    if geo_json['properties']['style']['color'] == '#a52a2a':  # rect
        if action == 'created':
            rects.add(polygon)
        elif action == 'deleted':
            rects.discard(polygon)

myDrawControl.on_draw(handle_draw)
m.add_control(myDrawControl)
</code></pre>
<p>After drawing the shapes in the map, I can see</p>
<pre><code>display(out)
</code></pre>
<p>[-88.434269, 31.660818]
[-88.431051, 31.661439]
[-88.431265, 31.660087]
[-88.433582, 31.659941]
((-88.434269, 31.660818), (-88.431051, 31.661439), (-88.431265, 31.660087), (-88.433582, 31.659941))
in here
[-88.432166, 31.658474]
[-88.429678, 31.65767]
[-88.431609, 31.656684]
[-88.434054, 31.65778]
((-88.432166, 31.658474), (-88.429678, 31.65767), (-88.431609, 31.656684), (-88.434054, 31.65778))
in here</p>
<p>listed in the next cell (the expected results). However when I try:</p>
<pre><code>print(gdf2)
</code></pre>
<p>I get:</p>
<pre><code>NameError: name 'gdf2' is not defined
</code></pre>
<p>What am I doing wrong?</p>
|
<python><jupyter-notebook><ipywidgets><ipyleaflet>
|
2024-03-14 16:09:41
| 2
| 36,885
|
user1592380
|
78,161,736
| 9,795,817
|
How to update pyspark dataframe inside a Python function
|
<p>I have a Python function that receives a pyspark dataframe and checks if it has all the columns expected by other functions used in a script. In particular, if the column <code>'weight'</code> is missing, I want to update the dataframe passed by the user by assigning a new column to it.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import functions as F

def verify_cols(df):
    if 'weight' not in df.columns:
        df = df.withColumn('weight', F.lit(1))  # Can I update `df` inside this function?
</code></pre>
<p>As you can see, I want the function to update <code>df</code>. How can I achieve this? If possible, I would like to avoid using a <code>return</code> statement.</p>
<p><a href="https://stackoverflow.com/questions/66744816/how-to-update-a-dataframe-with-a-user-defined-function-pandas-python">This post</a> is very similar but uses pandas' <code>inplace</code> argument.</p>
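<p>My understanding of why the snippet above has no effect (a sketch with a plain Python object, since the mechanism is just name binding, not Spark-specific):</p>

```python
def rebind(x):
    # `x = ...` rebinds the *local* name only; the caller's variable is untouched.
    # This is exactly what happens with `df = df.withColumn(...)` inside a function.
    x = x + [1]

lst = []
rebind(lst)
# lst is unchanged after the call
```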
|
<python><apache-spark><pyspark><user-defined-functions>
|
2024-03-14 15:45:20
| 1
| 6,421
|
Arturo Sbr
|
78,161,637
| 867,889
|
Is there a way to tell which python thread was the last one making any progress?
|
<p>Let's say I have a python process that starts 50 threads most of which depend on each other. All of which eventually hang because some leaf thread got blocked. Given these 50 python threads I can examine them with <code>py-spy</code> and see that everything is hanging. Is there a way to tell which thread was the last one making any progress? Like, having the instruction pointer updated or something.</p>
|
<python><multithreading>
|
2024-03-14 15:27:55
| 0
| 10,083
|
y.selivonchyk
|
78,161,622
| 9,983,652
|
Is it possible to use share_xaxis for some specific subplots?
|
<p>I use make_subplots with 3 rows and 1 column; the first 2 rows are data plots and the last row is an image. I'd like to share the x-axis between the first 2 data plots only. How do I do that while excluding the last row's image plot? Thanks</p>
<pre><code>fig = make_subplots(rows=3, cols=1,
vertical_spacing=0.05,
specs=[[{"secondary_y": False}],[{"secondary_y": True}],[{"secondary_y": False,"type": "image"}]], #
subplot_titles=(plot_title_1stsubplot,plot_title_2ndsubplot
),
row_heights=[0.7,0.15,0.15],
shared_xaxes=False,
)
</code></pre>
|
<python><plotly>
|
2024-03-14 15:25:31
| 1
| 4,338
|
roudan
|
78,161,523
| 16,425,408
|
Auto back up using python
|
<p>I am utilizing the Python code below to take an automated backup of my files.</p>
<pre><code>import shutil
import os

def backup_folder(source_folder, backup_folder):
    try:
        # Check if the source folder exists
        if not os.path.exists(source_folder):
            print(f"Error: Source folder '{source_folder}' does not exist.")
            return

        # Create the backup folder if it doesn't exist
        if not os.path.exists(backup_folder):
            os.makedirs(backup_folder)

        # Copy contents of source folder to backup folder
        shutil.copytree(source_folder, os.path.join(backup_folder, os.path.basename(source_folder)))
        print(f"Backup completed successfully.")

    except Exception as e:
        print(f"Error: {e}")

# Example usage
source_folder = '/path/to/source_folder'
backup_folder = '/path/to/backup_folder'
</code></pre>
<p>However, when I run it (with <code>python3 name.py</code>), the backup folder is not getting created. I have tried running it as root, but that did not work either. I am not sure what the issue is.</p>
<p>Are there any other considerations that I may have overlooked? In addition, do you have any ideas on how to automate this process? I'm wondering if it would be possible to use crontab or a Jenkins Job.</p>
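<p>For the scheduling part, my current plan is a cron entry along these lines (paths are placeholders):</p>

```shell
# m h dom mon dow  command  -- run the backup daily at 02:00 (edit via `crontab -e`)
0 2 * * * /usr/bin/python3 /path/to/backup_script.py >> /tmp/backup.log 2>&1
```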
|
<python><automation><scripting><backup>
|
2024-03-14 15:09:46
| 0
| 838
|
Nani
|
78,161,348
| 5,730,859
|
Python to mailmerge csv to word in continuous in one document
|
<p>I want to mailmerge a table in <strong>csv(List.csv) to word(example1.docx) but continuously into multiple pages in one word document</strong>. The function is something like "Next Record" in MS Word. I have a template (Test.docx). I can't find in Python.</p>
<pre><code>from __future__ import print_function
import pandas as pd
from mailmerge import MailMerge
from datetime import date

# Define the templates - assumes they are in the same directory as the code
template_1 = "Test.docx"

df1 = pd.read_csv('List.csv')
looprange = range(int(len(df1.index)))
for j in looprange:
    document_1 = MailMerge(template_1)
    document_1.merge(
        BusinessName=df1.BusinessName[j],
        Name=df1.Name[j],
    )
    document_1.write('example1.docx')
</code></pre>
<p><strong>List.csv</strong></p>
<p><a href="https://i.sstatic.net/gQBK1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gQBK1.png" alt="enter image description here" /></a></p>
<p><strong>Test.docx</strong></p>
<p><a href="https://i.sstatic.net/0nEKP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0nEKP.png" alt="enter image description here" /></a></p>
<p>However, I only get the output for the last row, and it is a single page of repeated data. The desired output should be <strong>all 4 rows of data</strong> across two pages. <strong>The data should be different on the left and right, as I split the template into two columns.</strong></p>
<p><strong>Output currently:</strong></p>
<p><a href="https://i.sstatic.net/mIfVi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mIfVi.png" alt="enter image description here" /></a></p>
|
<python><mailmerge><docx-mailmerge>
|
2024-03-14 14:42:02
| 1
| 934
|
bkcollection
|
78,161,222
| 7,134,737
|
pyspark - What is the difference between these two full outer joins?
|
<p>Full Example <a href="https://pastebin.com/b79s6gDG" rel="nofollow noreferrer">here</a>.</p>
<p>I am seeing two different outputs with these two ways of doing a full outer join on two dataframes in pyspark:</p>
<pre><code>users1_df. \
join(users2_df, users1_df.email == users2_df.email, 'full_outer'). \
show()
</code></pre>
<p>This gives:</p>
<pre><code>+--------------------+----------+---------+------+-------------+--------------------+----------+----------+------+---------------+
| email|first_name|last_name|gender| ip_address| email|first_name| last_name|gender| ip_address|
+--------------------+----------+---------+------+-------------+--------------------+----------+----------+------+---------------+
| alovett0@nsw.gov.au| Aundrea| Lovett| Male| 62.72.1.143| null| null| null| null| null|
|bjowling1@spiegel.de| Bettine| Jowling|Female|26.250.197.47|bjowling1@spiegel.de| Putnam|Alfonsetti|Female| 167.97.48.246|
| null| null| null| null| null| lbutland7@time.com| Lilas| Butland|Female|109.110.131.151|
+--------------------+----------+---------+------+-------------+--------------------+----------+----------+------+---------------+
</code></pre>
<p>Notice that the <code>email</code> column is repeated and there are nulls for the email that is not present in both dataframes.</p>
<p>Now for this following code:</p>
<pre><code>users1_df. \
join(users2_df, 'email', 'full_outer'). \
show()
</code></pre>
<p>I get the following:</p>
<pre><code>+--------------------+----------+---------+------+-------------+----------+----------+------+---------------+
| email|first_name|last_name|gender| ip_address|first_name| last_name|gender| ip_address|
+--------------------+----------+---------+------+-------------+----------+----------+------+---------------+
| alovett0@nsw.gov.au| Aundrea| Lovett| Male| 62.72.1.143| null| null| null| null|
|bjowling1@spiegel.de| Bettine| Jowling|Female|26.250.197.47| Putnam|Alfonsetti|Female| 167.97.48.246|
| lbutland7@time.com| null| null| null| null| Lilas| Butland|Female|109.110.131.151|
+--------------------+----------+---------+------+-------------+----------+----------+------+---------------+
</code></pre>
<p>Notice that the <code>email</code> column is not repeated and there are also no nulls.</p>
<p>Am I missing something? Where is this behavior mentioned in the docs of <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.join.html" rel="nofollow noreferrer">pyspark.join</a> ?</p>
|
<python><dataframe><apache-spark><join><pyspark>
|
2024-03-14 14:22:32
| 2
| 3,312
|
ng.newbie
|
78,161,053
| 1,256,925
|
Convert entire (Python) file from 2-space indent to 4-space indent
|
<p>I regularly work with Python files that are provided to me as templates, which use an indentation of 2. However, I personally prefer working with an indentation width of 4, which is what I've set in my <code>.vimrc</code>. However, because of the indentation-sensitivity of Python, the typical <code>gg=G</code> way to fix indentation to my preferred indentation width does not work well at all to convert a file to my preferred indentation. Furthermore, just leaving the indentation at a width of 2 screws up with my tabstop settings.</p>
<p>To fix this, what I'd like to do is have some system to convert a 2-space indented file into a 4-space indented file. I don't know what the easiest way to do this would be, but I'm thinking to <code>vmap <Tab></code> to increase the tab width of the selected lines by 1, and <code>vmap <S-Tab></code> to decrease the width by 1. I don't know if there is some built-in method to do this, or if this would require some find/replace rule, but I think having some way to fix the indentation to my preferred width would be useful, given that the autoindent is not smart enough to fix this properly.</p>
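<p>One fallback I am considering is filtering the buffer through a small external script (e.g. <code>:%!python3 reindent.py</code>); a sketch, assuming spaces-only indentation:</p>

```python
def reindent(text: str, old: int = 2, new: int = 4) -> str:
    """Rewrite leading space-indentation from old-width to new-width units."""
    out = []
    for line in text.splitlines(keepends=True):
        stripped = line.lstrip(' ')
        levels = (len(line) - len(stripped)) // old
        out.append(' ' * (levels * new) + stripped)
    return ''.join(out)

# Example: a 2-space-indented snippet becomes 4-space-indented
converted = reindent("def f():\n  if x:\n    return 1\n")
```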
|
<python><vim><indentation><auto-indent>
|
2024-03-14 13:57:11
| 4
| 19,172
|
Joeytje50
|
78,160,942
| 1,311,325
|
Polling MySQL database not fetching new results
|
<p>In reference to the linked so-called duplicate question. That question asks why data is not committed to the database after an INSERT query.<br />
In my case, I am polling the table with a SELECT query. New items are posted by another process (and are actually committed to the database as confirmed using Adminer).<br />
As stated below, the first time through the loop, the Select query returns all the records expected. On the second and all further passes, nothing is returned, even if new records are added by the other process.<br />
If I stop and restart this loop, it again finds all the new records on the first pass, but never finds any more unless restarted again.</p>
<p>I have a table of records created by a legacy system. I need to poll the table and create new records in another database on another server.
I was able to add a new field (posted_in_prodmon) with the default value of 0.
I have created a service that queries the original table and finds any records with a value of 0 in the new field. I then create the new record in the new database (using some data from the original) and then update the new field to equal 1. If no new records exist in the table, the process sleeps for 10 seconds, then tries again.
New records appear roughly every 20 seconds.</p>
<p>I am using python3.9 and mysql-connector-python==8.0.33 for my service and the server is running MySQL 8.0.23</p>
<p>My problem is that no matter what I try, I can't seem to find any new records after the initial records are found. Here are the relevant bits of my code:</p>
<pre><code>class Mysql_DB():
    def __init__(self, logger):
        self.logger = logger
        self.connected = False
        self.connection = None

    def is_connected(self):
        if self.connection:
            if self.connection.is_connected():
                return True
        try:
            self.logger.info(f'Not connected to mysql server... reconnecting')
            self.connection = mysql.connector.connect(**self.dbconfig)
            return True
        except (mysql.connector.Error, IOError) as err:
            self.logger.error(f'Mysql connection failed: {err}')
            return False

class Source(Mysql_DB):
    def get_records(self):
        if self.is_connected():
            sql = 'SELECT id, inspection_data '
            sql += 'FROM `1730_Vantage` '
            sql += 'WHERE part_fail = 1 '
            sql += 'AND posted_in_GFxPRoduction = 0 '
            sql += 'AND created_at > "2023-06-01" '
            sql += 'ORDER BY created_at DESC; '
            # with self.connection.cursor(buffered=True) as cursor:
            #     cursor.reset()  # didn't help
            #     cursor.execute(sql)
            cursor = self.connection.cursor(buffered=True)
            cursor.reset()  # didn't help
            cursor.execute(sql)
            return cursor

while True:
    for result in source.get_records():
        id = result[0]
        data = result[1]
        data_list = data.split("\t", 3)
        created_at = f'{data_list[0]} {data_list[1]}'
        created_at = created_at.replace('_', '-')
        part = data_list[2]
        ts = int(datetime.timestamp(datetime.fromisoformat(created_at)))
        target.insert(ts, part)
        rowcount = source.update_record(id, created_at)
        if not rowcount:
            raise Exception(f'Failed to update id {id}')
    results.close()  # added after removing with block above... no change
    source.connection.close()  # this is the only thing that works...
    sleep(10)
</code></pre>
<p>I did read that I could do this with a trigger but that seemed overly complicated.</p>
<p>I have tried cursor.reset().</p>
<p>I have tried adding LIMIT 1 (started with that).</p>
<p>Do I need to close the connection? Seems wasteful.</p>
<p>UPDATE: So I tried to remove the <code>with</code> block as suggested by Marcin below. His logic made sense but did not have an effect.<br />
The only thing that seems to work is if I close the connection every iteration.</p>
<p>Hoping someone can come up with a cleaner solution.</p>
|
<python><mysql>
|
2024-03-14 13:38:41
| 1
| 6,247
|
cstrutton
|
78,160,781
| 11,918,314
|
Function to filter a dataframe based on multiple conditions with groupby and dropping of duplicates
|
<p>I have a dataframe and would like to create a function to keep rows or drop duplicates based on certain conditions</p>
<p>original dataframe</p>
<pre><code>year year_month manager_movement email_address
2022 2022_jun transfer_in mary.crowe@abc.com
2022 2022_jun no_change andrew.gupta@abc.com
2022 2022_jul no_change mary.crowe@abc.com
2022 2022_jul no_change andrew.gupta@abc.com
2022 2022_aug no_change mary.crowe@abc.com
2022 2022_aug no_change andrew.gupta@abc.com
2022 2022_sep transfer_out mary.crowe@abc.com
2022 2022_sep no_change andrew.gupta@abc.com
2022 2022_oct transfer_in mary.crowe@abc.com
2022 2022_oct no_change andrew.gupta@abc.com
2023 2023_jan no_change andrew.gupta@abc.com
2023 2023_feb no_change andrew.gupta@abc.com
</code></pre>
<p>Expected dataframe</p>
<pre><code>year year_month manager_movement email_address
2022 2022_jun transfer_in mary.crowe@abc.com
2022 2022_oct transfer_in mary.crowe@abc.com
2022 2022_oct no_change andrew.gupta@abc.com
2023 2023_feb no_change andrew.gupta@abc.com
</code></pre>
<p>The logic to get the dataframe is as follows:<br />
1st: if df['manager_movement'] == 'transfer_out', then remove the rows</p>
<p>2nd: elseif df['manager_movement'] == 'transfer_in', then keep only the rows with 'transfer_in' and drop the other rows if there is 'no_change'.</p>
<p>3rd: elseif df['manager_movement'] == 'no_change', then groupby 'year' and 'email_address' and drop duplicates and keep last row</p>
<p>Here was my attempt but can't seem to get my desired output. Appreciate any help or comments, thank you.</p>
<pre><code>def get_required_rows(x):
    if x['manager_movement'] == 'transfer_out':
        return x.loc[x['manager_movement'] != 'transfer_out']
    elif x['manager_movement'] == 'transfer_in':
        return x
    elif x['manager_movement'] == 'No Change':
        return x.drop_duplicates(['year', 'email_address'], keep='last')

df_filtered = df.apply(get_required_rows, axis=1)
</code></pre>
|
<python><python-3.x><dataframe><function><filtering>
|
2024-03-14 13:12:17
| 2
| 445
|
wjie08
|
78,160,464
| 5,539,707
|
Why 00 is a valid integer in Python?
|
<p>In the <a href="https://docs.python.org/3/reference/lexical_analysis.html#integer-literals" rel="nofollow noreferrer">Python documentation</a> :</p>
<pre><code>integer ::= decinteger | bininteger | octinteger | hexinteger
decinteger ::= nonzerodigit (["_"] digit)* | "0"+ (["_"] "0")*
bininteger ::= "0" ("b" | "B") (["_"] bindigit)+
octinteger ::= "0" ("o" | "O") (["_"] octdigit)+
hexinteger ::= "0" ("x" | "X") (["_"] hexdigit)+
nonzerodigit ::= "1"..."9"
digit ::= "0"..."9"
bindigit ::= "0" | "1"
octdigit ::= "0"..."7"
hexdigit ::= digit | "a"..."f" | "A"..."F"
</code></pre>
<p>I can't get why <code>00</code> is a valid integer with those definitions? It seems a <code>nonzerodigit</code> must be the leading digit, though:</p>
<pre class="lang-py prettyprint-override"><code>>>> isinstance(00, int)
True
</code></pre>
<p>And maybe the same question: why <code>0</code> is valid?</p>
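<p>To check my reading, I tried translating the <code>decinteger</code> rule into a regex (my own translation, which may be off):</p>

```python
import re

# decinteger ::= nonzerodigit (["_"] digit)* | "0"+ (["_"] "0")*
decinteger = re.compile(r'^(?:[1-9](?:_?[0-9])*|0+(?:_?0)*)$')

matches_00 = bool(decinteger.match('00'))  # second alternative: one or more zeros
matches_0 = bool(decinteger.match('0'))
matches_01 = bool(decinteger.match('01'))  # neither alternative matches
```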
|
<python><integer><pattern-matching>
|
2024-03-14 12:21:10
| 1
| 1,593
|
david
|
78,160,229
| 351,885
|
"Unsupported serialized class" when using Pyro5 proxy to get object
|
<p>I am updating some code to use Pyro5 (from Pyro3) and can't see how to deal with custom objects that are returned from a method accessed via a Pyro proxy. As a demonstration I have created two simple classes in a file <code>classes.py</code>: <code>Container</code> has a method that returns an instance of <code>Item</code>:</p>
<pre><code>from Pyro5.api import expose

class Container:
    @expose
    def get_item(self):
        return Item()

class Item():
    def __str__(self):
        return "item"
</code></pre>
<p>The <code>serpent</code> package can serialize and deserialize objects of type <code>Item</code>:-</p>
<pre><code>import serpent
from classes import Item
item = Item()
s = serpent.dumps(item)
item2 = serpent.loads(s)
print(item2)
</code></pre>
<p>Running this prints, as expected:</p>
<pre><code>{'__class__': 'Item'}
</code></pre>
<p>Now I create a Pyro5 daemon and register an instance of <code>Container</code>:</p>
<pre><code>from Pyro5.api import Daemon
from classes import Container
daemon = Daemon(host="localhost", port=5555)
daemon.register(Container(),"container")
daemon.requestLoop()
</code></pre>
<p>But when I try to obtain an instance of <code>Item</code> via the proxy like this</p>
<pre><code>from Pyro5.api import Proxy
container = Proxy("PYRO:container@localhost:5555")
print(container.get_item())
</code></pre>
<p>I get an exception</p>
<pre><code>Pyro5.errors.SerializeError: unsupported serialized class: classes.Item
</code></pre>
<p>If <code>get_item()</code> is changed to return a string, all is OK. Is there a way to get my class <code>Item</code> to be serialized and deserialized without a lot of extra custom code? Since serpent deals with it OK, and is used by Pyro5, it seems that this should not be too complex!</p>
|
<python><serialization><pyro>
|
2024-03-14 11:40:27
| 1
| 2,398
|
Ben
|
78,160,159
| 8,543,025
|
Python Hashing of "tupled" numpy Array
|
<p>I have a class <code>MyClass</code> where each instance stores pixels' x- and y-coordinates, represented as two 1D numpy arrays (of the same length). Two instances are considered equal if their coordinate arrays are identical (including <code>nan</code>).<br />
I tried two methods of hashing: one by casting both arrays to tuples and hashing those, and the other by calling the <code>tobytes()</code> method for each array:</p>
<pre><code>class MyClass:
    # ... init, doA(), doB(), etc. ...

    def __eq__(self, other):
        if not type(self) == type(other):
            return False
        if not np.array_equal(self._x, other._x, equal_nan=True):
            return False
        if not np.array_equal(self._y, other._y, equal_nan=True):
            return False
        return True

    def hash1(self):
        return hash((tuple(self._x), tuple(self._y)))

    def hash2(self):
        return hash((self._x.tobytes(), self._y.tobytes()))
</code></pre>
<p>Calling <code>hash1</code> on the same instance yields different hashes, and calling <code>hash2</code> outputs the same thing every time. Why do these behave so differently?</p>
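<p>A detail that may be relevant (my arrays contain <code>nan</code>): as far as I can tell, since CPython 3.10 the hash of a NaN depends on the object's identity, so two distinct NaN objects usually hash differently. A minimal check (before 3.10, NaN always hashed to 0):</p>

```python
a = float('nan')
b = float('nan')

# On CPython >= 3.10 these usually differ (identity-based NaN hash);
# on older versions both hashes are 0.
same = hash(a) == hash(b)
```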
|
<python><numpy><hash>
|
2024-03-14 11:29:26
| 1
| 593
|
Jon Nir
|
78,160,084
| 5,417,867
|
Odoo filter order_line records on sale order form on custom property on product
|
<p>I have a custom Boolean property on product named 'emptygoods'. Now I'd like to filter the order lines on a sale order form to only show the lines where emptygoods is set to False for their product. I tried using the domain attribute on order_line, but without success. It seems nothing inserted in the domain is doing a thing (I tried numerous variants).</p>
<pre><code> <xpath expr="//field[@name='order_line']" position="attributes">
<attribute name="domain">[('product_id.emptygoods', '=', True)]</attribute>
</xpath>
</code></pre>
<p>product.py models file:</p>
<pre><code>from odoo import models, fields, api

class product_template_inherit(models.Model):
    _inherit = 'product.template'

    emptygoods = fields.Boolean("Is empty goods")
</code></pre>
|
<python><filter><odoo>
|
2024-03-14 11:18:15
| 0
| 769
|
Jesse
|
78,159,962
| 2,859,206
|
In pandas, how to reliably set the index order of multilevel columns during or after a pivot of two columns plus a value column
|
<p>After pivoting around two columns with a separate value column, I want a df with multiindex columns in a specific order, like so (please ignore that multi-2 and multi-3 labels are pointless in the simplified example):</p>
<pre><code>multi-1 one two
multi-2 multi-2 multi-2
multi-3 SomeText SomeText
mIndex
bar -1.788089 -0.631030
baz -1.836282 0.762363
foo -1.104848 -0.444981
qux -0.484606 -0.507772
</code></pre>
<p>Starting with a multiindex series of values, labelled multi-2, I create a three column df: column 1 - the serie's indexes (multi-1); column 2 - the values (multi-2); plus another column (multi-3), which I really only want for the column label. I then want to pivot this df around multi-1 and multi-3, with values multi-2. PROBLEM: The multiindex column labels MUST always be in a specific order: multi-1, multi-2, then multi-3.</p>
<pre><code>import pandas as pd
import numpy as np
arrays = [["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
["one", "two", "one", "two", "one", "two", "one", "two"]]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples, names=["mIndex", "multi-1"])
s = pd.Series(np.random.randn(8), index=index)
s.rename("multi-2", inplace=True)
df = pd.DataFrame(s.reset_index(level=["multi-1"]))
df["multi-3"] = "SomeText"
df = df.pivot(columns={"multi-1", "multi-3"}, values=["multi-2"])
df = df.swaplevel(0,1, axis=1) # option 1: works only sometimes
# ???? how do I name the values level ????
df = df.reorder_levels("multi-1", "multi-2", "multi-3") # option 2: set fixed order
</code></pre>
<p>Including multi-2 in the columns during the pivot creates another level.</p>
<p>The .swaplevel method does not always return the same order because (I guess) the original index order is not always the same following the pivot. Can this be right?!?</p>
<p>To use the reorder_levels, I need to somehow set an index label for the multi-2 value level (which is currently "None", along side "Multi-1" and "Multi-3").</p>
<p>Is there a way to set the label during the pivot? or after the pivot in a way that doesn't use the index (which seems to change somehow)? Or another way to get the same outcome?</p>
|
<python><pandas><dataframe><multi-index>
|
2024-03-14 10:56:14
| 1
| 2,490
|
DrWhat
|
78,159,960
| 11,659,631
|
Power law fit doesn't work in python: it's either way off or returns only the starting parameters
|
<p>I'm very, very confused. I'm trying to fit a power law to my data. I tried my code on randomly generated data and it works just fine (see figure), but when I try it with my data, the fit is way off. I tried to help curve_fit by giving starting values for the fitting parameters, but in that case it only returns the starting parameters. This does not make sense. Could anyone help me, please?</p>
<p><a href="https://i.sstatic.net/Xf5Jk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xf5Jk.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/LXgTs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LXgTs.png" alt="enter image description here" /></a></p>
<p>Here is my code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# Define the power law function
def power_law(x, factor, exponent):
    '''
    x: x axis data
    factor: y axis intersection
    exponent: slope
    '''
    return factor * x ** exponent

# Generate synthetic data following a power-law distribution
np.random.seed(0) # for reproducibility
x_data = np.linspace(1, 10, 50) # example x values
y_data = power_law(x = x_data, factor = 0.2, exponent = -10) * (1 + np.random.normal(scale=0.1, size=len(x_data))) # example y values with added noise
# Fit the power-law model to the data
params, covariance = curve_fit(power_law, x_data, y_data)
# Extract fitted parameters
fac_fit, exp_fit = params
# Plot the data and the fitted power-law curve
plt.figure()
plt.scatter(x_data, y_data, label = 'Data')
plt.plot(x_data, power_law(x_data, fac_fit, exp_fit), color='red', label='Fitted Power Law')
plt.xscale('log')
plt.yscale('log')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Fitting a Power Law to Data')
plt.legend()
plt.grid(True)
plt.show()
# Print the fitted parameters
print("Fitted Parameters:")
print("factor =", fac_fit)
print("exponent =", exp_fit)
# My data
x_data = freq_full_ordered_withM[mask]
y_data = PSD_full_ordered_withM[mask]
# Filter out non-positive values from x_data and corresponding y_data
positive_mask = x_data > 0
x_data = x_data[positive_mask]
y_data = y_data[positive_mask]
# Fit the power-law model to the data
params, covariance = curve_fit(power_law, x_data, y_data)
# Extract fitted parameters
fac_fit, exp_fit = params
# Plot the data and the fitted power-law curve
plt.figure()
plt.scatter(x_data, y_data, label = 'Data')
plt.plot(x_data, power_law(x_data, fac_fit, exp_fit), color='red', label='Fitted Power Law')
plt.xscale('log')
plt.yscale('log')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Fitting a Power Law to Data')
plt.legend()
plt.grid(True)
plt.show()
</code></pre>
<p>UPDATE: I gave it some starting parameter and now it's fitting something but it's still really off.</p>
<pre><code>initial_guess = [10**2, -2] # Initial guess parameters (a, b)
bounds = ([10**0, -5], [10**5, 0]) # Lower and upper bounds for parameters (a, b)
# Fit the power-law model to the data
params, covariance = curve_fit(power_law, x_data, y_data, p0=initial_guess, bounds=bounds)
</code></pre>
<p><a href="https://i.sstatic.net/FZ3pB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FZ3pB.png" alt="enter image description here" /></a></p>
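<p>One idea I have not fully tested yet: fitting in log-log space instead, which is often more stable when y spans many orders of magnitude (sketch with synthetic, noise-free data):</p>

```python
import numpy as np

# A power law y = a * x**b is linear in log space: log y = log a + b * log x
x_fake = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y_fake = 3.0 * x_fake ** -2.0

# Ordinary linear least squares on the logs recovers the exponent and prefactor
b_fit, log_a_fit = np.polyfit(np.log(x_fake), np.log(y_fake), 1)
a_fit = np.exp(log_a_fit)
# For this noise-free data, a_fit and b_fit recover 3.0 and -2.0
```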
|
<python><curve-fitting>
|
2024-03-14 10:56:04
| 1
| 338
|
Apinorr
|
78,159,949
| 8,972,038
|
How to stack numpy arrays with float values
|
<p>I have two numpy arrays with float values as below</p>
<pre><code>a = np.array([.6,.5])
b = np.array([.2,.3])
print(np.stack(a,b,dtype=float))
</code></pre>
<p>When I want to stack them, I was expecting a result like</p>
<pre><code>[[.6, .5]
[.2, .3]]
</code></pre>
<p>But I am getting this error <code>TypeError: only integer scalar arrays can be converted to a scalar index</code></p>
<p>How to achieve this? Thanks in advance.</p>
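<p>For reference, a minimal sketch of what seems to work (hedged: my understanding is that <code>np.stack</code> takes a single sequence of arrays as its first argument, so a bare second positional argument is interpreted as <code>axis</code>):</p>

```python
import numpy as np

a = np.array([.6, .5])
b = np.array([.2, .3])

# np.stack expects a sequence of arrays as its first argument;
# passing b as the second positional argument makes it the `axis`.
stacked = np.stack((a, b))
print(stacked)
# [[0.6 0.5]
#  [0.2 0.3]]
```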
|
<python><numpy>
|
2024-03-14 10:52:47
| 2
| 418
|
Ankush Pandit
|
78,159,826
| 9,079,411
|
Get PriceBook items details using Pricebook id in nested query
|
<p>I have an sObject called Edition, it has the related PriceBook id, how to get the items of the PriceBook when querying the Edition details:</p>
<p>What I tried:</p>
<pre class="lang-py prettyprint-override"><code> query = f"""
SELECT
Id,
Name,
Edition_Name__c,
End_Date__c,
Start_Date__c,
Legal_Entity__c,
CurrencyIsoCode,
Status__c,
Price_Book__c,
(SELECT Id, Name FROM PricebookEntry WHERE Pricebook2Id = Price_Book__c)
FROM Edition__c
WHERE Id = '{edition_id}'
"""
</code></pre>
<p><strong>Update</strong>:</p>
<p><code>Price_Book__c</code> in <code>Edition__c/describe</code> :</p>
<pre class="lang-json prettyprint-override"><code>"name": "Price_Book__c",
"nameField": false,
"namePointing": false,
"nillable": true,
"permissionable": true,
"picklistValues": [],
"polymorphicForeignKey": false,
"precision": 0,
"queryByDistance": false,
"referenceTargetField": null,
"referenceTo": [
"Pricebook2"
],
"relationshipName": "Price_Book__r",
"relationshipOrder": null,
"restrictedDelete": false,
"restrictedPicklist": false,
"scale": 0,
"searchPrefilterable": true,
"soapType": "tns:ID",
"sortable": true,
"type": "reference",
</code></pre>
<p>I have a working solution with 2 queries, but I don't know whether it's the best solution:</p>
<pre class="lang-py prettyprint-override"><code> try:
sf = SFManager().sf
edition_id = kwargs.get("id")
# SOQL query to retrieve specific edition data using the edition id
edition_query = f"""
SELECT
Id,
Name,
Edition_Name__c,
End_Date__c,
Start_Date__c,
Legal_Entity__c,
CurrencyIsoCode,
Status__c,
Price_Book__c
FROM Edition__c WHERE Id = '{edition_id}'"""
# Execute the query
edition_result = sf.query(edition_query)
edition_record = edition_result["records"][0]
price_book_id = edition_record['Price_Book__c']
if price_book_id is not None:
query = f"""
SELECT
Id,
Name,
CurrencyIsoCode,
UnitPrice,
IsActive
FROM PricebookEntry WHERE Pricebook2Id = '{price_book_id}'
"""
# Execute the query
price_book_entries = sf.query(query)
edition_record['PricebookEntries'] = price_book_entries["records"]
return Response(edition_record)
</code></pre>
<p>Link to the same question in salesforce exchange : <a href="https://salesforce.stackexchange.com/questions/419463/get-pricebook-items-details-using-pricebook-id-in-nested-query/419471#419471">https://salesforce.stackexchange.com/questions/419463/get-pricebook-items-details-using-pricebook-id-in-nested-query/419471#419471</a></p>
<p>Last note, an <code>Edition__c</code> can have multiple <code>Pricebook2</code> related to it.</p>
|
<python><salesforce><soql><simple-salesforce>
|
2024-03-14 10:34:39
| 2
| 2,494
|
B. Mohammad
|
78,159,761
| 8,458,083
|
How can I adjust the Nix Flake configuration for my virtual environment to ensure the successful execution of a Python script reliant on Ollama?
|
<p>I want to create a virtual environment where I can run this c.py using ollama. (like in this example <a href="https://python.langchain.com/docs/integrations/llms/ollama" rel="nofollow noreferrer">https://python.langchain.com/docs/integrations/llms/ollama</a>)</p>
<p>c.py:</p>
<pre><code>from langchain_community.llms import Ollama
llm = Ollama(model="llama2")
llm.invoke("Tell me a joke")
</code></pre>
<p>I created the virtual environment with this flake:</p>
<pre><code>{
description = "Python environment with ollama";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
inputs.flake-utils.url = "github:numtide/flake-utils";
outputs = { self, nixpkgs, flake-utils }:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = nixpkgs.legacyPackages.${system};
python = pkgs.python311;
ollama=pkgs.ollama;
Py = python.withPackages (ps: with ps; [
langchain
]);
in {
devShells.default = pkgs.mkShell {
buildInputs = [
ollama
Py
];
};
});
}
</code></pre>
<p>The environment seems to be created without problems after running <code>nix develop</code>.</p>
<p>But when I try to run the code, I get the following error.</p>
<pre><code> python c.py
Traceback (most recent call last): File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/urllib3/connection.py", line 203, in _new_conn
sock = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last): File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request( File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/urllib3/connection.py", line 395, in request
self.endheaders() File "/nix/store/3v2ch16fkl50i85n05h5ckss8pxx6836-python3-3.11.8/lib/python3.11/http/client.py", line 1293, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked) File "/nix/store/3v2ch16fkl50i85n05h5ckss8pxx6836-python3-3.11.8/lib/python3.11/http/client.py", line 1052, in _send_output
self.send(msg) File "/nix/store/3v2ch16fkl50i85n05h5ckss8pxx6836-python3-3.11.8/lib/python3.11/http/client.py", line 990, in send
self.connect() File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/urllib3/connection.py", line 243, in connect
self.sock = self._new_conn()
^^^^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/urllib3/connection.py", line 218, in _new_conn
raise NewConnectionError( urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7ffff4b1d510>: Failed to establish a new connection: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last): File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffff4b1d510>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last): File "/mnt/c/Users/Pierre-Olivier/Documents/python/llm/ollama/c.py", line 3, in <module>
llm.invoke("Tell me a joke") File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 273, in invoke
self.generate_prompt( File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 568, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 741, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 605, in _generate_helper
raise e File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 592, in _generate_helper
self._generate( File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/langchain_community/llms/ollama.py", line 408, in _generate
final_chunk = super()._stream_with_aggregation(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/langchain_community/llms/ollama.py", line 317, in _stream_with_aggregation
for stream_resp in self._create_generate_stream(prompt, stop, **kwargs): File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/langchain_community/llms/ollama.py", line 159, in _create_generate_stream
yield from self._create_stream(
^^^^^^^^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/langchain_community/llms/ollama.py", line 220, in _create_stream
response = requests.post(
^^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/rjh6glh0f6l27f893pknrg7p87ajhp65-python3-3.11.8-env/lib/python3.11/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffff4b1d510>: Failed to establish a new connection: [Errno 111] Connection refused'))
</code></pre>
|
<python><nix><ollama><flake>
|
2024-03-14 10:25:36
| 1
| 2,017
|
Pierre-olivier Gendraud
|
78,159,500
| 7,074,969
|
Can't load external stylesheet using dash-bootstrap after clearing browser cache
|
<p>I'm using <code>dash-bootstrap-components</code> and external stylesheets for my navbar. The third line in my code is literally</p>
<pre><code>app = dash.Dash(__name__,use_pages=True,external_stylesheets=[dbc.themes.CERULEAN, dbc.icons.BOOTSTRAP],suppress_callback_exceptions=True)
</code></pre>
<p>This worked fine UNTIL I decided to clear my browser's cache. Everything broke, and I don't know why, nor why the bootstrap components won't 're-cache'. I went from situation 1 to situation 2 just by clearing the cache:
<a href="https://i.sstatic.net/P9b8e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P9b8e.png" alt="enter image description here" /></a></p>
<p>Has anyone ever stumbled upon an issue like this? How can I resolve it? It literally broke by itself.</p>
|
<python><twitter-bootstrap><plotly-dash>
|
2024-03-14 09:46:29
| 1
| 1,013
|
anthino12
|
78,159,360
| 4,564,080
|
by_alias parameter on model_dump() is being ignored
|
<p>In the following code, we see that the field <code>id</code> is indeed created with the alias of <code>user_id</code> when we print the <code>model_fields</code>.</p>
<p>However, when I then call <code>model_dump(by_alias=True)</code>, the returned dict has an <code>id</code> key, but not the <code>user_id</code> key I am expecting.</p>
<p>Is this a bug, or is there something I am missing?</p>
<p>Maybe it has to do with <code>alias_priority=2</code>, but that doesn't seem to be a parameter in SQLModel's <code>Field</code>, only in Pydantic's.</p>
<pre class="lang-py prettyprint-override"><code>from uuid import UUID, uuid4
from sqlmodel import Field, SQLModel
class Temp(SQLModel, table=True):
id: UUID = Field(default_factory=uuid4, primary_key=True, alias="user_id")
t = Temp()
print(t.model_fields)
print(t.model_dump(by_alias=True))
</code></pre>
<p>Result:</p>
<pre><code>{'id': FieldInfo(annotation=UUID, required=False, default_factory=uuid4, alias='user_id', alias_priority=2)}
{'id': UUID('1c8db668-be5c-4942-b494-ef69cbc0ef3a')}
</code></pre>
|
<python><pydantic><sqlmodel>
|
2024-03-14 09:25:33
| 2
| 4,635
|
KOB
|
78,159,066
| 3,390,810
|
avoid repetitive occurence of the same exponent in y axis log scale plot
|
<p>The following code produces the figure</p>
<p><a href="https://i.sstatic.net/Z24ZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z24ZP.png" alt="enter image description here" /></a></p>
<p>where 10^1 occurs too many times, which is unnecessary.
How can I keep only one occurrence of 10^1?</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib.ticker import FuncFormatter
# Generate some data
x = np.linspace(start=11, stop=19, num=20)
y = x
# Create figure and axis
fig, ax = plt.subplots()
# Plot data with log scale
ax.plot(x, y)
ax.set_yscale('log')
# Define custom tick formatter for y-axis
def format_y_tick(value, pos):
if pos == 0:
return f'{value:.0e}'
exponent = int(np.log10(value))
base = value / 10**exponent
if pos == 1 or exponent != format_y_tick.prev_exponent:
format_y_tick.prev_exponent = exponent
return f'{base:.0f}e{exponent}'
else:
return f'{base:.0f}'
format_y_tick.prev_exponent = None
ax.yaxis.set_major_formatter(FuncFormatter(format_y_tick))
# Show plot
plt.show()
</code></pre>
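<p>As a hedged alternative (my own sketch, not taken from the matplotlib docs): the stateful formatter above depends on the order in which ticks are formatted, which is not guaranteed across redraws. Computing labels for all tick values up front, deduplicating the exponent once, avoids the per-call state entirely:</p>

```python
import numpy as np

def dedup_log_labels(ticks):
    """Format log-scale tick values, printing each exponent only once."""
    labels, seen = [], set()
    for v in ticks:
        e = int(np.floor(np.log10(v)))
        m = v / 10 ** e
        if e in seen:
            labels.append(f'{m:g}')       # exponent already shown
        else:
            seen.add(e)
            labels.append(f'{m:g}e{e}')   # first tick in this decade
    return labels

# usage sketch (hypothetical tick values; attach via matplotlib's
# FixedLocator / FixedFormatter):
#   from matplotlib.ticker import FixedLocator, FixedFormatter
#   ticks = [11, 12, 14, 16, 18]
#   ax.yaxis.set_minor_locator(FixedLocator(ticks))
#   ax.yaxis.set_minor_formatter(FixedFormatter(dedup_log_labels(ticks)))
print(dedup_log_labels([11, 12, 14, 16, 18]))
```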
|
<python><matplotlib>
|
2024-03-14 08:33:28
| 1
| 761
|
sunxd
|
78,158,958
| 15,913,281
|
RuntimeError: Event loop is closed When Resending Message
|
<p>I am trying to send two messages, spaced a few seconds apart, using python-telegram-bot. The first message is sent successfully, but on the second attempt I get a "RuntimeError: Event loop is closed" error. The full traceback is below.</p>
<p>I am using python-telegram-bot v21.0.1 and python v3.10.</p>
<p>How do I fix this?</p>
<pre><code>asyncio.run(bot.send_message('-0000', text=alert))
sleep(5)
asyncio.run(bot.send_message('-0000', text=alert))
</code></pre>
<p>Traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\telegram\request\_baserequest.py", line 330, in _request_wrapper
code, payload = await self.do_request(
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\telegram\request\_httpxrequest.py", line 276, in do_request
res = await self._client.request(
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\httpx\_client.py", line 1574, in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\httpx\_client.py", line 1661, in send
response = await self._send_handling_auth(
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\httpx\_client.py", line 1689, in _send_handling_auth
response = await self._send_handling_redirects(
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\httpx\_client.py", line 1726, in _send_handling_redirects
response = await self._send_single_request(request)
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\httpx\_client.py", line 1763, in _send_single_request
response = await transport.handle_async_request(request)
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\httpx\_transports\default.py", line 373, in handle_async_request
resp = await self._pool.handle_async_request(req)
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\httpcore\_async\connection_pool.py", line 216, in handle_async_request
raise exc from None
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\httpcore\_async\connection_pool.py", line 196, in handle_async_request
response = await connection.handle_async_request(
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\httpcore\_async\connection.py", line 101, in handle_async_request
return await self._connection.handle_async_request(request)
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\httpcore\_async\http11.py", line 142, in handle_async_request
await self._response_closed()
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\httpcore\_async\http11.py", line 257, in _response_closed
await self.aclose()
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\httpcore\_async\http11.py", line 265, in aclose
await self._network_stream.aclose()
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\httpcore\_backends\anyio.py", line 54, in aclose
await self._stream.aclose()
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\streams\tls.py", line 202, in aclose
await self.transport_stream.aclose()
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 1191, in aclose
self._transport.close()
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 109, in close
self._loop.call_soon(self._call_connection_lost, None)
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 753, in call_soon
self._check_closed()
File "C:\Users\A\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 515, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
</code></pre>
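<p>For context, one pattern that avoids closing the event loop between the two sends (a sketch of my understanding, not verified against python-telegram-bot itself) is to perform both sends inside a single <code>asyncio.run</code> call, replacing <code>time.sleep</code> with <code>asyncio.sleep</code>:</p>

```python
import asyncio

async def send_twice(send, text, delay=5):
    # Both awaits run inside one event loop, so nothing is closed in between.
    await send(text)
    await asyncio.sleep(delay)   # non-blocking replacement for time.sleep
    await send(text)

# Demo with a stand-in coroutine instead of the real bot.send_message
# (the real call would be: await bot.send_message('-0000', text=alert)).
sent = []
async def fake_send(text):
    sent.append(text)

asyncio.run(send_twice(fake_send, 'alert', delay=0))
print(sent)  # ['alert', 'alert']
```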
|
<python><python-telegram-bot>
|
2024-03-14 08:11:13
| 0
| 471
|
Robsmith
|
78,158,713
| 4,862,162
|
How to host a proper Python Flask server with HTTP, HTTPS and interactive debug shell, all in the same global namespace?
|
<p>In Python Flask, if you run <code>app.run()</code>, typically you get the following message:</p>
<pre><code>WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
</code></pre>
<p>I know how to properly host a Flask server using WSGI and gunicorn. However, how to do that in a more complicated usage case? For example, in the following code,</p>
<pre><code>@app.route("/f_home")
def f_home():
    return doA() if request.url_root.startswith('https') else doB()

threading.Thread(target=lambda: app.run(host='0.0.0.0', port=8000, threaded=True)).start()
threading.Thread(target=lambda: app.run(host='0.0.0.0', port=8001, threaded=True, ssl_context=('cert.pem', 'key.pem'))).start()
import IPython
IPython.embed()
</code></pre>
<p>Firstly, HTTPS and HTTP requests need to be handled differently, e.g. in <code>f_home()</code>. Moreover, the same process hosts both HTTP and HTTPS in separate threads that share one global namespace (some clients connect via HTTP, some via HTTPS, but they all need to be managed together), and it also launches an interactive debugging console that can inspect and debug everything at run time.</p>
|
<python><flask><production-environment>
|
2024-03-14 07:22:03
| 0
| 1,615
|
xuancong84
|
78,158,558
| 4,699,441
|
Type hint for pydantic kwargs?
|
<pre><code>from typing import Any
from datetime import datetime
from pydantic import BaseModel
class Model(BaseModel):
timestamp: datetime
number: int
name: str
def construct(dictionary: Any) -> Model:
return Model(**dictionary)
construct({"timestamp": "2024-03-14T10:00:00Z", "number": 7, "name":"Model"})
</code></pre>
<p>What is the type hint to use in <code>construct</code> instead of <code>Any</code>?</p>
<p>If, for example, I put <code>dict[str, str]</code> as the type I get the errors</p>
<pre><code>Argument 1 to "Model" has incompatible type "**dict[str, str]"; expected "datetime"
Argument 1 to "Model" has incompatible type "**dict[str, str]"; expected "number"
Argument 1 to "Model" has incompatible type "**dict[str, str]"; expected "name"
</code></pre>
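<p>One option I'm aware of (a sketch, assuming the key set is fixed) is a <code>TypedDict</code>, which lets type checkers verify the dict's keys and value types while remaining a plain dict at runtime; whether a checker then accepts <code>**</code>-unpacking it into the <code>Model</code> constructor may depend on the checker and its version:</p>

```python
from typing import TypedDict

class ModelKwargs(TypedDict):
    timestamp: str  # ISO string, parsed by pydantic into datetime
    number: int
    name: str

def construct(d: ModelKwargs) -> dict:
    # at runtime a TypedDict is just a dict; the annotation is for checkers
    return dict(d)

print(construct({"timestamp": "2024-03-14T10:00:00Z", "number": 7, "name": "Model"}))
```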
|
<python><pydantic><pydantic-v2>
|
2024-03-14 06:43:47
| 1
| 1,078
|
user66554
|
78,158,321
| 4,251,338
|
Content grep in Python Regex
|
<p>I need to fetch a section of content from text like the following:</p>
<pre><code>The paragraph continous here....................
................................................
TABLE1..
...........Text continuous...........
......... Text continuous...........
..........Text continuous...........
........Text continuous...........
........Text continuous...........
........Text continuous...........google.co.inFrancisCo.
The paragraph continous here....................
................................................
</code></pre>
<p>The pattern I need to match is "TABLEn ..... google.co.in". I wrote the regex below, but it doesn't match the pattern.</p>
<pre><code>match = re.search(r'TABLE(\d+)((?:(?!google.co.in).)*)google.co.in', extractedtext, re.M)
if match:
ful = match.group()
print(f": {ful}")
# Additional processing with 'ful' if needed
else:
print("Not matched")
</code></pre>
<p>Could someone please help me with this?</p>
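<p>For what it's worth, my understanding is that <code>.</code> does not match newlines unless the <code>re.S</code> (DOTALL) flag is set; <code>re.M</code> only changes how <code>^</code> and <code>$</code> behave. A sketch that spans lines, with the literal dots in the domain escaped, would be:</p>

```python
import re

# trimmed-down version of the sample text above
extractedtext = """The paragraph continous here....
TABLE1..
...........Text continuous...........
........Text continuous...........google.co.inFrancisCo.
The paragraph continous here....
"""

# re.S lets '.' cross newlines; escaping the dots keeps 'google.co.in' literal
match = re.search(r'TABLE(\d+)(.*?)google\.co\.in', extractedtext, re.S)
if match:
    print(match.group())
else:
    print("Not matched")
```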
|
<python><regex>
|
2024-03-14 05:34:07
| 0
| 2,589
|
ssr1012
|
78,158,192
| 3,111,290
|
Scikit-learn import error during Vercel deployment
|
<p>I'm deploying a Flask chatbot backend to Vercel. I'm using scikit-learn (sklearn) to train my model, but it's not required during the chatbot's runtime.</p>
<p>During deployment, I encounter the following error:</p>
<pre><code>LAMBDA_WARNING: Unhandled exception. The most likely cause is an issue in the function code. However, in rare cases, a Lambda runtime update can cause unexpected function behavior. For functions using managed runtimes, runtime updates can be triggered by a function change, or can be applied automatically. To determine if the runtime has been updated, check the runtime version in the INIT_START log entry. If this error correlates with a change in the runtime version, you may be able to mitigate this error by temporarily rolling back to the previous runtime version. For more information, see https://docs.aws.amazon.com/lambda/latest/dg/runtimes-update.html [ERROR]
Runtime.ImportModuleError: Unable to import module 'vc__handler__python': No module named 'sklearn' Traceback (most recent call last):
</code></pre>
<p>I've tried searching for solutions online but haven't found a specific fix for this scenario.</p>
<p>Here's a relevant code snippet from my <code>index.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import joblib
# Load preprocessed data
words = joblib.load('words.pkl')
classes = joblib.load('classes.pkl')
nb_classifier = joblib.load('nb_classifier.joblib')
</code></pre>
<p>My project structure looks like this:</p>
<pre><code>.gitignore
README.md
requirements.txt
vercel.json
api/
classes.pkl
index.py
intents.json
nb_classifier.joblib
words.pkl
</code></pre>
<p>My <code>requirements.txt</code> includes:</p>
<pre><code>Flask==3.0.2
Flask-Cors==4.0.0
joblib==1.3.2
nltk==3.8.1
numpy==1.26.3
wikipedia==1.4.0
</code></pre>
<p>How can I resolve this <code>sklearn</code> import error during Vercel deployment without impacting my chatbot's functionality?</p>
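<p>One thing worth checking (my assumption, since the joblib artifacts were created with scikit-learn): unpickling <code>nb_classifier.joblib</code> needs <code>sklearn</code> importable at load time even if training never runs, so the minimal fix may simply be adding it to <code>requirements.txt</code>:</p>

```
scikit-learn
```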
|
<python><aws-lambda><scikit-learn><vercel>
|
2024-03-14 04:58:23
| 2
| 643
|
ghost21blade
|
78,157,864
| 14,364,775
|
How to optimize the function which uses looping on lists on pandas dataframe?
|
<p>I am using a function on a pandas dataframe as :</p>
<pre><code>import spacy
from collections import Counter
# Load English language model
nlp = spacy.load("en_core_web_sm")
# Function to filter out only nouns from a list of words
def filter_nouns(words):
SYMBOLS = '{}()[].,:;+-*/&|<>=~$1234567890#_%'
filtered_nouns = []
# Preprocess the text by removing symbols and splitting into words
words = [word.translate({ord(SYM): None for SYM in SYMBOLS}).strip() for word in words.split()]
# Process each word and filter only nouns
filtered_nouns = [token.text for token in nlp(" ".join(words)) if token.pos_ == "NOUN"]
return filtered_nouns
# Apply filtering logic to all rows in the 'NOTE' column
df['filtered_nouns'] = df['NOTE'].apply(filter_nouns)
</code></pre>
<p>I have a dataset containing 6400 rows and <code>df['NOTE']</code> is a very long paragraph converted from the Oracle CLOB datatype.</p>
<p>This function is working quickly for 5-10 rows but for 6400 rows, it is taking a very long time.</p>
<p>Are there any ways to optimize this?</p>
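<p>Two things I would try (sketches under my own assumptions, not profiled): build the symbol-stripping translation table once instead of once per word, and feed all notes to spaCy in batches via <code>nlp.pipe</code> rather than calling <code>nlp()</code> separately per row:</p>

```python
SYMBOLS = '{}()[].,:;+-*/&|<>=~$1234567890#_%'
STRIP = {ord(c): None for c in SYMBOLS}   # build the table once, not per word

def clean(text):
    # same preprocessing as above, without rebuilding the dict in the loop
    return ' '.join(w.translate(STRIP).strip() for w in text.split())

# Then batch the spaCy work instead of one nlp() call per row, e.g.:
#   docs = nlp.pipe((clean(t) for t in df['NOTE']), batch_size=64)
#   df['filtered_nouns'] = [[t.text for t in d if t.pos_ == 'NOUN'] for d in docs]

print(clean('a1, b2'))  # prints: a b
```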
|
<python><python-3.x><pandas><list><nlp>
|
2024-03-14 02:56:32
| 2
| 1,018
|
Rikky Bhai
|
78,157,777
| 5,228,070
|
How to package and deploy AWS python lambda functions automatically
|
<p>I have created a AWS python lambda using some modules like <strong>kafka</strong>, <strong>numpy</strong>, <strong>boto3</strong> etc.</p>
<p><strong>Boto3</strong> is already provided by AWS environment. For <strong>Numpy</strong>, I am using AWS predefined layer.</p>
<p>After deploying it as a <code>.zip</code> file with kafka and the other modules, the size is around <code>14 MB</code>, and the AWS UI won't show the code, saying</p>
<blockquote>
<p>The deployment package of your Lambda function "ABC" is too large to enable inline code editing. However, you can still invoke your function.</p>
</blockquote>
<p>The process I follow whenever I am making any change is -</p>
<ol>
<li>Import any new module needed</li>
<li>pip install that module to the existing directory where .py resides</li>
<li>Zip the directory</li>
<li>Upload the zip to S3</li>
<li>Point the Lambda to use this latest zip file from S3</li>
</ol>
<p>This process is tedious, and there should be an easier and better way, so that other developers can also contribute enhancements easily.</p>
<p>I am also trying to understand what to commit to git: is it just the <code>.py</code> file, and if so, should I also commit a <code>requirements.txt</code> so that others can easily start testing in their own environments?</p>
<p>Do I need to create any pipeline to make the whole process automated and much more developer friendly for local testing ?</p>
<p>Are there any tools that make this process more simpler? Please guide.</p>
|
<python><amazon-web-services><git><aws-lambda><deployment>
|
2024-03-14 02:25:45
| 2
| 549
|
santhosh
|
78,157,708
| 117,870
|
How to use numpy.argmax to extract values from three-dimensional array
|
<p>Given a three-dimensional numpy array, the index of the maximum value across the first dimension (axis 0) can be calculated using <code>numpy.argmax</code>.</p>
<p>How do I use the result of <code>argmax</code> to extract the said maximum values from another array with a similar shape?</p>
<p>For example, given the following:</p>
<pre><code>import numpy as np
ndarr1 = np.round(np.random.rand(4, 2, 3), 2)
ndarr2 = np.round(np.random.rand(4, 2, 3), 2)
argmax1 = np.argmax(ndarr1, axis=0)
ndarr1
# array([[[0.89, 0.79, 0.64],
# [0.03, 0.53, 0.1 ]],
#
# [[0.21, 0.76, 0.99],
# [0.47, 0.08, 0.48]],
#
# [[0.67, 0.94, 0.99],
# [0.98, 0.75, 0.59]],
#
# [[0.09, 0.96, 0.98],
# [0.43, 0.98, 0.71]]])
argmax1
# array([[0, 3, 1],
# [2, 3, 3]], dtype=int64)
ndarr2
# array([[[0.79, 0.72, 0.82],
# [0.25, 0.7 , 0.56]],
#
# [[0.46, 0.11, 0.31],
# [0.55, 0.76, 0.13]],
#
# [[0.09, 0.23, 0.35],
# [0.3 , 0.42, 0.06]],
#
# [[0.24, 0.1 , 0.92],
# [0.82, 0.52, 0.7 ]]])
</code></pre>
<p>What function do I call to obtain the following array derived from retrieving items using <code>argmax1</code> from <code>ndarr2</code>:</p>
<pre><code># array([[0.79, 0.1 , 0.31],
# [0.3 , 0.52, 0.7 ]])
</code></pre>
<p>Note that I cannot use <code>numpy.amax</code> because I need to use the <code>argmax</code> results to extract values from a different array which has the same shape as <code>ndarr1</code>.</p>
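<p>A sketch of what I believe does this, using <code>np.take_along_axis</code> (the argmax indices need a length-1 leading axis so they broadcast along axis 0):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
ndarr1 = rng.random((4, 2, 3))
ndarr2 = rng.random((4, 2, 3))
argmax1 = np.argmax(ndarr1, axis=0)                      # shape (2, 3)

# expand the indices to (1, 2, 3) so they align with axis 0, then squeeze
picked = np.take_along_axis(ndarr2, argmax1[np.newaxis], axis=0)[0]
print(picked.shape)  # (2, 3)

# sanity check: picking from ndarr1 itself recovers its axis-0 maxima
assert np.array_equal(
    np.take_along_axis(ndarr1, argmax1[np.newaxis], axis=0)[0],
    ndarr1.max(axis=0),
)
```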
|
<python><arrays><numpy>
|
2024-03-14 02:01:18
| 2
| 12,673
|
Alex Essilfie
|
78,157,548
| 5,008,610
|
Calculate exponential complex sum with fft instead of summation to simulate diffraction?
|
<h2>Context</h2>
<p>I am trying to understand x-ray diffraction a little better by coding it up in python. For a collection of points with positions R_i, the Debye formula goes</p>
<p><a href="https://i.sstatic.net/HOT7x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HOT7x.png" alt="enter image description here" /></a></p>
<p>where the i in the exponential is the imaginary unit, all other i's are indices, and for now <code>b_i = b_j = 1</code> for simplicity.</p>
<p>Now I tried explicitly calculating this sum for a collection of points whose coordinates I have:
<a href="https://i.sstatic.net/92pqO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/92pqO.png" alt="enter image description here" /></a></p>
<pre><code>import numpy as np
# set up grid
dims = 2
side = 30
points = np.power(side, dims)
coords = np.zeros((dims, points))
xc, yc = np.meshgrid(np.arange(side), np.arange(side))
coords[0, :] = xc.reshape((points))
coords[1, :] = yc.reshape((points))
# calculate diffraction
xdist = np.subtract.outer(coords[0], coords[0])
ydist = np.subtract.outer(coords[1], coords[1])
rdist = np.stack((xdist, ydist))
rdist = rdist.reshape(2, rdist.shape[1]*rdist.shape[2])
qs = 200
qspace = np.stack((np.linspace(-2, 8, qs), np.zeros(qs)))
diffrac = np.sum(np.exp(-1j * np.tensordot(qspace.T, rdist, axes=1)), axis=1)
</code></pre>
<p>Which gave me the following after a couple seconds</p>
<p><a href="https://i.sstatic.net/UAnaa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UAnaa.png" alt="enter image description here" /></a></p>
<p>This looks as expected (periodicity of 2 pi, as the dots have spacing 1). It also makes sense that this takes some time: for 900 points, 810000 distances have to be calculated. I don't use loops so I think the code is not that bad in terms of efficiency, but just the fact that I'm calculating this sum manually seems inherently slow.</p>
<h2>Thoughts</h2>
<p>Now it looks as if things would speed up greatly if I could use a discrete fast fourier transform for this - given the shape of the sum. However:</p>
<ul>
<li>for the discrete fourier transform, I would still need to 'pixelate' the image (as far as I understand), to include a lot of empty space in between the points in my signal. As if I were to transform the pixels of the first image I shared. This also seems less efficient (e.g. because of the sampling).</li>
<li>I would like to move points around afterwards, so the fact that the first image is a grid and thus sampled regularly is not particularly helpful. It looks as if non-uniform fourier transformations could help me, but still that would require me to 'pixelate' the image and set some values to 0.</li>
</ul>
<h2>Question</h2>
<p>Is there a way to use FFT (or another method) to calculate the sum faster, starting from a list of np.array coordinates (x,y)? (of dirac delta functions, if you want so...).</p>
<p>Specifically, pointers to relevant mathematical techniques/python functions/python packages would be appreciated. I'm not that familiar with using Fourier transforms for actual applications, and most of the material I find online seems irrelevant. So probably I'm looking in the wrong direction, or something's lacking in my understanding. All help is appreciated!</p>
<p>(the first image is a screenshot from <a href="https://www.ill.eu/fileadmin/user_upload/ILL/6_Careers/1_All_our_vacancies/PhD_recruitment/Student_Seminars/2017/19-2017-05-09_Fischer_Cookies.pdf" rel="nofollow noreferrer">https://www.ill.eu/fileadmin/user_upload/ILL/6_Careers/1_All_our_vacancies/PhD_recruitment/Student_Seminars/2017/19-2017-05-09_Fischer_Cookies.pdf</a>, as it seems there's no math notation on SO, or I did not find it)</p>
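<p>As a rough illustration of the FFT direction the question asks about (a sketch, not from the question): binning the points onto a pixel grid with <code>np.histogram2d</code> and taking a 2-D FFT gives the intensity on a regular q grid, at the cost of exactly the sampling the question mentions. The grid size and bin count below are arbitrary assumptions:</p>

```python
import numpy as np

# same 30x30 grid of points as in the question
side = 30
xc, yc = np.meshgrid(np.arange(side), np.arange(side))
coords = np.stack((xc.ravel(), yc.ravel())).astype(float)

# bin points into a density grid; finer bins give better q resolution
grid_size = 256
density, _, _ = np.histogram2d(coords[0], coords[1],
                               bins=grid_size, range=[[0, side], [0, side]])

# |FFT| of the density is the diffraction intensity on a regular q grid
intensity = np.abs(np.fft.fftshift(np.fft.fft2(density)))
q = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(grid_size, d=side / grid_size))
```

<p>For points that are moved off the grid afterwards, a non-uniform FFT library (e.g. finufft) computes the same sum without the binning step, which is probably the cleaner fit here.</p>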
|
<python><numpy><fft><physics><discrete>
|
2024-03-14 00:48:21
| 1
| 335
|
andwerb
|
78,157,421
| 900,898
|
How to get original raw CSV row from file
|
<p>For example I have a CSV file like this:</p>
<pre><code>a,b,c
a1,b1,c1
</code></pre>
<p>I want to get parsed data and the original raw CSV line. For example:</p>
<pre><code>import csv
with open('some.csv') as f:
reader = csv.reader(f)
for row in reader:
# getting original raw csv line
print(row)
print(origin_row)
# should print something like this:
# ["a", "b", "c"]
# a,b,c
</code></pre>
<p>The CSV data could contain anything. I need this to store parsed lines and raw lines for verification / validation / etc., so I need both the parsed data and the raw data for each row / entity.</p>
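<p>One way to sketch this (not a built-in csv feature): <code>csv.reader</code> accepts any iterator of strings, so a small wrapper can record exactly the physical lines the parser consumed for each row, which also handles quoted fields that span multiple lines:</p>

```python
import csv

class LineRecorder:
    """Wrap a line iterator and remember the lines csv.reader pulled."""
    def __init__(self, lines):
        self._it = iter(lines)
        self.consumed = []

    def __iter__(self):
        return self

    def __next__(self):
        line = next(self._it)
        self.consumed.append(line)
        return line

recorder = LineRecorder(["a,b,c\n", 'a1,"b,1",c1\n'])  # stands in for open('some.csv')
for row in csv.reader(recorder):
    raw = "".join(recorder.consumed)  # raw text of this record (can span lines)
    recorder.consumed = []
    print(row, repr(raw))
```

<p>In the real script, pass <code>LineRecorder(f)</code> instead of <code>f</code> to the reader.</p>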
|
<python><csv>
|
2024-03-13 23:56:18
| 2
| 548
|
ZigZag
|
78,157,381
| 5,594,008
|
Wagtail and Elasticsearch , Lookup "icontains"" not recognised
|
<p>I'm trying to run a search with Wagtail (5.2) and Elastic (7)</p>
<p>When I make a search for Users <code>wagtail_admin/users/?q=ffff</code> I got such error</p>
<pre><code>FilterFieldError
Cannot filter search results with field "email". Please add index.FilterField('email') to User.search_fields
</code></pre>
<p>Then I add extra field to search fields in the code</p>
<pre><code>class User:
search_fields = [
index.SearchField("name", partial_match=True),
index.FilterField("email", partial_match=True),
]
</code></pre>
<p>But just got another error</p>
<pre><code>FilterError /wagtail_admin/users/
Could not apply filter on search results: "email__icontains = ffff". Lookup "icontains"" not recognised.
</code></pre>
<p>How can it be fixed?</p>
|
<python><django><elasticsearch><wagtail>
|
2024-03-13 23:38:21
| 1
| 2,352
|
Headmaster
|
78,157,376
| 172,277
|
Indexing issue when manipulating Series of boolean
|
<p>I am having what I think is an indexing problem with my DataFrame filtering.</p>
<p>I have logic that applies different masks to a DataFrame; instead of restricting the DataFrame directly, I build up my mask according to custom logic.</p>
<pre class="lang-py prettyprint-override"><code># df is the input (it has already been filtered, which I think messed with its indexing)
from pandas import Series
mask = Series([True] * df.shape[0])
if some_filter is not None:
col_mask = df['aCol'] == some_filter
mask = mask & col_mask
</code></pre>
<p>Here if I a look at:</p>
<pre><code>mask.shape
col_mask.shape
</code></pre>
<p>They are identical before the last line.</p>
<p>After this last line, <code>mask.shape</code> reports almost double the number of rows. I think this is because Series are indexed and the indexes don't match, so the boolean operation fills in the blanks.</p>
<p>I can think of a few workarounds, but I would like to know the proper way to do or approach this.</p>
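<p>A minimal reproduction of the effect described (a sketch with assumed data): a fresh <code>Series([True] * n)</code> gets a default 0..n-1 index, so <code>&amp;</code> aligns by label against the filtered frame's index and the result grows to the union of both indexes. Building the mask on the frame's own index avoids this:</p>

```python
import pandas as pd

# a frame whose index is no longer the default 0..n-1 (e.g. after filtering)
df = pd.DataFrame({'aCol': ['x', 'y', 'x']}, index=[10, 11, 12])

# misaligned: mask has index 0..2, col_mask has 10..12 -> union of 6 labels
bad = pd.Series([True] * df.shape[0]) & (df['aCol'] == 'x')

# aligned: seed the mask with the frame's index instead
mask = pd.Series(True, index=df.index)
mask = mask & (df['aCol'] == 'x')
```

<p>Calling <code>df.reset_index(drop=True)</code> up front would work too, at the cost of losing the original labels.</p>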
|
<python><pandas><dataframe>
|
2024-03-13 23:37:28
| 1
| 7,591
|
AsTeR
|
78,157,363
| 2,236,794
|
Pydantic is not applying the default value
|
<p>I have the following schema. If I input 'enabled' or 'disabled' for 'location', validation works fine, and a random string fails (which is the way it should work). The problem is when 'location' is an empty string: this still fails. I would like it to set the default value and pass validation. I have tried adding the validator function below, but execution never even reaches it. I automatically get</p>
<pre><code>Input should be 'disabled' or 'enabled' [type=literal_error, input_value='', input_type=str]"
</code></pre>
<p>I have added the default value and it is still failing. Any idea what I am missing?</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Field, validator
from typing_extensions import Annotated, Literal
ENABLED_DISABLED = Literal["disabled", "enabled"]
class GlobalSchema(BaseModel):
location: Annotated[ENABLED_DISABLED, Field(description="Location")] = "disabled"
# @validator('location')
# def validate_location(cls, value):
# if value not in ENABLED_DISABLED:
# raise ValueError(f'Invalid value for location. Must be one of: {", ".join(ENABLED_DISABLED)}')
# return value or "disabled"
</code></pre>
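<p>For context, a sketch of the usual pydantic v2 approach (assuming pydantic v2, given the error format shown): the commented-out <code>@validator</code> runs <em>after</em> the <code>Literal</code> check, so it never fires; a <code>mode="before"</code> validator runs first and can map the empty string to the default:</p>

```python
from typing import Literal
from pydantic import BaseModel, field_validator

class GlobalSchema(BaseModel):
    location: Literal["disabled", "enabled"] = "disabled"

    @field_validator("location", mode="before")
    @classmethod
    def _empty_to_default(cls, value):
        # runs before the Literal check, so "" can be coerced to the default
        if value is None or value == "":
            return "disabled"
        return value

print(GlobalSchema(location="").location)
```
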
|
<python><pydantic>
|
2024-03-13 23:33:39
| 2
| 561
|
user2236794
|
78,157,311
| 57,952
|
Bunnet/Beanie odm: replace_one with upsert
|
<p>What would be the equivalent of <code>.replace_one({"key": key}, doc, upsert=True)</code> in bunnet/beanie odm?</p>
|
<python><mongodb><odm><beanie>
|
2024-03-13 23:20:40
| 1
| 30,766
|
Udi
|
78,157,289
| 644,326
|
How to augment dataset by adding rows via huggingface datasets?
|
<p>I have a dataset with 113287 train rows. Each 'caption' field is however an array with multiple strings. I would like to flatmap this array and add new rows.</p>
<p>The documentation for datasets states that the <a href="https://huggingface.co/docs/datasets/about_map_batch#map" rel="nofollow noreferrer">batch mapping feature</a> may be used to achieve this:</p>
<blockquote>
<p>This means you can concatenate your examples, divide it up, and even add more examples!</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>from datasets import load_dataset
dataset_name = "Jotschi/coco-karpathy-opus-de"
coco_dataset = load_dataset(dataset_name)
def chunk_examples(entry):
captions = [caption for caption in entry["caption"][0]]
return {"caption": captions}
print(coco_dataset)
chunked_dataset = coco_dataset.map(chunk_examples, batched=True, num_proc=4,
remove_columns=["image_id", "caption", "image"])
print(chunked_dataset)
print(len(chunked_dataset['train']))
</code></pre>
<pre><code>DatasetDict({
train: Dataset({
features: ['caption', 'image_id', 'image'],
num_rows: 113287
})
validation: Dataset({
features: ['caption', 'image_id', 'image'],
num_rows: 5000
})
test: Dataset({
features: ['caption', 'image_id', 'image'],
num_rows: 5000
})
})
DatasetDict({
train: Dataset({
features: ['caption'],
num_rows: 464
})
validation: Dataset({
features: ['caption'],
num_rows: 40
})
test: Dataset({
features: ['caption'],
num_rows: 40
})
})
464
</code></pre>
<p>The problem that I'm having is that the resulting dataset does not contain the expected amount of rows.</p>
<p>It states <code>num_rows: 464</code>. I suspect 464 is the number of batches rather than the number of flattened rows. How can I normalize this back into a "regular" dataset? Is there something wrong with my mapping function?</p>
<ul>
<li>datasets==2.18.0</li>
</ul>
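<p>For reference, the likely culprit is <code>entry["caption"][0]</code>: in a batched map, <code>entry["caption"]</code> is a list with one caption-list per example, so indexing <code>[0]</code> keeps only the first example of each batch. A sketch of a flattening function (assuming each 'caption' entry is a list of strings), checkable on a plain dict without the datasets library:</p>

```python
def chunk_examples(batch):
    # batch["caption"] holds one caption-list per example in the batch;
    # flatten across *all* of them, not just batch["caption"][0]
    return {"caption": [c for captions in batch["caption"] for c in captions]}

batch = {"caption": [["a cat", "two cats"], ["a dog"]]}
print(chunk_examples(batch))  # {'caption': ['a cat', 'two cats', 'a dog']}
```
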
|
<python><huggingface-datasets>
|
2024-03-13 23:11:14
| 1
| 3,682
|
Jotschi
|
78,157,232
| 14,250,641
|
Efficient DataFrame Grouping for Condensing Rows Based on Multiple Criteria
|
<p>I'm aiming to group the rows based on the 'Chromosome', 'Start', and 'End' columns and then condense the corresponding 'Start1', 'End1', and 'main_category' columns into lists. Then I want to do the same thing with the 'Chromosome', 'Start1', and 'End1' columns. Basically, there should be no duplicates across the 'Chromosome', 'Start', 'End' columns OR the 'Chromosome', 'Start1', 'End1', 'main_category' columns. Here's a sample input/output:</p>
<p>Here is a subset of my dataset:</p>
<pre><code>Chromosome Start End Start1 End1 main_category
chr1 2584125 2584533 2584094 2584437 Enhancer
chr1 2584125 2584533 2584200 2584401 Promoter
chr1 3069168 3069296 3066400 3074201 Promoter
chr1 3069168 3069296 3069019 3069238 Promoter
chr1 3069168 3069296 3069272 3069608 Enhancer
chr1 3186125 3186474 3186069 3186414 Enhancer
chr1 3244087 3244137 3244018 3244334 Enhancer
chr1 3244555 3244666 3244660 3244666 Promoter
chr1 3244755 3244966 3244660 3244666 Promoter
</code></pre>
<pre><code>Chromosome Start End Start1 End1 main_category
chr1 2584125 2584533 [2584094,2584200] [2584437,2584401] [Enhancer,Promoter]
chr1 3069168 3069296 [3066400,3069019,3069272][3074201,3069238,3069608][Promoter,Promoter,Enhancer]
chr1 3186125 3186474 3186069 3186414 Enhancer
chr1 3244087 3244137 3244018 3244334 Enhancer
chr1 [3244555,3244755] [3244666,3244966] 3244660 3244666 Promoter
</code></pre>
<p>I tried this code, but it doesn't work. It fills the dataset with NaNs and expands the dataset a lot:</p>
<pre><code>condensed_df = df.groupby(['Chromosome', 'Start', 'End']).agg(
{
'main_category': lambda x: ', '.join(map(str, x)),
'Start1': lambda x: ', '.join(map(str, x)),
'End1': lambda x: ', '.join(map(str, x))
    }
)
</code></pre>
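<p>Setting aside the missing closing parenthesis, the first half of the requested output can be sketched with a plain list aggregation (assumed minimal data below; the second, 'Start1'-keyed pass would be the same groupby on the other columns):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Chromosome': ['chr1'] * 3,
    'Start': [2584125, 2584125, 3186125],
    'End': [2584533, 2584533, 3186474],
    'Start1': [2584094, 2584200, 3186069],
    'End1': [2584437, 2584401, 3186414],
    'main_category': ['Enhancer', 'Promoter', 'Enhancer'],
})

# collect the non-key columns into lists per (Chromosome, Start, End) group
condensed = (df.groupby(['Chromosome', 'Start', 'End'], as_index=False)
               .agg({'Start1': list, 'End1': list, 'main_category': list}))
```
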
|
<python><pandas><dataframe><numpy><group-by>
|
2024-03-13 22:54:09
| 1
| 514
|
youtube
|
78,157,178
| 12,705,481
|
Is it possible to prevent env variables from ever being printed to stdout in python?
|
<p>In my python app, secrets like api keys and db passwords are stored in the env vars of the machine running the python app (basically, ECS).</p>
<p>I am looking for a way to ensure that a dev on the team could <em>never</em> see those secrets. However, currently a simple <code>password = os.environ.get("db_password")</code> and <code>print(password)</code> would foil my security plans. It just leaves the risk high that a dev at any stage prints a secret, and that's persisted somewhere in logs for all time.</p>
<p>I have seen interesting interactions before, like in Git runners where I tried using bash to print Git secrets, and they printed as <code>"*******"</code>. I am wondering if there is a way to recreate this in python?</p>
<p><em>Best idea for a solution so far: overwrite the <code>sys.stdout.write</code> method so that stdout messages always replace certain strings with <code>"*******"</code>. This would be exploitable, though, because a dev could write a script logging many password attempts until they discover the secret value. I'm also not sure how I could prevent a dev from editing the Python code where such an overwrite is written.</em></p>
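<p>That idea can be sketched as a small proxy stream (demonstrated on a <code>StringIO</code>; the hook point in an app would be <code>sys.stdout = MaskingWriter(sys.stdout, secrets)</code>). As the question itself notes, this is best-effort redaction, not a security boundary:</p>

```python
import io

class MaskingWriter:
    """Proxy that redacts known secret strings before they reach the stream."""
    def __init__(self, stream, secrets):
        self._stream = stream
        self._secrets = secrets

    def write(self, text):
        for secret in self._secrets:
            text = text.replace(secret, "*******")
        return self._stream.write(text)

    def flush(self):
        self._stream.flush()

buf = io.StringIO()
writer = MaskingWriter(buf, {"s3cr3t-key"})
print("db password: s3cr3t-key", file=writer)
print(buf.getvalue())  # db password: *******
```
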
|
<python><security>
|
2024-03-13 22:38:43
| 0
| 2,628
|
Alan
|
78,157,143
| 547,231
|
How do I cast a raw pointer to a pytorch tensor of a specific shape?
|
<p>I get a raw pointer from a C++ library which I would like to interpret (in a <code>reinterpret_cast</code>-like fashion) as a pytorch tensor of a specific shape. Since the code is executed in a performance-critical section, I really want to make sure that no heap allocations and/or copy operations are performed.</p>
<p>Here is what I got right now:</p>
<pre><code>def as_tensor(pointer, shape):
return torch.from_numpy(numpy.array(numpy.ctypeslib.as_array(pointer, shape = shape)))
shape = (2, 3, 4)
x = torch.zeros(shape)
p = ctypes.cast(x.data_ptr(), ctypes.POINTER(ctypes.c_float))
y = as_tensor(p, shape)
</code></pre>
<p>Is it really necessary to cast to a numpy array before? And I'm also not 100% sure if the call to <code>numpy.array(...)</code> doesn't copy the content of what the <code>as_array()</code> call is pointing to.</p>
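<p>For what it's worth, the extra <code>numpy.array(...)</code> call does copy; <code>np.ctypeslib.as_array</code> on a ctypes pointer is already a zero-copy view, and <code>torch.from_numpy</code> shares that memory too. A sketch showing the sharing (the torch call is left as a comment so only numpy is needed to verify it):</p>

```python
import ctypes
import numpy as np

shape = (2, 3, 4)
backing = np.zeros(shape, dtype=np.float32)            # stands in for the C++ buffer
ptr = ctypes.cast(backing.ctypes.data, ctypes.POINTER(ctypes.c_float))

view = np.ctypeslib.as_array(ptr, shape=shape)         # zero-copy view
# tensor = torch.from_numpy(view)                      # also zero-copy

view[0, 0, 0] = 7.0
print(backing[0, 0, 0])  # 7.0, same memory, nothing was copied
```
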
|
<python><pytorch>
|
2024-03-13 22:26:00
| 1
| 18,343
|
0xbadf00d
|
78,157,029
| 24,108
|
Pandas: Rolling sum of a counter that resets
|
<p>When collecting network traffic stats from a server it comes in as an increasing counter that resets at a certain point. Lets say the data points for a certain time range look like</p>
<pre><code>7
15
22
29 <--- reset happens next
2
5
7
20
25 <--- reset happens again
3
7
</code></pre>
<p>The total should be (29-7) + 25 + 7 = 54.</p>
<p>The actual data I have includes a hostname, e.g.</p>
<pre><code>host1 5
host2 19
host1 7
host2 29
host1 9
host2 3
</code></pre>
<p>If I have that data in a pandas dataframe, how can I build the rolling sum and take the counter reset into consideration?</p>
<p>FWIW the counter resets around 10^12 so I'm not concerned about the lost data between reset and the next measurement point.</p>
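<p>One sketch of the reset-aware sum (assumed column names): take per-host differences, and where a difference is negative, treat the post-reset value itself as the increment, per the question's assumption that little is lost between reset and the next sample:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'host': ['host1'] * 11,
    'value': [7, 15, 22, 29, 2, 5, 7, 20, 25, 3, 7],
})

def consumed(s):
    # positive deltas accumulate; a negative delta means the counter
    # reset, so count the post-reset value itself as the increment
    inc = s.diff().fillna(0)
    return inc.where(inc >= 0, s).sum()

totals = df.groupby('host')['value'].apply(consumed)
print(totals['host1'])  # 54.0, matching the hand calculation above
```

<p>For a true rolling sum rather than a total, the same masked-increment series can be fed to <code>.cumsum()</code> or <code>.rolling(...).sum()</code>.</p>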
|
<python><pandas>
|
2024-03-13 21:57:37
| 1
| 15,040
|
John Oxley
|
78,156,967
| 9,290,374
|
Python: google.api_core.exceptions.Forbidden: 403 Access Denied: BigQuery BigQuery: Permission denied while getting Drive credentials
|
<p>Using a service account & it's generated json key file, I'm trying to query a BigQuery table that is connected to an external Google Sheet. The service account has editor/viewer access and I've tried to enable the scope for Drive APIs as I've seen in other questions related to this one. The Google Drive API is also enabled in IAM for the project. Keep getting error: <code>google.api_core.exceptions.Forbidden: 403 Access Denied: BigQuery BigQuery: Permission denied while getting Drive credentials.</code></p>
<p><strong>CODE</strong></p>
<pre><code>from google.cloud import bigquery
from google.oauth2 import service_account
import pandas as pd
# Initialize BigQuery client
credentials = service_account.Credentials.from_service_account_file("service_account_file.json")
client = bigquery.Client(credentials=credentials)
gbq_staging_table = "Table ID" # dataset.table
query = f"""
SELECT * FROM `{gbq_staging_table}`
"""
staging_df = client.query(query).to_dataframe()
</code></pre>
<p><strong>service_account_file.json</strong></p>
<pre><code>{
"type": "service_account",
"project_id": "projectid",
"private_key_id": "xxxx",
"private_key": "-----BEGIN PRIVATE KEY-----",
"client_email": "serviceaccount.iam.gserviceaccount.com",
"client_id": "xxxxx",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/xxxxx",
"scopes": ["https://www.googleapis.com/auth/drive","https://www.googleapis.com/auth/cloud-platform","https://www.googleapis.com/auth/bigquery"]
}
</code></pre>
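<p>One likely cause (an assumption, not confirmed by the question): a <code>"scopes"</code> key inside the JSON key file has no effect; scopes must be passed when the credentials object is built, and the Sheet must additionally be shared with the service account's email. A configuration sketch:</p>

```python
from google.cloud import bigquery
from google.oauth2 import service_account

# scopes belong on the credentials object, not in the key file
credentials = service_account.Credentials.from_service_account_file(
    "service_account_file.json",
    scopes=[
        "https://www.googleapis.com/auth/drive",
        "https://www.googleapis.com/auth/bigquery",
        "https://www.googleapis.com/auth/cloud-platform",
    ],
)
client = bigquery.Client(credentials=credentials, project=credentials.project_id)
```
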
|
<python><google-bigquery>
|
2024-03-13 21:43:06
| 1
| 490
|
hSin
|
78,156,921
| 18,814,386
|
Comparing two data frames with condition and removing all that not qualified
|
<p>I have two data frames. I have generated short sample data to explain what I am looking for; any suggestion or help is appreciated.</p>
<p><code>df = pd.DataFrame({'policy number':[11,22,33,44,55,66,77,88,99], ' policy status':['good', 'good', 'good', 'good', 'good','good', 'good', 'good', 'good']})</code></p>
<p><code>df_2 = pd.DataFrame({'policy number':[11,83,63,44,55,66,67,88,99,100], 'policy status':['bad','bad', 'good', 'good', 'bad', 'good','bad', 'good', 'average', 'good']})</code></p>
<p>I want to compare the two data frames by policy number: if a policy's status is still 'good', I want to keep it; otherwise I want to remove it from my first data frame.</p>
<p>Is there an easier way to do this? I have tried iterating over the rows of the two data frames and comparing them, but this takes a lot of time, since I have bigger datasets.</p>
<p>Thanks in advance!</p>
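<p>A vectorized sketch of the idea (note the sample's <code>' policy status'</code> column name has a leading space; clean names are assumed here): collect the policy numbers that are still 'good' in the second frame and keep only those via <code>isin</code>, with no row-by-row iteration:</p>

```python
import pandas as pd

df = pd.DataFrame({'policy number': [11, 22, 33, 44, 55, 66, 77, 88, 99],
                   'policy status': ['good'] * 9})
df_2 = pd.DataFrame({'policy number': [11, 83, 63, 44, 55, 66, 67, 88, 99, 100],
                     'policy status': ['bad', 'bad', 'good', 'good', 'bad',
                                       'good', 'bad', 'good', 'average', 'good']})

# policy numbers that are still good in the second frame
still_good = df_2.loc[df_2['policy status'] == 'good', 'policy number']

# keep only those rows of the first frame
result = df[df['policy number'].isin(still_good)]
```
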
|
<python><pandas><dataframe><function><compare>
|
2024-03-13 21:31:39
| 1
| 394
|
Ranger
|
78,156,891
| 1,608,327
|
What is messing with the logging in AsyncWebsocketConsumer?
|
<p>I have a Django project that's using Channels for websocket communication and I've stumbled upon what I think might be a bug, but I'm not certain so I'm hoping someone that understands this better than I do can help explain what's happening here:</p>
<p>Test error message:</p>
<pre><code>AssertionError: "INFO:websocket.camera_consumer:Doesn't work" not found in ['INFO:websocket.camera_consumer:Works']
</code></pre>
<p>Test that's failing:</p>
<pre class="lang-py prettyprint-override"><code>class TestCameraConsumerTests(TransactionTestCase):
async def test_fails(self):
communicator = WebsocketCommunicator(TestConsumer.as_asgi(), '/ws/device')
with self.assertLogs(f'websocket.camera_consumer', level=logging.INFO) as logs:
await communicator.connect(timeout=10)
await communicator.disconnect()
self.assertIn(f"INFO:websocket.camera_consumer:Doesn't work", logs.output)
</code></pre>
<p><code>websocket.camera_consumer.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import logging
from channels.generic.websocket import AsyncWebsocketConsumer
def sync_function():
pass
class TestConsumer(AsyncWebsocketConsumer):
async def connect(self):
await super().connect()
logger = logging.getLogger(__name__)
logger.info("Works")
await sync_to_async(sync_function)()
logger.info("Doesn't work")
</code></pre>
<p>What I've learned so far:</p>
<ul>
<li>Moving the getLogger call outside the <code>connect</code> function doesn't fix it</li>
<li>Moving the <code>super</code> call below the <code>sync_to_async</code> call <strong>does</strong> fix it</li>
<li>Removing the <code>sync_to_async</code> function (and making <code>sync_function</code> async) <strong>also</strong> fixes it</li>
<li>Somewhere along the way in the <code>sync_to_async</code> function call, the logger handler object that the <code>assertLogs</code> injects gets removed so the first log message gets routed to the test, but the second log message does not</li>
<li>I've attempted to step-though the sync_to_async call, however I can't figure out how to monitor the logger object while inside the async loop (I'm still relatively new-ish to async code so not super well versed in how it works under the hood).</li>
</ul>
|
<python><django><django-channels>
|
2024-03-13 21:23:53
| 1
| 816
|
Kenny Loveall
|
78,156,811
| 459,745
|
How do I isort using ruff?
|
<p>I often work in very small projects which do not have config file. How do I use <code>ruff</code> in place of <code>isort</code> to sort the imports? I know that the following command is roughly equivalent to <code>black</code>:</p>
<pre class="lang-bash prettyprint-override"><code>ruff format .
</code></pre>
<p>The format command does not sort the imports. How do I do that?</p>
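<p>For reference, import sorting lives in ruff's lint rules (the isort-derived "I" rule family), not in the formatter, so the usual invocation is:</p>

```shell
# select only the isort rules and apply their fixes
ruff check --select I --fix .
```

<p>In a project with a config file, the equivalent would be <code>select = ["I"]</code> under <code>[tool.ruff.lint]</code>, but the flag form works without any config.</p>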
|
<python><isort><ruff>
|
2024-03-13 21:05:50
| 1
| 41,381
|
Hai Vu
|
78,156,805
| 10,237,558
|
Returning data from a UDF to Snowflake in a Snowflake Native App
|
<p>I have a python udf that performs a http request and stores the output in a list and returns this output to snowflake. The http request returns a name for every value sent by it. Here is the code snippet of this function <code>test</code>:</p>
<pre><code># val = '["abc","etc", "dif", "tef"]' what is coming to this function from snowflake
values=json.loads(val)
op=[]
for value in values:
body = {
"parameters": [{ "token": value }]
}
try:
session = requests.Session()
response = session.post(url, json=body, headers=headers)
response.raise_for_status()
response_as_json = json.loads(response.text)
op.append(response_as_json["records"][0]["value"])
except Exception as e:
print(f"Error processing value {value}: {e}")
# print(op) Gives me output as ['John', 'Tom', 'Jimmy', 'Harry']
return op
</code></pre>
<p>I have done this for this function:</p>
<pre><code>CREATE OR REPLACE FUNCTION code_schema.test(val string)
RETURNS variant
LANGUAGE python
runtime_version = '3.8'
packages = ('snowflake-snowpark-python', 'requests', 'simplejson')
imports = ('/src/udf.py')
handler = 'udf.test';
GRANT USAGE ON FUNCTION code_schema.test(string) TO APPLICATION ROLE app_public;
</code></pre>
<p>In Snowflake -</p>
<pre><code>create table shopper (first_name string);
insert into shopper (first_name) values ('abc');
select * from shopper;
set tokens = (SELECT to_json(array_agg(first_name)) from shopper);
select $tokens;
</code></pre>
<p>which gives tokens as:</p>
<pre><code>["abc","etc", "dif", "tef"]
</code></pre>
<p>And I call this snowflake native app udf like</p>
<pre><code>SELECT app.code_schema.test($tokens) AS name;
</code></pre>
<p>This gives me the output only as</p>
<pre><code>['John']
</code></pre>
<p>when the output I want is of the format</p>
<pre><code>['John']
['Tom']
['Jimmy']
['Harry']
</code></pre>
<p>If I do a <code>json.dumps</code> on the output (i.e. <code>return json.dumps(op)</code>) and declare the function to return <code>string</code> rather than <code>variant</code>, I get this</p>
<pre><code>["John", "Tom", "Jimmy", "Harry"]
</code></pre>
<p>However, the format is then wrong. Why am I getting just the first name when returning <code>variant</code>, and how do I get the format I want if I return <code>string</code>?</p>
|
<python><python-3.x><snowflake-cloud-data-platform><user-defined-functions>
|
2024-03-13 21:03:36
| 0
| 632
|
Navidk
|
78,156,752
| 610,569
|
How to fine-tune a Mistral-7B model for machine translation?
|
<p>There are a lot of tutorials online that use raw text affixed with arcane syntax to indicate document boundaries, accessed through a Huggingface <code>datasets.Dataset</code> object via the <code>text</code> key. E.g.</p>
<pre><code>from datasets import load_dataset
dataset_name = "mlabonne/guanaco-llama2-1k"
dataset = load_dataset(dataset_name, split="train")
dataset["text"][42]
</code></pre>
<p>[out]:</p>
<pre><code><s>[INST] ΒΏCuΓ‘les son los actuales presidentes de la regiΓ³n de Sur AmΓ©rica? EnumΓ©relos en una lista con su respectivo paΓs. [/INST] A fecha del 13 de febrero de 2023, estos son los presidentes de los paΓses de SudamΓ©rica, segΓΊn Wikipedia:
-Argentina: Alberto FernΓ‘ndez
-Bolivia: Luis Arce
-Brasil: Luiz InΓ‘cio Lula da Silva
-Chile: Gabriel Boric
-Colombia: Gustavo Petro
-Ecuador: Guillermo Lasso
-Paraguay: Mario Abdo BenΓtez
-PerΓΊ: Dina Boluarte
-Uruguay: Luis Lacalle Pou
-Venezuela: NicolΓ‘s Maduro
-Guyana: Irfaan Ali
-Surinam: Chan Santokhi
-Trinidad y Tobago: Paula-Mae Weekes </s>
</code></pre>
<p>But machine translation datasets are usually structured in 2 parts, source and target text with <code>sentence_eng_Latn</code> and <code>sentence_deu_Latn</code> keys, e.g.</p>
<pre><code>
valid_data = load_dataset("facebook/flores", "eng_Latn-deu_Latn", streaming=False,
split="dev")
valid_data[42]
</code></pre>
<p>[out]:</p>
<pre><code>{'id': 43,
'URL': 'https://en.wikinews.org/wiki/Hurricane_Fred_churns_the_Atlantic',
'domain': 'wikinews',
'topic': 'disaster',
'has_image': 0,
'has_hyperlink': 0,
'sentence_eng_Latn': 'The storm, situated about 645 miles (1040 km) west of the Cape Verde islands, is likely to dissipate before threatening any land areas, forecasters say.',
'sentence_deu_Latn': 'Prognostiker sagen, dass sich der Sturm, der etwa 645 Meilen (1040 km) westlich der Kapverdischen Inseln befindet, wahrscheinlich auflΓΆsen wird, bevor er LandflΓ€chen bedroht.'}
</code></pre>
<h3>How to fine-tune a Mistral-7b model for the machine translation task?</h3>
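<p>The usual bridge between the two shapes is a mapping function that renders each parallel pair into a single instruction-formatted string (the template below is a hypothetical one mirroring the <code>[INST]</code> style shown above, with the flores field names; the exact chat template should match the tokenizer's):</p>

```python
def to_prompt(example):
    # render one parallel sentence pair into a single training string
    src = example["sentence_eng_Latn"]
    tgt = example["sentence_deu_Latn"]
    return {"text": f"<s>[INST] Translate English to German: {src} [/INST] {tgt} </s>"}

example = {"sentence_eng_Latn": "Good morning.", "sentence_deu_Latn": "Guten Morgen."}
print(to_prompt(example)["text"])
```

<p>Applying it with <code>dataset.map(to_prompt, remove_columns=dataset.column_names)</code> would yield the <code>text</code>-keyed dataset the standard fine-tuning tutorials expect.</p>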
|
<python><huggingface-transformers><large-language-model><machine-translation><mistral-7b>
|
2024-03-13 20:51:08
| 1
| 123,325
|
alvas
|
78,156,741
| 16,281,150
|
Django language switcher is not persistent
|
<p>Hello I'm struggling with my language switcher.</p>
<p>settings.py:</p>
<pre><code>LANGUAGE_CODE = 'en'
LANGUAGES = [
('de','Deutsch'),
('en','English')
]
</code></pre>
<p>urls.py:</p>
<pre><code>path('setlang', views.setlang, name='setlang'),
</code></pre>
<p>index.html:</p>
<pre><code><a href="{% url 'setlang' %}">{% trans "English" %}</a>
</code></pre>
<p>views.py</p>
<pre><code>def setlang(request):
logger.error(get_language())
if get_language() == 'de':
activate('en')
else:
activate('de')
logger.error(get_language())
return redirect('index')
</code></pre>
<p>Output from logger.error(get_language()) -> 'de' then 'en'.</p>
<p>It's 'de' every time! Even if I set LANGUAGE_CODE = 'en'! I've no idea where the 'de' is coming from.</p>
<p>The problem is maybe the reload, which is forced by the return redirect('index')?</p>
<p>Translation in general works.</p>
<p>Has anyone an idea how I can stick to the language which is selected and not fall back to default?</p>
|
<python><django><localization><internationalization>
|
2024-03-13 20:48:38
| 1
| 386
|
rivercity
|
78,156,692
| 4,009,645
|
Pandas: move values from one column to an appropriate column
|
<p>My google-fu is failing me. I have a simple dataframe that looks like this:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Sample</th>
<th>Subject</th>
<th>Person</th>
<th>Place</th>
<th>Thing</th>
</tr>
</thead>
<tbody>
<tr>
<td>1-1</td>
<td>Janet</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>1-1</td>
<td>Boston</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>1-1</td>
<td>Hat</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>1-2</td>
<td>Chris</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>1-2</td>
<td>Austin</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>1-2</td>
<td>Scarf</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table></div>
<p>I want the values in the subject column to move into their appropriate column so that I end up with something like this:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Sample</th>
<th>Subject</th>
<th>Person</th>
<th>Place</th>
<th>Thing</th>
</tr>
</thead>
<tbody>
<tr>
<td>1-1</td>
<td>Janet</td>
<td>Janet</td>
<td>Boston</td>
<td>Hat</td>
</tr>
<tr>
<td>1-2</td>
<td>Chris</td>
<td>Chris</td>
<td>Austin</td>
<td>Scarf</td>
</tr>
</tbody>
</table></div>
<p>I've looked at pivot and transpose, but those don't seem right.</p>
<p>Any ideas would be appreciated! :)</p>
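<p>One possible sketch, assuming each sample's rows always appear in Person/Place/Thing order (which the example data suggests): number the rows within each sample with <code>groupby().cumcount()</code>, map those positions to column names, and pivot:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Sample': ['1-1', '1-1', '1-1', '1-2', '1-2', '1-2'],
    'Subject': ['Janet', 'Boston', 'Hat', 'Chris', 'Austin', 'Scarf'],
})

cols = ['Person', 'Place', 'Thing']  # assumed fixed row order per sample
out = (df.assign(field=df.groupby('Sample').cumcount().map(dict(enumerate(cols))))
         .pivot(index='Sample', columns='field', values='Subject')
         .rename_axis(columns=None)
         .reset_index())
out['Subject'] = out['Person']  # keep Subject as the person, per the example output
```
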
|
<python><pandas>
|
2024-03-13 20:38:02
| 2
| 1,009
|
Heather
|
78,156,640
| 23,315,914
|
Why is Visual Studio Code saying my code in unreachable after using the Pandas concat function?
|
<p><img src="https://i.sstatic.net/4hMVK.png" alt="Koda UlaΕΔ±lamΔ±yor -> Code is unreachable" /></p>
<blockquote>
<p>Koda UlaΕΔ±lamΔ±yor -> Code is unreachable</p>
</blockquote>
<p>Visual Studio code is graying out my code and saying it is unreachable after I used <code>pd.concat()</code>. The IDE seems to run smoothly but it's disturbing and I want my colorful editor back.</p>
<p>How do I disable the editor graying out my code without changing the current language?</p>
|
<python><pandas><visual-studio-code>
|
2024-03-13 20:25:52
| 4
| 305
|
wwyyaa
|
78,156,515
| 217,844
|
pydantic: how to model AWS services and their resources?
|
<p>I would like to create a <a href="https://pydantic.dev/" rel="nofollow noreferrer">pydantic</a> model for a small subset of AWS services and their resources, so I can (among many other things) validate data loaded from configuration files, e.g.</p>
<pre class="lang-py prettyprint-override"><code>from yaml import safe_load
with open(<path to yaml conf file>) as stream:
conf_dict = safe_load(stream)
conf = AWSServices(**conf_dict)
</code></pre>
<p>A sample <code>.yaml</code> configuration file might look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>---
# name of the first service
EC2:
# name of an EC2 resource (in plural)
SubNets:
# list of attribute sets for subnets to be configured
# (not in scope of question)
- subnet01_attr01: value01
subnet01_attr02: value02
- subnet02_attr01: value03
subnet02_attr02: value04
# name of a second EC2 resource
VPCs:
# (similar list as for Subnets)
# name of the second service
S3:
# name of an S3 resource
Buckets:
# (similar list as for EC2 subnets and vpcs)
# name of the third service
DynamoDB:
# name of a DynamoDB resource
Tables:
# (as above)
</code></pre>
<p>The <code>subnet01_attr01 ... </code> bit is just for illustrative purposes and not relevant to the question.</p>
<p>So far, I've modeled only the top level, not using pydantic, but only a simple <a href="https://docs.python.org/3/library/enum.html" rel="nofollow noreferrer"><code>Enum</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
class AWSServiceName(str, Enum):
EC2 = 'EC2'
S3 = 'S3'
DynamoDB = 'DynamoDB'
</code></pre>
<p>which worked fairly well in my code to process all services in a somewhat elegant fashion.</p>
<p><strong>How would I model the second level, the resources ?</strong></p>
<p>I've tried adding more enums for the service resources:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
class EC2ResourcesName(str, Enum):
SubNets = 'SubNets'
VPCs = 'VPCs'
class S3ResourcesName(str, Enum):
Buckets = 'Buckets'
class DynamoDBResourcesName(str, Enum):
Tables = 'Tables'
</code></pre>
<p>and use them in the actual pydantic model:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, ConfigDict
class AWSServices(BaseModel):
model_config = ConfigDict(extra='forbid')
service: AWSServiceName
resources: ... ?!? ...
</code></pre>
<p>but I'm just completely confused how to continue here...</p>
<p>Is the <code>enum</code> approach reasonable ? Or should I better use a <code>set</code> ? (or a <code>frozenset</code> ?)</p>
|
<python><amazon-web-services><enums><pydantic>
|
2024-03-13 19:53:47
| 0
| 9,959
|
ssc
|
78,156,444
| 4,987,648
|
python/sage: how to get the last expression?
|
<p>If I do something like the following in Jupyter (in both python and sage):</p>
<pre><code>a = 42
b = 43
a + b
</code></pre>
<p>it will, somehow, manage to understand that this process returns the value <code>a + b</code>, i.e. 85 here:</p>
<p><a href="https://i.sstatic.net/urZG5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/urZG5.png" alt="enter image description here" /></a></p>
<p>Similarly, if I just do:</p>
<pre><code>a = 42
</code></pre>
<p>it will understand that there is nothing to return.</p>
<p>I would like now to do something similar for a different application (to cache the result of a python operation)… how could I get this information, ideally by just running the code and appending some python code to obtain this information? I tried to do:</p>
<pre class="lang-py prettyprint-override"><code>a = 42
b = 43
a + b
print(_)
</code></pre>
<p>but this fails. I was thinking of doing something crude like adding <code>res = </code> in front of the last line, but that might fail, for instance, if the last line is indented, etc. How can I elegantly obtain this information?</p>
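<p>For background: <code>_</code> only exists in interactive sessions, where the shell's display hook stores the last displayed value; a plain script has no such binding. A more robust sketch than prefixing <code>res = </code> is to parse the source with <code>ast</code> and check whether the module ends in a bare expression, which sidesteps indentation and multi-line issues:</p>

```python
import ast

def split_last_expr(source):
    """Return (body_without_last_expr, last_expr_source) if the module
    ends in a bare expression, else (source, None)."""
    tree = ast.parse(source)
    if tree.body and isinstance(tree.body[-1], ast.Expr):
        last = tree.body[-1]
        lines = source.splitlines()
        expr_src = "\n".join(lines[last.lineno - 1:last.end_lineno])
        body_src = "\n".join(lines[:last.lineno - 1])
        return body_src, expr_src
    return source, None

print(split_last_expr("a = 42\nb = 43\na + b"))
```

<p>The caller can then <code>exec</code> the body and <code>eval</code> the final expression to capture (and cache) its value.</p>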
|
<python><jupyter><sage>
|
2024-03-13 19:40:14
| 2
| 2,584
|
tobiasBora
|
78,156,436
| 11,402,025
|
Locust : Not able to use the config value from env file
|
<p>I am trying to run a locust test but I am not able to use the .env file values</p>
<p>.env file contains
"Value": diuqriqjqj</p>
<p>In the locust.py I have added</p>
<pre><code>apiKey = os.environ.get("VALUE", "")
class Api(HttpUser):
wait_time = between(1, 5)
@task
def test_api(self):
self.client.get(
f"/api/test/apiKey={apiKey}"
)
</code></pre>
<p>I run the locust using</p>
<pre><code>if test -f .env; then locust -f locust.py; fi
</code></pre>
<p>and receive connection errors. If I hardcode the apiKey value all works fine.</p>
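<p>Worth noting: the shell's <code>test -f .env</code> only checks that the file exists; nothing actually exports its contents, so <code>os.environ.get("VALUE", "")</code> returns the empty default. The usual fix is python-dotenv's <code>load_dotenv()</code>, but a minimal stdlib sketch of the same idea looks like this (note it expects <code>KEY=VALUE</code> lines, so a JSON-style <code>"Value": x</code> entry would not parse, and lookups are case-sensitive):</p>

```python
import os
import tempfile
from pathlib import Path

def load_env(path):
    # minimal KEY=VALUE loader; skips blanks, comments, and malformed lines
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip().strip('"'))

# demo with a throwaway file standing in for .env
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as tmp:
    tmp.write("VALUE=diuqriqjqj\n")
load_env(tmp.name)
print(os.environ.get("VALUE"))
```

<p>Calling <code>load_env(".env")</code> (or <code>load_dotenv()</code>) at the top of <code>locust.py</code>, before the <code>os.environ.get</code> line, should make the key available.</p>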
|
<python><environment-variables><api-key>
|
2024-03-13 19:38:36
| 1
| 1,712
|
Tanu
|
78,156,167
| 13,949,933
|
Unable to load a Django model from a separate directory in a database script
|
<p>I am having difficulty writing a python script that takes a directory of .txt files and loads them into my database that is utilized in a Django project. Based on requirements the python script needs to be located in a separate directory than the api (my django directory).</p>
<p>Here is my project structure currently:</p>
<pre><code>Main-Project/
database/
text-script.py
text-files/
example-file.txt
django/
django/
settings.py
snippets/
models.py
</code></pre>
<p>My text-script.py file looks like this:</p>
<pre><code>import os
import sys
import django
current_dir = os.path.dirname(os.path.abspath(__file__))
parent_dir = os.path.dirname(current_dir)
sys.path.append(parent_dir)
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django.django.settings')
django.setup()
from django.snippets.models import Snippet
def import_articles(directory):
for filename in os.listdir(directory):
        if filename.endswith('.txt'):
with open(os.path.join(directory, filename), 'r') as file:
content = file.read()
Snippet.objects.create(filename=filename, content=content)
if __name__ == '__main__':
text_dir = os.path.join(current_dir, 'text-files')
import_articles(text_dir)
</code></pre>
<p>My installed apps looks like this:</p>
<pre><code>INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'snippets.apps.SnippetsConfig',
]
</code></pre>
<p>I get this error when I try to run the script:</p>
<pre><code>ModuleNotFoundError: No module named 'snippets'
</code></pre>
<p>How do I correctly load the Snippet model from my Django project to utilize the same database used in my Django project?</p>
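<p>A likely cause, judging from the layout shown: <code>sys.path</code> gets <code>Main-Project/</code> appended, but Django needs the inner <code>django/</code> project directory on the path, with the settings module then being <code>django.settings</code> and the app importable as <code>snippets.models</code>. A sketch of just the path wiring (the directory names mirror the question; the Django-specific calls are commented out so the path logic stands alone):</p>

```python
import os
import sys

# hypothetical layout mirroring the question:
#   Main-Project/database/text-script.py
#   Main-Project/django/django/settings.py   (project package)
#   Main-Project/django/snippets/models.py   (app)
project_dir = os.path.join("Main-Project", "django")  # dir that would hold manage.py

# the *project* directory, not its parent, must be on sys.path so that
# "django.settings" and "snippets.models" resolve as top-level packages
sys.path.insert(0, project_dir)
os.environ.pop("DJANGO_SETTINGS_MODULE", None)
os.environ["DJANGO_SETTINGS_MODULE"] = "django.settings"

# with that in place the Django-specific part would be:
# import django
# django.setup()
# from snippets.models import Snippet   # app label, not django.snippets.models
```

<p>Note also that naming the outer folder <code>django</code> shadows the installed <code>django</code> package once that directory is on <code>sys.path</code>, so renaming it (e.g. to <code>web</code>) may be necessary in practice.</p>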
|
<python><django><django-rest-framework>
|
2024-03-13 18:45:39
| 2
| 812
|
Jake Mulhern
|
78,156,081
| 7,760,910
|
Restrict dags in MWAA instance in consumer account via assume role
|
<p>I have dags under the below locations in the <code>primary root account</code>:</p>
<pre><code>s3://input-read/dags/domainA/*.py
s3://input-read/dags/domainB/*.py
</code></pre>
<p>And dags location passed in the MWAA instance is <code>s3://input-read/dags</code></p>
<p>When I open the Airflow UI from the Primary account it shows the dags from both the folders which is correct.</p>
<p>Also, I have an IAM user in the <code>secondary account</code> to which I have granted access to only <code>domainA</code> folder and underlying objects via <code>assume role</code> which works perfectly.</p>
<p>I have also granted access to the <code>MWAA instance</code> of the primary account to that IAM user. But here the problem is when I open the Airflow UI in the secondary account's IAM user I still see all the dags which shouldn't happen because on the object level I have restricted access to other folders. So that IAM user should be only able to see dags from domainA folder.</p>
<p>Below is the policy block for Airflow access for the IAM user from primary account:</p>
<pre><code>{
"Sid": "AllowAirflow",
"Effect": "Allow",
"Action": [
"airflow:ListEnvironments",
"airflow:GetEnvironment",
"airflow:ListTagsForResource"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "airflow:CreateWebLoginToken",
"Resource": [
"arn:aws:airflow:us-east-1:[account-id]:role/MyAirflowEnvironment/User"
]
}
</code></pre>
<p>I also created a role in Airflow UI where you can see the details in the screenshots below:</p>
<p><a href="https://i.sstatic.net/JN1I7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JN1I7.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/uKFH9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uKFH9.png" alt="enter image description here" /></a></p>
<p>But I still see all the dags in the IAM user account.</p>
<p>So how can I resolve my problem? Any hints would also help. TIA.</p>
|
<python><amazon-web-services><airflow><mwaa>
|
2024-03-13 18:31:01
| 1
| 2,177
|
whatsinthename
|
78,155,856
| 1,214,800
|
Handling backreferences in re.sub when replacement includes numbers
|
<p>Take the following simple regex replacement:</p>
<pre class="lang-py prettyprint-override"><code>import re
s = "Python version is: 3.10"
pat = r'(is:.*)\d+\.\d+$'
version = "3.12"
result = re.sub(pat, rf'\1{version}', s)
print(result)
</code></pre>
<p>This fails with:</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
...
raise s.error("invalid group reference %d" % index, pos)
re.error: invalid group reference 13 at position 1
</code></pre>
<p>What is happening is that <code>re.sub</code> parses the replacement string and absorbs the first "3" of the version into the group reference, so <code>\1</code> followed by <code>3.12</code> is read as a reference to group 13.</p>
<p>I've tried various iterations of:</p>
<pre><code>re.sub(pat, rf'\1{version}', s)
re.sub(pat, f'\\1{version}', s)
re.sub(pat, r'\1' + version, s)
re.sub(pat, r'\1{0}'.format(version), s)
re.sub(pat, r'\1' + f"{version}", s)
</code></pre>
<p>But none will treat the string part as an actual string. Am I stuck using a named capture group for this?</p>
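<p>For what it's worth, the replacement syntax of <code>re</code> has an unambiguous group form, <code>\g&lt;1&gt;</code>, precisely for the case where the reference is followed by a digit; a quick sketch reusing the question's strings:</p>

```python
import re

s = "Python version is: 3.10"
pat = r'(is:.*)\d+\.\d+$'
version = "3.12"

# \g<1> delimits the group number, so the "3" of the version
# can no longer be absorbed into the backreference
result = re.sub(pat, rf'\g<1>{version}', s)
assert result == "Python version is: 3.12"

# alternative: pass a function, which sidesteps replacement parsing entirely
result2 = re.sub(pat, lambda m: m.group(1) + version, s)
assert result2 == "Python version is: 3.12"
```

<p>So a named capture group works too, but it is not required.</p>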
|
<python><python-3.x><regex>
|
2024-03-13 17:46:33
| 1
| 73,674
|
brandonscript
|
78,155,767
| 12,367,751
|
Error training transformer with QLoRA and Peft
|
<p>So I am trying to finetuning google Gemma model using Peft and QLoRA. Yesterday I successfully fine-tuned it for 1 epoch just as a test. However, when I opened the notebook today and ran the cell that loads the model I get a huge error:</p>
<p>The code:</p>
<pre><code>model_id = "google/gemma-7b"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
quantization_config=bnb_config,
device_map={0:""})
#model.gradient_checkpointing_enable()
train_dataset, val_dataset, data_collator = load_dataset(train_data_path, val_data_path, tokenizer)
</code></pre>
<p>The error (shortened):</p>
<pre><code>RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=1, num_gpus=
.....
DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=1, num_gpus=
.....
RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback):
CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=1, num_gpus=
.....
</code></pre>
<p>I have shortened the error for it to be more readable. Has anyone experienced something like this? I can't seem to solve it. Help is much appreciated.</p>
|
<python><deep-learning><nlp><huggingface-transformers>
|
2024-03-13 17:29:55
| 1
| 459
|
eneko valero
|
78,155,420
| 14,104,321
|
Can scipy.integrate.solve_ivp reject a step to avoid evaluating the RHS with an invalid state?
|
<p>My code is much more complex, but I can reproduce the issue with the following example:</p>
<pre><code>import numpy as np
from scipy.integrate import solve_ivp
def funct(t, y):
return -np.sqrt(y)
def event(t, y):
return y[0]-0.1
if __name__ == '__main__':
event.terminal = True
sol = solve_ivp(funct, t_span=[0, 3], y0=[1], events=event)
</code></pre>
<p>In this example, the solver overshoots the event and calls <code>funct</code> with <code>y < 0</code>. As one can expect, <code>funct</code> raises a warning and returns NaN. In this specific example, <code>solve_ivp</code> recovers and a solution is still found. However, my real <code>funct</code> uses a third-party package that raises an error when <code>funct</code> is evaluated at an invalid argument, and <code>solve_ivp</code> fails.</p>
<p>Now, I know that the <code>event</code> is computed via an iterative method, so my question would be: is it possible for <code>solve_ivp</code> to detect whether the value of <code>y</code> is going below a given threshold and avoid the error/warning? So, something like "if <code>y < 0</code> then reject the step" or something similar?</p>
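<p><code>solve_ivp</code> has no built-in "reject this step" hook, but a common workaround (a sketch, not the only option) is to make the RHS tolerant of slightly invalid states by clamping before evaluation; the terminal event still stops the integration well before the clamp matters:</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

def funct(t, y):
    # clamp the state so transient overshoots below 0 during the
    # event-bracketing iterations cannot produce sqrt of a negative
    y_safe = np.maximum(y, 0.0)
    return -np.sqrt(y_safe)

def event(t, y):
    return y[0] - 0.1

event.terminal = True
sol = solve_ivp(funct, t_span=[0, 3], y0=[1.0], events=event)
assert sol.status == 1                     # 1 means a termination event fired
assert abs(sol.y[0, -1] - 0.1) < 1e-2      # solution stops near the event value
```

<p>When the third-party RHS cannot be wrapped this way, reducing <code>max_step</code> or raising the event threshold slightly above the invalid region are other pragmatic options.</p>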
|
<python><scipy><ode>
|
2024-03-13 16:34:10
| 2
| 582
|
mauro
|
78,155,239
| 4,918,159
|
access return values of function when too many values to unpack happens
|
<p>Occasionally, I make coding mistakes where the number of returned values does not match the unpacking target, e.g.,</p>
<pre><code>def f():
return a1, a2, a3
a1, a2 = f()
</code></pre>
<p>f() can take a long time to run; when it finishes, 'too many values to unpack' will be thrown. I then have to fix the code and rerun for a very long time.</p>
<p>Is there a way to access the return value of f() when the error is thrown (within the python session or from the debugger within the python session) so I don't need to rerun?</p>
<p>Thanks.</p>
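<p>One defensive habit worth noting (not a way to recover after the fact, but it avoids the costly rerun): unpack into a single name first, or use starred unpacking, so surplus values never raise:</p>

```python
def f():
    # stand-in for the expensive function
    return 1, 2, 3

result = f()          # never raises; inspect result afterwards
a1, a2 = result[:2]

# or: starred unpacking tolerates surplus trailing values
a1, a2, *rest = f()
assert (a1, a2, rest) == (1, 2, [3])
```

<p>Once the exception has been raised at the assignment, the tuple is no longer referenced anywhere, which is why it cannot be dug out of the session after the fact.</p>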
|
<python>
|
2024-03-13 16:05:10
| 1
| 416
|
user4918159
|
78,154,960
| 1,361,752
|
How to provide alternative urls for dependencies in pyproject.toml
|
<p>Can you list multiple urls for pip to try to find a package at in <code>pyproject.toml</code> (or "legacy" configuration files like <code>setup.py</code>)?</p>
<p>I'm thinking something like:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name='my_project'
dependencies = [
'library_package@{git+https://path_to_git_repo OR git+https://path_to_alternative_git_repo}'
]
</code></pre>
<p>The reason I'd find this useful is that I need my package to work in two different isolated corporate networks which have their own git bitbucket server, with a different url. It would be nice if I could maintain a single version of my package's <code>pyproject.toml</code> that can pull its dependencies correctly from either network's git repository.</p>
|
<python><pip><dependencies><pyproject.toml>
|
2024-03-13 15:24:43
| 0
| 4,167
|
Caleb
|
78,154,925
| 525,865
|
BeautifulSoup: iterate over 10k pages & fetch data, parse: European volunteering services: a tiny scraper that collects opportunities from an EU site
|
<p>I am looking for a public list of volunteering services in Europe. I don't need full addresses, but the name and the website. I am thinking of data (XML, CSV, ...) with these fields: name, country (some additional fields would be nice), one record per country of presence. <strong>Btw:</strong> the European volunteering services are great options for the youth.</p>
<p>Well, I have found a great page that is very comprehensive.</p>
<p>I want to gather data from the <strong>European volunteering services</strong> that are hosted on a European site:</p>
<p><a href="https://youth.europa.eu/go-abroad/volunteering/opportunities_en" rel="nofollow noreferrer">https://youth.europa.eu/go-abroad/volunteering/opportunities_en</a></p>
<p>We have got several hundred volunteering opportunities there - which are stored in sites like the following:</p>
<pre><code> https://youth.europa.eu/solidarity/placement/39020_en
https://youth.europa.eu/solidarity/placement/38993_en
https://youth.europa.eu/solidarity/placement/38973_en
https://youth.europa.eu/solidarity/placement/38972_en
https://youth.europa.eu/solidarity/placement/38850_en
https://youth.europa.eu/solidarity/placement/38633_en
</code></pre>
<p><strong>idea:</strong></p>
<p>I think it would be awesome to gather the data - i.e. with a scraper that is based on <code>BS4</code> and <code>requests</code> - parsing the data and subsequently printing the data in a <code>dataframe</code></p>
<p>Well - I think that we could iterate over all the urls:</p>
<pre><code>placement/39020_en
placement/38993_en
placement/38973_en
placement/38850_en
</code></pre>
<p><strong>UPDATE:</strong> thanks to @HedgeHog's help we've got a solution.</p>
<p><strong>idea</strong>: I think that we can iterate over IDs from zero to 100,000 to fetch all the results stored under the placement URLs. But this idea is not backed by code. In other words, at the moment I do not have an idea how to iterate over such a great range.</p>
<p>At the moment I think it is a basic approach to start with this:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
# Function to generate placement URLs based on a range of IDs
def generate_urls(start_id, end_id):
base_url = "https://youth.europa.eu/solidarity/placement/"
urls = [base_url + str(id) + "_en" for id in range(start_id, end_id+1)]
return urls
# Function to scrape data from a single URL
def scrape_data(url):
response = requests.get(url)
if response.status_code == 200:
soup = BeautifulSoup(response.content, 'html.parser')
title = soup.h1.get_text(', ', strip=True)
location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ', strip=True)
start_date, end_date = (e.get_text(strip=True) for e in soup.select('span.extra strong')[-2:])
website_tag = soup.find("a", class_="btn__link--website")
website = website_tag.get("href") if website_tag else None
return {
"Title": title,
"Location": location,
"Start Date": start_date,
"End Date": end_date,
"Website": website,
"URL": url
}
else:
print(f"Failed to fetch data from {url}. Status code: {response.status_code}")
return None
# Set the range of placement IDs we want to scrape
start_id = 1
end_id = 100000
# Generate placement URLs
urls = generate_urls(start_id, end_id)
# Scrape data from all URLs
data = []
for url in urls:
placement_data = scrape_data(url)
if placement_data:
data.append(placement_data)
# Convert data to DataFrame
df = pd.DataFrame(data)
# Print DataFrame
print(df)
</code></pre>
<p>which gives me back the following</p>
<pre><code>Failed to fetch data from https://youth.europa.eu/solidarity/placement/154_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/156_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/157_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/159_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/161_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/162_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/163_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/165_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/166_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/169_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/170_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/171_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/173_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/174_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/176_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/177_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/178_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/179_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/180_en. Status code: 404
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-5-d6272ee535ef> in <cell line: 42>()
41 data = []
42 for url in urls:
---> 43 placement_data = scrape_data(url)
44 if placement_data:
45 data.append(placement_data)
<ipython-input-5-d6272ee535ef> in scrape_data(url)
16 title = soup.h1.get_text(', ', strip=True)
17 location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ', strip=True)
---> 18 start_date, end_date = (e.get_text(strip=True) for e in soup.select('span.extra strong')[-2:])
19 website_tag = soup.find("a", class_="btn__link--website")
20 website = website_tag.get("href") if website_tag else None
ValueError: not enough values to unpack (expected 2, got 0)
</code></pre>
<p>Any idea?</p>
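<p>The traceback suggests pages that return 200 but lack the expected <code>span.extra strong</code> elements (e.g. a generic placeholder page), so the unpack gets zero values. A guard sketch of just that failing step, in plain Python (the helper name is mine):</p>

```python
# minimal guard around the failing unpack: only take start/end
# dates when the page actually yields two values
def last_two(values):
    tail = list(values)[-2:]
    if len(tail) == 2:
        return tail[0], tail[1]
    return None, None   # placement page without the expected spans

assert last_two(["2024-01-01", "2024-06-30"]) == ("2024-01-01", "2024-06-30")
assert last_two([]) == (None, None)
assert last_two(["only-one"]) == (None, None)
```

<p>Inside <code>scrape_data</code> this would replace the bare tuple unpack, so a malformed page yields a row with missing dates instead of aborting the whole loop.</p>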
|
<python><pandas><dataframe><web-scraping><beautifulsoup>
|
2024-03-13 15:19:09
| 1
| 1,223
|
zero
|
78,154,889
| 1,514,114
|
Interactive Docker exec with docker-py
|
<p>I'm trying to implement something equivalent to:</p>
<pre class="lang-bash prettyprint-override"><code>docker exec -it <some_container> /bin/bash
</code></pre>
<p>Ie. run a command in a container and connect <code>stdin</code> and <code>stdout</code> of my program to <code>stdin</code> and <code>stdout</code> of the command.</p>
<p>My reading of <a href="https://docker-py.readthedocs.io/en/stable/containers.html#container-objects" rel="nofollow noreferrer">the documentation</a> seems to imply that the following should do the job:</p>
<pre><code>import docker
client = docker.from_env()
container, = client.containers.list()
container.exec_run(['/usr/local/bin/bash'], stdin=True, tty=True)
</code></pre>
<p>(There's a single container running <code>bash:latest</code>)</p>
<p>But it doesn't seem like the program's <code>stdin</code> is connected to that of the <code>exec</code>'d command.
The command line just sits there, echoing my input, but not reacting to it in any way.</p>
<p>I've also tried to interact with the raw socket (returned by <code>exec_run</code>βnot <code>docker.sock</code>):</p>
<pre><code>_, s = container.exec_run(['/usr/local/bin/bash'], stdin=True, tty=True, socket=True)
s.write('echo hello world')
</code></pre>
<p>But I'm getting <code>UnsupportedOperation: File or stream is not writable.</code></p>
<p><strong>Question: What would I have to do to allow the user of my code to interact with the std IO of a command executed in a container using docker-py?</strong></p>
|
<python><docker><interactive><dockerpy>
|
2024-03-13 15:13:02
| 1
| 548
|
Johannes Bauer
|
78,154,875
| 832,490
|
Unable to mock get_redis function with pytest
|
<p>I am using fakeredis and pytest on an application</p>
<p>My get_redis function on file <code>app/helpers/providers.py</code>:</p>
<pre><code>from redis import ConnectionPool, Redis
redis_pool = None
def get_redis() -> Redis:
global redis_pool
if redis_pool is None:
redis_pool = ConnectionPool()
return Redis.from_pool(redis_pool)
</code></pre>
<p>Then I use it on my code, located at <code>app/endpoints/secrets.py</code>:</p>
<pre><code>from app.helpers.providers import get_redis
@router.get("/secrets/{secret_id}")
async def get_secret(
request: Request,
secret_id: UUID,
credentials: Annotated[HTTPBasicCredentials, Depends(security)],
redis: Redis = Depends(get_redis),
):
with redis.pipeline() as pipe: # I use redis from get_redis here
pipe.get(key)
pipe.delete(key)
results = pipe.execute()
return results # just to simplify
</code></pre>
<p>Then on my tests I have the following:</p>
<pre><code>import sys
from unittest.mock import patch
import fakeredis
import pytest
from fastapi import HTTPException, status
from fastapi.exceptions import RequestValidationError
from fastapi.testclient import TestClient
@pytest.fixture
def fake_redis():
return fakeredis.FakeStrictRedis()
@pytest.fixture(autouse=True)
def mock_redis_dependencies(fake_redis):
with patch('app.endpoints.secrets.get_redis', return_value=fake_redis):
yield
sys.path.append(".")
from app.endpoints.secrets import router
def test_secret_not_found(fake_redis):
client = TestClient(router)
with pytest.raises(HTTPException) as exc:
client.get(
"/api/secrets/aaaaaaaa-bbbb-4ccc-aaaa-eeeeeeeeeef1",
auth=("admin", "admin"),
)
assert exc.value.status_code == status.HTTP_404_NOT_FOUND
</code></pre>
<p>The test fails because it tries to use the real redis, which raises an exception.</p>
<p>What I am doing wrong? I have another test that works fine.</p>
|
<python><pytest><fastapi>
|
2024-03-13 15:10:49
| 1
| 1,009
|
Rodrigo
|
78,154,864
| 112,976
|
How to get all pending tasks of an event loop in Python/FastAPI?
|
<p>I am trying to understand potential slowdown in my FastAPI application.</p>
<p>My understanding is that each time I call <code>await</code>, a task may be scheduled for later, creating a backlog of tasks to be executed. If the process is slow for some reason, the number of pending tasks should increase.</p>
<p>How can I monitor the number of pending tasks?</p>
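<p>The standard library exposes exactly this: <code>asyncio.all_tasks()</code>, called from inside the running loop, returns the tasks that are not yet finished. A small self-contained sketch (the function names are mine; in FastAPI the same call could live inside an endpoint or a periodic coroutine):</p>

```python
import asyncio

async def count_pending():
    # all_tasks() must be called from within the running event loop
    # and only reports tasks that are not done yet
    return len([t for t in asyncio.all_tasks() if not t.done()])

async def main():
    # spawn a few sleeping tasks so there is something pending
    extras = [asyncio.create_task(asyncio.sleep(0.05)) for _ in range(3)]
    n = await count_pending()
    await asyncio.gather(*extras)
    return n

pending = asyncio.run(main())
assert pending >= 3   # the three sleepers, plus main() itself
```
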
|
<python><python-asyncio><fastapi><event-loop><starlette>
|
2024-03-13 15:09:49
| 1
| 22,768
|
poiuytrez
|
78,154,849
| 4,140,027
|
Different embedding checksums after encoding with SentenceTransformers?
|
<p>I am calculating some embeddings with SentenceTransformers Library. However, I get different results when encoding the sentences and calculating their embeddings when checking the sum of their values. For instance:</p>
<p>In:</p>
<pre><code>
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
random.seed(RANDOM_SEED)
tf.random.set_seed(RANDOM_SEED)
torch.manual_seed(RANDOM_SEED)
transformer_models = [
'M-CLIP/M-BERT-Distil-40',
]
sentences = df['content'].tolist()
for transformer_model in tqdm(transformer_models, desc="Transformer Models"):
tqdm.write(f"Processing with Transformer Model: {transformer_model}")
model = SentenceTransformer(transformer_model)
embeddings = model.encode(sentences)
print(f"Embeddings Checksum for {transformer_model}:", np.sum(embeddings))
</code></pre>
<p>Out:</p>
<pre><code>Embeddings Checksum for M-CLIP/M-BERT-Distil-40: 1105.9185
</code></pre>
<p>Or</p>
<pre><code>Embeddings Checksum for M-CLIP/M-BERT-Distil-40: 1113.5422
</code></pre>
<p>I noticed this situation happens when I restart and clear the output of the jupyter notebook, and then re-run the full notebook. Any idea of how to fix this issue?</p>
<p>Alternatively, I tried setting the random seeds before and after the embeddings calculation:</p>
<pre><code>import torch
import numpy as np
import random
import tensorflow as tf
from sentence_transformers import SentenceTransformer
from tqdm.auto import tqdm
RANDOM_SEED = 42
# Setting seeds
np.random.seed(RANDOM_SEED)
random.seed(RANDOM_SEED)
tf.random.set_seed(RANDOM_SEED)
torch.manual_seed(RANDOM_SEED)
# Ensuring PyTorch determinism
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
transformer_models = ['M-CLIP/M-BERT-Distil-40']
sentences = df['content'].tolist()
for transformer_model in tqdm(transformer_models, desc="Transformer Models"):
# Set the seed again right before loading the model
np.random.seed(RANDOM_SEED)
random.seed(RANDOM_SEED)
tf.random.set_seed(RANDOM_SEED)
torch.manual_seed(RANDOM_SEED)
tqdm.write(f"Processing with Transformer Model: {transformer_model}")
model = SentenceTransformer(transformer_model, device='cpu') # Force to use CPU
embeddings = model.encode(sentences, show_progress_bar=False) # Disable progress bar and parallel tokenization
print(f"Embeddings Checksum for {transformer_model}:", np.sum(embeddings))
</code></pre>
<p>However I am getting the same inconsistent behavior.</p>
<p><strong>UPDATE</strong></p>
<p>What I tried now, and it seems to work, is storing all the calculated embeddings in files. However, I find it weird that without this I get different results. Has anyone experienced this before?</p>
|
<python><jupyter-notebook><nlp><sentence-transformers>
|
2024-03-13 15:07:56
| 1
| 4,670
|
tumbleweed
|
78,154,839
| 11,103,705
|
Django upgrade to 4.2 admin static files issue
|
<p>I am currently using django 3.2 and would like to upgrade to 4.2 soon. Doing so on a local environment and using <code>python manage.py collectstatic</code> works. However the problem comes when I try to deploy this to a development environment.</p>
<p>The new admin static files are not loading which causes the django admin page to look dysfunctional. Particularly, django 4.2 introduces a <code>dark_theme.css</code> static file which returns a 404 error when looking at the browser network tab. On my local environment, the file returns a 200 ok.</p>
<p>This is the most similar <a href="https://forum.djangoproject.com/t/help-django-4-2-admin-page-css-issue/20196/41" rel="nofollow noreferrer">thread</a> to my issue I could find on the official django forum, however there is not a clear fix for the problem (perhaps because there are a lot of reasons why the issue could be happening)</p>
<p>I am using nginx for the server. We have a CI process that runs <code>collectstatic</code> then copies the output to a docker container and I have this nginx configuration to get the static files:</p>
<pre><code> location /static/ {
autoindex on;
alias /usr/share/nginx/html/static/;
}
</code></pre>
<p>In django settings, I have this configuration:</p>
<pre><code>STATIC_ROOT = BASE_DIR / "ui/build/static"
STATIC_URL = "/static/"
</code></pre>
<p>Such that the static files are generated at <code>STATIC_ROOT</code> then the docker container copies them to <code>/usr/share/nginx/html/</code> correctly.</p>
<p>I have tried the same process but with a <code>--clear</code> flag for the <code>collectstatic</code> command, but no luck. I have also tried upgrading to django 4.2.0 explicitly as some people reported this version works, but again no luck. And of course I tried refreshing my browser cache.</p>
<p>Not sure what else it could be, it looks like the static files are there (particularly <code>dark_theme.css</code> seems to exist in the right place) but when fetching the files from the browser I get the 404.</p>
<p>Any thoughts or other things I can try/check would be appreciated, thanks.</p>
|
<python><django><docker><nginx>
|
2024-03-13 15:05:24
| 1
| 809
|
Sorin Burghiu
|
78,154,500
| 673,600
|
Stats Model Power Calculation
|
<p>The documentation is not very clear about the method for computing the number of samples estimated for a significant result, specifically what N refers to. For example, does this N relate to one distribution? That is, is the output the total number of observations (across both distributions in an A/B test), or just one? I've seen conflicting uses; the documentation doesn't specify precisely what it outputs.</p>
<pre><code>statsmodels.stats.power.tt_ind_solve_power(effect_size=None, nobs1=None,
alpha=None, power=None, ratio=1.0, alternative='two-sided')
</code></pre>
<p>Available from the following link
<a href="https://www.statsmodels.org/stable/generated/statsmodels.stats.power.tt_ind_solve_power.html" rel="nofollow noreferrer">https://www.statsmodels.org/stable/generated/statsmodels.stats.power.tt_ind_solve_power.html</a></p>
|
<python><statistics><statsmodels>
|
2024-03-13 14:20:47
| 0
| 6,026
|
disruptive
|
78,154,426
| 4,537,160
|
Python, OpenCV - drawContours of overlapping regions, some pixels are excluded?
|
<p>I am experimenting with cv2.drawContours. I tried this:</p>
<pre><code>import cv2
import numpy as np
def fix_format(cts):
return [np.asarray(np.round(ct), np.int32)[:, :, np.newaxis] for ct in cts]
# these are 2 overlapping rectangular regions
cts1 = [
[[1, 1], [6, 1], [6, 4], [1,4]],
[[3, 2], [7, 2], [7, 7], [3,7]]
]
img1 = cv2.drawContours(np.zeros([10, 10]), fix_format(cts1), contourIdx=-1, color=1, thickness=-1)
</code></pre>
<p>I assumed that, since I am using <code>thickness=-1</code>, contours would be filled, and since the areas are overlapping I would end up with the union of the areas delimited by the 2 contours in <code>cts1</code>.<br />
However, when printing <code>img1</code>, I see there are 2 values that are still 0 in the overlapping area (img1[3][4] and img1[3][5]).
Why is this happening?</p>
<p>EDIT:</p>
<p>I tried using the 2 contours separately, and this is what I get (for the first, second and both contours, respectively):</p>
<p><a href="https://i.sstatic.net/8BUUn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8BUUn.png" alt="cnt1" /></a>
<a href="https://i.sstatic.net/Q6mQa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q6mQa.png" alt="cnt2" /></a>
<a href="https://i.sstatic.net/uLNlo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uLNlo.png" alt="both" /></a></p>
|
<python><opencv><image-processing><image-editing>
|
2024-03-13 14:08:19
| 1
| 1,630
|
Carlo
|
78,154,382
| 12,297,666
|
Rounding really small/near zero complex values in Python
|
<p>Consider the following sequence/signal:</p>
<pre><code>import numpy as np
from scipy.fft import fft
# %% Discrete data
x = np.array([0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875,
0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875,
0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875,
0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875,
0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875,
0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875,
0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875,
0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875, 0.7966875])
</code></pre>
<p>I have defined the following function to calculate a Discrete Fourier Transform:</p>
<pre><code>def DFT(x):
N = len(x)
n = np.arange(0, N)
k = n.reshape((N, 1))
omega = 2*np.pi*k/N
e = np.exp(-1j*omega*n)
X = np.dot(x, e)
X_round = np.round(X, 6)
return X, X_round, N, n
X, X_round, N, n = DFT(x)
</code></pre>
<p>Our <code>X</code> here has really small values for the non-zero frequencies (as expected, since the signal <code>x</code> does not change). Then, I tried to round it using <code>np.round(X, 6)</code>, which returns:</p>
<pre><code>array([38.241+0.j, -0. +0.j, -0. -0.j, -0. -0.j, 0. -0.j,
0. +0.j, -0. -0.j, 0. -0.j, -0. +0.j, -0. -0.j,
-0. -0.j, -0. -0.j, -0. +0.j, 0. -0.j, -0. -0.j,
-0. -0.j, -0. -0.j, 0. -0.j, 0. -0.j, 0. -0.j,
0. +0.j, 0. +0.j, -0. +0.j, -0. +0.j, 0. +0.j,
-0. -0.j, 0. +0.j, -0. +0.j, -0. -0.j, -0. -0.j,
-0. -0.j, 0. -0.j, 0. -0.j, 0. -0.j, -0. -0.j,
-0. -0.j, 0. -0.j, 0. -0.j, 0. -0.j, 0. +0.j,
-0. +0.j, -0. -0.j, 0. -0.j, -0. +0.j, -0. -0.j,
0. +0.j, 0. -0.j, 0. -0.j])
</code></pre>
<p>But, if I try to take the angle of <code>X_round</code>, using <code>X_round_angle = np.angle(X_round)</code> it is not zero as I would expect (you can try using the <code>fft</code> function from scipy). Here is the output of <code>X_round_angle</code>:</p>
<pre><code>array([ 0. , 3.14159265, -3.14159265, -3.14159265, -0. ,
0. , -3.14159265, -0. , 3.14159265, -3.14159265,
-3.14159265, -3.14159265, 3.14159265, -0. , -3.14159265,
-3.14159265, -3.14159265, -0. , -0. , -0. ,
0. , 0. , 3.14159265, 3.14159265, 0. ,
-3.14159265, 0. , 3.14159265, -3.14159265, -3.14159265,
-3.14159265, -0. , -0. , -0. , -3.14159265,
-3.14159265, -0. , -0. , -0. , 0. ,
3.14159265, -3.14159265, -0. , 3.14159265, -3.14159265,
0. , -0. , -0. ])
</code></pre>
<p>Using <code>np.angle(fft(x))</code> returns the correct phase value of zero for all frequencies. How can I correct my <code>DFT</code> function?</p>
<p><strong>EDIT:</strong> Complementing the answer, I have found this <a href="https://www.statlect.com/matrix-algebra/discrete-Fourier-transform-amplitude-power-phase-spectrum" rel="nofollow noreferrer">link</a>, that does the same:</p>
<blockquote>
<p>When Im(X) and Re(X) are both equal to zero,
the value of the phase is undefined. It can be set equal to 0, as we will do below, to make the
phase spectrum easier to read.</p>
</blockquote>
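<p>Complementing the quote above, one way to implement "set the phase to 0 where the value is numerically zero" is to threshold on the magnitude before calling <code>np.angle</code>. Rounding alone does not help because the rounded bins become <code>-0.0</code>, and <code>np.angle(-0.0 + 0.0j)</code> is Ο€, not 0. A sketch on synthetic data (the tolerance is a choice, not a rule):</p>

```python
import numpy as np

def clean_angle(X, tol=1e-9):
    # zero-out bins whose magnitude is numerical noise, so that
    # np.angle(-0.0 + 0.0j) artifacts (pi instead of 0) disappear
    X = np.where(np.abs(X) < tol, 0.0 + 0.0j, X)
    return np.angle(X)

# one real bin plus two noise-level bins, like the DFT output above
X = np.array([38.241 + 0j, -1e-14 + 1e-15j, 1e-16 - 1e-16j])
phases = clean_angle(X)
assert np.allclose(phases, 0.0)
```
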
|
<python><fft>
|
2024-03-13 14:01:31
| 1
| 679
|
Murilo
|
78,153,943
| 1,397,946
|
Data persistence in a multi-page Dash app
|
<p>I have a Dash app where user in one page, <code>data.py</code>, adds some data, and later selected rows can be viewed and removed on another page, <code>grid.py</code>. The user should be able to later get back to <code>data.py</code> and add some more data.</p>
<p>The problem: data is not persisted between visits to <code>grid.py</code>. How can I achieve that? I tried setting the <code>persistence</code> property, but that didn't get me anywhere. <code>existing_data</code> in <code>grid.py</code> is always <code>None</code>. When I use a single-page app, similar code works.</p>
<p>Here's my minimal reproducible example:</p>
<p><strong>app.py</strong></p>
<pre><code>from dash import html, dcc
import dash
app = dash.Dash(__name__, use_pages=True)
app.layout = html.Div(
[
dcc.Store(id="store", data={}),
html.H1("Multi Page App Demo: Sharing data between pages"),
html.Div(
[
html.Div(
dcc.Link(f"{page['name']}", href=page["path"]),
)
for page in dash.page_registry.values()
]
),
html.Hr(),
dash.page_container,
]
)
if __name__ == "__main__":
app.run_server(debug=True)
</code></pre>
<p><strong>data.py</strong></p>
<pre><code>from dash import html, Input, Output, callback, register_page
from dash.exceptions import PreventUpdate
import random
register_page(__name__, path="/")
layout = html.Div(
[
html.H3("Data input"),
html.Button("Add row", id="button_id"),
html.Br(),
html.Div(id="my-output"),
]
)
@callback(
[Output("store", "data"), Output("my-output", "children")],
Input("button_id", "n_clicks"),
prevent_initial_call=True
)
def add_data(n_clicks):
if n_clicks:
new_data = [{"col1": "New row", "col2": random.randint(0, 1000)}]
return new_data, html.Pre(str(new_data))
else:
raise PreventUpdate
</code></pre>
<p><strong>grid.py</strong></p>
<pre><code>from dash import html, dash_table, Input, Output, callback, register_page, State
register_page(__name__)
layout = html.Div([
html.H3("Data tables"),
dash_table.DataTable(
id="table",
row_deletable=True,
column_selectable="single",
page_size=5,
persistence=True,
persisted_props=[
"data",
"columns.name",
"filter_query",
"hidden_columns",
"page_current",
"selected_columns",
"selected_rows",
"sort_by",
],
),
])
@callback(
Output("table", "data"),
Input("store", "data"),
State("table", "data"),
)
def update(new_data, existing_data):
if existing_data is not None:
return existing_data + new_data
else:
return new_data
</code></pre>
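<p>One likely culprit (a sketch, not a verified fix): the callback in <code>data.py</code> returns only the newly created row, so every click replaces the store's contents instead of appending to them. Reading the store's current value back via <code>State("store", "data")</code> and appending keeps data accumulating across visits; giving <code>dcc.Store</code> a <code>storage_type="session"</code> is another knob worth trying. The Dash decorator is shown as a comment so the append logic itself stays runnable here:</p>

```python
import random

# Hypothetical revision of the add_data callback from data.py.
# In Dash it would be decorated with:
#   @callback([Output("store", "data"), Output("my-output", "children")],
#             Input("button_id", "n_clicks"),
#             State("store", "data"),   # <- read the current contents back
#             prevent_initial_call=True)
def add_data(n_clicks, store_data):
    """Append to whatever the store already holds instead of replacing it."""
    store_data = store_data or []
    store_data.append({"col1": "New row", "col2": random.randint(0, 1000)})
    return store_data  # the real callback would also return the html.Pre child

rows = add_data(1, None)
rows = add_data(2, rows)
print(len(rows))
```

With the store accumulating correctly, the <code>grid.py</code> callback can simply return the store's data rather than trying to merge it with the table's own state.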
|
<python><plotly><plotly-dash>
|
2024-03-13 12:53:36
| 1
| 11,517
|
Lukasz Tracewski
|
78,153,853
| 6,786,996
|
Python great expectation conditional logic with pyspark
|
<p>I am trying to test a few data validation rules for my Spark DF (running Great Expectations 0.18.9). I want to add conditional logic such as: verify <strong>colA is NULL when colB is also NULL</strong>. I am referring to the syntax described in the <a href="https://docs.greatexpectations.io/docs/reference/learn/expectations/conditional_expectations/" rel="nofollow noreferrer">conditional expectations documentation</a>.
This is what I am trying:</p>
<pre><code>expectations_json = {
"expectation_suite_name": "name",
"expectations": [
{
"expectation_type": "expect_column_values_to_not_be_null",
"kwargs":
{
"column": "colA",
"row_condition" :"col(\"colB\").isNull()",
"condition_parser": "great_expectations__experimental__"
}
}
]
}
geDF = SparkDFDataset(df)
expectation_suite = ExpectationSuite(**expectations_json)
dq_json_result = geDF.validate(expectation_suite)
</code></pre>
<p>I am running the code against a Spark DF. The code doesn't raise an error, but the returned <em><strong>dq_json_result</strong></em> contains the following exception:</p>
<pre><code>"exception_info": {
"raised_exception": true,
"exception_message": "TypeError: SparkDFDataset.expect_column_values_to_not_be_null() got an unexpected keyword argument 'row_condition'"
}
</code></pre>
<p>Would appreciate any leads.</p>
|
<python><python-3.x><great-expectations>
|
2024-03-13 12:38:59
| 0
| 315
|
Nidutt
|
78,153,685
| 15,913,281
|
Callback Function when Requesting Market Data using ib_insync and Interactive Brokers
|
<p>I am trying to get market data for multiple stocks using ib_insync and Interactive Brokers. To do this I am using the <a href="https://ib-insync.readthedocs.io/api.html#ib_insync.ib.IB.events" rel="nofollow noreferrer">pendingTickersEvent</a> callback to wait for the data to arrive. However all that happens is that data for the last symbol in the watch_dict (AAPL) is returned multiple times.</p>
<p>If I use <code>ib.sleep(2)</code> instead of <code>ib.pendingTickersEvent += wait_for_market_data</code> it works OK, but it is too slow.</p>
<p>This is my first time using an asynchronous library, so I've probably done something silly, but I can't work out what it is.</p>
<pre><code>from ib_insync import *
watch_dict = {"TSLA": "NYSE",
"MSFT": "NYSE",
"AAPL": "NYSE"}
def wait_for_market_data(tickers):
print(market_data)
print()
# Create connection
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=1)
while True:
# Define stocks
for ticker in list(watch_dict.keys()):
print(ticker)
stock = Stock(ticker, watch_dict[ticker], 'USD')
print(stock)
# Request current prices
market_data = ib.reqMktData(stock, '', False, False)
#ib.sleep(2)
#print(market_data)
ib.pendingTickersEvent += wait_for_market_data
ib.run()
</code></pre>
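<p>A sketch of the likely fix (ib_insync stubbed out so the pattern itself can be exercised): the handler should use the <code>tickers</code> argument the event passes in, rather than the leftover <code>market_data</code> variable from the request loop, and it should be registered once, before the requests are made:</p>

```python
class FakeEvent:
    """Minimal stand-in for ib.pendingTickersEvent in this sketch."""
    def __init__(self):
        self._handlers = []
    def __iadd__(self, handler):
        self._handlers.append(handler)
        return self
    def emit(self, tickers):
        for h in self._handlers:
            h(tickers)

received = []

def wait_for_market_data(tickers):
    # Use the event's own payload: one entry per updated contract,
    # not the single market_data object captured from the loop.
    for t in tickers:
        received.append(t)

pending = FakeEvent()
pending += wait_for_market_data          # register once, before requesting
pending.emit(["TSLA", "MSFT", "AAPL"])   # simulate one update batch
print(received)
```

In the real script this corresponds to attaching <code>wait_for_market_data</code> to <code>ib.pendingTickersEvent</code> a single time before the <code>reqMktData</code> loop, so each symbol's updates arrive through the handler's argument.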
|
<python><interactive-brokers><ib-insync>
|
2024-03-13 12:12:03
| 1
| 471
|
Robsmith
|
78,153,381
| 50,065
|
Using uv to install packages in the bitnami/deepspeed:0.14.0 Docker image fails with 'uv: command not found'
|
<p>If I use the following <code>Dockerfile</code>:</p>
<pre><code>FROM python:3.11-bullseye
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY requirements.txt /app
RUN pip install uv && uv pip install --system --no-cache -r requirements.txt
</code></pre>
<p>Then the packages in <code>requirements.txt</code> install just fine. But if I change the first line to</p>
<pre><code>FROM bitnami/deepspeed:0.14.0
</code></pre>
<p>Then suddenly I get the error</p>
<pre><code>#8 [4/4] RUN pip install uv && uv pip install --system --no-cache -r requirements.txt
#8 0.629 Defaulting to user installation because normal site-packages is not writeable
#8 0.807 Collecting uv
#8 0.867 Downloading uv-0.1.18-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (23 kB)
#8 0.896 Downloading uv-0.1.18-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.0 MB)
#8 2.013 ββββββββββββββββββββββββββββββββββββββββ 11.0/11.0 MB 9.9 MB/s eta 0:00:00
#8 2.947 Installing collected packages: uv
#8 3.209 Successfully installed uv-0.1.18
#8 3.421 /bin/bash: line 1: uv: command not found
#8 ERROR: process "/bin/bash -o errexit -o nounset -o pipefail -c pip install uv && uv pip install --system --no-cache -r requirements.txt" did not complete successfully: exit code: 127
------
> [4/4] RUN pip install uv && uv pip install --system --no-cache -r requirements.txt:
0.629 Defaulting to user installation because normal site-packages is not writeable
0.807 Collecting uv
0.867 Downloading uv-0.1.18-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (23 kB)
0.896 Downloading uv-0.1.18-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.0 MB)
2.947 Installing collected packages: uv
3.209 Successfully installed uv-0.1.18
3.421 /bin/bash: line 1: uv: command not found
------
Dockerfile.mini:11
--------------------
9 |
10 | # Install dependency packages
11 | >>> RUN pip install uv && uv pip install --system --no-cache -r requirements.txt
--------------------
ERROR: failed to solve: process "/bin/bash -o errexit -o nounset -o pipefail -c pip install uv && uv pip install --system --no-cache -r requirements.txt"
</code></pre>
<p><strong>Edit:</strong></p>
<p>I can get it to work if I install the python packages as root:</p>
<pre><code>FROM bitnami/deepspeed:0.14.0
USER root
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY requirements.txt /app
RUN pip install uv && uv pip install --python $(which python) --no-cache -r requirements.txt
</code></pre>
<p>But is there a way to use <a href="https://github.com/astral-sh/uv" rel="nofollow noreferrer"><code>uv</code></a> to install python packages in the <a href="https://github.com/bitnami/containers/tree/main/bitnami/deepspeed" rel="nofollow noreferrer"><code>bitnami/deepspeed:0.14.0</code></a> Docker image as the <code>deepspeed</code> user?</p>
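<p>A sketch of one possible approach (untested assumption: the non-root user's <code>pip</code> falls back to a <code>--user</code> install, as the log's "normal site-packages is not writeable" suggests, so the <code>uv</code> entry point lands in <code>~/.local/bin</code>, which is not on <code>PATH</code> in that image). Rather than guessing the user's home directory to extend <code>PATH</code>, the module entry point sidesteps <code>PATH</code> entirely:</p>

```dockerfile
FROM bitnami/deepspeed:0.14.0
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY requirements.txt /app
# pip defaulted to a user install, so invoke uv through the interpreter
# that installed it instead of relying on the uv binary being on PATH:
RUN pip install uv && python -m uv pip install --python "$(which python)" --no-cache -r requirements.txt
```

If <code>python -m uv</code> is unavailable in the installed version, the alternative is to prepend the user's <code>~/.local/bin</code> to <code>PATH</code> with an <code>ENV</code> line (the exact home directory for the <code>deepspeed</code> user would need to be confirmed from the image).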
|
<python><docker><bitnami><deepspeed><uv>
|
2024-03-13 11:25:22
| 3
| 23,037
|
BioGeek
|
78,153,258
| 6,694,814
|
Python folium problem with removing NAN from the popup
|
<p>I have an Excel list with some blank columns. I would like to remove the NaN values, but I don't know how.</p>
<p>I used the <code>dropna()</code> function, but it didn't work in my case.</p>
<pre><code> df = pd.read_csv("work2.csv")
for i,row in df.iterrows():
lat = df.at[i, 'lat']
lng = df.at[i, 'lng']
sp = df.at[i, 'sp']
stat = df.at[i,'title']
skill = df.at[i,'skillset']
skill2 = df.at[i,'skillset2']
#skill3 = df.at[i(float).notna(), 'skillset3']
skill4 = df.at[i,'skillset4']
</code></pre>
<p>but I get a <code>TypeError</code>.</p>
<p>Is there any way of removing the NaN from the Python folium popup which uses data manipulated by pandas?</p>
<p>The solution proposed here:</p>
<p><a href="https://stackoverflow.com/questions/61074332/str-object-has-no-attribute-dropna">'str' object has no attribute 'dropna'</a></p>
<p>leads to the situation where I have only one input visible.</p>
<p><a href="https://i.sstatic.net/34XHx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/34XHx.png" alt="enter image description here" /></a></p>
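<p>One way to sketch this (column names taken from the question; the exact popup layout is an assumption): build the popup text per row and simply skip any field that <code>pd.notna</code> rejects, so NaN never reaches the folium popup at all:</p>

```python
import pandas as pd

def popup_text(row, fields=("title", "skillset", "skillset2", "skillset4")):
    """Join only the non-missing fields of one row into popup HTML."""
    return "<br>".join(str(row[c]) for c in fields if pd.notna(row[c]))

df = pd.DataFrame({
    "title": ["Engineer"],
    "skillset": ["Python"],
    "skillset2": [float("nan")],   # blank column: silently skipped
    "skillset4": ["GIS"],
})
print(popup_text(df.iloc[0]))
```

Inside the <code>iterrows</code> loop, the result of <code>popup_text(row)</code> can be passed straight to <code>folium.Popup</code>, which avoids the <code>dropna()</code>-on-a-string problem entirely.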
|
<python><pandas><folium>
|
2024-03-13 11:06:05
| 1
| 1,556
|
Geographos
|
78,153,123
| 3,676,262
|
Django-q2 schedule task hook on class function
|
<p>I have an object as such :</p>
<pre><code>class TestApp(models.Model):
cron = models.CharField(max_length=200)
args = models.CharField(max_length=200)
test_function = models.ForeignKey(TestFunction, on_delete=models.CASCADE)
scheduled_task = models.ForeignKey(Schedule, blank=True, null=True, on_delete=models.SET_NULL)
def get_args(self):
return ast.literal_eval(self.args)
def run_function(self):
Module = __import__(self.test_function.library_name)
func = getattr(Module, self.test_function.function_name)
result = func(*self.get_args())
print(result)
return result
def print_task(self, task):
print(self.id, task)
</code></pre>
<p>I am interested in having a schedule task as such :</p>
<pre><code>def save(self, *args, **kwargs):
self.scheduled_task = Schedule.objects.create(
func=self.run_function,
hook=self.print_task,
schedule_type=Schedule.CRON,
cron=self.cron
)
super(TestApp, self).save(*args, **kwargs)
</code></pre>
<p>But this does not work and will result something as :</p>
<pre><code>18:32:02 [Q] INFO Process-f625bf1f4a024df8be5e15647bf294a9 created task alaska-ten-october-mirror from schedule [4]
18:32:02 [Q] INFO Process-069b6ca530ae4e83be6aedbd669a94a7 processing alaska-ten-october-mirror '<bound method TestApp.run_function of <TestApp: TestApp object (1)>>' [4]
18:32:02 [Q] ERROR malformed return hook '<bound method TestApp.print_task of <TestApp: TestApp object (1)>>' for [alaska-ten-october-mirror]
18:32:02 [Q] ERROR Failed '<bound method TestApp.run_function of <TestApp: TestApp object (1)>>' (alaska-ten-october-mirror) - Function <bound method TestApp.run_function of <TestApp: TestApp object (1)>> is not defined : Traceback (most recent call last):
</code></pre>
<p>When if I do a simple async call, it works :</p>
<pre><code>def save(self, *args, **kwargs):
async_task(
func=self.run_function,
hook=self.print_task
)
super(TestApp, self).save(*args, **kwargs)
</code></pre>
<p>This will properly do the work :</p>
<pre><code>18:35:25 [Q] INFO Process-cfc73b4a7c5d48d69eed82b311f18250 processing ceiling-echo-six-west '<bound method TestApp.run_function of <TestApp: TestApp object (1)>>'
-2.0
1 ceiling-echo-six-west
</code></pre>
<p>I am not sure why the async can do it but not the schedule.</p>
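<p>A plausible explanation (hedged, based on how django-q stores schedules): <code>Schedule.func</code> is a plain text field, so the bound method is saved as its <code>repr</code> string, and the worker later tries to re-import that string as a dotted path, which fails; <code>async_task</code>, by contrast, can serialize the live callable for immediate execution. The worker-side resolution works roughly like the sketch below, which is why the usual workaround is a module-level function referenced by a dotted path, taking the instance's primary key as an argument:</p>

```python
import importlib

def resolve(dotted_path):
    """Roughly what a worker does with the stored `func` string."""
    module_path, attr = dotted_path.rsplit(".", 1)
    return getattr(importlib.import_module(module_path), attr)

# A module-level function referenced by a dotted path resolves fine:
print(resolve("math.sqrt")(9.0))

# ...while "<bound method TestApp.run_function of ...>" is not importable.
# So the schedule would be created with strings (hypothetical module names):
# Schedule.objects.create(func="myapp.tasks.run_testapp",
#                         args=str(self.pk),
#                         hook="myapp.tasks.print_task",
#                         schedule_type=Schedule.CRON, cron=self.cron)
```

The module-level <code>run_testapp(pk)</code> would then fetch the instance with <code>TestApp.objects.get(pk=pk)</code> and call <code>run_function()</code> on it.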
|
<python><asynchronous><scheduled-tasks><django-q>
|
2024-03-13 10:46:48
| 1
| 378
|
BleuBizarre
|
78,153,078
| 3,433,875
|
Overlap polar plots to create a radial tornado chart in matplotlib
|
<p>I am trying to recreate this:
<a href="https://i.sstatic.net/wtvpl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wtvpl.png" alt="enter image description here" /></a></p>
<p>And I have most of it:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
color_dict = {"Denmark": "#A54836", "Norway": "#2B314D", "Sweden": "#5375D4"}
data = {
"year": [2004, 2022, 2004, 2022, 2004, 2022],
"countries" : ["Sweden", "Sweden", "Denmark", "Denmark", "Norway", "Norway"],
"sites": [13,15,4,10,5,8]
}
df= pd.DataFrame(data)
df['sub_total'] = df.groupby('year')['sites'].transform('sum')
df = df.sort_values(['countries', 'sites'], ascending=True ).reset_index(drop=True)
fig, axes = plt.subplots(ncols=2, figsize=(10,5), facecolor = "#FFFFFF", subplot_kw=dict(polar=True) )
fig.tight_layout(h_pad=-40)
countries = df.countries.unique()
colors = color_dict.keys()
years = df.year.unique()
offsets=[0.3,0.2,0.15]
directions = [1,-1]
ylabel = [0.58, 0.68, 0.78]
for ax,year, direction in zip(axes.ravel(),years, directions):
temp_df = df[df.year==year]
for i, (country,site, color,offset,yl) in enumerate(zip(temp_df.countries, temp_df.sites, colors, offsets, ylabel)):
angle_range = np.linspace(0, site*7)
theta =[np.deg2rad(a) for a in angle_range]
r = np.full(len(angle_range), i + 1) # identical radius values to draw an arc
print(theta,r)
ax.plot(theta,
r,
linewidth=15,
solid_capstyle="round",
color=color_dict[color])
ax.text(0.49,yl, country,transform=plt.gcf().transFigure, ha = "center")
ax.annotate(site, xy= ( theta[-1],r[-1]), color="w",ha="center" ,va="center")
# increase the r limit, making a bit more space to show thick arcs correctly
ax.set_rmax(4)
ax.set_theta_zero_location('N')
ax.set_theta_direction(direction)
ax.grid(False)
#ax.set_thetamax(180)
ax.axis('off')
</code></pre>
<p>Which produces this:
<a href="https://i.sstatic.net/kYi4t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kYi4t.png" alt="enter image description here" /></a></p>
<p>and with the axes on:
<a href="https://i.sstatic.net/Wqq1O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wqq1O.png" alt="enter image description here" /></a></p>
<p>My question is:</p>
<p>Is there a way to overlap the axes so I get them closer together?</p>
<p>I cannot use <code>ax.set_thetamax(180)</code> because it cuts off the rounded edges at the start:
<a href="https://i.sstatic.net/8SZTd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8SZTd.png" alt="enter image description here" /></a></p>
<p>I have also tried <code>tight_layout</code>, which has worked with Cartesian axes but not with polar ones, and <code>plt.subplots_adjust(wspace=0, hspace=0)</code> does nothing.</p>
<p>Bonus question:
How can I position the radial labels and the data labels (country names, respectively) without hardcoding the offset?</p>
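<p>One way to pull the two half-circles together (a sketch; the rectangle values are guesses to tune by eye): skip <code>plt.subplots</code> and place the polar axes by hand with <code>fig.add_axes</code>, using overlapping <code>[left, bottom, width, height]</code> rectangles in figure coordinates. With the frames turned off, the overlap is invisible:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(10, 5), facecolor="#FFFFFF")
# Overlapping rectangles: the right axes starts well before the left one ends.
ax_left = fig.add_axes([0.05, 0.1, 0.55, 0.8], polar=True)
ax_right = fig.add_axes([0.40, 0.1, 0.55, 0.8], polar=True)
for ax, direction in zip((ax_left, ax_right), (1, -1)):
    ax.set_theta_zero_location("N")
    ax.set_theta_direction(direction)
    ax.axis("off")  # no frame, so the overlap is not visible
print(len(fig.axes))
```

The arc-plotting loop from the question can then be pointed at <code>(ax_left, ax_right)</code> instead of <code>axes.ravel()</code>, and the overlap amount adjusted by nudging the <code>left</code> values.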
|
<python><matplotlib>
|
2024-03-13 10:37:33
| 1
| 363
|
ruthpozuelo
|
78,152,798
| 6,293,886
|
Override Hydra config with experiment config- extra folder hierarchy
|
<p>I'm working through this <a href="https://hydra.cc/docs/patterns/configuring_experiments/" rel="nofollow noreferrer">Hydra doc example</a> to override the main config with an experiment config. Unlike the Hydra example, I have another level of folder hierarchy to gather all associated configs into a single sub-folder.</p>
<pre><code>conf
βββ data
βΒ Β βββ cifar100_data
βΒ Β βΒ Β βββ cifar100_extended.yaml
βΒ Β βΒ Β βββ cifar100.yaml
βΒ Β βββ default.yaml
βββ experiment
βΒ Β βββ cifar100.yaml
βββ training_config.yaml
</code></pre>
<p>In my example, I have a basic <code>training_config</code>:</p>
<pre><code>defaults:
- data: default
</code></pre>
<p><code>data/default.yaml</code>:</p>
<pre><code># @package data
dataset: mnist
n_classes: 10
</code></pre>
<p>and two config files for cifar100 data<br />
<code>cifar100_data/cifar100.yaml</code>:</p>
<pre><code>name: cifar100
n_classes: 100
</code></pre>
<p><code>cifar100_data/cifar100_extended.yaml</code>:</p>
<pre><code># @package data
defaults:
- cifar100
augmentations:
- rotate
</code></pre>
<p>I'm trying to override <code>training_config</code> with <code>experiment/cifar100</code> --> <code>python my_app.py experiment=cifar100</code>:</p>
<p><code>experiment/cifar100</code> try 1:</p>
<pre><code># @package _global_
defaults:
- override /data/cifar100_data: cifar100_extended
</code></pre>
<p>error message:</p>
<blockquote>
<p>hydra.errors.ConfigCompositionException: In 'experiment/cifar100':
Could not override 'data/cifar100_data'. No match in the defaults
list.</p>
</blockquote>
<p><code>experiment/cifar100</code> try 2:</p>
<pre><code># @package _global_
defaults:
- override /data: cifar100_data/cifar100_extended
</code></pre>
<p>error message:</p>
<blockquote>
<p>hydra.errors.MissingConfigException: In
'data/cifar100_data/cifar100_extended': Could not load
'data/cifar100'.</p>
</blockquote>
<p>How can I properly run such an experiment with an extra hierarchy compared to default config?</p>
<p><strong>EDIT following the <a href="https://stackoverflow.com/a/78158494/6293886">answer</a></strong>:<br />
I've adjusted <code>cifar100_extended.yaml</code> to:</p>
<pre><code># @package data
defaults:
- cifar100_data/cifar100
augmentations:
- rotate
</code></pre>
<p>as suggested in the <a href="https://stackoverflow.com/a/78158494/6293886">answer</a>, and was able to generate a valid config.
However, now I can't run <code>cifar100_extended.yaml</code> directly, and I receive the following error:</p>
<pre><code>hydra.errors.MissingConfigException: In 'cifar100_data/cifar100_extended': Could not load 'cifar100_data/cifar100_data/cifar100'.
</code></pre>
|
<python><configuration><fb-hydra>
|
2024-03-13 09:58:20
| 2
| 1,386
|
itamar kanter
|
78,152,796
| 2,794,152
|
How to use python imshow, for example, with the irregular data points?
|
<p>Suppose I have a list of data points of the form <code>(xi, yi, zi)</code> and I want to plot a 2D density plot from it. In Mathematica, you just call the <code>ListDensityPlot</code> function. In Python, a density plot seems to be achieved with <code>imshow</code>, but it expects data on a regular grid. How can I get a similar effect given scattered data?</p>
<p>Here is the mathematica code:</p>
<pre><code>n = 100;
xs = RandomReal[1, n];
ys = RandomReal[1, n];
zs = xs + ys;
data = Table[{xs[[i]], ys[[i]], zs[[i]]}, {i, n}];
ListDensityPlot[data, PlotRange -> All, PlotLegends -> Automatic]
</code></pre>
<p>The effect of the above code is (of course, one can set the range to remove the white regions):</p>
<p><a href="https://i.sstatic.net/Bemgf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bemgf.png" alt="enter image description here" /></a></p>
<p>In Python, I have generated the data; how do I pass it to a function <code>list_density_plot</code> to achieve a similar effect?</p>
<pre><code>def f(x, y):
return x+y
N = 100
xs = np.random.random(N)
ys = np.random.random(N)
zs = f(xs, ys)
data = [(xs[i], ys[i], zs[i]) for i in range(N)]
list_density_plot(data) # ???
</code></pre>
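<p>A sketch of one common route (SciPy assumed available): interpolate the scattered points onto a regular grid with <code>scipy.interpolate.griddata</code>, then hand the grid to <code>imshow</code>. Points outside the convex hull of the samples come back as NaN and render blank, much like the white regions in the Mathematica plot; <code>plt.tricontourf(xs, ys, zs)</code> is an alternative that skips the gridding step entirely:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless for the sketch
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

def f(x, y):
    return x + y

N = 100
rng = np.random.default_rng(0)
xs, ys = rng.random(N), rng.random(N)
zs = f(xs, ys)

# Resample the scattered (x, y, z) triples onto a 200x200 regular grid.
grid_x, grid_y = np.mgrid[0:1:200j, 0:1:200j]
grid_z = griddata((xs, ys), zs, (grid_x, grid_y), method="linear")

plt.imshow(grid_z.T, extent=(0, 1, 0, 1), origin="lower")
plt.colorbar()
plt.savefig("density.png")
print(grid_z.shape)
```

Wrapping the gridding and plotting in a <code>list_density_plot(data)</code> helper then mirrors the Mathematica call.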
|
<python><matplotlib><plot><wolfram-mathematica><density-plot>
|
2024-03-13 09:58:12
| 1
| 4,904
|
an offer can't refuse
|
78,152,486
| 1,231,714
|
Render the inside of a 3D model with a different color (not watertight)
|
<p>I am trying to render a 3D file and take a screenshot, and it works OK. My file is not a watertight model.</p>
<ol>
<li>When there is no texture (either .STL or .OBJ) and I orient (rotate) the view to look underneath, it looks black. Is there a way to assign the "inside" portion of the scan a specific color - say blue?</li>
<li>When a texture is available (for .OBJ), I want to disable the texture after loading the mesh. In the code below, the line that loads the texture is commented out.</li>
</ol>
<p>My code</p>
<pre><code>import open3d as o3d
import time
#Download url1 = 'https://gitlab.uni.lu/yliao/ghost_semi/-/raw/master/example/semi_io_simple/wall_right.stl'
file1 = 'wall_right.stl'
#mesh = o3d.io.read_triangle_mesh(file1, True) #Load texture if .obj
mesh = o3d.io.read_triangle_mesh(file1, False) #Do not load texture
mesh.compute_vertex_normals()
vis = o3d.visualization.Visualizer()
vis.create_window()
vis.clear_geometries()
vis.add_geometry(mesh)
vis.poll_events()
vis.update_renderer()
time.sleep(3)
vis.destroy_window()
</code></pre>
|
<python><3d><open3d>
|
2024-03-13 09:11:39
| 0
| 1,390
|
SEU
|
78,152,455
| 23,051,231
|
Unable to convert tiff PIL image to byte array
|
<p>I have a .tiff image loaded into a PIL Image and I'm trying to obtain its byte array.</p>
<p>This is what I'm doing:</p>
<pre><code>from PIL import Image
import io
def image_to_byte_array(image: Image) -> bytes:
imgByteArr = io.BytesIO()
image.save(imgByteArr, format=image.format)
imgByteArr = imgByteArr.getvalue()
return imgByteArr
im = Image.open(r"ImagePath")
im_bytes = image_to_byte_array(im)
</code></pre>
<p>The problem comes when I'm trying to save the image into <strong>imgByteArr</strong>.</p>
<p>Some .tiff images throw <strong>Error setting from dictionary</strong>, and additionally I get <strong>_TIFFVSetField: : Ignored tag &quot;OldSubfileType&quot; (not supported by libtiff)</strong>.</p>
<p>Is there any workaround for these cases?</p>
<p>Here is a <a href="https://drive.google.com/file/d/1dUsn8kdNhUuGDmoYnUlKguT5TJjtqF07/view?usp=sharing" rel="nofollow noreferrer">sample image</a></p>
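<p>One workaround sketch (hedged: it re-encodes the pixels, so the original TIFF metadata is deliberately dropped, including the tags libtiff refuses to re-write): copy the pixel data into a fresh image before saving:</p>

```python
from PIL import Image
import io

def image_to_byte_array(image: Image.Image) -> bytes:
    # Rebuild the image from its pixels only; source TIFF tags such as
    # OldSubfileType are not carried over, so libtiff has nothing to choke on.
    clean = Image.new(image.mode, image.size)
    clean.paste(image)
    buf = io.BytesIO()
    clean.save(buf, format=image.format or "TIFF")
    return buf.getvalue()

im = Image.new("RGB", (4, 4), "red")  # stand-in for the problem TIFF
print(image_to_byte_array(im)[:2])
```

If the TIFF metadata must survive the round trip, this approach is unsuitable, and saving in a different container format (e.g. PNG) may be the simpler escape hatch.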
|
<python><python-3.x><python-imaging-library><tiff>
|
2024-03-13 09:06:47
| 1
| 403
|
galex
|
78,152,055
| 102,960
|
stderr doesn't get captured when stdout=sys.stdout
|
<p>I'm trying to run through <code>subprocess.run()</code> a command that will request user input via <code>stdout</code>, and in case the result is a failure, say why via <code>stderr</code>.</p>
<p>I want to capture <code>stderr</code> to show a friendlier error, but let <code>stdout</code> pass-through to the user's terminal. However, I only seem to be able to capture <code>stderr</code> if I'm also capturing <code>stdout</code> (either by explicitly passing <code>stdout=sys.stdout</code> or omitting it).</p>
<p>I'm no Python expert, and this whole subprocess/pipe/capture thing is getting messy for me. I'm trying to avoid going down <code>Popen</code> or looping output lines, if possible, as this seems to be an otherwise simple task. Could someone clarify what I'm doing wrong?</p>
<blockquote>
<p>In the examples below, after the user inputs <code>n</code>, the script would error out.<br />
I use <code>#></code> to represent CLI output.</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>result = subprocess.run(cmd, text=True, stderr=subprocess.PIPE, stdout=sys.stdout)
print(result)
#> Question? [y/n] (user input starts) n <enter>
#> error: blah blah
#> CompletedProcess(args=[...], returncode=1, stderr='')
### the same happens if stdout is omitted,
### or if I try to check captured stderr from CalledProcessError instead
try:
subprocess.run(cmd, text=True, check=True, stderr=subprocess.PIPE)
except subprocess.CalledProcessError as error:
print(vars(error))
#> (user input starts) n <enter>
#> {'returncode': 1, 'cmd': [...], 'output': None, 'stderr': ''}
### However, if I also capture stdout, it works as expected...
### But then, the user cannot really interact, since the question
### is captured and never shown. The same happens if I try..except.
result = subprocess.run(cmd, text=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
print(result)
#> (user input starts) n <enter>
#> CompletedProcess(args=[...], returncode=1, stdout="Question? [y/n] ", stderr="\x07error: blah blah\n")
</code></pre>
<p><a href="https://gist.github.com/igorsantos07/160dc637eece5957125bb61e4746b424" rel="nofollow noreferrer">A reproducible gist can be found here</a>... However, after some nudges from @Barmar, I created an example using a MariaDB image and the issue doesn't show up there as well - instead of everything going to <code>stderr</code> on the last code block, everything is going to <code>stdout</code> (which is bad but sort of expected).</p>
<p>I guess this could be closed as non-reproducible, unless someone get a great idea about my problem (which, again, isn't reproducible on the gist I wrote).</p>
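<p>For what it's worth, the intended pattern does work in isolation, which supports the non-reproducible conclusion: leaving <code>stdout</code> at its default (inherit the terminal) while piping only <code>stderr</code> does capture the error text. If <code>stderr</code> still comes back empty for a particular command, that command is most likely writing its error to <code>stdout</code> or straight to the controlling TTY. A self-contained check:</p>

```python
import subprocess
import sys

# Child writes one line to each stream; only stderr is piped, while
# stdout is inherited and goes straight to the terminal.
child = [sys.executable, "-c",
         "import sys; print('shown to user'); print('error: blah', file=sys.stderr)"]
result = subprocess.run(child, text=True, stderr=subprocess.PIPE)
print(repr(result.stderr))
```

Running this prints the child's stdout line directly and leaves the stderr text in <code>result.stderr</code>, exactly the split the question is after.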
|
<python><python-3.x><python-3.8>
|
2024-03-13 07:55:23
| 0
| 4,722
|
igorsantos07
|
78,151,955
| 9,394,465
|
delimiter within column data to be ignored in pandas
|
<p>I have an input CSV file which shouldn't get modified by pandas in any way, except for the fields we explicitly change through the DataFrame. Say we use a comma (,) as the separator. Now, the file may contain:</p>
<ul>
<li>data without quotes:
<ul>
<li>plah</li>
<li>blah</li>
</ul>
</li>
<li>quotes in any order and in any number of times. i.e.,
<ul>
<li>"plah"</li>
<li>""plah</li>
<li>"""plah"</li>
<li>""pl""ah""</li>
</ul>
</li>
<li>separator within single column data (but they would be quoted left and right most for sure). i.e.,
<ul>
<li>"plah,blah"</li>
<li>""plah"",""blah"</li>
</ul>
</li>
</ul>
<p>Now, in all these cases, pandas shouldn't lose the quotes, nor should it treat a comma (,) inside a quoted column value as a field separator.</p>
<p>Reproducible code:</p>
<pre class="lang-py prettyprint-override"><code># Input file
csv_data = '''A,B,C,D,E
234,mno,C22,U,
567,pqr,"C3""",U,5555
999,abc,"C99",D,9999
678,bns,"C6,C7",F,6666
789,bcd,""""C77,T,7777
'''
# Load CSV data into dataframes
df = pd.read_csv(StringIO(csv_data), header=0, dtype=str, keep_default_na=False, engine='python', sep=',', quoting=3)
df.to_csv('output.txt', sep=',', index=False, header=True, quoting=3)
</code></pre>
<p>I get the error:</p>
<pre><code>ParserError: Expected 5 fields in line 5, saw 6
</code></pre>
<p>Expectation:</p>
<pre><code>A,B,C,D,E
234,mno,C22,U,
567,pqr,"C3""",U,5555
999,abc,"C99",D,9999
678,bns,"C6,C7",F,6666
789,bcd,""""C77,T,7777
</code></pre>
|
<python><pandas>
|
2024-03-13 07:36:58
| 1
| 513
|
SpaceyBot
|
78,151,890
| 6,803,114
|
Streamlit nested button not displaying the result on clicking
|
<p>I have a small app that accepts .txt files, processes them, and returns a statement saying that the process completed.</p>
<p>There is also another button which, on click, should show the output folder path.
Strangely, this is not working. Here is my code:</p>
<pre><code>import streamlit as st
def main():
st.title(":blue[Keyword Replacement] :twisted_rightwards_arrows:")
activity1 = ["Keyword Replacement"]
choice = st.sidebar.selectbox("What do you want to do?",activity1)
if choice == 'Keyword Replacement':
st.subheader("Please upload the file")
uploaded_file = st.file_uploader("Upload file", type={"txt"})
if uploaded_file is not None:
if st.button("Submit"):
st.write("Process completed")
output_path = r"C:\Users\Myname\directory"
if st.button('Show output folder path'):
                    st.markdown(f"**Output folder is located at:** {output_path}") ## NOT WORKING
if __name__ == '__main__':
main()
</code></pre>
<p>Here is a screenshot of the app; the highlighted yellow button should print the output path after clicking, but nothing happens.
<a href="https://i.sstatic.net/LoZrM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LoZrM.png" alt="enter image description here" /></a></p>
<p>Am I missing something here?</p>
<p>I came across session state but couldn't understand it. Is that why I'm not able to get the desired output when clicking the nested button?</p>
<p>Any leads would be helpful.</p>
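<p>A likely cause (hedged): every widget interaction reruns the whole script, and on the rerun triggered by the inner button, <code>st.button("Submit")</code> returns <code>False</code> again, so the inner branch is never reached. The usual fix is to record the outer click in <code>st.session_state</code> and gate the inner button on that flag, i.e. <code>if st.button("Submit"): st.session_state["submitted"] = True</code> followed by <code>if st.session_state.get("submitted"): ...</code>. The sketch below exercises that rerun logic with a plain dict standing in for <code>st.session_state</code>:</p>

```python
session_state = {}  # stand-in for st.session_state in this sketch

def rerun(submit_clicked, show_path_clicked):
    """One full script rerun, mirroring the question's control flow."""
    if submit_clicked:                      # if st.button("Submit"):
        session_state["submitted"] = True   #     st.session_state["submitted"] = True
    if session_state.get("submitted"):      # gate on the remembered click
        if show_path_clicked:               # if st.button('Show output folder path'):
            return "Output folder is located at: C:\\Users\\Myname\\directory"
    return None

rerun(submit_clicked=True, show_path_clicked=False)        # first click
print(rerun(submit_clicked=False, show_path_clicked=True)) # second click now works
```

The key point is that the flag survives the rerun while the button's return value does not.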
|
<python><python-3.x><streamlit>
|
2024-03-13 07:22:44
| 0
| 7,676
|
Shubham R
|
78,151,830
| 4,045,434
|
Handling multiple formats: Convert to datetime (MM:DD:YYYY)
|
<p>I'm trying to convert a column containing values like May 24, 1960, Mar. 1, 1990, Aug. 22, 1981 and May 1, 1953 into dates formatted as MM:DD:YYYY.</p>
<p>Notice that some have a '.' between the month and the day and some don't.</p>
<pre><code>df['Customer DOB'] = pd.to_datetime(df['Customer DOB'], format='%b.%d,%Y').dt.strftime('%m:%d:%Y')
</code></pre>
<p>I'm getting the error <code>time data "May 24, 1960" doesn't match format "%b. %d, %Y"</code>, which makes sense, as the format only handles the cases with a '.'. How do I account for both cases?</p>
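<p>Two hedged options: strip the optional dot first so a single format covers both variants, or (on pandas 2.0+) let <code>format="mixed"</code> infer the format per element:</p>

```python
import pandas as pd

df = pd.DataFrame({"Customer DOB": ["May 24, 1960", "Mar. 1, 1990",
                                    "Aug. 22, 1981", "May 1, 1953"]})

# Normalize away the optional '.' so one strftime format matches everything.
cleaned = df["Customer DOB"].str.replace(".", "", regex=False)
df["Customer DOB"] = pd.to_datetime(cleaned, format="%b %d, %Y").dt.strftime("%m:%d:%Y")
print(df["Customer DOB"].tolist())

# Alternative on pandas >= 2.0 (no pre-cleaning needed):
#   pd.to_datetime(df["Customer DOB"], format="mixed")
```

Normalizing first keeps the parse strict (a malformed value still raises), whereas <code>format="mixed"</code> trades that strictness for convenience.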
|
<python><pandas><date><datetime>
|
2024-03-13 07:12:31
| 0
| 305
|
AdR
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.