| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,867,313
| 3,353,525
|
Python Requests lib - How to get IP address of connected host after getting the response?
|
<p>I'm trying to understand how I can access information about the host that the request connected to. I have a situation where, behind a hostname, there are multiple IP addresses that you can connect to; for example, <code>x.com</code> can route me to <code>1.2.3.4</code> or <code>2.3.4.5</code>. I need that information after receiving the response from requests. I'm using Python 3.10, if that matters.</p>
<p>My code is as follows:</p>
<pre><code>from requests import Session
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry_strategy = Retry(total=1,
                       backoff_factor=0.5,
                       status_forcelist=[400, 408, 429, 500, 501, 502, 503, 504],
                       raise_on_status=False)
self.adapter = HTTPAdapter(max_retries=retry_strategy)
# other, not important code
s = Session()
s.mount('https://', self.adapter)
response = s.get(url, headers=self.headers, stream=True, timeout=60)
</code></pre>
<p>Where (if anywhere) should I look for that kind of information? I found something about <code>response.raw</code>, but the connection attribute there is <code>None</code>.</p>
|
<python><python-3.x><python-requests><urllib>
|
2024-01-23 15:10:07
| 0
| 474
|
drajvver
|
77,867,075
| 42,126
|
Python cProfile module throws exception in Lambda
|
<p>I'm trying to profile the execution of a function within my Lambda handler:</p>
<pre><code>import cProfile
import time

def myfunc(reps, sleeptime):
    for x in range(reps):
        print(f"rep {x}")
        time.sleep(sleeptime)

def lambda_handler(event, context):
    x = 3
    y = 0.25
    cProfile.run('myfunc(x, y)', sort="cumtime")
</code></pre>
<p>However, when I run this Lambda, I get the following output:</p>
<pre><code>[ERROR] NameError: name 'myfunc' is not defined
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 12, in lambda_handler
    cProfile.run('myfunc(x, y)', sort="cumtime")
  File "/var/lang/lib/python3.10/cProfile.py", line 17, in run
    return _pyprofile._Utils(Profile).run(statement, filename, sort)
  File "/var/lang/lib/python3.10/profile.py", line 54, in run
    prof.run(statement)
  File "/var/lang/lib/python3.10/cProfile.py", line 96, in run
    return self.runctx(cmd, dict, dict)
  File "/var/lang/lib/python3.10/cProfile.py", line 101, in runctx
    exec(cmd, globals, locals)
</code></pre>
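The string form of `cProfile.run` always `exec`s the statement in `__main__`'s namespace, and in Lambda the handler module is not `__main__`, so `myfunc` is not visible there. A hedged sketch of two workarounds: pass namespaces explicitly with `runctx`, or skip string evaluation entirely with `Profile.runcall`:

```python
import cProfile
import io
import pstats
import time

def myfunc(reps, sleeptime):
    for x in range(reps):
        time.sleep(sleeptime)

def lambda_handler(event, context):
    x = 3
    y = 0.01
    # Option 1: tell cProfile which namespaces to exec the statement in.
    cProfile.runctx("myfunc(x, y)", globals(), locals(), sort="cumtime")

    # Option 2: no string evaluation at all -- profile the call directly.
    profiler = cProfile.Profile()
    profiler.runcall(myfunc, x, y)
    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumtime").print_stats(5)
    return stream.getvalue()
```

`runcall` also returns the wrapped function's result, which is convenient when the profiled function produces the handler's response.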
|
<python><amazon-web-services><aws-lambda>
|
2024-01-23 14:35:36
| 1
| 39,699
|
kdgregory
|
77,866,966
| 1,562,335
|
Implement a round robin tournament without playing twice with the same partner
|
<p>I want to organize a badminton tournament with doubles teams, where no one plays twice with the same partner, using Python. I created a graph using nx.Graph and implemented a function to get the list of unplayed partners for each round (each player keeps track of already-played partners). But unfortunately, after a few rounds, the algorithm fails to find unplayed and available partners (because the remaining unplayed partners have already been picked for other teams). This is my first step, so let's say I have an even number of players.</p>
<p>With that context, how can I implement a team matcher which rotates partners in such a way that everyone plays in each round, and everyone has a different partner?</p>
<p>Here is my draft of matcher class:</p>
<pre><code>import itertools

import networkx as nx


class Player:
    def __init__(self, name: str):
        self.name = name
        self.previous_partners = set()
        self.score = 0.0

    def __str__(self):
        return self.name


class Team:
    def __init__(self, players: list):
        if len(players) != 2:
            raise ValueError('A team must contain exactly two players')
        self.players = players

    def __str__(self):
        return ' / '.join([str(player) for player in self.players])


class RoundRobinMatcher:
    @staticmethod
    def _create_graph(players: list[Player]) -> nx.Graph:
        graph = nx.Graph()
        for player in players:
            graph.add_node(player, score=player.score)
            for opponent in player.previous_partners:
                graph.add_edge(player, opponent)
        return graph

    @staticmethod
    def _find_unplayed_pairs(graph: nx.Graph) -> list[tuple[Player, Player]]:
        unplayed_pairs = []
        for pair in itertools.combinations(graph.nodes, 2):
            if not graph.has_edge(*pair):
                unplayed_pairs.append(pair)
        return unplayed_pairs

    def match(self, players: list[Player]) -> list[Team]:
        graph = self._create_graph(players)
        unplayed_pairs = self._find_unplayed_pairs(graph)
        teams = []
        for pair in unplayed_pairs:
            for team in teams:
                if pair[0] in team.players or pair[1] in team.players:
                    break
            else:
                teams.append(Team(pair))
        return teams


if __name__ == '__main__':
    players = [Player('a'), Player('b'), Player('c'), Player('d'),
               Player('e'), Player('f'), Player('g'), Player('h')]
    matcher = RoundRobinMatcher()
    teams = matcher.match(players)
    for team in teams:
        print(team)
</code></pre>
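For an even number of players there is a classical construction that needs no search at all: the circle (polygon) method, one way to 1-factorize the complete graph. Fix one player, rotate the rest each round, and pair the i-th player from the front with the i-th from the back; over n-1 rounds every pair occurs exactly once, so the greedy dead-ends above cannot happen. A hedged sketch, independent of the classes in the question:

```python
def partner_rounds(players):
    """Circle method: n-1 rounds, every pair of players exactly once."""
    n = len(players)
    if n % 2:
        raise ValueError("needs an even number of players")
    rotation = list(players[1:])
    rounds = []
    for _ in range(n - 1):
        lineup = [players[0]] + rotation
        # pair the i-th from the front with the i-th from the back
        rounds.append([(lineup[i], lineup[n - 1 - i]) for i in range(n // 2)])
        rotation = rotation[-1:] + rotation[:-1]  # rotate by one position
    return rounds

schedule = partner_rounds(["a", "b", "c", "d", "e", "f", "g", "h"])
```

Each round is a perfect matching, so everyone plays; turning these partner pairs into actual doubles matches (team vs. team) is a separate, easier step.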
|
<python><graph><round-robin><tournament>
|
2024-01-23 14:17:49
| 0
| 588
|
AwaX
|
77,866,590
| 17,194,313
|
How do you create and update an iceberg table via Polars or PyArrow?
|
<p>So far I've been happy with using parquet as the data format to persist my data, but I'm being encouraged to try iceberg as an alternative approach.</p>
<p>Keen to give it a go, but not sure how to start.</p>
<p>let's say I have a <code>polars</code> dataframe in Python:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({'x':[1]})
</code></pre>
<p>How can I persist it as an Iceberg table on disk (or in blob storage)?</p>
<p>Polars can read iceberg tables, so for now I'm mainly trying to get to the write step.</p>
<p>Note that I'm happy to write via another library (<code>pyarrow</code> or <code>pyiceberg</code> I suppose) - but it's not clear what's the best way to do it.</p>
|
<python><dataframe><python-polars><pyarrow><apache-iceberg>
|
2024-01-23 13:14:07
| 2
| 3,075
|
MYK
|
77,866,553
| 13,158,157
|
Write selected columns from pandas to excel with xlsxwriter efficiently
|
<p>I’m trying to write some of the columns of a pandas dataframe to an Excel file, but it’s taking too long. Writing all of the data to the file with xlsxwriter is much faster. How can I write only some of the columns to the file without sacrificing performance?</p>
<p>My code for writing selected columns so far:</p>
<pre><code>for col_num, (col_idx, col_name) in enumerate(zip(column_indexes, selected_columns)):
    for row_num in range(2, max_rows):
        value = df.iloc[row_num - 2][col_name]
        if pd.isna(value):
            value = ''
        worksheet.write(row_num - 1, col_num, value)
</code></pre>
|
<python><pandas><xlsxwriter>
|
2024-01-23 13:08:10
| 1
| 525
|
euh
|
77,866,543
| 8,971,668
|
create pydantic computed field with invalid syntax name
|
<p>I have to model a pydantic class from a JSON object which contains some keys that are not valid Python identifiers.</p>
<p>As an example:</p>
<pre class="lang-py prettyprint-override"><code>example = {
    "$type": "Menu",
    "name": "lunch",
    "children": [
        {"$type": "Pasta", "title": "carbonara"},
        {"$type": "Meat", "is_vegetable": False},
    ]
}
</code></pre>
<p>My pydantic classes at the moment looks like:</p>
<pre class="lang-py prettyprint-override"><code>class Pasta(BaseModel):
    title: str


class Meat(BaseModel):
    is_vegetable: bool


class Menu(BaseModel):
    name: str
    children: list[Pasta | Meat]
</code></pre>
<p>Now, this works except for the <code>$type</code> field. If the field was called "dollar_type", I would simply create the following <code>TranslationModel</code> base class and let <code>Pasta</code>, <code>Meat</code> and <code>Menu</code> inherit from <code>TranslationModel</code>:</p>
<pre class="lang-py prettyprint-override"><code>class TranslationModel(BaseModel):
    @computed_field
    def dollar_type(self) -> str:
        return self.__class__.__name__
</code></pre>
<p>so that by executing <code>Menu(**example).model_dump()</code> I get</p>
<pre class="lang-json prettyprint-override"><code>{
    'dollar_type': 'Menu',
    'name': 'lunch',
    'children': [
        {'dollar_type': 'Pasta', 'title': 'carbonara'},
        {'dollar_type': 'Meat', 'is_vegetable': False}
    ]
}
</code></pre>
<p>But sadly I have to strictly follow the original JSON structure, so I have to use <code>$type</code>.
I have tried using <code>alias</code> and <code>model_validator</code> by following the documentation, but without success.</p>
<p>How could I solve this?
Thanks in advance</p>
|
<python><pydantic><pydantic-v2>
|
2024-01-23 13:06:19
| 2
| 542
|
ndricca
|
77,866,436
| 159,695
|
How to use numba with readonly dict as input?
|
<p>I tried <code>typed.Dict.empty(types.uint8[:], types.int32)</code>, but it is not working.</p>
<pre><code>import struct

import scipy as sp
import numpy as np
import numba as nb


#@nb.njit(parallel=True)
def process_uids(uid, data, XYtoBCID):
    indices = np.where(data == uid)
    uid_row = np.zeros(data.size, dtype=bool)
    hitCnt, misCnt = 0, 0
    for idx in nb.prange(indices[0].size):
        ptX, ptY = indices[0][idx], indices[1][idx]
        #ptStr = struct.pack("@HH", ptX, ptY)
        ptS = np.array([ptX, ptY], dtype=np.uint16)
        ptS.setflags(write=0, align=0)
        ptStr = ptS.tobytes()
        if ptStr in XYtoBCID:
            bcid = XYtoBCID[ptStr]
            uid_row[bcid-1] = True
            hitCnt += 1
        else:
            misCnt += 1
    return hitCnt, misCnt, uid_row


def main():
    np.random.seed(42)
    data = np.random.choice([0, 1, 2, 3], size=(20, 20))
    print(data)
    random_values = np.random.choice(np.arange(1, 301), size=(20, 20), replace=True)
    indices_to_replace = np.random.choice(range(20 * 20), size=100, replace=False)
    random_values_flat = random_values.flatten()
    random_values_flat[indices_to_replace] = 0
    random_values = random_values_flat.reshape((20, 20))
    XYtoBCID = {}
    #XYtoBCID = nb.typed.Dict.empty(nb.types.uint8[:], nb.types.int32)
    for idx, value in enumerate(random_values.flatten(), start=1):
        posX, posY = np.unravel_index(idx-1, random_values.shape)
        #posStr = struct.pack("@HH", posX, posY)
        posStr = np.array([posX, posY], dtype=np.uint16)
        if value > 0:
            XYtoBCID[posStr.tobytes()] = value
    #print(XYtoBCID)
    values, counts = np.unique(data, return_counts=True)
    mtxCellBarcode = sp.sparse.lil_matrix((values[-1], data.size), dtype=bool)
    hitCnt, misCnt = 0, 0
    for uid in values[1:]:
        hit, miss, uid_row = process_uids(uid, data, XYtoBCID)
        hitCnt += hit
        misCnt += miss
        mtxCellBarcode[uid-1, :] = uid_row
    print(f'Hit:{hitCnt}, Miss:{misCnt}.')


if __name__ == "__main__":
    main()
</code></pre>
|
<python><dictionary><numba>
|
2024-01-23 12:49:26
| 1
| 2,072
|
Galaxy
|
77,866,435
| 590,335
|
vscode supports python scripts that can be run interactively - is there an option to have a section that runs only in interactive mode?
|
<p>vscode supports python scripts that can be run interactively - is there an option to have a section that runs only in interactive mode?</p>
<p>That is, this snippet should be run as a notebook cell in interactive mode, but not when running as a script.</p>
<p>A possible option would be a flag like the <code>__name__</code> variable - but it appears to be <code>"__main__"</code> both when running as a script and when running interactively.</p>
<p>This <a href="https://stackoverflow.com/questions/19309287/how-to-intermittently-skip-certain-cells-when-running-ipython-notebook">question</a> includes some workarounds, but I'm hoping for something cleaner.</p>
|
<python><visual-studio-code><jupyter-notebook>
|
2024-01-23 12:49:08
| 0
| 8,467
|
Ophir Yoktan
|
77,866,356
| 3,512,538
|
building and installing extensions during pip install -e
|
<p>I have a <code>setup.py</code> with a CMake extension:</p>
<pre class="lang-py prettyprint-override"><code>from setuptools.command.build_ext import build_ext


class CmakeExtension(build_ext):
    ...


if __name__ == "__main__":
    setup(
        ...
        cmdclass={
            "build_ext": CmakeExtension,  # Build the C++ extension using cmake
        },
        ext_modules=[
            Extension('a', sources=[]),
            Extension('b', sources=[]),
        ]
    )
</code></pre>
<p>Building this using <code>python3 setup.py bdist_wheel</code> or <code>python3 -m build . --wheel</code> works nicely - a wheel is created and it contains the relevant cmake object file.</p>
<p>However, <code>pip install -e .</code> fails to call the cmake procedure - leaving me with only the python files, without the critical shared object.</p>
<p>I can circumvent this by first building a wheel, and then installing it, but I have 2 issues with it:</p>
<ol>
<li>the petty reason - I prefer a single command (<code>pip install</code> vs. <code>setup.py bdist_wheel && pip install</code>)</li>
<li>the main reason - I want my installation to be editable, and installing a wheel is not</li>
</ol>
<p>The <a href="https://setuptools.pypa.io/en/latest/userguide/development_mode.html" rel="nofollow noreferrer">docs</a> say:</p>
<blockquote>
<p>The editable term is used to refer only to Python modules inside the package directories. Non-Python files, external (data) files, executable script files, binary extensions, headers and metadata may be exposed as a snapshot of the version they were at the moment of the installation.</p>
</blockquote>
<p>Would that allow me to combine editable installation with the binary extension?</p>
|
<python><cmake><pip>
|
2024-01-23 12:39:23
| 0
| 12,897
|
CIsForCookies
|
77,866,250
| 470,994
|
See the raw IMAP commands when using imaplib or imap_tools
|
<p>I am writing a Python 3 script to handle IMAP mailboxes, and I am having some problems with the server. When I connect to it "by hand" and try to send raw IMAP commands, the server is unresponsive, and yet when I run the script it does what it does correctly.</p>
<p>Anyway, in order to find out what's happening with the server, I need to see the raw IMAP commands that my script is sending, and especially the server's responses. Is there any way to make either imaplib or imap_tools (the two libraries that I've tried) log that information?</p>
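imaplib has a built-in protocol trace: a module-level `Debug` level picked up by newly created connections, and a per-instance `debug` attribute; at level 4 or higher every raw command and response line is printed to stderr. imap_tools wraps imaplib, so the same attribute should be reachable through the wrapped client (hedged: the exact attribute path may differ between imap_tools versions). A minimal sketch:

```python
import imaplib

# Default debug level for connections created from now on (0-5).
# At >= 4, imaplib prints every raw command/response line to stderr.
imaplib.Debug = 4

# Or per connection, after creating it:
# conn = imaplib.IMAP4_SSL("imap.example.com")
# conn.debug = 4

# With imap_tools, the wrapped imaplib object is usually exposed, e.g.
# (attribute name hedged -- check your imap_tools version):
# mailbox = MailBox("imap.example.com").login(user, password)
# mailbox.client.debug = 4
```

Because the trace goes to stderr, redirect it to a file (`python3 script.py 2> imap.log`) to compare against your by-hand session.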
|
<python><python-3.x><imaplib><imap-tools>
|
2024-01-23 12:22:50
| 1
| 1,716
|
PaulJ
|
77,865,821
| 11,861,874
|
Insert PNG Image to Reportlab PDF using Python
|
<p>I have gone through multiple topics on similar questions, but I am not able to export an image to PDF. When I use the function below on a simpler example, it works fine; however, I can't share the other image for which it doesn't work. Is there any way to debug and understand why it doesn't work?</p>
<pre><code>from reportlab.platypus import Image
from reportlab.lib.units import inch
import matplotlib.pyplot as plt
from io import BytesIO


def fig2image(f):
    buf = BytesIO()  # uses the imported BytesIO ('io.BytesIO' would need 'import io')
    f.savefig(buf, format='png', dpi=300)
    buf.seek(0)
    x, y = f.get_size_inches()
    return Image(buf, x * inch, y * inch)


# Worked example:
abc, ax = plt.subplots(dpi=400, figsize=(4, 4))
plt.plot([1, 2, 3, 4])
plt.savefig('abc.png')
fig2image(abc)
</code></pre>
|
<python><io><reportlab>
|
2024-01-23 11:06:15
| 1
| 645
|
Add
|
77,865,780
| 23,179,206
|
Merging of PySimpleGUI EXE (PyInstaller) icons on the taskbar
|
<p>Windows 10<br />
Python 3.10.2<br />
PySimpleGUI 4.60.5<br />
pyinstaller 3.6.0</p>
<p>Any PySimpleGUI EXEs generated using pyinstaller share the same icon on the taskbar when using PySimpleGUI 4.60.5 (the most recent version). But this issue does not occur with PySimpleGUI 4.57.0 (though it does also happen with the release following 4.57.0, i.e. 4.58.0).</p>
<p><a href="https://i.sstatic.net/XXVUG.png" rel="nofollow noreferrer">Two different EXEs with different names (which would normally have different and separate icons) sharing the same icon on the taskbar</a></p>
<p>Running a third PySimpleGUI 4.60.5 EXE, the same thing happens, and all three share the same icon.</p>
<p>This happens with even the simplest of programs:</p>
<pre><code>import PySimpleGUI as sg

layout = [[sg.T(f"PSG version: {sg.version}")]]
window = sg.Window('Taskbar icon issue', layout)
while True:
    event, values = window.read()
    if event == sg.WINDOW_CLOSED:
        break
window.close()
</code></pre>
<p>The pyinstaller command takes the form:
<code>pyinstaller --onefile --noconsole main.py --icon icon.ico</code>
and using an earlier version of pyinstaller does not change anything.</p>
<p>Is this a known issue? I couldn’t find any references to it.</p>
|
<python><pyinstaller><pysimplegui>
|
2024-01-23 11:00:14
| 0
| 910
|
Lee-xp
|
77,865,727
| 6,197,439
|
How to run bash shell echo in subprocess.run?
|
<p>On a Raspbian Stretch Linux, this <code>echo</code> command runs fine in the <code>bash</code> shell:</p>
<pre class="lang-none prettyprint-override"><code>$ echo -n -e "Hello World\n"
Hello World
</code></pre>
<p>Now, I want to execute this same command via Python 3's <code>subprocess.run</code>, passed as an array/list; however:</p>
<pre class="lang-none prettyprint-override"><code>$ python3 -c 'import subprocess; result=subprocess.run(["echo", "-n", "-e", r"Hello World\n"], shell=True, check=True, stdout=subprocess.PIPE); print(result.stdout)'
b'\n'
</code></pre>
<p>... I just get a measly linefeed as output?! Why - and how can I get the full output that I expect?</p>
<p>Note that the Python 3 version here is:</p>
<pre class="lang-none prettyprint-override"><code>$ python3 --version
Python 3.5.3
</code></pre>
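The root cause: on POSIX, `shell=True` combined with a *list* runs `/bin/sh -c 'echo' -n -e 'Hello World\n'` — only the first element becomes the shell command; the rest become the shell's positional parameters, so a bare `echo` prints just a newline. A hedged sketch of the working variants (`capture_output` needs Python 3.7+, so on 3.5 use `stdout=subprocess.PIPE`; also note `echo -e` is not portable across `/bin/sh` implementations, `printf` is safer):

```python
import subprocess

# Broken pattern from the question: list + shell=True.
# Runs: /bin/sh -c 'echo' ignored   ('ignored' becomes the shell's $0)
broken = subprocess.run(["echo", "ignored"], shell=True, capture_output=True)

# Fix 1: give the shell a single string.
fixed_shell = subprocess.run("echo Hello World", shell=True, capture_output=True)

# Fix 2: drop shell=True and keep the list (runs the echo binary directly,
# so flags like -n reach echo itself).
fixed_list = subprocess.run(["echo", "-n", "Hello World"], capture_output=True)
```

The outputs assume a POSIX system with a coreutils-style `echo` on the PATH.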
|
<python><subprocess>
|
2024-01-23 10:54:29
| 1
| 5,938
|
sdbbs
|
77,865,518
| 4,277,485
|
Reading partitioned multi-schema parquet files from S3 using Polars
|
<p>I have 1000+ S3 files in a partitioned path and want to read all the files. I am using Polars because it is fast compared to Pandas.<br></p>
<blockquote>
<p>s3://bucket_name/rs_tables/name='part1'/key='abc'/date=''/part1_0000.parquet</p>
</blockquote>
<p>Scanning these files using Polars <br></p>
<pre><code>source = "s3://bucket_name/rs_tables/*/*/*/*.parquet"
storage_options = {
    "aws_access_key_id": access_key,
    "aws_secret_access_key": secret_key,
    "aws_session_token": token,
}
lazyFrame = pl.scan_parquet(source, storage_options=storage_options)
lazyFrame.collect()
</code></pre>
<p>Since these files have different schemas, the code throws a compute error:</p>
<blockquote>
<p>ComputeError: schema of all files in a single scan_parquet must be equal</p>
</blockquote>
<p>Is there any option like <code>mergeSchema</code> in Spark? Please suggest solutions to this problem.</p>
|
<python><dataframe><amazon-s3><partitioning><python-polars>
|
2024-01-23 10:18:23
| 1
| 438
|
Kavya shree
|
77,865,181
| 1,341,024
|
.relationship(…, cascade="all") does not delete children when multiple parents "bulk" deleted using session.execute(delete(Parent))
|
<p>I wrote some code in Python for managing some database stuff.
All I want to do is delete all parents of a one-to-many relationship, which should automatically delete all children. Here is my example code:</p>
<pre><code>from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship, Session
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Parent(Base):
    __tablename__ = 'parents'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    single_parent = True
    children = relationship("Child", back_populates="parent", cascade="all, delete-orphan")


class Child(Base):
    __tablename__ = 'children'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    parent_id = Column(Integer, ForeignKey('parents.id'))
    parent = relationship('Parent', back_populates='children')


engine = create_engine("sqlite:///test.db", echo=True)
Base.metadata.create_all(bind=engine)
session = Session(engine)

# create data
parent1 = Parent(name='Parent 1', children=[Child(name='Child 1'), Child(name='Child 2')])
parent2 = Parent(name='Parent 2', children=[Child(name='Child 3'), Child(name='Child 4')])
session.add_all([parent1, parent2])
session.commit()

# Check
print("Before:")
print(session.query(Parent).all())
print(session.query(Child).all())

# Delete
session.query(Parent).delete()
session.commit()
</code></pre>
<p>When running this, the parent rows are deleted but none of the child rows.
What am I doing wrong here?
Thanks a lot for your help!</p>
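`session.query(Parent).delete()` issues a single bulk `DELETE FROM parents` and deliberately bypasses ORM-level cascades such as `delete-orphan`, which only fire when objects go through the unit of work (`session.delete(parent)` per loaded parent). The usual alternatives are to loop `session.delete()` over the parents, or to move the cascade into the database (`ForeignKey('parents.id', ondelete='CASCADE')` plus `passive_deletes=True` on the relationship). A hedged stdlib sketch of the database-level cascade — note that SQLite additionally needs `PRAGMA foreign_keys = ON`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default
conn.execute("CREATE TABLE parents (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    """CREATE TABLE children (
           id INTEGER PRIMARY KEY,
           name TEXT,
           parent_id INTEGER REFERENCES parents(id) ON DELETE CASCADE
       )"""
)
conn.execute("INSERT INTO parents (id, name) VALUES (1, 'Parent 1')")
conn.executemany(
    "INSERT INTO children (name, parent_id) VALUES (?, ?)",
    [("Child 1", 1), ("Child 2", 1)],
)
before = conn.execute("SELECT COUNT(*) FROM children").fetchone()[0]
conn.execute("DELETE FROM parents")  # bulk delete; the database cascades
after = conn.execute("SELECT COUNT(*) FROM children").fetchone()[0]
```

With the FK declared this way, even a bulk ORM delete removes the children, because the database itself enforces the cascade.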
|
<python><sqlite><sqlalchemy><cascading-deletes>
|
2024-01-23 09:24:39
| 1
| 344
|
CodeCannibal
|
77,865,010
| 3,751,263
|
Run Python script in each subfolder automatically
|
<p>I have this file structure:</p>
<ol>
<li><p>Main Folder: <code>W001-W100</code>.</p>
<ul>
<li><p>Subfolder: <code>W001</code></p>
<ul>
<li>Around 500 DICOM files (I am working with CT data).</li>
</ul>
</li>
<li><p>Subfolder: <code>W002</code></p>
<ul>
<li>Around 500 DICOM files (I am working with CT data).</li>
</ul>
</li>
<li><p>Subfolder: <code>...</code></p>
</li>
<li><p>Subfolder: <code>W100</code></p>
<ul>
<li>Around 500 DICOM files (I am working with CT data).</li>
</ul>
</li>
</ul>
</li>
</ol>
<p><a href="https://i.sstatic.net/g9gtV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g9gtV.png" alt="enter image description here" /></a></p>
<p>I am using Slicer software and Python to automate the process of getting surface meshes automatically.</p>
<p>This is the Python script I wrote that needs to be run in each subfolder.</p>
<p>As you can see, there are two lines with the location of the subfolder:</p>
<ul>
<li>Second line, specifying the location of the subfolder</li>
<li>Almost the last line, indicating where to save the <code>.ply</code> file.</li>
</ul>
<p>As I have around 1000 subfolders, I don't want to run the script while changing those lines by hand. In other words, I want to automate the process.</p>
<p>I am much more familiar with R than with Python. So:</p>
<ul>
<li>could please indicate me how can I automate the process in detail by using Python?</li>
<li>could you please edit the code to adapt to my needs?</li>
<li>could you please show me a step-by-step process?</li>
</ul>
<pre><code># Load DICOM files
dicomDataDir = "C:/Users/mario.modesto/Desktop/DICOM/W001"  # input folder with DICOM files
import os
baboon_skull = os.path.basename(dicomDataDir)
loadedNodeIDs = []  # this list will contain the list of all loaded node IDs

from DICOMLib import DICOMUtils
with DICOMUtils.TemporaryDICOMDatabase() as db:
    DICOMUtils.importDicom(dicomDataDir, db)
    patientUIDs = db.patients()
    for patientUID in patientUIDs:
        loadedNodeIDs.extend(DICOMUtils.loadPatientByUID(patientUID))
# Display volume rendering
logic = slicer.modules.volumerendering.logic()
volumeNode = slicer.mrmlScene.GetNodeByID('vtkMRMLScalarVolumeNode1')
displayNode = logic.CreateVolumeRenderingDisplayNode()
displayNode.UnRegister(logic)
slicer.mrmlScene.AddNode(displayNode)
volumeNode.AddAndObserveDisplayNodeID(displayNode.GetID())
logic.UpdateDisplayNodeFromVolumeNode(displayNode, volumeNode)
# find the files NodeID
volumeNode = getNode('2: Facial Bones 0.75 H70h')
#create a blank Markup ROI
roiNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsROINode")
#set the new markup ROI to the dimensions of the volume
cropVolumeParameters = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLCropVolumeParametersNode")
cropVolumeParameters.SetInputVolumeNodeID(volumeNode.GetID())
cropVolumeParameters.SetROINodeID(roiNode.GetID())
slicer.modules.cropvolume.logic().SnapROIToVoxelGrid(cropVolumeParameters) # optional (rotates the ROI to match the volume axis directions)
slicer.modules.cropvolume.logic().FitROIToInputVolume(cropVolumeParameters)
slicer.mrmlScene.RemoveNode(cropVolumeParameters)
#set the cropping parameters
cropVolumeLogic = slicer.modules.cropvolume.logic()
cropVolumeParameterNode = slicer.vtkMRMLCropVolumeParametersNode()
cropVolumeParameterNode.SetIsotropicResampling(True)
#set the output resolution to 2 millimeters. units in slicer is always in mm.
cropVolumeParameterNode.SetSpacingScalingConst(2)
cropVolumeParameterNode.SetROINodeID(roiNode.GetID())
cropVolumeParameterNode.SetInputVolumeNodeID(volumeNode.GetID())
#do the cropping
cropVolumeLogic.Apply(cropVolumeParameterNode)
#obtain the nodeID of the cropped volume
croppedVolume = slicer.mrmlScene.GetNodeByID(cropVolumeParameterNode.GetOutputVolumeNodeID())
# Segmentation
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes() # only needed for display
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(croppedVolume)
addedSegmentID = segmentationNode.GetSegmentation().AddEmptySegment("skull")
# Create segment editor to get access to effects
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setMasterVolumeNode(croppedVolume)
# Thresholding
segmentEditorWidget.setActiveEffectByName("Threshold")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MinimumThreshold","115")
effect.setParameter("MaximumThreshold","3071")
effect.self().onApply()
# Clean up
segmentEditorWidget = None
slicer.mrmlScene.RemoveNode(segmentEditorNode)
# Make segmentation results visible in 3D
segmentationNode.CreateClosedSurfaceRepresentation()
# Creating surface mesh and saving
surfaceMesh = segmentationNode.GetClosedSurfaceInternalRepresentation(addedSegmentID)
writer = vtk.vtkPLYWriter()
writer.SetInputData(surfaceMesh)
writer.SetFileName("C:/Users/mario.modesto/Desktop/DICOM/"+baboon_skull+"_surfaceMesh.ply")
writer.Update()
</code></pre>
<p>I would really appreciate your help.</p>
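The looping pattern is independent of Slicer: wrap the script in a function that takes the subfolder path as an argument, derive both `dicomDataDir` and the output `.ply` path from it, and iterate over the subfolders of the main folder. A hedged sketch with a placeholder where the Slicer pipeline above would go:

```python
from pathlib import Path

def process_subfolder(dicom_dir: Path) -> Path:
    """Stand-in for the Slicer pipeline: the input directory and the
    output file are both derived from dicom_dir, not hard-coded."""
    out_file = dicom_dir.parent / f"{dicom_dir.name}_surfaceMesh.ply"
    # ... run the DICOM loading / cropping / segmentation steps here,
    # using str(dicom_dir) as dicomDataDir and str(out_file) for the writer ...
    return out_file

def run_all(root: Path) -> list[str]:
    """Apply process_subfolder to every direct subfolder of root, in order."""
    processed = []
    for sub in sorted(p for p in root.iterdir() if p.is_dir()):
        process_subfolder(sub)
        processed.append(sub.name)
    return processed
```

In the real script, `run_all(Path("C:/Users/mario.modesto/Desktop/DICOM"))` would visit `W001` through `W100` one after another; it can be pasted into Slicer's Python console unchanged.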
|
<python><automation><subdirectory>
|
2024-01-23 08:50:46
| 1
| 2,800
|
antecessor
|
77,864,935
| 11,922,765
|
Python Seaborn Heatmap Turnoff main plot keep the cbar
|
<p>I am drawing a figure of sub-plots where each sub-plot is a heatmap. I turned off all colorbars, and now I want to plot the colorbar separately. While doing so, I am getting the main heatmap plot as well. I don't want it; I want to turn it off.</p>
<p>My code:</p>
<pre><code>df1 =
Datetime
2022-08-04 15:06:00 53.4 57.8
2022-08-04 15:07:00 54.3 57.0
2022-08-04 15:08:00 53.7 57.0
2022-08-04 15:09:00 54.3 57.2
2022-08-04 15:10:00 55.4 57.5
2023-10-21 14:00:00 43.9 35.2
2023-10-21 14:01:00 43.4 34.8
2023-10-21 14:02:00 43.4 34.7
2023-10-21 14:03:00 43.3 34.5
2023-10-21 14:04:00 42.7 33.9
# Heatmap: draw colorbar only, turn off the plot
sns.heatmap(np.reshape(df1.dropna().values.flatten(),(-1,1)),cbar=True,cmap='jet')
plt.show()
</code></pre>
<p>Present output:</p>
<p><a href="https://i.sstatic.net/3Dx5U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3Dx5U.png" alt="enter image description here" /></a></p>
<p>Expected output:</p>
<p><a href="https://i.sstatic.net/SzeCG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SzeCG.png" alt="enter image description here" /></a></p>
|
<python><pandas><numpy><seaborn><heatmap>
|
2024-01-23 08:39:26
| 0
| 4,702
|
Mainland
|
77,864,794
| 4,784,914
|
How to get the project `config/` directory from inside an installed Python package?
|
<p>I have a Python database interface package, which requires configured credentials to work. I would like to tuck those away in a config file, that's not committed to the interface repository. So I have the following structure in mind:</p>
<pre><code>my_project/
|- config/
| |- database.json
|- main.py (Contains 'from my_interface import Connection')
|- ...
</code></pre>
<pre><code>my_interface/
|- src/
| |- module.py
| |- ...
|- ...
</code></pre>
<p><strong>But how can I make <code>my_interface</code> find the config directory in the project that's importing it?</strong></p>
<p>The interface could be installed system-wide, per user, or in a <code>venv</code>, and it could be editable, so trying to navigate up from inside <code>module.py</code> (with <code>__file__</code>) is not reliable.</p>
<p>I found a bunch of similar questions, but they don't seem to tackle how to approach this from inside an imported package, instead relying on <code>__file__</code>:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/52994289/python-package-can-i-look-for-config-yml-file-from-project-directory-which-call">Python package: Can I look for config/yml file from project directory which called it?</a></li>
<li><a href="https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure">Python - Get path of root project structure</a></li>
</ul>
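The usual way out is to stop guessing from inside the package and make the location explicit: accept a path argument, fall back to an environment variable, and only then default to a conventional location under the *caller's* working directory. A hedged sketch — the function, the env-var name `MY_PROJECT_CONFIG`, and the default path are all illustrative, not part of any existing API:

```python
import json
import os
from pathlib import Path

def load_db_config(explicit_path=None):
    """Resolve the credentials file: argument > env var > ./config/database.json."""
    candidate = Path(
        explicit_path
        or os.environ.get("MY_PROJECT_CONFIG")
        or Path.cwd() / "config" / "database.json"
    )
    if not candidate.is_file():
        raise FileNotFoundError(
            f"no database config at {candidate}; pass a path or set MY_PROJECT_CONFIG"
        )
    return json.loads(candidate.read_text())
```

`my_project/main.py` then passes the loaded dict (or the path) into the interface explicitly — for example `Connection(config=load_db_config())` — and `my_interface` never needs to know where it was installed or where the project lives.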
|
<python><package><config>
|
2024-01-23 08:10:22
| 1
| 1,123
|
Roberto
|
77,864,704
| 4,281,353
|
Annotated Transformer - Why x + DropOut(Sublayer(LayerNorm(x)))?
|
<p>Please clarify if the <a href="https://nlp.seas.harvard.edu/annotated-transformer/#encoder-and-decoder-stacks" rel="nofollow noreferrer">Annotated Transformer</a> Encoder LayerNorm implementation is correct.</p>
<p><a href="https://i.sstatic.net/8rLZkm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8rLZkm.png" alt="enter image description here" /></a></p>
<p><a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">Transformer paper</a> says the output of the sub layer is <code>LayerNorm(x + Dropout(SubLayer(x)))</code>.</p>
<p><a href="https://i.sstatic.net/bl9MS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bl9MS.png" alt="enter image description here" /></a></p>
<p><code>LayerNorm</code> should be applied <strong>after</strong> the <code>DropOut(SubLayer(x))</code> as per the paper:</p>
<p><a href="https://i.sstatic.net/HrYPa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HrYPa.png" alt="enter image description here" /></a></p>
<p>However, the <a href="https://nlp.seas.harvard.edu/annotated-transformer/#encoder-and-decoder-stacks" rel="nofollow noreferrer">Annotated Transformer</a> implementation does <code>x + DropOut(SubLayer(LayerNorm(x)))</code> where <code>LayerNorm</code> is applied <strong>before</strong> <code>Sublayer</code>, which is the other way around.</p>
<pre><code>class SublayerConnection(nn.Module):
    """
    A residual connection followed by a layer norm.
    Note for code simplicity the norm is first as opposed to last.
    """
    def __init__(self, size, dropout):
        super(SublayerConnection, self).__init__()
        self.norm = LayerNorm(size)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, sublayer):
        "Apply residual connection to any sublayer with the same size."
        return x + self.dropout(sublayer(self.norm(x)))  # <--- LayerNorm before SubLayer
</code></pre>
|
<python><pytorch><transformer-model><encoder>
|
2024-01-23 07:49:43
| 1
| 22,964
|
mon
|
77,864,142
| 3,295,036
|
Pass SocketIO server instance to FastAPI routers as dependency
|
<p>I'm using <code>python-socketio</code> and trying to pass the server instance to my app routers as dependency:</p>
<p><code>main.py</code> file:</p>
<pre><code>import socketio
import uvicorn
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from src.api.rest.api_v1.route_builder import build_routes

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_credentials=True,
    allow_methods=["*"],
)

sio = socketio.AsyncServer(cors_allowed_origins="*", async_mode="asgi")
app = build_routes(app)
sio_asgi_app = socketio.ASGIApp(sio, app)


@app.get("/")
async def root():
    return {"message": "Hello from main server !"}


def start_server():
    api_host = "0.0.0.0"
    api_port = "8000"
    uvicorn.run(
        "src.api.main:sio_asgi_app",
        host=api_host,
        port=int(api_port),
        log_level="info",
        reload=True,
    )


if __name__ == "__main__":
    start_server()
</code></pre>
<p><code>product_routes.py</code> file:</p>
<pre><code>from typing import Annotated

import socketio
from fastapi import APIRouter, Depends

router = APIRouter(prefix="/products")


@router.get("/emit")
async def emit_message(
    sio: Annotated[socketio.AsyncServer, Depends()]
) -> None:
    await sio.emit("reply", {"message": "Hello from server !"})
</code></pre>
<p><code>build_routes.py</code> file:</p>
<pre><code>def build_routes(app: FastAPI) -> FastAPI:
    app.include_router(
        r_products,
        prefix="/v1",
        dependencies=[Depends(socketio.AsyncServer)],
    )
    return app
</code></pre>
<p>So at this point I'm not sure how to pass that dependency to the routes.</p>
<p>When I hit the <code>emit</code> endpoint I get this error response:</p>
<pre><code>{
"detail": [
{
"type": "missing",
"loc": [
"query",
"kwargs"
],
"msg": "Field required",
"input": null,
"url": "https://errors.pydantic.dev/2.5/v/missing"
}
]
}
</code></pre>
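<p>For context, the pattern I believe is needed (a framework-free sketch; <code>FakeAsyncServer</code> and <code>get_sio</code> are stand-in names of mine, not from my app) is to depend on a provider <em>function</em> that returns the already-created instance, since <code>Depends(socketio.AsyncServer)</code> asks FastAPI to construct a server from request data, which produces the "missing kwargs" error:</p>

```python
# Sketch of the provider pattern, framework-free so it runs anywhere.
# In the real app, `sio` would be the socketio.AsyncServer from main.py.

class FakeAsyncServer:
    """Stand-in for socketio.AsyncServer, just for this sketch."""
    async def emit(self, event, data):
        pass

sio = FakeAsyncServer()  # created once at startup, as in main.py

def get_sio():
    # Provider function: FastAPI would call this per request and inject
    # its return value; no constructor arguments are required.
    return sio

print(get_sio() is sio)  # every dependant sees the same instance
```

<p>The route would then declare <code>sio: socketio.AsyncServer = Depends(get_sio)</code> instead of <code>Annotated[socketio.AsyncServer, Depends()]</code> (this is my assumption about the intended wiring).</p>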
|
<python><fastapi>
|
2024-01-23 05:29:54
| 1
| 1,111
|
maudev
|
77,864,134
| 2,142,728
|
Pip dependency resolution algorithm (in editable installs)
|
<p>There are different and unclear explanations everywhere.
I've checked the contents of wheels and sdists. I've learned about <code>setup.py</code>, <code>setup.cfg</code>, <code>pyproject.toml</code>, <code>dist-info</code> (and its <code>METADATA</code> file), and about the <code>PKG-INFO</code> file, and I cannot reconcile all that info.</p>
<p>How does <code>pip</code> infer transitive dependencies? Where should that info live in built projects (wheels), in source projects (sdists), and in editable installs?</p>
<p>Thanks in advance</p>
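<p>To make the question concrete: my current understanding (which may be wrong) is that for installed and built projects the dependency list lives in the <code>dist-info</code> <code>METADATA</code> file as <code>Requires-Dist:</code> lines, readable via the stdlib. A quick sketch, assuming <code>pip</code> itself is installed in the environment:</p>

```python
from importlib import metadata

# Read the dist-info METADATA of an installed distribution (pip here,
# since it is almost always present).
meta = metadata.metadata("pip")
print(meta["Name"])  # distribution name, straight from METADATA

# Declared dependencies come from the "Requires-Dist:" lines;
# this returns None when the distribution declares none.
reqs = metadata.requires("pip")
print(reqs)
```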
|
<python><pip><setuptools><python-poetry><transitive-dependency>
|
2024-01-23 05:27:06
| 0
| 3,774
|
caeus
|
77,864,062
| 2,805,482
|
Streamlit display generated images with download button in form
|
<p>Hi, I have a use case with a Streamlit form where I let the user upload images, and based on the input images I generate multiple images. I write these images to a tmp directory and want to show them on the page along with a download button.</p>
<p>Now my challenge: when I call <code>show_image()</code> after <code>generate</code>, it throws an error saying a download button is not allowed in a form.
And when I call <code>show_image</code> after <code>main</code>, it gets executed as soon as the form loads, before the images are even generated.</p>
<p>How can I add the image display and download option after the execution of the <code>generate</code> method?</p>
<p>Error:</p>
<pre><code>StreamlitAPIException: st.download_button() can't be used in an st.form().
For more information, refer to the documentation for forms.
Traceback:
File "/Users/anup.rawka/Documents/GitHub/vdil-ai-framework/creative_automation/app/scripts/creative_app.py", line 65, in <module>
main()
File "/Users/anup.rawka/Documents/GitHub/vdil-ai-framework/creative_automation/app/scripts/creative_app.py", line 62, in main
show_image()
File "/Users/anup.rawka/Documents/GitHub/vdil-ai-framework/creative_automation/app/scripts/creative_app.py", line 35, in show_image
st.download_button(
</code></pre>
<pre><code>def show_image():
for i in range(1, 7):
img = Image.open(r"{}{}.png".format(base_path, i))
buf = BytesIO()
img.save(buf, format="PNG")
byte_im = buf.getvalue()
st.image(img)
st.download_button(
label="Download image", data=byte_im,
file_name="{}.png".format(i), mime="image/png")
def main():
st.image('../images/1st_screen_masthead_3up_1824_412.png', caption='3up 1st Screen Masthead Template 1824x412')
# side_bar, work_area = st.columns([0.2, 0.8])
setup_sidebar()
with st.form("US STVP 2Program 1stScreenMasthead 1824x412"):
img_tpfm_shw_1 = st.file_uploader("Choose show 1 image")
img_tpfm_shw_1_logo = st.file_uploader("Choose show 1 logo")
img_tpfm_shw_2 = st.file_uploader("Choose show 2 image")
img_tpfm_shw_2_logo = st.file_uploader("Choose show 2 logo")
img_tpfm_shw_3 = st.file_uploader("Choose show 3 image")
img_tpfm_shw_3_logo = st.file_uploader("Choose show 3 logo")
submit_button = st.form_submit_button('Generate Creative')
if submit_button:
img_shw1 = iio.imread(img_tpfm_shw_1)
img_shw2 = iio.imread(img_tpfm_shw_2)
img_shw1_logo = iio.imread(img_tpfm_shw_1_logo)
img_shw2_logo = iio.imread(img_tpfm_shw_2_logo)
img_shw3 = iio.imread(img_tpfm_shw_3)
img_shw3_logo = iio.imread(img_tpfm_shw_3_logo)
generate(img_shw1, img_shw1_logo, img_shw2,
img_shw2_logo, img_shw3,
img_shw3_logo)
main()
show_image()
</code></pre>
|
<python><streamlit>
|
2024-01-23 05:02:45
| 0
| 1,677
|
Explorer
|
77,864,039
| 17,610,082
|
How to avoid LOGENTRIES_TOKEN spam logs in django?
|
<p>When I run <code>python manage.py <somecmd></code>, I'm getting the below error:</p>
<pre class="lang-bash prettyprint-override"><code>It appears the LOGENTRIES_TOKEN parameter you entered is incorrect!
</code></pre>
<p>How can I disable this message? It is spamming the access log.</p>
<p>I've tried to control it using <code>log_level</code> so far, but it's not working.</p>
|
<python><python-3.x><django><debugging><logging>
|
2024-01-23 04:54:17
| 1
| 1,253
|
DilLip_Chowdary
|
77,863,948
| 12,352,239
|
load numpy array into pyspark
|
<p>I have an input json file where each row is an ID and a corresponding numpy array stored in base64. How can I load this file into pyspark?</p>
<p>I have tried creating a udf to do this:</p>
<pre><code>from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, DoubleType
import base64
def decode_base64_and_convert_to_numpy(base64_string):
decoded_bytes = base64.b64decode(base64_string)
decoded_str = decoded_bytes.decode('utf-8')
decoded_list = json.loads(decoded_str)
return np.array(decoded_list)
decode_udf = udf(decode_base64_and_convert_to_numpy, ArrayType(DoubleType()))
</code></pre>
<p>but when I invoke it I get an encoding error:</p>
<pre><code>numpy_loaded_embeddings = raw_input.withColumn('numpy_embedding', decode_udf('model_output'))
</code></pre>
<pre><code>An error was encountered:
An exception was thrown from the Python worker. Please see the stack trace below.
Traceback (most recent call last):
File "<stdin>", line 7, in decode_base64_and_convert_to_numpy
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x93 in position 0: invalid start byte
Traceback (most recent call last):
File "/mnt/yarn/usercache/livy/appcache/application_1705979808797_0003/container_1705979808797_0003_01_000001/pyspark.zip/pyspark/sql/dataframe.py", line 607, in show
print(self._jdf.showString(n, 20, vertical))
File "/mnt/yarn/usercache/livy/appcache/application_1705979808797_0003/container_1705979808797_0003_01_000001/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1322, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/mnt/yarn/usercache/livy/appcache/application_1705979808797_0003/container_1705979808797_0003_01_000001/pyspark.zip/pyspark/sql/utils.py", line 196, in deco
raise converted from None
pyspark.sql.utils.PythonException:
An exception was thrown from the Python worker. Please see the stack trace below.
Traceback (most recent call last):
File "<stdin>", line 7, in decode_base64_and_convert_to_numpy
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x93 in position 0: invalid start byte
</code></pre>
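<p>One observation: <code>0x93</code> is the first byte of the <code>\x93NUMPY</code> magic that <code>np.save</code> writes, so I suspect the base64 wraps raw <code>.npy</code> bytes rather than UTF-8 JSON. A round-trip sketch of that assumption (synthetic data, not my real file):</p>

```python
import base64
import io

import numpy as np

def decode_npy_b64(b64_string):
    # Assumption: the base64 payload is np.save output (.npy bytes),
    # which is why utf-8 decoding fails on the 0x93 magic byte.
    raw = base64.b64decode(b64_string)
    return np.load(io.BytesIO(raw), allow_pickle=False)

# Round trip with synthetic data to sanity-check the assumption
buf = io.BytesIO()
np.save(buf, np.array([1.0, 2.0, 3.0]))
encoded = base64.b64encode(buf.getvalue()).decode("ascii")
decoded = decode_npy_b64(encoded)
print(decoded)
```

<p>Inside the UDF I would return <code>decoded.tolist()</code>, since <code>ArrayType(DoubleType())</code> expects a Python list rather than an ndarray.</p>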
|
<python><apache-spark><pyspark>
|
2024-01-23 04:07:47
| 2
| 480
|
219CID
|
77,863,752
| 920,599
|
Jupyter Notebook reports syntax error on code that isn't present
|
<p>Has anybody ever seen a Jupyter notebook give an error referring to code that is not actually present in the notebook?</p>
<p>Notice that in the image below, the Syntax error refers to line 6 and the text <code>return amounts: {result}")</code></p>
<p>This code does not appear in this block (or any other block in the notebook). Nor does the string <code>s: {result}")</code> which is the substring that appears in the error, but not in line 6.</p>
<p><a href="https://i.sstatic.net/qf0CA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qf0CA.png" alt="enter image description here" /></a></p>
|
<python><jupyter-notebook>
|
2024-01-23 02:40:24
| 1
| 6,694
|
Zack
|
77,863,395
| 3,042,018
|
Python Turtle Graphics in GitHub Codespace
|
<p>Is it possible to run Python Turtle Graphics from a GitHub Codespace? I tried and got:</p>
<pre><code>File "/usr/local/python/3.10.13/lib/python3.10/tkinter/__init__.py", line 2299, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
</code></pre>
|
<python><tkinter><python-turtle><github-codespaces>
|
2024-01-22 23:58:34
| 0
| 3,842
|
Robin Andrews
|
77,863,391
| 253,039
|
Reparenting the resources of a class to another class
|
<p>Suppose I have a Python object <code>A</code> with field <code>field</code>. <code>A.field</code> is very expensive, say a highly nested dict. At some point, after doing all necessary work with the <code>A</code> instance, I call <code>A.getB()</code>, which hands me a new kind of object <code>B</code>. The function <code>getB</code> will chew up and modify <code>field</code> in such a way that <code>field</code> violates all expectations that <code>A</code> had of it, but it will be in the form that <code>B</code> needs. If <code>field</code> were cheap, I'd simply <code>deepcopy</code> it, modify it, and give the copy to <code>B</code>, but I can't do that. How do I poison or break <code>A</code> so that it isn't reused?</p>
<p>In Rust we have ownership, where the type system could be used to swallow <code>A</code> up and prevent it from being used again, but Python doesn't provide such guarantees. What would be the most Pythonic way to deal with this situation?</p>
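<p>To illustrate what I mean by poisoning, here is a self-contained sketch (all names are mine): <code>getB</code> hands <code>field</code> over without copying, then swaps the instance's class so any later use fails loudly.</p>

```python
class Consumed:
    """What an A becomes after getB(): any attribute access raises."""
    def __getattr__(self, name):
        raise RuntimeError("this A was consumed by getB(); do not reuse it")

class B:
    def __init__(self, field):
        self.field = field  # takes ownership, no deepcopy

class A:
    def __init__(self, field):
        self.field = field
    def getB(self):
        b = B(self.field)          # hand the expensive field over
        self.__dict__.clear()      # drop our references to it
        self.__class__ = Consumed  # poison: later use of this A raises
        return b

a = A({"deeply": {"nested": "dict"}})
b = a.getB()
print(b.field)  # the dict moved to B; touching `a` now raises RuntimeError
```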
|
<python><memory>
|
2024-01-22 23:57:00
| 1
| 402
|
Evan
|
77,863,269
| 7,563,454
|
Making threads in a pool notice changes to global variables
|
<p>I'm running into an interesting circumstance with pools which I don't fully understand. I was aware that if you edit any variable or object from a thread, changes will not be applied to the main thread and only exist in the isolated reality of that worker thread. Now however I'm noticing that threads in a pool don't even detect changes to a global variable made by the main thread, if those changes are made after the pool is started or even before then.</p>
<pre><code>import multiprocessing as mp
variable = 0
def double(i):
return i * variable
def main():
pool = mp.Pool()
for result in pool.map(double, [1, 2, 3]):
print(result)
variable = 1
main()
</code></pre>
<p>Obviously a simplification for the sake of example, in my case I need threads to see updates to the contents of a list modified by the main loop which is an object property. The funny thing is that even if I move <code>variable = 1</code> before <code>pool = mp.Pool()</code> in my test, the threads always see 0 and never notice the variable changing to 1.</p>
<p>What does work when using objects is changing the variable on the object whose function is associated with the thread. The weird thing then is that performance on the main thread drops significantly, as it uses a lot more CPU each call: it's as if merely informing the thread pool of changes to a list adds a great amount of effort.</p>
<p>What is the most efficient and cheap way to make a thread pool see changes to a global or object variable modified by the main thread, so each time you run <code>pool.map_async</code> or <code>pool.apply_async</code> threads work with the updated version of that var?</p>
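<p>For comparison, this is the behaviour I expected, reproduced with <code>multiprocessing.pool.ThreadPool</code> (real threads sharing memory) instead of <code>mp.Pool</code> (separate processes, each of which gets a copy of the parent's globals at startup):</p>

```python
from multiprocessing.pool import ThreadPool

variable = 0

def double(i):
    # Threads share the interpreter's memory, so they read the
    # global's current value at call time.
    return i * variable

variable = 1  # updated before the pool runs

with ThreadPool(2) as pool:
    results = pool.map(double, [1, 2, 3])

print(results)  # the workers saw variable == 1
```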
|
<python><python-3.x><multithreading><threadpool>
|
2024-01-22 23:12:43
| 1
| 1,161
|
MirceaKitsune
|
77,863,214
| 9,983,652
|
Reconcile channels with varying data acquisition frequencies
|
<p>My dataframe has 8 columns holding 4 different <em>data</em> channels, each recorded as a function of <em>depth</em>. However, each channel came with a different acquisition frequency. I would like to reconcile all 4 channels onto a common depth interval.</p>
<p>Interpolation with SciPy looked complex, so is there an easier method to, for example, apply the first <em>depth</em> grid (<code>'depth1'</code>) to all the other <em>data</em> channels?</p>
<p>Here is a data sample:</p>
<pre><code>df=pd.read_csv(file,sep='\t')
df
depth1 data1 depth2 data2 depth3 data3 depth4 data4
0 910.0 32.820 910 48.2 910.05 450.57 912.961414 -294.045478
1 910.1 33.610 911 48.2 910.20 1.14 922.966707 -447.780089
2 910.2 33.900 912 48.2 910.35 1.14 932.972000 -396.001844
3 910.3 34.190 913 48.4 910.50 1.43 942.976616 -391.830800
4 910.4 34.430 914 48.7 910.65 1.32 952.980427 -438.514022
5 910.5 34.670 915 48.9 910.80 1.54 962.984317 -679.421100
6 910.6 35.015 916 48.8 910.95 16.08 972.988514 -660.389044
7 910.7 35.360 917 49.0 911.10 8.16 982.993188 -671.841567
8 910.8 35.450 918 49.5 911.25 7.67 992.998200 -712.625933
9 910.9 35.540 919 49.4 911.40 8.86 1003.004001 -884.093533
10 911.0 35.825 920 49.5 911.55 8.70 1013.009802 -1124.780022
11 911.1 36.110 921 49.6 911.70 7.93 1023.015603 -1454.342144
</code></pre>
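<p>In case it helps frame the question: the operation I'm after seems to be plain linear interpolation of each channel onto the <code>depth1</code> grid (what <code>numpy.interp</code> does in one call). A dependency-free sketch of that idea, using a couple of values from the sample above:</p>

```python
from bisect import bisect_left

def interp(x, xp, fp):
    """Linear interpolation of x onto sorted sample depths xp with values
    fp -- the same idea as numpy.interp, written out with the stdlib."""
    if x <= xp[0]:
        return fp[0]
    if x >= xp[-1]:
        return fp[-1]
    i = bisect_left(xp, x)
    t = (x - xp[i - 1]) / (xp[i] - xp[i - 1])
    return fp[i - 1] + t * (fp[i] - fp[i - 1])

# Resample channel 2 onto the first few depth1 values from the sample
depth2 = [910, 911, 912]
data2 = [48.2, 48.2, 48.2]
on_depth1 = [interp(d, depth2, data2) for d in (910.0, 910.1, 910.2)]
print(on_depth1)
```

<p>With pandas this would become something like <code>df['data2_on_1'] = np.interp(df['depth1'], df['depth2'], df['data2'])</code>, repeated per channel.</p>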
|
<python><pandas><interpolation><reindex>
|
2024-01-22 22:51:34
| 1
| 4,338
|
roudan
|
77,863,213
| 3,289,890
|
Addressing column header misalignment when using termcolor with pandas
|
<p>I'm currently facing a challenge while attempting to apply color to a single column in a Pandas DataFrame. I have this code snippet that accomplishes the task, but it introduces some side effects that are proving challenging to resolve.</p>
<pre><code>import pandas as pd
from termcolor import colored
# Sample DataFrame
data = {'A': [1.10, -2.22, 3.33, -4.40],
'B': [-5.50, 6.61, 7.70, -8.80]}
df = pd.DataFrame(data)
print(df)
# Function to apply color to a single value
def colorize_value(value):
return colored(value, 'blue') if value > 0 else colored(value, 'red')
# Specify the column you want to colorize
column_to_colorize = 'A'
# Apply color to the specified column using apply
df[column_to_colorize] = df[column_to_colorize].apply(colorize_value)
# Display the result
print(df)
</code></pre>
<p>While the above code achieves the desired colorization effect, I've encountered the following issues:</p>
<ol>
<li><strong>Column header misalignment:</strong> The column header appears misaligned after applying the colorization.</li>
<li><strong>Justification of column values:</strong> The colorization messes up the justification of the column values.</li>
</ol>
<p><a href="https://i.sstatic.net/T35C4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T35C4.png" alt="enter image description here" /></a></p>
<p>I managed to address the justification issue by modifying the <code>colorize_value</code> function to include formatting:</p>
<pre><code>def colorize_value(value):
return colored("%.2f" % value, 'blue') if value > 0 else colored("%.2f" % value, 'red')
</code></pre>
<p>However, the problem of column misalignment persists, and I haven't found a satisfactory solution yet.</p>
<p>Any suggestions or insights on how to resolve the column misalignment issue would be greatly appreciated.</p>
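<p>For what it's worth, my working theory is that the misalignment happens because the ANSI escape sequences are invisible on screen but still counted when pandas sizes the column. Padding the text to a fixed width <em>before</em> wrapping it in color codes keeps the visible width constant (this sketch hardcodes the escape codes that termcolor would emit):</p>

```python
import re

ANSI = re.compile(r"\033\[[0-9;]*m")  # matches ANSI color escape codes

def colorize(value, width=10):
    """Pad first, then wrap in ANSI color codes (blue positive, red
    negative) so the *visible* width stays fixed at `width`."""
    text = f"{value:>{width}.2f}"
    code = "\033[34m" if value > 0 else "\033[31m"
    return f"{code}{text}\033[0m"

s = colorize(-5.50)
visible = ANSI.sub("", s)
print(repr(visible))  # 10 visible characters, right-justified
```

<p>The header can still misalign because pandas counts the escape bytes when computing column widths; rendering the frame manually with fixed-width columns (or stripping codes when measuring) seems to be the only full fix.</p>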
|
<python><pandas><termcolor>
|
2024-01-22 22:51:12
| 1
| 1,008
|
Boris L.
|
77,863,198
| 558,639
|
pip fails: _in_process.py missing (but _in_process.pyc is present)
|
<p>TL;DR: What do I need to do in order to install the python <code>spidev</code> on my single-board linux machine?</p>
<p>On a semi-custom Linux system:</p>
<pre><code># uname -a
Linux sama7 5.15.32-linux4microchip-2022.04 #1 Thu Jun 9 10:03:39 CEST 2022 armv7l GNU/Linux
</code></pre>
<p>I'm trying to <code>pip install spidev</code>, but I get this error:</p>
<pre><code># pip install spidev --user
Collecting spidev
Using cached spidev-3.6.tar.gz (11 kB)
Installing build dependencies ... done
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/usr/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py'
</code></pre>
<p>What's interesting is that <code>_in_process.pyc</code> exists in that directory, but not <code>_in_process.py</code>:</p>
<pre><code># ls -l /usr/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/
total 16
-rw-r--r-- 1 root root 881 Jun 9 2022 __init__.pyc
-rw-r--r-- 1 root root 9775 Jun 9 2022 _in_process.pyc
</code></pre>
<p>Only slightly less interesting: I cannot find the cached <code>spidev-3.6*</code> anywhere on the system:</p>
<pre><code># find / -name "spidev-3.6.*"
#
</code></pre>
<h2>Update</h2>
<p>I just noticed there <em>is</em> a <code>_in_process.py</code> present, but in another directory:</p>
<pre><code># find / -name "_in_process.*"
/usr/lib/python3.8/site-packages/pipenv/patched/pip/_vendor/pyproject_hooks/_in_process/_in_process.py
/usr/lib/python3.8/site-packages/pipenv/patched/pip/_vendor/pyproject_hooks/_in_process/_in_process.pyc
/usr/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.pyc
</code></pre>
|
<python><linux><pip>
|
2024-01-22 22:45:15
| 0
| 35,607
|
fearless_fool
|
77,862,780
| 23,260,297
|
Read csv converts numeric column to data type string instead of float
|
<p>I am reading data from 2 files with similar formats using pandas. Both files contain the same column with all numeric values (positive and negative). One file reads the data in as floats and the other file reads the data in as strings. I need to format both columns with this line of code:</p>
<pre><code>df['MTMValue'] = df['MTMValue'].apply(lambda x: "${:.2f}".format(x))
</code></pre>
<p>This works for one of the files, but for the other file it gives me this error.</p>
<pre><code>could not convert string to float: '(241507.20)'
</code></pre>
<p>I am unsure why one file is converting this column to string data type and the other converts it to float data type.</p>
<p>Is there a way to work around this error?</p>
<p>I have parsed through both files and the formatting of each column is exactly the same. I am unsure why this behavior is happening.</p>
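<p>For reference, <code>(241507.20)</code> looks like accounting notation for a negative number, which would explain why pandas keeps that column as strings in one of the files. A converter sketch (the helper name is mine):</p>

```python
def parse_accounting(text):
    """Convert accounting-style strings to floats: '(241507.20)' means
    -241507.20; commas are treated as thousands separators."""
    s = str(text).strip().replace(",", "").replace("$", "")
    if s.startswith("(") and s.endswith(")"):
        return -float(s[1:-1])
    return float(s)

print(parse_accounting("(241507.20)"))  # -241507.2
```

<p>Applying it before the currency formatting, e.g. <code>df['MTMValue'] = df['MTMValue'].apply(parse_accounting)</code>, should make the <code>"${:.2f}".format</code> step work for both files.</p>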
<p>here is the code for the file that is reading correctly:</p>
<pre><code>def clean_cols(df):
for col in df.columns:
if col not in ['Commodity', 'FixedPriceStrike', 'Quantity', 'MTMValue']:
for i in range(0, len(df)):
df.at[i, col] = str.strip(df[col].iloc[i])
return df
def get_float_prices(df):
return df
# specify path to file
file = "Y:\HedgeFundRecon\JAron\JAron.csv"
# specify column headers
header = ['TradeID', 'TradeDate', 'Commodity', 'StartDate', 'ExpiryDate', 'FixedPriceStrike', 'FixedPriceCurr', 'Deal Type', 'Quantity', 'MTMValue']
# read file into dataframe
df = pd.read_csv(file, header= None, skiprows=11, names = header)
# create empty dataframe with same column headings
formatted_df = pd.DataFrame(columns=header)
# loop through dataframe
for i in range(0, len(df)-1, 2):
# locate two consecutive rows
row1 = df.iloc[i]
row2 = df.iloc[i + 1] if i + 1 < len(df) else None
# check if the third consecutive row is empty
# if it is empty we need to combine this row as well
if str.strip(df.iloc[i + 2, 0]) == '':
row3 = df.iloc[i + 2] if i + 2 < len(df) else None
else:
row3 = None
# create a dictionary
combined_row = {}
# loop through each column
# if the column needs to be combined,
# get the data from the rows and store it in dictionary
for col in df.columns:
if col in ['Commodity', 'Quantity', 'FixedPriceStrike']: # Specify the columns to be combined
if row3 is None:
combined_row[col] = [row1[col], row2[col] if row2 is not None else None]
else:
combined_row[col] = [row1[col], row2[col] if row2 is not None else None, row3[col]]
else:
combined_row[col] = row1[col]
# append dictionary to dataframe
formatted_df = formatted_df._append(combined_row, ignore_index=True)
# add new columns
formatted_df['Counterparty'] = 'JAron'
# drop rows that contain NaN values
formatted_df = formatted_df.dropna(how='any')
# drop columns
formatted_df = formatted_df.drop('FixedPriceCurr', axis=1)
formatted_df = formatted_df.drop('Deal Type', axis=1)
# clean columns
formatted_df = clean_commodity(formatted_df)
formatted_df = clean_cols(formatted_df)
# format date columns
formatted_df['StartDate'] = pd.to_datetime(formatted_df['StartDate'], format='%d%b%y')
formatted_df['ExpiryDate'] = pd.to_datetime(formatted_df['ExpiryDate'], format='%d%b%y')
formatted_df['TradeDate'] = pd.to_datetime(formatted_df['TradeDate'], format='%d%b%y')
# get deal type
formatted_df = get_deal_type(formatted_df)
# get quantity
formatted_df = get_quantity(formatted_df)
# get fixed price
formatted_df = get_fixed_price(formatted_df)
# reorder columns
formatted_df = formatted_df.iloc[:,[0,1,8,9,2,3,4,5,6,7]]
# groups rows based on StartDate, Commodity, and DealType
formatted_df['Commodity'] = formatted_df['Commodity'].apply(tuple)
groups = formatted_df.groupby(['StartDate', 'Commodity', 'DealType'])
df_list = []
for name, group in groups:
grouped_df = pd.DataFrame(group)
grouped_df['TotalFixedCost'] = grouped_df['FixedPriceStrike'] * grouped_df['Quantity']
grouped_df['FloatPrice'] = -(grouped_df['MTMValue'] - grouped_df['TotalFixedCost']) / grouped_df['Quantity']
grouped_df['CostPerUnit'] = grouped_df['TotalFixedCost'] / grouped_df['Quantity']
grouped_df.loc["Total", "Quantity"] = grouped_df['Quantity'].sum()
grouped_df.loc["Total", "TotalFixedCost"] = grouped_df['TotalFixedCost'].sum()
grouped_df.loc["Total", "MTMValue"] = grouped_df['MTMValue'].sum()
grouped_df.loc["Total", "FloatPrice"] = -(grouped_df.loc["Total", "MTMValue"] - grouped_df.loc["Total", "TotalFixedCost"]) / grouped_df.loc["Total", "Quantity"]
grouped_df.loc["Total", "CostPerUnit"] = grouped_df.loc["Total", "TotalFixedCost"] / grouped_df.loc["Total", "Quantity"]
df_list.append(grouped_df)
#today = str(date.today().strftime('%d%b%Y'))
#grouped_df.to_csv(f"Y:\HedgeFundRecon\JAron\Output\JAronOutput-{today}.csv", index=False, mode='a')
# create float price column
for item in df_list:
for i in range(0, len(item)-1):
formatted_df.at[item.index[i],'FloatPrice'] = item.loc["Total", "FloatPrice"]
# format currency columns
formatted_df['FixedPriceStrike'] = formatted_df['FixedPriceStrike'].apply(lambda x: "${:.2f}".format(x))
formatted_df['MTMValue'] = formatted_df['MTMValue'].apply(lambda x: "${:.2f}".format(x))
formatted_df['FloatPrice'] = formatted_df['FloatPrice'].apply(lambda x: "${:.2f}".format(x))
# output formatted results to new file
today = str(date.today().strftime('%d%b%Y'))
formatted_df.to_csv(f"Y:\HedgeFundRecon\JAron\Output\JAronOutput-{today}.csv", index=False)
</code></pre>
<p>here is the code for the file that is reading incorrectly:</p>
<pre><code>def clean_cols(df):
for col in df.columns:
if col not in ['FloatPrice','FixedPriceStrike', 'Quantity', 'MTMValue']:
for i in range(0, len(df)):
df.at[i, col] = str.strip(df[col].iloc[i])
return df
def get_deal_type(df):
for i in range(0, len(df)):
if df['DealType'].iloc[i] == 'Pay Float':
df.at[i, 'DealType'] = 'Sell'
elif df['DealType'].iloc[i] == 'Pay Fixed':
df.at[i, 'DealType'] = 'Buy'
return df
# specify path to file
file = "Y:\HedgeFundRecon\Macquarie_DHR\Macquarie.csv"
# read file into dataframe
df = pd.read_csv(file, skiprows=36)
#drop unused columns
df = df.dropna(how='all', axis=1)
df = df.drop('Deal Type', axis=1)
df = df.drop('Period End Date', axis=1)
df = df.drop('Outstanding Volume', axis=1)
df = df.drop('Volume Units', axis=1)
df = df.drop('Price Units', axis=1)
df = df.drop('Trade Currency', axis=1)
for i in range(0, len(df)):
# add new column
df['Counterparty'] = 'Macquarie'
# drop rows that contain NaN values
df = df.dropna(how='any', axis=0)
# reorder and rename columns
df = df.iloc[:,[1,2,10,5,0,3,4,7,9,6,8]]
df.rename(columns = {'Deal Number':'TradeID'}, inplace = True)
df.rename(columns = {'Deal Date':'TradeDate'}, inplace = True)
df.rename(columns = {'Pay Fixed/Float':'DealType'}, inplace = True)
df.rename(columns = {'Asset':'Commodity'}, inplace = True)
df.rename(columns = {'Period Start Date':'StartDate'}, inplace = True)
df.rename(columns = {'Maturity Date':'ExpiryDate'}, inplace = True)
df.rename(columns = {'Fwd Price':'FixedPriceStrike'}, inplace = True)
df.rename(columns = {'Projected Rate':'FloatPrice'}, inplace = True)
df.rename(columns = {'Total Volume':'Quantity'}, inplace = True)
df.rename(columns = {'Market Reval (USD)*':'MTMValue'}, inplace = True)
#clean columns
df = clean_cols(df)
# get deal type
df = get_deal_type(df)
# format date columns
df['StartDate'] = pd.to_datetime(df['StartDate'], format='%d-%b-%Y')
df['ExpiryDate'] = pd.to_datetime(df['ExpiryDate'], format='%d-%b-%Y')
df['TradeDate'] = pd.to_datetime(df['TradeDate'], format='%d-%b-%Y')
# format currency columns
df['FixedPriceStrike'] = df['FixedPriceStrike'].apply(lambda x: "${:.2f}".format(x))
df['MTMValue'] = df['MTMValue'].apply(lambda x: "${:.2f}".format(x))
df['FloatPrice'] = df['FloatPrice'].apply(lambda x: "${:.2f}".format(x))
# groups rows based on StartDate, Commodity, and DealType
groups = df.groupby(['StartDate', 'Commodity', 'DealType'])
for name, group in groups:
new_df = pd.DataFrame(group)
#today = str(date.today().strftime('%d%b%Y'))
#new_df.to_csv(f"Y:\HedgeFundRecon\Macquarie_DHR\Output\MacquarieOutput-{today}.csv", index=False, mode='a')
# output formatted results to new file
today = str(date.today().strftime('%d%b%Y'))
df.to_csv(f"Y:\HedgeFundRecon\Macquarie_DHR\Output\MacquarieOutput-{today}.csv", index=False)
</code></pre>
|
<python><pandas>
|
2024-01-22 21:09:41
| 1
| 2,185
|
iBeMeltin
|
77,862,775
| 1,838,233
|
Pandas 2.2: FutureWarning: Resampling with a PeriodIndex is deprecated
|
<p>The pandas version 2.2 raises a warning when using this code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame.from_dict({"something": {pd.Period("2022", "Y-DEC"): 2.5}})
# FutureWarning: Resampling with a PeriodIndex is deprecated.
# Cast index to DatetimeIndex before resampling instead.
print(df.resample("M").ffill())
# something
# 2022-01 2.5
# 2022-02 2.5
# 2022-03 2.5
# 2022-04 2.5
# 2022-05 2.5
# 2022-06 2.5
# 2022-07 2.5
# 2022-08 2.5
# 2022-09 2.5
# 2022-10 2.5
# 2022-11 2.5
# 2022-12 2.5
</code></pre>
<p>This does not work:</p>
<pre><code>df.index = df.index.to_timestamp()
print(df.resample("M").ffill())
# something
# 2022-01-31 2.5
</code></pre>
<p>I have PeriodIndex all over the place and I need to resample them a lot, filling gaps with ffill.
How can I do this with pandas 2.2?</p>
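<p>The best workaround I've found so far (not sure it's idiomatic) is to stay on <code>PeriodIndex</code> and replace the resample with an explicit <code>period_range</code> plus <code>reindex(..., method="ffill")</code>, since <code>to_timestamp()</code> collapses each period to a single instant and loses the span:</p>

```python
import pandas as pd

df = pd.DataFrame({"something": [2.5]},
                  index=pd.PeriodIndex(["2022"], freq="Y"))

# Span the full extent of the original periods, then forward-fill.
start = df.index.min().asfreq("M", "start")   # 2022-01
end = df.index.max().asfreq("M", "end")       # 2022-12
monthly = pd.period_range(start, end, freq="M")

out = (df.set_axis(df.index.asfreq("M", "start"))
         .reindex(monthly, method="ffill"))
print(len(out))  # 12 monthly rows, all 2.5
```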
|
<python><pandas><pandas-resample>
|
2024-01-22 21:08:30
| 1
| 3,618
|
Arigion
|
77,862,704
| 14,890,683
|
Condense Pydantic Validators into Class Annotations
|
<p>Looking to create a data structure in Python that condenses Pydantic's validators into one-line arguments:</p>
<p>i.e. from this</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, validator
class MyClass(BaseModel):
a: str
b: str
c: str
@validator('b')
def calculate_b(cls, v, values):
return values['a'] + v
@validator('c')
def calculate_c(cls, v, values):
return values['a'] + values['b'] + v
</code></pre>
<p>to (something like) this</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
a: str
b: str = calculate_b(...)
c: str = calculate_c(...)
</code></pre>
|
<python><pydantic>
|
2024-01-22 20:56:28
| 1
| 345
|
Oliver
|
77,862,678
| 3,456,812
|
simple python beginner chatgpt bot program bombs on compatibility issues, cannot find workaround
|
<p>I am trying to learn both Python and simple ChatGPT api programming. The code I came up with was suggested by gpt itself, and in researching the problem I'm having, every code example I can find on the web is basically doing the same thing as the code I'm trying (below).</p>
<p>I started with a "pip3 install openai" and have no errors from that. It's when I run what should be the simplest code I've ever seen that the errors occur.</p>
<p>The errors I'm receiving are essentially telling me that "openai.ChatCompletion.create" is deprecated, so I've tried the recommended alternative call, which is just "openai.Completion.create". That fails too. Both calls are deprecated according to the runtime warnings. Unfortunately, every piece of sample code I can find on Google uses one of these two methods.</p>
<p>I was given two suggestions by the runtime. One was to run <code>openai migrate</code>, which does nothing but give me a strange permissions error on my PICTURES folder, of all things.</p>
<p>The more sensible suggestion was digging into the recommended OpenAI API spec, which suggested using <code>openai.chat.completions.create</code>. When I try that, I get a different error, this time suggesting I've sent too many requests to the API. This is UTTER NONSENSE -- I've made one and only one call to the API, and that single call produced the error message.</p>
<p>This should not be this hard. I'm running out of options. I've tried every suggested sample I can find; none work; the call I find in the latest api spec (unless I'm misinterpreting) also does not work.</p>
<p>Any ideas would be appreciated.</p>
<pre><code>import openai
def chat_with_gpt(api_key):
openai.api_key = api_key
# Starting a new chat session
session = openai.ChatCompletion.create(
model="gpt-3.5-turbo", # or another model of your choice
messages=[{"role": "system", "content": "You are a helpful assistant."}]
)
while True:
prompt = input("Prompt: ")
if prompt == "/quit":
break
try:
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
session_id=session["id"], # Using the same session for continuity
messages=[{"role": "user", "content": prompt}]
)
print(response["choices"][0]["message"]["content"])
except Exception as e:
print(f"An error occurred: {e}")
if __name__ == "__main__":
api_key = "mykey" # I replace with my actual key
chat_with_gpt(api_key)
</code></pre>
|
<python><openai-api><chatgpt-api>
|
2024-01-22 20:49:24
| 2
| 1,305
|
markaaronky
|
77,862,611
| 9,017,311
|
Quarto Unable to Render Python
|
<p>I <strong>cannot</strong> render Quarto documents that contain <code>Python</code> code chunks.</p>
<p>I <strong>can</strong> successfully render Quarto documents that contain <code>R</code> code chunks.</p>
<h1>Error message</h1>
<p>The error message I receive is:</p>
<pre><code>==> quarto preview example.Qmd --to html --no-watch-inputs --no-browse
ERROR: Unable to render example.Qmd
Stack trace:
at fileExecutionEngineAndTarget (file:///C:/PROGRA~1/Quarto/bin/quarto.js:41260:15)
at async renderContexts (file:///C:/PROGRA~1/Quarto/bin/quarto.js:72183:32)
at async renderFormats (file:///C:/PROGRA~1/Quarto/bin/quarto.js:72234:22)
at async file:///C:/PROGRA~1/Quarto/bin/quarto.js:98062:24
at async Command.fn (file:///C:/PROGRA~1/Quarto/bin/quarto.js:98059:25)
at async Command.execute (file:///C:/PROGRA~1/Quarto/bin/quarto.js:8104:13)
at async quarto (file:///C:/PROGRA~1/Quarto/bin/quarto.js:114968:5)
at async file:///C:/PROGRA~1/Quarto/bin/quarto.js:114986:9
</code></pre>
<h1>Reprex</h1>
<p>It's not quite a reproducible example, but it's the best I can achieve given the error.</p>
<p>I am using the following CLI syntax to render an example document which lives on my Desktop:</p>
<pre><code>quarto render Desktop\example.Qmd
</code></pre>
<p>The contents of <code>Desktop\example.Qmd</code> are as follows when I attempt Python (I just switch the 'python' to 'r' when I want to render R):</p>
<pre><code>---
title: Example
subtitle: Example
format: html
---
```{python}
2 + 2
```
</code></pre>
<h1>SetUp</h1>
<h2>Machine</h2>
<p>Microsoft Windows 11 Home</p>
<p>Version 10.0.22621 Build 22621</p>
<h2>Software Versions</h2>
<p>All of the paths listed below are on my <code>PATH</code> Environment variable.</p>
<ul>
<li><p>R 4.3.0 (Path: <code>C:\Program Files\R\</code>)</p>
</li>
<li><p>Python 3.12.1 (Path: <code>C:\Program Files\Python312\</code>)</p>
</li>
<li><p>Quarto 1.4.547 (Path: <code>C:\Program Files\Quarto</code>)</p>
</li>
</ul>
<h2>Jupyter</h2>
<p>Following <a href="https://quarto.org/docs/computations/python.html#installation" rel="nofollow noreferrer">Using Python</a> from Quarto, I used the following command to install Jupyter:</p>
<p><code>py -m pip install jupyter</code></p>
<p>Which was installed at <code>C:\Users\me\appdata\roaming\python\python312\site-packages</code></p>
<p>Running the Quarto command <code>quarto check jupyter</code> returns:</p>
<pre><code>Quarto 1.4.547
[>] Checking Python 3 installation....OK
Version: 3.12.1
Path: C:/Program Files/Python312/python.exe
Jupyter: 5.7.1
Kernels: python3
[>] Checking Jupyter engine render....OK
</code></pre>
|
<python><r><quarto><windows-11>
|
2024-01-22 20:36:28
| 1
| 712
|
Christian Million
|
77,862,210
| 23,190,147
|
What does [Errno 2] mean in python?
|
<p>I was playing around with python, and I was trying to open a file. I accidentally made a typo, and got the expected error:</p>
<p>FileNotFoundError: [Errno 2] No such file or directory: 'tlesting.py'</p>
<p>It was supposed to be <code>testing.py</code> if you are wondering.</p>
<p>Of course, I expected the error, but I want to know the reason why <code>[Errno 2]</code> is included. I got curious, so I tried raising the error myself: <code>raise FileNotFoundError("This is a test")</code>, and got an output of <code>FileNotFoundError: This is a test</code>, with no <code>[Errno 2]</code> anymore. Is this something to do with my current version of Python or my operating system?</p>
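<p>Reproducing both behaviours directly: the prefix appears to come from <code>OSError</code>'s two-argument form, where the exception is constructed with <code>(errno, strerror)</code> and its <code>str()</code> renders as <code>[Errno N] message</code>, while a plain string argument skips that formatting. The same holds for subclasses like <code>FileNotFoundError</code>:</p>

```python
import errno

# OSError (and subclasses like FileNotFoundError) only format the
# "[Errno N]" prefix when constructed with (errno, strerror).
e1 = FileNotFoundError(errno.ENOENT, "No such file or directory")
e2 = FileNotFoundError("This is a test")

print(str(e1))  # [Errno 2] No such file or directory
print(str(e2))  # This is a test
print(errno.ENOENT)  # 2
```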
|
<python><file>
|
2024-01-22 19:09:18
| 2
| 450
|
5rod
|
77,862,188
| 4,049,444
|
Pygame font render missing baseline information, can not blit text properly
|
<p>In Pygame, rendering text with <code>render()</code> generates a surface containing the rendered text. The size of the surface (<code>get_rect()</code>) varies in height depending on the text itself. There is also the line height, which you can get with <code>get_linesize()</code>. Based on this information alone, there is no way to align the text properly.</p>
<p>If you need to change a short text in one place, you can <code>blit()</code> the text centered, or top or bottom aligned. In either case the text will appear to jump, because the <a href="https://en.wikipedia.org/wiki/Baseline_(typography)" rel="nofollow noreferrer">baseline of the font</a> moves up and down.</p>
<p>Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>>>> pygame.init()
(5, 0)
>>> ff=pygame.font.match_font("freesans")
>>> f=pygame.font.Font(ff,100)
>>> f.get_linesize()
110
>>> f.render("xxxx",True,(0,0,0)).get_rect()
<rect(0, 0, 192, 100)>
>>> f.render("XXXX",True,(0,0,0)).get_rect()
<rect(0, 0, 264, 100)>
>>> f.render("AAÁČ",True,(0,0,0)).get_rect()
<rect(0, 0, 272, 111)>
>>> f.render("yg",True,(0,0,0)).get_rect()
<rect(0, 0, 103, 102)>
>>> f.render("ygjý",True,(0,0,0)).get_rect()
<rect(0, 0, 175, 102)>
>>> f.render("ygjýŘ",True,(0,0,0)).get_rect()
<rect(0, 0, 246, 113)>
</code></pre>
<p>So the font size is 100, the line size is 110, the various texts have various heights 100, 111, 102 and 113. This is all fine because of the nature of True Type Fonts.</p>
<p>The problem is that there is no clue how to align those rectangles. look at the next image:</p>
<p><a href="https://i.sstatic.net/5zJmH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5zJmH.png" alt="enter image description here" /></a></p>
<p>The yellow rectangles are the generated surfaces saved to image files. I aligned them manually based on the "baseline". It is clear that they can not be aligned either by bottom nor by top. The brownish rectangles represent the line height.</p>
<p>Is there any chance to get the baseline information or align the text properly by other means?</p>
|
<python><fonts><pygame>
|
2024-01-22 19:05:45
| 1
| 315
|
geer
|
77,862,174
| 5,683,778
|
Quarto output executable RMD/QMD file with text includes
|
<p>I'm using quarto (v1.4) to put together an assignment for a course, giving students the option to use either Python (.ipynb) or R (.rmd) to complete it. I'm giving them a template to get started, and having them edit some existing code.</p>
<p>I have some generic preamble & question text that I want to be uniform between the R and Python versions of the document, as well as some generic imports for Python (e.g., matplotlib) and R (e.g., ggplot2) that I want to import for each assignment. So my strategy is to have two documents (Assignment1_py.qmd & Assignment1_R.qmd), where the code blocks are different, but the preamble, question text, etc. are brought in using includes. An example for the python version is at the bottom. The <code>keep-ipynb: true</code> command allows me to output a nicely formatted .ipynb file, which the students can then work with.</p>
<p>My question is: is there a way to do something similar with R? There isn't an equivalent <code>keep-rmd: true</code> option. If they download the raw .QMD file, then the code works, but the include files are not rendered. The best option I've found so far is to set <code>keep-md: true</code>, to keep the intermediary .md file. It works, but the code blocks are not formatted properly (shown below), so I need a second script to reformat the code cells and save as a .rmd file that the students can work with. It's not a huge problem, but I'm curious if there is a more elegant solution?</p>
<h3>Python</h3>
<pre><code>---
title: "Assignment 1 Py"
jupyter: python3
execute:
keep-ipynb: true
---
{{< include _Includes/01_Preamble.qmd >}}
{{< include _Includes/ImportPy.qmd >}}
{{< include _Includes/01_Q1.qmd >}}
```{python}
import pandas as pd
import datetime as dt
df = pd.read_csv("https://raw.githubusercontent.com/GEOS300/AssignmentData/main/Climate_Summary_BB.csv",
parse_dates=['TIMESTAMP'],
index_col=['TIMESTAMP']
)
Start ='2023-06-21 0000'
End ='2023-06-21 2359'
Selection = df.loc[(
(df.index>=dt.datetime.strptime(Start, '%Y-%m-%d %H%M'))
&
(df.index<=dt.datetime.strptime(End, '%Y-%m-%d %H%M'))
)]
Selection.head()
```
{{< include _Includes/01_Q2.qmd >}}
</code></pre>
<h3>R</h3>
<pre><code>---
title: "Assignment 1 R"
execute:
keep-md: true
---
{{< include _Includes/01_Preamble.qmd >}}
{{< include _Includes/01_Q1.qmd >}}
```{r}
#|echo: True
library("reshape2")
library("ggplot2")
df <- read.csv(file = 'https://raw.githubusercontent.com/GEOS300/AssignmentData/main/Climate_Summary_BB.csv')
df[['TIMESTAMP']] <- as.POSIXct(df[['TIMESTAMP']],format = "%Y-%m-%d %H%M")
head(df)
```
{{< include _Includes/01_Q2.qmd >}}
</code></pre>
<h3>MD Output for R</h3>
<pre><code>::: {.cell}
```{.r .cell-code}
#|echo: True
list.of.packages <- c("ggplot2", "reshape2")
new.packages <- list.of.packages[!(list.of.packages %in% installed.packages()[,"Package"])]
if(length(new.packages)) install.packages(new.packages)
library("reshape2")
library("ggplot2")
```
:::
</code></pre>
|
<python><r><r-markdown><quarto>
|
2024-01-22 19:03:49
| 0
| 1,230
|
June Skeeter
|
77,862,114
| 17,683,683
|
SSL certificate problem: unable to get local issuer certificate: zscaler
|
<p>Im trying to create docker image using dockerfile as it uses python, and it was throwing error while installing <code>RUN poetry install --no-root</code>. After digging deep it is throwing <code>SSL certificate problem: unable to get local issuer certificate</code> error as my system has Zscaler installed by my IT team so its failing TLS connection and failing to install.
I tried to export zscaler certificate and added in docker file but that doesn’t seem to work as it was skipping in update-ca-certificates step with this warning - <code>rehash: warning: skipping ZscalerRootCertificate.pem,it does not contain exactly one certificate or CRL</code>.
How to resolve this ? it looks like my ZscalerRootCertificate.crt format is not right for the above error but im not able to see the certificate or how can completely disable this ssl certifcate verification because i just need this docker image to work on my local as i will not be engaging with any code changes. Please help me. Thank you.</p>
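<p>One thing I considered (a sketch, assuming the warning means the exported file is simply several certificates concatenated) is splitting the bundle into one-certificate files before <code>update-ca-certificates</code> runs; the <code>bundle.pem</code> contents below are placeholder data standing in for the real exported file:</p>

```shell
# Placeholder bundle standing in for the exported Zscaler file.
printf -- '-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nBBB\n-----END CERTIFICATE-----\n' > bundle.pem

# rehash (used by update-ca-certificates) only accepts files containing
# exactly one certificate, so write each certificate to its own file.
awk '/-----BEGIN CERTIFICATE-----/ {n++} {print > ("zscaler-" n ".crt")}' bundle.pem
```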
<p>dockerfile:</p>
<pre><code>FROM python:3.11.5-slim as base
MAINTAINER Customer Platform <>
ARG GIT_SHA_ARG=unknown
RUN apt-get update && \
apt-get install -y \
curl \
&& rm -rf /var/lib/apt/lists/* \
&& curl --proto '=https' --tlsv1.2 -sSf https://just.systems/install.sh | bash -s -- --to /usr/local/bin
ENV GIT_SHA=${GIT_SHA_ARG} \
INSTALL_PATH="/app" \
POETRY_HOME="/opt/poetry" \
POETRY_NO_INTERACTION=1 \
POETRY_VERSION=1.6.1 \
VIRTUAL_ENV="/venv" \
SHARED_PATH="/shared"
EXPOSE 8000
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
RUN mkdir -p $SHARED_PATH
# Create and activate the venv
RUN mkdir -p ${VIRTUAL_ENV} && \
python -m venv ${VIRTUAL_ENV} && \
${VIRTUAL_ENV}/bin/pip install --upgrade pip
ENV PATH=${VIRTUAL_ENV}/bin:${POETRY_HOME}/bin:${PATH} \
PYTHONPATH=${INSTALL_PATH}
# Install tini
ENV TINI_VERSION v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
# Install poetry (installer uses POETRY_* vars from above)
RUN curl -sSL https://install.python-poetry.org | python
ENTRYPOINT [ "/tini", "--" ]
CMD [ "/bin/bash" ]
</code></pre>
|
<python><docker><python-poetry><zscaler>
|
2024-01-22 18:53:22
| 0
| 357
|
Rashmi Balkur
|
77,862,060
| 12,125,777
|
Django how to update existing database rows using pandas df.to_sql
|
<p>My goal is to update my database rows every day by uploading a file containing the past 7 days of data.</p>
<p>So, I have to read the file with pandas and load it into my Django database. Most of the time, the dataframe will have rows already present in the database as well as brand new rows which are not.</p>
<p>I would like to insert the dataframe into my database by:</p>
<ul>
<li>replacing the already existing rows with the new values from the dataframe,</li>
<li>and inserting the brand new rows from the dataframe into the database.</li>
</ul>
<p><strong>NB:</strong> A row already exists if the database and the dataframe we're trying to upload share the same <code>date</code>, <code>advertiser</code>, and <code>insertion_order</code>.</p>
<p>After much research, I found this function, but I couldn't get it to work in my case.</p>
<pre><code>from sqlalchemy.dialects.postgresql import insert  # required for on_conflict_do_update

def upsert(table, con, keys, data_iter):
data = [dict(zip(keys, row)) for row in data_iter]
insert_statement = insert(table.table).values(data)
upsert_statement = insert_statement.on_conflict_do_update(
constraint=f"{table.table.name}_pkey",
set_={c.key: c for c in insert_statement.excluded},
)
print(upsert_statement)
con.execute(upsert_statement)
</code></pre>
<p>And then, I call it in <code>df.to_sql</code></p>
<pre><code>df.to_sql(
Xandr._meta.db_table,
if_exists="append",
index=False,
dtype=dtype,
chunksize=1000,
con=engine,
method=upsert
)
</code></pre>
<ul>
<li>the model</li>
</ul>
<pre><code>class Xandr(InsertionOrdersCommonFields):
dsp = models.CharField("Xandr", max_length=20)
class Meta:
db_table = "Xandr"
unique_together = [
"date",
"advertiser",
"insertion_order"
]
verbose_name_plural = "Xandr"
def __str__(self):
return self.dsp
</code></pre>
|
<python><django><pandas><postgresql><django-models>
|
2024-01-22 18:40:37
| 1
| 542
|
aba2s
|
77,861,942
| 6,136,013
|
VSCode/Pylance converts network drive to UNC path
|
<p>I have a python file on a network drive (X:\foo.py). If I click on a python function defined in the same file and press F12 (go to definition), VSCode opens a <em>new window</em> where the file name is \\some.unc.path\foo.py.
This behavior is recent but downgrading VSCode to an older version didn't help.
Is there a setting perhaps that makes VSCode convert network mapped drives to UNC?</p>
|
<python><visual-studio-code><pylance>
|
2024-01-22 18:20:03
| 2
| 681
|
BlindDriver
|
77,861,879
| 1,493,192
|
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 78 but got size 102 for tensor number 1 in the list
|
<p>I am trying to run the example from this library: <a href="https://github.com/openclimatefix/graph_weather?tab=readme-ov-file" rel="nofollow noreferrer">graph_weather</a></p>
<pre><code>import torch
from graph_weather import GraphWeatherForecaster
from graph_weather.models.losses import NormalizedMSELoss
lat_lons = []
for lat in range(-90, 90, 1):
for lon in range(0, 360, 1):
lat_lons.append((lat, lon))
model = GraphWeatherForecaster(lat_lons)
features = torch.randn((2, len(lat_lons), 78))
out = model(features)
criterion = NormalizedMSELoss(lat_lons=lat_lons, feature_variance=torch.randn((78,)))
loss = criterion(out, features)
loss.backward()
</code></pre>
<p>When I get to the line <code>out = model(features)</code>, I get this error message:</p>
<pre><code>RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 78 but got size 102 for tensor number 1 in the list.
</code></pre>
<p>The relevant shapes are:</p>
<pre><code>len(lat_lons)
64800
features.shape
torch.Size([2, 64800, 78])
</code></pre>
|
<python><pytorch>
|
2024-01-22 18:09:37
| 1
| 8,048
|
Gianni Spear
|
77,861,790
| 1,473,517
|
How to find an optimal circular area with regions missing
|
<p>I have an n by n matrix of integers and I want to find the circular area, with origin at the top left corner, with maximum sum. What makes this optimization problem more difficult is that I am allowed to miss out up to a fixed number of regions. A region is defined to be a consecutive block of rows. Let us call the number of regions that can be omitted, k.</p>
<p>Consider the following grid with a circle imposed on it.</p>
<p><a href="https://i.sstatic.net/Gp0WV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gp0WV.png" alt="enter image description here" /></a></p>
<p>This is made with:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.patches import Circle
import numpy as np
plt.yticks(np.arange(0, 10.01, 1))
plt.xticks(np.arange(0, 10.01, 1))
plt.xlim(0,9)
plt.ylim(0,9)
plt.gca().invert_yaxis()
# Set aspect ratio to be equal
plt.gca().set_aspect('equal', adjustable='box')
plt.grid()
np.random.seed(40)
square = np.empty((10, 10), dtype=np.int_)
for x in np.arange(0, 10, 1):
for y in np.arange(0, 10, 1):
plt.scatter(x, y, color='blue', s=2, zorder=2, clip_on=False)
for x in np.arange(0, 10, 1):
for y in np.arange(0, 10, 1):
value = np.random.randint(-3, 4)
square[int(x), int(y)] = value
plt.text(x-0.2, y-0.2, str(value), ha='center', va='center', fontsize=8, color='black')
r1 = 3
circle1 = Circle((0, 0), r1, color="blue", alpha=0.5, ec='k', lw=1)
plt.gca().add_patch(circle1)
</code></pre>
<p>In this case the matrix is:</p>
<pre><code> [[ 3, -1, 1, 0, -1, -1, -3, -2, -2, 2],
[ 0, 0, 3, 0, 0, -1, 2, 0, -2, 3],
[ 2, 0, 3, -2, 3, 1, 2, 2, 1, 1],
[-3, 0, 1, 0, 1, 2, 3, 1, -3, -1],
[-3, -2, 1, 2, 1, -3, -2, 2, -2, 0],
[-1, -3, -3, 1, 3, -2, 0, 2, -1, 1],
[-2, -2, -1, 2, -2, 1, -1, 1, 3, -1],
[ 1, 2, -1, 2, 0, -2, -1, -1, 2, 3],
[-1, -2, 3, -1, 0, 0, 3, -3, 3, -2],
[ 0, -3, 0, -1, -1, 0, -2, -3, -3, -1]]
</code></pre>
<p>When the circle has radius 2 * sqrt(2) = sqrt(8), the sum of the points in the grid within the circle is 11 which is optimal if no regions can be omitted. As the radius increases, more and more points fall into the circle and in this case the sum is never larger than 11.</p>
<p>This can be computed quickly using this <a href="https://stackoverflow.com/a/77808698/1473517">code</a>:</p>
<pre><code>import numpy as np
def make_data(N):
np.random.seed(40)
g = np.random.randint(-3, 4, (N, N))
return g
def find_max(g):
n = g.shape[0]
    sum_dist = np.zeros(2 * n * n, dtype=np.int32)  # max squared distance is 2*(n-1)**2
for i in range(n):
for j in range(n):
dist = i**2 + j**2
sum_dist[dist] += g[i, j]
cusum = np.cumsum(sum_dist)
return np.argmax(cusum), np.max(cusum)
N = 10
g = make_data(N)
g = g.T # Just to match the picture
print(g)
squared_dist, score = find_max(g)
print(np.sqrt(squared_dist), score)
</code></pre>
<h1>The problem</h1>
<p>If we were to zero out row 0 and also rows 4, 5 and 6 (this corresponds to eliminating k = 2 regions), then the optimal radius is increased to sqrt(58) giving a score of 24.</p>
<p><a href="https://i.sstatic.net/DBbbb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DBbbb.png" alt="enter image description here" /></a></p>
<p>I want to find both the optimal radius and the optimal choice of up to k regions to eliminate. I would like the code to be fast for much larger matrices but k will never be bigger than 5.</p>
<p>This <a href="https://stackoverflow.com/questions/77847798/how-to-find-an-optimal-shape-with-sections-missing">question</a> is related but I can't see how to use it to give a fast solution here.</p>
|
<python><algorithm><performance><optimization>
|
2024-01-22 17:54:26
| 1
| 21,513
|
Simd
|
77,861,764
| 15,341,457
|
Python Tkinter - Passing controllers as arguments of a tkinter frame
|
<p>I've got a main class called <em>App</em> that invokes 4 controllers. The method <em>start_from_home_page()</em> creates an object of type <em>HomePage</em> and I want this object to have access to all 4 controllers which I will pass as arguments. Here's the method:</p>
<pre><code> def start_from_home_page(self, fileSystemController, widgetController, logicController):
from pages.HomePage import HomePage
self.homePage = HomePage(self, fileSystemController, widgetController, logicController)
self.homePage.grid(row = 0, column = 0, sticky = 'nsew')
self.homePage.tkraise()
</code></pre>
<p>And here's the <em>'App'</em> class</p>
<pre><code>class App(ctk.CTk):
def __init__(self, title, dim):
# main setup
super().__init__()
self.title(title)
self.geometry(f'{dim[0]}x{dim[1]}')
self.minsize(dim[0],dim[1])
os.chdir('/Users/mauri5566/Desktop/AST-CVE')
self.widgetController = WidgetController(self)
self.fileSystemController = FileSystemController(self)
self.logicController = LogicController(self)
self.navigationController = NavigationController(self)
self.navigationController.start_from_home_page(self.fileSystemController, self.widgetController, self.logicController)
self.mainloop()
App('AST-CVE', (1000,800))
</code></pre>
<p>The following code shows the initial part of the HomePage class (which inherits from ctk.CTkFrame). I'm getting the following error:</p>
<pre><code>bad screen distance ".!filesystemcontroller"
</code></pre>
<p>That's because the third argument is supposed to be the width of the frame. However I don't need that, I just want the class to have access to the controllers by passing them as arguments, how can I do that?</p>
<p>Here's the <em>HomePage</em> class:</p>
<pre><code>class HomePage(ctk.CTkFrame):
def __init__(self, navigationController: NavigationController,
fileSystemController: FileSystemController,
widgetController: WidgetController,
logicController: LogicController):
super().__init__(navigationController, fileSystemController, widgetController, logicController)
</code></pre>
<p>Just for clarity, this is how a ctk.CTkFrame object is structured:</p>
<pre><code>class CTkFrame(CTkBaseClass):
def __init__(self,
master: Any,
width: int = 200,
height: int = 200,
corner_radius: Optional[Union[int, str]] = None,
border_width: Optional[Union[int, str]] = None,
bg_color: Union[str, Tuple[str, str]] = "transparent",
fg_color: Optional[Union[str, Tuple[str, str]]] = None,
border_color: Optional[Union[str, Tuple[str, str]]] = None,
background_corner_colors: Union[Tuple[Union[str, Tuple[str, str]]], None] = None,
overwrite_preferred_drawing_method: Union[str, None] = None,
**kwargs):
</code></pre>
|
<python><class><tkinter><arguments><parameter-passing>
|
2024-01-22 17:50:23
| 1
| 332
|
Rodolfo
|
77,861,682
| 18,476,381
|
Sqlalchemy - How to use joinedload while doing a groupby function
|
<p>I am trying to joined-load a relationship (vendor) onto my main object (service_order) while also running a sum function and a group by. Below is the code I am trying to run.</p>
<pre><code>async def search_service_orders(
session: AsyncSession,
service_order_number: str = None,
source_service_order_number: str = None,
limit: int = 20,
offset: int = 0,
) -> Sequence[ServiceOrderSearchModel]:
async with session:
statement = (
select(
DBServiceOrder,
func.sum(DBServiceOrderItem.unit_price).label("total_price"),
)
.options(joinedload(DBServiceOrder.vendor))
.outerjoin(
DBServiceOrderItem,
DBServiceOrder.service_order_id == DBServiceOrderItem.service_order_id,
)
.group_by(DBServiceOrder)
)
if service_order_number is not None:
statement = statement.where(
DBServiceOrder.service_order_number.ilike(f"%{service_order_number}%")
)
if source_service_order_number is not None:
statement = statement.where(
DBServiceOrder.source_service_order_number.ilike(
f"%{source_service_order_number}%"
)
)
statement = statement.limit(limit).offset(offset)
result = await session.execute(statement)
list_of_service_orders = result.all()
</code></pre>
<p>This results in an error: <code>ORA-00979: not a GROUP BY expression</code></p>
<p>Below is the sql that is generated.</p>
<pre><code>[SQL: SELECT service_order.service_order_id, service_order.authorized_by, service_order.authorized_date, service_order.bill_to_addr, service_order.cancel_date, service_order.cancel_by, service_order.expected_delivery_date, service_order.orderd_by, service_order.order_date, service_order.quote_estimate_number, service_order.service_order_number, service_order.service_type, service_order.service_status, service_order.ship_to_addr, service_order.shipping_method, service_order.source_service_order_number, service_order.vendor_id, service_order.created_by, service_order.create_ts, service_order.updated_by, service_order.update_ts, sum(service_order_item.unit_price) AS total_price, vendor_1.vendor_id AS vendor_id_1,
vendor_1.address_1, vendor_1.address_2, vendor_1.city, vendor_1.comment_txt, vendor_1.company_type, vendor_1.company_name, vendor_1.eog_msa_fl, vendor_1.fax_number, vendor_1.is_active, vendor_1.postal_code, vendor_1.phone_number, vendor_1.state_nm, vendor_1.vendor_number, vendor_1.vendor_uuid, vendor_1.created_by AS created_by_1, vendor_1.created_date, vendor_1.updated_by AS updated_by_1, vendor_1.updated_date
FROM service_order LEFT OUTER JOIN service_order_item ON service_order.service_order_id = service_order_item.service_order_id LEFT OUTER JOIN vendor vendor_1 ON vendor_1.vendor_id = service_order.vendor_id
WHERE lower(service_order.service_order_number) LIKE lower(:service_order_number_1) GROUP BY service_order.service_order_id, service_order.authorized_by, service_order.authorized_date, service_order.bill_to_addr, service_order.cancel_date,
service_order.cancel_by, service_order.expected_delivery_date, service_order.orderd_by, service_order.order_date, service_order.quote_estimate_number, service_order.service_order_number, service_order.service_type, service_order.service_status, service_order.ship_to_addr, service_order.shipping_method, service_order.source_service_order_number, service_order.vendor_id, service_order.created_by, service_order.create_ts, service_order.updated_by, service_order.update_ts
OFFSET 0 ROWS
FETCH FIRST 20 ROWS ONLY]
[parameters: {'service_order_number_1': '%SO%'}]
</code></pre>
<p>I then added the DBVendor object to the groupby statement like below:</p>
<pre><code>.group_by(DBServiceOrder, DBVendor)
</code></pre>
<p>which result in error: <code>sqlalchemy.exc.DatabaseError: (oracledb.exceptions.DatabaseError) ORA-00904: "VENDOR"."UPDATED_DATE": invalid identifier</code></p>
<p>This error makes no sense, as all my tables are properly defined and I am able to use joinedload with vendor in my other queries just fine. Below is the sql query for this error:</p>
<pre><code>[SQL: SELECT service_order.service_order_id, service_order.authorized_by, service_order.authorized_date, service_order.bill_to_addr, service_order.cancel_date, service_order.cancel_by, service_order.expected_delivery_date, service_order.orderd_by, service_order.order_date, service_order.quote_estimate_number, service_order.service_order_number, service_order.service_type, service_order.service_status, service_order.ship_to_addr, service_order.shipping_method, service_order.source_service_order_number, service_order.vendor_id, service_order.created_by, service_order.create_ts, service_order.updated_by, service_order.update_ts, sum(service_order_item.unit_price) AS total_price, vendor_1.vendor_id AS vendor_id_1,
vendor_1.address_1, vendor_1.address_2, vendor_1.city, vendor_1.comment_txt, vendor_1.company_type, vendor_1.company_name, vendor_1.eog_msa_fl, vendor_1.fax_number, vendor_1.is_active, vendor_1.postal_code, vendor_1.phone_number, vendor_1.state_nm, vendor_1.vendor_number, vendor_1.vendor_uuid, vendor_1.created_by AS created_by_1, vendor_1.created_date, vendor_1.updated_by AS updated_by_1, vendor_1.updated_date
FROM service_order LEFT OUTER JOIN service_order_item ON service_order.service_order_id = service_order_item.service_order_id LEFT OUTER JOIN vendor vendor_1 ON vendor_1.vendor_id = service_order.vendor_id
WHERE lower(service_order.service_order_number) LIKE lower(:service_order_number_1) GROUP BY service_order.service_order_id, service_order.authorized_by, service_order.authorized_date, service_order.bill_to_addr, service_order.cancel_date,
service_order.cancel_by, service_order.expected_delivery_date, service_order.orderd_by, service_order.order_date, service_order.quote_estimate_number, service_order.service_order_number, service_order.service_type, service_order.service_status, service_order.ship_to_addr, service_order.shipping_method, service_order.source_service_order_number, service_order.vendor_id, service_order.created_by, service_order.create_ts, service_order.updated_by, service_order.update_ts, vendor.vendor_id, vendor.address_1, vendor.address_2, vendor.city, vendor.comment_txt, vendor.company_type, vendor.company_name, vendor.eog_msa_fl, vendor.fax_number, vendor.is_active, vendor.postal_code, vendor.phone_number, vendor.state_nm, vendor.vendor_number, vendor.vendor_uuid, vendor.created_by, vendor.created_date, vendor.updated_by, vendor.updated_date
OFFSET 0 ROWS
FETCH FIRST 20 ROWS ONLY]
[parameters: {'service_order_number_1': '%SO%'}]
</code></pre>
<p>Anyone know how I can use joinedload while also doing a func.sum and groupby?</p>
<p>I also tried adding a lazy relationship in the service_order table def but it gives me the same error as above...</p>
|
<python><oracle-database><sqlalchemy>
|
2024-01-22 17:35:05
| 1
| 609
|
Masterstack8080
|
77,861,653
| 2,855,226
|
Does invoking a system call like statfs with Python subprocess use less overhead than invoking a C utility like df?
|
<p>Unix-based system. I'm trying to use as little overhead as possible right now in the code I'm working on (it's in a resource constrained space). In this particular code, we are gathering some basic disk usage stats. One suggestion was to replace a call to <code>df</code> with <code>statfs</code> since <code>df</code> is a C utility that requires its own subprocess to run whereas <code>statfs</code> is a system call which presumably uses less overhead (and is what <code>df</code> calls anyway).</p>
<p>We're calling <code>df</code> with Python's <code>subprocess.check_output()</code> <a href="https://docs.python.org/3/library/subprocess.html#subprocess.check_output" rel="nofollow noreferrer">command</a>:</p>
<pre><code>import subprocess
DF_CMD = ["df", "-P", "-k"]
def get_disk_usage() -> str:
try:
output = subprocess.check_output(DF_CMD, text=True)
except subprocess.CalledProcessError as e:
raise RuntimeError(f"Failed to execute {DF_CMD} " + str(e)) from e
return output
</code></pre>
<p>I want to hard code our mount points (which we decided we're okay with) and replace the call to <code>df</code> with a call to <code>statfs &lt;mountpoint&gt;</code> in the above code. However, I'm unsure if calling with the same Python function will actually reduce overhead. I plan to use a profiler to check it, but I'm curious if anyone knows enough about the inner workings of Python/Unix to know what's going on under the hood?</p>
<p>And to be clear: by "overhead" I mean CPU and memory usage on the OS/machine.</p>
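<p>For context, the no-subprocess variant I'm considering looks like this, a sketch using <code>os.statvfs</code> (which I believe wraps the statfs/statvfs syscall); the hard-coded mount point list here is a placeholder:</p>

```python
import os

# Placeholder mount points; real code would list ours explicitly.
MOUNT_POINTS = ["/"]

def get_disk_usage() -> str:
    """Report per-mount usage in 1K blocks, roughly like `df -P -k`,
    via the statvfs syscall instead of spawning a subprocess."""
    lines = []
    for mp in MOUNT_POINTS:
        st = os.statvfs(mp)
        total_kb = st.f_blocks * st.f_frsize // 1024
        avail_kb = st.f_bavail * st.f_frsize // 1024
        lines.append(f"{mp} {total_kb}K total, {avail_kb}K available")
    return "\n".join(lines)

print(get_disk_usage())
```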
|
<python><unix><diskspace><overhead>
|
2024-01-22 17:31:13
| 1
| 506
|
Ryan Schuster
|
77,861,644
| 12,389,536
|
raise ValueError if None as value in dict __setitem__
|
<p>Is it possible to raise a ValueError if a new value in dict is a None? Is it possible without creating a new class based on <code>dict</code> and overloading <code>__setitem__</code> or checking each value before setting?</p>
<pre class="lang-py prettyprint-override"><code>d = dict()
d["va1"] = None # should to raise ValueError
</code></pre>
<p>Is there any out of the box <em>no Nones dict</em>?</p>
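<p>For reference, this is the kind of subclass I'm hoping to avoid writing — a sketch using <code>collections.UserDict</code> (which, unlike a plain <code>dict</code> subclass, routes <code>update()</code> and the constructor through <code>__setitem__</code>):</p>

```python
from collections import UserDict

class NoNoneDict(UserDict):
    """Mapping that raises ValueError when a value is None."""
    def __setitem__(self, key, value):
        if value is None:
            raise ValueError(f"None value not allowed for key {key!r}")
        super().__setitem__(key, value)

d = NoNoneDict()
d["a"] = 1          # fine
try:
    d["val"] = None  # raises ValueError
except ValueError as e:
    print(e)
```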
|
<python><dictionary><collections>
|
2024-01-22 17:29:28
| 1
| 339
|
matt91t
|
77,861,609
| 13,147,413
|
Group by + rolling mean in Polars
|
<p>I need to translate this piece of pandas code:</p>
<pre><code>df.groupby(groupby_col)[[A_col, B_col]].rolling(window=window, on=B_col, closed = 'left').agg(some_function)
</code></pre>
<p>to polars.</p>
<p><em>A_col</em> features Int64 dtype, <em>B_col</em> datetime (ns).</p>
<p>I came up with the following formulation:</p>
<pre><code>df.group_by(groupby_col).agg(pl.col([A_col, B_col])).with_columns(
pl.col([A_col, B_col]).rolling_mean(window_size=30, by=B_col, closed="left"))
</code></pre>
<p>leading me to this error:</p>
<pre><code>`expr_name` operation not supported for dtype `list[date]` (expected: date/datetime)
</code></pre>
<p>I cannot understand why it cannot take a list of dates as "by" for the rolling mean.</p>
|
<python><dataframe><python-polars>
|
2024-01-22 17:21:50
| 1
| 881
|
Alessandro Togni
|
77,861,518
| 955,273
|
polars: compute row-wise quantile over DataFrame
|
<p>I have some polars DataFrames over which I want to compute some row-wise statistics.</p>
<p>For some there is a <code>.list.func</code> function which exists (eg <a href="https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.Expr.list.mean.html" rel="nofollow noreferrer"><code>list.mean</code></a>), however, for those which don't have a dedicated function I believe I must use <a href="https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.Expr.list.eval.html" rel="nofollow noreferrer"><code>list.eval</code></a>.</p>
<p>For the following example data:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({
'a': [1,10,1,.1,.1, np.NAN],
'b': [2, 8,1,.2, np.NAN,np.NAN],
'c': [3, 6,2,.3,.2, np.NAN],
'd': [4, 4,3,.4, np.NAN,np.NAN],
'e': [5, 2,3,.5,.3, np.NAN],
}, strict=False)
</code></pre>
<p>I have managed to come up with the following expression.</p>
<p>It seems that <code>list.eval</code> returns a list (which I suppose is more generic) so I need to call <code>.explode</code> on the resulting 1-element list to get back a single value.</p>
<p>The resulting column takes the name of the first column, so I then need to call <code>.alias</code> to give it a more meaningful name.</p>
<pre class="lang-py prettyprint-override"><code>df.select(
pl.concat_list(
pl.all().fill_nan(None)
)
.list.eval(pl.element().quantile(0.25))
.explode()
.alias('q1')
)
</code></pre>
<p>Is this the recommended way of computing row-wise?</p>
|
<python><python-polars>
|
2024-01-22 17:04:17
| 1
| 28,956
|
Steve Lorimer
|
77,861,436
| 19,694,624
|
Can't edit an embed on callback function
|
<p>I am having trouble editing a message with an embed. I want to replace the embed with another embed that is created in a button's callback function (which executes on the button click).
The screenshots below show my problem. When I execute the <em>/test</em> command, the bot sends the first embed with a button. When I click the button, instead of another embed, the bot sends <em>&lt;discord.embeds.Embed object at 0x7499da9001f0&gt;</em>, and the first embed is still visible.</p>
<p><a href="https://i.sstatic.net/dGAkf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dGAkf.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/SuSSH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SuSSH.png" alt="enter image description here" /></a></p>
<p>The code is pasted below. I am using cogs.</p>
<pre><code>from discord.ui import Button, View
import discord
from discord.ext import commands
class TestCog(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.slash_command(name='test', description='')
async def flip(self, ctx):
async def button_callback(interaction):
embed = discord.Embed(
description=f"{ctx.author.mention}\nNew embed!! Which I can't see",
color=0xf3c70d,
)
await interaction.response.edit_message(content=embed, view=None)
button = Button(custom_id='button', label='click me', style=discord.ButtonStyle.green, emoji="🪙")
button.callback = button_callback
my_view = View()
my_view.add_item(button)
embed = discord.Embed(
title="First embed. This one I can see clearly.",
color=0xf3c70d,
)
await ctx.respond(embed=embed, view=my_view)
def setup(bot):
bot.add_cog(TestCog(bot))
</code></pre>
<p>I am using py-cord==2.4.1</p>
|
<python><discord><discord.py><pycord>
|
2024-01-22 16:49:08
| 1
| 303
|
syrok
|
77,861,424
| 2,249,815
|
Python 3.X random.choice for object array with sub array?
|
<p>Provided the object array:</p>
<pre><code>textures = [
{
"id": 1,
"name": "cement",
"apply_to": ["wall", "basement"],
},
{
"id": 2,
"name": "fabric",
"apply_to": ["window"],
},
{
"id": 3,
"name": "brick",
"apply_to": ["wall"],
},
...
...
...
]
</code></pre>
<p>Suppose I wanted to pick a random object out of that array with the keyword 'wall' in that apply_to property?</p>
<p>So that I get either id (1,3) returned at random?</p>
<p>How can I use 'random.choice' or another random function to do so?</p>
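A minimal sketch of one filter-then-pick approach (the list below is abridged to the three entries shown above): filter the list first, since <code>random.choice</code> needs a concrete sequence.

```python
import random

textures = [
    {"id": 1, "name": "cement", "apply_to": ["wall", "basement"]},
    {"id": 2, "name": "fabric", "apply_to": ["window"]},
    {"id": 3, "name": "brick", "apply_to": ["wall"]},
]

# Filter first, then pick at random from the survivors.
walls = [t for t in textures if "wall" in t["apply_to"]]
picked = random.choice(walls)
print(picked["id"])  # 1 or 3
```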
|
<python><python-3.x>
|
2024-01-22 16:46:44
| 2
| 4,653
|
Jebathon
|
77,861,286
| 4,752,738
|
Does poetry/pip "compile" code when installing local package?
|
<p>I'm using poetry for my project (mono repo) structure.
I have a lot of local packages that I want to "install":</p>
<pre><code>[tool.poetry.dependencies]
<my-local-lib> = {path = "../<my-local-lib>", develop = true}
</code></pre>
<p>Does it matter if I do <code>develop = true</code> or <code>develop = false</code>?</p>
<p>I mean, does this installation process do some compilation stuff? Or is the difference that one is copying to site-packages and the other uses a link?</p>
<p>Will it affect execution time or startup time?</p>
|
<python><pip><python-poetry>
|
2024-01-22 16:19:26
| 2
| 943
|
idan ahal
|
77,861,284
| 18,758,062
|
Regex/algorithm in Python to extract comments from class attributes
|
<p>Given the code for a class definition, I am trying to extract all attributes and their comments (<code>""</code> empty string if no comments).</p>
<pre class="lang-py prettyprint-override"><code>class Player(Schema):
score = fields.Float()
"""
Total points from killing zombies and finding treasures
"""
name = fields.String()
age = fields.Int()
backpack = fields.Nested(
PlayerBackpackInventoryItem,
missing=[PlayerBackpackInventoryItem.from_name("knife")],
)
"""
Collection of items that a player can store in their backpack
"""
</code></pre>
<p>In the above example, we expected the parsed result to be:</p>
<pre class="lang-py prettyprint-override"><code>[
("score", "Total points from killing zombies and finding treasures"),
("name", ""),
("age", ""),
("backpack", "Collection of items that a player can store in their backpack")
]
</code></pre>
<p>In my attempt below, it is failing to extract the comments properly, giving an output:</p>
<pre><code>[
('score', 'Total points from killing zombies and finding treasures'),
('name', ''),
('age', ''),
('backpack', '')
]
</code></pre>
<p>How can the regex (or even the entire parsing logic) be fixed to handle the situations present in the example class code?</p>
<p>Thanks</p>
<pre class="lang-py prettyprint-override"><code>import re
code_block = '''class Player(Schema):
score = fields.Float()
"""
Total points from killing zombies and finding treasures
"""
name = fields.String()
age = fields.Int()
backpack = fields.Nested(
PlayerBackpackInventoryItem,
missing=[PlayerBackpackInventoryItem.from_name("knife")],
)
"""
Collection of items that a player can store in their backpack
"""
'''
def parse_schema_comments(code):
# Regular expression pattern to match field names and multiline comments
pattern = r'(\w+)\s*=\s*fields\.\w+\([^\)]*\)(?:\n\s*"""\n(.*?)\n\s*""")?'
# Find all matches using the pattern
matches = re.findall(pattern, code, re.DOTALL)
# Process the matches to format them as required
result = []
for match in matches:
field_name, comment = match
comment = comment.strip() if comment else ""
result.append((field_name, comment))
return result
parsed_comments = parse_schema_comments(code_block)
print(parsed_comments)
</code></pre>
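Since the question allows replacing the entire parsing logic: regexes struggle with multi-line call arguments like the <code>fields.Nested(...)</code> block, whereas the <code>ast</code> module sees the class body as a list of statements. A sketch under that assumption (the function name is mine):

```python
import ast
import textwrap

def parse_attr_docs(code):
    """Return (attribute, doc) pairs for every class-level assignment.
    A bare string literal right after an assignment is treated as its doc."""
    results = []
    for node in ast.walk(ast.parse(code)):
        if not isinstance(node, ast.ClassDef):
            continue
        body = node.body
        for i, stmt in enumerate(body):
            if isinstance(stmt, ast.Assign) and isinstance(stmt.targets[0], ast.Name):
                doc = ""
                nxt = body[i + 1] if i + 1 < len(body) else None
                if (isinstance(nxt, ast.Expr)
                        and isinstance(nxt.value, ast.Constant)
                        and isinstance(nxt.value.value, str)):
                    doc = textwrap.dedent(nxt.value.value).strip()
                results.append((stmt.targets[0].id, doc))
    return results
```

Because <code>ast.parse</code> only needs syntactically valid code, undefined names like <code>Schema</code> or <code>fields</code> are not a problem.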
|
<python><python-3.x><regex><algorithm><parsing>
|
2024-01-22 16:19:12
| 2
| 1,623
|
gameveloster
|
77,861,151
| 21,185,825
|
Python - azure-devops - AssignedTo field is missing
|
<p>I wrote a python function retrieving the results from an <strong>ADO query</strong></p>
<pre><code>def get_query_results(connection, project_name,query_id):
'''
Gets the result of a query in ADO
'''
results = []
try:
wit_client = connection.clients.get_work_item_tracking_client()
result = wit_client.query_by_id(query_id)
work_items = result.work_items
for work_item in work_items:
work_item_details = wit_client.get_work_item(id=work_item.id, fields=["System.Id", "System.Title","System.AssignedTo"], expand="Relations,Fields")
results.append({"id": work_item_details.fields['System.Id'], "title": work_item_details.fields['System.Title'], "assigned_to": work_item_details.fields['System.AssignedTo']})
except Exception as e:
logger.exception(f'Error fetching repositories from {project_name}: {str(e)}')
return results
</code></pre>
<p>but although there are a lot of available fields like "Assigned To" in the ADO UI, it seems I cannot get the <strong>AssignedTo</strong> field:</p>
<pre><code>results.append({"id": work_item_details.fields['System.Id'], "title": work_item_details.fields['System.Title'], "assigned_to": work_item_details.fields['System.AssignedTo']})
KeyError: 'System.AssignedTo'
</code></pre>
<ul>
<li>How can I fix this?</li>
<li>How can I retrieve the available fields instead of just guessing?</li>
</ul>
<p>thanks for your help</p>
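On the second bullet: <code>work_item_details.fields</code> behaves like a plain dict, so the available keys can be listed instead of guessed, and optional fields (<code>System.AssignedTo</code> is simply absent when a work item is unassigned) can be read with <code>.get()</code>. A library-free sketch of that pattern (the sample dict below is made up, not real ADO output):

```python
# Stand-in for work_item_details.fields on an *unassigned* work item.
fields = {"System.Id": 7, "System.Title": "Fix login bug"}

# List what actually came back rather than guessing key names.
print(sorted(fields.keys()))

# .get() avoids the KeyError when the field is missing.
assigned_to = fields.get("System.AssignedTo", "")
print(assigned_to or "<unassigned>")
```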
|
<python><azure-devops><ado>
|
2024-01-22 15:59:37
| 1
| 511
|
pf12345678910
|
77,861,129
| 9,786,534
|
Get the number of elements returned by a CDSView in Bokeh
|
<p>Given the following MRE, is it possible to access or print the number of elements resulting from the application of the <code>CDSView</code> filter on the <code>ColumnDataSource</code> in Bokeh? In this simple case, I'd like to get the information that 3 rows are returned and, if possible, access the index of <code>ColumnDataSource.data</code> of the filtered rows.</p>
<pre class="lang-py prettyprint-override"><code>from bokeh.models import BooleanFilter, CDSView, ColumnDataSource
from bokeh.plotting import figure, show
source = ColumnDataSource(data=dict(x=[1, 2, 3, 4, 5], y=[1, 2, 3, 4, 5]))
booleansX = [True if x_val < 5 else False for x_val in source.data['x']]
booleansY = [True if y_val > 1 else False for y_val in source.data['y']]
view = CDSView(source=source, filters=[BooleanFilter(booleansX), BooleanFilter(booleansY)])
p_filtered = figure(height=300, width=300)
p_filtered.circle(x="x", y="y", size=10, source=source, view=view)
show(p_filtered)
</code></pre>
<p>Note that this MRE is not the actual real data, I am just hoping to get this insight to debug a code using much larger CDS.</p>
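As a debugging fallback: the <code>BooleanFilter</code> inputs are plain Python lists, and a <code>CDSView</code> shows the rows that pass <em>every</em> filter (a logical AND), so the visible row count and indices can be recomputed outside Bokeh. A sketch using the boolean values the MRE produces:

```python
booleansX = [True, True, True, True, False]   # x_val < 5
booleansY = [False, True, True, True, True]   # y_val > 1

# CDSView intersects its filters, so AND the masks elementwise.
visible = [i for i, (bx, by) in enumerate(zip(booleansX, booleansY))
           if bx and by]
print(len(visible), visible)  # 3 [1, 2, 3]
```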
|
<python><bokeh>
|
2024-01-22 15:55:05
| 0
| 324
|
e5k
|
77,861,106
| 4,597,780
|
FastAPI depdendency factory
|
<p>I want to charge some credits for API usage and came up with the following dependency:</p>
<pre><code>def charge_call(cost: int):
async def _charge_call(
wallet=Depends(get_wallet),
):
await wallet.charge(cost)
return _charge_call
@router.post(
"/some-call",
    dependencies=[Depends(charge_call(10))]
)
</code></pre>
<p>It works great, but here is a problem. Some endpoints charge a different amount of credits based on incoming parameters. I tried to create another factory and use it like this, but it never calls <code>_charge_call</code>:</p>
<pre><code>def charge_test_call(
call_type: str,
):
if call_type == "A":
cost = 1
else:
cost = 2
return charge_call(cost)
@router.post(
"/test-call",
dependencies=[Depends(charge_test_call)]
)
async def test_call(
call_type: str,
):
if call_type == "A":
return "A"
return "B"
</code></pre>
<p>Is it possible to recursively resolve dependencies in FastAPI or I have to redesign how charging works?</p>
|
<python><dependency-injection><fastapi>
|
2024-01-22 15:50:07
| 1
| 1,914
|
Max
|
77,861,097
| 11,932,905
|
Pyspark one-hot encoding with grouping same id
|
<p>Is there a way to perform OHE in Spark and 'flatten' the dataset so that each id has only one row?</p>
<p>For example if input is like this:</p>
<pre><code>+---+--------+
| id|category|
+---+--------+
| 0| a|
| 1| b|
| 2| c|
| 1| a|
| 2| a|
| 0| c|
+---+--------+
</code></pre>
<p>Output should be like this (id0 has categories <code>a</code> and <code>c</code>, id1 has <code>a</code> and <code>b</code>, etc.):</p>
<pre><code>+---+----------+----------+----------+
| id|category_a|category_c|category_b|
+---+----------+----------+----------+
| 0| 1| 1| 0|
| 1| 1| 0| 1|
| 2| 1| 1| 0|
+---+----------+----------+----------+
</code></pre>
<p>I can do this in pandas by OHE + groupby (aggr - 'max'), but can't find a way to do it in pyspark due to the specific output format.</p>
<p>Thank you, appreciate any help.</p>
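For comparison, the pandas route the question refers to can be sketched as one-hot encode then collapse to one row per id (this is a sketch of the pandas side, not Spark code):

```python
import pandas as pd

df = pd.DataFrame({"id": [0, 1, 2, 1, 2, 0],
                   "category": ["a", "b", "c", "a", "a", "c"]})

# One-hot encode, then take the per-id max so each id keeps a 1
# for every category it appeared with.
ohe = (pd.get_dummies(df, columns=["category"])
         .groupby("id").max().astype(int).reset_index())
print(ohe)
```

In PySpark a similar shape can typically be reached with <code>groupBy('id').pivot('category')</code> plus an aggregation and a null-to-0/1 cleanup, though that variant is untested here.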
|
<python><pyspark><one-hot-encoding>
|
2024-01-22 15:49:10
| 2
| 608
|
Alex_Y
|
77,861,071
| 12,696,223
|
Flax JIT error for inherited nn.Module class methods
|
<p>Based on <a href="https://github.com/google/jax/discussions/10598#discussioncomment-2700317" rel="nofollow noreferrer">this answer</a> I am trying to make a class jit compatible by creating a pytree node, but I get:</p>
<pre><code>TypeError: Cannot interpret value of type <class '__main__.TestModel'> as an abstract array; it does not have a dtype attribute
</code></pre>
<p>The error line is in the <code>fit</code> function when calling <code>self.step</code>.
Is there anything wrong with my implementation?</p>
<pre class="lang-py prettyprint-override"><code>import jax
import flax.linen as nn
import optax
from jax.tree_util import register_pytree_node_class
from dataclasses import dataclass
from typing import Callable
def data_loader(X, Y, batch_size):
for i in range(0, len(X), batch_size):
yield X[i : i + batch_size], Y[i : i + batch_size]
@register_pytree_node_class
@dataclass
class Parent(nn.Module):
key: jax.random.PRNGKey
params: dict = None
@jax.jit
def step(self, loss_fn, optimizer, opt_state, x, y):
loss, grads = jax.value_and_grad(loss_fn)(y, self.predict(x))
opt_grads, opt_state = optimizer.update(grads, opt_state)
params = optax.apply_updates(self.params, opt_grads)
return params, opt_state, loss
@jax.jit
def predict(self, x):
return self.apply(self.params, x)
def fit(
self,
X,
Y,
optimizer: Callable,
loss: Callable,
batch_size=32,
epochs=10,
verbose=True,
):
opt_state = optimizer.init(self.params)
self.params = self.init(self.key, X)
history = []
for i in range(epochs):
epoch_loss = 0
for x, y in data_loader(X, Y, batch_size):
self.params, opt_state, loss_value = self.step(
loss, optimizer, opt_state, x, y
)
epoch_loss += loss_value
history.append(epoch_loss / (len(X) // batch_size))
if verbose:
print(f"Epoch {i+1}/{epochs} - loss: {history[-1]}")
return history
def tree_flatten(self):
return (self.params,), None
@classmethod
def tree_unflatten(cls, aux_data, children):
return cls(*children, aux_data)
</code></pre>
<pre class="lang-py prettyprint-override"><code>class TestModel(Parent):
d_hidden: int = 64
d_out: int = 1
@nn.compact
def __call__(self, x):
x = nn.Dense(self.d_hidden)(x)
x = nn.relu(x)
x = nn.Dense(self.d_out)(x)
x = nn.sigmoid(x)
return x
x_train = jax.random.normal(jax.random.PRNGKey(0), (209, 12288))
y_train = jax.random.randint(jax.random.PRNGKey(0), (209, 1), 0, 2)
model = TestModel(key=jax.random.PRNGKey(0))
model.fit(
x_train,
y_train,
optimizer=optax.adam(1e-3),
loss=optax.sigmoid_binary_cross_entropy,
)
</code></pre>
|
<python><jax><flax>
|
2024-01-22 15:45:39
| 0
| 990
|
Momo
|
77,860,523
| 8,283,557
|
Can't check client certs in CherryPy (mTLS)
|
<p>Is mTLS possible on CherryPy 18.9 / Python 3.12?</p>
<p>The server is providing a valid certificate fine and clients are able to validate and connect to it via https without issue.</p>
<p>However, for this particular project, I need to be able to authenticate client certs, so am wondering whether this is implemented, or a known issue in CherryPy. I have generated client certificates but do not know how to configure CherryPy to demand them / make them obligatory - or if it's possible.</p>
<pre><code>cherrypy.config.update({
'server.ssl_certificate_chain': 'ca.cert.pem',
'server.ssl_verify_client': 'force',
'server.ssl_verify_depth': 3
})
</code></pre>
<p>The above just allows any client in regardless.</p>
<p>Any help or ideas much appreciated :)</p>
|
<python><windows><ssl><cherrypy><mtls>
|
2024-01-22 14:21:22
| 0
| 323
|
Crapicus
|
77,860,272
| 1,249,481
|
Proper way to pass a pointer to numpy's structured array from Python to C
|
<p>I want to pass a pointer to numpy's structured array from Python to C. In Python I have:</p>
<pre><code>import numpy as np
import ctypes as ct
so_file = 'test.so'
my_functions = ct.CDLL(so_file)
nobs = 3
intype = np.dtype([('nobs', np.intc), ('vals', np.double, (nobs, ))])
indata = np.empty(0, dtype=intype)
new_element = (nobs, np.array([1.1, 2.2, 3.3]))
indata = np.append(indata, np.array(new_element, dtype=intype))
fun = my_functions.my_c_func
fun.restype = None
fun.argtypes = [np.ctypeslib.ndpointer(intype)]
fun(indata)
</code></pre>
<p>while in C I have:</p>
<pre><code>#include <stdio.h>
typedef struct intype {
int nobs;
double *vals;
} intype;
void my_c_func(intype *indata) {
intype data;
data = indata[0];
printf("%i\n", data.nobs);
printf("%f\n", data.vals[0]);
}
</code></pre>
<p>Printing <code>data.nobs</code> works, while <code>data.vals[0]</code> ends with <code>Segmentation fault (core dumped)</code>. What am I doing wrong?</p>
<p><strong>Update</strong></p>
<p>I was able to achieve the goal by using <code>ctype</code> (without <code>numpy</code>) like this:</p>
<pre><code>class intype(ct.Structure):
_fields_ = [('nobs', ct.c_int), ('vals', ct.POINTER(ct.c_double))]
indata = intype()
indata.nobs = ct.c_int(nobs)
indata.vals = (ct.c_double * nobs)(*[1.1, 2.2, 3.3])
fun.argtypes = [ct.POINTER(intype)]
fun(ct.byref(indata))
</code></pre>
<p>Is it the right way? What are the (dis)advantages of using the first and the second approach?</p>
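One detail worth spelling out for the first approach: in a NumPy structured dtype, a <code>(nobs,)</code> sub-array is stored <em>inline</em> inside each record, while the C struct declares <code>double *vals</code> - a pointer. The C side therefore reinterprets the first eight bytes of the inline doubles as an address, which plausibly explains the segfault. A quick layout check:

```python
import numpy as np

intype = np.dtype([('nobs', np.intc), ('vals', np.double, (3,))])

# With NumPy's default packed layout: 4 bytes for the int followed
# immediately by 3 inline doubles - no pointer anywhere in the record.
print(intype.itemsize)            # 28
print(intype.fields['vals'][1])   # offset 4: inline data, not an address
```

A closer C-side declaration for this data is a fixed array (<code>double vals[3];</code>) rather than a pointer; note that a C compiler will usually pad the struct to align the doubles, so creating the dtype with <code>align=True</code> helps the two sides agree on offsets.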
|
<python><c><numpy><ctypes>
|
2024-01-22 13:38:02
| 2
| 4,663
|
danas.zuokas
|
77,860,036
| 1,232,660
|
Why reading a compressed TAR file in reverse order is 100x slower?
|
<p>First, let's generate a compressed <code>tar</code> archive:</p>
<pre class="lang-py prettyprint-override"><code>from io import BytesIO
from tarfile import TarInfo
import tarfile
with tarfile.open('foo.tgz', mode='w:gz') as archive:
for file_name in range(1000):
file_info = TarInfo(str(file_name))
file_info.size = 100_000
archive.addfile(file_info, fileobj=BytesIO(b'a' * 100_000))
</code></pre>
<p>Now, if I read the archive contents in natural order:</p>
<pre class="lang-py prettyprint-override"><code>import tarfile
with tarfile.open('foo.tgz') as archive:
for file_name in archive.getnames():
archive.extractfile(file_name).read()
</code></pre>
<p>and measure the execution time using the <code>time</code> command, I get less than <strong>1 second</strong> on my PC:</p>
<pre><code>real 0m0.591s
user 0m0.560s
sys 0m0.011s
</code></pre>
<p>But if I read the archive contents in reverse order:</p>
<pre class="lang-py prettyprint-override"><code>import tarfile
with tarfile.open('foo.tgz') as archive:
for file_name in reversed(archive.getnames()):
archive.extractfile(file_name).read()
</code></pre>
<p>the execution time is now around <strong>120 seconds</strong>:</p>
<pre><code>real 2m3.050s
user 2m0.910s
sys 0m0.059s
</code></pre>
<p><strong>Why is that?</strong> Is there some bug in my code? Or is it some <code>tar</code>'s feature? Is it documented somewhere?</p>
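For background, gzip is a strictly sequential format: seeking backwards in the compressed stream means re-decompressing from the start, so each out-of-order <code>extractfile</code> call pays that cost again. Iterating members in archive order avoids it. A small self-contained sketch (in-memory archive, sizes chosen arbitrarily):

```python
import io
import tarfile

# Build a tiny .tgz in memory, then read members strictly in
# on-disk order, which never forces a backwards seek.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as archive:
    for name in range(3):
        info = tarfile.TarInfo(str(name))
        info.size = 10
        archive.addfile(info, io.BytesIO(b"a" * 10))

buf.seek(0)
sizes = []
with tarfile.open(fileobj=buf, mode="r:gz") as archive:
    for member in archive:  # yields members in archive order
        sizes.append(len(archive.extractfile(member).read()))
print(sizes)  # [10, 10, 10]
```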
|
<python><performance><tar>
|
2024-01-22 12:58:58
| 1
| 3,558
|
Jeyekomon
|
77,860,023
| 1,227,922
|
Dynamically add field to a Django Modelform
|
<p>I have read all previous posts regarding this, but none of the solutions work for me; maybe it's because Django has changed something. I want to add fields dynamically to my ModelForm that is rendered in the Django admin. In the future this will be based on fetching some information from the backend to populate the fields, but right now I just want to show a field added in the <code>__init__</code> of the model form, like so:</p>
<pre><code>class PackageFileInlineForm(forms.ModelForm):
class Meta:
model = File
exclude = []
def __init__(self, *args, **kwargs) -> None:
super().__init__(*args, **kwargs)
self._meta.fields += ('test2',)
self.fields['test2'] = forms.CharField()
</code></pre>
<p>However, the field <code>test2</code> is not shown when rendering the form. Why?</p>
<p>I can also add that if I add it directly on the modelform, the field shows in the django admin:</p>
<pre><code>class PackageFileInlineForm(forms.ModelForm):
test2 = forms.CharField()
</code></pre>
<p>Here is the full code for the admin:</p>
<pre><code>admin.site.register(Package, PackageAdmin)
class PackageAdmin(admin.ModelAdmin):
form=PackageForm
inlines=[PackageFileInline]
class PackageForm(forms.ModelForm):
class Meta:
model = Package
fields = '__all__'
class PackageFileInline(admin.StackedInline):
model=File
form=PackageFileInlineForm
extra=0
class PackageFileInlineForm(forms.ModelForm):
class Meta:
model = File
exclude = []
def __init__(self, *args, **kwargs) -> None:
super().__init__(*args, **kwargs)
self._meta.fields += ('test2',)
self.fields['test2'] = forms.CharField()
</code></pre>
|
<python><django>
|
2024-01-22 12:55:12
| 0
| 1,489
|
Andreas
|
77,859,895
| 11,001,493
|
Why am I getting different hash for duplicated files?
|
<p>I need to check for duplicated and corrupted files in a folder with literally millions of files.</p>
<p>First, I was trying this way:</p>
<pre><code>hash_files = pd.DataFrame({"Path":[],
"Hash":[]})
rootdir = "//?/Z:/"
error_files = []
for root, subdir, files in os.walk(rootdir):
print(root)
for filename in files:
# String with file path
source = root + "/" + filename
with open(source, "rb") as file_obj:
# File reading
file_contents = file_obj.read()
file_obj.close()
# Hash identification
md5_hash = hashlib.blake2b(file_contents).hexdigest()
hash_files.loc[len(hash_files) + 1] = [source, md5_hash]
</code></pre>
<p>But the file reading part was taking too long to run on large files (and there are a lot). So I tried another way that seemed way quicker:</p>
<pre><code>hash_files = pd.DataFrame({"Path":[],
"Hash":[]})
rootdir = "//?/Z:/"
error_files = []
for root, subdir, files in os.walk(rootdir):
print(root)
for filename in files:
# String with file path
source = root + "/" + filename
# Hash identification
md5_hash = hashlib.blake2b(source.encode('utf-8')).hexdigest()
hash_files.loc[len(hash_files) + 1] = [source, md5_hash]
</code></pre>
<p>But in this last way, I'm getting different hashes for duplicated files, and I thought they had to be the same.</p>
<p>Does anyone know what is wrong, or how I can get the right hash for all these files in a quicker way?</p>
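For context: the second attempt hashes the <em>path string</em>, which fingerprints the name rather than the contents, so duplicates under different paths hash differently. A chunked-read sketch keeps memory flat on large files without giving up content hashing (the chunk size is an arbitrary choice):

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    # Feed the file to the hash in fixed-size chunks so even
    # multi-gigabyte files never have to fit in memory at once.
    h = hashlib.blake2b()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Identical contents then yield identical digests regardless of file name or location.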
|
<python><hash><md5><hashlib>
|
2024-01-22 12:37:36
| 1
| 702
|
user026
|
77,859,861
| 1,840,524
|
What is the difference between xarray.open_mfdataset(parallel=True) and xarray.concat when using dask arrays?
|
<p>I'm working with a classic data processing workflow using python, i.e. loading a large number of files, pre-processing them, concatenating them and applying a few reductions.</p>
<p>At the moment, I'm using a two-stage workflow with dask:</p>
<ol>
<li>I create a dask <code>Bag</code> to load and preprocess all the input files and export the resulting xarray <code>DataSet</code> with dimensions <code>(time, lat, lon, fileidx)</code> as a zarr file to disk.</li>
<li>I open all the previous zarr files with <code>xarray.open_mfdataset</code> to concatenate them along the <code>fileidx</code> dimension and apply the final reduction.</li>
</ol>
<p>Everything works fine this way and the parallelization applies well to both steps.</p>
<p>I naively tried to simplify this two-step workflow into a single step by using the <code>xarray.concat</code> function directly on the output <code>DataSet</code> of the first step instead of the zarr write/read step, but in all my attempts <code>xarray.concat</code> merged everything using a single worker and thus crashed my calculation due to a memory problem.</p>
<p>So do you know the difference between <code>xarray.concat</code> and <code>xarray.open_mfdataset</code> concatenation? Or maybe how to manage to do it if it is possible? With small data, the result is (it seems at least) the same.</p>
<p>This is how I write and read zarr files currently:</p>
<pre class="lang-py prettyprint-override"><code># Write
import dask.bag as db
my_data = db.from_sequence(input_files)
.map(preprocessing)
.map(to_dataset)
.map(lambda x: x.chunk(None))
my_data.map(lambda x: x.to_zarr(f"{ZARR_STORE_PATH}/part_{x.fileidx.data.item():02}.zarr")).compute()
# Read
xr.open_mfdataset(
Path(ZARR_STORE_PATH).glob("part_*.zarr"),
engine="zarr",
combine="nested",
concat_dim="fileidx",
parallel=True,
)
</code></pre>
<p>And some code I tried:</p>
<pre class="lang-py prettyprint-override"><code>xr.concat(my_data, dim="fileidx")
xr.concat(my_data.to_delayed(), dim="fileidx")
dask.delayed(lambda x: xr.concat(x, dim="fileidx"))(my_data)
</code></pre>
<p>There is also a discussion here: <a href="https://github.com/pydata/xarray/issues/4628" rel="nofollow noreferrer">https://github.com/pydata/xarray/issues/4628</a></p>
|
<python><dask><python-xarray>
|
2024-01-22 12:30:45
| 1
| 1,529
|
Remy F
|
77,859,647
| 136,285
|
Python and support for EUC-KR
|
<p>I am trying to play with string encoding in python 3.10, in particular to demonstrate the <a href="https://archives.miloush.net/michkap/archive/2005/09/17/469941.html" rel="nofollow noreferrer">yen/won/backslash</a> encoding issue.</p>
<p>So the <a href="https://stackoverflow.com/questions/56819615/are-there-correct-encodings-for-the-backslash-and-tilde-characters-in-shift-jis">following behavior</a> (irreversible mapping) makes sense to me:</p>
<pre><code>>>> "¥".encode("shift-jis").decode("shift-jis")
'\\'
</code></pre>
<p>I can also verify with my iconv copy:</p>
<pre><code>$ echo -n "¥" | iconv -f utf-8 -t shift-jis | hexdump
0000000 005c
0000001
</code></pre>
<p>Now I struggle to understand the following behavior:</p>
<pre><code>>>> "₩".encode("euc-kr")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'euc_kr' codec can't encode character '\u20a9' in position 0: illegal multibyte sequence
</code></pre>
<p>While:</p>
<pre><code>$ echo -n "₩" | hexdump
0000000 82e2 00a9
0000003
$ echo -n "₩" | iconv -f utf-8 -t euc-kr | hexdump
0000000 dca3
0000002
$ echo -n "₩" | iconv -f utf-8 -t euc-kr | iconv -f euc-kr -t utf-8 | hexdump
0000000 bfef 00a6
0000003
</code></pre>
<p>My naive understanding of KS X 1001 (registered as ISO-IR 149) was that <code>₩</code> really is <code>\</code> (<a href="https://en.wikipedia.org/wiki/KS_X_1001#Encodings" rel="nofollow noreferrer">*</a>):</p>
<blockquote>
<p>Encoding schemes of KS X 1001 include EUC-KR (in both ASCII and ISO
646-KR based variants, the latter of which includes a won currency
sign (₩) at byte 0x5C rather than a backslash)</p>
</blockquote>
<p>What did I misundertood from <code>KS X 1001</code> and <code>₩</code> ?</p>
<ol>
<li>Why python isn't returning the <code>\</code> symbol ?</li>
<li>Why iconv is returning code <code>dca3</code> (U+FFE6 FULLWIDTH WON SIGN) for <code>₩</code> (U+20A9, WON SIGN) ?</li>
</ol>
<p>For reference:</p>
<pre><code>$ python3 --version
Python 3.10.12
</code></pre>
<p>and</p>
<pre><code>$ iconv --version
iconv (Ubuntu GLIBC 2.35-0ubuntu3.6) 2.35
</code></pre>
|
<python><character-encoding><iconv>
|
2024-01-22 11:49:54
| 1
| 12,719
|
malat
|
77,859,478
| 4,255,096
|
ParseException when reading table in Spark fails
|
<p>I'm trying to read a table in spark using a string path but I get the following error:</p>
<pre><code>spark.read.format("delta").table(path)
Possibly unquoted identifier spark-warehouse detected. Please consider quoting it with back-quotes as `spark-warehouse`(line 1, pos 5)
</code></pre>
<p>The table that I'm trying to read is found on my local machine under <code>spark-warehouse</code>.</p>
<p>I tried the option of adding back-quotes to the path but it failed again giving me a different error</p>
<pre><code>pyspark.sql.utils.AnalysisException: Table or view not found:
'UnresolvedRelation [spark-warehouse/data/temp/myuser/unit_test/_delta/test/table], [], false
</code></pre>
<p>I've checked the path for correctness. For example if I use <code>load</code> method it works fine.</p>
<pre><code>spark.read.format('delta').load(path)
</code></pre>
<p>I wonder if the issue is <code>-</code> in <code>spark-warehouse</code>?</p>
<p>I would like to know why read.table fails.</p>
|
<python><apache-spark><pyspark><delta-lake>
|
2024-01-22 11:22:45
| 0
| 375
|
Geosphere
|
77,859,473
| 525,865
|
Selenium use chrome on Colab got unexpectedly exited - how to fix this?
|
<p>I am trying to get data from a page,</p>
<p>see url = "https://clutch.co/il/it-services"</p>
<p>The website I am trying to scrape has some sort of anti-bot protection with Cloudflare or similar services, hence the scraper needs to use Selenium with a headless browser like Headless Chrome or PhantomJS. Selenium automates a real browser, which can navigate Cloudflare's anti-bot pages just like a human user.</p>
<p>Here's how I use Selenium to imitate real human browser interaction:</p>
<p>but on Google Colab it does not work properly</p>
<pre><code>import pandas as pd
from bs4 import BeautifulSoup
from tabulate import tabulate
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
options.headless = True
driver = webdriver.Chrome(options=options)
url = "https://clutch.co/il/it-services"
driver.get(url)
html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')
# Your scraping logic goes here
company_names = soup.select(".directory-list div.provider-info--header .company_info a")
locations = soup.select(".locality")
company_names_list = [name.get_text(strip=True) for name in company_names]
locations_list = [location.get_text(strip=True) for location in locations]
data = {"Company Name": company_names_list, "Location": locations_list}
df = pd.DataFrame(data)
df.index += 1
print(tabulate(df, headers="keys", tablefmt="psql"))
df.to_csv("it_services_data.csv", index=False)
driver.quit()
</code></pre>
<p><strong>question</strong>: can I use Selenium to imitate a real human browser interaction on Google Colab too? How do I fix the issues I am facing?</p>
<p><strong>see my results</strong>: <a href="https://pastebin.com/FpEDLNiA" rel="nofollow noreferrer">https://pastebin.com/FpEDLNiA</a></p>
<pre><code>SessionNotCreatedException Traceback (most recent call last)
<ipython-input-4-ffdb44a94ddd> in <cell line: 9>()
7 options = Options()
8 options.headless = True
----> 9 driver = webdriver.Chrome(options=options)
10
11 url = "https://clutch.co/il/it-services"
5 frames
/usr/local/lib/python3.10/dist-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
227 alert_text = value["alert"].get("text")
228 raise exception_class(message, screen, stacktrace, alert_text) # type: ignore[call-arg] # mypy is not smart enough here
--> 229 raise exception_class(message, screen, stacktrace)
SessionNotCreatedException: Message: session not created: Chrome failed to start: exited normally.
(session not created: DevToolsActivePort file doesn't exist)
(The process started from chrome location /root/.cache/selenium/chrome/linux64/120.0.6099.109/chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Stacktrace:
#0 0x56d4ca1b8f83 <unknown>
#1 0x56d4c9e71cf7 <unknown>
#2 0x56d4c9ea960e <unknown>
#3 0x56d4c9ea626e <unknown>
#4 0x56d4c9ef680c <unknown>
#5 0x56d4c9eeae53 <unknown>
#6 0x56d4c9eb2dd4 <unknown>
#7 0x56d4c9eb41de <unknown>
#8 0x56d4ca17d531 <unknown>
#9 0x56d4ca181455 <unknown>
#10 0x56d4ca169f55 <unknown>
#11 0x56d4ca1820ef <unknown>
#12 0x56d4ca14d99f <unknown>
#13 0x56d4ca1a6008 <unknown>
#14 0x56d4ca1a61d7 <unknown>
#15 0x56d4ca1b8124 <unknown>
#16 0x79bb253feac3 <unknown>
</code></pre>
<p>btw: see my colab: <a href="https://colab.research.google.com/drive/1WilnQwzDq45zjpJmgdjoyU5wTVAgJqvd#scrollTo=pyd0BcMaPxkJ" rel="nofollow noreferrer">https://colab.research.google.com/drive/1WilnQwzDq45zjpJmgdjoyU5wTVAgJqvd#scrollTo=pyd0BcMaPxkJ</a></p>
|
<python><selenium-webdriver><web-scraping><google-colaboratory>
|
2024-01-22 11:21:59
| 0
| 1,223
|
zero
|
77,859,431
| 1,922,302
|
How to get the index of function parameter list comprehension
|
<p>Gurobipy can apparently read the index of a list comprehension formulated within the parentheses of a function. How does this work? Shouldn't this formulation pass a generator object to the function? How do you read the index from that?</p>
<pre><code> md = gp.Model()
md.addConstrs(True for i in [1,2,5,3])
</code></pre>
<p>The output contains the indices that where used in the list comprehension formulation:</p>
<pre><code>{1: <gurobi.Constr *Awaiting Model Update*>,
2: <gurobi.Constr *Awaiting Model Update*>,
5: <gurobi.Constr *Awaiting Model Update*>,
3: <gurobi.Constr *Awaiting Model Update*>}
</code></pre>
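One plausible mechanism (CPython-specific, and not necessarily what gurobipy does internally): a suspended generator keeps its frame alive, so the consumer can read the loop variable back out of <code>gen.gi_frame.f_locals</code> after each value is yielded. A sketch:

```python
gen = (True for i in [1, 2, 5, 3])

indices = []
for value in gen:
    # While the generator is suspended at its yield point, its frame
    # still holds the current loop variable, so we can read it back.
    indices.append(gen.gi_frame.f_locals["i"])

print(indices)  # [1, 2, 5, 3]
```

So a callee can recover the indices used in the generator expression without the caller ever passing them explicitly.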
|
<python><list-comprehension><generator><gurobi>
|
2024-01-22 11:14:46
| 1
| 953
|
johk95
|
77,859,418
| 7,662,164
|
Custom JVP and VJP for higher order functions in JAX
|
<p>I find custom automatic differentiation capabilities (JVP, VJP) very useful in JAX, but am having a hard time applying it to higher order functions. A minimal example of this sort is as follows: given a higher order function:</p>
<pre><code>def parent_func(x):
def child_func(y):
return x**2 * y
return child_func
</code></pre>
<p>I would like to define custom gradients of <code>child_func</code> with respect to x and y. What would be the correct syntax to achieve this?</p>
|
<python><function><higher-order-functions><jax><autodiff>
|
2024-01-22 11:12:44
| 1
| 335
|
Jingyang Wang
|
77,859,333
| 189,247
|
How to fit text labels on Seaborn log-log scatterplots?
|
<p>My question is how to adjust a Seaborn log-log scatterplot to fit text labels? MWE:</p>
<pre><code>import seaborn as sns
import seaborn.objects as so
df = sns.load_dataset("penguins").sample(frac = 0.1)
df['island'] = 'long names here'
# Arbitrary tweak distributions
df['bill_length_mm'] = df['bill_length_mm'].apply(lambda x: 2.0**(x / 5))
df['bill_depth_mm'] = df['bill_depth_mm'].apply(lambda x: 2.0**((x - 10) / 2))
(
    so.Plot(df, x='bill_length_mm', y='bill_depth_mm', text = 'island')
    .add(so.Dot()).add(so.Text(valign='bottom'))
    .scale(x = 'log', y = 'log')
    .save('test.png')
)
</code></pre>
<p><a href="https://i.sstatic.net/xwqQV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xwqQV.png" alt="enter image description here" /></a></p>
<p>As you can see on the resulting image, the labels get cropped by the edges. This would be relatively easy to fix if I were using linear axes. But how to do it when drawing logarithmic scatter plots?</p>
|
<python><plot><seaborn><scatter-plot><loglog>
|
2024-01-22 10:58:45
| 0
| 20,695
|
Gaslight Deceive Subvert
|
77,859,251
| 6,017,833
|
How do I specify only a finite number of Hypothesis examples
|
<p>I am using Hypothesis to unit test my package. There are some functions in my package that I only want to run a few examples over for testing. I do not want Hypothesis to generate examples for me.</p>
<p>If I do the following:</p>
<pre class="lang-py prettyprint-override"><code>@example(s="hello world")
@example(s="hello world 2")
def test_test(s):
assert len(s) > 0
</code></pre>
<p>and run <code>pytest</code>, PyTest complains that <code>fixture 's' not found</code>. I am not sure what the most elegant solution for this is. Cheers.</p>
|
<python><testing><pytest><python-hypothesis>
|
2024-01-22 10:45:10
| 1
| 1,945
|
Harry Stuart
|
77,859,194
| 3,719,961
|
Capturing arguments passed to a constructor using class based decorator without breaking inheritance
|
<p>Running the code below raises the following exception:</p>
<pre><code>ERROR!
Traceback (most recent call last):
File "<string>", line 22, in <module>
TypeError: MyDecorator.__init__() takes 2 positional arguments but 4 were given
>
</code></pre>
<p>When an object derived from class SubClass is instantiated, I want to access all the arguments given to the constructor from my decorator's code. I am stuck because my current approach breaks inheritance. Could someone steer me in the right direction?</p>
<pre><code>class MyDecorator:
def __init__(self, original_class):
self.original_class = original_class
def __call__(self, *args, **kwargs):
instance = self.original_class(*args, **kwargs)
print(f"Decorator: Creating instance with arguments: {args}, {kwargs}")
return instance
def __getattr__(self, name):
# Pass through attribute access to the original class
return getattr(self.original_class, name)
@MyDecorator
class BaseClass:
def __init__(self, arg1, arg2):
self.arg1 = arg1
self.arg2 = arg2
class SubClass(BaseClass):
def additional_method(self):
print("Additional method in SubClass")
# Creating an instance of the decorated base class
base_instance = BaseClass("foo", arg2="bar")
print(f"arg1: {base_instance.arg1}, arg2: {base_instance.arg2}")
# Creating an instance of the decorated subclass
sub_instance = SubClass("baz", arg2="qux")
print(f"arg1: {sub_instance.arg1}, arg2: {sub_instance.arg2}")
# Accessing additional method in the subclass
sub_instance.additional_method()
</code></pre>
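<p>For reference, a minimal sketch of one way to capture constructor arguments without breaking inheritance: a function-based decorator that patches <code>__init__</code> and returns the original class unchanged (the decorator name below is illustrative):</p>

```python
def log_init_args(cls):
    """Wrap cls.__init__ to report constructor arguments, then return
    the original class so subclassing keeps working normally."""
    original_init = cls.__init__

    def __init__(self, *args, **kwargs):
        print(f"Creating {type(self).__name__} with args={args}, kwargs={kwargs}")
        original_init(self, *args, **kwargs)

    cls.__init__ = __init__
    return cls


@log_init_args
class BaseClass:
    def __init__(self, arg1, arg2):
        self.arg1 = arg1
        self.arg2 = arg2


class SubClass(BaseClass):
    pass


sub = SubClass("baz", arg2="qux")  # the inherited wrapper logs the call
```

<p>Because the decorator returns a real class rather than a wrapper instance, <code>class SubClass(BaseClass)</code> behaves exactly like undecorated inheritance.</p>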
|
<python><python-3.x><inheritance><python-decorators>
|
2024-01-22 10:34:52
| 1
| 373
|
epi.log
|
77,859,076
| 9,488,023
|
Creating a colormap in Python with a specific transition between two regions
|
<p>I am trying to create a colormap in Python for coloring a map of the world. As such, I want there to be a distinct difference in color between ocean and land, and I am trying to create a colormap for this, but I and stuck on how to make the transition between the two work correctly.</p>
<p>My code is something like this:</p>
<pre><code>vmin1 = -12000
vmax1 = 9000
vmax2 = int(np.round(256/(np.abs(vmin1)/np.abs(vmax1)+1)))
vmin2 = int(np.round((256-vmax2)))
cmap = cm.ocean.copy()
colors1 = cmap(np.linspace(0.25, 0.85, vmin2))
cmap = ListedColormap(colors1)
colors = ['yellowgreen', 'goldenrod', 'maroon', 'white']
nodes = [0.0, 0.05, 0.3, 1.0]
cmap = LinearSegmentedColormap.from_list('cm', list(zip(nodes, colors)))
colors2 = cmap(np.linspace(0, 1, vmax2))
cmap = ListedColormap(np.concatenate((colors1, colors2)))
</code></pre>
<p>This creates a nice colormap which looks like this:</p>
<p><a href="https://i.sstatic.net/OPsGC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OPsGC.png" alt="cmap" /></a></p>
<p>This combines two different colorschemes, the built-in 'ocean' colormap and the ['yellowgreen', 'goldenrod', 'maroon', 'white'] scheme that will be used for land. The problem is that this colormap is not centered on a depth of zero, meaning that a lot of shallow ocean is colored as if it was land. Is there a way to make a colormap be centered on a specific value so that everything above zero is colored with one part of the colormap and everything below zero is colored with another? Thanks for any help!</p>
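<p>For reference, Matplotlib ships a norm for exactly this centering problem: <code>TwoSlopeNorm</code> maps a chosen center value to the middle of the colormap, so the two halves of a combined colormap line up with below-zero and above-zero data. A minimal sketch using the question's depth range (the <code>terrain</code> colormap is just a stand-in):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm

# vmin/vcenter/vmax: depth 0 lands exactly at the colormap midpoint
norm = TwoSlopeNorm(vmin=-12000, vcenter=0.0, vmax=9000)

data = np.linspace(-12000, 9000, 100).reshape(10, 10)
plt.imshow(data, cmap="terrain", norm=norm)
plt.colorbar()
```

<p>With this, the hand-built colormap can keep an even 50/50 split between its ocean and land halves and let the norm do the centering.</p>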
|
<python><plot><colors><colormap>
|
2024-01-22 10:18:04
| 1
| 423
|
Marcus K.
|
77,857,999
| 1,578,364
|
Debian packaged python does not have the latest scikit-learn
|
<p>My python program depends on scikit-learn - when I run this on ubuntu I get an an error</p>
<pre><code>sklearn/base.py:299: UserWarning: Trying to unpickle estimator MultinomialNB from version 1.3.0 when using version 1.2.1
</code></pre>
<p>I then reinstalled the package with</p>
<pre><code>pip3 install scikit-learn==1.3.0
</code></pre>
<p>This works perfectly.</p>
<p>Now I run this on RaspberryPi 5, which runs a version of Debian bookworm, (<em>Linux ha 6.1.0-rpi7-rpi-v8 #1 SMP PREEMPT Debian 1:6.1.63-1+rpt1 (2023-11-24) aarch64 GNU/Linux</em>).</p>
<p>I run into the same error, so I try <code>pip3 install scikit-learn==1.3.0</code> - this fails with an error</p>
<pre><code>This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz
</code></pre>
<p>So I try <code>sudo apt install python3-sklearn=1.3.0</code> but this fails with <code>Version '1.3.0' for 'python3-sklearn' was not found</code>.</p>
<p>I then ran</p>
<pre><code>apt-cache policy python3-sklearn
</code></pre>
<p>and got this</p>
<pre><code>python3-sklearn:
Installed: (none)
Candidate: 1.2.1+dfsg-1
Version table:
1.2.1+dfsg-1 500
500 http://deb.debian.org/debian bookworm/main arm64 Packages
500 http://deb.debian.org/debian bookworm/main armhf Packages
</code></pre>
<p>It appears that the Debian managed python does not have any versions of this package after 1.2.1. Am I reading this correctly?</p>
<p>How can I get version 1.3.0 of scikit-learn on the raspberry pi?</p>
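<p>For reference, the usual way around Bookworm's "externally managed environment" restriction is a virtual environment, whose own <code>pip</code> is free to install any PyPI version (paths below are examples):</p>

```shell
# A venv's pip is not subject to Debian's externally-managed restriction
python3 -m venv /tmp/sklearn-env
/tmp/sklearn-env/bin/pip --version
# Then, inside the venv (commented out here only to keep the sketch quick;
# on a Pi this may have to compile from source and take a while):
# /tmp/sklearn-env/bin/pip install scikit-learn==1.3.0
# /tmp/sklearn-env/bin/python your_script.py
```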
|
<python><scikit-learn><raspberry-pi><debian>
|
2024-01-22 06:31:21
| 0
| 327
|
anish
|
77,857,733
| 1,266,109
|
Unable to get contour when bounding boxes are not overlapping
|
<p>I have some sprite sheets. In some cases, the bounding boxes of the sprites are overlapping, even though the sprites themselves are not overlapping:</p>
<p><a href="https://i.sstatic.net/qabZQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qabZQ.png" alt="Overlapping bounding boxes" /></a></p>
<p>In other cases, the bounding boxes are not overlapping:</p>
<p><a href="https://i.sstatic.net/Zv501.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zv501.png" alt="Nonoverlapping bounding boxes" /></a></p>
<p>To extract the individual sprites, I am doing the following:</p>
<pre><code>im = cv2.imread("trees.png") # read image
imGray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) # convert to gray
contours, _ = cv2.findContours(imGray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # contouring
sortedContours = sorted(contours, key=cv2.contourArea, reverse=True) # sorting, not necessary...
for contourIdx in range(0,len(sortedContours)-1): # loop with index for easier saving
contouredImage = im.copy() # copy image
contouredImage = cv2.drawContours(contouredImage, sortedContours, contourIdx, (255,255,255), -1) # fill contour with white
extractedImage = cv2.inRange(contouredImage, (254,254,254), (255,255,255)) # extract white from image
resultImage = cv2.bitwise_and(im, im, mask=extractedImage) # AND operator to get only one filled contour at a time
x, y, w, h = cv2.boundingRect(sortedContours[contourIdx]) # get bounding box
croppedImage = resultImage[y:y + h, x:x + w] # crop
cv2.imwrite("contour_"+str(contourIdx)+".png", croppedImage) # save
</code></pre>
<p>This works great for the former image where the bounding boxes are overlapping, but fails in the latter case where the bounding boxes are not overlapping. Why is that and how can I fix it ?</p>
<p>EDIT:
In the former case, as expected, it detects the individual contours and outputs each separately. But in the latter case, it doesn't seem to detect any individual contours, or rather the whole image is output.</p>
|
<python><opencv><computer-vision><sprite><image-segmentation>
|
2024-01-22 05:07:13
| 1
| 21,085
|
Rahul Iyer
|
77,857,652
| 3,667,693
|
How to resolve pylance "Series[Unknown]" error for dataframe using applymap
|
<p>I've been running into the following pylance error.</p>
<pre><code>Object of type "Series[Unknown]" is not callable
</code></pre>
<p>Sample code to recreate error:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'col1': ['a', 'b', 'c'], 'col2': ['c', 'd', 'e']})
df.applymap(lambda x: 'c' in x) # <-- this is the line which cause the pylance error
</code></pre>
<p>Note that I am using the following:</p>
<ul>
<li>python==3.9.0</li>
<li>pandas==1.5.3</li>
<li>pylance (vscode extension)==v2023.12.1</li>
</ul>
<p>Thank you.</p>
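<p>For reference, Pylance usually infers the lambda's parameter as <code>Unknown</code> here; a named, annotated helper gives it a concrete signature. A minimal sketch (shown with the column-wise <code>apply</code>/<code>map</code> combination, which behaves like <code>applymap</code> and also survives its deprecation in newer pandas):</p>

```python
import pandas as pd

def contains_c(x: str) -> bool:
    """Annotated helper so the type checker sees a concrete signature."""
    return "c" in x

df = pd.DataFrame({"col1": ["a", "b", "c"], "col2": ["c", "d", "e"]})
result = df.apply(lambda col: col.map(contains_c))  # element-wise booleans
```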
|
<python><pandas><pylance>
|
2024-01-22 04:39:31
| 1
| 405
|
John Jam
|
77,857,590
| 1,900,384
|
Limited decoration margin in "constrained" layout?
|
<p>I want to understand the decoration space limits of the <a href="https://matplotlib.org/stable/users/explain/axes/constrainedlayout_guide.html" rel="nofollow noreferrer">"constrained" layout engine</a> of Matplotlib. In my use case, I have to add a lot of decoration (such as different axes ticks and labels) to the right of the plot and I am running into limits that I cannot find documented.</p>
<p>The little test below shows an <code>"x"</code> being moved more and more to the right. Somewhere around <code>x_pos=1.3</code>, constrained starts to move the <code>"x"</code> out of the visible area. Another observation is that a tiny bit of window resize fixes this, i.e. it brings the <code>"x"</code> back to the visible.</p>
<p>Do you have any advice on how to tame the beast?</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
TEST_DECORATION = dict(s="x", horizontalalignment="center", verticalalignment="center")
def decorated_plot(x_pos: float):
"""Create (empty) plot w/ decoration at defined horizontal axes position."""
fig = plt.figure(layout="constrained")
ax = fig.add_subplot()
ax.text(x=x_pos, y=0.5, transform=ax.transAxes, **TEST_DECORATION)
ax.set_title(f"x = {x_pos}")
plt.show()
def main():
# explore the behavior for different values...
for x_pos in [1.1, 1.2, 1.25, 1.3, 1.32, 1.4]:
decorated_plot(x_pos)
if __name__ == "__main__":
main()
</code></pre>
|
<python><matplotlib><layout><axis><decoration>
|
2024-01-22 04:07:40
| 3
| 2,201
|
matheburg
|
77,857,589
| 87,240
|
Avoid jagged edges when filling in quadrilateral using OpenCV
|
<p>I'm trying to fill in an area identified by a quadrilateral on an image, but keep getting jagged lines after performing the filling (it doesn't seem to matter whether I'm filling using an image or just plain black). I thought it was due to a lack of anti-aliasing, so I am using <code>lineType=cv2.LINE_AA</code>, but I can still see the jagged edges.</p>
<p><strong>Q: Is there a way to avoid these jagged edges and produce smooth straight lines?</strong></p>
<p>Below is the code that I'm using:</p>
<pre><code>import cv2
import numpy as np
# Load the mockup image
mockup = cv2.imread('mockup3.png')
# Define the quadrilateral coordinates (x, y) in clockwise order
quad_coords = np.array([
[307.7142857142857, 239.8571428571429],
[300.57142857142856, 875.5714285714286],
[742.3529411764706, 875.2941176470589],
[736.2857142857143, 239.8571428571429]
], dtype=np.float32)
# Create an empty mask to fill with the smoothed quadrilateral region
mask = np.zeros(mockup.shape[:2], dtype=np.uint8)
# Draw a filled smoothed quadrilateral on the mask
cv2.fillPoly(mask, [quad_coords.astype(np.int32)], (255, 255, 255), lineType=cv2.LINE_AA)
# Create a black image of the same size as the mockup
black_background = np.zeros_like(mockup)
# Overlay the black background onto the mockup using the smoothed mask
result = cv2.bitwise_and(mockup, mockup, mask=~mask)
result += cv2.bitwise_and(black_background, black_background, mask=mask)
# Save the final result
cv2.imwrite('output.png', result)
</code></pre>
<p>Input image:
<a href="https://i.sstatic.net/shmiB.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/shmiB.jpg" alt="the input image" /></a></p>
<p>Output image (problematic areas highlighted in red):
<a href="https://i.sstatic.net/TGRdk.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TGRdk.jpg" alt="the output image" /></a></p>
|
<python><numpy><opencv><image-processing>
|
2024-01-22 04:07:32
| 1
| 3,165
|
krasnaya
|
77,857,466
| 5,960,363
|
With Langchain, how can I change the OpenAI API key at runtime?
|
<h3>Context</h3>
<p>I'm working with Langchain Expression Language (LCEL) in Python.</p>
<p>I'm using the <code>ChatOpenAI</code> class (<a href="https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/openai.py#L150" rel="nofollow noreferrer">source code</a>) within a FastAPI API. The code initializes at startup and then the chain is called with various arguments as requests flow in. I want to pass in different api keys at runtime, depending on the task I'm performing. <code>ConfigurableField</code> (<a href="https://python.langchain.com/docs/expression_language/how_to/configure" rel="nofollow noreferrer">docs</a>) allows me to change any config except <code>openai_api_key</code>.</p>
<p><strong>Is there something in the codebase that forbids this? How can I change the <code>openai_api_key</code> at runtime?</strong></p>
<h3>This works</h3>
<pre><code>model = ChatOpenAI(temperature=0).configurable_fields(
temperature=ConfigurableField(
id="llm_temperature",
name="LLM Temperature",
description="The temperature of the LLM",
)
)
model.with_config(configurable={"llm_temperature": 0.9}).invoke("foo_prompt")
</code></pre>
<h3>But this doesn't</h3>
<p>(It uses "placeholderkey" instead of the desired "somevalidapikey")</p>
<pre><code>model = ChatOpenAI(openai_api_key="placeholderkey").configurable_fields(
openai_api_key=ConfigurableField(
id="openai_api_key",
name="API Key",
description="The API key used when making a call to OpenAI",
)
)
model.with_config(configurable={"openai_api_key": somevalidapikey}).invoke("foo_prompt")
</code></pre>
<h3>Possible root cause?</h3>
<p>A potential culprit for the <code>openai_api_key</code>'s silent exclusion would be <code>lc_secrets</code> which makes <code>openai_api_key</code> non-serializable, but I don't know if that actually has an impact.</p>
<p>From the <a href="https://github.com/langchain-ai/langchain/blob/5396604ef4822abdc812baadec456af072d5f592/libs/community/langchain_community/chat_models/openai.py#L166C1-L169C1" rel="nofollow noreferrer">ChatOpenAI class</a>:</p>
<pre class="lang-py prettyprint-override"><code> @property
def lc_secrets(self) -> Dict[str, str]:
return {"openai_api_key": "OPENAI_API_KEY"}
</code></pre>
<h5>I believe the relevant code is here:</h5>
<p><a href="https://github.com/langchain-ai/langchain/blob/5396604ef4822abdc812baadec456af072d5f592/libs/community/langchain_community/chat_models/openai.py#L150" rel="nofollow noreferrer">Source: ChatOpenAI Class</a></p>
<h5>and here:</h5>
<p><a href="https://github.com/langchain-ai/langchain/blob/5396604ef4822abdc812baadec456af072d5f592/libs/community/langchain_community/chat_models/openai.py#L150" rel="nofollow noreferrer">Source: RunnableSerializable</a></p>
<p>Thank you for any support - much appreciated!</p>
|
<python><fastapi><openai-api><langchain><py-langchain>
|
2024-01-22 03:02:39
| 1
| 852
|
FlightPlan
|
77,857,275
| 4,580,217
|
Return Self of different generic
|
<p>In Python typings, am I able to express something like</p>
<pre class="lang-py prettyprint-override"><code>A = TypeVar('A')
B = TypeVar('B')
class Foo(Generic[A]):
def map(self, f: Callable[[A], B]) -> Self[B]:
fields = dataclasses.fields(type(self))
# do some transformations...
mapped_fields = ...
return (type(self))(**mapped_fields)
@dataclass
class Bar(Foo[A]):
x: A
@dataclass
class Qux(Foo[A]):
y: A
z: Bar[A]
w: Bar[A]
b = Qux("hello", Bar("world"), Bar("!"))
c = b.map(lambda str: len(str))
reveal_type(c) # Qux[int]
c # Qux(5, Bar(5), Bar(1))
</code></pre>
<p>With <code>Self[B]</code> I get the error that <code>Self</code> cannot be parameterized. I'm willing to change the definitions of <code>Foo</code> and <code>Bar</code>/<code>Qux</code> (if I need to add more <code>TypeVar</code>s to make this work), but as I have a dozen or so <code>Bar</code>/<code>Qux</code>s I would prefer not to duplicate <code>map</code> across all of them.</p>
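<p>For reference: <code>Self</code> indeed cannot be parameterized. A common compromise is to annotate <code>map</code> on the base as returning <code>Foo[B]</code>; the static type loses the concrete subclass, but the runtime value is still built from the subclass's own dataclass fields, so <code>map</code> does not need duplicating. A minimal sketch (the recursive handling of nested <code>Bar</code> fields from the question is omitted):</p>

```python
import dataclasses
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

A = TypeVar("A")
B = TypeVar("B")


class Foo(Generic[A]):
    def map(self, f: Callable[[A], B]) -> "Foo[B]":
        # Rebuild the *concrete* subclass from its mapped fields
        mapped = {
            field.name: f(getattr(self, field.name))
            for field in dataclasses.fields(self)  # type: ignore[arg-type]
        }
        return type(self)(**mapped)


@dataclass
class Bar(Foo[A]):
    x: A


b = Bar("hello").map(len)  # runtime type is Bar; statically Foo[int]
```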
|
<python><python-typing>
|
2024-01-22 01:42:16
| 1
| 872
|
Aly
|
77,857,199
| 14,740,191
|
comparing sets to multiple sets within a loop
|
<p>Is it possible to compare a set to multiple sets with a for loop? I am trying to see if there is a match when comparing against multiple sets.
I want to create a function that allows comparison against any number of sets.</p>
<pre><code>ex1 = {'a','b','c'}
ex2 = {'a','b','c','d'}
ex3 = {'a','b','c','d','e'}
list_of_sets = [ex1,ex2,ex3]
my_dict = {
"person1": ['a','b','c'],
"person2": ['a','b','c','d'],
"person3": ['a','b','c','d','e'],
"person4": ['a'],
"person5": ['x','y','z']
}
# this currently works for comparing a hard coded set
output = {}
for k,v in my_dict.items():
if set(v) == ex1 or set(v) == ex2 or set(v) == ex3:
output[k] = True
# something like this doesn't work:
output2 = {}
for k,v in my_dict.items():
if any(set(v).issubset(s) for s in list_of_sets):
output2[k] = True
</code></pre>
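<p>For reference: the hard-coded version tests set <em>equality</em>, while <code>issubset</code> also matches strict subsets -- with it, <code>person4</code> (<code>['a']</code>) would match <code>ex1</code>. Using <code>==</code> inside <code>any()</code> reproduces the original behavior for any number of sets; a minimal sketch with the question's data:</p>

```python
def matches_any(values, sets_to_check):
    """True when values, taken as a set, equals one of the given sets."""
    v = set(values)
    return any(v == s for s in sets_to_check)

ex1 = {"a", "b", "c"}
ex2 = {"a", "b", "c", "d"}
ex3 = {"a", "b", "c", "d", "e"}
list_of_sets = [ex1, ex2, ex3]

my_dict = {
    "person1": ["a", "b", "c"],
    "person2": ["a", "b", "c", "d"],
    "person3": ["a", "b", "c", "d", "e"],
    "person4": ["a"],
    "person5": ["x", "y", "z"],
}

output = {k: True for k, v in my_dict.items() if matches_any(v, list_of_sets)}
```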
|
<python><python-3.x>
|
2024-01-22 01:08:47
| 1
| 441
|
kjay
|
77,857,096
| 5,269,749
|
How to show the results and confidence interval values together?
|
<p>I have two <code>10x10</code> matrices.
One of them is for main results and the other one is for confidence interval (CI) of the corresponding element in the main results matrix.</p>
<p>I am wondering how I can show both of them</p>
<p><img src="https://latex.codecogs.com/svg.image?&space;results%5Cpm&space;CI" alt="equation" /></p>
<p>in one matrix in a pretty way?</p>
<p>right now I have each of them as the following</p>
<pre><code>x, y = np.random.rand(2, 1000) * 1
fig, ax = plt.subplots(figsize=(12, 10))
hist, xedges, yedges = np.histogram2d(
x, y, bins=10, range=[[0, 1], [0, 1]]
)
xedges = np.round(xedges, 2)
yedges = np.round(yedges, 2)
ax = sns.heatmap(
hist.T, annot=True, fmt='6.6g', linewidths=0.5, linecolor='b', center=True
)
ax.set_xticks(np.arange(len(xedges)), xedges)
ax.set_yticks(np.arange(len(yedges)), yedges, rotation=0)
ax.set(title='Results')
ax.invert_yaxis()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/LLO68.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LLO68.png" alt="enter image description here" /></a></p>
<p>and</p>
<pre><code>x, y = np.random.rand(2, 1000) * 1
fig, ax = plt.subplots(figsize=(12, 10))
hist_ci, xedges, yedges = np.histogram2d(
x, y, bins=10, range=[[0, 1], [0, 1]]
)
xedges = np.round(xedges, 2)
yedges = np.round(yedges, 2)
ax = sns.heatmap(
hist_ci.T, annot=True, fmt='6.6g', linewidths=0.5, linecolor='b', center=True
)
ax.set_xticks(np.arange(len(xedges)), xedges)
ax.set_yticks(np.arange(len(yedges)), yedges, rotation=0)
ax.set(title='CI')
ax.invert_yaxis()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/YSut3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4d9yy.png" alt="enter image description here" /></a></p>
<p>I want to know if I can display both of them in one figure, with a plus/minus sign between them.</p>
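<p>For reference, <code>sns.heatmap</code> accepts an array of pre-formatted strings via <code>annot</code> (together with <code>fmt=""</code>), so the two matrices can be combined into one "result ± CI" label per cell. A minimal sketch of building that string array (the seaborn call itself is shown as a comment):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
results = rng.random((10, 10)) * 100  # stand-ins for the two matrices
ci = rng.random((10, 10)) * 5

# One "result ± CI" string per cell
annot = np.array([
    [f"{r:.1f}\u00b1{c:.1f}" for r, c in zip(res_row, ci_row)]
    for res_row, ci_row in zip(results, ci)
])

# Then color by the results and label with the combined strings:
# sns.heatmap(results, annot=annot, fmt="")
```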
|
<python><numpy><matplotlib><seaborn>
|
2024-01-22 00:14:44
| 1
| 1,264
|
Alex
|
77,857,091
| 1,914,781
|
convert string to datetime works on linux but fails on windows
|
<p>The code below works fine on Ubuntu but fails on Windows 10.</p>
<pre><code>import datetime
def main():
s1 = "Sat Jan 20 08:13:13 CST 2024"
dt = datetime.datetime.strptime(s1, '%a %b %d %H:%M:%S %Z %Y')
print(dt)
main()
</code></pre>
<p>from ubuntu:</p>
<pre><code>2024-01-20 08:13:13
</code></pre>
<p>But from win10:</p>
<pre><code> raise ValueError("time data %r does not match format %r" %
ValueError: time data 'Sat Jan 20 08:13:13 CST 2024' does not match format '%a %b %d %H:%M:%S %Z %Y'
</code></pre>
<p>python version is 3.8.5.</p>
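<p>For reference: <code>%Z</code> matching of abbreviations like <code>CST</code> is platform- and locale-dependent (CPython only reliably accepts the local zone's own names plus UTC/GMT), which is why the same code diverges between Ubuntu and Windows. A portable workaround is to strip the abbreviation before parsing and handle it separately if needed:</p>

```python
from datetime import datetime

s1 = "Sat Jan 20 08:13:13 CST 2024"
parts = s1.split()
tz_abbrev = parts.pop(4)  # "CST" -- attach a real tzinfo yourself if needed
dt = datetime.strptime(" ".join(parts), "%a %b %d %H:%M:%S %Y")
```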
|
<python>
|
2024-01-22 00:10:58
| 0
| 9,011
|
lucky1928
|
77,857,073
| 11,621,983
|
Cannot redirect crontab output to file
|
<p>I am on a raspberry pi 5 model B rev 1.0, and am trying to redirect the contents of this simple python program to a file:</p>
<pre class="lang-py prettyprint-override"><code>import time
import sys
while True:
time.sleep(1)
sys.stdout.write("out\n")
sys.stderr.write("err\n")
</code></pre>
<p>I then added this run statement via <code>sudo crontab -e</code>:</p>
<pre><code>@reboot /usr/bin/python3 /home/pi/test.py >> /home/pi/cronlog.log
</code></pre>
<p>However, after restarting my computer, the only logs that I see in cronlog.log are the "err" logs.</p>
<p>I also tried this variation:</p>
<pre><code>@reboot /usr/bin/python3 /home/pi/test.py >> /home/pi/cronlog.log 2>&1
</code></pre>
<p>and also even tried specifying the log file specifically:</p>
<pre><code>@reboot /usr/bin/python3 /home/pi/test.py >> /home/pi/cronlog.log 2>> /home/pi/cronerr.log
</code></pre>
<p>Neither worked, only printing out the "err" message into the cronlog.log file.</p>
<p>As a last resort, I tried:</p>
<pre><code>@reboot /bin/bash -c "/usr/bin/python3 /home/pi/test.py >> /home/py/cronlog.log"
</code></pre>
<p>And... it still only printed out the err logs.</p>
<p>Any help? Thanks!</p>
<p>As a note, I have tried all of these commands in my normal bash instance, and they work perfectly as expected.</p>
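<p>For reference, this symptom (only the <code>err</code> lines arrive) is the classic stdout-buffering behavior: when stdout is redirected to a file, Python block-buffers it, and the infinite loop never fills or flushes the buffer, while stderr is not block-buffered and gets through. Running the interpreter with <code>-u</code> (or calling <code>sys.stdout.flush()</code> after each write) should fix it; a sketch of the crontab line:</p>

```shell
@reboot /usr/bin/python3 -u /home/pi/test.py >> /home/pi/cronlog.log 2>&1
```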
|
<python><python-3.x><cron><raspberry-pi>
|
2024-01-22 00:01:45
| 1
| 382
|
unfestive chicken
|
77,856,978
| 2,125,671
|
How to keep the memory used by SQLAlchemy constant
|
<p>I have this Python script :</p>
<pre><code>from typing import Any, Tuple
import sys
def find_memory_usage(insertion: bool):
from sqlalchemy import create_engine, select, String
from sqlalchemy.orm import MappedAsDataclass, DeclarativeBase, declared_attr
from sqlalchemy.orm import Mapped, mapped_column
import random
import string
from sqlalchemy.orm import sessionmaker
from datetime import datetime
import gc
import psutil
class Base(MappedAsDataclass, DeclarativeBase):
__abstract__ = True
@declared_attr
def __tablename__(self) -> Any:
return self.__name__
class User(Base):
id: Mapped[int] = mapped_column(init=False, name="ID", primary_key=True)
name: Mapped[str] = mapped_column(String(16), name="Name", nullable=False)
description: Mapped[str] = mapped_column(
String(256), name="Description", nullable=False
)
engine = create_engine("postgresql+psycopg2://postgres:password@localhost/postgres")
Base.metadata.create_all(engine, tables=[User.__table__])
# Function to generate a random string
def random_string(length):
letters = string.ascii_letters
return "".join(random.choice(letters) for i in range(length))
# Set up the session maker
Session = sessionmaker(bind=engine)
with Session() as session:
if insertion:
# Insert 100,000 User rows with random data
for _ in range(100000):
user = User(
name=random_string(16), # 10 characters long names
description=random_string(128), # 128 characters long descriptions
)
session.add(user)
session.commit()
print("Data insertion complete.")
return
print(datetime.now(), psutil.Process().memory_info().rss / (1024 * 1024))
number = (int(sys.argv[1]) if len(sys.argv) > 1 else 100000)
for row in session.scalars(select(User).fetch(number)):
pass
session.commit()
session.expire_all()
session.expunge_all()
session.close()
del session
gc.collect()
print(datetime.now(), psutil.Process().memory_info().rss / (1024 * 1024))
# find_memory_usage(True) # I have done this once already to insert data.
find_memory_usage(False)
</code></pre>
<p>If I run with <code>python test.py 1000</code>, I got result:</p>
<pre><code>2024-01-21 22:59:37.291944 48.23828125
2024-01-21 22:59:37.321317 49.86328125
</code></pre>
<p>If I run with <code>python test.py 100000</code>, I got result:</p>
<pre><code>2024-01-21 22:59:51.152666 47.8984375
2024-01-21 22:59:52.477458 73.06640625
</code></pre>
<p>We can see that the increase in memory usage depends on <code>number</code> in the script.</p>
<p>The relevant section of code is this :</p>
<pre><code> number = (int(sys.argv[1]) if len(sys.argv) > 1 else 100000)
for row in session.scalars(select(User).fetch(number)):
pass
</code></pre>
<p>As it processes the result row by row, I would expect the memory usage to stay constant.</p>
<p>How (if possible) should I change this part <code>session.scalars(select(User).fetch(number))</code> to achieve this goal.</p>
|
<python><memory-management><sqlalchemy>
|
2024-01-21 23:18:10
| 1
| 27,618
|
Philippe
|
77,856,969
| 1,873,897
|
AzureDefaultCredentials fail with Authorization Code: None
|
<p>Tried <a href="https://stackoverflow.com/questions/75110869/how-to-check-which-credential-azure-python-sdk-class-defaultazurecredential-is-u">this</a> question.
I'm running this code locally in Visual Studio Code, and am logged in using <code>az login</code>. I've also tried <code>aad_credentials = AzureCliCredential()</code> with the same result. Left out the details of creating the group.</p>
<pre><code>aad_credentials = DefaultAzureCredential()
URI = environ.get("DPS_ENDPOINT")
client = DeviceProvisioningClient(endpoint=URI, credential=aad_credentials)
group_id = 'symetric_group_test'
enrollment_group = {
"enrollmentGroupId": group_id,
"attestation": {
"type": "symmetricKey"
}
}
client.enrollment_group.create_or_update( id=group_id, enrollment_group=enrollment_group
</code></pre>
<p>Response:</p>
<blockquote>
<p>azure.core.exceptions.ClientAuthenticationError: (None) Authorization failed for the request<br />
Code: None<br />
Message: Authorization failed for the request</p>
</blockquote>
<p>adding <code>logging_enable=True</code> did not provide any output.</p>
<p>Anything hints or directions on how to debug?</p>
<p>I get the same error in Azure Cloud Shell.</p>
<p>This might be the issue. I have permissions in portal, but in CLIs:</p>
<pre><code>az role assignment list --assignee <username>
[]
</code></pre>
|
<python><microsoft-entra-id><azure-python-sdk>
|
2024-01-21 23:12:57
| 1
| 1,159
|
MikeF
|
77,856,952
| 525,865
|
Running a script in Google Colab - throws back no results
|
<p>I have issues running a tiny scraper in Google Colab. While we are able to run the code in Google Colab notebooks, the real issue is that the clutch.co page we are trying to scrape uses Cloudflare, which can detect Selenium. There is a workaround now (see below).</p>
<p>So far so good: with the little code below we are able to scrape the data. Here, we actually don't need to use Selenium as the data is already baked right into the HTML when we go to the webpage.</p>
<pre><code>%pip install -q curl_cffi
%pip install -q fake-useragent
%pip install -q lxml
from curl_cffi import requests
from fake_useragent import UserAgent
# we need to take care for this: https://pypi.org/project/fake-useragent/
ua = UserAgent()
headers = {'User-Agent': ua.safari}
resp = requests.get('https://clutch.co/pt/it-services', headers=headers, impersonate="safari15_3")
resp.status_code
# I like to use this to verify the contents of the request
from IPython.display import HTML
HTML(resp.text)
from lxml.html import fromstring
tree = fromstring(resp.text)
data = []
for company in tree.xpath('//ul/li[starts-with(@id, "provider")]'):
data.append({
"name": company.xpath('./@data-title')[0].strip(),
"location": company.xpath('.//span[@class = "locality"]')[0].text,
"wage": company.xpath('.//div[@data-content = "<i>Avg. hourly rate</i>"]/span/text()')[0].strip(),
"min_project_size": company.xpath('.//div[@data-content = "<i>Min. project size</i>"]/span/text()')[0].strip(),
"employees": company.xpath('.//div[@data-content = "<i>Employees</i>"]/span/text()')[0].strip(),
"description": company.xpath('.//blockquote//p')[0].text,
"website_link": (company.xpath('.//a[contains(@class, "website-link__item")]/@href') or ['Not Available'])[0],
})
import pandas as pd
from pandas import json_normalize
df = json_normalize(data, max_level=0)
df
</code></pre>
<p>At the moment I wonder why the XPath does not return more results on this website. I suspect I need to refine this tiny part of the script and write a more precise XPath expression for this entity.</p>
|
<python><pandas><web-scraping><beautifulsoup>
|
2024-01-21 23:06:42
| 1
| 1,223
|
zero
|
77,856,609
| 447,860
|
Avoid circular imports with generic types in python
|
<p>Often circular imports stemming from type checking can be avoided by skipping runtime imports based on <code>typing.TYPE_CHECKING</code></p>
<p>However, it appears that generic types require evaluation at runtime, for example, the follow code does not run in 3.11.</p>
<p>Note that this is different from non-generic type hints, which work fine.</p>
<p><em>mwe1.py</em></p>
<pre><code>from typing import TYPE_CHECKING
if TYPE_CHECKING:
from .mwe2 import A
class B:
def get_as(self) -> list[A]:
return []
</code></pre>
<p><em>mwe2.py</em></p>
<pre><code>from typing import TYPE_CHECKING
if TYPE_CHECKING:
from .mwe1 import B
class A:
def get_bs(self) -> list[B]:
return []
</code></pre>
<p>is there a strategy for avoiding circular imports in this case?</p>
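<p>For reference, one strategy that covers the generic case is PEP 563's postponed evaluation: with <code>from __future__ import annotations</code> at the top of each module, every annotation (including <code>list[A]</code>) is stored as a string and never evaluated at runtime, so the <code>TYPE_CHECKING</code>-guarded imports suffice. A single-file sketch of the idea:</p>

```python
from __future__ import annotations

# With postponed evaluation, "list[A]" below is stored as a string and is
# never looked up at runtime -- A may exist only for the type checker.

class B:
    def get_as(self) -> list[A]:
        return []

class A:
    def get_bs(self) -> list[B]:
        return []
```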
|
<python><python-3.x><types><circular-dependency>
|
2024-01-21 20:54:11
| 0
| 1,958
|
Lucas
|
77,856,584
| 959,306
|
Scikit-Learn's `_openmp_effective_n_threads()` returns 1, causing HistGradientBoostingRegressor to run on only 1 core
|
<p>the title pretty much says it all. I am troubleshooting why sklearn model is only running on 1 cpu core, when usually it will run on as many cores as I have available. I found behind the scenes it is setting the <code>n_threads</code> attribute with this convenience method from Scikit-Learn: <code>_openmp_effective_n_threads</code> (<a href="https://github.com/scikit-learn/scikit-learn/blob/897c0c570511be4b7912a335052ed479ac5ca1f3/sklearn/utils/_openmp_helpers.pyx#L21" rel="nofollow noreferrer">source code</a>).</p>
<p>So I fired up a fresh python kernel in my desired environment, called this function, and found it returns 1. This is what is limiting my program. Why is this function returning 1 here, and is there something I can do so that it sees the many CPUs I actually have available?</p>
<p>Just seeing this comment in the comments of the source code:</p>
<blockquote>
<p>If scikit-learn is built without OpenMP support, always return 1.</p>
</blockquote>
<p>Here is a better example of the symptoms:</p>
<pre><code>Python 3.12.0 | packaged by Anaconda, Inc. | (main, Oct 2 2023, 17:29:18) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from sklearn.utils._openmp_helpers import _openmp_effective_n_threads, _openmp_parallelism_enabled
>>> _openmp_parallelism_enabled()
False
</code></pre>
<p>I feel like there is a version mismatch somewhere because I am installing a custom fork of Scikit-Learn that I have not been keeping up to date. On another computer I have it working with Python version 3.8, but I tried to install this version of sklearn in a new computer, and it always seems to have this problem of no parallel support.</p>
<p>I think my question comes down to this command: <code>_openmp_parallelism_enabled()</code>. I need to figure out why this is returning false.</p>
<p>Thanks.</p>
|
<python><multithreading><scikit-learn><cpu>
|
2024-01-21 20:47:41
| 0
| 18,324
|
jeffery_the_wind
|
77,856,369
| 1,245,659
|
Dag error asking for Task Group that is not asked for
|
<p>I'm experienced in Airflow 1.10.14. I'm now trying to learn Airflow 2, and am having difficulty running DAGs from one to the next. I'm hoping someone can look and tell me what I'm doing wrong.</p>
<p><strong>Here's my DAG</strong></p>
<pre><code>@dag(
"FILE_DB_PRODUCT_TABLES",
schedule_interval='@daily',
start_date=datetime(2022, 11, 1),
catchup=False,
tags=['FILES', 'PRODUCT'],
template_searchpath='/home/airflow/airflow',
default_args={
'email': ['email@gmail.com'],
'email_on_failure': True,
'email_on_success': False
}
)
def FILE_DB_PRODUCT_TABLES():
check_for_file = BranchPythonOperator(
task_id='Check_FTP_and_Download',
provide_context=True,
python_callable=GetFiles
)
PrepareFiles = BashOperator(
task_id='Prepare_Files_For_GCS',
bash_command=SendPRODUCT,
dag=dag,
)
load_File_to_PRODUCT_RAW = GCSToBigQueryOperator(
task_id='PRODUCT_GCS_to_GDB_Raw',
bucket='PRODUCT_files',
source_objects=['To_Process/*.txt'],
destination_project_dataset_table='PRODUCT.PRODUCT_RAW',
schema_fields=[
{'name': 'Field1', 'type': 'STRING', 'mode': 'NULLABLE'},
{'name': 'Field2', 'type': 'STRING', 'mode': 'NULLABLE'},
{'name': 'Field3', 'type': 'STRING', 'mode': 'NULLABLE'},
{'name': 'Field4', 'type': 'STRING', 'mode': 'NULLABLE'},
{'name': 'Field5', 'type': 'STRING', 'mode': 'NULLABLE'},
{'name': 'Field6', 'type': 'STRING', 'mode': 'NULLABLE'},
],
write_disposition='WRITE_TRUNCATE',
google_cloud_storage_conn_id='CLOUD_STORAGE_Staging',
bigquery_conn_id='CLOUD_STORAGE_Staging',
skip_leading_rows=1,
soft_fail=True,
quote_character="",
field_delimiter='\x1f')
# [END howto_operator_gcs_to_DB]
Set_Staging = BigQueryExecuteQueryOperator(
task_id='Set_PRODUCT_Staging',
DBl='Select * from `STORAGE-stg-254212.PRODUCT.VWE_PRODUCT_RAW_TO_STAGE`;',
bigquery_conn_id='CLOUD_STORAGE_Staging',
use_legacy_sql=False,
write_disposition='WRITE_TRUNCATE',
create_disposition='CREATE_IF_NEEDED',
destination_dataset_table='STORAGE-stg-254212.PRODUCT.PRODUCT_STAGE'
)
Populate_Properties = BigQueryExecuteQueryOperator(
task_id='Update_Properties_Table',
DBl='./SQL/PRODUCT/PRODUCT.sql',
bigquery_conn_id='CLOUD_STORAGE_Staging',
use_legacy_sql=False,
write_disposition='WRITE_APPEND',
create_disposition='CREATE_IF_NEEDED'
)
#
#
Populate_Properties_Details = BigQueryExecuteQueryOperator(
task_id='Update_Properties_Detail_Table',
DBl='./SQL/PRODUCT/PRODUCT_DETAIL.sql',
bigquery_conn_id='CLOUD_STORAGE_Staging',
use_legacy_sql=False,
write_disposition='WRITE_APPEND',
create_disposition='CREATE_IF_NEEDED'
)
Populate_Commercial = BigQueryExecuteQueryOperator(
task_id='Update_COMMERCIAL_Table',
DBl='./SQL/PRODUCT/PRODUCT_COMMERCIAL.sql',
bigquery_conn_id='CLOUD_STORAGE_Staging',
use_legacy_sql=False,
write_disposition='WRITE_APPEND',
create_disposition='CREATE_IF_NEEDED'
)
archive_PRODUCT_files = GCSToGCSOperator(
task_id='Archive_PRODUCT_Files',
source_bucket='PRODUCT_files',
source_object='To_Process/*.txt',
destination_bucket='PRODUCT_files',
destination_object='Archive/',
move_object=True,
google_cloud_storage_conn_id='CLOUD_STORAGE_Staging'
)
#
finished = BashOperator(
task_id='Cleanup_and_Finish',
bash_command='rm -rf {}'.format(Variable.get("temp_directory") + "PRODUCT/*.*"),
dag=dag,
)
check_for_file >> PrepareFiles >> load_File_to_PRODUCT_RAW >> Set_Staging >> Populate_Properties >> Populate_Properties_Details >> archive_PRODUCT_files >> finished
Set_Staging >> Populate_Commercial >> archive_PRODUCT_files
check_for_file >> finished
dag = FILE_DB_PRODUCT_TABLES()
</code></pre>
<p>This is the error it's giving me:</p>
<pre><code>Broken DAG: [/home/airflow/airflow/dags/FILE_BQ_ATTOM_TABLES.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 376, in apply_defaults
task_group = TaskGroupContext.get_current_task_group(dag)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/task_group.py", line 490, in get_current_task_group
return dag.task_group
AttributeError: 'function' object has no attribute 'task_group'
</code></pre>
<p>The problem is that I'm not asking for a task group. Where do I look for the error here?
Thanks!</p>
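<p>A plain-Python sketch of one likely reading of the traceback (using a hypothetical stand-in decorator, and assuming <code>dag</code> was imported via <code>from airflow.decorators import dag</code>): inside the decorated function's body, the bare name <code>dag</code> still refers to the imported decorator, which is a plain function, so <code>dag=dag</code> in the <code>BashOperator</code> calls hands Airflow a function instead of a DAG, and the lookup of <code>dag.task_group</code> fails. With the TaskFlow <code>@dag</code> decorator, operators created inside the function are attached to the DAG automatically, so the <code>dag=dag</code> arguments can simply be dropped.</p>

```python
# Hypothetical stand-in for `from airflow.decorators import dag`; the real
# decorator builds a DAG object, but the name-resolution issue is the same.
def dag(**dag_kwargs):
    def decorator(fn):
        return fn
    return decorator

@dag(schedule_interval="@daily")
def FILE_DB_PRODUCT_TABLES():
    # Inside the body, the bare name `dag` is still the decorator function,
    # so `dag=dag` passes a function where Airflow expects a DAG object.
    return hasattr(dag, "task_group")

print(FILE_DB_PRODUCT_TABLES())  # False: a function has no .task_group attribute
```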
|
<python><airflow><directed-acyclic-graphs>
|
2024-01-21 19:37:15
| 1
| 305
|
arcee123
|
77,856,367
| 525,865
|
BeautifulSoup - parsing on the clutch.co site and adding the rules and regulations of the robot
|
<p>I want to use <code>Python</code> with <code>BeautifulSoup</code> to scrape information from the Clutch.co website.</p>
<p>I want to collect data from companies that are listed on clutch.co. Let's take, for example, the IT agencies from Israel that are visible on clutch.co:</p>
<p><a href="https://clutch.co/il/agencies/digital" rel="nofollow noreferrer">https://clutch.co/il/agencies/digital</a></p>
<p>My approach:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import time
def scrape_clutch_digital_agencies(url):
# Set a User-Agent header
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
}
# Create a session to handle cookies
session = requests.Session()
# Check the robots.txt file
robots_url = urljoin(url, '/robots.txt')
robots_response = session.get(robots_url, headers=headers)
# Print robots.txt content (for informational purposes)
print("Robots.txt content:")
print(robots_response.text)
# Wait for a few seconds before making the first request
time.sleep(2)
# Send an HTTP request to the URL
response = session.get(url, headers=headers)
# Check if the request was successful (status code 200)
if response.status_code == 200:
# Parse the HTML content of the page
soup = BeautifulSoup(response.text, 'html.parser')
# Find the elements containing agency names (adjust this based on the website structure)
agency_name_elements = soup.select('.company-info .company-name')
# Extract and print the agency names
agency_names = [element.get_text(strip=True) for element in agency_name_elements]
print("Digital Agencies in Israel:")
for name in agency_names:
print(name)
else:
print(f"Failed to retrieve the page. Status code: {response.status_code}")
# Example usage
url = 'https://clutch.co/il/agencies/digital'
scrape_clutch_digital_agencies(url)
</code></pre>
<p>Well, to be frank, I struggle with the conditions; the site throws back the following
when I run this in Google Colab:</p>
<p>It throws this back in the developer console on Colab:</p>
<pre><code>NameError Traceback (most recent call last)
<ipython-input-1-cd8d48cf2638> in <cell line: 47>()
45 # Example usage
46 url = 'https://clutch.co/il/agencies/digital'
---> 47 scrape_clutch_digital_agencies(url)
<ipython-input-1-cd8d48cf2638> in scrape_clutch_digital_agencies(url)
13
14 # Check the robots.txt file
---> 15 robots_url = urljoin(url, '/robots.txt')
16 robots_response = session.get(robots_url, headers=headers)
17
NameError: name 'urljoin' is not defined
</code></pre>
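<p>As a side note on the immediate traceback: <code>urljoin</code> lives in the standard library's <code>urllib.parse</code>, so the <code>NameError</code> (which is independent of any robots.txt handling) goes away with one import. A minimal sketch:</p>

```python
from urllib.parse import urljoin

# Joining an absolute path against the page URL yields the site-root robots.txt
url = 'https://clutch.co/il/agencies/digital'
robots_url = urljoin(url, '/robots.txt')
print(robots_url)  # https://clutch.co/robots.txt
```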
<p>Well, I need more insight here. I am pretty sure I can get around the robots.txt impact; the site is of interest to many people, so I need to account for the things that affect my tiny bs4 script.</p>
<p><strong>update:</strong> dear Hedgehog, I tried to cope with the robots.txt, but it's hard; I can open a new thread if needed.</p>
<p>I have tried to deal with the robots.txt, and for that I used the Selenium approach:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
import time
def scrape_clutch_digital_agencies_with_selenium(url):
# Set up Chrome options for headless browsing
chrome_options = Options()
chrome_options.add_argument('--headless') # Run Chrome in headless mode
# Create a Chrome webdriver instance
driver = webdriver.Chrome(options=chrome_options)
# Visit the URL
driver.get(url)
# Wait for the JavaScript challenge to be completed (adjust sleep time if needed)
time.sleep(5)
# Get the page source after JavaScript has executed
page_source = driver.page_source
# Parse the HTML content of the page
soup = BeautifulSoup(page_source, 'html.parser')
# Find the elements containing agency names (adjust this based on the website structure)
agency_name_elements = soup.select('.company-info .company-name')
# Extract and print the agency names
agency_names = [element.get_text(strip=True) for element in agency_name_elements]
print("Digital Agencies in Israel:")
for name in agency_names:
print(name)
# Close the webdriver
driver.quit()
# Example usage
url = 'https://clutch.co/il/agencies/digital'
scrape_clutch_digital_agencies_with_selenium(url)
</code></pre>
|
<python><pandas><web-scraping><beautifulsoup><python-requests>
|
2024-01-21 19:36:51
| 1
| 1,223
|
zero
|
77,856,238
| 2,847,689
|
Passing a prompt via python to llama.cpp
|
<p>I have written the following code:</p>
<pre><code>import subprocess
import os
import mysql.connector
from dotenv import load_dotenv
def run_llama_cpp(prompt):
# Specify the absolute path of the executable and current directory
executable_path = 'D:\\Code\\llama_cpp\\w64devkit\\w64devkit.exe'
current_directory = 'D:\\Code\\llama_cpp\\llama.cpp'
model_path = 'D:\\Code\\llama_cpp\\llama.cpp\\models\\llama-2-7b.Q8_0.gguf'
# Debug: Print the current working directory
print("Current Working Directory:", os.getcwd())
# Change directory
os.chdir(current_directory)
print("Changed to Directory:", os.getcwd())
# Define the command
command = [
executable_path, # Use the absolute path of your executable
'./main',
'-ins',
'--color',
'-c', '1024',
'--temp', '0.7',
'--repeat_penalty', '1.1',
'-s', '42',
'-n', '-1',
'-m', model_path,
'-p', "'" , prompt , "'"
]
try:
print("Command: ", ' '.join(command))
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
stdout, stderr = process.communicate()
if stderr:
print("Standard Error Output:", stderr)
if process.returncode != 0:
print("Error running llama_cpp:", stderr)
return None
else:
title_start = stdout.find('[TITLE_START]') + len('[TITLE_START]')
title_end = stdout.find('[TITLE_END]')
description_start = stdout.find('[DESCRIPTION_START]') + len('[DESCRIPTION_START]')
description_end = stdout.find('[DESCRIPTION_END]')
title = stdout[title_start:title_end].strip()
description = stdout[description_start:description_end].strip()
return title, description
except FileNotFoundError as e:
print(f"File not found: {e}")
return None
except Exception as e:
print(f"An error occurred: {e}")
return None
load_dotenv()
db_connection = os.getenv('DB_CONNECTION')
db_host = os.getenv('DB_HOST')
db_port = int(os.getenv('DB_PORT'))
db_database = os.getenv('DB_DATABASE')
db_username = os.getenv('DB_USERNAME')
db_password = os.getenv('DB_PASSWORD')
connection = mysql.connector.connect(host=db_host, user=db_username, password=db_password, database=db_database, port=db_port)
cursor = connection.cursor(dictionary=True)
cursor.execute("SELECT * FROM posts")
prompts = cursor.fetchall()
cursor.close()
connection.close()
for prompt_entity in prompts:
str = prompt_entity['content']
content = "Generate headline for this content: '" + str + "'. Put the headline between [TITLE_START] and [TITLE_END]"
title, description = run_llama_cpp(content)
if title and description:
print("Content:", content)
print("Title:", title)
print("Description:", description)
print("####################################")
</code></pre>
<p>I am trying to pass my prompt directly via <code>w64devkit.exe</code>; however, when I run the command in a regular terminal, <code>w64devkit</code> opens its own terminal.</p>
<p><a href="https://i.sstatic.net/Nz4h3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Nz4h3.png" alt="enter image description here" /></a></p>
<p>How can I pass my prompt directly to llama.cpp and receive the answer?</p>
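<p>One thing worth noting about the command list (an observation, not a guaranteed fix for the w64devkit terminal behavior): with the list form of <code>subprocess</code>, every element becomes its own argv entry and no shell quoting is needed, so <code>'-p', "'", prompt, "'"</code> sends the two single quotes and the prompt as three separate arguments. A small stdlib sketch showing that a prompt containing spaces and quotes survives as one argument when passed as one list element:</p>

```python
import subprocess
import sys

prompt = "Generate a headline for this content: 'some text'"

# Each list element is passed to the child process as exactly one argv entry,
# so the prompt needs no surrounding quote characters.
cmd = [sys.executable, "-c", "import sys; print(sys.argv[1])", prompt]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout.strip() == prompt)  # True
```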
|
<python><llama>
|
2024-01-21 19:00:05
| 1
| 5,261
|
Carol.Kar
|
77,856,221
| 12,167,598
|
python peewee upsert only dictionary specified fields
|
<p>I am new to peewee (<code>v3.17</code>), used with sqlite (<code>v3.40</code>), and I am trying to write a function that updates or inserts data into my user table.
This is the model class:</p>
<pre class="lang-py prettyprint-override"><code>class MyUser(Model):
id = AutoField()
user_mail = TextField(unique=True)
username_a = TextField(null=True)
user_id_a = TextField(null=True)
username_b = TextField(null=True)
user_id_b = TextField(null=True)
class Meta:
table_name = 'my_user'
database = db
</code></pre>
<p>This is the function I am using to add or update my data. <a href="https://docs.peewee-orm.com/en/latest/peewee/querying.html#upsert" rel="nofollow noreferrer">Reference</a></p>
<pre class="lang-py prettyprint-override"><code>def upsert_user(**user_data):
"""Insert / update given data to user table"""
return (MyUser
.insert(**user_data)
.on_conflict(
conflict_target=[MyUser.user_mail],
preserve=[MyUser.username_a,
MyUser.user_id_a,
MyUser.username_b,
MyUser.user_id_b])
.execute())
</code></pre>
<p>These are my sample data.</p>
<pre class="lang-py prettyprint-override"><code>data0 = {
"user_mail": "user1@test.com",
"username_a": "user1_a",
"user_id_a": "a1",
"username_b": "user1_b",
"user_id_b": "b1"
}
upsert_user(**data0)
data1 = {
"user_mail": "user1@test.com",
"username_a": "user1_a_new",
"user_id_a": "a1_new"
}
upsert_user(**data1)
</code></pre>
<p><strong>I expected only the <code>username_a</code> and <code>user_id_a</code> columns to get updated, preserving the other columns,
but the other columns were set to null (the default value) after the function call.</strong></p>
<p>I've added all the other column names to <code>preserve</code> because I also need these cases to work:</p>
<pre class="lang-py prettyprint-override"><code>data2 = {
"user_mail": "user1@test.com",
"username_b": "user1_b_new",
"user_id_b": "b1_new"
}
# only update the mentioned columns
data3 = {
"user_mail": "user2@test.com",
"username_a": "user2_a",
"user_id_a": "a2"
}
# Add new row
</code></pre>
<p><strong>How can I update/insert only the columns mentioned in the dictionary?</strong></p>
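<p>In case it helps to see the target SQL behavior in isolation: as far as I understand peewee's docs, <code>preserve</code> means "use the value from the attempted insert", which would explain the nulls for omitted columns. The behavior the question asks for corresponds to an <code>ON CONFLICT DO UPDATE</code> whose update list is built only from the keys actually supplied. A sketch with the standard library's <code>sqlite3</code> (not peewee; table and column names mirror the model above):</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE my_user ("
    "user_mail TEXT PRIMARY KEY, username_a TEXT, user_id_a TEXT, "
    "username_b TEXT, user_id_b TEXT)"
)

def upsert_user(**user_data):
    # Build the DO UPDATE list only from the columns actually passed in,
    # so omitted columns keep their existing values.
    cols = ", ".join(user_data)
    marks = ", ".join("?" for _ in user_data)
    updates = ", ".join(f"{c} = excluded.{c}" for c in user_data if c != "user_mail")
    con.execute(
        f"INSERT INTO my_user ({cols}) VALUES ({marks}) "
        f"ON CONFLICT (user_mail) DO UPDATE SET {updates}",
        tuple(user_data.values()),
    )

upsert_user(user_mail="user1@test.com", username_a="user1_a", user_id_a="a1",
            username_b="user1_b", user_id_b="b1")
upsert_user(user_mail="user1@test.com", username_a="user1_a_new", user_id_a="a1_new")
row = con.execute(
    "SELECT username_a, user_id_a, username_b, user_id_b FROM my_user"
).fetchone()
print(row)  # ('user1_a_new', 'a1_new', 'user1_b', 'b1')
```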
|
<python><sqlite><peewee>
|
2024-01-21 18:55:20
| 1
| 1,473
|
Akshay Chandran
|
77,856,187
| 1,020,139
|
How can I add scopes to `security.HTTPBearer[]` for a route using FastAPI?
|
<p>How can I add scopes to <code>security.HTTPBearer[]</code> for the <code>GET /v1/health</code> route using FastAPI?</p>
<p>I have checked the docs for <code>HTTPBearer</code>, but it doesn't seem to support any way of adding scopes.</p>
<p>I would like <code>security.HTTPBearer[]</code> to be equal to e.g. <code>["read", "me"]</code>.</p>
<p>Thank you!</p>
<p><strong>Code</strong>:</p>
<pre><code>security = HTTPBearer()
@v1_router.get(
"/health",
tags=["healthcheck"],
summary="Perform a Health Check",
response_description="Return HTTP Status Code 200 (OK)",
status_code=status.HTTP_200_OK,
response_model=HealthCheck,
)
def get_health(credentials: Annotated[HTTPAuthorizationCredentials, Depends(security)]) -> HealthCheck:
return HealthCheck()
</code></pre>
<p><strong>JSON</strong>:</p>
<pre><code>"/v1/health": {
"get": {
"tags": [
"healthcheck"
],
"summary": "Perform a Health Check",
"operationId": "get_health_v1_health_get",
"responses": {
"200": {
"description": "Return HTTP Status Code 200 (OK)",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/HealthCheck"
}
}
}
}
},
"security": [
{
"HTTPBearer": []
}
]
}
}
</code></pre>
|
<python><fastapi>
|
2024-01-21 18:50:03
| 0
| 14,560
|
Shuzheng
|
77,855,843
| 9,016,861
|
Python/Bash: comparing local and remote docker images sha256 hashes
|
<p>The main goal: I have a very size-limited server and can't afford a <code>docker pull</code> of a new image version while the old layers are still on the server, so when a new image is available I remove the local one first. But I don't want to do that when the image wasn't updated, since pulling takes several minutes. I am trying to write a script that compares the remote and local sha256 hashes and acts based on whether they differ.</p>
<p><strong>1st part of question</strong>:</p>
<p>What I have in python:</p>
<pre><code>import docker
def get_local_image_sha256(image_name):
client = docker.from_env()
image = client.images.get(image_name)
return image.id
def get_remote_image_sha256(image_name):
client = docker.from_env()
try:
manifest = client.images.get(image_name).attrs['RepoDigests'][0]
# Extract SHA256 hash from the manifest
sha256_hash = manifest.split('@')[1]
return sha256_hash
except docker.errors.ImageNotFound:
print(f"Image '{image_name}' not found.")
return None
if __name__ == "__main__":
image_name = "cr.yandex/crp110tk8f32a48oaeqo/ecr-server:latest"
local_sha256 = get_local_image_sha256(image_name)
remote_sha256 = get_remote_image_sha256(image_name)
if local_sha256 and remote_sha256:
print(f"Local Image SHA256: {local_sha256}")
print(f"Remote Image SHA256: {remote_sha256}")
else:
print("Failed to obtain SHA256 hashes.")
</code></pre>
<p>It outputs:</p>
<pre><code>Local Image SHA256: sha256:3995accefa763b49e743afb5a72a43be7cb0eb1acd14475f40496c002c6063d7
Remote Image SHA256: sha256:d80f38450a7ca2785afa9f18d790d7ff878dc8897dfab3af14e97983ab1e329e
</code></pre>
<p>The latest image really is ...9e in the cloud, but I don't know where ...d7 came from. When I run <code>docker pull cr.yandex/crp110tk8f32a48oaeqo/ecr-server:latest</code> just after the script, it outputs:</p>
<pre><code>Pulling from crp110tk8f32a48oaeqo/ecr-server
Digest: sha256:d80f38450a7ca2785afa9f18d790d7ff878dc8897dfab3af14e97983ab1e329e
Status: Image is up to date for cr.yandex/crp110tk8f32a48oaeqo/ecr-server:latest
cr.yandex/crp110tk8f32a48oaeqo/ecr-server:latest
</code></pre>
<p>Why does Python say the hashes are different while <code>docker pull</code> says the image is up to date?</p>
<p><strong>2nd part</strong>:</p>
<p>Originally I tried it in bash with this script:</p>
<pre><code>LATEST_IMAGE_HASH_RAW=$(docker manifest inspect cr.yandex/crp110tk8f32a48oaeqo/ecr-server:latest -v | jq -r .Descriptor.digest)
IMAGE_HASH_RAW=$(docker inspect --format='{{index .RepoDigests 0}}' cr.yandex/crp110tk8f32a48oaeqo/ecr-server:latest)
IMAGE_HASH_RAW="${IMAGE_HASH_RAW#*@}"
if [[ "$IMAGE_HASH_RAW" == "$LATEST_IMAGE_HASH_RAW" ]]; then
echo "$IMAGE_HASH_RAW is latest hash, skipping updating"
else
echo "New hash is available, $LATEST_IMAGE_HASH_RAW, reinstalling (old one is $IMAGE_HASH_RAW)"
fi
</code></pre>
<p>And it managed to output:</p>
<pre><code>New hash is available, sha256:d80f38450a7ca2785afa9f18d790d7ff878dc8897dfab3af14e97983ab1e329e, old is sha256:d80f38450a7ca2785afa9f18d790d7ff878dc8897dfab3af14e97983ab1e329e, reinstalling
</code></pre>
<p>What's wrong with the string comparison in bash? The script seemed to handle the case when a new image is available, but it doesn't do what I want when it's not...</p>
<p><strong>3rd part:</strong></p>
<p>In the bash script I used <code>docker inspect --format='{{index .RepoDigests 0}}' cr.yandex/crp110tk8f32a48oaeqo/ecr-server:latest</code>, which outputs ...9e, but there is another suggestion, <code>docker inspect --format='{{.Id}}' cr.yandex/crp110tk8f32a48oaeqo/ecr-server:latest</code>, which outputs ...d7. Why are they different, and which one is the correct latest local hash?</p>
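<p>On the bash comparison (part 2): when two strings print identically but compare unequal, the usual culprit is an invisible character, e.g. a trailing carriage return or newline captured from a pipeline. A small sketch (assuming a POSIX shell plus <code>tr</code>; the hash value is illustrative) of normalizing both values before comparing:</p>

```shell
# A value as it might come back from a pipeline, with a stray trailing CR:
raw=$(printf 'sha256:d80f38450a7c\r')
clean='sha256:d80f38450a7c'

if [ "$raw" = "$clean" ]; then echo "equal before trimming"; else echo "not equal before trimming"; fi

# Strip all whitespace (including \r and \n) before comparing:
trimmed=$(printf '%s' "$raw" | tr -d '[:space:]')
if [ "$trimmed" = "$clean" ]; then echo "equal after trimming"; else echo "not equal after trimming"; fi
```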
|
<python><bash><docker><docker-pull>
|
2024-01-21 17:14:11
| 1
| 709
|
Jedi Knight
|
77,855,805
| 4,770,853
|
Multiprocessing in python won't keep log of errors in log file
|
<p>I've implemented multiprocessing in a data analysis I'm running, but sometimes there are errors, and instead of having the entire queue get killed I wanted them to just be ignored, so I implemented a try statement. But new errors keep cropping up; is there a more exhaustive list of errors I can include in the except statement? In general I'd just like errors to be logged and the code to move on.</p>
<p>Secondly, though, the errors are not getting logged in the log file I'm making, and I'm not sure why. This is an example of my code; I've replaced the analysis in the try statement with some simpler steps. It more or less does the same thing: it takes the data and writes an output to a csv for each dataset a process is running on (here represented with the dictionary of dataframes). NOTE: I have purposely introduced a KeyError by misnaming one of the columns in the second dataframe.</p>
<p><strong>Example code...</strong></p>
<pre><code>import pandas as pd
import numpy as np
import os
import logging
import traceback
from multiprocessing import Pool, Manager, Process
output_dir = ''
input_dir = 'df_folder'
# make our data
# Create a dict of 5 dataframes with 3 columns of ten entries each
df_dict = {i: pd.DataFrame(np.random.rand(10, 3), columns=['col1', 'col2', 'col3']) for i in range(5)}
# Introduce an error by changing a column name in one of the dataframes
df_dict[1].columns = ['col1', 'col2', 'wrong_col']
for key, df in df_dict.items():
file_name = f"{key}_df.csv"
file_path = os.path.join(input_dir, file_name)
df.to_csv(file_path, index=False)
# define functions for multiprocessing and error logging...
def listener_process(queue):
logging.basicConfig(filename='abi_detector_app.log', filemode='w', format='%(name)s - %(levelname)s - %(message)s')
while True:
message = queue.get()
if message == 'kill':
break
logging.error(message)
def example_process(df_id, queue):
df = pd.read_csv(f"{input_dir}/{df_id}_df.csv")
try:
for col in ['col1', 'col2', 'col3']:
mean = df[col].mean()
std = df[col].std()
result = pd.DataFrame({'mean': [mean], 'std': [std]}, index=[col])
result.to_csv(f'{output_dir}/df_{df_id}_{col}_stats.csv')
except (IndexError, KeyError) as e:
logging.error('Error in dataframe id: %s', df_id)
logging.error(traceback.format_exc())
manager = Manager()
queue = manager.Queue()
listener = Process(target=listener_process, args=(queue,))
listener.start()
pool_size = 5
df_list = df_dict.keys()
# run the processes with the specified number of cores
# new code which passes the error messages to the listener process
with Pool(pool_size) as p:
p.starmap(example_process, [(df_id, queue) for df_id in df_list])
queue.put('kill')
listener.join()
</code></pre>
<p><strong>NOTE</strong>: <code>df_dict</code> is a placeholder for what my script is actually doing. I edited the example so that the generated dataframes are written to a folder as csv's, and <code>example_process</code> then loads them. This is a better example of what's happening, because <code>df_dict</code> really does not need to be shared across processes; only the error log file does.</p>
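<p>For reference, the mechanism the standard library provides for exactly this listener pattern is <code>logging.handlers.QueueHandler</code> (Python 3.2+): each worker installs a <code>QueueHandler</code> pointing at the shared queue instead of calling <code>logging.error</code> against its own unconfigured root logger, and the listener pulls <code>LogRecord</code> objects off the queue. A minimal single-process sketch of the handler mechanics (the queue and logger names here are illustrative):</p>

```python
import logging
import logging.handlers
import queue

log_queue = queue.Queue()

# In a worker this would be the shared manager queue; the handler puts the
# record onto the queue instead of writing to a file directly.
worker_logger = logging.getLogger("worker")
worker_logger.setLevel(logging.ERROR)
worker_logger.addHandler(logging.handlers.QueueHandler(log_queue))

worker_logger.error("Error in dataframe id: %s", 1)

# The listener side: pull the record and hand it to the real logging config.
record = log_queue.get_nowait()
print(record.getMessage())  # Error in dataframe id: 1
```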
|
<python><error-handling><multiprocessing><python-multiprocessing>
|
2024-01-21 17:03:57
| 1
| 584
|
Angus Campbell
|
77,855,764
| 7,225,482
|
How to send empty arguments in a parametrized pytest test?
|
<p>I am using Pytest. I have a function with several arguments and I am using a parametrized test to check different combinations of those arguments. But I would like to be able to not have all arguments in all test cases. Is there a way that I can indicate in the tuples where I define the test cases that some arguments are empty?</p>
<p>Here is a minimal example. In the case below I would like to have another test where I pass only the first argument, e.g. 1, and the expected result 5 (1+4).</p>
<p>I have tried (1,,5) and (1,None,5) but none seems to work.</p>
<p>The example below is a very simplified case; here I could probably have a separate test function. But in a general case with lots of arguments, it would be quite helpful to be able to parametrize the function under test without passing each argument.</p>
<pre><code>import pytest
def foo(arg1, arg2=4):
return arg1+arg2
@pytest.mark.parametrize("first,second,expected_result",[(1,2,3),(2,3,5)])
def test(first,second,expected_result):
assert foo(first,second) == expected_result
</code></pre>
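<p>One pattern that may fit (a sketch, not the only option): use a module-level sentinel in the parameter tuples and filter it out before calling, so a case like <code>(1, MISSING, 5)</code> exercises the default value of <code>arg2</code>:</p>

```python
import pytest

def foo(arg1, arg2=4):
    return arg1 + arg2

MISSING = object()  # sentinel meaning "do not pass this argument"

@pytest.mark.parametrize("first,second,expected_result",
                         [(1, 2, 3), (2, 3, 5), (1, MISSING, 5)])
def test(first, second, expected_result):
    # Drop sentinel-marked arguments so foo's own defaults apply
    args = [a for a in (first, second) if a is not MISSING]
    assert foo(*args) == expected_result
```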
|
<python><testing><pytest>
|
2024-01-21 16:55:25
| 3
| 746
|
Alvaro Aguilar
|
77,855,742
| 1,145,744
|
How to convert safetensors model to onnx model?
|
<p>I want to convert a <code>model.safetensors</code> to ONNX, but unfortunately I haven't found enough information about the procedure. The documentation of the <code>safetensors</code> package isn't enough, and it isn't even clear how to get the original (PyTorch, in my case) model back, since when I try something like</p>
<pre><code>with st.safe_open(modelsafetensors, framework="pt") as mystf:
...
</code></pre>
<p>the <code>mystf</code> object has <code>get_tensor('sometensorname')</code>, but it doesn't seem to have any <code>get_model()</code> method or similar.</p>
|
<python><pytorch><onnx><huggingface><safe-tensors>
|
2024-01-21 16:47:45
| 1
| 12,435
|
AndreaF
|
77,855,706
| 10,200,497
|
How can I merge two dataframes based on range of dates?
|
<p>I have two DataFrames: <code>df1</code> and <code>df2</code></p>
<pre><code>import pandas as pd
df1 = pd.DataFrame(
{
'a': ['2024-01-01 04:00:00', '2023-02-02 20:00:00'],
'id':['a_1', 'a_2']
}
)
df2 = pd.DataFrame(
{
'a': [
'2024-01-01 4:00:00', '2024-01-01 05:00:00',
'2024-01-01 06:00:00', '2024-01-01 07:00:00',
'2024-01-01 08:00:00', '2024-01-01 09:00:00',
'2023-02-02 21:00:00', '2023-02-02 23:00:00',
]
}
)
</code></pre>
<p>And this is the expected output. I want to merge <code>id</code> from <code>df1</code> to <code>df2</code>:</p>
<pre><code> a id
0 2024-01-01 04:00:00 a_1
1 2024-01-01 05:00:00 a_1
2 2024-01-01 06:00:00 a_1
3 2024-01-01 07:00:00 a_1
4 2024-01-01 08:00:00 NaN
5 2024-01-01 09:00:00 NaN
6 2023-02-02 21:00:00 a_2
7 2023-02-02 23:00:00 a_2
</code></pre>
<p>If you are familiar with candlesticks, <code>df1.a</code> is a 4-hour candlestick. For example, <code>df1.a.iloc[0]</code> is:</p>
<p>2024-01-01 04:00:00</p>
<p>2024-01-01 05:00:00</p>
<p>2024-01-01 06:00:00</p>
<p>2024-01-01 07:00:00</p>
<p>Basically it is a range from <code>df1.a.iloc[0]</code> to <code>df1.a.iloc[0] + pd.Timedelta(hours=3)</code>. And I want to <code>merge</code> these ids by the range of hours that they cover.</p>
<p>This is my attempt but I don't know how to <code>merge</code> by range of dates:</p>
<pre><code>df1['a'] = pd.to_datetime(df1.a)
df2['a'] = pd.to_datetime(df2.a)
df1['b'] = df1.a + pd.Timedelta(hours=3)
</code></pre>
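<p>Since each <code>id</code> covers a closed time range, this looks like a job for <code>pd.merge_asof</code> with <code>direction='backward'</code> and a 3-hour <code>tolerance</code>. A sketch (it assumes both frames can be sorted on <code>a</code>; timestamps outside any candle's range come out as NaN):</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    'a': pd.to_datetime(['2024-01-01 04:00:00', '2023-02-02 20:00:00']),
    'id': ['a_1', 'a_2'],
})
df2 = pd.DataFrame({
    'a': pd.to_datetime([
        '2024-01-01 04:00:00', '2024-01-01 05:00:00', '2024-01-01 06:00:00',
        '2024-01-01 07:00:00', '2024-01-01 08:00:00', '2024-01-01 09:00:00',
        '2023-02-02 21:00:00', '2023-02-02 23:00:00',
    ]),
})

# merge_asof needs both sides sorted on the key; for each df2 timestamp it takes
# the most recent candle open, and the tolerance drops matches more than 3h back.
left = df2.sort_values('a')
out = pd.merge_asof(left, df1.sort_values('a'), on='a',
                    direction='backward', tolerance=pd.Timedelta(hours=3))
out.index = left.index   # merge_asof returns a fresh index; restore df2's order
out = out.sort_index()
print(out['id'].tolist())
```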
|
<python><pandas><dataframe>
|
2024-01-21 16:37:32
| 4
| 2,679
|
AmirX
|