| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,692,222
| 2,404,988
|
Is it possible to change the return value of a function with pdb?
|
<p>Let's say I have the following code :</p>
<pre class="lang-py prettyprint-override"><code>def returns_false():
breakpoint()
return False
assert(returns_false())
print("Hello world")
</code></pre>
<p>Is there a sequence of pdb commands that will print "Hello world" without triggering an AssertionError first?</p>
<p>I can't modify a single character of this source file; I'm only looking for what I can achieve while it's already running.</p>
<p>What I tried:</p>
<ul>
<li><code>return True</code></li>
<li><code>s</code> to get in return mode, and then either <code>retval=True</code> or <code>locals()['__return__']=True</code></li>
<li><code>interact</code> and then <code>return True</code> but that throws an exception</li>
</ul>
<p>But none of that changes the actual return value.</p>
|
<python><pdb>
|
2024-07-01 12:25:55
| 5
| 8,056
|
C4stor
|
78,692,165
| 5,295,755
|
Send list of lists of floats in request body
|
<p>I have defined the following endpoint using FastAPI:</p>
<pre><code>@app.post(
"/my_endpoint",
)
async def my_endpoint(
my_matrix: list[list[list[float]]],
):
print(my_matrix)
</code></pre>
<p>However, I cannot manage to send a request using either the Swagger UI or a Python requests call:</p>
<pre><code>client = TestClient(app)
headers = {"accept": "application/json"}
params = {}
files = [
("my_matrix", (None, '[[[0.01, 0.32],[0.291, 0.239]]]')),
]
response = client.post(
"/my_endpoint",
params=params,
files=files,
headers=headers,
)
</code></pre>
<p>This gives me the following error:</p>
<pre><code>[
{
'input': '[[[0.01, 0.32],[0.291, 0.239]]]',
'loc': [
'body',
'my_matrix',
0
],
'msg': 'Input should be a valid list',
'type': 'list_type',
'url': 'https://errors.pydantic.dev/2.5/v/list_type'
}
]
</code></pre>
<p>I know I can send the data as JSON, but I'd like to know whether there is a way to keep using Pydantic model validation while sending it through the body.</p>
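<p>For context, what the endpoint receives from the multipart request above is the raw string, not a list, which is exactly what the Pydantic error reports. A minimal stdlib sketch of the difference:</p>

```python
import json

# The multipart field delivers this raw string to the server; the
# list[list[list[float]]] validator sees a str, hence "Input should be
# a valid list". Decoding it first yields the nested structure the
# annotation expects.
raw = '[[[0.01, 0.32],[0.291, 0.239]]]'
matrix = json.loads(raw)

print(type(matrix).__name__)                           # list
print(len(matrix), len(matrix[0]), len(matrix[0][0]))  # 1 2 2
```

<p>Sending <code>matrix</code> as the JSON request body (rather than a form field) hands Pydantic the structure directly.</p>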
|
<python><python-requests><swagger><fastapi>
|
2024-07-01 12:13:53
| 1
| 849
|
Aloïs de La Comble
|
78,692,083
| 521,070
|
How to run an executable Python script packaged in a wheel?
|
<p>Suppose I've got a "wheel" file that contains executable python script <code>myscript.py</code> and a few python modules it imports.</p>
<p>Now I want to run <code>myscript.py</code> on a host with python virtual environment "xyz".<br />
So I install the "wheel" file in this virtual environment and then run the script like this:</p>
<pre><code>(xyz)> python ~/xyz/mypackage/mysubpackage/myscript.py
</code></pre>
<p>Is it good practice? Is it better to run it as <code>(xyz)> python myscript.py</code>? How would I do that?</p>
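<p>One common alternative (a sketch, assuming a pyproject-based build of the package; the module and function names below are the hypothetical ones from the question) is to declare the script as a console-script entry point, so installing the wheel puts a <code>myscript</code> command directly on the environment's PATH:</p>

```toml
# pyproject.toml fragment of the package that builds the wheel
[project.scripts]
myscript = "mypackage.mysubpackage.myscript:main"
```

<p>This assumes <code>myscript.py</code> wraps its top-level logic in a <code>main()</code> function. Alternatively, an installed module can be run by import path from any directory with <code>python -m mypackage.mysubpackage.myscript</code>, which avoids hard-coding the file's location on disk.</p>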
|
<python><execution><python-wheel>
|
2024-07-01 11:55:53
| 0
| 42,246
|
Michael
|
78,691,957
| 487,993
|
Parse temperature string in pint
|
<p>I was trying to parse a string containing a temperature, and pint fails due to the non-multiplicative nature of the unit. However, it seems the internal conversion should handle this:</p>
<pre class="lang-py prettyprint-override"><code>from pint import Quantity
Quantity("6degC")
</code></pre>
<p>Produces</p>
<pre><code>---------------------------------------------------------------------------
OffsetUnitCalculusError Traceback (most recent call last)
Cell In[6], line 1
----> 1 Quantity("6degC")
File ~/virtual_enviroments/cfd-2d/lib/python3.10/site-packages/pint/facets/plain/quantity.py:199, in PlainQuantity.__new__(cls, value, units)
197 if units is None and isinstance(value, str):
198 ureg = SharedRegistryObject.__new__(cls)._REGISTRY
--> 199 inst = ureg.parse_expression(value)
200 return cls.__new__(cls, inst)
202 if units is None and isinstance(value, cls):
File ~/virtual_enviroments/cfd-2d/lib/python3.10/site-packages/pint/facets/plain/registry.py:1398, in GenericPlainRegistry.parse_expression(self, input_string, case_sensitive, **values)
1395 def _define_op(s: str):
1396 return self._eval_token(s, case_sensitive=case_sensitive, **values)
-> 1398 return build_eval_tree(gen).evaluate(_define_op)
File ~/virtual_enviroments/cfd-2d/lib/python3.10/site-packages/pint/pint_eval.py:382, in EvalTreeNode.evaluate(self, define_op, bin_op, un_op)
379 if op_text not in bin_op:
380 raise DefinitionSyntaxError(f"missing binary operator '{op_text}'")
--> 382 return bin_op[op_text](
383 self.left.evaluate(define_op, bin_op, un_op),
384 self.right.evaluate(define_op, bin_op, un_op),
385 )
386 elif self.operator:
387 assert isinstance(self.left, EvalTreeNode), "self.left not EvalTreeNode (4)"
File ~/virtual_enviroments/cfd-2d/lib/python3.10/site-packages/pint/facets/plain/quantity.py:1018, in PlainQuantity.__mul__(self, other)
1017 def __mul__(self, other):
-> 1018 return self._mul_div(other, operator.mul)
File ~/virtual_enviroments/cfd-2d/lib/python3.10/site-packages/pint/facets/plain/quantity.py:101, in check_implemented.<locals>.wrapped(self, *args, **kwargs)
99 elif isinstance(other, list) and other and isinstance(other[0], type(self)):
100 return NotImplemented
--> 101 return f(self, *args, **kwargs)
File ~/virtual_enviroments/cfd-2d/lib/python3.10/site-packages/pint/facets/plain/quantity.py:75, in ireduce_dimensions.<locals>.wrapped(self, *args, **kwargs)
74 def wrapped(self, *args, **kwargs):
---> 75 result = f(self, *args, **kwargs)
76 try:
77 if result._REGISTRY.autoconvert_to_preferred:
File ~/virtual_enviroments/cfd-2d/lib/python3.10/site-packages/pint/facets/plain/quantity.py:966, in PlainQuantity._mul_div(self, other, magnitude_op, units_op)
964 if not self._check(other):
965 if not self._ok_for_muldiv(no_offset_units_self):
--> 966 raise OffsetUnitCalculusError(self._units, getattr(other, "units", ""))
967 if len(offset_units_self) == 1:
968 if self._units[offset_units_self[0]] != 1 or magnitude_op not in (
969 operator.mul,
970 operator.imul,
971 ):
OffsetUnitCalculusError: Ambiguous operation with offset unit (degree_Celsius). See https://pint.readthedocs.io/en/stable/user/nonmult.html for guidance.
</code></pre>
<p>However, this works:</p>
<pre class="lang-py prettyprint-override"><code>Quantity(6, "degC")
<Quantity(6, 'degree_Celsius')>
</code></pre>
<p>So this should be the parsing path taken internally. Or am I seeing this wrongly?</p>
|
<python><pint>
|
2024-07-01 11:28:21
| 1
| 801
|
JuanPi
|
78,691,826
| 1,789,718
|
chaquopy and android: interactive dialog
|
<p>I have been trying to implement this functionality for a while, but all my approaches failed.</p>
<p>I would like to have some python code opening a dialog in the android app and wait for the user to click on the ok button (this is the first step I want to complete before I can create more complex YES/NO dialogs).</p>
<p>So far, no matter what I try (signals, sockets, shared variables), the only behavior I can obtain is a non-blocking dialog; it seems the dialog is not shown until the Python code has finished executing.</p>
<p>Here is my example, which uses a global variable to confirm the user has dismissed the dialog:</p>
<p>Python</p>
<pre class="lang-py prettyprint-override"><code>from java import dynamic_proxy, jboolean, jvoid, Override, static_proxy
from java.lang import Runnable
from com.chaquo.python import Python
from android.app import AlertDialog
from android.content import DialogInterface
import threading
state_done = False
def open_dialog(activity):
def show_dialog():
print('Dialog shown')
builder = AlertDialog.Builder(activity)
builder.setTitle("Title")
builder.setMessage("This is a simple dialog from Python!")
class Listener(dynamic_proxy(DialogInterface.OnClickListener)):
def onClick(self, dialog, which):
print("OK button pressed")
state_done = True
listener = Listener()
builder.setPositiveButton("OK", listener)
dialog = builder.create()
dialog.show()
print('Dialog shown')
class R(dynamic_proxy(Runnable)):
def run(self):
show_dialog()
def dummy():
activity.runOnUiThread(R())
dialog_thread = threading.Thread(target=dummy)
dialog_thread.start()
dialog_thread.join()
while not state_done:
pass
print('done')
</code></pre>
<p>Java</p>
<pre class="lang-java prettyprint-override"><code>if (!Python.isStarted()) {
Python.start(new AndroidPlatform(this));
}
Python py = Python.getInstance();
PyObject pyObject = py.getModule("your_script");
pyObject.callAttr("open_dialog", this);
</code></pre>
<p>A few notes: with this code, no print is ever executed; it is as if <code>show_dialog</code> is never called in the first place, so nothing is displayed. If I remove the <code>while</code> loop, all the prints are executed in the following order:</p>
<pre><code>I/python.stdout: done
I/python.stdout: show_dialog
I/python.stdout: Dialog shown
*here i press the ok button*
I/python.stdout: OK button pressed
I/python.stdout: dismissed
</code></pre>
<p>Is there a way to create a blocking dialog that interfaces with Python directly? Indirect solutions are welcome as well, but even with Java callbacks I get the same behavior.</p>
<p>SOLUTION by @mhsmith:</p>
<pre class="lang-java prettyprint-override"><code> if (!Python.isStarted()) {
Python.start(new AndroidPlatform(this));
}
MainActivity activity = this;
Runnable runnable = new Runnable() {
@Override
public void run() {
// Your code here
Python py = Python.getInstance();
PyObject pyObject = py.getModule("your_script");
pyObject.callAttr("open_dialog", activity);
}
};
Thread thread = new Thread(runnable);
thread.start();
</code></pre>
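<p>In platform-neutral terms, the pattern in the quoted solution is: keep the UI thread free and let the Python side block on an event that the button callback sets. A sketch (here <code>threading.Timer</code> stands in for the user pressing OK):</p>

```python
import threading

results = []
done = threading.Event()

def on_ok_clicked():
    # Stand-in for the OK-button listener running on the UI thread.
    done.set()

def python_side():
    # Runs on a background thread, so blocking here does not freeze the UI.
    threading.Timer(0.1, on_ok_clicked).start()  # simulates the click
    done.wait()                                  # blocks until OK is pressed
    results.append("done")

worker = threading.Thread(target=python_side)
worker.start()
worker.join()
print(results[0])  # done
```

<p><code>Event.wait()</code> also avoids the busy-wait <code>while</code> loop, which burns CPU and, when run on the main thread, prevents the dialog from ever being drawn.</p>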
|
<python><java><chaquopy>
|
2024-07-01 10:58:53
| 1
| 1,360
|
Luca
|
78,691,625
| 14,084,653
|
From Excel to JSON convert an empty string to null using Python
|
<p>I'm reading an Excel sheet with pandas' <code>read_excel</code>, which gives me a dataframe, and then using <code>df.to_json(...)</code> to convert the dataframe into a JSON string. However, when there are no values in an Excel cell, it comes through as an empty string "" in the JSON. How can I ensure that empty cells are converted to JSON null values?</p>
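<p>A hedged sketch of the usual approach, assuming the empty cells arrive as <code>""</code> after <code>read_excel</code>: replace them with <code>NaN</code> before serializing, since <code>to_json</code> writes missing values as <code>null</code>:</p>

```python
import numpy as np
import pandas as pd

# Stand-in for the dataframe produced by read_excel.
df = pd.DataFrame({"name": ["Alice", ""], "city": ["", "Paris"]})

# Empty strings become NaN, which to_json serializes as null.
cleaned = df.replace("", np.nan)
print(cleaned.to_json(orient="records"))
# [{"name":"Alice","city":null},{"name":null,"city":"Paris"}]
```

<p>Replacing with <code>np.nan</code> rather than <code>None</code> sidesteps the historical <code>replace(..., value=None)</code> fill-method gotcha in older pandas versions.</p>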
|
<python><pandas>
|
2024-07-01 10:18:27
| 2
| 779
|
samsal77
|
78,691,616
| 2,163,392
|
Cannot extract the penultimate layer output of a vision transformer with PyTorch
|
<p>I have the following model that I tuned with my own dataset trained with DataParallel:</p>
<pre><code>model = timm.create_model('vit_base_patch16_224', pretrained=False)
model.head = nn.Sequential(nn.Linear(768, 512),nn.ReLU(),nn.BatchNorm1d(512),nn.Dropout(p=0.2),nn.Linear(512, 141))
checkpoint = torch.load('vit_b_16v3.pth')
checkpoint = {k.partition('module.')[2]: v for k, v in checkpoint.items()}
# Load parameters
model.load_state_dict(checkpoint)
</code></pre>
<p>However, I have no idea how to get the penultimate layer output of such a vision transformer. I tried <a href="https://www.kaggle.com/code/mohammaddehghan/pytorch-extracting-intermediate-layer-outputs" rel="nofollow noreferrer">this tutorial</a> but it is not working. I only want to input an image and get a 512-D vector describing it. With TensorFlow it is a piece of cake, but in PyTorch I am struggling.</p>
<p>My last layers are as follows:</p>
<pre><code>(norm): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(fc_norm): Identity()
(head_drop): Dropout(p=0.0, inplace=False)
(head): Sequential(
(0): Linear(in_features=768, out_features=512, bias=True)
(1): ReLU()
(2): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): Dropout(p=0.2, inplace=False)
(4): Linear(in_features=512, out_features=141, bias=True)
)
)
</code></pre>
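<p>The standard PyTorch mechanism for grabbing an intermediate activation is a forward hook. A sketch using a stand-in head shaped like the one listed above (the real model would come from timm):</p>

```python
import torch
import torch.nn as nn

# Stand-in head with the same shape as the question's (768 -> 512 -> 141).
head = nn.Sequential(
    nn.Linear(768, 512), nn.ReLU(), nn.BatchNorm1d(512),
    nn.Dropout(p=0.2), nn.Linear(512, 141),
)

features = {}

def save_penultimate(module, inputs, output):
    # head[3] is the Dropout just before the final Linear, so its output is
    # the 512-D penultimate representation.
    features["penultimate"] = output.detach()

head[3].register_forward_hook(save_penultimate)
head.eval()                    # Dropout/BatchNorm in inference mode
_ = head(torch.randn(2, 768))  # stands in for the ViT backbone features
print(features["penultimate"].shape)  # torch.Size([2, 512])
```

<p>On the real model the hook would be registered on <code>model.head[3]</code>, and each forward pass then leaves the 512-D vector in <code>features["penultimate"]</code>.</p>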
|
<python><machine-learning><pytorch><vision-transformer>
|
2024-07-01 10:16:39
| 1
| 2,799
|
mad
|
78,691,574
| 18,769,241
|
Roboflow Vs. Darknet for generating weight file and creating the model
|
<p>I have a YOLOv8-format data file containing manually annotated images.
What is the most effective and straightforward way of generating a model and thereby yielding the weights file? Is it using <code>darknet</code> through the command:</p>
<pre><code>darknet.exe detector train data/obj.data yolo-obj.cfg backup\yolo-obj_2000.weights
</code></pre>
<p>then something like the following to generate the associate model:</p>
<pre><code>python tools/model_converter/convert.py cfg/yolov3.cfg weights/yolov3.weights weights/yolov3.h5
</code></pre>
<p>Or using <code>Roboflow</code> through:</p>
<pre><code>version.deploy(model_type="yolov8", model_path=f”{HOME}/runs/detect/train/”)
</code></pre>
<p>It seems to me that darknet is more difficult to install.</p>
|
<python><machine-learning><yolo><yolov8><darknet>
|
2024-07-01 10:07:36
| 1
| 571
|
Sam
|
78,691,539
| 1,617,563
|
Prevent pdoc from leaking environment variables of pydantic-settings
|
<p>I use <code>pydantic-settings</code> to collect configuration parameters from a variety of sources (config.yaml, .env, environment variables, ...). The settings object is stored as a module variable in a similar fashion to how it is shown for FastAPI <a href="https://fastapi.tiangolo.com/advanced/settings/#create-the-settings-object" rel="nofollow noreferrer">here</a>:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic_settings import BaseSettings
class Settings(BaseSettings):
app_name: str = "Awesome API"
admin_email: str
items_per_user: int = 50
settings = Settings()
</code></pre>
<p>The only difference is that the label is capitalised and is imported by other modules (<code>from my_app.config import SETTINGS</code>). This works quite well. However, when I use <code>pdoc</code> to generate documentation it will use <code>__repr__</code>/<code>__str__</code> to document the value of <code>SETTINGS</code>:</p>
<pre><code>SETTINGS: Settings =
Settings(api=APIConfig(token=SecretStr('**********'), endpoint=SecretStr('**********')), logging=LoggingConfig(level=<LogLevel.DEBUG: 'debug'>), uid=501, device_info=None)
</code></pre>
<p>This leaks environment information and in the worst case even credentials, urls or session tokens that have been typed with <code>str</code> instead of <code>SecretStr</code>. I don't want to override <code>__repr__</code> as it might be useful to log settings during runtime. However, for documentation purposes <code>SETTINGS: Settings</code> without a value should be sufficient as the actual value depends on the user's environment. <code>SETTINGS</code> should be documented though, so readers know it exists and can be accessed.</p>
<p>Is there a way to tell pdoc or pydantic(-settings) to not document the module variable's value? Or is there a better pattern to load and share settings across modules?</p>
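<p>A related pattern from the FastAPI settings page linked above is to expose the settings through a cached accessor instead of a module-level instance, so a documentation tool only ever sees the function signature, never the <code>repr</code> of a live object. A sketch with a stdlib stand-in for <code>BaseSettings</code>:</p>

```python
import os
from dataclasses import dataclass, field
from functools import lru_cache

@dataclass
class Settings:  # stand-in for the pydantic-settings BaseSettings subclass
    app_name: str = "Awesome API"
    admin_email: str = field(
        default_factory=lambda: os.environ.get("ADMIN_EMAIL", "admin@example.com")
    )

@lru_cache(maxsize=1)
def get_settings() -> Settings:
    """Build the settings on first use and share the instance afterwards."""
    return Settings()

# Callers import get_settings instead of a SETTINGS module variable.
assert get_settings() is get_settings()
```

<p>Nothing environment-dependent is evaluated at import time, so generated docs show only the function, while every caller still shares one instance.</p>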
|
<python><pydantic><pdoc>
|
2024-07-01 10:00:00
| 1
| 2,313
|
aleneum
|
78,691,107
| 5,266,998
|
Regex to substitute the next two words after a matching point
|
<p>I'm writing a regex to substitute at most the next two words after the matching point.</p>
<p>Expected prefixes: dr, doctor, pr, professor</p>
<p><strong>Sample text:</strong></p>
<blockquote>
<p>Examination carried out in agreement with and in the presence of Dr
John Doe (rhythmologist).</p>
</blockquote>
<p><strong>Expected outcome:</strong></p>
<blockquote>
<p>Examination carried out in agreement with and in the presence of Dr
[DOCTOR_NAME] (rhythmologist).</p>
</blockquote>
<p>Here is my current Regex:</p>
<pre><code>(\s|^|^(.*)|\()(dr|doctor|pr|professor)(\s|[.])(\s*([A-Z]\w+)){0,2}
</code></pre>
<p><strong>However, it doesn't account for an ending parenthesis</strong>, as shown in the following image:
<a href="https://i.sstatic.net/vTTH5who.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vTTH5who.png" alt="enter image description here" /></a></p>
<p>I'd really appreciate any help improving the regex. Thank you!</p>
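<p>For comparison, a hedged sketch of one possible pattern (hypothetical, not guaranteed to cover every case) that keeps the honorific, replaces up to two following capitalized words, and leaves a trailing parenthesis untouched:</p>

```python
import re

text = ("Examination carried out in agreement with and in the presence of "
        "Dr John Doe (rhythmologist).")

# (?i:...) restricts case-insensitivity to the honorific, so the name part
# still requires capitalized words; \w+ never consumes "(" or ")".
pattern = re.compile(
    r"\b((?i:dr|doctor|pr|professor))[.\s]\s*[A-Z]\w+(?:\s+[A-Z]\w+)?"
)
print(pattern.sub(r"\1 [DOCTOR_NAME]", text))
# ... in the presence of Dr [DOCTOR_NAME] (rhythmologist).
```

<p>Because the name part can never match <code>(</code> or <code>)</code>, the closing parenthesis is left in place rather than being swallowed by the substitution.</p>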
|
<python><python-3.x><regex>
|
2024-07-01 08:16:08
| 2
| 2,607
|
Januka samaranyake
|
78,691,025
| 6,618,289
|
Parsing .targets file with ElementTree does not find specified tag
|
<p>I am trying to use ElementTree to parse information out of a .targets file from a NuGet package.
What I am trying to find is the tag <code><AdditionalIncludeDirectories></code>. I am using the following python code to try to get it:</p>
<pre><code>import xml.etree.ElementTree as ET
tree = ET.parse(targets_file_path)
config_root = tree.getroot()
namespace = config_root.tag.replace("Project", "")
to_find = f"{namespace}AdditionalIncludeDirectories"
include_dirs = config_root.findall(to_find)
include_directory_list = []
for p in include_dirs:
include_directory_list.append(p)
</code></pre>
<p>This is essentially the first example from the <a href="https://docs.python.org/3/library/xml.etree.elementtree.html#parsing-xml-with-namespaces" rel="nofollow noreferrer">Python documentation</a>. The list <code>include_directory_list</code>, however, is empty after I execute this snippet.</p>
<p>When I print every single tag, however, I get the following output:</p>
<pre><code>depth = 0
item_list = []
for it in config_root:
item_list.append([depth, it])
while len(item_list) > 0:
d, it = item_list.pop(0)
print(f"depth {d}: {it.tag}")
for subit in it:
item_list.append([d+1, subit])
</code></pre>
<p>yields:</p>
<pre><code>depth 0: {http://schemas.microsoft.com/developer/msbuild/2003}PropertyGroup
depth 0: {http://schemas.microsoft.com/developer/msbuild/2003}ItemDefinitionGroup
depth 0: {http://schemas.microsoft.com/developer/msbuild/2003}ItemDefinitionGroup
depth 1: {http://schemas.microsoft.com/developer/msbuild/2003}ClCompile
depth 1: {http://schemas.microsoft.com/developer/msbuild/2003}Link
depth 1: {http://schemas.microsoft.com/developer/msbuild/2003}Link
depth 2: {http://schemas.microsoft.com/developer/msbuild/2003}AdditionalIncludeDirectories
depth 2: {http://schemas.microsoft.com/developer/msbuild/2003}AdditionalDependencies
depth 2: {http://schemas.microsoft.com/developer/msbuild/2003}AdditionalLibraryDirectories
depth 2: {http://schemas.microsoft.com/developer/msbuild/2003}AdditionalDependencies
depth 2: {http://schemas.microsoft.com/developer/msbuild/2003}AdditionalLibraryDirectories
</code></pre>
<p>Which is exactly the output I expected. The first item with depth=2 is the one I am looking for and the name is seemingly the one I am inputting to <code>findall()</code>. I tried <code>iterfind()</code> as well, but that also does not give me the correct output. There is also the option of passing a namespace dictionary, which I have also tried with the same result.
I have the feeling that I am either seeing a bug or missing something very obvious.</p>
<p>Any help would be greatly appreciated!</p>
<hr />
<p>PS:
The XML looks like this:</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemDefinitionGroup>
<ClCompile>
<AdditionalIncludeDirectories>$(MSBuildThisFileDirectory)native\include\;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
</ClCompile>
</ItemDefinitionGroup>
<ItemDefinitionGroup>
<Link Condition="'$(Configuration)'=='Debug'">
<AdditionalDependencies>mylib_d.lib;%(AdditionalDependencies)</AdditionalDependencies>
<AdditionalLibraryDirectories>$(MSBuildThisFileDirectory)native\lib;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
</Link>
<Link Condition="'$(Configuration)'=='Release'">
<AdditionalDependencies>mylib.lib;%(AdditionalDependencies)</AdditionalDependencies>
<AdditionalLibraryDirectories>$(MSBuildThisFileDirectory)native\lib;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
</Link>
</ItemDefinitionGroup>
</Project>
</code></pre>
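<p>A likely explanation, sketched below: <code>findall()</code> with a plain tag path only matches direct children of the element it is called on, and <code>AdditionalIncludeDirectories</code> sits two levels below the root; prefixing the path with <code>.//</code> searches all descendants instead:</p>

```python
import xml.etree.ElementTree as ET

# Trimmed-down version of the .targets file from the question.
xml_text = """<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemDefinitionGroup>
    <ClCompile>
      <AdditionalIncludeDirectories>native/include</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>
</Project>"""

root = ET.fromstring(xml_text)
ns = "{http://schemas.microsoft.com/developer/msbuild/2003}"

# Direct-children search (what the question's code does): finds nothing,
# because the element is two levels down.
print(root.findall(f"{ns}AdditionalIncludeDirectories"))   # []

# Descendant search with the ".//" prefix: finds it.
print([e.text for e in root.findall(f".//{ns}AdditionalIncludeDirectories")])
# ['native/include']
```
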
|
<python><xml><msbuild><elementtree>
|
2024-07-01 07:54:12
| 1
| 925
|
meetaig
|
78,690,334
| 16,405,935
|
How can I load copied Libraries to Anaconda packages
|
<p>Because the company does not allow Internet connections from internal computers, I installed the libraries on my personal laptop and copied them to the company computer, but they do not appear in Anaconda's package list. In my case it is the dash module.</p>
<p><a href="https://i.sstatic.net/Hii6nUOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hii6nUOy.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/yrZBOpZ0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrZBOpZ0.png" alt="enter image description here" /></a></p>
<p>What should I do to make it appear in Anaconda's package list? Thank you.</p>
|
<python><anaconda><libraries>
|
2024-07-01 03:16:47
| 1
| 1,793
|
hoa tran
|
78,690,229
| 19,366,064
|
How to mock sqlalchemy's async session?
|
<p>Code snippet</p>
<pre><code>from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine
engine = create_async_engine(url="")
session_maker = async_sessionmaker(engine)
</code></pre>
<p>How can I mock the session maker so that the following executes without any error:</p>
<pre><code>async with session_maker() as session:
</code></pre>
<p>I tried the following, and it doesn't work:</p>
<pre><code>from unittest.mock import AsyncMock
session_maker.return_val = AsyncMock()
</code></pre>
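<p>For reference, a minimal sketch of mocking an object whose call result is used with <code>async with</code> (no sqlalchemy needed; since Python 3.8, <code>MagicMock</code> configures <code>__aenter__</code>/<code>__aexit__</code> automatically). Note also that the Mock attribute is spelled <code>return_value</code>, not <code>return_val</code>:</p>

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

# Stand-in for async_sessionmaker(engine): calling it must return an
# async context manager.
session_maker = MagicMock()
mock_session = AsyncMock()
session_maker.return_value.__aenter__.return_value = mock_session

async def use_session():
    async with session_maker() as session:   # the line under test
        return session

assert asyncio.run(use_session()) is mock_session
```

<p>The key point is that <code>session_maker()</code> is a plain (synchronous) call; only what comes back from it needs the async context-manager protocol.</p>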
|
<python><testing><sqlalchemy><pytest><python-unittest>
|
2024-07-01 02:02:45
| 2
| 544
|
Michael Xia
|
78,690,213
| 1,033,217
|
How to Enumerate Threads using CreateToolhelp32Snapshot and Python ctypes?
|
<p>This seems like it should print the thread ID of the first thread in the snapshot, but it always prints <code>0</code>. What is wrong with it?</p>
<p>The following assumes that process ID <code>1234</code> is a real, running process.</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
from ctypes import wintypes
pid = 1234
TH32CS_SNAPTHREAD = 0x00000004
kernel32 = ctypes.windll.kernel32
class THREADENTRY32(ctypes.Structure):
_fields_ = [
('dwSize', wintypes.DWORD),
('cntUsage', wintypes.DWORD),
('th32ThreadID', wintypes.DWORD),
('th32OwnerProcessID', wintypes.DWORD),
('tpBasePri', wintypes.LONG),
('tpDeltaPri', wintypes.LONG),
('dwFlags', wintypes.DWORD)
]
CreateToolhelp32Snapshot = kernel32.CreateToolhelp32Snapshot
CreateToolhelp32Snapshot.argtypes = (wintypes.DWORD, wintypes.DWORD, )
CreateToolhelp32Snapshot.restype = wintypes.HANDLE
h_snapshot = kernel32.CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, pid)
LPTHREADENTRY32 = ctypes.POINTER(THREADENTRY32)
Thread32First = kernel32.Thread32First
Thread32First.argtypes = (wintypes.HANDLE, LPTHREADENTRY32, )
Thread32First.restype = wintypes.BOOL
thread_entry = THREADENTRY32()
thread_entry.dwSize = ctypes.sizeof(THREADENTRY32)
if kernel32.Thread32First(h_snapshot, ctypes.byref(thread_entry)):
print(thread_entry.th32ThreadID)
</code></pre>
|
<python><python-3.x><windows><ctypes><kernel32>
|
2024-07-01 01:46:18
| 1
| 795
|
Utkonos
|
78,690,204
| 6,407,935
|
How to Fix "TypeError: getattr(): attribute name must be string" when multiple optimizers are GridSearched for GaussianProcessRegressor?
|
<p>Here is my script to predict targets on the final date of a time-series dataset. I am trying to incorporate a <code>GaussianProcessRegressor</code> model and find the best hyperparameters using <code>GridSearchCV</code>. (Note that some of the code, including most of the constants used, is not shown here to avoid clutter.)</p>
<p><strong>HelperFunctions.py:</strong></p>
<pre><code>skf = StratifiedKFold(n_splits=17, shuffle=True, random_state=4)
def randomized_search(
model,
distribution,
X_train,
X_validation,
y_train,
y_validation,
) -> None:
try:
randomized_search = RandomizedSearchCV(
model,
distribution,
cv=skf,
return_train_score=True,
n_jobs=-1,
scoring="neg_mean_squared_error",
n_iter=100,
)
try:
search = randomized_search.fit(X_train, y_train)
print(
"Best estimator:\n{} \
\nBest parameters:\n{} \
\nBest cross-validation score: {:.3f} \
\nBest test score: {:.3f}\n\n".format(
search.best_estimator_,
search.best_params_,
-1 * search.best_score_,
-1 * search.score(X_validation, y_validation),
)
)
except Exception:
print("'randomized_search.fit' NOT successful!")
print(traceback.format_exc())
raise
else:
print("'randomized_search.fit' Successful!")
except Exception:
print("'randomized_search' NOT successful!")
print(traceback.format_exc())
raise
else:
print("'randomized_search' successful!")
def doRandomizedSearch(
model,
distribution,
feat_train,
feat_validation,
tgt_train,
tgt_validation,
):
try:
randomized_search(
model,
distribution,
feat_train,
feat_validation,
tgt_train,
tgt_validation,
)
except Exception as e:
print("'doRandomizedSearch' NOT successful!")
raise e
else:
print("'doRandomizedSearch' Successful!")
def model_randomized_search(
model_dist_pairs, feat_train, feat_validation, tgt_train, tgt_validation
):
for model, distribution in model_dist_pairs:
doRandomizedSearch(
model,
distribution,
feat_train,
feat_validation,
tgt_train,
tgt_validation,
)
class CustomOptimizers:
def __init__(self, model, initial_theta, bounds):
self.model = model
self.initial_theta = initial_theta
self.bounds = bounds
def obj_func(self, theta, eval_gradient):
if eval_gradient:
ll, grad = self.model.log_marginal_likelihood(theta, True)
return -ll, -grad
else:
return -self.model.log_marginal_likelihood(theta)
def minimize_wrapper(self, theta, eval_gradient):
return minimize(self.obj_func, theta, args=(eval_gradient), bounds=self.bounds)
def least_squares_wrapper(self, theta, eval_gradient):
return least_squares(
self.obj_func, theta, args=(eval_gradient), bounds=self.bounds
)
def differential_evolution_wrapper(self, theta, eval_gradient):
return differential_evolution(
self.obj_func, theta, args=(eval_gradient), bounds=self.bounds
)
def basinhopping_wrapper(self, theta, eval_gradient):
return basinhopping(
self.obj_func, theta, args=(eval_gradient), bounds=self.bounds
)
def dual_annealing_wrapper(self, theta, eval_gradient):
return dual_annealing(
self.obj_func, theta, args=(eval_gradient), bounds=self.bounds
)
class GPRWithCustomOptimizer(GaussianProcessRegressor):
def __init__(
self,
optimizer="minimize",
initial_theta=None,
bounds=None,
random_state=None,
normalize_y=True,
n_restarts_optimizer=0,
copy_X_train=True,
**kwargs,
):
self.initial_theta = initial_theta
self.bounds = bounds
self.custom_optimizers = CustomOptimizers(self, self.initial_theta, self.bounds)
self.optimizer_func = getattr(self.custom_optimizers, optimizer)
super().__init__(
optimizer=self.optimizer_func,
random_state=random_state,
normalize_y=normalize_y,
n_restarts_optimizer=n_restarts_optimizer,
copy_X_train=copy_X_train,
**kwargs,
)
def fit(self, X, y):
super().fit(X, y)
def intermediate_models(kernel):
dtr_dic = dict(
ccp_alpha=uniform(loc=0.0, scale=10.0),
max_features=randint(low=1, high=100),
max_depth=randint(low=1, high=100),
criterion=["squared_error", "friedman_mse", "absolute_error", "poisson"],
)
optimizer_names = [
"minimize_wrapper",
"least_squares_wrapper",
"differential_evolution_wrapper",
"basinhopping_wrapper",
"dual_annealing_wrapper",
]
model_dist_pairs = []
for optimizer_name in optimizer_names:
gpr = GPRWithCustomOptimizer(kernel=kernel, optimizer=optimizer_name)
gpr_dic = dict(
optimizer=optimizer_names,
n_restarts_optimizer=np.arange(0, 20 + 1),
normalize_y=[False, True],
copy_X_train=[True, False],
random_state=np.arange(0, 10 + 1),
)
model_dist_pairs.append((gpr, gpr_dic))
return [(DecisionTreeRegressor(), dtr_dic)] + model_dist_pairs
def cast2Float64(X_train, X_test, y_train, y_test):
X_train_new = np.nan_to_num(X_train.astype(np.float64))
y_train_new = np.nan_to_num(y_train.astype(np.float64))
X_test_new = np.nan_to_num(X_test.astype(np.float64))
y_test_new = np.nan_to_num(y_test.astype(np.float64))
return [X_train_new, X_test_new, y_train_new, y_test_new]
</code></pre>
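<p>A minimal dependency-free sketch of the hazard in <code>GPRWithCustomOptimizer.__init__</code> above: sklearn's <code>clone()</code> re-calls <code>__init__</code> with the estimator's stored parameters, so if <code>__init__</code> replaces the <code>optimizer</code> string with a resolved function, the second construction passes that function to <code>getattr()</code>, which requires a string. The sklearn convention is to store constructor arguments verbatim and resolve them later (e.g. in <code>fit</code>):</p>

```python
class Optimizers:
    def minimize_wrapper(self):
        return "ran minimize"

class BadEstimator:
    def __init__(self, optimizer="minimize_wrapper"):
        # Anti-pattern: the stored param is now a bound method, not a string.
        self.optimizer = getattr(Optimizers(), optimizer)

class GoodEstimator:
    def __init__(self, optimizer="minimize_wrapper"):
        self.optimizer = optimizer  # store the argument verbatim
    def fit(self):
        # Resolve the name only when it is needed.
        return getattr(Optimizers(), self.optimizer)()

est = BadEstimator()
try:
    BadEstimator(optimizer=est.optimizer)  # clone-style re-construction
except TypeError as exc:
    print(exc)                             # TypeError: attribute name must be string
print(GoodEstimator().fit())               # ran minimize
```

<p>Applied to the question's class, that means keeping <code>self.optimizer</code> as the string and building <code>CustomOptimizers</code> inside <code>fit</code> instead of inside <code>__init__</code>.</p>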
<p><strong>utilities.py:</strong></p>
<pre><code>from HelperFunctions import (
np,
intermediate_models,
model_randomized_search,
cast2Float64,
)
def initializeKernel(median_distance, data_range):
return ConstantKernel(constant_value_bounds=np.array([[1e-3, 1e3]])) * Matern(
length_scale_bounds=np.array([[1e-3, 1e3]])
) + WhiteKernel(noise_level_bounds=np.array([[1e-3, 1e3]]))
####################################################################
def all_combined_product_cols(df):
cols = list(df.columns)
product_cols = []
for length in range(1, len(cols) + 1):
for combination in combinations(cols, r=length):
combined_col = None
for col in combination:
if combined_col is None:
combined_col = df[col].copy()
else:
combined_col *= df[col]
combined_col.name = "_".join(combination)
product_cols.append(combined_col)
return pd.concat(product_cols, axis=1)
def ensureDataFrameHasName(y, dataframe_name):
if not isinstance(y, pd.DataFrame):
y = pd.DataFrame(y, name=dataframe_name, freq="C", weekmask=weekmask_string)
else:
y.name = dataframe_name
return y.set_axis(pd.to_datetime(y.index)).asfreq(cfreq)
##################################################################################
def model_comparison(original_df):
a = original_df["Result"].to_numpy()
if (a[0] == a).all():
original_df = original_df.drop(columns=["Result"])
bet, train, features, target = """train_features_target(original_df)"""
features_cols = [col for col in list(original_df.columns) if "Score" not in col]
train.dropna(inplace=True)
data = train.values
n_features = len(features_cols)
score_features = data[:, :n_features]
score_target = data[:, n_features]
feat_tgt_tuple = train_test_split(
score_features,
score_target,
test_size=0.33,
random_state=4,
)
feat_train, feat_validation, tgt_train, tgt_validation = feat_tgt_tuple
data_range = np.ptp(feat_train, axis=0)
distances = pdist(feat_train, metric="euclidean")
median_distance = np.median(distances)
values = list(feat_tgt_tuple)
kernel = initializeKernel(median_distance, data_range)
model_dist_pairs = intermediate_models(kernel)
model_randomized_search(model_dist_pairs, *cast2Float64(*values))
</code></pre>
<p><strong>score.py:</strong></p>
<pre><code>from utilities import model_comparison
###############################################################################
def main():
# data_cls_with_result is basically some dataframe after processing.
model_comparison(data_cls_with_result)
if __name__ == "__main__":
main()
</code></pre>
<p><strong>However, I am getting an error message that I cannot seem to fix:</strong></p>
<pre><code>'randomized_search.fit' NOT successful!
Traceback (most recent call last):
File "c:\Users\username\Projects\Python\ScoreTest\HelperFunctions.py", line 38, in randomized_search
search = randomized_search.fit(X_train, y_train)
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\base.py", line 1152, in wrapper
return fit_method(estimator, *args, **kwargs)
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\model_selection\_search.py", line 812, in fit
base_estimator = clone(self.estimator)
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\base.py", line 75, in clone
return estimator.__sklearn_clone__()
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\base.py", line 268, in __sklearn_clone__
return _clone_parametrized(self)
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\base.py", line 110, in _clone_parametrized
new_object = klass(**new_object_params)
File "c:\Users\username\Projects\Python\ScoreTest\HelperFunctions.py", line 158, in __init__
self.optimizer_func = getattr(self.custom_optimizers, optimizer)
TypeError: getattr(): attribute name must be string
'randomized_search' NOT successful!
Traceback (most recent call last):
File "c:\Users\username\Projects\Python\ScoreTest\HelperFunctions.py", line 38, in randomized_search
search = randomized_search.fit(X_train, y_train)
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\base.py", line 1152, in wrapper
return fit_method(estimator, *args, **kwargs)
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\model_selection\_search.py", line 812, in fit
base_estimator = clone(self.estimator)
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\base.py", line 75, in clone
return estimator.__sklearn_clone__()
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\base.py", line 268, in __sklearn_clone__
return _clone_parametrized(self)
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\base.py", line 110, in _clone_parametrized
new_object = klass(**new_object_params)
File "c:\Users\username\Projects\Python\ScoreTest\HelperFunctions.py", line 158, in __init__
self.optimizer_func = getattr(self.custom_optimizers, optimizer)
TypeError: getattr(): attribute name must be string
'doRandomizedSearch' NOT successful!
Traceback (most recent call last):
File "c:/Users/username/Projects/Python/ScoreTest/score.py", line 483, in <module>
main()
File "c:/Users/username/Projects/Python/ScoreTest/score.py", line 479, in main
model_comparison(data_cls_with_result)
File "c:\Users\username\Projects\Python\ScoreTest\utilities.py", line 127, in model_comparison
model_randomized_search(model_dist_pairs, *cast2Float64(*values))
File "c:\Users\username\Projects\Python\ScoreTest\HelperFunctions.py", line 96, in model_randomized_search
doRandomizedSearch(
File "c:\Users\username\Projects\Python\ScoreTest\HelperFunctions.py", line 87, in doRandomizedSearch
raise e
File "c:\Users\username\Projects\Python\ScoreTest\HelperFunctions.py", line 77, in doRandomizedSearch
randomized_search(
File "c:\Users\username\Projects\Python\ScoreTest\HelperFunctions.py", line 38, in randomized_search
search = randomized_search.fit(X_train, y_train)
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\base.py", line 1152, in wrapper
return fit_method(estimator, *args, **kwargs)
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\model_selection\_search.py", line 812, in fit
base_estimator = clone(self.estimator)
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\base.py", line 75, in clone
return estimator.__sklearn_clone__()
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\base.py", line 268, in __sklearn_clone__
return _clone_parametrized(self)
File "C:\Users\username\anaconda3\envs\ts-env\lib\site-packages\sklearn\base.py", line 110, in _clone_parametrized
new_object = klass(**new_object_params)
File "c:\Users\username\Projects\Python\ScoreTest\HelperFunctions.py", line 158, in __init__
self.optimizer_func = getattr(self.custom_optimizers, optimizer)
TypeError: getattr(): attribute name must be string
</code></pre>
|
<python><optimization><scikit-learn><kernel-density><gaussian-process>
|
2024-07-01 01:40:33
| 1
| 525
|
Rebel
|
78,690,101
| 3,163,618
|
Do PyPy int optimizations apply to classes subclassed from int?
|
<p>From what I recall, PyPy has special optimizations on built-in types like ints and lists, which is great because they are very common, and it would be wasteful to treat an int like any other object.</p>
<p>If I subclass <code>int</code> specifically, for example</p>
<pre><code>class bitset(int):
def in_bit(self, i):
return bool(self & (1 << i))
</code></pre>
<p>are these optimizations lost? I do not want to prematurely optimize, but I also don't want to be wasteful since I can easily define a function like <code>in_bit(b, i)</code> instead where <code>b</code> is a plain old <code>int</code>.</p>
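<p>For reference, the plain-function alternative mentioned above (my own sketch, operating on an ordinary <code>int</code> rather than a subclass) would look like:</p>

```python
# Plain-function alternative: operate on an ordinary int instead of subclassing.
# The argument stays a plain int, so PyPy's int specializations clearly apply.
def in_bit(b, i):
    return bool(b & (1 << i))

print(in_bit(0b1010, 1))  # True: bit 1 is set
print(in_bit(0b1010, 0))  # False: bit 0 is clear
```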
|
<python><optimization><subclass><pypy>
|
2024-07-01 00:06:06
| 1
| 11,524
|
qwr
|
78,689,986
| 3,486,684
|
Pivoting and then unpivoting while maintaining the original data types in the dataframe, without any reparsing operations
|
<p>Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
import polars as pl
from typing import NamedTuple
Months = Enum(
"MONTHS",
[
"Jan",
"Feb",
"Mar",
"Apr",
"May",
"Jun",
"Jul",
"Aug",
"Sep",
"Oct",
"Nov",
"Dec",
],
)
plMonths = pl.Enum([m.name for m in Months])
plYearMonth = pl.Struct({"year": int, "month": plMonths})
class YearMonth(NamedTuple):
year: int
month: str
dates = pl.Series(
"year_months",
[YearMonth(2024, Months.Jan.name), YearMonth(2024, Months.Feb.name)],
dtype=plYearMonth,
)
values = pl.Series("value", [0, 1])
hellos = pl.Series("hello", ["a"]*2 + ["b"]*2)
</code></pre>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({
"hello": pl.Series(["a", "a", "b", "b"]),
"date": pl.concat([dates, dates]),
"value": pl.concat([values, values.reverse()])
})
df
</code></pre>
<pre><code>shape: (4, 3)
┌───────┬──────────────┬───────┐
│ hello ┆ date ┆ value │
│ --- ┆ --- ┆ --- │
│ str ┆ struct[2] ┆ i64 │
╞═══════╪══════════════╪═══════╡
│ a ┆ {2024,"Jan"} ┆ 0 │
│ a ┆ {2024,"Feb"} ┆ 1 │
│ b ┆ {2024,"Jan"} ┆ 1 │
│ b ┆ {2024,"Feb"} ┆ 0 │
└───────┴──────────────┴───────┘
</code></pre>
<p>I then pivot <code>df</code>:</p>
<pre class="lang-py prettyprint-override"><code>pivoted = df.pivot(index="hello", columns="date", values="value")
pivoted
</code></pre>
<pre><code>shape: (2, 3)
┌───────┬──────────────┬──────────────┐
│ hello ┆ {2024,"Jan"} ┆ {2024,"Feb"} │
│ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 │
╞═══════╪══════════════╪══════════════╡
│ a ┆ 0 ┆ 1 │
│ b ┆ 1 ┆ 0 │
└───────┴──────────────┴──────────────┘
</code></pre>
<p>The headers have now become strings, quite understandably, as can be seen by the following unpivot:</p>
<pre class="lang-py prettyprint-override"><code>unpivoted = pivoted.melt(id_vars="hello", variable_name="date", value_name="value")
unpivoted
</code></pre>
<pre><code>shape: (4, 3)
┌───────┬──────────────┬───────┐
│ hello ┆ date ┆ value │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞═══════╪══════════════╪═══════╡
│ a ┆ {2024,"Jan"} ┆ 0 │
│ b ┆ {2024,"Jan"} ┆ 1 │
│ a ┆ {2024,"Feb"} ┆ 1 │
│ b ┆ {2024,"Feb"} ┆ 0 │
└───────┴──────────────┴───────┘
</code></pre>
<p>But recall that <code>df</code>'s original <code>date</code> column had a struct data type <code>plYearMonth</code>. Is there any way at all to do the unpivot so that <code>unpivoted</code>'s <code>date</code> data is interpreted again as <code>plYearMonth</code> <em>without performing a reparsing</em> operation?</p>
<p>My best guess is that this is not possible, and it's instead better to have a dictionary which maps between the string and struct representations?</p>
|
<python><python-polars>
|
2024-06-30 22:48:59
| 0
| 4,654
|
bzm3r
|
78,689,924
| 5,570,089
|
Efficient Merge sort implementation in python
|
<pre><code>def merge(arr, l, m, r):
merged_arr = []
i, j = 0, 0
while (i < len(arr[l:m])) and (j < len(arr[m:r+1])):
if arr[l+i] < arr[m+j]:
merged_arr.append(arr[l+i])
i += 1
elif arr[l+i] >= arr[m+j]:
merged_arr.append(arr[m+j])
j += 1
merged_arr.extend(arr[m+j:r+1])
merged_arr.extend(arr[l+i:m])
arr[l:r+1] = merged_arr
return
def merge_sort(arr, l, r):
if r > l:
m = (l + r) // 2
merge_sort(arr, l, m)
merge_sort(arr, m+1, r)
return merge(arr, l, m, r)
</code></pre>
<p>I am not sure what is wrong with my <code>merge</code> function. The following works well:
<code>print(merge([1,4,7,8,9,20, 0,4,5,15,67,68], 0, 6,12))</code> <br>
However, this does not give correct output:
<code>print(merge_sort([1, 5, 4, 2, 33, 4, 2, 44, 22, 0], 0, 10))</code></p>
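<p>For comparison, here is a sketch (my own, not a fix of the exact code above) in which <code>merge</code> and <code>merge_sort</code> agree that the left half is <code>arr[l..m]</code> and the right half is <code>arr[m+1..r]</code> — note the snippet above recurses on <code>[l, m]</code> and <code>[m+1, r]</code> but merges <code>arr[l:m]</code> against <code>arr[m:r+1]</code>, which is one source of the mismatch:</p>

```python
# Consistent boundary convention: left half is arr[l..m], right half arr[m+1..r].
def merge(arr, l, m, r):
    left, right = arr[l:m + 1], arr[m + 1:r + 1]
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])   # leftover from the left half
    merged.extend(right[j:])  # leftover from the right half
    arr[l:r + 1] = merged

def merge_sort(arr, l, r):
    if r > l:
        m = (l + r) // 2
        merge_sort(arr, l, m)
        merge_sort(arr, m + 1, r)
        merge(arr, l, m, r)

data = [1, 5, 4, 2, 33, 4, 2, 44, 22, 0]
merge_sort(data, 0, len(data) - 1)
print(data)  # [0, 1, 2, 2, 4, 4, 5, 22, 33, 44]
```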
|
<python><algorithm><sorting><data-structures>
|
2024-06-30 22:17:13
| 1
| 636
|
Gerry
|
78,689,912
| 3,486,684
|
Polars: "explode" an enum-valued series into two columns of enum index and enum value
|
<pre class="lang-py prettyprint-override"><code>import polars as pl
dates = ["2024 Jan", "2024 Feb"]
DateEnum = pl.Enum(dates)
</code></pre>
<pre class="lang-py prettyprint-override"><code>date_table = pl.DataFrame(
    {"date": pl.Series(dates, dtype=DateEnum)}
)
date_table.explode("date") # fails
</code></pre>
<pre class="lang-py prettyprint-override"><code>date_table = pl.DataFrame(
    {
        "date": pl.Series(dates, dtype=DateEnum).cast(dtype=str),
        "index": pl.Series(dates, dtype=DateEnum).cast(dtype=int),
    }
)
date_table # fails
</code></pre>
<p>I hoped to see a dataframe like:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
pyDateEnum = Enum("Dates", dates)
date_table = pl.DataFrame({
"date": pl.Series("dates", [x.name for x in pyDateEnum], dtype=str),
"index": pl.Series("indices", [x.value - 1 for x in pyDateEnum], dtype=int)
})
date_table
</code></pre>
<pre class="lang-py prettyprint-override"><code>shape: (2, 2)
┌──────────┬───────┐
│ date ┆ index │
│ --- ┆ --- │
│ str ┆ i64 │
╞══════════╪═══════╡
│ 2024 Jan ┆ 0 │
│ 2024 Feb ┆ 1 │
└──────────┴───────┘
</code></pre>
<p>How do I go about doing this using Polars, without resorting to Python enums?</p>
|
<python><enums><python-polars>
|
2024-06-30 22:09:56
| 1
| 4,654
|
bzm3r
|
78,689,910
| 13,395,230
|
Why is Numpy converting an "object"-"int" type to an "object"-"float" type?
|
<p>This could be a bug, or could be something I don't understand about when numpy decides to convert the types of the objects in an "object" array.</p>
<pre><code>X = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [1158941147679947299,0]
Y = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [11589411476799472995,0]
Z = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [115894114767994729956,0]
print(type(X[0]),X[0]) # <class 'int'> 7047216832217320738
print(type(Y[0]),Y[0]) # <class 'float'> 1.7477687161336848e+19
print(type(Z[0]),Z[0]) # <class 'int'> 121782390452532103395
</code></pre>
<p>The arrays themselves remain "object" type (as expected). It is unexpected that the <code>Y</code> array's objects got converted to "floats". Why is that happening? As a consequence I immediately lose precision in my combinatorics. To make things even stranger, removing the <code>0</code> fixes things:</p>
<pre><code>X = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [1158941147679947299]
Y = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [11589411476799472995]
Z = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [115894114767994729956]
print(type(X[0]),X[0]) # <class 'int'> 7047216832217320738
print(type(Y[0]),Y[0]) # <class 'int'> 17477687161336846434
print(type(Z[0]),Z[0]) # <class 'int'> 121782390452532103395
</code></pre>
<p>I have tried other things, such as using larger/smaller numbers, but rarely (if ever) end up with "floats". It is something very specific about the size of these particular "int" values.</p>
<hr />
<p>Better code that shows the problem.</p>
<pre><code>import numpy as np
A = np.array([1,1],dtype=object) + [2**62,0]
B = np.array([1,1],dtype=object) + [2**63,0]
C = np.array([1,1],dtype=object) + [2**64,0]
D = np.array([1,1],dtype=object) + [2**63]
E = np.array([1,1],dtype=object) + [2**63,2**63]
print(type(A[0]),A[0]) # <class 'int'> 4611686018427387905
print(type(B[0]),B[0]) # <class 'float'> 9.223372036854776e+18
print(type(C[0]),C[0]) # <class 'int'> 18446744073709551617
print(type(D[0]),D[0]) # <class 'int'> 9223372036854775809
print(type(E[0]),E[0]) # <class 'int'> 9223372036854775809
</code></pre>
|
<python><arrays><numpy>
|
2024-06-30 22:07:22
| 2
| 3,328
|
Bobby Ocean
|
78,689,873
| 6,286,900
|
How to fine-tune merlinite 7B model in Python
|
<p>I am new to LLM programming in Python and I am trying to fine-tune the <a href="https://huggingface.co/instructlab/merlinite-7b-lab" rel="nofollow noreferrer">instructlab/merlinite-7b-lab</a> model on my Mac M1. My goal is to teach this model about a new music composer, <strong>Xenobi Amilen</strong>, whom I have invented.</p>
<p>The text of this composer is <a href="https://github.com/sasadangelo/llm-train/blob/main/xenobi_amilen.json" rel="nofollow noreferrer">here</a>.</p>
<p>Using the new Ilab CLI from RedHat I created <a href="https://github.com/sasadangelo/llm-train/blob/main/train_merlinite_7b.jsonl" rel="nofollow noreferrer">this training set</a> for the model. It is a JSONL file with 100 questions/answers about the invented composer.</p>
<p>I wrote <a href="https://github.com/sasadangelo/llm-train/blob/main/main.py" rel="nofollow noreferrer">this Python script</a> to train the model. I tested all the parts related to the tokenizer, datasets and it seems to work. However, the final train got this error:</p>
<pre><code>RuntimeError: Placeholder storage has not been allocated on MPS device!
0%| | 0/75 [00:00<?, ?it/s]
</code></pre>
<p>I found a lot of articles about this error on Google and also StackOverflow <a href="https://stackoverflow.com/questions/74724120/pytorch-on-m1-mac-runtimeerror-placeholder-storage-has-not-been-allocated-on-m">like this</a>, for example. The problem seems that in addition to the model I have to send to mps also the input parameters, but it's not clear to me how to change my code to do that.</p>
<p>I tried several fixes but had no luck. Can anyone help?</p>
|
<python><pytorch><huggingface-transformers><large-language-model><huggingface-tokenizers>
|
2024-06-30 21:37:14
| 0
| 1,179
|
Salvatore D'angelo
|
78,689,869
| 2,687,250
|
Ignore missing columns in pandas read_excel usecols
|
<p>Is there a way to get pandas to ignore missing columns in usecols when reading excel?</p>
<p><a href="https://stackoverflow.com/questions/63002350/ignore-missing-columns-in-usecol-parameter">I know there is a similar solution for read_csv here</a></p>
<p>But can’t find a solution for Excel.</p>
<p>For example:</p>
<pre><code># contents of sample.xlsx:
#Col1,Col2,Col3
#1,2,3
#4,5,6
</code></pre>
<p>I'd like to return the following, using pandas.read_excel and the subset, but this raises an error:</p>
<pre><code>pd.read_excel("sample.xlsx", usecols="A:D")
ParserError: Defining usecols with out-of-bounds indices is not allowed. [3]
</code></pre>
<p>I think the solution is to use a callable, but not sure how to implement it.</p>
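<p>For what it's worth, a sketch of the callable approach (hedged — the <code>wanted</code> set and file name below are my own placeholders): <code>usecols</code> can be a function that receives each column name and returns whether to keep it, so names missing from the file are simply never offered to it and no out-of-bounds error can occur:</p>

```python
# Hypothetical: keep whichever of these columns exist; missing ones are ignored,
# because the callable is only asked about columns actually present in the file.
wanted = {"Col1", "Col2", "Col3", "Col4"}

def keep(name):
    return name in wanted

# pd.read_excel("sample.xlsx", usecols=keep)  # would select Col1..Col3 here

print(keep("Col1"), keep("Col9"))  # True False
```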
|
<python><excel><pandas>
|
2024-06-30 21:35:20
| 2
| 538
|
Jack
|
78,689,833
| 9,547,007
|
TextTestRunner doesn't recognize modules when executing tests in a different project
|
<p>I am currently working on a project where I need to run tests inside a different file structure like this:</p>
<pre><code>/my_project
├── __init__.py
├── ...my python code
/given_proj
├── __init__.py
├── /package
│ ├── __init__.py
│ └── main.py
└── /tests
└── test_main.py
</code></pre>
<h3>Current approach</h3>
<p>From inside my project I want to execute the tests within the given project.
My current approach is using <code>unittest.TextTestRunner</code> like this:
<code>unittest.TextTestRunner().run(unittest.defaultTestLoader.discover('../given_proj/tests'))</code>.</p>
<h3>Problem with the current approach</h3>
<p>Of course the test file wants to import from <code>main.py</code> like this: <code>from package.main import my_function</code>. However, when I run my code, the tests fail because the "package" module cannot be found:</p>
<pre><code>...\given_proj\tests\test_main.py", line 2, in <module>
from package.main import my_function
ModuleNotFoundError: No module named 'package'
</code></pre>
<p>When I run the tests with <code>python -m unittest discover -s tests</code> from the command line in the directory of <code>given_proj</code>, they run fine.</p>
<h3>What i tried</h3>
<p>I tried changing the working directory to <code>given_proj</code> with <code>os.chdir('../given_proj')</code>, but it produces the same result.</p>
<p>I also tried importing the module manually with <code>importlib.import_module()</code>, but I am not sure whether I did it wrong or whether it simply doesn't work either.</p>
<h3>My question</h3>
<p>How do I make the tests run as if they were started from the directory they are supposed to run in? Ideally I wouldn't need to change the given project at all, because I want to do this with multiple projects.</p>
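<p>One hedged sketch of an approach (the layout below is recreated in a temp folder purely for illustration): put the project root on <code>sys.path</code> before discovery, and pass <code>top_level_dir</code> so the loader resolves module names relative to that project. Note this sketch adds a <code>tests/__init__.py</code>, which the layout above does not have:</p>

```python
import os
import sys
import tempfile
import textwrap
import unittest

# Recreate a minimal given_proj layout in a temp dir (illustration only).
root = tempfile.mkdtemp()
proj = os.path.join(root, "given_proj")
os.makedirs(os.path.join(proj, "package"))
os.makedirs(os.path.join(proj, "tests"))
open(os.path.join(proj, "package", "__init__.py"), "w").close()
open(os.path.join(proj, "tests", "__init__.py"), "w").close()  # assumption: added
with open(os.path.join(proj, "package", "main.py"), "w") as f:
    f.write("def my_function(num):\n    return num * 2\n")
with open(os.path.join(proj, "tests", "test_main.py"), "w") as f:
    f.write(textwrap.dedent("""
        import unittest
        from package.main import my_function

        class TestMain(unittest.TestCase):
            def test_my_function(self):
                self.assertEqual(my_function(5), 10)
    """))

# The two key lines: make `package` importable, then anchor discovery at proj.
sys.path.insert(0, proj)
suite = unittest.defaultTestLoader.discover(
    start_dir=os.path.join(proj, "tests"), top_level_dir=proj
)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, result.wasSuccessful())
```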
<h3>Reproducing it</h3>
<p>I reduced it to a very minimal project, in case anybody wants to try to reproduce it. The file structure is the one at the top of the post.</p>
<p>All <code>__init__.py</code> files are empty.</p>
<p><code>/my_project/main.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import os
import unittest
if __name__ == "__main__":
    dirname = "../given_proj/tests"  # either "./" or "../" depending on where you run the python file from
unittest.TextTestRunner().run(unittest.defaultTestLoader.discover(dirname))
</code></pre>
<p><code>/given_proj/package/main.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>def my_function(num):
return num*2
</code></pre>
<p><code>/given_proj/tests/test_main.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import unittest
from package.main import my_function
class TestMain(unittest.TestCase):
def test_my_function(self):
result = my_function(5)
self.assertEqual(result, 10)
result = my_function(10)
self.assertEqual(result, 20)
result = my_function(0)
self.assertEqual(result, 0)
if __name__ == '__main__':
unittest.main()
</code></pre>
|
<python><python-import><python-unittest>
|
2024-06-30 21:20:07
| 1
| 352
|
jonas
|
78,689,782
| 2,837,253
|
Can't delete attribute added by metaclass
|
<p>I've been playing around with Python metaclasses, and I have encountered a rather strange instance where I cannot delete an attribute added to a class object by a metaclass. Consider the following two classes:</p>
<pre class="lang-py prettyprint-override"><code>class meta(type):
def __new__(mcls, name, bases, attrs):
obj = super().__new__(mcls, name, bases, attrs)
setattr(obj, "temp", "This is temporary")
return obj
def __call__(cls, *args, **kwargs):
obj = type.__call__(cls, *args, **kwargs)
# delattr(obj, "temp")
return obj
class test(metaclass=meta):
def __init__(self):
print(self.temp)
</code></pre>
<p>If I create an instance of the "test" class, it will print "This is temporary", as expected. However, if I uncomment the <code>delattr</code> statement in the <code>__call__</code> method, it throws an error complaining about the object having no attribute "temp". If I remove the <code>delattr</code> statement and run the following:</p>
<pre class="lang-py prettyprint-override"><code>a = test()
print(a.temp)
print(hasattr(a, "temp"))
delattr(a, "temp")
</code></pre>
<p>It will print:</p>
<pre><code>This is temporary
This is temporary
True
AttributeError: 'a' object has no attribute 'temp'
</code></pre>
<p>So it clearly recognises that there is an attribute there, it can print the value of the attribute, it just can't delete the attribute. Is this expected behaviour?</p>
<p>If it matters, this is python 3.12</p>
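<p>A quick probe (my own, not part of the original snippet) suggests where <code>temp</code> actually lives — the metaclass sets it on the class object, so it never appears in the instance's <code>__dict__</code>, and only deleting it from the class succeeds:</p>

```python
class meta(type):
    def __new__(mcls, name, bases, attrs):
        obj = super().__new__(mcls, name, bases, attrs)
        obj.temp = "This is temporary"  # set on the class object, not instances
        return obj

class test(metaclass=meta):
    pass

a = test()
print("temp" in vars(a))     # False: not in the instance dict
print("temp" in vars(test))  # True: it is a class attribute
del test.temp                # deleting from the class succeeds
print(hasattr(a, "temp"))    # False: attribute lookup no longer finds it
```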
|
<python><metaclass>
|
2024-06-30 20:57:48
| 1
| 4,778
|
MrAzzaman
|
78,689,770
| 1,285,061
|
Django django.contrib.messages add new constant messages.NOTICE
|
<p>How can I create a new constant to Django messages?</p>
<p>I want to add a new constant <code>messages.NOTICE</code> in addition to the existing constants, which I can use to display notices using Bootstrap CSS.</p>
<pre><code># settings.py
from django.contrib.messages import constants as messages
MESSAGE_TAGS = {
messages.DEBUG: 'alert-secondary',
messages.INFO: 'alert-info',
messages.SUCCESS: 'alert-success',
messages.WARNING: 'alert-warning',
messages.ERROR: 'alert-danger',
#messages.NOTICE: 'alert-primary', #To add
}
</code></pre>
<p>Error if I try to add the above.</p>
<pre><code> G:\runtime\devui\devui\settings.py changed, reloading.
Traceback (most recent call last):
File "G:\runtime\devui\manage.py", line 22, in <module>
main()
File "G:\runtime\devui\manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "C:\Users\devuser\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\core\management\__init__.py", line 442, in execute_from_command_line
utility.execute()
File "C:\Users\devuser\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\core\management\__init__.py", line 382, in execute
settings.INSTALLED_APPS
File "C:\Users\devuser\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\conf\__init__.py", line 89, in __getattr__
self._setup(name)
File "C:\Users\devuser\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\conf\__init__.py", line 76, in _setup
self._wrapped = Settings(settings_module)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\devuser\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\conf\__init__.py", line 190, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\devuser\AppData\Local\Programs\Python\Python311\Lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "G:\runtime\devui\devui\settings.py", line 44, in <module>
messages.NOTICE: 'alert-primary',
^^^^^^^^^^^^^^^
AttributeError: module 'django.contrib.messages.constants' has no attribute 'NOTICE'
PS G:\runtime\devui>
</code></pre>
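<p>Since the built-in constants are just integers (per the Django docs: <code>DEBUG = 10</code>, <code>INFO = 20</code>, <code>SUCCESS = 25</code>, <code>WARNING = 30</code>, <code>ERROR = 40</code>), one hedged sketch is to pick an unused integer for a custom level — the value <code>15</code> below is my own choice, not a Django constant:</p>

```python
# settings.py sketch: message levels are plain ints, so a custom NOTICE level
# can be defined alongside Django's built-ins (values inlined for illustration).
DEBUG, INFO, SUCCESS, WARNING, ERROR = 10, 20, 25, 30, 40  # Django's levels
NOTICE = 15  # hypothetical custom level between DEBUG and INFO

MESSAGE_TAGS = {
    DEBUG: 'alert-secondary',
    INFO: 'alert-info',
    SUCCESS: 'alert-success',
    WARNING: 'alert-warning',
    ERROR: 'alert-danger',
    NOTICE: 'alert-primary',
}

print(MESSAGE_TAGS[NOTICE])  # alert-primary
```

<p>Messages could then be sent with <code>messages.add_message(request, NOTICE, "...")</code>, since <code>add_message</code> accepts any integer level.</p>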
|
<python><python-3.x><django><bootstrap-5><django-messages>
|
2024-06-30 20:52:56
| 1
| 3,201
|
Majoris
|
78,689,527
| 4,322
|
How to get Rich to print out final table without cutting off the bottom?
|
<p>I am using the "Rich" printing tool in Python. It's working great - but I have a problem. When I run it with more content than fits on the terminal window, it cuts everything off at the bottom (as expected). But when it finishes, I want to allow the entire table to print out. Here's a repro:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import logging
import random
import time
import uuid
from rich.console import Console
from rich.layout import Layout
from rich.live import Live
from rich.panel import Panel
from rich.progress import BarColumn, Progress, TextColumn
from rich.table import Table
all_statuses = {}
task_total = 30
console = Console()
table_update_event = asyncio.Event()
table_update_running = False
events_to_progress = []
max_time = 5 # Changed to 5 seconds
logging.basicConfig(level=logging.INFO)
class InstanceStatus:
def __init__(self, region, zone):
# Generate a unique ID for each instance - maximum 6 characters
self.id = f"{region}-{zone}-{uuid.uuid4()}"[:6]
self.region = region
self.zone = zone
self.status = "Initializing"
self.detailed_status = "Initializing"
self.elapsed_time = 0
self.instance_id = None
self.public_ip = None
self.private_ip = None
self.vpc_id = None
def combined_status(self):
return f"{self.status} ({self.detailed_status})"
def make_progress_table():
table = Table(show_header=True, header_style="bold magenta", show_lines=False)
table.add_column("ID", width=8, style="cyan", no_wrap=True)
table.add_column("Region", width=8, style="green", no_wrap=True)
table.add_column("Zone", width=8, style="green", no_wrap=True)
table.add_column("Status", width=15, style="yellow", no_wrap=True)
table.add_column("Elapsed", width=8, justify="right", style="magenta", no_wrap=True)
table.add_column("Instance ID", width=15, style="blue", no_wrap=True)
table.add_column("Public IP", width=15, style="blue", no_wrap=True)
table.add_column("Private IP", width=15, style="blue", no_wrap=True)
sorted_statuses = sorted(all_statuses.values(), key=lambda x: (x.region, x.zone))
for status in sorted_statuses:
table.add_row(
status.id[:8],
status.region[:8],
status.zone[:8],
status.combined_status()[:15],
f"{status.elapsed_time:.1f}s",
(status.instance_id or "")[:15],
(status.public_ip or "")[:15],
(status.private_ip or "")[:15],
)
return table
def create_layout(progress, table):
layout = Layout()
progress_panel = Panel(
progress,
title="Progress",
border_style="green",
padding=(1, 1),
)
layout.split(
Layout(progress_panel, size=5),
Layout(table),
)
return layout
async def update_table(live):
global table_update_running, events_to_progress, all_statuses, console
if table_update_running:
logging.debug("Table update already running. Exiting.")
return
logging.debug("Starting table update.")
try:
table_update_running = True
progress = Progress(
TextColumn("[progress.description]{task.description}"),
BarColumn(bar_width=None),
TextColumn("[progress.percentage]{task.percentage:>3.0f}%"),
TextColumn("[progress.completed]{task.completed} of {task.total}"),
TextColumn("[progress.elapsed]{task.elapsed:>3.0f}s"),
expand=True,
)
task = progress.add_task("Creating Instances", total=task_total)
while not table_update_event.is_set() or events_to_progress:
while events_to_progress:
event = events_to_progress.pop(0)
all_statuses[event.id] = event
progress.update(task, completed=len(all_statuses))
table = make_progress_table()
layout = create_layout(progress, table)
live.update(layout)
await asyncio.sleep(0.05) # Reduce sleep time for more frequent updates
except Exception as e:
logging.error(f"Error in update_table: {str(e)}")
finally:
table_update_running = False
logging.debug("Table update finished.")
async def main():
global events_to_progress, all_statuses
start_time = time.time()
end_time = start_time + 4 # Set to 4 seconds
statuses_to_create = [
InstanceStatus(str(random.randint(1, 100)), str(random.randint(1, 1000)))
for _ in range(task_total)
]
with Live(console=console, refresh_per_second=20) as live:
update_table_task = asyncio.create_task(update_table(live))
# Distribute status creation over 4 seconds
for i in range(task_total):
events_to_progress.append(statuses_to_create[i])
if (i + 1) % 10 == 0:
await asyncio.sleep(0.4)
# Ensure all statuses are processed
all_statuses.update({status.id: status for status in statuses_to_create})
# If we finished early, wait until 4 seconds have passed
time_elapsed = time.time() - start_time
if time_elapsed < 4:
await asyncio.sleep(4 - time_elapsed)
table_update_event.set()
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>If <code>task_total = 30</code> is less than the height of your terminal, it works perfectly. If it's greater (e.g. 100), then it cuts off the table and never prints the final result.</p>
|
<python><rich>
|
2024-06-30 18:58:25
| 1
| 7,148
|
aronchick
|
78,689,486
| 25,874,132
|
how to calculate this integral using scipy/numpy?
|
<p>I have the following question: <a href="https://i.sstatic.net/51XgyKMH.png" rel="nofollow noreferrer">enter image description here</a>.
It's written badly, but for those values of alpha and beta (0.32, 12) I need to find the minimal k at which the integral from -inf to inf of that expression is less than pi.</p>
<p>However, the code I tried didn't work: for the example I got the value 4 and not 15, so I don't know how to continue.</p>
<p>This is what I got with some help from ChatGPT; when the last print shows a positive value, that means I've met my condition, and it happens at k=2.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.integrate import quad
def a(i):
return (12 * i - 11) ** 0.32
def sinc(t):
if t == 0:
return 1.0
else:
return np.sin(t) / t
def p(x, k):
product = 1.0
for i in range(1, k + 1):
product *= sinc(x / a(i))
return product
def P(k):
integral, _ = quad(lambda x: p(x, k), -np.inf, np.inf)
return integral
for value in range(1,5):
print(f"P({value}) =", P(value))
print(f'pi - P({value}) =', np.pi - P(value))
### solution was 2
</code></pre>
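<p>As a side note, the manual <code>sinc</code> loop above can be replaced with NumPy's vectorized version — a hedged rewrite keeping the same <code>a(i) = (12i - 11)^0.32</code>. Beware that <code>np.sinc</code> is the <em>normalized</em> sinc, <code>sin(pi t)/(pi t)</code>, so the unnormalized <code>sin(t)/t</code> is <code>np.sinc(t / np.pi)</code>:</p>

```python
import numpy as np

# Vectorized integrand: product over i = 1..k of sin(x/a_i) / (x/a_i),
# with a_i = (beta*i - (beta-1))**alpha, matching a(i) in the loop version.
def p(x, k, alpha=0.32, beta=12):
    a = (beta * np.arange(1, k + 1) - (beta - 1)) ** alpha
    return np.prod(np.sinc((x / a) / np.pi))

print(p(0.0, 4))  # 1.0: every sinc factor equals 1 at x = 0
```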
|
<python><numpy><scipy>
|
2024-06-30 18:39:02
| 2
| 314
|
Nate3384
|
78,689,397
| 893,254
|
How to configure VS Code pytest when Python code is not in top level directory?
|
<p>I started a project with what I would describe as a "standard" Python project structure, whereby my code is organized into a set of <em>packages</em> which all live in the top level directory. In addition to this, there is a <code>tests</code> directory which contains <code>.py</code> files with functions which define the test cases.</p>
<p>I would now like to segregate this from another new project which I will add to the same repository.</p>
<p>To do this, I propose to create two top level directories <code>new_project</code> and <code>old_project</code> under which everything will be organized. (Given the names, you could imagine this is part of some kind of migration.)</p>
<p>I don't know how to make this work with pytest as it integrates with VS Code.</p>
<p>Here's the proposed project structure, as a MWE:</p>
<pre><code>new_project/
simple_package/
__init__.py
tests/
test_simple_package.py
old_project/
# copy & paste of the above
</code></pre>
<p>The files can contain minimal code to demonstrate the point.</p>
<pre><code>$ cat new_project/simple_package/__init__.py
a = 1
$ cat new_project/tests/test_simple_package.py
from simple_package import a
def test_a():
assert a == 1
</code></pre>
<p>I see the following error message:</p>
<ul>
<li>In the <code>pytest</code> <strong>Test Explorer</strong> sidebar (left hand side, beaker icon): <em>pytest Discovery Error [pytest-subdirectory-test] Show output to view error logs</em></li>
<li>In the output: <code>E ModuleNotFoundError: No module named 'simple_package'</code></li>
</ul>
<p>Is it possible to configure <code>pytest</code> to work with such a repository structure?</p>
<p>It would seem strange that this <strong>wouldn't</strong> work, because many projects would likely be a mix of different code written in different programming languages. Therefore it would seem strange if the VS Code <code>pytest</code> integration can only be made to work in the most simple case where all the Python packages are at the top level of the repository.</p>
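<p>One commonly suggested arrangement (hedged — the file placement is my own suggestion) is a <code>conftest.py</code> at each project root that puts that root on <code>sys.path</code>, so pytest can import <code>simple_package</code> no matter where discovery starts:</p>

```python
# new_project/conftest.py (hypothetical): pytest imports conftest.py files
# before collecting tests, so this runs early enough to fix the import path.
import os
import sys

sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
```

<p>Alternatively, with pytest ≥ 7 a similar effect can come from the built-in <code>pythonpath</code> ini option (e.g. <code>pythonpath = new_project old_project</code> in <code>pytest.ini</code>), without any <code>conftest.py</code> changes.</p>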
|
<python><visual-studio-code><pytest>
|
2024-06-30 18:02:31
| 2
| 18,579
|
user2138149
|
78,689,389
| 6,843,153
|
Attempt to debug streamlit on VSC raises error
|
<p>I have the following <code>launch.json</code>:</p>
<pre><code>{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python Debugger: Current File",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
},
{
"name": "debug streamlit",
"type": "debugpy",
"request": "launch",
"module": "streamlit",
"args": [
"run",
"$(file)"
]
}
]
}
</code></pre>
<p>But I get the following error when attempting to run VSC debug on Streamlit:</p>
<pre><code>(Venv) workdir$ cd /workdir ; /workdir/Venv/bin/python /home/.vscode/extensions/ms-python.debugpy-2024.6.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher 52265 -- -m streamlit run \$\(file\)
Usage: streamlit run [OPTIONS] TARGET [ARGS]...
Try 'streamlit run --help' for help.
Error: Streamlit requires raw Python (.py) files, but the provided file has no extension.
For more information, please see https://docs.streamlit.io
</code></pre>
<p>What do I have to change to make it work?</p>
|
<python><visual-studio-code><debugging><streamlit>
|
2024-06-30 17:58:34
| 1
| 5,505
|
HuLu ViCa
|
78,689,352
| 6,394,617
|
Why doesn't `randrange(start=100)` raise an error?
|
<p>In the <a href="https://docs.python.org/3/library/random.html#functions-for-integers" rel="nofollow noreferrer">docs</a> for <code>randrange()</code>, it states:</p>
<blockquote>
<p>Keyword arguments should not be used because they can be interpreted in unexpected ways. For example <code>randrange(start=100)</code> is interpreted as <code>randrange(0, 100, 1)</code>.</p>
</blockquote>
<p>If the signature is <code>random.randrange(start, stop[, step])</code>, why doesn't <code>randrange(start=100)</code> raise an error, since <code>stop</code> is not passed a value?</p>
<p>Why would <code>randrange(start=100)</code> be interpreted as <code>randrange(0, 100, 1)</code>?</p>
<p>I'm not trying to understand the design choice of whoever wrote the code, so much as understanding how it's even possible. I thought parameters without default values need to be passed arguments or else a TypeError would be raised.</p>
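<p>A sketch of how this is possible (mirroring the idea in CPython's implementation, not its exact source): <code>stop</code> does have a default — a sentinel — and when it is missing, the single provided value, even one spelled <code>start=100</code>, is reinterpreted as the stop:</p>

```python
_NO_STOP = object()  # sentinel standing in for the real default of `stop`

def randrange_like(start, stop=_NO_STOP, step=1):
    # With only one value given, it is treated as the *stop*, exactly as if
    # randrange_like(0, start, 1) had been called -- keyword or not.
    if stop is _NO_STOP:
        start, stop = 0, start
    return (start, stop, step)  # return the effective range instead of sampling

print(randrange_like(start=100))  # (0, 100, 1)
print(randrange_like(5, 100))     # (5, 100, 1)
```

<p>So no <code>TypeError</code> is raised because <code>stop</code> is never actually a required parameter; the docs describe the signature as <code>randrange(start, stop[, step])</code> only to convey the intended positional usage.</p>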
|
<python><parameter-passing>
|
2024-06-30 17:44:16
| 1
| 913
|
Joe
|
78,689,274
| 1,795,245
|
Yfinance history returns different result each time
|
<p>I use Yfinance to download historical prices and have found that the function returns different result each time. Is it an error or am I doing something wrong?</p>
<pre><code>import yfinance as yf
msft = yf.Ticker('AAK.ST')
data_1 = msft.history(period="max")
data_2 = msft.history(period="max")
</code></pre>
<p>The result differs every time I run the code and below is an example where the first two rows differs but the third is equal.</p>
<p><a href="https://i.sstatic.net/pkVnptfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pkVnptfg.png" alt="enter image description here" /></a></p>
<p>There are a lot of rows that differs:</p>
<p><a href="https://i.sstatic.net/gBHTSSIz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gBHTSSIz.png" alt="enter image description here" /></a></p>
<p>Any suggestions why, and how to handle this?</p>
|
<python><yfinance>
|
2024-06-30 17:09:17
| 1
| 649
|
Jonas
|
78,689,258
| 1,521,512
|
Time Limit Exceed on a UVA problem using Python, is this code that inefficient?
|
<p>Just for fun, I'm trying to solve <a href="https://onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=24&page=show_problem&problem=2019" rel="nofollow noreferrer">UVA 11078 (Open Credit System)</a>.
(Solutions can be checked on <a href="https://vjudge.net/problem/UVA-11078" rel="nofollow noreferrer">vjudge.net</a>)</p>
<p>The goal is to find a <em>maximum consecutive difference</em> for a given list of integers:</p>
<p>For example: given this list of four numbers: <code>7 10 3 1</code> the maximum consecutive difference is <code>9</code>, since <code>10 - 1 = 9</code>.</p>
<p>In the problem, first a number of testcases <code>T</code> is given, then the amount of numbers <code>n</code>, and then the individual numbers. This <strong>input</strong> (with <code>T = 2</code>, first <code>n = 4</code> and then <code>n = 2</code>) for example</p>
<pre class="lang-none prettyprint-override"><code>2
4
7
10
3
1
2
100
20
</code></pre>
<p>would result in two testcases, one of 4 numbers (<code>7 10 3 1</code>) and one of 2 numbers (<code>100 20</code>). Output of this should be:</p>
<pre class="lang-none prettyprint-override"><code>9
80
</code></pre>
<p>I implemented the following Python program, and I would think this would have complexity <strong>O(n)</strong> since it goes through each number only once, but still UVA is giving me <strong>Time Limit Exceeded</strong>.</p>
<pre><code>import math
T = int(input())
for _ in range(T):
n = int(input())
diff = -math.inf
score = int(input()) # Using the very first input as max and min
max = score
min = score
for i in range(1,n):
score = int(input())
if max - score > diff:
diff = max - score
if score > max:
max = score
if score < min:
min = score
print(diff)
</code></pre>
<p>How can I improve this solution? I have seen other solutions to this problem, but I'm somewhat confused as to why this solution would be <em>slow</em>...</p>
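<p>One thing I suspect (sketch, untested on the judge): per-call <code>input()</code> is comparatively slow in CPython, so reading all of stdin in one go is the usual fix for I/O-bound time limits even when the algorithm is already O(n):</p>

```python
import sys

def solve(data: str) -> str:
    it = iter(data.split())
    out = []
    for _ in range(int(next(it))):       # T testcases
        n = int(next(it))
        best = -float("inf")
        hi = int(next(it))               # highest score seen so far
        for _ in range(n - 1):
            s = int(next(it))
            best = max(best, hi - s)     # best "earlier max minus current"
            hi = max(hi, s)
        out.append(str(best))
    return "\n".join(out)

# On the judge: sys.stdout.write(solve(sys.stdin.read()) + "\n")
```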
|
<python><performance>
|
2024-06-30 17:03:15
| 1
| 427
|
dietervdf
|
78,689,216
| 1,089,239
|
BedrockEmbeddings - botocore.errorfactory.ModelTimeoutException
|
<p>I am trying to get vector embeddings at scale for documents.</p>
<ul>
<li>Importing, <code>from langchain_community.embeddings import BedrockEmbeddings</code> package.</li>
<li>Using <code>embeddings = BedrockEmbeddings( credentials_profile_name="default", region_name="us-east-1", model_id="amazon.titan-embed-text-v2:0" )</code> to embed documents</li>
<li>I have a function that embeds an array of documents (called <code>batch</code>) using <code>embeddings.embed_documents(batch)</code>. This works.</li>
<li>My batch size is 250. I.e., an array of 250 strings</li>
<li>I'm facing this error, every now and then:</li>
</ul>
<p><code>botocore.errorfactory.ModelTimeoutException: An error occurred (ModelTimeoutException) when calling the InvokeModel operation: Model has timed out in processing the request. Try your request again.</code></p>
<p><code>File "...\site-packages\langchain_community\embeddings\bedrock.py", line 150, in _embedding_func raise ValueError(f"Error raised by inference endpoint: {e}") ValueError: Error raised by inference endpoint: An error occurred (ModelTimeoutException) when calling the InvokeModel operation: Model has timed out in processing the request. Try your request again. </code></p>
<p>Does anyone have any pointers, or know whether <code>BedrockEmbeddings</code> provides any function that retries automatically after a <code>timed out</code> error?</p>
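<p>In case it helps frame the question, this is the generic exponential-backoff wrapper I am considering as a workaround (untested against Bedrock; <code>embeddings</code> and <code>batch</code> are the objects from above):</p>

```python
import time

def with_retries(fn, attempts=4, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(); on a retriable error, sleep with exponential backoff and retry."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise                      # out of attempts: re-raise the error
            time.sleep(base_delay * 2 ** attempt)

# usage sketch:
# vectors = with_retries(lambda: embeddings.embed_documents(batch))
```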
|
<python><langchain><retrieval-augmented-generation><amazon-bedrock>
|
2024-06-30 16:45:59
| 2
| 7,208
|
Benny
|
78,689,213
| 6,758,654
|
Polars cumulative count over sequential dates
|
<p>Here's some sample data</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"date": [
"2024-08-01",
"2024-08-02",
"2024-08-03",
"2024-08-04",
"2024-08-04",
"2024-08-05",
"2024-08-06",
"2024-08-08",
"2024-08-09",
],
"type": ["A", "A", "A", "A", "B", "B", "B", "A", "A"],
}
).with_columns(pl.col("date").str.to_date())
</code></pre>
<p>And my <em>desired</em> output would look something like this</p>
<pre><code>shape: (9, 3)
┌────────────┬──────┬───────────────┐
│ date ┆ type ┆ days_in_a_row │
│ --- ┆ --- ┆ --- │
│ date ┆ str ┆ i64 │
╞════════════╪══════╪═══════════════╡
│ 2024-08-01 ┆ A ┆ 1 │
│ 2024-08-02 ┆ A ┆ 2 │
│ 2024-08-03 ┆ A ┆ 3 │
│ 2024-08-04 ┆ A ┆ 4 │
│ 2024-08-04 ┆ B ┆ 1 │
│ 2024-08-05 ┆ B ┆ 2 │
│ 2024-08-06 ┆ B ┆ 3 │
│ 2024-08-08 ┆ A ┆ 1 │
│ 2024-08-09 ┆ A ┆ 2 │
└────────────┴──────┴───────────────┘
</code></pre>
<p>Where my <code>days_in_a_row</code> counter gets reset upon a date gap greater than 1 day.</p>
<p>What I've tried so far</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(days_in_a_row=pl.cum_count("date").over("type"))
</code></pre>
<p>Which gives me</p>
<pre><code>shape: (9, 3)
┌────────────┬──────┬───────────────┐
│ date ┆ type ┆ days_in_a_row │
│ --- ┆ --- ┆ --- │
│ date ┆ str ┆ u32 │
╞════════════╪══════╪═══════════════╡
│ 2024-08-01 ┆ A ┆ 1 │
│ 2024-08-02 ┆ A ┆ 2 │
│ 2024-08-03 ┆ A ┆ 3 │
│ 2024-08-04 ┆ A ┆ 4 │
│ 2024-08-04 ┆ B ┆ 1 │
│ 2024-08-05 ┆ B ┆ 2 │
│ 2024-08-06 ┆ B ┆ 3 │
│ 2024-08-08 ┆ A ┆ 5 │
│ 2024-08-09 ┆ A ┆ 6 │
└────────────┴──────┴───────────────┘
</code></pre>
<p>Which is <strong>not</strong> resetting after the gap. I can't quite nail this one down.</p>
<p>I've also tried variations with</p>
<pre class="lang-py prettyprint-override"><code>df
.with_columns(date_gap=pl.col("date").diff().over("type"))
.with_columns(days_in_a_row=(pl.cum_count("date").over("date_gap", "type")))
</code></pre>
<p>Which gets <em>closer</em>, but it still ends up not resetting where I'd want it</p>
<pre><code>shape: (9, 4)
┌────────────┬──────┬──────────────┬───────────────┐
│ date ┆ type ┆ date_gap ┆ days_in_a_row │
│ --- ┆ --- ┆ --- ┆ --- │
│ date ┆ str ┆ duration[ms] ┆ u32 │
╞════════════╪══════╪══════════════╪═══════════════╡
│ 2024-08-01 ┆ A ┆ null ┆ 1 │
│ 2024-08-02 ┆ A ┆ 1d ┆ 1 │
│ 2024-08-03 ┆ A ┆ 1d ┆ 2 │
│ 2024-08-04 ┆ A ┆ 1d ┆ 3 │
│ 2024-08-04 ┆ B ┆ null ┆ 1 │
│ 2024-08-05 ┆ B ┆ 1d ┆ 1 │
│ 2024-08-06 ┆ B ┆ 1d ┆ 2 │
│ 2024-08-08 ┆ A ┆ 4d ┆ 1 │
│ 2024-08-09 ┆ A ┆ 1d ┆ 4 │
└────────────┴──────┴──────────────┴───────────────┘
</code></pre>
|
<python><python-polars><cumulative-sum>
|
2024-06-30 16:45:38
| 1
| 3,643
|
Will Gordon
|
78,689,124
| 322,909
|
What is the correct maxmem parameter value in Python's hashlib.scrypt method?
|
<p>I am trying to use Python's <a href="https://docs.python.org/3/library/hashlib.html#hashlib.scrypt" rel="nofollow noreferrer"><code>hashlib.scrypt</code></a> to hash passwords. According to <a href="https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html" rel="nofollow noreferrer">OWASP</a> the recommended settings for scrypt are</p>
<blockquote>
<p>... use scrypt with a minimum CPU/memory
cost parameter of (2^17), a minimum block size of 8 (1024 bytes),
and a parallelization parameter of 1.</p>
</blockquote>
<p><a href="https://cryptobook.nakov.com/mac-and-key-derivation/scrypt" rel="nofollow noreferrer">Several</a> <a href="https://stackoverflow.com/questions/11126315">places</a> recommend the formula <code>128 * n * r</code> as the value for <code>maxmem</code>.</p>
<p>However using this formula produces the following error:</p>
<pre><code>from hashlib import scrypt
scrypt(b'password', salt=b'775186574b308535c9f0c1bedec71374', n=(2 ** 17), r=8, p=1, maxmem=(128 * (2 ** 17) * 8))
ValueError: [digital envelope routines: EVP_PBE_scrypt] memory limit exceeded
</code></pre>
<p>Through trial and error I have discovered that using <code>128 * n * r + (2 ** 10 + 2 ** 11)</code> for <code>maxmem</code> is the exact value to resolve the error. What is the proper way to compute the value of <code>maxmem</code>?</p>
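<p>My current working theory, which is an assumption inferred from the empirical 3072-byte overhead rather than from documentation: OpenSSL's <code>EVP_PBE_scrypt</code> appears to need <code>128*r*(n + 2)</code> bytes for the V array plus <code>128*r*p</code> bytes for B, which would make <code>128 * r * (n + p + 2)</code> a sufficient <code>maxmem</code>:</p>

```python
from hashlib import scrypt

def scrypt_maxmem(n: int, r: int, p: int) -> int:
    # Assumed OpenSSL requirement: 128*r*(n + 2) bytes for V plus 128*r*p for B.
    # For r=8, p=1 the overhead beyond 128*n*r is 3072 bytes (2**10 + 2**11),
    # matching the trial-and-error value above.
    return 128 * r * (n + p + 2)

# Smaller n than OWASP's 2**17 just to keep this demo quick; the formula is the point.
n, r, p = 2 ** 14, 8, 1
key = scrypt(b"password", salt=b"775186574b308535c9f0c1bedec71374",
             n=n, r=r, p=p, maxmem=scrypt_maxmem(n, r, p), dklen=32)
```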
|
<python><encryption><python-3.6><hashlib><scrypt>
|
2024-06-30 16:08:45
| 0
| 13,809
|
John
|
78,689,123
| 9,640,238
|
Value of nested dict remains unchanged in loop
|
<p>Consider the following:</p>
<pre class="lang-py prettyprint-override"><code>template = {"first": "John", "last": "Doe", "numbers": {"age": 30}}
people = []
for age in range(30, 41):
new_dict = template.copy()
new_dict["numbers"]["age"] = age
people.append(new_dict)
people
</code></pre>
<p>This will output:</p>
<pre><code>[{'first': 'John', 'last': 'Doe', 'numbers': {'age': 40}},
{'first': 'John', 'last': 'Doe', 'numbers': {'age': 40}},
{'first': 'John', 'last': 'Doe', 'numbers': {'age': 40}},
{'first': 'John', 'last': 'Doe', 'numbers': {'age': 40}},
{'first': 'John', 'last': 'Doe', 'numbers': {'age': 40}},
{'first': 'John', 'last': 'Doe', 'numbers': {'age': 40}},
{'first': 'John', 'last': 'Doe', 'numbers': {'age': 40}},
{'first': 'John', 'last': 'Doe', 'numbers': {'age': 40}},
{'first': 'John', 'last': 'Doe', 'numbers': {'age': 40}},
{'first': 'John', 'last': 'Doe', 'numbers': {'age': 40}},
{'first': 'John', 'last': 'Doe', 'numbers': {'age': 40}}]
</code></pre>
<p>Instead of <code>age</code> taking each value from the range in turn. However, if the <code>age</code> key is not nested in <code>numbers</code>, then it will get each value.</p>
<p>Why is that? What should I do instead?</p>
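<p>For what it's worth, the behaviour disappears if the nested dict is copied as well; <code>dict.copy()</code> is shallow, so every copy shares the one inner <code>numbers</code> dict (sketch):</p>

```python
import copy

template = {"first": "John", "last": "Doe", "numbers": {"age": 30}}
people = []
for age in range(30, 41):
    new_dict = copy.deepcopy(template)  # copies the nested "numbers" dict too
    new_dict["numbers"]["age"] = age
    people.append(new_dict)
```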
|
<python><dictionary><nested>
|
2024-06-30 16:08:45
| 0
| 2,690
|
mrgou
|
78,688,830
| 2,521,423
|
How to avoid import issues when Sphinx docs are generated from a different directory to parent app
|
<p>I want to autogenerate documentation for a python project. I managed to get a barebones pipeline working with sphinx, but the output is not very pretty and I would appreciate some guidance on how to make the output cleaner and the inputs more maintainable.</p>
<p>My project is organized like so:</p>
<pre><code>docs
|-- build
|-- index.rst
|-- conf.py
app
|-- main_app.py #entry point that intializes everything, internal imports in .py files are relative to here
views
|-- view1.py
|-- view2.py
|-- ...
controllers
|-- controller1.py
|-- ...
models
|-- model1.py
|-- ...
utils
|-- ...
plugins
|-- ...
</code></pre>
<p><strong>Question 1</strong>
In my conf.py file, I had to mock the imports to get it to run, like so:</p>
<pre><code>import os
import sys
sys.path.insert(0, os.path.abspath('..'))
autodoc_mock_imports = ['utils', 'models','PySide6', 'controllers', 'views', 'configs']
</code></pre>
<p>This is, I believe, because the <code>import</code> statement in the .py files are written relative to the location of <code>main_app.py</code>, whereas the docs are generated from inside the <code>docs</code> folder. While this works, it feels like a hack that will come back to bite me at some point. Is there a more graceful way to handle this issue, without having to mock the imports?</p>
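<p>The alternative I am experimenting with (sketch; the path is an assumption based on the layout above) is to put the <code>app</code> directory itself on <code>sys.path</code>, so imports resolve the same way they do from <code>main_app.py</code>, keeping mocks only for genuine third-party packages:</p>

```python
# docs/conf.py (sketch)
import os
import sys

# Make "import views", "import models", etc. resolve as they do from main_app.py
sys.path.insert(0, os.path.abspath('../app'))

# Only packages not installed in the docs environment still need mocking
autodoc_mock_imports = ['PySide6']
```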
|
<python><mocking><python-sphinx>
|
2024-06-30 14:11:20
| 0
| 1,488
|
KBriggs
|
78,688,698
| 631,619
|
Skipping playwright test gives pytest is not defined
|
<p>I can run my python playwright tests ok</p>
<p>However I want to skip one</p>
<p>I added <code>@pytest.mark.skip</code> but I get the error <code>'pytest' is not defined</code></p>
<pre><code>import re
from playwright.sync_api import Page, expect
def test_has_title(page: Page):
page.goto("https://bbc.com")
# Expect a title "to contain" a substring.
expect(page).to_have_title(re.compile("BBC Home"))
@pytest.mark.skip('skip')
def test_get_started_link(page: Page):
page.goto("https://bbc.com")
# Click the get started link.
page.get_by_role("link", name="Home").click()
# Expects page to have a heading with the name of Installation.
expect(page.get_by_role("heading", name="BBC")).to_be_visible()
</code></pre>
<p>.......</p>
<pre><code>==================================== ERRORS =====================================
_________________________ ERROR collecting test_bbc.py __________________________
test_bbc.py:10: in <module>
@pytest.mark.skip('skip')
E NameError: name 'pytest' is not defined
============================ short test summary info ============================
ERROR test_bbc.py - NameError: name 'pytest' is not defined
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 0.03s ================================
</code></pre>
|
<python><pytest><playwright>
|
2024-06-30 13:19:16
| 1
| 97,135
|
Michael Durrant
|
78,688,583
| 898,042
|
venv is activated but pip still installs to the default Python and the library cannot be imported
|
<p>In VS Code, typing <code>which python</code> in the terminal shows the default path:</p>
<pre><code>C:\Users\erjan\AppData\Local\Programs\Python\Python311\python.exe
(my_venv)
</code></pre>
<p>But <code>(my_venv)</code> means my venv is active. I ran <code>pip install transformers</code>, yet the code below still fails because the library cannot be found.</p>
<pre><code># Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("summarization", model="facebook/bart-large-cnn")
</code></pre>
<p>How to fix it?</p>
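<p>What I have been using to debug it (sketch): print where the running interpreter and its site-packages actually live, then install with <code>python -m pip</code> so pip is guaranteed to target that interpreter:</p>

```python
import sys
import sysconfig

print(sys.executable)                    # the interpreter actually running
print(sysconfig.get_paths()["purelib"])  # where `python -m pip install` puts packages
# If these point outside my_venv, run the venv's interpreter explicitly, e.g.:
#   my_venv\Scripts\python.exe -m pip install transformers
```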
|
<python><pip><virtualenv><python-venv>
|
2024-06-30 12:30:08
| 0
| 24,573
|
ERJAN
|
78,688,455
| 498,584
|
Can one inherit a ModelSerializer and Merge Models
|
<p>I am trying to inherit djoser's UserCreatePasswordRetypeSerializer</p>
<p>Djoser's ModelSerializer uses the User model.
The child serializer that I am implementing represents a different model, Customer.</p>
<p>So could one do:</p>
<pre><code>class CustomerSerializer(UserCreatePasswordRetypeSerializer):
class Meta:
model=Customer
fields = ('customer_name',)+UserCreatePasswordRetypeSerializer.Meta.fields
</code></pre>
<p>When I do this,
Django complains because the Customer model does not have the fields that User has.</p>
<p>This is the Customer model, which the serializer above does not exactly represent:</p>
<pre><code> class Customer(models.Model):
customer_name = models.CharField(max_length=255)
user = models.ForeignKey(User,on_delete=models.CASCADE,related_name='user')
</code></pre>
<p>Would passing User as a parent class solve this issue?</p>
<pre><code>class Customer(User):
customer_name = models.CharField(max_length=255)
</code></pre>
<p>Would this work with inherited ModelSerializers?</p>
<p>As djoser is a third-party app, I cannot change the UserCreatePasswordRetypeSerializer to fit my needs; I can only inherit it.</p>
<p>Alternatively, this is the second approach I was thinking of:</p>
<p>create something like,</p>
<pre><code>class CustomerSerializer(serializers.ModelSerializer):
user=UserCreatePasswordRetypeSerializer
class Meta:
fields = ('customer_name','user')
</code></pre>
<p>Since this is a nested serializer, I will have to write the create method myself.</p>
<p>The issue is how to call UserCreatePasswordRetypeSerializer's create from the create method that I implement. I need to use its create method because it handles sending emails and all the serializer's other features, so I cannot simply pop the user's validated data and create the user object in CustomerSerializer's create method like so:</p>
<pre><code>user_validated_data=validated_data.pop('user')
User.objects.create(**user_validated_data).
</code></pre>
<p>I have to call the create method of UserCreatePasswordRetypeSerializer, but since I am not inheriting that serializer I cannot just call <code>super().create(user_validated_data)</code>.</p>
<p>Again, I cannot inherit it, because the child and parent serializers represent different models.</p>
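<p>To make the delegation idea concrete, here is a plain-Python sketch of the pattern I am aiming for in <code>CustomerSerializer.create</code>. The stub stands in for djoser's serializer and the ORM is omitted, so the names are illustrative only:</p>

```python
class ParentSerializerStub:
    """Stand-in for UserCreatePasswordRetypeSerializer: its save()/create()
    carries the extra behaviour (emails, password checks) I want to reuse."""
    def __init__(self, data):
        self._data = data
    def is_valid(self, raise_exception=False):
        return True
    def save(self):
        # djoser's side effects would happen here
        return dict(self._data, created_by="parent-serializer")

def create_customer(validated_data):
    # Real code would instantiate
    # UserCreatePasswordRetypeSerializer(data=user_data, context=self.context)
    user_data = validated_data.pop("user")
    user_ser = ParentSerializerStub(data=user_data)
    user_ser.is_valid(raise_exception=True)
    user = user_ser.save()
    return {"customer_name": validated_data["customer_name"], "user": user}
```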
|
<python><django><django-rest-framework><django-serializer><djoser>
|
2024-06-30 11:37:29
| 1
| 1,723
|
Evren Bingøl
|
78,688,424
| 13,977,239
|
algorithm to detect pools of 0s in a matrix
|
<p>I'm writing an algorithm that detects regions of contiguous empty cells that are completely surrounded (orthogonally) by filled cells and do not extend to the edge of the grid. Let's call such regions "pools".</p>
<p>As you can see in the below visualization, there are three pool cells – <code>(1, 1)</code>, <code>(1, 3)</code>, and <code>(2, 2)</code> – forming the one and only pool in the grid. That is, <strong>both orthogonally and diagonally adjacent pool cells should be considered part of a single pool</strong>.</p>
<p><a href="https://i.sstatic.net/26zRzgNM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26zRzgNM.png" alt="" /></a></p>
<p>I have already implemented an algorithm that partially solves the problem: it correctly identifies pool cells via an <strong>orthogonal</strong> DFS, i.e. it never considers diagonal neighbors.</p>
<pre class="lang-py prettyprint-override"><code>class Cell:
x: int
y: int
is_filled: bool
def __init__(self, x, y, is_filled):
self.x = x
self.y = y
self.is_filled = is_filled
def get_neighbor(self, grid, direction):
if direction == "LEFT" and self.y > 0:
return Cell(self.x, self.y - 1, grid[self.x][self.y - 1] == 1)
if direction == "TOP" and self.x > 0:
return Cell(self.x - 1, self.y, grid[self.x - 1][self.y] == 1)
if direction == "RIGHT" and self.y < len(grid[0]) - 1:
return Cell(self.x, self.y + 1, grid[self.x][self.y + 1] == 1)
if direction == "BOTTOM" and self.x < len(grid) - 1:
return Cell(self.x + 1, self.y, grid[self.x + 1][self.y] == 1)
return None
def iterate_pool_cells(grid):
def is_pool(cell):
nonlocal visited
nonlocal edge_reached
if cell is None: # Reached grid boundary, not a pool
edge_reached = True
return False
if (cell.x, cell.y) in visited or cell.is_filled: # Already visited or is land
return True
visited.add((cell.x, cell.y)) # Mark as visited
is_enclosed = True
is_enclosed = is_pool(cell.get_neighbor(grid, "LEFT")) and is_enclosed
is_enclosed = is_pool(cell.get_neighbor(grid, "TOP")) and is_enclosed
is_enclosed = is_pool(cell.get_neighbor(grid, "RIGHT")) and is_enclosed
is_enclosed = is_pool(cell.get_neighbor(grid, "BOTTOM")) and is_enclosed
return is_enclosed and not edge_reached
visited = set()
edge_reached = False
for x in range(len(grid)):
for y in range(len(grid[0])):
cell = Cell(x, y, grid[x][y] == 1)
if (x, y) not in visited and not cell.is_filled:
edge_reached = False
if is_pool(cell) and not edge_reached:
yield (x, y)
</code></pre>
<p>The problem lies in the second half of the algorithm, i.e. the iterator. I am aiming to only yield one cell per pool (it could be any cell in the pool, as long as it's just one). However, right now the iterator will yield all three of the cells from the example, because it thinks of them as separate, unconnected pools.</p>
<p>I want the three cells in the example to be viewed as belonging to the same pool, such that only one cell is yielded per pool.</p>
<p>Can someone please steer me in the right direction on how to solve this? Can I maybe do this after the DFS is over, by clustering diagonally adjacent pool cells?</p>
<p>Here's a test case involving the grid from the image:</p>
<pre class="lang-py prettyprint-override"><code>grid = [
[ 0, 1, 0, 1, 0 ],
[ 1, 0, 1, 0, 1 ],
[ 1, 1, 0, 1, 1 ],
[ 1, 0, 1, 0, 1 ],
[ 0, 0, 1, 0, 0 ],
[ 1, 0, 1, 0, 1 ],
]
for pool_cell in iterate_pool_cells(grid):
print(f"Pool starts at cell: {pool_cell}")
</code></pre>
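<p>In case it clarifies the goal, this is the kind of two-phase approach I am wondering about (sketch): first an orthogonal flood fill from the border to identify pool cells, then an 8-connected clustering pass over them:</p>

```python
from collections import deque

def pool_components(grid):
    """Return one sorted list of cells per pool, grouping pool cells 8-connectedly."""
    rows, cols = len(grid), len(grid[0])
    # Phase 1: orthogonal flood fill from every empty border cell; anything
    # reached this way touches the edge, so it cannot belong to a pool.
    q = deque(
        (x, y)
        for x in range(rows)
        for y in range(cols)
        if grid[x][y] == 0 and (x in (0, rows - 1) or y in (0, cols - 1))
    )
    open_cells = set(q)
    while q:
        x, y = q.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (x + dx, y + dy)
            if (0 <= n[0] < rows and 0 <= n[1] < cols
                    and grid[n[0]][n[1]] == 0 and n not in open_cells):
                open_cells.add(n)
                q.append(n)
    # Phase 2: remaining empty cells are pool cells; cluster them using all
    # eight neighbours so diagonal contact joins cells into one pool.
    pool = {
        (x, y)
        for x in range(rows)
        for y in range(cols)
        if grid[x][y] == 0 and (x, y) not in open_cells
    }
    components, seen = [], set()
    for start in sorted(pool):
        if start in seen:
            continue
        comp, q = [], deque([start])
        seen.add(start)
        while q:
            x, y = q.popleft()
            comp.append((x, y))
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (x + dx, y + dy)
                    if n in pool and n not in seen:
                        seen.add(n)
                        q.append(n)
        components.append(sorted(comp))
    return components
```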
|
<python><arrays><algorithm><matrix><multidimensional-array>
|
2024-06-30 11:22:58
| 1
| 575
|
chocojunkie
|
78,688,289
| 5,029,589
|
Retrieve all records from Elasticsearch
|
<p>I have an index on Elasticsearch that has more than 100K records.</p>
<p>The format of the index data is:</p>
<pre><code>{
"_index": "isp_names",
"_id": "XX.XXX.XXX.X",
"_version": 1,
"_score": 1,
"_source": {
"XX.XXX.XXX.X": "Some Random ISP Name "
}
}
</code></pre>
<p>I am trying to get all records from Elasticsearch using Python. My goal is that when the server starts, all the records from Elasticsearch will be read and stored in a dict in memory cache (ip_address:isp_name).</p>
<p>I tried using the scroll API, but I am not sure how to use it, since the ES guide (<a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/scroll-api.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/elasticsearch/reference/current/scroll-api.html</a>) says scroll is no longer recommended. The Elasticsearch version I am using is 8.12.1.</p>
<p>Any pointers appreciated.</p>
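<p>The direction I am currently looking at (sketch): <code>search_after</code> pagination, which the docs suggest instead of scroll. The transport is stubbed out below so only the paging logic is shown; my assumption is that with <code>elasticsearch-py</code>, <code>fetch_page</code> would wrap <code>es.search(index=..., size=size, sort=[{"_id": "asc"}], search_after=search_after)</code> and return <code>resp["hits"]["hits"]</code>:</p>

```python
def build_cache(fetch_page, page_size=1000):
    """Page through all hits with search_after and build {ip_address: isp_name}.

    fetch_page(search_after, size) must return a list of Elasticsearch-shaped
    hits: dicts with "_id", "_source" and "sort" keys.
    """
    cache, after = {}, None
    while True:
        hits = fetch_page(after, page_size)
        if not hits:
            return cache
        for hit in hits:
            ip = hit["_id"]
            cache[ip] = hit["_source"].get(ip)  # source key is the IP itself
        after = hits[-1]["sort"]                # resume from the last sort value
```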
|
<python><elasticsearch>
|
2024-06-30 10:22:07
| 1
| 2,174
|
arpit joshi
|
78,688,223
| 6,597,296
|
Communicating with a Kafka server from a Python app using the Twisted framework
|
<p>I have an app written using the Twisted framework that communicates various data (mostly log entries in JSON format) to a variety of logging servers using output plug-ins. I would like to make it able to send the data to a Kafka server too, but I am hitting a problem that I don't know how to solve.</p>
<p>If I send data to the Kafka server in a straightforward way using Python and the <code>kafka-python</code> module, everything works just fine:</p>
<pre class="lang-py prettyprint-override"><code>from json import dumps
from kafka import KafkaProducer
site = '<KAFKA SERVER>'
port = 9092
username = '<USERNAME>'
password = '<PASSWORD>'
topic = 'test'
producer = KafkaProducer(
bootstrap_servers='{}:{}'.format(site, port),
value_serializer=lambda v: bytes(str(dumps(v)).encode('utf-8')),
sasl_mechanism='SCRAM-SHA-256',
security_protocol='SASL_SSL',
sasl_plain_username=username,
sasl_plain_password=password
)
event = {
'message': 'Test message'
}
try:
producer.send(topic, event)
producer.flush()
print('Message sent.')
except Exception as e:
print('Error producing message: {}'.format(e))
finally:
producer.close()
</code></pre>
<p>However, if I try to send it from my actual Twisted app, using pretty much the same code, it hangs:</p>
<pre class="lang-py prettyprint-override"><code>from json import dumps
from core import output
from kafka import KafkaProducer
from twisted.python.log import msg
class Output(output.Output):
def start(self):
site = '<KAFKA SERVER>'
port = 9092
username = '<USERNAME>'
password = '<PASSWORD>'
self.topic = 'test'
self.producer = KafkaProducer(
bootstrap_servers='{}:{}'.format(site, port),
value_serializer=lambda v: bytes(str(dumps(v)).encode('utf-8')),
sasl_mechanism='SCRAM-SHA-256',
security_protocol='SASL_SSL',
sasl_plain_username=username,
sasl_plain_password=password
)
def stop(self):
self.producer.flush()
self.producer.close()
def write(self, event):
try:
self.producer.send(self.topic, event)
self.producer.flush()
except Exception as e:
msg('Kafka error: {}'.format(e))
</code></pre>
<p>(This won't run out-of-the box; it's just the plug-in and uses a generic <code>Output</code> class defined elsewhere.)</p>
<p>In particular, it hangs at the <code>self.producer.send(self.topic, event)</code> line.</p>
<p>I think that the problem comes from the fact that the Kafka producer in <code>kafka-python</code> is synchronous (blocking) and Twisted requires asynchronous (non-blocking) code. There is an asynchronous Kafka module, called <code>afkak</code> - but it doesn't seem to provide authentication with the Kafka server, so it is not suitable for my needs.</p>
<p>The way I understand it, the way to get around such problems in Twisted is to use <em>deferreds</em>. However, I have been unable to understand how exactly to do it. If I rewrite the <code>write</code> method like this</p>
<pre class="lang-py prettyprint-override"><code> def write(self, event):
d = threads.deferToThread(self.postentry, event)
d.addCallback(self.postentryCallback)
return d
def postentryCallback(self):
reactor.stop()
def postentry(self, event):
try:
self.producer.send(self.topic, event)
self.producer.flush()
except Exception as e:
msg('Kafka error: {}'.format(e))
</code></pre>
<p>it no longer hangs when trying to send data to the Kafka server, but it hangs when the application terminates, and nothing is sent to the Kafka server anyway (which I can verify with a separate consumer script written in pure Python).</p>
<p>Any ideas what I am doing wrong and how to fix it?</p>
<p>Twisted has an asynchronous implementation of some typical otherwise blocking code, like making GET/POST requests to a web server - but Kafka is not a web server, it runs on port 9092, so I don't think that I can use that.</p>
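<p>To illustrate the shape I think I need (stdlib sketch with a stand-in for the blocking producer, since I cannot post a runnable Twisted example): offload the blocking send to a single worker thread, and neither wait for the result nor stop anything from the callback:</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor

class BlockingProducer:
    """Stand-in for the blocking kafka-python producer."""
    def __init__(self):
        self.sent = []
    def send(self, topic, event):
        time.sleep(0.01)               # simulate network I/O
        self.sent.append((topic, event))
    def flush(self):
        pass

executor = ThreadPoolExecutor(max_workers=1)  # one worker keeps sends ordered
producer = BlockingProducer()

def post(event):
    producer.send("test", event)
    producer.flush()

def write(event):
    # In Twisted this would be threads.deferToThread(post, event); the key
    # point is the same: do NOT block on the result, and do NOT call
    # reactor.stop() in the callback.
    return executor.submit(post, event)

fut = write({"message": "hi"})
fut.result()                 # only this demo waits; the app would just return
executor.shutdown(wait=True)
```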
|
<python><apache-kafka><twisted>
|
2024-06-30 09:51:32
| 1
| 578
|
bontchev
|
78,688,141
| 15,358,800
|
How to choose dataset_text_field in SFTTrainer hugging face for my LLM model
|
<p>Note: I am new to LLMs.</p>
<h2>Background of my problem</h2>
<p>I am trying to train an LLM, using Llama 3, on a Stack Overflow <code>c</code>-language dataset.</p>
<pre><code>LLm - meta-llama/Meta-Llama-3-8B
Dataset - Mxode/StackOverflow-QA-C-Language-40k
</code></pre>
<p>My dataset structure looks like so</p>
<pre><code>DatasetDict({
train: Dataset({
features: ['question', 'answer'],
num_rows: 40649
})
})
</code></pre>
<h2>Why is dataset_text_field important?</h2>
<p>This field is crucial, as it determines which column the trainer picks as the text to train the model on.</p>
<pre><code>trainer = SFTTrainer(
model=model,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
peft_config=peft_config,
dataset_text_field="question", # Specify the text field in the dataset <<<<<-----
max_seq_length=4096,
tokenizer=tokenizer,
args=training_arguments,
)
</code></pre>
<p>I am assuming that if I set it to <code>question</code>, the trainer will pick the questions column and the model will learn to answer when a user asks through a prompt.</p>
<p>Is my assumption right? I trained my model for hours on <code>question</code>, but when I inspected the responses, most of them looked like question text rather than answers.</p>
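<p>What I am now trying instead (sketch; the prompt template is my own invention, not something the dataset prescribes): build a single <code>text</code> column containing the question <em>and</em> its answer, and point <code>dataset_text_field</code> at that, so the model sees completions rather than questions alone:</p>

```python
def format_example(example):
    # Merge prompt and response into the one training string the trainer reads
    return {
        "text": f"### Question:\n{example['question']}\n\n### Answer:\n{example['answer']}"
    }

# With the HF datasets library:
# dataset = dataset.map(format_example)   # then dataset_text_field="text"
```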
|
<python><large-language-model><huggingface><huggingface-datasets>
|
2024-06-30 09:25:57
| 1
| 4,891
|
Bhargav
|
78,687,946
| 6,243,129
|
How to optimize PyTorch and Ultralytics Yolo code to utilize GPU?
|
<p>I am working on a project which involves object detection and tracking. For object detection I am using <code>yolov8</code>, and for tracking I am using the <code>SORT</code> tracker. After running the code below, my GPU usage is always under 10% and CPU usage is always more than 40%. I have installed <code>cuda</code> and <code>cudnn</code>, and have installed <code>torch</code> with <code>cuda</code> support. I have also compiled <code>opencv</code> with <code>cuda</code> support. I am using an <code>RTX 4060 ti</code>, but it looks like it's not being used.</p>
<p>Is there a way to further optimize the below code so that all of the work is handled by GPU and not CPU?</p>
<pre><code>from src.sort import *
import cv2
import time
import torch
import numpy as np
from ultralytics import YOLO
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f"Using device: {device}")
sort_tracker = Sort(max_age=20, min_hits=2, iou_threshold=0.05)
model = YOLO('yolov8s.pt').to(device)
cap = cv2.VideoCapture(0)
while True:
ret, frame = cap.read()
if not ret:
print("**No frame received**")
continue
results = model(frame)
dets_to_sort = np.empty((0, 6))
for result in results:
for obj in result.boxes:
bbox = obj.xyxy[0].cpu().numpy().astype(int)
x1, y1, x2, y2 = bbox
conf = obj.conf.item()
class_id = int(obj.cls.item())
dets_to_sort = np.vstack((dets_to_sort, np.array([x1, y1, x2, y2, conf, class_id])))
tracked_dets = sort_tracker.update(dets_to_sort)
for det in tracked_dets:
x1, y1, x2, y2 = [int(i) for i in det[:4]]
track_id = int(det[8]) if det[8] is not None else 0
class_id = int(det[4])
cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 4)
cv2.putText(frame, f"{track_id}", (x1, y1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3)
frame = cv2.resize(frame, (800, int(frame.shape[0] * 800 / frame.shape[1])), interpolation=cv2.INTER_NEAREST)
cv2.imshow("Frame", frame)
key = cv2.waitKey(1)
if key == ord("q"):
break
if key == ord("p"):
cv2.waitKey(-1)
cap.release()
cv2.destroyAllWindows()
</code></pre>
|
<python><machine-learning><pytorch><gpu><yolov8>
|
2024-06-30 07:43:52
| 2
| 7,576
|
S Andrew
|
78,687,879
| 10,855,529
|
Standard way to use a udf in polars
|
<pre><code>def udf(row):
print(row)
print(row[0])
print(row[1])
return row
df = pl.DataFrame({'a': [1], 'b': [2]})
df = df.map_rows(udf)
</code></pre>
<p>gives output,</p>
<pre><code>(1, 2)
1
2
</code></pre>
<p>but I would like to use the <code>[]</code> notation with column names. Is there a specific reason that the row comes as a tuple by default? When I use</p>
<pre><code>def udf(row):
print(row['a'])
print(row['b'])
return row
df = pl.DataFrame({'a': [1], 'b': [2]})
df = df.map_rows(udf)
</code></pre>
<p>I get</p>
<pre><code>TypeError: tuple indices must be integers or slices, not str
</code></pre>
<p>how do I make the <code>[]</code> notation work for custom udfs?</p>
|
<python><python-polars>
|
2024-06-30 07:08:25
| 1
| 3,833
|
apostofes
|
78,687,580
| 395,857
|
"RuntimeError: BlobWriter not loaded" error when exporting a PyTorch model to CoreML. How to fix it?
|
<p>I get a "RuntimeError: BlobWriter not loaded" error when exporting a PyTorch model to CoreML. How to fix it?</p>
<p>Same issue with Python 3.11 and Python 3.10. Same issue with torch 2.3.1 and 2.2.0. Tested on Windows 10.</p>
<p>Export script:</p>
<pre><code># -*- coding: utf-8 -*-
"""Core ML Export
pip install transformers torch coremltools nltk
"""
import os
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch
import torch.nn as nn
import nltk
import coremltools as ct
nltk.download('punkt')
# Load the model and tokenizer
model_path = os.path.join('model')
model = AutoModelForTokenClassification.from_pretrained(model_path, local_files_only=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, local_files_only=True)
# Modify the model's forward method to return a tuple
class ModifiedModel(nn.Module):
def __init__(self, model):
super(ModifiedModel, self).__init__()
self.model = model
self.device = model.device # Add the device attribute
def forward(self, input_ids, attention_mask, token_type_ids=None):
outputs = self.model(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
return outputs.logits
modified_model = ModifiedModel(model)
# Export to Core ML
def convert_to_coreml(model, tokenizer):
# Define a dummy input for tracing
dummy_input = tokenizer("A French fan", return_tensors="pt")
dummy_input = {k: v.to(model.device) for k, v in dummy_input.items()}
# Trace the model with the dummy input
traced_model = torch.jit.trace(model, (
dummy_input['input_ids'], dummy_input['attention_mask'], dummy_input.get('token_type_ids')))
# Convert to Core ML
inputs = [
ct.TensorType(name="input_ids", shape=dummy_input['input_ids'].shape),
ct.TensorType(name="attention_mask", shape=dummy_input['attention_mask'].shape)
]
if 'token_type_ids' in dummy_input:
inputs.append(ct.TensorType(name="token_type_ids", shape=dummy_input['token_type_ids'].shape))
mlmodel = ct.convert(traced_model, inputs=inputs)
# Save the Core ML model
mlmodel.save("model.mlmodel")
print("Model exported to Core ML successfully")
convert_to_coreml(modified_model, tokenizer)
</code></pre>
<p>Error stack:</p>
<pre><code>C:\Users\dernoncourt\anaconda3\envs\coreml\python.exe C:\Users\dernoncourt\PycharmProjects\coding\export_model_to_coreml6_fopr_SE_q.py
Failed to load _MLModelProxy: No module named 'coremltools.libcoremlpython'
Fail to import BlobReader from libmilstoragepython. No module named 'coremltools.libmilstoragepython'
Fail to import BlobWriter from libmilstoragepython. No module named 'coremltools.libmilstoragepython'
[nltk_data] Downloading package punkt to
[nltk_data] C:\Users\dernoncourt\AppData\Roaming\nltk_data...
[nltk_data] Package punkt is already up-to-date!
C:\Users\dernoncourt\anaconda3\envs\coreml\lib\site-packages\transformers\modeling_utils.py:4565: FutureWarning: `_is_quantized_training_enabled` is going to be deprecated in transformers 4.39.0. Please use `model.hf_quantizer.is_trainable` instead
warnings.warn(
When both 'convert_to' and 'minimum_deployment_target' not specified, 'convert_to' is set to "mlprogram" and 'minimum_deployment_target' is set to ct.target.iOS15 (which is same as ct.target.macOS12). Note: the model will not run on systems older than iOS15/macOS12/watchOS8/tvOS15. In order to make your model run on older system, please set the 'minimum_deployment_target' to iOS14/iOS13. Details please see the link: https://apple.github.io/coremltools/docs-guides/source/target-conversion-formats.html
Model is not in eval mode. Consider calling '.eval()' on your model prior to conversion
Converting PyTorch Frontend ==> MIL Ops: 0%| | 0/127 [00:00<?, ? ops/s]Core ML embedding (gather) layer does not support any inputs besides the weights and indices. Those given will be ignored.
Converting PyTorch Frontend ==> MIL Ops: 99%|█████████▉| 126/127 [00:00<00:00, 2043.73 ops/s]
Running MIL frontend_pytorch pipeline: 100%|██████████| 5/5 [00:00<00:00, 212.62 passes/s]
Running MIL default pipeline: 37%|███▋ | 29/78 [00:00<00:00, 289.75 passes/s]C:\Users\dernoncourt\anaconda3\envs\coreml\lib\site-packages\coremltools\converters\mil\mil\ops\defs\iOS15\elementwise_unary.py:894: RuntimeWarning: overflow encountered in cast
return input_var.val.astype(dtype=string_to_nptype(dtype_val))
Running MIL default pipeline: 100%|██████████| 78/78 [00:00<00:00, 137.56 passes/s]
Running MIL backend_mlprogram pipeline: 100%|██████████| 12/12 [00:00<00:00, 315.01 passes/s]
Traceback (most recent call last):
File "C:\Users\dernoncourt\PycharmProjects\coding\export_model_to_coreml6_fopr_SE_q.py", line 58, in <module>
convert_to_coreml(modified_model, tokenizer)
File "C:\Users\dernoncourt\PycharmProjects\coding\export_model_to_coreml6_fopr_SE_q.py", line 51, in convert_to_coreml
mlmodel = ct.convert(traced_model, inputs=inputs)
File "C:\Users\dernoncourt\anaconda3\envs\coreml\lib\site-packages\coremltools\converters\_converters_entry.py", line 581, in convert
mlmodel = mil_convert(
File "C:\Users\dernoncourt\anaconda3\envs\coreml\lib\site-packages\coremltools\converters\mil\converter.py", line 188, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "C:\Users\dernoncourt\anaconda3\envs\coreml\lib\site-packages\coremltools\converters\mil\converter.py", line 212, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "C:\Users\dernoncourt\anaconda3\envs\coreml\lib\site-packages\coremltools\converters\mil\converter.py", line 307, in mil_convert_to_proto
out = backend_converter(prog, **kwargs)
File "C:\Users\dernoncourt\anaconda3\envs\coreml\lib\site-packages\coremltools\converters\mil\converter.py", line 130, in __call__
return backend_load(*args, **kwargs)
File "C:\Users\dernoncourt\anaconda3\envs\coreml\lib\site-packages\coremltools\converters\mil\backend\mil\load.py", line 902, in load
mil_proto = mil_proto_exporter.export(specification_version)
File "C:\Users\dernoncourt\anaconda3\envs\coreml\lib\site-packages\coremltools\converters\mil\backend\mil\load.py", line 400, in export
raise RuntimeError("BlobWriter not loaded")
RuntimeError: BlobWriter not loaded
Process finished with exit code 1
</code></pre>
|
<python><pytorch><coreml><coremltools>
|
2024-06-30 03:30:14
| 1
| 84,585
|
Franck Dernoncourt
|
78,687,492
| 1,371,666
|
permanent virtual numpad in tkinter
|
<br>
I am using Python 3.11.9 on Windows.<br>
I want to create a button in tkinter which, when pressed, adds text to the entry I selected before clicking the button.<br>
Here is my example code:<br>
<pre><code>from tkinter import *
class Application(Frame):
def __init__(self, master=None):
Frame.__init__(self, master)
frtop=self.winfo_toplevel() #Flexible Toplevel of the window
screen_width = self.winfo_screenwidth()
screen_height = self.winfo_screenheight()
print("width="+str(screen_width)+", height="+str(screen_height))
max_windows_size=str(screen_width)+"x"+str(screen_height)
frtop.geometry(max_windows_size)
self.grid()
self.cash_frame = Frame(self)
self.cash_frame.grid(column =0, row =0, sticky='NSEW',columnspan=1)
self.cash_lbl = Label(self.cash_frame, text = "Cash:", font=('Arial',20,'bold'))
self.cash_lbl.grid(column =0, row =0)
self.cash_str_null = StringVar()
self.cash_str_null.set('')
self.cash = Entry(self.cash_frame, width=5,textvariable=self.cash_str_null, font=('Arial',26,'bold'))
self.cash.grid(column =1, row =0)
self.exchange_frame = Frame(self)
self.exchange_frame.grid(column =0, row =1, sticky='NSEW')
self.lbl_exchange = Label(self.exchange_frame, text = "exchange",font=('Arial',20,'bold'))
self.lbl_exchange.grid(column =0, row =0)
self.exchange_str_null = StringVar()
self.exchange_str_null.set('')
self.exchange_value = Entry(self.exchange_frame, width=5,textvariable=self.exchange_str_null,font=('Arial',20,'bold'))
self.exchange_value.grid(column =1, row =0)
self.add_one = Button(self, text = "1" , fg = "red", command=self.add_one_in_entry,font=('Arial',20,'bold'))
self.add_one.grid(column=0, row=2)
def add_one_in_entry(self):
pass
if __name__ == "__main__":
app = Application()
app.master.title("Sample application")
app.mainloop()
</code></pre>
<p>How can I do that?<br></p>
<p>Thanks<br></p>
<p>PS: I added an answer as per my knowledge. I would still appreciate it if you know another way or have suggestions for improving the code.</p>
|
<python><tkinter>
|
2024-06-30 02:07:01
| 1
| 481
|
user1371666
|
78,687,398
| 6,343,313
|
Regex does or does not give an error depending on which REPL is used
|
<p>Consider the following Python code using the regex library re</p>
<pre><code>import re
re.compile(rf"{''.join(['{c}\\s*' for c in 'qqqq'])}")
</code></pre>
<p>I have run it in two different REPLs:</p>
<p><a href="https://www.pythonmorsels.com/repl/" rel="nofollow noreferrer">https://www.pythonmorsels.com/repl/</a><br />
<a href="https://www.online-python.com/#google_vignette" rel="nofollow noreferrer">https://www.online-python.com/#google_vignette</a></p>
<p>The first of which gives no error, and the second give the following error:</p>
<pre><code>File "main.py", line 2
re.compile(rf"{''.join(['{c}\\s*' for c in 'qqqq'])}")
^
SyntaxError: f-string expression part cannot include a backslash
</code></pre>
<p>I'd like to understand why I get an error sometimes and not others. How can I proceed to narrow down exactly where the difference is?</p>
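<p>If the difference is the interpreter version, a quick check plus a version-agnostic rewrite can narrow it down. This is a sketch assuming the two REPLs run different Python versions: the ban on backslashes inside f-string expressions was lifted in Python 3.12 (PEP 701), so the snippet is a SyntaxError on 3.11 and earlier but parses on 3.12+. Building the pattern outside the f-string sidesteps the issue on every version:</p>

```python
import re
import sys

# Print the interpreter version in each REPL; PEP 701 (Python 3.12) is the
# likely dividing line for backslashes inside f-string expressions.
print(sys.version_info[:2])

# Version-agnostic rewrite: keep the backslash out of any f-string expression.
pattern = "".join("{c}\\s*" for c in "qqqq")
compiled = re.compile(pattern)
print(pattern)
```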
|
<python><regex><f-string>
|
2024-06-30 00:46:17
| 2
| 1,580
|
Mathew
|
78,687,328
| 395,857
|
"Failed to load _MLModelProxy: No module named 'coremltools.libcoremlpython'"when converting an ONNX model to CoreML with onnx-coreml lib. How to fix?
|
<p>I am trying to use the onnx-coreml lib to convert an ONNX model to CoreML:</p>
<pre><code>import onnx
from onnx_coreml import convert
onnx_model = onnx.load('model.onnx')
coreml_model = convert(onnx_model)
coreml_model.save('model.mlmodel')
</code></pre>
<p>Error:</p>
<pre><code>C:\code\export_model_to_coreml.py
scikit-learn version 1.3.2 is not supported. Minimum required version: 0.17. Maximum required version: 1.1.2. Disabling scikit-learn conversion API.
Failed to load _MLModelProxy: No module named 'coremltools.libcoremlpython'
Fail to import BlobReader from libmilstoragepython. No module named 'coremltools.libmilstoragepython'
Fail to import BlobWriter from libmilstoragepython. No module named 'coremltools.libmilstoragepython'
Traceback (most recent call last):
File "C:\code\export_model_to_coreml.py", line 2, in <module>
from onnx_coreml import convert
File "C:\Users\dernoncourt\anaconda3\envs\py311\Lib\site-packages\onnx_coreml\__init__.py", line 6, in <module>
from .converter import convert
File "C:\Users\dernoncourt\anaconda3\envs\py311\Lib\site-packages\onnx_coreml\converter.py", line 35, in <module>
from coremltools.converters.nnssa.coreml.graph_pass.mlmodel_passes import remove_disconnected_layers, transform_conv_crop
ModuleNotFoundError: No module named 'coremltools.converters.nnssa'
Process finished with exit code 1
</code></pre>
<p>How to fix?</p>
|
<python><coreml><onnx><onnx-coreml>
|
2024-06-29 23:56:16
| 0
| 84,585
|
Franck Dernoncourt
|
78,687,274
| 446,943
|
Python problem showing image in label in secondary window
|
<p>This problem is one of many posted here where there's an error because the image created by PhotoImage doesn't exist. Consider this code:</p>
<pre><code>from tkinter import *
from PIL import ImageTk
def show():
path = r'P:\Greece Slideshow\01001-MJR_20240511_5110080-Edit.jpg'
pimg = ImageTk.PhotoImage(file=path)
image_label.configure(image=pimg)
root.pimg = pimg # prevent garbage collection
root = Tk()
window2 = Tk()
# image_label = Label(root) # works if label is in main window
image_label = Label(window2) # fails if label is in secondary window
# message is "_tkinter.TclError: image "pyimage1" doesn't exist"
image_label.pack()
root.after(500, show)
root.mainloop()
</code></pre>
<p>Three things to note:</p>
<ol>
<li>The code works if the label is in the main window, just not if it's in the secondary window.</li>
<li>As shown, I've assigned the PhotoImage to root.pimg so it won't be garbage collected. I've tried numerous variants of this (e.g., global variable, property of other persistent objects), but nothing helps.</li>
<li>I've tried a Canvas instead of a Label, but I get exactly the same error message.</li>
</ol>
<p>Any ideas? Has anyone successfully put an image into a secondary window in Python? I don't have to use PIL; anything at all will make me happy. I hate to go down to the level of the Win32 API (kills portability), but I'm about to do that.</p>
|
<python><tkinter><python-imaging-library><photoimage>
|
2024-06-29 23:06:27
| 1
| 3,740
|
Marc Rochkind
|
78,687,048
| 11,860,883
|
Count all unique triples
|
<p>Suppose that I have a pandas data frame A with columns called user_id and history, where history is an array of ints and the possible histories are bounded from above by 2000. I need to iterate through all rows of A; in each history b = [b1, b2, b3, ..., bn], all bi's are unique (each appears only once in the array). I need to find all possible triples (bi, bj, bk) such that i < j < k, and count the occurrences of all such triples. In addition, we treat the triple (bi, bj, bk) as the same as (bj, bi, bk) if bi > bj.</p>
<pre><code>import pandas as pd
from collections import defaultdict

# Example DataFrame A (replace this with your actual DataFrame)
A = pd.DataFrame({
'user_id': [1, 2, 3],
'history': [[1, 2, 3], [2, 3, 4], [1, 3, 5]]
})
# Initialize a dictionary to store counts of triples
triple_counts = defaultdict(int)
# Iterate over each row of A
for index, row in A.iterrows():
history = row['history']
n = len(history)
# Iterate over all triples (bi, bj, bk) where i < j < k
for i in range(n):
for j in range(i + 1, n):
for k in range(j + 1, n):
bi = history[i]
bj = history[j]
bk = history[k]
if bi < bj:
triple_counts[(bi, bj, bk)] += 1
else:
triple_counts[(bj, bi, bk)] += 1
# Output the counts of all triples
for triple, count in triple_counts.items():
print(f"Triple {triple}: Count = {count}")
</code></pre>
<p>The issue with this approach is that A is an extremely large data frame and each row has a complexity of O(n^3), so it takes forever to complete this computation. Is there a faster way to do this, possibly leveraging PyTorch or tensor operations?</p>
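<p>Not an asymptotic win, but worth noting as a sketch: the same normalization can be done with itertools.combinations and a Counter, which moves the inner index loops into C. Since combinations preserves the original order, sorting only the first two elements reproduces the (bi, bj) swap rule:</p>

```python
from collections import Counter
from itertools import combinations

# histories stands in for A["history"] from the DataFrame above
histories = [[1, 2, 3], [2, 3, 4], [1, 3, 5]]

triple_counts = Counter()
for history in histories:
    # combinations() already yields triples (bi, bj, bk) with i < j < k
    for bi, bj, bk in combinations(history, 3):
        key = (bi, bj, bk) if bi < bj else (bj, bi, bk)
        triple_counts[key] += 1

print(dict(triple_counts))
```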
|
<python><pandas><algorithm><pytorch>
|
2024-06-29 20:58:46
| 3
| 361
|
Adam
|
78,686,937
| 15,547,292
|
Shorter way to define contextmanager by decorator?
|
<p>When creating a context manager using the <code>@contextlib.contextmanager</code> decorator, we have to write</p>
<pre class="lang-py prettyprint-override"><code>enter_action(...)
try:
yield ...
finally:
exit_action(...)
</code></pre>
<p>That is 3 lines just for the (rather unaesthetic) <code>try/yield/finally</code> construct.
Why can't we get something like this instead?</p>
<pre class="lang-py prettyprint-override"><code>enter_action(...)
with context_magic(...): # equivalent to try/yield/finally
exit_action(...)
</code></pre>
<p>Would that be technically possible?</p>
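<p>As a sketch of what already exists in the standard library: <code>contextlib.ExitStack</code> lets you register the cleanup up front, so the try/finally disappears from the generator body (enter_action/exit_action are placeholders here):</p>

```python
from contextlib import ExitStack, contextmanager

calls = []

def enter_action():
    calls.append("enter")

def exit_action():
    calls.append("exit")

@contextmanager
def managed():
    enter_action()
    with ExitStack() as stack:
        # callback() behaves like a finally clause: it runs on normal
        # exit and when the with-body raises.
        stack.callback(exit_action)
        yield

with managed():
    calls.append("body")

print(calls)  # ['enter', 'body', 'exit']
```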
|
<python><contextmanager>
|
2024-06-29 19:54:27
| 2
| 2,520
|
mara004
|
78,686,834
| 17,896,651
|
Add Payment Method on my SAAS using an API from local gateway
|
<p>I have a SaaS program built with Django and Python that manages client accounts (one or many).</p>
<p>As of now, we handle all payments manually, adding each new client by hand to a third-party recurring monthly payment system.
For a client with many accounts we have a payment calculator and we charge the amount manually.
The major issue we handle is that each month most of our clients have a different payment cost, according to the number of accounts running.</p>
<ul>
<li>we don't have Stripe here.</li>
</ul>
<p>So my questions are of two types: how to calculate the payments and how to execute them.</p>
<ol>
<li>Is it safe to add a payment gateway via a simple API and handle the payment from the server-side backend? Is it done with a cron-job kind of tasker?
This is the API:
<a href="https://documenter.getpostman.com/view/16363118/TzkyNLWB" rel="nofollow noreferrer">https://documenter.getpostman.com/view/16363118/TzkyNLWB</a></li>
<li>How can I calculate the payment monthly? Should I build a smart way to do it, or are there existing solutions for this case? For example, an account was running 1.7-5.7, stopped, and resumed 15.7-30.7: is there a recommended algorithm for storing the data?</li>
<li>We charge clients worldwide.</li>
<li>How do I manage all payments?</li>
<li>How do I enable more payment methods, PayPal for example?</li>
<li>How can I charge a client that added one account on 15.7.24 and another on 20.7.24, and have one invoice for him? Should I charge him once at the end of the month? I may need to make sure to get some balance before I start the first account (so I won't get to 30.7.24 and find the payment doesn't work).</li>
</ol>
<p>What do big SaaS companies normally work with, without increasing the cost per charge from 0.75% to 3% as when using Stripe or BlueSnap?</p>
|
<python><django>
|
2024-06-29 18:59:29
| 0
| 356
|
Si si
|
78,686,779
| 1,114,105
|
Why does my code time out with ThreadPoolExecutor but not with normal Threads
|
<p>Trying to solve <a href="https://leetcode.com/problems/web-crawler-multithreaded/" rel="nofollow noreferrer">https://leetcode.com/problems/web-crawler-multithreaded/</a></p>
<p>This code works (well at least for the toy test cases before eventually TLE'ing)</p>
<pre><code>from collections import deque
from urllib.parse import urljoin, urlparse
from concurrent.futures import ThreadPoolExecutor
from threading import Lock, Thread
import queue
import time
class Solution:
def __init__(self):
self.visited = set()
self.frontier = queue.Queue()
self.visitLock = Lock()
def threadCrawler(self, htmlParser):
while True:
nextUrl = self.frontier.get()
urls = htmlParser.getUrls(nextUrl)
with self.visitLock:
self.visited.add(nextUrl)
host = urlparse(nextUrl).hostname
urls = list(filter(lambda x: urlparse(x).hostname == host, urls))
with self.visitLock:
urls = list(filter(lambda x: x not in self.visited, urls))
for url in urls:
self.frontier.put(url)
self.frontier.task_done()
def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:
self.frontier.put(startUrl)
n = 10
for i in range(n):
Thread(target=self.threadCrawler, args=(htmlParser,), daemon=True).start()
self.frontier.join()
return self.visited
</code></pre>
<p>But this code, which uses a ThreadPoolExecutor, doesn't work: it times out on the toy examples even with a single thread.</p>
<pre><code>from collections import deque
from urllib.parse import urljoin, urlparse
from concurrent.futures import ThreadPoolExecutor
from threading import Lock, Thread
import queue
import time
class Solution:
def __init__(self):
self.visited = set()
self.frontier = queue.Queue()
self.visitLock = Lock()
def threadCrawler(self, htmlParser):
while True:
nextUrl = self.frontier.get()
urls = htmlParser.getUrls(nextUrl)
with self.visitLock:
self.visited.add(nextUrl)
host = urlparse(nextUrl).hostname
urls = list(filter(lambda x: urlparse(x).hostname == host, urls))
with self.visitLock:
urls = list(filter(lambda x: x not in self.visited, urls))
for url in urls:
self.frontier.put(url)
self.frontier.task_done()
def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:
self.frontier.put(startUrl)
n = 1
executor = ThreadPoolExecutor(max_workers=n)
for i in range(n):
executor.submit(self.threadCrawler,htmlParser, daemon=True)
self.frontier.join()
return self.visited
</code></pre>
<p>Even when I remove the daemon parameter, store the futures, and check their results, it still results in a TLE:</p>
<pre><code> def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:
self.frontier.put(startUrl)
n = 1
executor = ThreadPoolExecutor(max_workers=n)
futures = []
for i in range(n):
futures.append(executor.submit(self.threadCrawler,htmlParser))
self.frontier.join()
for i in range(n):
print(futures[i].result())
return self.visited
</code></pre>
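<p>One detail worth isolating in a minimal sketch: <code>ThreadPoolExecutor.submit</code> forwards extra keyword arguments (including <code>daemon=True</code>) to the target function, and any resulting exception is stored on the future rather than printed, so a worker can die silently while <code>frontier.join()</code> waits forever:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def crawler(parser):          # accepts no 'daemon' keyword
    return "ok"

with ThreadPoolExecutor(max_workers=1) as ex:
    # 'daemon' is not an option of submit(); it is forwarded to crawler(),
    # which raises a TypeError inside the worker thread.
    fut = ex.submit(crawler, None, daemon=True)
    exc = fut.exception()     # the error only surfaces when asked for

print(type(exc).__name__)
```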
|
<python><multithreading><queue><threadpoolexecutor>
|
2024-06-29 18:29:41
| 1
| 1,189
|
user1114
|
78,686,622
| 2,425,044
|
Install Artifact Registry Python package from Dockerfile with Cloud Build
|
<p>I have a python package located in my Artifact Registry repository.</p>
<p>My Dataflow Flex Template is packaged within a Docker image with the following command:</p>
<pre><code>gcloud builds submit --tag $CONTAINER_IMAGE .
</code></pre>
<p>Since developers are constantly changing the source code of the pipeline, this command is often run from their computers to rebuild the image.</p>
<p>Here is my Dockerfile:</p>
<pre><code>FROM gcr.io/dataflow-templates-base/python311-template-launcher-base
ARG WORKDIR=/template
RUN mkdir -p ${WORKDIR}
WORKDIR ${WORKDIR}
ENV PYTHONPATH ${WORKDIR}
ENV FLEX_TEMPLATE_PYTHON_SETUP_FILE="${WORKDIR}/setup.py"
ENV FLEX_TEMPLATE_PYTHON_PY_FILE="${WORKDIR}/main.py"
RUN pip install --no-cache-dir -U pip && \
pip install --no-cache-dir -U keyrings.google-artifactregistry-auth
RUN pip install --no-cache-dir -U --index-url=https://europe-west9-python.pkg.dev/sample-project/python-repo/ mypackage
COPY . ${WORKDIR}/
ENTRYPOINT ["/opt/google/dataflow/python_template_launcher"]
</code></pre>
<p>I get the following error:</p>
<pre><code>ERROR: No matching distribution found for mypackage
error building image: error building stage: failed to execute command: waiting for process to exit: exit status 1
</code></pre>
<p>I guess the Cloud Build process doesn't have the access rights. I'm a bit confused about how to get them from a Dockerfile.</p>
<p><a href="https://levelup.gitconnected.com/how-to-install-private-python-packages-in-artifact-registry-when-building-docker-images-with-cloud-a1bf65647d78" rel="nofollow noreferrer">An article</a> I found mentioned the use of a Service Account key file read by the Docker process, but I would like to avoid that. Could I use the Service Account impersonation feature?</p>
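<p>One pattern that avoids a key file, sketched here as an assumption rather than a verified setup: mint a short-lived OAuth token outside the build and pass it as a build argument, using the <code>oauth2accesstoken</code> user that Artifact Registry accepts for access tokens. (Also note that pip index URLs for Artifact Registry normally end in <code>/simple/</code>, which is missing from the URL above.)</p>

```dockerfile
# AR_TOKEN is a hypothetical build arg supplied at build time, e.g.
#   docker build --build-arg AR_TOKEN="$(gcloud auth print-access-token)" .
ARG AR_TOKEN
RUN pip install --no-cache-dir \
    --index-url=https://oauth2accesstoken:${AR_TOKEN}@europe-west9-python.pkg.dev/sample-project/python-repo/simple/ \
    mypackage
```

<p>A build arg still lands in the image history, so this is a trade-off; since <code>keyrings.google-artifactregistry-auth</code> is already installed, granting the Cloud Build service account the Artifact Registry Reader role may be enough once the <code>/simple/</code> suffix is added.</p>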
|
<python><dockerfile><google-cloud-dataflow><google-cloud-build><google-artifact-registry>
|
2024-06-29 17:01:59
| 2
| 2,038
|
Grégoire Borel
|
78,686,561
| 6,554,121
|
sqlite db not being created in RUN instruction during image build (python/pdm)
|
<p>What I'm trying to do is for a new sqlite db file to be created when spinning up a container. But the db file doesn't get created for some reason. When I run the file inside the container normally (<code>pdm run src/db_setup.py</code>), it works fine and the file is created.</p>
<p>So why doesn't the <code>RUN pdm run -v src/db_setup.py</code> step do this?</p>
<p>I run the container with <code>docker run --name de_pdm_container -p 8080:8080 -dt -v .:/app de_pdm</code></p>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM python:3.11-bookworm
WORKDIR /app
# install PDM
RUN pip install -U pdm
# disable update check
ENV PDM_CHECK_UPDATE=false
COPY pdm.lock pyproject.toml ./
RUN pdm install
COPY ./src ./src
COPY ./tests ./tests
COPY .devcontainer .devcontainer
RUN apt-get update
RUN apt-get install -y sqlite3 libsqlite3-dev
RUN pdm run -v src/db_setup.py
EXPOSE 8080
CMD ["pdm", "run", "uvicorn", "src.de_pdm.__init__:app", "--host", "0.0.0.0", "--port", "8080", "--reload"]
</code></pre>
<p><strong>db_setup.py</strong></p>
<pre><code>import sqlite3
con = sqlite3.connect("test.db")
cursor = con.cursor()
con.set_trace_callback(print)
cursor.execute("DROP TABLE IF EXISTS tb;")
con.commit()
cursor.execute("""
CREATE TABLE tb(
country VARCHAR(50),
country_code VARCHAR(2),
year INT,
estimated_population_number INT,
estimated_prevalence INT
);
""")
con.commit()
con.close()
</code></pre>
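<p>One thing a sketch can make visible: <code>sqlite3.connect("test.db")</code> resolves relative to the current working directory, so where the file lands depends on the WORKDIR in effect at build time, and a bind mount such as <code>-v .:/app</code> then hides anything that was baked into <code>/app</code> in the image. Logging the absolute path shows which file is actually being created:</p>

```python
import os
import sqlite3

db_name = "test.db"
db_path = os.path.abspath(db_name)   # resolves against the current working dir
print("creating", db_path)

con = sqlite3.connect(db_path)
con.execute("CREATE TABLE IF NOT EXISTS tb (x INT)")
con.commit()
con.close()
```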
|
<python><docker>
|
2024-06-29 16:33:06
| 0
| 12,739
|
A. L
|
78,686,504
| 13,566,716
|
422 Unprocessable Entity error when sending Form data to FastAPI backend using ReactJs and Fetch API in the frontend
|
<p>I keep getting the <code>422 Unprocessable Entity</code> error with the following missing fields:</p>
<pre><code>{"detail":[{"type":"missing","loc":["body","username"],"msg":"Field required","input":null},{"type":"missing","loc":["body","password"],"msg":"Field required","input":null}]}
</code></pre>
<p>I tried everything possible, but the error persists.</p>
<p>This is my frontend code:</p>
<pre><code>const Login = () => {
const [email, setEmail] = useState('');
const [password, setPassword] = useState('');
const navigate = useNavigate();
const handleLogin = async (e) => {
e.preventDefault();
const formData = new URLSearchParams();
formData.append('username', email); // Assuming the backend expects 'username'
formData.append('password', password);
try {
const response = await fetch('http://localhost:8000/token', {
method: 'POST',
body: formData,
});
if (!response.ok) {
throw new Error('Login failed');
}
</code></pre>
<p>and this is the backend:</p>
<pre><code>@fastapi_router.post("/token", response_model=Token)
async def login_for_access_token(
username: Annotated[str, Form()],
password: Annotated[str, Form()],
db: AsyncSession = Depends(database.get_db)
):
user = await auth.authenticate_user(db, username, password)
if not user:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Incorrect username or password",
headers={"WWW-Authenticate": "Bearer"},
)
access_token_expires = timedelta(minutes=envs.access_token_expire_minutes)
access_token = auth.create_access_token(
data={"sub": user.username}, expires_delta=access_token_expires
)
logger.info(f"Access Token: {access_token}")
return {"access_token": access_token, "token_type": "bearer"}
</code></pre>
<p>Any help would be appreciated.</p>
|
<javascript><python><reactjs><fastapi><http-status-code-422>
|
2024-06-29 16:03:34
| 1
| 369
|
3awny
|
78,686,477
| 9,554,640
|
ONNXRuntimeError with chromadb adding docs on MacOS
|
<p>I am trying to run a Python script with Chroma db: create a collection, add some vectors, and get them back. But I am getting an error.</p>
<p>Script:</p>
<pre class="lang-py prettyprint-override"><code>import chromadb
client = chromadb.Client()
collection = client.create_collection(name="example")
collection.add(
documents=["Sky is unlimited.", "Tree is a plant."], ids=["d_1", "d_2"]
)
items = collection.get()
collection.peek(limit=5)
</code></pre>
<p>Error:</p>
<blockquote>
<p>onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] :
1 : FAIL : Non-zero status code returned while running
CoreML_10659388614159815537_1 node.
Name:'CoreMLExecutionProvider_CoreML_10659388614159815537_1_1' Status
Message: Error executing model: Unable to compute the prediction using
a neural network model. It can be an invalid input data or
broken/unsupported model (error code: -1).</p>
</blockquote>
<p>Hardware:
<code>MacOS 14, Intel</code></p>
<p>What could be the reasons?</p>
<p>I don't know the reason; should I use Docker, or is it a problem with the OS?</p>
|
<python><chromadb>
|
2024-06-29 15:51:35
| 2
| 372
|
Dmitri Galkin
|
78,686,295
| 2,755,116
|
import python libraries in jinja2 templates
|
<p>I have this template:</p>
<pre><code>% template.tmpl file:
{% set result = fractions.Fraction(a*d + b*c, b*d) %}
The solution of ${{a}}/{{b}} + {{c}}/{{d}}$ is ${{a * d + b*c}}/{{b*d}} = {{result.numerator}}/{{result.denominator}}$
</code></pre>
<p>which I invoke by</p>
<pre><code>from jinja2 import Template
import fractions
with open("c.jinja") as f:
t = Template(f.read())
a = 2
b = 3
c = 4
d = 5
print(t.render(a = a, b = b, c=c, d=d))
</code></pre>
<p>I get</p>
<pre><code>jinja2.exceptions.UndefinedError: 'fractions' is undefined
</code></pre>
<p>but I want</p>
<pre><code>The solution of $2/3 + 4/5$ is $22/15=22/15$.
</code></pre>
<p>Is it possible to achieve that?</p>
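<p>Yes; a sketch of one way, exposing the module through the Environment's globals (passing <code>fractions=fractions</code> to <code>render()</code> works too):</p>

```python
import fractions

from jinja2 import Environment

env = Environment()
env.globals["fractions"] = fractions   # visible to every template of this env

t = env.from_string(
    "{% set result = fractions.Fraction(a*d + b*c, b*d) %}"
    "{{ a }}/{{ b }} + {{ c }}/{{ d }} = "
    "{{ result.numerator }}/{{ result.denominator }}"
)
out = t.render(a=2, b=3, c=4, d=5)
print(out)
```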
|
<python><jinja2>
|
2024-06-29 14:45:24
| 1
| 1,607
|
somenxavier
|
78,686,253
| 16,778,949
|
Encountering ValueError upon joining two pandas dataframes on a datetime index column
|
<p>I have two tables which I need to join on a date column. I want to preserve all the dates in both tables, with the empty rows in each table just being filled with <code>NaN</code>s in the final combined array. I think an outer join is what I'm looking for. So I've written this code (with data_1 and data_2 acting as mockups of my actual tables)</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
def main():
data_1 = [["May-2024", 10, 5], ["June-2024", 3, 5], ["April-2015", 1, 3]]
df1 = pd.DataFrame(data_1, columns = ["Date", "A", "B"])
df1["Date"] = pd.to_datetime(df1["Date"], format="%B-%Y")
print(df1)
data_2 = [["01-11-2024", 10, 5], ["01-06-2024", 3, 5], ["01-11-2015", 1, 3]]
df2 = pd.DataFrame(data_2, columns = ["Date", "C", "D"])
df2["Date"] = pd.to_datetime(df2["Date"], format="%d-%m-%Y")
print(df2)
merged = df1.join(df2, how="outer", on=["Date"])
print(merged)
if __name__ == "__main__":
main()
</code></pre>
<p>But when I try to perform an outer join on two pandas dataframes, I get the error</p>
<pre><code>ValueError: You are trying to merge on object and int64 columns for key 'Date'. If you wish to proceed you should use pd.concat
</code></pre>
<p>I checked the datatype of both columns by printing</p>
<pre class="lang-py prettyprint-override"><code>print(df1["Date"].dtype, df2["Date"].dtype)
</code></pre>
<p>and they both seem to be</p>
<pre><code>datetime64[ns] datetime64[ns]
</code></pre>
<p>datetimes. So I'm not quite sure why I'm getting a <code>ValueError</code></p>
<p>Any help is appreciated, thanks.</p>
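<p>A sketch of a likely explanation and fix: <code>DataFrame.join</code> with <code>on=</code> matches df1's column against df2's <em>index</em> (here the default integer RangeIndex), not df2's Date column, which would explain the dtype mismatch in the message. <code>merge</code> aligns the named column in both frames:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"Date": pd.to_datetime(["2024-05-01", "2024-06-01", "2015-04-01"]),
                    "A": [10, 3, 1], "B": [5, 5, 3]})
df2 = pd.DataFrame({"Date": pd.to_datetime(["2024-11-01", "2024-06-01", "2015-11-01"]),
                    "C": [10, 3, 1], "D": [5, 5, 3]})

# merge() aligns the "Date" column in BOTH frames; join(on=...) would
# compare df1["Date"] against df2's integer index instead.
merged = df1.merge(df2, on="Date", how="outer")
print(merged)
```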
|
<python><pandas><dataframe>
|
2024-06-29 14:27:24
| 1
| 534
|
phoney_badger
|
78,686,206
| 20,920,790
|
How to insert int128 data into ClickHouse with ClickHouseHook?
|
<p>I'm using ClickHouseHook to insert data into the database.</p>
<pre><code>from airflow_clickhouse_plugin.hooks.clickhouse import ClickHouseHook
ch_hook = ClickHouseHook(clickhouse_conn_id=connections_name)
def update_replacingmergetree(ch_hook, table_name: str, df: pd.DataFrame):
values = tuple(df.to_records(index=False))
ch_hook.execute(f'INSERT INTO do_you.{table_name} VALUES', (v for v in values))
</code></pre>
<p>My function works fine when the DataFrame doesn't contain dates.
But when I pass data like this (part of the tuple of values):</p>
<pre><code> # Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 18 non-null string
1 s_id 18 non-null int32
2 month_date 18 non-null datetime64[ns]
3 sum 18 non-null int32
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">id</th>
<th style="text-align: right;">s_id</th>
<th style="text-align: left;">month_date</th>
<th style="text-align: right;">sum</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">265609333301876169530781520669667823585</td>
<td style="text-align: right;">84</td>
<td style="text-align: left;">2024-01-01 00:00:00</td>
<td style="text-align: right;">100</td>
</tr>
</tbody>
</table></div>
<p>I get error:</p>
<pre><code>TypeError: unsupported operand type(s) for &: 'str' and 'int'
</code></pre>
<p>To avoid this error I need to convert id to int128, but I can't do that, because pandas and numpy don't have an int128 type.</p>
<p>How can I put an Int128 value into ClickHouse?</p>
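<p>A sketch of one workaround, assuming the driver accepts plain Python ints for Int128 columns (Python ints are arbitrary-precision, so no numpy dtype is needed): keep the column as dtype object holding <code>int</code>, and build the rows with <code>itertuples</code> rather than <code>to_records</code>, which would coerce values into numpy types:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "id": ["265609333301876169530781520669667823585"],  # str from the source
    "s_id": [84],
    "sum": [100],
})

# object dtype holding a Python int: big enough for Int128
df["id"] = df["id"].map(int)

rows = [tuple(r) for r in df.itertuples(index=False, name=None)]
print(rows[0][0], type(rows[0][0]).__name__)
```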
|
<python><airflow><clickhouse>
|
2024-06-29 14:11:00
| 1
| 402
|
John Doe
|
78,686,130
| 6,058,203
|
scrapy selenium firefox - cant scrap urls from a page
|
<p>I am trying to get a list of URLs from a website (I am perfectly permitted to scrape it). I am using Firefox, and while it seemed to work last month, it does not work this month, and I am struggling to see where the error is.</p>
<p>In the code below, the url_authority_list list is empty at the end.</p>
<p>The output I get is:</p>
<pre><code>2024-06-29 14:30:25 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2024-06-29 14:30:25 [filelock] DEBUG: Attempting to acquire lock 2630883719752 on C:\Users\andre\Anaconda3\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2024-06-29 14:30:25 [filelock] DEBUG: Lock 2630883719752 acquired on C:\Users\andre\Anaconda3\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2024-06-29 14:30:25 [filelock] DEBUG: Attempting to acquire lock 2630883914632 on C:\Users\andre\Anaconda3\lib\site-packages\tldextract\.suffix_cache/urls\62bf135d1c2f3d4db4228b9ecaf507a2.tldextract.json.lock
2024-06-29 14:30:25 [filelock] DEBUG: Lock 2630883914632 acquired on C:\Users\andre\Anaconda3\lib\site-packages\tldextract\.suffix_cache/urls\62bf135d1c2f3d4db4228b9ecaf507a2.tldextract.json.lock
2024-06-29 14:30:25 [filelock] DEBUG: Attempting to release lock 2630883914632 on C:\Users\andre\Anaconda3\lib\site-packages\tldextract\.suffix_cache/urls\62bf135d1c2f3d4db4228b9ecaf507a2.tldextract.json.lock
2024-06-29 14:30:25 [filelock] DEBUG: Lock 2630883914632 released on C:\Users\andre\Anaconda3\lib\site-packages\tldextract\.suffix_cache/urls\62bf135d1c2f3d4db4228b9ecaf507a2.tldextract.json.lock
2024-06-29 14:30:25 [filelock] DEBUG: Attempting to release lock 2630883719752 on C:\Users\andre\Anaconda3\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2024-06-29 14:30:25 [filelock] DEBUG: Lock 2630883719752 released on C:\Users\andre\Anaconda3\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2024-06-29 14:30:25 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ratings.food.gov.uk/open-data> (referer: None)
</code></pre>
<p>The code is:</p>
<pre><code>import scrapy
from urllib.parse import urljoin
import os


class foodstandardsagencySpider(scrapy.Spider):
    name = "foodstandardsagency"
    allowed_domains = ["ratings.food.gov.uk"]
    start_urls = ["https://ratings.food.gov.uk/open-data"]

    # will need a new token if using geckodriver
    os.environ['GH_TOKEN'] = "ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

    def parse(self, response):
        # create a list of urls
        url_authority_list = []
        for href in response.xpath('//tr/td/a[text()[contains(.,"English")]]/@href'):
            url = urljoin('http://ratings.food.gov.uk/', href.extract())
            url_authority_list.append(url)
        print(url_authority_list)
</code></pre>
|
<python><selenium-webdriver><firefox><scrapy>
|
2024-06-29 13:43:31
| 0
| 379
|
nevster
|
78,686,119
| 7,404,892
|
How To get consistent and expected json from Autogen?
|
<p>I'm using multi-agent autogen, where I have multiple critic agents who review writer agent's report. I need to get critic reviews in a specific JSON format, but I'm not getting a consistent and expected JSON as output. How can I get this?</p>
<p>I tried configuring the LLM config with the parameter below, but it didn't help.</p>
<pre><code>response_format = {"type": response_format}
</code></pre>
<p>Is there any other way to achieve this?</p>
<p>The output which I'm getting:</p>
<pre><code>case1: {"heading":<heading>,"content":content}
case2: [{"reviewer"::<reviewer>,review:{heading...
</code></pre>
<p>Expected output:</p>
<pre><code>response_format = """- The output must be exactly in the below JSON format:
[
    {
        "heading": "<heading of the comment_1>",
        "content": "<content in markdown format of comment_1>"
    },
    {
        "heading": "<heading of the comment_2>",
        "content": "<content in markdown format of comment_2>"
    },
    {
        "heading": "<heading of the comment_n>",
        "content": "<content in markdown format of comment_n>"
    }
]
"""
</code></pre>
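<p>One workaround worth noting (a sketch of an assumption, not a built-in Autogen feature): since the model's reply format drifts between runs, validate each critic reply against the expected list-of-objects shape and re-prompt on failure. The validator itself is plain Python:</p>

```python
import json

def validate_reviews(raw: str):
    """Return the parsed list if it matches [{"heading": str, "content": str}, ...], else None."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(parsed, list):
        return None
    for item in parsed:
        if not (isinstance(item, dict)
                and isinstance(item.get("heading"), str)
                and isinstance(item.get("content"), str)):
            return None
    return parsed

# a well-formed reply passes; the "case1" single-object shape is rejected
good = '[{"heading": "h1", "content": "c1"}]'
bad = '{"heading": "h1", "content": "c1"}'
print(validate_reviews(good), validate_reviews(bad))
```

<p>When <code>validate_reviews</code> returns <code>None</code>, the agent can be asked to regenerate; this does not force the model to comply, it only makes the non-compliance detectable.</p>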
|
<python><artificial-intelligence><ms-autogen>
|
2024-06-29 13:39:15
| 0
| 1,225
|
Nayana Madhu
|
78,686,046
| 6,621,137
|
pip dependencies from env yaml file missed during building docker container
|
<p>I have the following conda environment YAML file :</p>
<pre><code>name: someapp
channels:
  - conda-forge
  - defaults
dependencies:
  - flask=2.2.2
  - pandas=1.5.3
  - pip=23.0.1
  - python=3.11.3
  - pip:
      - psycopg2-binary==2.9.9
      - requests==2.32.3
prefix: /usr/local/anaconda3/envs/someapp
</code></pre>
<p>And I would like this env to be created when building my Docker container (for a Flask app), but I get a <code>ModuleNotFoundError: No module named 'requests'</code> error when launching the <code>server.py</code> file (where I import <code>requests</code>) with the following Dockerfile:</p>
<pre><code>FROM continuumio/miniconda3

RUN apt-get update && \
    apt-get install --yes --no-install-recommends \
    && apt-get clean

WORKDIR /app/services/back_end/

COPY ./environment.yml ./environment_server.yml
RUN conda env create -f ./environment_server.yml

# added line to try to install but doesn't work
RUN /bin/bash -c "source activate someapp && pip install requests"

RUN echo "source activate someapp" > ~/.bashrc
ENV PATH /opt/conda/envs/someapp/bin:$PATH

EXPOSE 6969
ENTRYPOINT ["python", "server.py"]
</code></pre>
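<p>For what it's worth, one pattern that sidesteps PATH-activation pitfalls entirely (a sketch, assuming the env name stays <code>someapp</code>) is to launch through <code>conda run</code>, which resolves the env's own interpreter regardless of shell activation state:</p>

```dockerfile
# hypothetical alternative ENTRYPOINT: let conda resolve the env's python
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "someapp", "python", "server.py"]
```

<p>If the error persists even with this, the pip dependencies likely failed silently during <code>conda env create</code>, and the build log of that step is the place to look.</p>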
|
<python><docker><conda>
|
2024-06-29 13:02:16
| 1
| 452
|
TmSmth
|
78,685,937
| 937,440
|
Search for currency pairs
|
<p>I am trying to learn more about Python. I am trying to pull some data from Yahoo Finance. I have this code:</p>
<pre><code>import yfinance as yf
from statsmodels.tsa.vector_ar.vecm import coint_johansen
symbols = ['AMZN', 'AAPL']
start = '2020-01-01'
end = '2024-01-16'
data = yf.download(symbols, start=start, end=end)['Adj Close']
specified_number = 0
coint_test_result = coint_johansen(data, specified_number, 1)
print("End")
</code></pre>
<p>It works as expected. I then change the symbols (line four) to cryptocurrency pairs, i.e.</p>
<pre><code>symbols = ['BTCUSD', 'ETHUSD']
</code></pre>
<p>The code then errors on this line:</p>
<pre><code>coint_test_result = coint_johansen(data, specified_number, 1)
</code></pre>
<p>The error is: <code>zero-size array to reduction operation maximum which has no identity</code></p>
<p>Why does it error with cryptocurrency pairs as symbols?</p>
<p>I have version 3.12.3 of Python.</p>
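<p>For context (an assumption worth checking, not verified here): Yahoo Finance crypto tickers are usually hyphenated (<code>BTC-USD</code>, <code>ETH-USD</code>), and an unrecognised symbol makes <code>yf.download</code> return an empty or all-NaN frame, which is exactly what produces the zero-size-array error inside <code>coint_johansen</code>. A defensive guard (sketched here on plain row tuples, the same idea as <code>data.dropna()</code> plus an emptiness check) makes the failure explicit:</p>

```python
def require_price_data(rows):
    # yf.download yields no usable rows for unknown symbols;
    # keep only complete rows (v == v is False for NaN)
    cleaned = [r for r in rows if all(v == v for v in r)]
    if not cleaned:
        raise ValueError("no price data downloaded - check the ticker symbols")
    return cleaned
```

<p>Running this before the Johansen test turns the cryptic reduction error into a clear message about the symbols.</p>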
|
<python><python-3.x><yfinance>
|
2024-06-29 12:10:58
| 0
| 15,967
|
w0051977
|
78,685,901
| 10,855,529
|
ComputeError: dynamic pattern length in 'str.replace' expressions is not supported yet
|
<p>What is the Polars expression way to achieve this?</p>
<pre><code>df = pl.from_repr("""
┌───────────────────────────────┬───────────────────────────┐
│ document_url ┆ matching_seed_url │
│ --- ┆ --- │
│ str ┆ str │
╞═══════════════════════════════╪═══════════════════════════╡
│ https://document_url.com/1234 ┆ https://document_url.com/ │
│ https://document_url.com/5678 ┆ https://document_url.com/ │
└───────────────────────────────┴───────────────────────────┘""")
df = df.with_columns(
    pl.when(pl.col("matching_seed_url").is_not_null())
    .then(pl.col("document_url").str.replace(pl.col("matching_seed_url"), ""))
    .otherwise(pl.lit(""))
    .alias("extracted_id")
)
</code></pre>
<p>I get,</p>
<pre><code>ComputeError: dynamic pattern length in 'str.replace' expressions is not supported yet
</code></pre>
<p>How do I extract <code>1234</code> and <code>5678</code> here?</p>
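<p>For reference, the per-row operation being asked for is just a prefix strip; in plain Python (a sketch of the intended semantics, not a vectorised Polars answer):</p>

```python
def extract_id(document_url, matching_seed_url):
    # strip the seed URL prefix; empty string when there is no match
    if matching_seed_url and document_url.startswith(matching_seed_url):
        return document_url[len(matching_seed_url):]
    return ""

print(extract_id("https://document_url.com/1234", "https://document_url.com/"))
```

<p>A vectorised equivalent would need an expression-valued offset, e.g. <code>str.slice</code> with <code>pl.col("matching_seed_url").str.len_chars()</code> as the offset, which may depend on your Polars version.</p>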
|
<python><regex><dataframe><python-polars>
|
2024-06-29 11:54:16
| 2
| 3,833
|
apostofes
|
78,685,861
| 2,748,928
|
Optimizing an LLM Using DPO: nan Loss Values During Evaluation
|
<p>I want to optimize an LLM based on DPO. I tried to train and evaluate the model, but there are nan values in the evaluation results.</p>
<pre><code>import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from datasets import load_dataset
from trl import DPOTrainer, DPOConfig

model_name = "EleutherAI/pythia-14m"
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def preprocess_data(item):
    return {
        'prompt': 'Instruct: ' + item['prompt'] + '\n',
        'chosen': 'Output: ' + item['chosen'],
        'rejected': 'Output: ' + item['rejected']
    }

dataset = load_dataset('jondurbin/truthy-dpo-v0.1', split="train")
dataset = dataset.map(preprocess_data)

split_dataset = dataset.train_test_split(test_size=0.1)  # Adjust the test_size as needed
train_dataset = split_dataset['train']
val_dataset = split_dataset['test']

print(f"Length of train data: {len(train_dataset)}")
print(f"Length of validation data: {len(val_dataset)}")

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.unk_token

# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16
).to(device)

model_ref = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16
).to(device)

# Config
training_args = DPOConfig(
    output_dir="./output",
    beta=0.1,
    max_length=512,
    max_prompt_length=128,
    remove_unused_columns=False,
)

# Load trainer
dpo_trainer = DPOTrainer(
    model,
    model_ref,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    tokenizer=tokenizer,
)

# Train
dpo_trainer.train()

# Evaluate
evaluation_results = dpo_trainer.evaluate()
print("Evaluation Results:", evaluation_results)
</code></pre>
<p>This is the code used to train a simple 'pythia-14m' model. Below is the result.</p>
<pre><code>Evaluation Results: {'eval_loss': nan, 'eval_runtime': 0.5616, 'eval_samples_per_second': 181.61, 'eval_steps_per_second': 12.463, 'eval_rewards/chosen': nan, 'eval_rewards/rejected': nan, 'eval_rewards/accuracies': 0.0, 'eval_rewards/margins': nan, 'eval_logps/rejected': nan, 'eval_logps/chosen': nan, 'eval_logits/rejected': nan, 'eval_logits/chosen': nan, 'epoch': 3.0}
</code></pre>
<p>Any idea why there are nan values during evaluation? Is there anything wrong in the code?</p>
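<p>A small debugging helper (a generic sketch, not TRL-specific; a common culprit for NaNs in this setup is training fully in <code>torch.float16</code>, which is an assumption worth testing by retrying in <code>float32</code> or <code>bfloat16</code>): list which metrics actually came back NaN instead of eyeballing the dict:</p>

```python
import math

def nan_metrics(results: dict) -> list:
    """Names of float metrics that came back as NaN."""
    return [k for k, v in results.items()
            if isinstance(v, float) and math.isnan(v)]

sample = {'eval_loss': float('nan'), 'eval_runtime': 0.56, 'epoch': 3.0}
print(nan_metrics(sample))
```

<p>Calling <code>nan_metrics(evaluation_results)</code> after <code>dpo_trainer.evaluate()</code> shows at a glance whether only the reward metrics or the whole forward pass (loss, logits) went NaN.</p>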
|
<python><huggingface-transformers><large-language-model><huggingface><huggingface-trainer>
|
2024-06-29 11:37:20
| 1
| 1,220
|
Refinath
|
78,685,748
| 893,254
|
FastAPI no module named "my module" - FastAPI cannot find my Python Package
|
<p>FastAPI cannot find my Python Package. It seems relatively obvious that this is an issue with Python paths and imports, however I do not know how to fix it.</p>
<p>What surprised me is that this worked when using Flask instead of FastAPI. I converted a small application from Flask to FastAPI, and this is when the import statements no longer were able to find my Python package.</p>
<p>I don't know enough about the FastAPI-cli to understand what it does / does not do with paths to know how to fix it.</p>
<p>Here's a very simple MWE.</p>
<pre><code># /fastapi/fastapi_mwe.py

from fastapi import FastAPI
from pydantic import BaseModel

from my_package import my_value

print(f'__name__={__name__}')

app = FastAPI()

class FastAPI_Value(BaseModel):
    value: int

@app.get('/')
def root(fastapi_value: FastAPI_Value):
    next_value = fastapi_value.value + my_value
    return {
        'next_value': next_value
    }
</code></pre>
<pre><code># /my_package/__init__.py
my_value = 1
</code></pre>
<p>Here's the project structure:</p>
<pre><code>stack_overflow_mwe/
my_package/
__init__.py
fastapi/
fastapi_mwe.py
</code></pre>
<p>Here's how I run <code>fastapi_mwe.py</code>:</p>
<pre><code>user@host:/home/user/stackoverflow_mwe/$ fastapi dev fastapi/fastapi_mwe.py
ModuleNotFoundError: No module named `my_package`
</code></pre>
<p>How should I structure this project and how can I fix this? I don't really want to put the package <code>my_package</code> into the subdirectory <code>fastapi</code>, because this package should be totally independent of the fastapi wrapper/webserver. I should be able to create a new subfolder called <code>flask_api</code> and be able to import <code>my_package</code> from both locations.</p>
<p>I have also tried to avoid using <code>.venv</code> and an editable install + <code>pyproject.toml</code>. I am using virtual environments to manage dependencies (for example I did <code>pip3 install fastapi</code> to install the fastapi package into a local <code>.venv</code>). If there is no alternative, then I can create a <code>pyproject.toml</code> and setup the local packages as an editable install, but I don't really want to do this unless there is no alternative.</p>
<hr />
<h3>Edit: <code>PYTHONPATH</code></h3>
<p>I tried to print the value of <code>PYTHONPATH</code> and found that it does not exist. (When running with <code>fastapi</code>.)</p>
<pre><code>import os

print(f'__name__={__name__}')
print(os.getcwd())
print(os.environ.get('PYTHONPATH', '<not set>'))  # .get avoids a KeyError when unset
</code></pre>
<p>This seems strange, but suggests that whatever the CLI <code>fastapi</code> does, it doesn't set the <code>PYTHONPATH</code> variable to the cwd.</p>
<p>Any thoughts?</p>
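<p>As a data point, one low-tech workaround (a sketch, not necessarily the right packaging fix) is to put the project root on <code>sys.path</code> at the top of <code>fastapi_mwe.py</code>, before importing <code>my_package</code>:</p>

```python
import os
import sys

# assumption: the project root is one directory above this file
project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if project_root not in sys.path:
    sys.path.insert(0, project_root)
```

<p>Alternatively, setting <code>PYTHONPATH=.</code> in the shell before running <code>fastapi dev fastapi/fastapi_mwe.py</code> achieves the same without touching the code.</p>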
|
<python><fastapi>
|
2024-06-29 10:46:00
| 1
| 18,579
|
user2138149
|
78,685,502
| 1,867,328
|
How to easily standardise pandas dataframe
|
<p>I have below pandas dataframe</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
dat = pd.DataFrame({'AA': [1, 2], 'BB': [3,4]})
</code></pre>
<p>I am looking for a direct way to standardise each column like in <code>SAS</code> we have <a href="https://www.statology.org/sas-proc-stdize/" rel="nofollow noreferrer"><code>PROC STDIZE</code></a>.</p>
<p>Is there any direct method available with pandas?</p>
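<p>For reference, pandas has no single <code>STDIZE</code>-style function, but the usual z-score is just <code>(dat - dat.mean()) / dat.std()</code> applied column-wise (note: pandas' <code>std</code> defaults to the sample standard deviation, ddof=1, which may or may not match the SAS method you pick). The underlying arithmetic, in plain Python:</p>

```python
from statistics import mean, stdev

def standardize(values):
    # z-score: subtract the mean, divide by the sample standard deviation (ddof=1)
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

print(standardize([1, 2]))
```

<p><code>sklearn.preprocessing.StandardScaler</code> is another common route, but it uses the population standard deviation (ddof=0), so the two will not agree numerically.</p>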
|
<python><pandas><dataframe>
|
2024-06-29 08:48:36
| 1
| 3,832
|
Bogaso
|
78,685,269
| 880,783
|
Is there a Numpy function to subset an array based on values (not indices) in a slice or range?
|
<p>I am trying to extract, from an array, all values within a certain <code>slice</code> (something like a <code>range</code>, but with optional <code>start</code>, <code>stop</code> and <code>step</code>). And in that, I want to benefit from the heavy optimizations that <code>range</code> objects employ for <code>range.__contains__()</code>, which implies they don't ever have to instantiate the full range of values (compare <a href="https://stackoverflow.com/questions/30081275/why-is-1000000000000000-in-range1000000000000001-so-fast-in-python-3">Why is "1000000000000000 in range(1000000000000001)" so fast in Python 3?</a>).</p>
<p>The following code works, but it's horribly inefficient because <code>i</code> is converted to a full-fledged array, increasing memory use <em>and</em> runtime.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
arr = np.array([0, 20, 29999999, 10, 30, 40, 50])
M = np.max(arr)
# slice based on values
s = slice(20, None)
i = range(*s.indices(M + 1))
print(arr[np.isin(arr, i)]) # works, but inefficient!
</code></pre>
<p>Output:</p>
<pre><code>[ 20 29999999 30 40 50]
</code></pre>
<p>Is there a numpy function to improve that directly? Should I use <code>np.vectorize</code>/<code>np.where</code> instead with a callback using the slice (which feels like it could be slow as well, going back from C++ to Python for every single element [or would it not be doing that?])? Should I subtract <code>start</code> from my values, divide by <code>step</code>, and then see if values are <code>>= 0 && < (stop - start) / step</code>? Or am I missing a much better way?</p>
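<p>For what it's worth, the subtract-start / modulo-step test sketched at the end of the question can be written directly; in scalar form (positive steps only, an assumption):</p>

```python
def in_slice(value, s, length):
    # membership test equivalent to `value in range(*s.indices(length))`
    # without materialising the range; positive step assumed
    start, stop, step = s.indices(length)
    return start <= value < stop and (value - start) % step == 0

M = 29999999
print([v for v in [0, 20, 29999999, 10, 30, 40, 50]
       if in_slice(v, slice(20, None), M + 1)])
```

<p>With NumPy the same test vectorises without any Python-level callback: <code>mask = (arr &gt;= start) &amp; (arr &lt; stop) &amp; ((arr - start) % step == 0)</code>, then <code>arr[mask]</code>.</p>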
|
<python><numpy><subset><slice>
|
2024-06-29 06:57:07
| 3
| 6,279
|
bers
|
78,685,004
| 3,486,684
|
Efficiently storing data that is not yet purely-columnar into the Arrow format
|
<p>I previously asked a question similar to this one (in that it uses the same example to illustrate the problem): <a href="https://stackoverflow.com/questions/78684997/efficiently-storing-data-that-is-not-quite-columnar-yet-into-a-duckdb-database?noredirect=1#comment138730725_78684997">Efficiently storing data that is not-quite-columnar-yet into a DuckDB database</a>.</p>
<p>However, <a href="https://duckdb.org/why_duckdb.html" rel="nofollow noreferrer">DuckDB</a> <em>is not</em> <a href="https://arrow.apache.org/overview/" rel="nofollow noreferrer">Apache Arrow</a>. In particular, one cannot insert data into an Apache Arrow table or array using SQL queries, unless they are doing so indirectly.</p>
<p><a href="https://arrow.apache.org/docs/python/getstarted.html#creating-arrays-and-tables" rel="nofollow noreferrer">Arrow arrays are directly constructed much like Numpy arrays.</a>. My best guess, based on what I understand from the documentation so far, is that likely I want to use <a href="https://arrow.apache.org/docs/python/generated/pyarrow.Tensor.html#pyarrow.Tensor" rel="nofollow noreferrer">Arrow Tensors</a> in some way. I am far from certain, hence this question.</p>
<p>I have some partially-columnar data like this:</p>
<pre><code>"hello", "2024 JAN", "2024 FEB"
"a", 0, 1
</code></pre>
<p>If it were purely-columnar, it would look like:</p>
<pre><code>"hello", "year", "month", "value"
"a", 2024, "JAN", 0
"a", 2024, "FEB", 1
</code></pre>
<p>Suppose the data is in the form of a numpy array, like this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
data = np.array(
[["hello", "2024 JAN", "2024 FEB"], ["a", "0", "1"]], dtype="<U"
)
</code></pre>
<pre><code>array([['hello', '2024 JAN', '2024 FEB'],
['a', '0', '1']], dtype='<U8')
</code></pre>
<p>How could I go about efficiently storing <code>data</code> into the Apache Arrow format, in a purely columnar format?</p>
<p>The naive/easy way would be to brute-force transform the data into a columnar format in Python, before storing it as an Apache Arrow array.</p>
<p>Here I mean specifically:</p>
<pre class="lang-py prettyprint-override"><code>import re
from typing import Dict

data_header = data[0]
data_proper = data[1:]

date_pattern = re.compile(r"(?P<year>[\d]+) (?P<month>JAN|FEB)")

common_labels: list[str] = []
known_years: set[int] = set()
known_months: set[str] = set()
header_to_date: Dict[str, tuple[int, str]] = dict()

for header in data_header:
    if matches := date_pattern.match(header):
        year, month = int(matches["year"]), str(matches["month"])
        known_years.add(year)
        known_months.add(month)
        header_to_date[header] = (year, month)
    else:
        common_labels.append(header)

# hello, year, month, value
new_rows_per_old_row = len(known_years) * len(known_months)
new_headers = ["year", "month", "value"]

purely_columnar = np.empty(
    (
        1 + data_proper.shape[0] * new_rows_per_old_row,
        len(common_labels) + len(new_headers),
    ),
    dtype=np.object_,
)
purely_columnar[0] = common_labels + ["year", "month", "value"]

for rx, row in enumerate(data_proper):
    common_data = []
    ym_data = []
    for header, element in zip(data_header, row):
        if header in common_labels:
            common_data.append(element)
        else:
            year, month = header_to_date[header]
            ym_data.append([year, month, element])
    for yx, year_month_value in enumerate(ym_data):
        purely_columnar[
            1 + rx * new_rows_per_old_row + yx, : len(common_labels)
        ] = common_data
        purely_columnar[
            1 + rx * new_rows_per_old_row + yx, len(common_labels) :
        ] = year_month_value

print(f"{purely_columnar=}")
</code></pre>
<pre><code>purely_columnar=
array([[np.str_('hello'), 'year', 'month', 'value'],
[np.str_('a'), 2024, 'JAN', np.str_('0')],
[np.str_('a'), 2024, 'FEB', np.str_('1')]], dtype=object)
</code></pre>
<p>Now it is easy enough to create an Arrow array from this re-organized data:</p>
<pre><code>import pyarrow as pa

column_types = [pa.string(), pa.int64(), pa.string(), pa.string()]

pa.table(
    [
        pa.array(purely_columnar[1:, cx], type=column_types[cx])
        for cx in range(purely_columnar.shape[1])
    ],
    names=purely_columnar[0],
)
</code></pre>
<pre><code>pyarrow.Table
hello: string
year: int64
month: string
value: string
----
hello: [["a","a"]]
year: [[2024,2024]]
month: [["JAN","FEB"]]
value: [["0","1"]]
</code></pre>
<p>But is there anything else I can do to store the data in purely columnar form in Apache Arrow format, apart from first brute-forcing the data into the purely columnar format?</p>
|
<python><pyarrow><apache-arrow>
|
2024-06-29 03:39:33
| 1
| 4,654
|
bzm3r
|
78,684,997
| 3,486,684
|
Efficiently storing data that is partially columnar into a DuckDB database in a purely columnar form
|
<p>I have some partially columnar data like this:</p>
<pre><code>"hello", "2024 JAN", "2024 FEB"
"a", 0, 1
</code></pre>
<p>If it were purely columnar, it would look like:</p>
<pre><code>"hello", "year", "month", "value"
"a", 2024, "JAN", 0
"a", 2024, "FEB", 1
</code></pre>
<p>Suppose the data is in the form of a numpy array, like this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
data = np.array([["hello", "2024 JAN", "2024 FEB"], ["a", "0", "1"]], dtype="<U")
data
</code></pre>
<pre><code>array([['hello', '2024 JAN', '2024 FEB'],
['a', '0', '1']], dtype='<U8')
</code></pre>
<p>Imagine also that I created a table:</p>
<pre class="lang-py prettyprint-override"><code>import duckdb as ddb
conn = ddb.connect("hello.db")
conn.execute("CREATE TABLE columnar (hello VARCHAR, year UINTEGER, month VARCHAR, value VARCHAR);")
</code></pre>
<p>How could I go about efficiently inserting <code>data</code> into the DuckDB table <code>columnar</code>?</p>
<p>The naive/easy way would be to brute-force transform the data into a columnar format in-memory, in Python, before inserting it into the DuckDB table.</p>
<p>Here I mean specifically:</p>
<pre class="lang-py prettyprint-override"><code>import re
from typing import Dict

data_header = data[0]
data_proper = data[1:]

date_pattern = re.compile(r"(?P<year>[\d]+) (?P<month>JAN|FEB)")

common_labels: list[str] = []
known_years: set[int] = set()
known_months: set[str] = set()
header_to_date: Dict[str, tuple[int, str]] = dict()

for header in data_header:
    if matches := date_pattern.match(header):
        year, month = int(matches["year"]), str(matches["month"])
        known_years.add(year)
        known_months.add(month)
        header_to_date[header] = (year, month)
    else:
        common_labels.append(header)

# hello, year, month, value
new_rows_per_old_row = len(known_years) * len(known_months)
new_headers = ["year", "month", "value"]

purely_columnar = np.empty(
    (
        1 + data_proper.shape[0] * new_rows_per_old_row,
        len(common_labels) + len(new_headers),
    ),
    dtype=np.object_,
)
purely_columnar[0] = common_labels + ["year", "month", "value"]

for rx, row in enumerate(data_proper):
    common_data = []
    ym_data = []
    for header, element in zip(data_header, row):
        if header in common_labels:
            common_data.append(element)
        else:
            year, month = header_to_date[header]
            ym_data.append([year, month, element])
    for yx, year_month_value in enumerate(ym_data):
        purely_columnar[
            1 + rx * new_rows_per_old_row + yx, : len(common_labels)
        ] = common_data
        purely_columnar[
            1 + rx * new_rows_per_old_row + yx, len(common_labels) :
        ] = year_month_value

print(f"{purely_columnar=}")
</code></pre>
<pre><code>purely_columnar=
array([[np.str_('hello'), 'year', 'month', 'value'],
[np.str_('a'), 2024, 'JAN', np.str_('0')],
[np.str_('a'), 2024, 'FEB', np.str_('1')]], dtype=object)
</code></pre>
<p>Now it is easy enough to store this data in DuckDB:</p>
<pre class="lang-py prettyprint-override"><code>purely_columnar_data = np.transpose(purely_columnar[1:])
conn.execute(
"""INSERT INTO columnar
SELECT * FROM purely_columnar_data
"""
)
conn.sql("SELECT * FROM columnar")
</code></pre>
<pre><code>┌─────────┬────────┬─────────┬─────────┐
│ hello │ year │ month │ value │
│ varchar │ uint32 │ varchar │ varchar │
├─────────┼────────┼─────────┼─────────┤
│ a │ 2024 │ JAN │ 0 │
│ a │ 2024 │ FEB │ 1 │
└─────────┴────────┴─────────┴─────────┘
</code></pre>
<p>But is there any other way in which I can insert the data into a DuckDB in a purely columnar form, apart from brute-forcing the data into a purely columnar form first?</p>
<p><strong>Note:</strong> I have tagged this question with <code>postgresql</code> because <a href="https://duckdb.org/docs/sql/introduction" rel="nofollow noreferrer">DuckDB's SQL dialect closely follows that of PostgreSQL</a>.</p>
|
<python><postgresql><duckdb>
|
2024-06-29 03:31:12
| 2
| 4,654
|
bzm3r
|
78,684,807
| 908,313
|
or tools flexible job shop intervals with multiple machines and per machine availability per job
|
<p>I have been looking at OR-tools flexible job shop examples, specifically around machine jobs with per-job durations. I see examples for this base problem, but not a way to specify specific times of day that particular jobs can be done on a machine.</p>
<p>How would I approach implementing an additional constraint where the machine can do specific jobs during specific time frames? Each job would have a different per machine list of time restrictions, I'm just trying to get this baseline figured out before approaching a similar multi machine situation.</p>
|
<python><or-tools><constraint-programming><cp-sat>
|
2024-06-29 00:22:45
| 0
| 1,130
|
Jarrod Sears
|
78,684,750
| 3,213,204
|
Key-value “sub-parameter” with Python’s argparse
|
<p>I want to make the <code>--new-entry</code> parameter be able to process a key-value sequence with space as statement separator.</p>
<p>As example:</p>
<pre class="lang-bash prettyprint-override"><code>script.py --new-entry name="The Republic" author="Plato" year="-400"
</code></pre>
<p>In this example, <code>name="The Republic" author="Plato" year="-400"</code> is the whole value of the parameter <code>--new-entry</code>. Ideally, I want to get it inside a dictionary, like:</p>
<pre class="lang-py prettyprint-override"><code>{
    "name": "The Republic",
    "author": "Plato",
    …
}
</code></pre>
<h2>What I am not trying to do</h2>
<p>Unlike what some people ask elsewhere, I am not trying to use multiple parameters to capture the different values. All the needed variables (name, author, year) should be retrieved from a single parameter.</p>
<h2>The question</h2>
<p>So, is there a feature like this in argparse, or in any other library?</p>
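<p>For reference, a common pattern (a sketch, not a built-in argparse feature) is <code>nargs='+'</code> plus a small key=value splitter; note the shell already strips the quotes, so <code>name="The Republic"</code> arrives as the single token <code>name=The Republic</code>:</p>

```python
import argparse

def to_dict(pairs):
    """Split ['k=v', ...] tokens into a dict, rejecting malformed ones."""
    result = {}
    for pair in pairs:
        key, sep, value = pair.partition("=")
        if not sep:
            raise argparse.ArgumentTypeError(f"expected key=value, got {pair!r}")
        result[key] = value
    return result

parser = argparse.ArgumentParser()
parser.add_argument("--new-entry", nargs="+", metavar="KEY=VALUE")

# simulating: script.py --new-entry name="The Republic" author=Plato year="-400"
args = parser.parse_args(
    ["--new-entry", "name=The Republic", "author=Plato", "year=-400"]
)
entry = to_dict(args.new_entry)
print(entry)
```

<p>If you need stricter validation or nested values, <code>action</code> subclasses or a third-party parser are options, but for flat key-value pairs this is usually enough.</p>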
|
<python><parameter-passing><argparse><key-value>
|
2024-06-28 23:46:02
| 1
| 321
|
fauve
|
78,684,705
| 5,009,293
|
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90
|
<p>So, yesterday I was having issues trying to define all columns via the data frame's <code>to_sql</code> method. For the most part, this works just fine.</p>
<p>One file I was trying to process, however, is giving me an odd error. See below.</p>
<p>When I run this :</p>
<pre><code> data.to_sql
(
name=f'tbl{table_name}'
, schema='stage'
, con=odbc_ntt.con
, if_exists='replace'
, index=False, dtype=sqlalchemy.types.NVARCHAR(length=2000)
)
</code></pre>
<p>I get error:</p>
<pre><code> UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 2: character maps to <undefined>
</code></pre>
<p>When I run this, it runs just fine.</p>
<pre><code>data.to_sql
(
name=f'tbl{table_name}'
, schema='stage'
, con=odbc_ntt.con
, if_exists='replace'
, index=False, dtype=sqlalchemy.types.NVARCHAR
)
</code></pre>
<p>stack trace:</p>
<pre><code>python -m trace --trace
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\ProgramData\Anaconda3\lib\trace.py", line 755, in <module>
main()
File "C:\ProgramData\Anaconda3\lib\trace.py", line 735, in main
code = compile(fp.read(), opts.progname, 'exec')
File "C:\ProgramData\Anaconda3\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 2: character maps to <undefined>
</code></pre>
<p>Now, I did check the file for unusual characters or other bad actors; these did not exist in the source file. I tried enforcing UTF-8 encoding, with no luck.</p>
<p>I'm kind of at a loss here because the obvious things aren't sticking. Any assistance appreciated.</p>
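<p>For background, byte <code>0x90</code> is one of the few code points that cp1252 leaves undefined, so any read of the file under the default Windows codec fails no matter what the visible content looks like. A minimal reproduction plus a fallback decode (the latin-1 fallback is an assumption: it maps every byte, so it always succeeds, but may mis-render some characters):</p>

```python
def safe_decode(raw: bytes) -> str:
    # 0x90 is undefined in cp1252, so that decode path raises UnicodeDecodeError;
    # latin-1 maps all 256 byte values, so it never fails
    try:
        return raw.decode("cp1252")
    except UnicodeDecodeError:
        return raw.decode("latin-1")

print(safe_decode(b"abc"), safe_decode(b"\x90abc"))
```

<p>This suggests the difference between the two <code>to_sql</code> calls is in which code path ends up decoding the data, not in the data itself.</p>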
|
<python><dataframe><sqlalchemy><anaconda>
|
2024-06-28 23:12:46
| 0
| 7,207
|
Doug Coats
|
78,684,619
| 2,153,235
|
"matplotlibrc" is read at the "startup" of what?
|
<p>According to <a href="https://matplotlib.org/stable/users/explain/customizing.html#customizing-with-matplotlibrc-files" rel="nofollow noreferrer">matplotlibrc documentation</a>, "The <code>matplotlibrc</code> is read at startup to configure Matplotlib." It didn't say the startup of what. Can I assume that it means when <code>matplotlib</code> or subordinate packages are imported, e.g., <code>import matplotlib.pyplot as plt</code>?</p>
|
<python><matplotlib>
|
2024-06-28 22:28:35
| 1
| 1,265
|
user2153235
|
78,684,613
| 3,213,204
|
Match a patern with multiple entries in arbitrary order in Python with re
|
<p>I try to catch values entered in syntax like this one <code>name="Game Title" authors="John Doe" studios="Studio A,Studio B" licence=ABC123 url=https://example.com command="start game" type=action code=xyz78</code></p>
<p>But the <code>name</code>, <code>author</code>, <code>studio</code>, …, <code>code</code> statements could appear in an arbitrary order, different from the one above.</p>
<p>For the moment here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import re

input_string = 'name="Game Title" authors="John Doe" studios="Studio A,Studio B" licence=ABC123 url=https://example.com command="start game" type=action code=xyz789'

ADD_GAME_PATTERN = (
    r'(?P<name>(?:"[^"]*"|\'[^\']*\'|[^"\']*))\s+'
    r'licence=(?P<licence>[a-z0-9]*)\s+'
    r'type=(?P<typeCode>[a-z0-9]*)\s+'
    r'command=(?P<command>(?:"[^"]*"|\'[^\']*\'|[^"\']*))\s+'
    r'url=(?P<url>\S+)\s+'
    r'code=(?P<code>[a-z0-9]*)\s+'
    r'studios=(?P<studios>.*)\s+'
    r'authors=(?P<authors>.*)\s+'
)

match = re.match(ADD_GAME_PATTERN, input_string)

if match:
    name = match.group('name')
    code = match.group('code')
    licence = match.group('licence')
    type_code = match.group('typeCode')
    command = match.group('command')
    url = match.group('url')
    studios = match.group('studios')
    authors = match.group('authors')

    print(f"Name: {name}")
    print(f"Code: {code}")
    print(f"Licence: {licence}")
    print(f"Type: {type_code}")
    print(f"Command: {command}")
    print(f"URL: {url}")
    print(f"Studios: {studios}")
    print(f"Authors: {authors}")
else:
    print("No correspondence found.")
</code></pre>
<p>But in its current state the pattern expects the statements in that exact order.</p>
<p>So, how can I allow the statements to appear in a different, arbitrary order?</p>
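<p>One common order-independent approach (a sketch; the quoting rules are an assumption based on the example input) is to match repeated <code>key=value</code> tokens with a single pattern and collect them into a dict, then pick out the fields you need afterwards:</p>

```python
import re

KV_PATTERN = re.compile(r'(?P<key>\w+)=(?P<value>"[^"]*"|\'[^\']*\'|\S+)')

def parse_statements(text):
    """Collect key=value tokens, in any order, stripping surrounding quotes."""
    return {m["key"]: m["value"].strip("\"'") for m in KV_PATTERN.finditer(text)}

input_string = ('name="Game Title" authors="John Doe" studios="Studio A,Studio B" '
                'licence=ABC123 url=https://example.com command="start game" '
                'type=action code=xyz789')
print(parse_statements(input_string))
```

<p>Validation (required keys present, value formats) then becomes a separate, simpler step on the resulting dict instead of being baked into one giant positional regex.</p>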
|
<python><regex><python-re>
|
2024-06-28 22:25:10
| 1
| 321
|
fauve
|
78,684,592
| 2,475,195
|
Pandas hybrid rolling mean
|
<p>I want to consider a window of size 3, in which I take the last value from the <code>b</code> column, and the other 2 values from the <code>a</code> column, like in this example:</p>
<pre><code>df = pd.DataFrame.from_dict({'a': [1, 2, 3, 4, 5], 'b': [10, 20, 30, 40, 50]})
df['hybrid_mean_3'] = [10, 10.5, 11, 15, 19]  # desired output -- how do I calculate this?

   a   b  hybrid_mean_3
0  1  10           10.0
1  2  20           10.5  # (20+1) / 2
2  3  30           11.0  # (30+2+1) / 3
3  4  40           15.0  # (40+3+2) / 3
4  5  50           19.0  # (50+4+3) / 3
</code></pre>
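<p>The window logic, spelled out in plain Python (a reference implementation of the rule, not a vectorised pandas answer; a pandas version could sum <code>df['b']</code> with shifted copies of <code>df['a']</code> and divide by the actual window size):</p>

```python
def hybrid_mean_3(a, b):
    # window i: b[i] plus up to two preceding values of a
    out = []
    for i in range(len(b)):
        window = [b[i]] + a[max(0, i - 2):i]
        out.append(sum(window) / len(window))
    return out

print(hybrid_mean_3([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))
```

<p>The edge handling at the start (windows of size 1 and 2) is what makes a one-line rolling expression awkward; the explicit loop keeps it obvious.</p>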
|
<python><pandas><dataframe><rolling-computation>
|
2024-06-28 22:16:00
| 1
| 4,355
|
Baron Yugovich
|
78,684,570
| 395,857
|
Is there any point in setting `fp16_full_eval=True` if training in `fp16`?
|
<p>I train a Huggingface model with <a href="https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments.fp16_opt_level" rel="nofollow noreferrer"><code>fp16=True</code></a>, e.g.:</p>
<pre class="lang-py prettyprint-override"><code> training_args = TrainingArguments(
output_dir="./results",
evaluation_strategy="epoch",
learning_rate=4e-5,
lr_scheduler_type="cosine",
per_device_train_batch_size=32,
per_device_eval_batch_size=16,
num_train_epochs=3,
weight_decay=0.01,
fp16=True,
)
</code></pre>
<p>Is there any point in also setting <code>fp16_full_eval=True</code>? Or is that already implied by <code>fp16=True</code>?</p>
<p>Same question for <code>bf16</code>.</p>
|
<python><huggingface-transformers><half-precision-float>
|
2024-06-28 22:04:10
| 1
| 84,585
|
Franck Dernoncourt
|
78,684,472
| 4,591,810
|
How to convert a glb 3D design file to a step file in python?
|
<p>I have found several libraries, <a href="https://github.com/tpaviot/pythonocc-core" rel="nofollow noreferrer"><code>pythonOCC</code></a>, <a href="https://gitlab.com/dodgyville/pygltflib" rel="nofollow noreferrer"><code>pygltflib</code></a> which have data exchange functions. However, I cannot figure out how to convert a GLB file to a STEP file.</p>
<p>For example, <code>pythonOCC</code> has a <code>read_gltf_file</code> and <code>write_step_file</code> function.</p>
<p>I use <code>pygltflib</code> to read a GLB and save it as GLTF (just a JSON version of GLB). However, <code>pythonOCC</code> fails to read and convert this to STEP.</p>
<pre><code>GLB --(pygltflib)--> GLTF --(pythonOCC)--> STEP
</code></pre>
<p>Do any tried and tested libraries exist for this conversion?</p>
<p>Here is what I do:</p>
<pre class="lang-py prettyprint-override"><code>from OCC.Extend.DataExchange import read_gltf_file, write_step_file
from pygltflib import GLTF2

geom = GLTF2().load(glb_src_file)
geom.save_json(gltf_file)  # I can open this file in Windows 3D Viewer

gltf = read_gltf_file(gltf_file, verbose=True)  # [] empty list for some reason
if gltf:
    write_step_file(gltf)  # only if the last step works
</code></pre>
<p>what I want is a function:</p>
<pre class="lang-py prettyprint-override"><code>def glb2step(path):
    # what to do here?
    # writes step file to disk
</code></pre>
<p>The sample glb file I am working with is <a href="https://filebin.net/hd6gvg9nbkgkfcey" rel="nofollow noreferrer">here</a>.</p>
|
<python><file-conversion><step>
|
2024-06-28 21:18:51
| 1
| 3,731
|
hazrmard
|
78,684,119
| 2,475,195
|
Pandas dataframe - keep values that are certain number of rows apart
|
<p>I have a column of 0s and 1s. I want to keep a 1 in the output column only if it is at least 4 rows after the previously kept 1.
Note that simply doing <code>diff()</code> is not a solution, because this would eliminate too many <code>1</code>s. Here's an example:</p>
<pre><code>df = pd.DataFrame.from_dict({'ix': list(range(12)), 'in': [1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1]})
df['out'] = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]  # desired output

    ix  in  out
0    0   1    1
1    1   0    0
2    2   0    0
3    3   1    0   # this 1 needs to become 0
4    4   1    1   # we keep this 1, because the previously kept one is sufficiently far
5    5   0    0
6    6   1    0
7    7   0    0
8    8   1    1
9    9   0    0
10  10   0    0
11  11   1    0
</code></pre>
<p>Intuitively it seems like it should be solved with some combination of grouping, <code>diff</code> and <code>cumsum()</code>, but I haven't been able to figure it out.</p>
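<p>For reference, the keep/drop decision is inherently sequential (each kept 1 depends on where the previous one was kept), which is why pure <code>diff</code>/<code>cumsum</code> tricks are awkward here. A plain-Python reference implementation of the rule:</p>

```python
def keep_spaced_ones(values, min_gap=4):
    # keep a 1 only if the last kept 1 is at least `min_gap` rows back
    out = []
    last_kept = None
    for i, v in enumerate(values):
        if v == 1 and (last_kept is None or i - last_kept >= min_gap):
            out.append(1)
            last_kept = i
        else:
            out.append(0)
    return out

print(keep_spaced_ones([1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1]))
```

<p>Applied to the frame it would be <code>df['out'] = keep_spaced_ones(df['in'].tolist())</code>; any correct vectorised answer has to reproduce exactly this recurrence.</p>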
|
<python><pandas><dataframe>
|
2024-06-28 19:11:30
| 3
| 4,355
|
Baron Yugovich
|
78,684,115
| 5,924,007
|
Python: Relative import with no known parent package
|
<p>I have an AWS Python lambda. The directory structure is as follows"</p>
<pre><code>src
__init__.py
main.py
service.py
</code></pre>
<p>I am initiating a detabase connection in <code>__init__.py</code> file and then importing the connection variable in <code>main.py</code></p>
<p><code>from . import conn</code></p>
<p>I get the following error:</p>
<blockquote>
<p>ImportError: attempted relative import with no known parent package.</p>
</blockquote>
<p>I am new to Python and trying to get a hang of the imports. Shouldn't <code>main.py</code> have access to everything from <code>__init__.py</code> since they are in the same package called <code>src</code></p>
|
<python><python-3.x><aws-lambda><relative-import>
|
2024-06-28 19:10:16
| 1
| 4,391
|
Pritam Bohra
|
78,684,063
| 8,584,739
|
Python strptime not working when time zone is PST
|
<p>I have a python function to convert a given date time string to epoch time. It is working when the date time string has the time zone, say IST. I need to change the time zone to PST/PDT for obvious reasons, but it throws <code>ValueError: time data '01-06-2024 21:30 PDT' does not match format '%d-%m-%Y %H:%M %Z'</code></p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/python3

def convertTime(dateStr):
    from datetime import datetime
    import pytz
    dateFormat = "%d-%m-%Y %H:%M %Z"
    naiveDateTime = datetime.strptime(dateStr, dateFormat)
    # tTimezone = pytz.timezone("Asia/Kolkata")
    tTimezone = pytz.timezone("America/Los_Angeles")
    localizedDateTime = tTimezone.localize(naiveDateTime)
    epochTime = int(localizedDateTime.timestamp())
    # return epochTime
    print(epochTime)

# convertTime("01-06-2024 21:30 IST")
convertTime("01-06-2024 21:30 PDT")
</code></pre>
<p>As I said, if I use <code>"01-06-2024 21:30 IST"</code> and <code>tTimezone = pytz.timezone("Asia/Kolkata")</code>, it just works. Is there any way the PDT date-time can be converted to epoch?</p>
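<p>The <code>%Z</code> directive generally only accepts <code>UTC</code>, <code>GMT</code> and the local machine's own timezone abbreviations, which is why IST happens to parse on a machine set to IST but PDT does not. A sketch of a workaround is to strip the abbreviation and map it to an IANA zone yourself (the mapping dict below is an assumption of this sketch):</p>

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; pytz works similarly

# Assumed abbreviation -> IANA zone mapping; extend as needed.
TZ_MAP = {"PST": "America/Los_Angeles", "PDT": "America/Los_Angeles",
          "IST": "Asia/Kolkata"}

def convert_time(date_str):
    *date_parts, abbrev = date_str.split()
    naive = datetime.strptime(" ".join(date_parts), "%d-%m-%Y %H:%M")
    aware = naive.replace(tzinfo=ZoneInfo(TZ_MAP[abbrev]))
    return int(aware.timestamp())

print(convert_time("01-06-2024 21:30 PDT"))  # 1717302600
```

<p>Note the zone name (not the abbreviation) is what decides between PST and PDT; the library picks the correct offset for the given date.</p>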
|
<python><python-3.x>
|
2024-06-28 18:54:24
| 1
| 1,228
|
Vijesh
|
78,684,013
| 3,361,850
|
Minimize repetitions by removing all occurrences of one number
|
<p>I wrote a program in Python 3 that is correct (it gives the right results), but it has O(n^2) time complexity, and I want to improve that.
When I run it on the platform <a href="https://concours.algorea.org/contents/4807-4802-1735320797656206224-103413922876285801-1426630568355953802/" rel="nofollow noreferrer">algorea</a>, it passes only 63% of the tests (10 of 16), because some tests take too long.
Could you suggest a better, more efficient algorithm?</p>
<h3>Problem Statement</h3>
<p>In a list of integers, a repetition is a pair of equal numbers that are adjacent to each other. For example, in the list 1 3 3 4 2 2 1 1 1, there are four repetitions: the two 3's, then the two 2's, then the two 1's that follow, and finally the two 1's at the end of the list.</p>
<p>You are given a list of integers. Write a program that calculates the minimum number of repetitions that can remain in the list once all occurrences of one of the numbers are removed.</p>
<h3>Output</h3>
<p>You should output a single number: the minimum number of repetitions that can remain after removing all occurrences of one of the numbers in the list.</p>
<h3>Examples</h3>
<p>Here is an example of input:</p>
<pre><code>liste = [1, 3, 2, 2, 3, 4, 4, 2, 1]
</code></pre>
<p>The list of 9 numbers is "1 3 2 2 3 4 4 2 1". It contains two repetitions (the first two 2's and the two 4's):</p>
<ul>
<li>Removing the 1's leaves the two repetitions;</li>
<li>Removing the 2's brings the 3's together, so we removed one repetition and added one. There are still two repetitions left;</li>
<li>Removing the 3's leaves the two repetitions;</li>
<li>Removing the 4's leaves only one repetition.</li>
</ul>
<p>If we remove all occurrences of the number 4, only one repetition remains. It is not possible to get fewer than this, so your program should output:</p>
<pre><code>1
</code></pre>
<h3>Constraints</h3>
<ul>
<li>Time limit: 1000 ms.</li>
<li>Memory limit: 64,000 kb.</li>
</ul>
<p>The time limit is set so that a solution which iterates a small number of times over the entire list can score full points, but a solution which for each number in the list, loops over all other numbers in the list can only solve about half the tests without exceeding the time limit.</p>
<h3>My Solution</h3>
<pre><code>liste = [1, 3, 2, 2, 3, 4, 4, 2, 1]
# liste = [1,3,3,4,2,2,1,1,1]
repmin=[]
listunique =set(liste)
for x in listunique:
    listWithoutx = []
    for i in liste:
        if i != x:
            listWithoutx.append(i)
    rep = 0
    for j in range(len(listWithoutx) - 1):
        if listWithoutx[j] == listWithoutx[j + 1]:
            rep += 1
    repmin.append(rep)
print(min(repmin))
</code></pre>
<p>How could I improve the time complexity, so that the program runs quickly?</p>
<p>Thanks in advance</p>
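<p>A linear-time approach, offered as a sketch rather than a graded solution: compress the list into maximal runs. A run of length <code>k</code> holds <code>k-1</code> repetitions; removing a value deletes the repetitions inside its runs, and creates exactly one new repetition for each of its runs whose left and right neighbouring runs carry the same value (the neighbours merge).</p>

```python
from itertools import groupby

def min_repetitions(lst):
    # Compress into (value, run_length) pairs; a run of length k holds k-1 reps.
    runs = [(v, len(list(g))) for v, g in groupby(lst)]
    total = sum(k - 1 for _, k in runs)
    removed, created = {}, {}
    for i, (v, k) in enumerate(runs):
        # Removing v deletes the k-1 repetitions inside each of its runs...
        removed[v] = removed.get(v, 0) + (k - 1)
        # ...but merges its neighbours; if they hold the same value, the
        # merge creates exactly one new repetition per such run of v.
        if 0 < i < len(runs) - 1 and runs[i - 1][0] == runs[i + 1][0]:
            created[v] = created.get(v, 0) + 1
    return min(total - removed[v] + created.get(v, 0) for v in removed)

print(min_repetitions([1, 3, 2, 2, 3, 4, 4, 2, 1]))  # 1
print(min_repetitions([1, 3, 3, 4, 2, 2, 1, 1, 1]))  # 2
```

<p>Each element is visited a constant number of times, so this runs in O(n).</p>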
|
<python><python-3.x><algorithm><time-complexity>
|
2024-06-28 18:35:05
| 2
| 1,063
|
Mourad BENKDOUR
|
78,683,994
| 12,466,687
|
How to add horizontal lines dynamically to a facet plot in plotly python?
|
<p>I am trying to add <code>upper</code> and <code>lower</code> limits <code>red dashed horizontal lines</code> to each <strong>facet Line plot</strong> based on respective column values but dashed lines are not formed at correct yaxis values of the facets.</p>
<p>I have tried below code and made several attempts but it is <strong>not</strong> creating the right <strong>yaxis dashed lines</strong> for the <code>facets</code> and gives a wrong chart output.</p>
<p>Code I have tried:</p>
<p><strong>dataframe</strong></p>
<pre><code>import polars as pl
import datetime
import plotly.express as px
df_stack = pl.DataFrame({'Category': ['CATG1', 'CATG1', 'CATG1', 'CATG2', 'CATG2', 'CATG2', 'CATG3', 'CATG3', 'CATG3', 'CATG4', 'CATG4', 'CATG4', 'CATG5', 'CATG5', 'CATG5', 'CATG6', 'CATG6', 'CATG6', 'CATG7', 'CATG7'],
'Value': [39.7, 29.0, 32.7, 19.0, 14.0, 15.0, 1.28, 1.2, 1.14, 5.6, 6.9, 4.9, 31.02, 24.17, 28.68, 14.49, 11.29, 13.4, 3.81, 3.5],
'Lower Range': [12.0, 12.0, 12.0, 6.0, 6.0, 6.0, 0.9, 0.9, 0.9, 3.5, 3.5, 3.5, 23.0, 23.0, 23.0, 5.5, 5.5, 5.5, 2.5, 2.5],
'Upper Range': [43.0, 43.0, 43.0, 21.0, 21.0, 21.0, 1.3, 1.3, 1.3, 7.2, 7.2, 7.2, 33.0, 33.0, 33.0, 19.2, 19.2, 19.2, 4.5, 4.5],
'Date': [datetime.date(2022, 4, 1),
datetime.date(2022, 9, 9),
datetime.date(2023, 5, 15),
datetime.date(2022, 4, 1),
datetime.date(2022, 9, 9),
datetime.date(2023, 5, 15),
datetime.date(2022, 4, 1),
datetime.date(2022, 9, 9),
datetime.date(2023, 5, 15),
datetime.date(2022, 4, 1),
datetime.date(2022, 9, 9),
datetime.date(2023, 5, 15),
datetime.date(2022, 4, 1),
datetime.date(2022, 9, 9),
datetime.date(2023, 5, 15),
datetime.date(2022, 4, 1),
datetime.date(2022, 9, 9),
datetime.date(2023, 5, 15),
datetime.date(2022, 4, 1),
datetime.date(2022, 9, 9)],
'Color_flag': ['Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal']}
)
</code></pre>
<p><strong>Plot Code</strong></p>
<pre><code># creating line plot
fig_facet_catg_31 = px.line(df_stack.to_pandas(), x='Date', y='Value', facet_row="Category",
                            facet_col_wrap=2, facet_row_spacing=0.035)

# creating scatter plot
fig_facet_catg_3 = (px.scatter(df_stack.to_pandas(),
                               x='Date', y='Value', color='Color_flag',
                               color_discrete_sequence=["red", "green"], facet_row="Category",
                               facet_row_spacing=0.035)
                    .add_traces(fig_facet_catg_31.data)
                    .update_yaxes(matches=None, showticklabels=True)
                    .update_layout(height=600)
                    .for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1]))
                    )

# Getting unique values of each category & converting it into a list
category_list = df_stack.sort('Category').get_column('Category').unique(maintain_order=True).to_list()

# Looping with index & value of category list
for i, elem in enumerate(category_list):
    print(i, elem)
    # filter for each category 1 by 1 in loop
    data = df_stack.filter(pl.col('Category') == elem)
    # Extract Upper & lower range values for respective category
    upper = data.item(0, "Upper Range")
    lower = data.item(0, "Lower Range")
    print(lower, upper)
    # Adding h_line() to each facet based on row index & range values inside loop.
    fig_facet_catg_3 = (fig_facet_catg_3.add_hline(y=upper, row=i, line_dash="dash", line_color='red')
                        .add_hline(y=lower, row=i, line_dash="dash", line_color='red'))

fig_facet_catg_3
</code></pre>
<p><strong>Output</strong>
Incorrect Plot as each facet is not aligned to its upper lower limits:</p>
<p><a href="https://i.sstatic.net/pLMP4yfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pLMP4yfg.png" alt="enter image description here" /></a></p>
<p><strong>Desired output:</strong> Ideally the lines should be within the <code>dashed</code> marking <strong>red lines</strong> but dashed lines are not using the correct y values.</p>
|
<python><plotly><python-polars><facet>
|
2024-06-28 18:28:53
| 1
| 2,357
|
ViSa
|
78,683,848
| 22,479,232
|
Attribute error : None type object in Linked List
|
<p>I tried to create a simple single-linked list as the following:</p>
<pre class="lang-py prettyprint-override"><code>class node:
    def __init__(self, data=None):
        self.data = data
        self.next = None

class linkedlist:
    def __init__(self):
        self.head = node()

    def append(self, data):
        curr = self.head
        new_node = node(data)
        while curr.next != None:
            curr = curr.next
        curr.next = new_node

    def total(self):
        curr = self.head
        total = 0
        while curr.next != None:
            curr = curr.next
            total += 1
        return total

    def display(self):
        # added total here as well
        curr = self.head
        em = []
        total = 0
        while curr.next != None:
            curr = curr.next
            em.append(curr.data)
            total += 1
        print(f"LinkedList: {em} \n Total: {self.total()} ")

    def look(self, index):
        if index >= self.total():
            print("ERROR: Index error")
        curr_index = self.head
        idx = 0
        while True:
            curr_index = curr_index.next
            if idx == index:
                print(curr_index.data)
            idx += 1
</code></pre>
<p>Whenever I am calling the <code>look()</code> function, I am getting the following error:</p>
<pre><code>curr_index = curr_index.next
^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'next'
</code></pre>
<p>What could this mean?</p>
<p>As far as I know, <code>curr_index</code> is using <code>self.head</code>, which in turn uses <code>node()</code>. The <code>node()</code> class does have the <code>next</code> attribute.</p>
<p>This error isn't there when I call the other functions in the class. Moreover, the other functions -- except the <code>look()</code> function -- returns their respective values error-free.</p>
<p>Where is my code going wrong?</p>
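<p>The likely cause: <code>look()</code> prints the index-error message but does not <code>return</code>, and the <code>while True:</code> loop never breaks even after the target index is printed, so <code>curr_index</code> eventually advances past the last node to <code>None</code> and <code>curr_index.next</code> fails. A sketch of a bounded version (class names renamed to PEP 8 style, and the error turned into an exception, both my own choices):</p>

```python
class Node:
    def __init__(self, data=None):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = Node()          # sentinel node carrying no data

    def append(self, data):
        curr = self.head
        while curr.next is not None:
            curr = curr.next
        curr.next = Node(data)

    def total(self):
        count, curr = 0, self.head
        while curr.next is not None:
            curr = curr.next
            count += 1
        return count

    def look(self, index):
        if index >= self.total():
            raise IndexError(f"index {index} out of range")  # stop here
        curr = self.head.next       # first real node
        for _ in range(index):      # bounded loop instead of `while True`
            curr = curr.next
        return curr.data

ll = LinkedList()
for x in (10, 20, 30):
    ll.append(x)
print(ll.look(1))  # 20
```
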
|
<python><data-structures><linked-list><nonetype>
|
2024-06-28 17:47:55
| 2
| 351
|
Epimu Salon
|
78,683,609
| 893,254
|
I want to write and deploy a single thread, single process Flask/Gunicorn application? How can I maintain persistent state?
|
<p>Please note that if you have seen a similar question from myself on <code>stackoverflow.com/beta/discussions</code> this question is intended to be focused with (hopefully) a specific answer, rather than a more open ended discussion with a range of possible answers.</p>
<p>I am writing a Flask application which I intend to run with Gunicorn. This application needs to maintain some internal state. For the sake of argument, assume this internal state is as simple as a key-value store. It could be as simple as a Python dictionary, with the webapp presenting <code>GET</code> and <code>POST</code> methods to get or update key value pairs.</p>
<p>In order to share this state, I must configure Gunicorn to run as a single process. (Otherwise some mechanism would be required to share state between processes, and this isn't something I want to implement.)</p>
<p>I could configure it to run with multiple threads. But I would have to protect the data structure using locks. (Again, this isn't a route I want to go down. This is a very simple demonstration project, and the web app part is not the important part of the demo.)</p>
<p>From this I conclude that the best way to deploy this is with Gunicorn configured to use a single process and a single thread. This solves any data-sharing and concurrencuy issues.</p>
<p>What I do not understand is <strong>how do I create shared state in a Flask application?</strong></p>
<p>I have read multiple sources which appear to have conflicting information. Some sources say this is not possible. Others say that the <code>flask.g</code> object can be used, but only within a single request. (Which I assume means the state will be created and destroyed many times throughout the lifetime of the application? If this is the case this clearly will not work.)</p>
<p>Here is some example code. How should I be using the state of <code>global_data</code> in the context of the <code>GET</code> and <code>POST</code> requests?</p>
<pre><code>from flask import Flask, request

app = Flask(__name__)

global_data = {}  # ? how should I initialize this global state ?

@app.get('/get')
def get():
    content = request.json
    key = content.get('key')
    return global_data[key]

@app.post('/put')
def put():
    content = request.json
    key = content.get('key')
    value = content.get('value')
    global_data[key] = value
|
<python><flask>
|
2024-06-28 16:46:21
| 1
| 18,579
|
user2138149
|
78,683,536
| 7,886,968
|
How to find the location of a specific "include" library in Python?
|
<p>Within a particular Python program, how do I know what and where a particular imported library is?</p>
<p>For example: given <code>from easygopigo import EasyGoPiGo3</code>, how do I find which one of the many <code>easygopigo</code> libraries is being used?</p>
<p>In other words, I'd like the Python equivalent of <code>which [command]</code> as in "which [library file]", that would give me the path and filename of the file that would be used when I include it.</p>
<p>Specifically, I have a robot running Raspberry Pi O/S Buster with Python 3.7 installed. Many of the robot's programming interfaces are Python files that include various libraries. For example, two libraries are universally used: <code>easygopigo</code> and <code>gopigo</code>.</p>
<p>For whatever reason, there seem to be six or seven copies of these files scattered around, some in the "pi" user's home directory and/or subdirectories, others in various other places within the filesystem.</p>
<p>The operating system version and the version of Python installed, (etc.), are hard requirements and cannot be changed without a non-trivial effort which isn't going to happen (there are ongoing efforts to update the gopigo libraries to run on Bullseye and Bookworm, but these are long-term projects that won't help me here as they're not even close to completion).</p>
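<p>For reference, the closest equivalent of <code>which</code> for imports is <code>importlib.util.find_spec</code>, which reports the file Python would load without actually importing it; a sketch (the helper name is my own):</p>

```python
import importlib.util

def which_module(name):
    """Path the interpreter would load `name` from, or None if not found."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None)

# On the robot this should print the path of the easygopigo copy in use;
# elsewhere it prints None because the module is not installed.
print(which_module("easygopigo"))
print(which_module("json"))  # e.g. /usr/lib/python3.7/json/__init__.py
```

<p>Run with the same interpreter the robot programs use (e.g. <code>python3.7 -c "import easygopigo; print(easygopigo.__file__)"</code> is an equivalent one-liner after import), since each interpreter has its own search path.</p>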
|
<linux><python><raspbian><debian-buster>
|
2024-06-28 16:03:08
| 1
| 643
|
Jim JR Harris
|
78,683,374
| 893,254
|
Due to the Python GIL, access to "most" variables is (usually said to be) thread safe. However access to global variables is not. What explains this?
|
<p>There are several questions on Stack Overflow which explain why access to global variables is not thread safe in Python, even with the presence of the Global Interpreter Lock.</p>
<p>This reason for this is that the GIL only permits a single Python Interpreter instance to execute bytecode at any one time. However, since many operations on data in memory consist of several byte code operations (they are not atomic), if two threads switch such that one stops running and another resumes, this might take place while the state in memory and the state of the interpreter are not aligned at a synchronization point. (Think "race condition".)</p>
<p>If the above statement isn't immediatly clear, there are some good explanations available on this site. <a href="https://stackoverflow.com/questions/37990533/is-global-counter-thread-safe-in-python">Here is one such example.</a></p>
<p>However, in direct contradiction to this, it is often stated that the purpose of the GIL is to simplify multi-thread programming by preventing race conditions. <a href="https://developer.vonage.com/en/blog/removing-pythons-gil-its-happening" rel="nofollow noreferrer">Here is one such example.</a> (<em>though this may be, and seems to me to be, a common misconception or misunderstanding</em>)</p>
<p>However, this is not a sufficiently detailed explanation, since while global variables are not protected from race conditions, for the reasons described above, <em>some variables are</em>, and this is frequently stated as common lore.</p>
<p>So - why is access to "not global" ("some") variables guaranteed to be thread safe? Or yet better, <strong>under what conditions is access to a variable guaranteed to be thread safe?</strong></p>
<p>One relatively easy to understand example is that of local variables: Since each function call has its own independent stack frame and set of local variables, clearly one thread cannot create a race condition with another in the context of local variables which have a lifetime which is confined to that of a function call.</p>
<p>Put another way, if multiple threads are dispatched with an entry point of <code>my_function</code> any variables created during the lifetime of this function call are independent of all other threads running in parallel.</p>
<p>As an extension to this idea, if a reference to a (local or otherwise) variable is passed into multiple functions which execute concurrently, this would <strong>not</strong> be guaranteed to be thread safe, because two or more threads are holding a reference to the same object in memory. (This is essentially very similar to that of the global variable example.)</p>
<p>Are there any other examples? Or do these two examples sufficiently explain the set of possibilities that must be considered?</p>
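<p>For illustration, here is the canonical shared-counter case: the increment is exactly the non-atomic read-modify-write sequence described above, and a lock is what restores the guarantee (a sketch, not a demonstration of the race itself, which is nondeterministic):</p>

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:        # without this lock, `counter += 1` is a
            counter += 1  # read-modify-write that can lose updates

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000, deterministically, because of the lock
```

<p>With the lock removed, the final count may fall short of 40000 on some runs, which is the GIL-does-not-protect-you case the question describes.</p>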
|
<python><multithreading>
|
2024-06-28 15:48:55
| 1
| 18,579
|
user2138149
|
78,683,270
| 1,304,376
|
How to watch recursion depth in vscode?
|
<p>I'm trying to convert a custom JSON-like response structure to a JSON object. Since lists can contain dicts and vice versa, I'm handling them recursively as they appear. While the recursion is deep, I don't think it should have exceeded the recursion limit, so I think I have a bug in the handling. Or maybe I'm misunderstanding how VS Code counts recursion, the code really is running out of recursion depth, and I should increase the limit.</p>
<p>So, I would like to keep a watch on the recursion depth as I step through and try to get a sense of where I'm growing the recursion when I should be returning. However, I don't see a variable I can watch in VS code to see the recursion depth. Please help</p>
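<p>There is no built-in debugger variable for this, but a sketch of a workaround is to watch an expression that counts the frames on the current call stack (the helper name below is my own):</p>

```python
import traceback

def depth():
    """Number of frames currently on the call stack."""
    return len(traceback.extract_stack())

# In VS Code, the equivalent one-liner can be added as a watch expression:
#     len(__import__("traceback").extract_stack())
def walk(obj):
    print("depth:", depth())      # grows by one per level of nesting
    if isinstance(obj, dict):
        for v in obj.values():
            walk(v)
    elif isinstance(obj, list):
        for v in obj:
            walk(v)

walk({"a": [{"b": [1, 2]}]})
```

<p>The absolute number includes the interpreter's own frames, so watch how it changes between steps rather than its raw value; comparing it against <code>sys.getrecursionlimit()</code> tells you how close you are to the limit.</p>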
|
<python><visual-studio-code><recursion>
|
2024-06-28 15:23:55
| 0
| 1,676
|
Ching Liu
|
78,683,162
| 439,497
|
Why does sign of eigenvector flip for small covariance change?
|
<p>The two covariance matrices differ only in the values at the top-right and bottom-left. However, the sign of the principal-component eigenvectors (right column) has flipped.</p>
<pre><code>import numpy as np
cov = np.array([[9369, 7060, 16469],
[7060, 8127, 23034],
[16469, 23034, 126402]])
eigenvalues, eigenvectors = np.linalg.eigh(cov)
print('Cov 1')
print(eigenvalues)
print(eigenvectors)
cov2 = np.array([[9369, 7060, 18969],
[7060, 8127, 23034],
[18969, 23034, 126402]])
eigenvalues, eigenvectors = np.linalg.eigh(cov2)
print('Cov 2')
print(eigenvalues)
print(eigenvectors)
</code></pre>
<p>Results:</p>
<pre><code>Cov 1
[ 1188.72951707 9507.3058357 133201.96464723]
[[-0.5549483 0.82002403 0.13997491]
[ 0.8280924 0.52849364 0.18696912]
[-0.07934332 -0.21967035 0.97234231]]
Cov 2
[ 1390.92828009 8581.02234156 133926.04937835]
[[-0.57175381 0.80502124 -0.15823521]
[ 0.817929 0.54427789 -0.18642352]
[-0.06395096 -0.23601353 -0.96964318]]
</code></pre>
<p>How can I either prevent this happening, or 'correct' in some way so that sign is same - either positive or negative in both cases?</p>
<p>EDIT:</p>
<p>Does anyone have a C# version as well?</p>
<p>Can someone tell me if the same issue happens with MathNet.Numerics' Evd()?</p>
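<p>Eigenvectors are only defined up to sign, so any library (including MathNet's Evd) may legitimately return either orientation. A common convention, sketched below, is to flip each column so that its largest-magnitude component is positive; the same post-processing can be applied in C#:</p>

```python
import numpy as np

def fix_signs(vectors):
    """Flip each eigenvector (column) so that its largest-magnitude
    entry is positive (assumes that entry is nonzero)."""
    idx = np.argmax(np.abs(vectors), axis=0)
    signs = np.sign(vectors[idx, np.arange(vectors.shape[1])])
    return vectors * signs

cov = np.array([[9369, 7060, 16469],
                [7060, 8127, 23034],
                [16469, 23034, 126402]])
cov2 = np.array([[9369, 7060, 18969],
                 [7060, 8127, 23034],
                 [18969, 23034, 126402]])
_, vecs1 = np.linalg.eigh(cov)
_, vecs2 = np.linalg.eigh(cov2)
vecs1, vecs2 = fix_signs(vecs1), fix_signs(vecs2)
print(vecs1[:, 2], vecs2[:, 2])  # last columns now share the same orientation
```

<p>Any deterministic tie-break works (first nonzero entry positive is another common one); the point is to apply the same rule to every decomposition you compare.</p>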
|
<python><eigenvector>
|
2024-06-28 14:58:09
| 2
| 7,005
|
ManInMoon
|
78,683,099
| 1,726,633
|
Gradient flow in Pytorch for autocallable options
|
<p>I have the following code:</p>
<pre><code>import numpy as np
import torch
from torch import autograd
# Define the parameters with requires_grad=True
r = torch.tensor(0.03, requires_grad=True)
q = torch.tensor(0.02, requires_grad=True)
v = torch.tensor(0.14, requires_grad=True)
S = torch.tensor(1001.0, requires_grad=True)
# Generate random numbers and other tensors
Z = torch.randn(10000, 5)
t = torch.tensor(np.arange(1.0, 6.0))
c = torch.tensor([0.2, 0.3, 0.4, 0.5, 0.6])
# Calculate mc_S with differentiable operations
mc_S = S * torch.exp((r - q - 0.5 * v * v) * t + Z.cumsum(axis=1))
# Calculate payoff with differentiable operations
res = []
mask = 1.0
for col, coup in zip(mc_S.T, c):
    payoff = mask * torch.where(col > S, coup, torch.tensor(0.0))
    res.append(payoff)
    mask = mask * (payoff == 0)

payoffs = torch.stack(res).T  # renamed so the volatility tensor `v` is not shadowed
result = payoffs.sum(axis=1).mean()

# Compute gradients - breaks here
grads = autograd.grad(result, [r, q, v, S], allow_unused=True, retain_graph=True)
print(grads)
</code></pre>
<p>I'm trying to price an autocallable option with early knockout and require the sensitivities to input variables.</p>
<p>However, the way the coupons are calculated (the c tensor in the code above), breaks the computational graph and I'm unable to obtain the gradients. Is there a way to get this code to calculate the derivatives?</p>
<p>Thanks</p>
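<p>The graph breaks because the hard comparison inside <code>torch.where(col &gt; S, ...)</code> and the boolean mask <code>payoff == 0</code> have zero gradient (the condition is not differentiable in <code>S</code> or <code>v</code>). One common workaround, suggested here only as a sketch, is to smooth the barrier indicator with a sigmoid:</p>

```python
import torch

def soft_indicator(x, barrier, width=1.0):
    # Differentiable stand-in for (x > barrier); approaches the hard step
    # as width -> 0. The width value is a tuning assumption of this sketch.
    return torch.sigmoid((x - barrier) / width)

S = torch.tensor(100.0, requires_grad=True)
spots = torch.tensor([90.0, 105.0, 120.0])
payoff = (soft_indicator(spots, S) * 0.5).sum()
payoff.backward()
print(S.grad)  # a nonzero gradient now flows to the barrier level
```

<p>The knockout mask can be smoothed the same way (e.g. <code>mask * (1 - soft_indicator(col, S))</code>), trading a small pricing bias for usable sensitivities; the alternative is likelihood-ratio or pathwise-smoothing estimators.</p>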
|
<python><pytorch><quantitative-finance>
|
2024-06-28 14:44:03
| 1
| 368
|
user1726633
|
78,683,065
| 5,931,672
|
PIP downloading again package already installed
|
<p>I am trying to install <a href="https://github.com/timeseriesAI/tsai/" rel="nofollow noreferrer">tsai</a>. When doing so, PIP tries to download PyTorch:</p>
<pre><code>Collecting torch<1.14,>=1.7.0 (from tsai==0.3.5)
</code></pre>
<p>However, I already have a version that matches 1.7 < my version < 1.14. See:</p>
<pre><code>$ pip list | grep torch
torch 1.8.1
</code></pre>
<p>Why is this happening and how can I prevent it?</p>
|
<python><pip>
|
2024-06-28 14:34:33
| 3
| 4,192
|
J Agustin Barrachina
|
78,683,048
| 2,645,548
|
How to authorize in python google cloud library with API-token
|
<p>How do I authorize with an API token?
My code:</p>
<pre class="lang-py prettyprint-override"><code>from google.auth.api_key import Credentials
from google.cloud.storage import Client
client = Client('my-project', Credentials('my-token'))
blobs = client.list_blobs('my-bucket')
for blob in blobs:
print(blob.name)
</code></pre>
<p>raises</p>
<pre><code>google.api_core.exceptions.Unauthorized: 401 GET
</code></pre>
<p>How to do this correctly?</p>
<p>I can't authorize with a JSON key file because this runs in a Docker container, and in the production environment I can't mount a volume with the JSON file; I only have the API token in environment variables.</p>
|
<python><python-3.x><google-cloud-storage>
|
2024-06-28 14:30:04
| 1
| 611
|
jonsbox
|
78,682,971
| 1,003,288
|
Override return type hint of function from Python dependency
|
<p>I am using a dependency that has type hints but doesn't test them and some of them are just broken. For example:</p>
<pre class="lang-py prettyprint-override"><code>def foo() -> object:
return {"hello": "world"}
</code></pre>
<p>In my code I want to assign the output of that function to a variable that has a <code>dict</code> type.</p>
<pre class="lang-py prettyprint-override"><code>from dependency import foo
myvar: dict = foo()
</code></pre>
<p>With this <code>mypy</code> gives me an error as expected.</p>
<pre><code>error: Incompatible types in assignment (expression has type "object", variable has type "dict[Any, Any]") [assignment]
</code></pre>
<p>However if I try and cast it to a dict explicitly that isn't possible either.</p>
<pre class="lang-py prettyprint-override"><code>from dependency import foo
myvar: dict = dict(foo())
</code></pre>
<p>I see the following error.</p>
<pre><code>error: No overload variant of "dict" matches argument type "object" [call-overload]
</code></pre>
<p>What is the right way to tell <code>mypy</code> to ignore the broken type hint from the dependency?</p>
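<p>One option worth noting alongside per-module ignores: <code>typing.cast</code> overrides the annotation at a single call site with no runtime cost; a self-contained sketch (the <code>foo</code> below stands in for the mis-annotated dependency):</p>

```python
from typing import cast

def foo() -> object:  # stands in for the dependency's broken annotation
    return {"hello": "world"}

# typing.cast has no runtime effect; it only tells the type checker
# "treat this value as a dict from here on".
myvar = cast(dict, foo())
print(myvar["hello"])
```

<p>Unlike <code>dict(foo())</code>, this performs no conversion, and unlike <code># type: ignore</code> it documents the type you believe the value actually has.</p>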
|
<python><python-typing>
|
2024-06-28 14:14:01
| 1
| 3,833
|
Jacob Tomlinson
|
78,682,936
| 7,056,539
|
SQLAlchemy async session requires refresh
|
<p>I'm new to SQLAlchemy in general, and even more to its asyncio mode.
I'm creating an async session like so (demo/pseudo code):</p>
<pre class="lang-py prettyprint-override"><code>async_scoped_session(
async_sessionmaker(
bind=create_async_engine(
format_db_url(CONFIG)
)
),
scopefunc=asyncio.current_task
)
</code></pre>
<p>and then creating a model like so:</p>
<pre class="lang-py prettyprint-override"><code>...
async def model(self):
    model = TestUser(name='test', age=1)
    self.session.add(model)
    await self.session.commit()
    return model
...
</code></pre>
<p>but, when trying to use it:</p>
<pre class="lang-py prettyprint-override"><code>model = await repo.model()  # model() is a coroutine, so it must be awaited
print(model.name)
</code></pre>
<p>it fails at the print with:</p>
<pre><code>sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_only() here. Was IO attempted in an unexpected place? (Background on this error at: https://sqlalche.me/e/20/xd2s)
</code></pre>
<p>These two adjustments fix the issue:</p>
<pre class="lang-py prettyprint-override"><code>await repo.session.refresh(model)
# and/or
id_ = await model.awaitable_attrs.name
</code></pre>
<p>But I have the feeling I'm missing something or doing something wrong. I kind of understand the need for <code>awaitable_attrs</code> when accessing a relationship, but it doesn't feel right having to take an extra step before being able to access a model's own attribute.</p>
|
<python><sqlalchemy><python-asyncio>
|
2024-06-28 14:05:47
| 1
| 419
|
Nasa
|
78,682,716
| 13,537,183
|
Spark getItem shortcut
|
<p>I am doing the following in spark sql:</p>
<pre class="lang-py prettyprint-override"><code>spark.sql("""
SELECT
data.data.location.geometry.coordinates[0]
FROM df""")
</code></pre>
<p>This works fine, however I do not want to use raw SQL, I use dataframe API like so:</p>
<pre class="lang-py prettyprint-override"><code>df.select("data.data.location.geometry.coordinates[0]")
</code></pre>
<p>Unfortunately this does not work:</p>
<pre><code>AnalysisException: [DATATYPE_MISMATCH.UNEXPECTED_INPUT_TYPE] Cannot resolve "data.data.location.geometry.coordinates[0]" due to data type mismatch: Parameter 2 requires the "INTEGRAL" type, however "0" has the type "STRING".;
'Project [data#680.data.location.geometry.coordinates[0] AS 0#697]
+- Relation [data#680,id#681,idempotencykey#682,source#683,specversion#684,type#685] json
</code></pre>
<p>I know that I can use the F.col api and go with a getItem(0), but is there a built-in way to have the shortcut of getItem?</p>
<p><code>.</code> is the shortcut for <code>getField</code>; is there one for array indexing?</p>
<p>Thank you for your insight</p>
|
<python><apache-spark><pyspark><databricks>
|
2024-06-28 13:19:23
| 3
| 699
|
Pdeuxa
|
78,682,652
| 13,912,132
|
Python packages: Multiple subprojects
|
<p>I have this directory structure:</p>
<pre><code>pyproject.toml
proj1/
pyproject.toml
MANIFEST.in
proj1/
__init__.py
proj2/
pyproject.toml
MANIFEST.in
proj2/
__init__.py
</code></pre>
<p>What I want to do is:</p>
<ul>
<li>The toplevel pyproject.toml is only there for e.g. formatting and including proj1 and proj2</li>
<li>proj1 and proj2 are packages that should be packaged.</li>
<li>I want that if I do e.g. <code>python -m build</code> (Or <code>hatch build</code>, I don't care what buildsystem, as long as it works and is somewhat well-known) in the toplevel directory, that *.whl and *.tar.gz files are generated in a toplevel dist/ - for every subproject.</li>
</ul>
<p>Is that possible?</p>
<p>My toplevel pyproject.toml:</p>
<pre><code>[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "projs"
version = "0.0.3"
description = ""
[tool.hatch.build.targets.wheel]
packages=["proj1", "proj2"]
[tool.hatch.build.targets.sdist]
packages=["proj1", "proj2"]
</code></pre>
<p>my proj1 (and proj2)/pyproject.toml:</p>
<pre><code>[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "proj1"
license = "AGPL-3.0-or-later"
[tool.hatch.version]
path = "proj1/__init__.py"
[tool.hatch.build.targets.sdist]
include = [
"/proj1",
]
</code></pre>
<p>But now, if I run <code>hatch build</code>, I get only the toplevel dist/projs-$VERSION.tar.gz containing paths I don't want to have. (E.g. <code>projs-$VERSION/proj1/proj1/__init__.py</code>) Is this doable with any packaging technology of python?</p>
|
<python><python-packaging><hatch>
|
2024-06-28 13:06:14
| 0
| 3,593
|
JCWasmx86
|
78,682,628
| 6,179,785
|
Reading Excel date columns as strings without time part
|
<p>I'm encountering an issue while reading date columns from an Excel file into a Pandas DataFrame. The date values in my Excel sheet are formatted as DD-MM-YYYY (e.g., 05-03-2024), but when I use pd.read_excel, Pandas interprets these values as datetime objects and appends 00:00:00, resulting in output like this:</p>
<pre><code> Actual Delivery Date
0 2024-03-05 00:00:00
1 2024-03-05 00:00:00
2 2024-03-05 00:00:00
3 2024-03-05 00:00:00
</code></pre>
<p>I've tried the following approaches without success:</p>
<ul>
<li>Using <code>dtype=str</code> when reading the Excel file.</li>
<li>Explicitly converting the date column to strings after loading.</li>
</ul>
<pre><code>import pandas as pd

def load_excel_sheet(file_path, sheet_name):
    excel_file = pd.ExcelFile(file_path, engine='openpyxl')
    df_pandas = pd.read_excel(excel_file, sheet_name=sheet_name, dtype=str)
    # Explicitly convert specific date columns to strings
    for col in df_pandas.columns:
        if df_pandas[col].dtype == 'datetime64[ns]':
            df_pandas[col] = df_pandas[col].astype(str)
    return df_pandas

def process_data_quality_checks(file_path, sheet_name):
    df = load_excel_sheet(file_path, sheet_name)
    for col in df.columns:
        if not all(isinstance(x, str) for x in df[col]):
            print(f"Column {col} has non-string data")
        else:
            print(f"Column {col} is all strings")
    return df

file_path = r"path_to_your_excel_file.xlsx"
sheet_name = 'Sheet1'
df = process_data_quality_checks(file_path, sheet_name)
print(df.head())
</code></pre>
<p>Despite these efforts, my date columns still appear in the DataFrame with 00:00:00 appended. How can I ensure that Pandas reads these date values strictly as strings without any additional time information?</p>
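<p>Note that with <code>dtype=str</code> the datetime cells typically arrive as <code>str(datetime)</code>, i.e. already carrying the <code>00:00:00</code>, which is why the later dtype check never fires. A sketch of a cleanup step that re-parses those strings and reformats them (the <code>DD-MM-YYYY</code> target format is taken from the question):</p>

```python
import pandas as pd

def clean_date_strings(series, fmt="%d-%m-%Y"):
    """Re-parse date-like strings and drop the spurious time part."""
    return pd.to_datetime(series, errors="coerce").dt.strftime(fmt)

s = pd.Series(["2024-03-05 00:00:00", "2024-03-05 00:00:00"])
print(clean_date_strings(s).tolist())  # ['05-03-2024', '05-03-2024']
```

<p>Apply it only to known date columns; <code>errors="coerce"</code> turns unparseable cells into <code>NaT</code> rather than raising.</p>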
|
<python><pandas>
|
2024-06-28 13:03:29
| 1
| 340
|
Ash3060
|