QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName
|---|---|---|---|---|---|---|---|---|
76,325,578
| 1,736,407
|
ValueError: Protocol message OrderedJob has no "template" field
|
<p>I'm writing a Google Cloud Function that invokes a Dataproc Workflow defined in a YAML file on cloud storage.</p>
<p>When testing the invocation, the function crashes immediately with the following stack trace:</p>
<pre><code>"Traceback (most recent call last):
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/proto/message.py", line 570, in __init__
pb_value = marshal.to_proto(pb_type, value)
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/proto/marshal/marshal.py", line 217, in to_proto
pb_value = rule.to_proto(value)
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/proto/marshal/rules/message.py", line 36, in to_proto
return self._descriptor(**value)
ValueError: Protocol message OrderedJob has no "template" field."
</code></pre>
<p>When I looked at the documentation for OrderedJob, there's no field (mandatory or otherwise) called 'template'. Googling the exact error yields zero results, so I am a bit stuck. Any advice would be welcome, since this is my first time using workflows and I'm not sure where to even start.</p>
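<p>This error usually means one of the <code>jobs:</code> entries in the parsed YAML carries a key that the <code>OrderedJob</code> message does not define (here, <code>template</code>), often because the whole template dict got nested one level too deep. Before calling the API, the parsed dict can be sanity-checked against the documented <code>OrderedJob</code> field names. The field set below is copied from the reference docs and may lag behind the installed library version; treat it as illustrative:</p>

```python
# Field names from the OrderedJob reference documentation (illustrative;
# verify against the version of google-cloud-dataproc you have installed).
ORDERED_JOB_FIELDS = {
    "step_id", "labels", "scheduling", "prerequisite_step_ids",
    "hadoop_job", "spark_job", "pyspark_job", "hive_job",
    "pig_job", "spark_r_job", "spark_sql_job", "presto_job",
}

def unknown_job_keys(template_dict):
    """Map job index -> keys that OrderedJob would reject."""
    problems = {}
    for i, job in enumerate(template_dict.get("jobs", [])):
        unknown = sorted(set(job) - ORDERED_JOB_FIELDS)
        if unknown:
            problems[i] = unknown
    return problems

# A job entry that accidentally contains a nested template would be flagged:
bad = {"jobs": [{"step_id": "s1", "template": {"id": "t"}}]}
```

<p>If this reports <code>template</code>, the fix is usually to pass only the template's own content (<code>jobs</code>, <code>placement</code>, etc.) to the message, not the wrapper dict around it.</p>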
|
<python><google-cloud-functions><google-cloud-dataproc><google-workflows>
|
2023-05-24 16:19:55
| 1
| 2,220
|
Cam
|
76,325,424
| 9,757,174
|
ImportError: cannot import name 'resource_loader' from partially initialized module 'tensorflow._api.v2.compat.v1'
|
<p>I am building a chatbot using <code>rasa</code>. When I try to run <code>rasa train</code>, I get the following error.</p>
<pre><code>from tensorflow._api.v2.compat.v1 import resource_loader
ImportError: cannot import name 'resource_loader' from partially initialized module 'tensorflow._api.v2.compat.v1'
</code></pre>
<p>Could someone help me identify what library I could change to fix this?</p>
|
<python><tensorflow><rasa>
|
2023-05-24 15:59:51
| 1
| 1,086
|
Prakhar Rathi
|
76,325,336
| 7,327,257
|
Remove files with a string in a bucket google cloud storage in python
|
<p>I'm working with google cloud storage and have several files uploaded to a bucket with similar names. Each file has the following format: <code>company_tile_lc_date1_today.tif</code>, where <code>today</code> is the date when the file was created and uploaded to the bucket. I also have files with this format: <code>company_tile_lcpr_date1_today.tif</code>. I need to remove all files as the first one (contains string <code>_lc_</code>) but keep the others. I have found several solutions using python and google cloud storage, all of them using <code>blob.delete</code>, however I haven't found a way to specify the string that has to have the file to remove. I know I can use the <code>prefix</code> parameter, but that's to specify the folder, not the files.</p>
<p>I also need to delete all files where <code>today</code> is before some specific date, I guess that knowing how to know the first part, this part would be similar.</p>
<p>Any help would be appreciated.</p>
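<p>Since <code>prefix</code> only filters on the leading part of the object name, the substring (and date) check has to happen client-side after <code>list_blobs</code>. A sketch of the filter in plain Python; the bucket/client calls are the standard <code>google-cloud-storage</code> ones and are shown only in comments, and the <code>YYYYMMDD</code> date format in the filename is an assumption to adjust to your naming:</p>

```python
from datetime import datetime

def is_deletable(blob_name, substring="_lc_", cutoff=None, date_format="%Y%m%d"):
    """True if the name contains `substring`, or its trailing date is before `cutoff`.

    Assumes names like company_tile_lc_<date1>_<today>.tif where <today> is the
    last underscore-separated part before ".tif" (date format is a guess).
    """
    if substring in blob_name:
        return True
    if cutoff is not None:
        stem = blob_name.rsplit(".", 1)[0]
        today_part = stem.rsplit("_", 1)[-1]
        try:
            return datetime.strptime(today_part, date_format) < cutoff
        except ValueError:
            return False
    return False

# With the client it would look roughly like:
# for blob in client.list_blobs(bucket_name):
#     if is_deletable(blob.name, cutoff=datetime(2023, 1, 1)):
#         blob.delete()
```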
|
<python><google-cloud-storage>
|
2023-05-24 15:48:22
| 0
| 357
|
M. Merida-Floriano
|
76,325,304
| 298,209
|
Waiting on asyncio futures outside asyncio.run()
|
<p>I came across this pattern in some code base and it's breaking my mental model of how things work in asyncio. We have a function that awaits on a subset of futures and returns another subset that takes much longer to finish. It then waits for the second subset outside <code>asyncio.run()</code>. I'm not sure I understand where this second set of futures gets CPU time, given that there's no event loop outside asyncio.run(). Is it when we get to <code>f.result()</code> that they get a chance to run/finish?</p>
<pre class="lang-py prettyprint-override"><code>
async def foo():
    futures_a = ...
    futures_b = ...
    await asyncio.gather(*futures_a)
    return futures_b

async def main():
    tasks = set()
    tasks.add(asyncio.create_task(foo()))
    tasks.add(asyncio.create_task(foo()))
    tasks.add(asyncio.create_task(foo()))
    return await asyncio.gather(*tasks)

pending_futures = asyncio.run(main())
for f in pending_futures:
    f.result()
</code></pre>
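<p>To answer the "where do they run" part concretely: once <code>asyncio.run()</code> returns, the event loop is closed and nothing runs any more; <code>asyncio.run()</code> also cancels tasks that are still pending when the main coroutine returns, so <code>f.result()</code> outside it either returns an already-finished result or raises. The usual fix is to await the second set before the loop closes. A minimal runnable sketch (all names are illustrative):</p>

```python
import asyncio

async def work(x):
    # stands in for the real futures; finishes quickly so the sketch is fast
    await asyncio.sleep(0.01)
    return x * 2

async def foo():
    fast = [asyncio.create_task(work(i)) for i in (1, 2)]
    slow = [asyncio.create_task(work(i)) for i in (3, 4)]
    await asyncio.gather(*fast)      # first subset finishes here
    return slow                      # second subset is still pending

async def main():
    pending = await foo()
    # Await the remaining tasks *inside* the running loop; after
    # asyncio.run() returns there is no loop left to drive them.
    return await asyncio.gather(*pending)

results = asyncio.run(main())
print(results)  # [6, 8]
```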
|
<python><python-asyncio>
|
2023-05-24 15:44:33
| 0
| 5,580
|
Milad
|
76,325,202
| 3,611,472
|
How to solve compatibility with old pandas and tensorflow on M1 chip
|
<p>I am working with a MacBook Pro M1.</p>
<p>I have to run a code that was written several years ago and relies on python 3.6, pandas 0.20.3 and numpy 1.17.3.</p>
<p>On top of these packages, I should also use TensorFlow. Here's the problem. Since I am running the code on a Mac M1, I need to use <code>tensorflow-macos</code>, which is not compatible with python 3.6. Is there any other way to install a version of tensorflow compatible with python 3.6 pandas 0.20.3 and numpy 1.17.3 on an M1?</p>
<p>Rewriting the code will eventually be done, but right now I have a timing constraint and I cannot rewrite it.</p>
|
<python><pandas><tensorflow><apple-m1>
|
2023-05-24 15:33:17
| 1
| 443
|
apt45
|
76,325,145
| 10,755,032
|
NotFittedError: This RandomForestRegressor instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator
|
<p>I have taken a look at this: <a href="https://stackoverflow.com/questions/51397611/randomforestclassifier-instance-not-fitted-yet-call-fit-with-appropriate-argu">RandomForestClassifier instance not fitted yet. Call 'fit' with appropriate arguments before using this method</a>, but it did not help.</p>
<p>I was running a RandomForestRegressor model with RandomizedSearchCV. It ran for 3 hours and then suddenly gave this error.</p>
<p>The relevant part of my code:</p>
<pre><code>rf = RandomForestRegressor()
rs_rf = RandomizedSearchCV(rf, param, cv=2, n_jobs=-1, verbose=1)
rs_rf.fit(X_train, Y_train)
rs_rf_train = rf.predict(X_train)
rs_rf_test = rf.predict(X_test)
</code></pre>
<p>The error:</p>
<pre class="lang-none prettyprint-override"><code>---------------------------------------------------------------------------
NotFittedError Traceback (most recent call last)
<ipython-input-8-de03d0ce1f81> in <cell line: 141>()
139 rs_rf = RandomizedSearchCV(rf, param, cv=2, n_jobs=-1, verbose=1)
140 rs_rf.fit(X_train, Y_train)
--> 141 rs_rf_train = rf.predict(X_train)
142 rs_rf_test = rf.predict(X_test)
143
1 frames
/usr/local/lib/python3.10/dist-packages/sklearn/utils/validation.py in check_is_fitted(estimator, attributes, msg, all_or_any)
1388
1389 if not fitted:
-> 1390 raise NotFittedError(msg % {"name": type(estimator).__name__})
1391
1392
NotFittedError: This RandomForestRegressor instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.
</code></pre>
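<p>The traceback points at <code>rf.predict(...)</code>, and that is the likely cause: <code>RandomizedSearchCV</code> never fits the estimator instance you pass in. It fits clones of it and exposes the refit winner as <code>best_estimator_</code>, so <code>rf</code> itself stays unfitted and the predictions should come from <code>rs_rf.predict(...)</code>. The stand-in classes below mimic that cloning behaviour in plain Python (they are not sklearn, just an illustration of the mechanism):</p>

```python
import copy

class TinyModel:
    """Minimal estimator: predict() fails unless fit() has run on *this* object."""
    def __init__(self):
        self.mean_ = None
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self
    def predict(self, X):
        if self.mean_ is None:
            raise RuntimeError("This TinyModel instance is not fitted yet.")
        return [self.mean_] * len(X)

class TinySearch:
    """Mimics RandomizedSearchCV: fits a *clone*, never the original estimator."""
    def __init__(self, estimator):
        self.estimator = estimator
    def fit(self, X, y):
        self.best_estimator_ = copy.deepcopy(self.estimator).fit(X, y)
        return self
    def predict(self, X):
        return self.best_estimator_.predict(X)

rf = TinyModel()
search = TinySearch(rf)
search.fit([[0], [1]], [2, 4])
preds = search.predict([[0], [1]])   # works: uses the fitted clone
# rf.predict(...) would still raise, exactly like the sklearn error above
```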
|
<python><machine-learning><scikit-learn><random-forest>
|
2023-05-24 15:26:44
| 1
| 1,753
|
Karthik Bhandary
|
76,325,091
| 7,185,934
|
pyproject.toml setuptools: edit user's .bashrc file
|
<p>I am writing a Python package that has an entrypoint (cli) script using a pyproject.toml file that builds with setuptools.</p>
<p>I'm installing this package locally with <code>pip install --user .</code>.
The <code>--user</code> flag is necessary for me because in my environment only my user's dir (/home) is preserved across sessions.</p>
<p>The CLI script gets installed correctly in my /home dir; however, its directory is not added to PATH (pip even gives a warning about it).</p>
<p>To solve this problem I would ideally add a line to my <code>.bashrc</code> file that appends the CLI script's dir path to PATH.</p>
<p>Is there any way to achieve this through specification in <code>pip</code> or <code>pyproject.toml</code>?</p>
|
<python><setuptools><pyproject.toml>
|
2023-05-24 15:20:13
| 1
| 815
|
David Skarbrevik
|
76,324,972
| 1,310,540
|
Error while accessing local file using selenium webdriver
|
<p>I am having an issue which I tried to reproduce locally.</p>
<p>I have an XML file, and when I open it in <code>chrome webdriver</code> with a URL like <code>http://localhost:63342/Testing_Prj/EXPORT.xml</code>, <strong>driver.page_source</strong> works fine. Alternatively, if I open the same file using a URL like <code>file:///E:/Testing_Prj/EXPORT.xml</code>, <strong>driver.page_source</strong> and other properties give a <strong>timeout error</strong>.</p>
<p>The original issue is that when I click the (Generate) button, a new tab opens with a dynamic URL containing dynamically generated XML data.</p>
<p>After switch_to that window handle, I am unable to do any activity, like page_source, execute_script, etc.</p>
<p><strong>EXPORT.xml</strong></p>
<pre><code><?xml version="1.0" encoding="utf-16"?>
<teachers>
<teacher>
<name>Sam Davies</name>
<age>35</age>
<subject>Maths</subject>
</teacher>
<teacher>
<name>Cassie Stone</name>
<age>24</age>
<subject>Science</subject>
</teacher>
<teacher>
<name>Derek Brandon</name>
<age>32</age>
<subject>History</subject>
</teacher>
</teachers>
</code></pre>
<p>Thanks for the help in advance.</p>
|
<python><xml><selenium-webdriver>
|
2023-05-24 15:06:08
| 1
| 931
|
Mehmood
|
76,324,839
| 5,924,264
|
unbound method __init__() error in unit tests but not in regular executions
|
<p>I got this:</p>
<pre><code>
DataBaseStorage.__init__(
> self, key=key, sz=sz,
)
E TypeError: unbound method __init__() must be called with DataBaseStorage instance as first argument (got BaseStorage instance instead)
path/to/file/DataBase.py:80: TypeError
</code></pre>
<p>The traceback is a bit scattered, but right above this error, the call is:</p>
<pre><code># This is referring to a call to DerivedStorage.__init__
path/to/file/Base.py:213: in __init__
</code></pre>
<p>in a unit test. I haven't run across this error in non-unit-test executions, even though those executions go down the same path, so I am not sure what is going on.</p>
<p>The inheritance hierarchy here is poorly designed (unfortunately we can't change it currently), but it looks like below (skeleton code):</p>
<pre><code>class DerivedStorage(DataBaseStorage):
    def __init__(self, key, sz):
        # do some stuff
        DataBaseStorage.__init__(self, key, sz)


class BaseStorage(DerivedStorage):
    def __init__(self, key, sz):
        # do some stuff
        DerivedStorage.__init__(self, key=key, sz=sz)
        # do some other stuff
</code></pre>
<p>Also note that we have to support python2 in the codebase.</p>
<p>I've found several similar questions with this issue, but I don't think they answer the question, because this code does work outside of unit tests, yet in unit tests it fails. What could be causing that?</p>
<p>The unit test itself crashes on the following line:</p>
<pre><code> bsto = BaseStorage(
key=key,
sz=10,
)
</code></pre>
<p>edit: I mentioned that it works in non-unit-test executions, and I think that's because we've been using python3+ for those executions, so I think this issue may be specific to python 2.7.</p>
|
<python><python-2.7><unit-testing><inheritance>
|
2023-05-24 14:53:08
| 0
| 2,502
|
roulette01
|
76,324,830
| 17,487,457
|
dataframe column's aggregate based on simple majority
|
<p>I have a <code>dataframe</code> from my model's prediction similar to the one below:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({
'trip-id': [8,8,8,8,8,8,8,8,4,4,4,4,4,4,4,4,4,4,4,4],
'segment-id': [1,1,1,1,1,1,1,1,0,0,0,0,0,0,5,5,5,5,5,5],
'true_label': [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
'prediction': [3, 3, 3, 1, 2, 4, 0, 0, 3, 3, 3, 0, 1, 2, 3, 3, 1, 1, 2, 2]})
df
trip-id segment-id true_label prediction
0 8 1 3 3
1 8 1 3 3
2 8 1 3 3
3 8 1 3 1
4 8 1 3 2
5 8 1 3 4
6 8 1 3 0
7 8 1 3 0
8 4 0 3 3
9 4 0 3 3
10 4 0 3 3
11 4 0 3 0
12 4 0 3 1
13 4 0 3 2
14 4 5 3 3
15 4 5 3 3
16 4 5 3 1
17 4 5 3 1
18 4 5 3 2
19 4 5 3 2
</code></pre>
<p>The given sample contains predictions and the true label for instances <code>[0,1,..4]</code> of trips' segments.</p>
<p>I would like to generate a summary of segment's predictions based on simple majority.</p>
<ul>
<li>the segment's predicted value is the predicted instance <code>[0,1,..4]</code> that has a simple majority within the segment.</li>
<li>where there's a tie for the majority, the tied value matching the <code>true_label</code> is considered the segment's prediction.</li>
<li>if there's a majority tie and none of the tied instances matches the <code>true_label</code>, then from those in the tie, the instance coming first in the <code>df</code> is regarded as the segment's predicted value.</li>
</ul>
<p>Currently I can do this:</p>
<pre class="lang-py prettyprint-override"><code>segments_summary = (
    df['true_label'].eq(df['prediction'])
    .groupby([df['true_label'], df['trip-id'], df['segment-id']]).mean()
    .ge(0.5)
    .groupby(level='true_label').agg(['size', 'sum'])
    .rename(columns={'size': 'total-segments', 'sum': 'correctly-predicted'})
    .assign(recall=lambda x: round(x['correctly-predicted'] / x['total-segments'], 2))
    .reindex(range(5), fill_value='-')
    .reset_index())
</code></pre>
<p>Which produces:</p>
<pre class="lang-py prettyprint-override"><code>segments_summary
true_label total-segments correctly-predicted recall
0 0 - - -
1 1 - - -
2 2 - - -
3 3 3 1 0.33
4 4 - - -
</code></pre>
<p>But this is not exactly what I wanted. Going by the conditions above, all 3 segments should have been predicted correctly.</p>
<ul>
<li><code>trip 8, segment 1</code>: <code>3</code> has the simple majority, so that segment should be considered as predicted <code>3</code>.</li>
<li><code>trip 4, segment 0</code>: <code>3</code> has the simple majority, so that segment is predicted as <code>3</code>.</li>
<li><code>trip 4, segment 5</code>: is a tie, so the prediction matching <code>true_label</code> should be the segment's prediction -> <code>3</code>.</li>
</ul>
<p>Expected result:</p>
<pre class="lang-py prettyprint-override"><code> true_label total-segments correctly-predicted recall
0 0 - - -
1 1 - - -
2 2 - - -
3 3 3 3 1.0
4 4 - - -
</code></pre>
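<p>The three tie-breaking rules can be expressed and tested on plain lists before wiring them into pandas; a <code>collections.Counter</code> sketch (the grouping itself would then be something like <code>df.groupby(['trip-id', 'segment-id'])</code> applying this per segment):</p>

```python
from collections import Counter

def segment_prediction(preds, true_label):
    """Apply the three rules: simple majority, tie -> true_label, tie -> first seen."""
    counts = Counter(preds)
    top = max(counts.values())
    tied = [p for p in counts if counts[p] == top]
    if len(tied) == 1:
        return tied[0]          # rule 1: simple majority
    if true_label in tied:
        return true_label       # rule 2: tie broken by true_label
    for p in preds:             # rule 3: first tied value in original order
        if p in tied:
            return p
```

<p>For example, trip 4 / segment 5 from the question: <code>segment_prediction([3, 3, 1, 1, 2, 2], 3)</code> returns <code>3</code> via rule 2.</p>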
|
<python><pandas><dataframe>
|
2023-05-24 14:52:02
| 1
| 305
|
Amina Umar
|
76,324,824
| 2,283,347
|
How to return existing matching record for Django CreateView
|
<p>I use a regular <code>generic.edit.CreateView</code> of <code>Django</code> to create an object according to user input. The <code>MyModel</code> has a <code>UniqueConstraint</code>, so the creation would fail if the new object happens to match an existing one.</p>
<p>However, instead of telling users that creation fails due to duplicate records, I would like to simply return the existing object since this is what users want. I tried to override</p>
<pre class="lang-py prettyprint-override"><code> def save(self, commit=True):
return MyModel.get_or_create(value=self.cleaned_data['value'])[0]
</code></pre>
<p>but this does not work because <code>form._post_clean</code> will call <a href="https://github.com/django/django/blob/0c1518ee429b01c145cf5b34eab01b0b92f8c246/django/forms/models.py#LL494C7-L494C7" rel="nofollow noreferrer"><code>form.instance.full_clean</code></a> with its default <code>validate_constraints=True</code>, so an error will return during form validation.</p>
<p>Essentially I am looking for a <code>GetOrCreateView</code> but I am not sure what is a clean way to achieve this (other than maybe overriding <code>form._full_clean</code>). Note that I am not looking for a <code>UpdateView</code> because we do not know in advance which existing object will be returned.</p>
<p><strong>Edit</strong>: After struggling with the validation issue for a very long time, I finally abandoned <code>CreateView</code> and resorted to a simpler <code>FormView</code>. The <code>FormView</code> will simply get all the user input I need, before calling <code>get_or_create(form.cleaned_data[...])</code> in <code>form_valid(self, form)</code>.</p>
|
<python><django><django-class-based-views>
|
2023-05-24 14:51:39
| 1
| 799
|
user2283347
|
76,324,705
| 1,802,693
|
Generating classes in python by using an existing one's constructor
|
<p>I want to generate some classes which automatically set up an existing one, <code>FieldDescriptor</code>, using the values from an enum.</p>
<p>I want to generate the following classes without writing them:</p>
<ul>
<li><code>GEN_STRING</code>, <code>GEN_BIGINT</code>, <code>GEN_FLOAT</code></li>
</ul>
<p>For some reason I always have problem with the:</p>
<ul>
<li>constructor</li>
<li>instantiation</li>
<li>number of arguments in <code>__init__()</code></li>
</ul>
<p>What is the proper solution for this?</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
from typing import Union, Type
from dataclasses import dataclass

import numpy as np


class FieldPairingTypes(Enum):
    STRING = (str, "string", "keyword")
    BIGINT = (np.int64, "bigint", "long")
    FLOAT = (np.float64, "double", "double")


@dataclass
class FieldDescriptor:
    original_field_name: str
    datalake_field_name: str
    datalake_field_type: Type
    glue_field_type: str
    datamart_field_type: Union[str, Type]

    def __init__(self, ofn, dfn, field_type: FieldPairingTypes):
        self.original_field_name = ofn
        self.datalake_field_name = dfn
        self.datalake_field_type, self.glue_field_type, self.datamart_field_type = field_type.value


def generate_class(class_name, field_type):
    def __init__(self, ofn, dfn):
        super().__init__(ofn, dfn, field_type)

    attrs = {
        # "__init__": __init__,
        # "__init__": FieldDescriptor.__init__,
        "__init__": lambda x, y: FieldDescriptor.__init__(x, y, field_type),
    }
    return type(class_name, (FieldDescriptor,), attrs)


generated_classes = {}
for value in FieldPairingTypes:
    class_name = "GEN_" + str(value).split(".")[-1]
    generated_classes[class_name] = generate_class(class_name, value)

for class_name, generated_class in generated_classes.items():
    instance = generated_class("Hello", "World")
    print(f"{class_name}: {instance.datalake_field_type}")
</code></pre>
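<p>Two things break the code above: the lambda in <code>attrs</code> accepts only two parameters, while instantiation calls <code>__init__(self, "Hello", "World")</code> with three; and zero-argument <code>super()</code> only works for functions defined inside a <code>class</code> body (it needs the implicit <code>__class__</code> cell), which is why the inner <code>__init__</code> variant fails too. A plain closure that calls <code>FieldDescriptor.__init__</code> explicitly avoids both problems. In this self-contained sketch the numpy types are swapped for builtins and the dataclass decorator is dropped; the mechanism is the same:</p>

```python
from enum import Enum

class FieldPairingTypes(Enum):
    STRING = (str, "string", "keyword")
    BIGINT = (int, "bigint", "long")      # np.int64 in the original

class FieldDescriptor:
    def __init__(self, ofn, dfn, field_type):
        self.original_field_name = ofn
        self.datalake_field_name = dfn
        (self.datalake_field_type,
         self.glue_field_type,
         self.datamart_field_type) = field_type.value

def generate_class(class_name, field_type):
    def __init__(self, ofn, dfn):
        # explicit base call; zero-arg super() has no __class__ cell here
        FieldDescriptor.__init__(self, ofn, dfn, field_type)
    return type(class_name, (FieldDescriptor,), {"__init__": __init__})

GEN_STRING = generate_class("GEN_STRING", FieldPairingTypes.STRING)
inst = GEN_STRING("Hello", "World")
```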
|
<python><metaprogramming><dynamically-generated>
|
2023-05-24 14:37:00
| 2
| 1,729
|
elaspog
|
76,324,700
| 3,817,456
|
How to calculate current time in different timezone correctly in Python
|
<p>I was trying to calculate the current time in NYC (EST time aka Eastern Daylight time or GMT-4) given current time in Israel (Israel daylight time, currently GMT+3) where I'm currently located. So right now Israel is 7 hrs ahead of NYC, but I get an 8 hr difference, with NYC coming out an hour earlier than it really is:</p>
<pre><code>from pytz import timezone
from datetime import datetime
tz1 = timezone('Israel')
dt1 = datetime.now(tz1)
tz2 = timezone('EST')
dt2 = datetime.now(tz2)
print(f'{dt1} vs {dt2} ')
output: 2023-05-24 17:01:47.167155+03:00 vs 2023-05-24 09:01:47.167219-05:00
</code></pre>
<p>Does anyone have an idea why this might be?</p>
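<p>In pytz the zone named <code>'EST'</code> is a fixed UTC-5 offset with no daylight-saving rules, which is exactly the extra hour being observed: New York in May is on EDT (UTC-4). Using the location-based zone name fixes it, e.g. with the stdlib <code>zoneinfo</code> module (Python 3.9+):</p>

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

israel = ZoneInfo("Asia/Jerusalem")
new_york = ZoneInfo("America/New_York")  # DST-aware, unlike fixed-offset 'EST'

# A fixed instant on the date from the question: 17:00 in Israel (UTC+3)
dt_il = datetime(2023, 5, 24, 17, 0, tzinfo=israel)
dt_ny = dt_il.astimezone(new_york)
print(dt_ny)  # 2023-05-24 10:00:00-04:00 (a 7 hour gap, as expected)
```

<p>The same applies with pytz: <code>timezone('America/New_York')</code> instead of <code>timezone('EST')</code>.</p>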
|
<python><datetime><timezone><dst><pytz>
|
2023-05-24 14:36:33
| 1
| 6,150
|
jeremy_rutman
|
76,324,695
| 3,371,250
|
How to create a tree structure from a logical expression?
|
<p>I want to parse a logical expression like the following:</p>
<pre><code>(f = '1' OR f = '2') AND (s = '3' OR s = '4' OR s = '5') AND (t = '6')
</code></pre>
<p>What I need is a representation of this logical expression in the form of an expression tree. In the end I want to be able to create a JSON representation of this data structure in the form of:</p>
<pre><code>{
  ...
  nodes: [
    {
      id: 1,
      operator: 'AND',
      operands: [2, 3, 4]
    },
    {
      id: 2,
      operator: 'OR',
      operands: [5, 6]
    },
    {
      id: 3,
      operator: 'OR',
      operands: [7, 8, 9]
    }
  ],
  leafs: [
    {
      id: 4,
      operator: '=',
      operands: [t, 6]
    },
    ...
  ]
}
</code></pre>
<p>I am aware that this is not the best representation of the data structure, but this is just an example.
How do I approach this problem using packages like pyparsing or re?</p>
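<p>pyparsing's <code>infix_notation</code> is the packaged route, but the grammar here is small enough that a hand-rolled tokenizer plus recursive-descent parser also works and keeps the dependency count at zero. A sketch that builds nested dicts (binary, left-nested nodes rather than the n-ary ones in the JSON above; collapsing consecutive same-operator nodes is an easy follow-up step). Note the tokenizer is deliberately naive: keywords are matched before identifiers, so a field literally named e.g. <code>ANDREW</code> would need a stricter pattern:</p>

```python
import re

def tokenize(s):
    # keywords listed before \w+ so AND/OR match first
    return re.findall(r"\(|\)|AND|OR|=|'[^']*'|\w+", s)

def parse(tokens):
    node, rest = parse_or(tokens)
    assert not rest, f"unconsumed tokens: {rest}"
    return node

def parse_or(toks):                      # OR binds loosest
    left, toks = parse_and(toks)
    while toks and toks[0] == "OR":
        right, toks = parse_and(toks[1:])
        left = {"operator": "OR", "operands": [left, right]}
    return left, toks

def parse_and(toks):                     # AND binds tighter than OR
    left, toks = parse_atom(toks)
    while toks and toks[0] == "AND":
        right, toks = parse_atom(toks[1:])
        left = {"operator": "AND", "operands": [left, right]}
    return left, toks

def parse_atom(toks):
    if toks[0] == "(":
        node, toks = parse_or(toks[1:])
        return node, toks[1:]            # drop the closing ")"
    field, value = toks[0], toks[2]      # pattern: field "=" 'value'
    return {"operator": "=", "operands": [field, value.strip("'")]}, toks[3:]

tree = parse(tokenize("(f = '1' OR f = '2') AND (s = '3' OR s = '4' OR s = '5') AND (t = '6')"))
```

<p>Assigning the ids and splitting inner nodes from leafs for the JSON output is then a straightforward walk over the nested dicts.</p>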
|
<python><regex><parsing><expression-trees><python-re>
|
2023-05-24 14:36:04
| 2
| 571
|
Ipsider
|
76,324,628
| 3,438,507
|
How to sort a list of dictionaries by a list that can contain duplicate values?
|
<p><strong>Context:</strong><br />
In Python 3.9, sorting a list of most kinds of objects by a second list is easy, even if duplicates are present:</p>
<pre><code>>>> sorted(zip([5, 5, 3, 2, 1], ['z', 'y', 'x', 'w', 'x']))
[(1, 'x'), (2, 'w'), (3, 'x'), (5, 'y'), (5, 'z')]
</code></pre>
<p>If this list to be sorted contains dictionaries, sorting generally goes fine:</p>
<pre><code>>>> sorted(zip([3, 2, 1], [{'z':1}, {'y':2}, {'x':3}]))
[(1, {'x': 3}), (2, {'y': 2}), (3, {'z': 1})]
</code></pre>
<p><strong>Issue:</strong><br />
However, when the list to be sorted by contains duplicates, the following error occurs:</p>
<pre><code>>>> sorted(zip([5, 5, 3, 2, 1], [{'z':1}, {'y':2}, {'x':3}, {'w': 4}, {'u': 5}]))
*** TypeError: '<' not supported between instances of 'dict' and 'dict'
</code></pre>
<p>The issue seems pretty crazy to me: how do the values of the list to be sorted even affect the list to be sorted by?</p>
<p><strong>Alternative solution:</strong><br />
One not-so-elegant solution would be to get the dictionary objects from the list by index:</p>
<pre><code>>>> sort_list = [5, 5, 3, 2, 1]
>>> dicts_list = [{'z':1}, {'y':2}, {'x':3}, {'w': 4}, {'u': 5}]
>>> [dicts_list[i] for _, i in sorted(zip(sort_list, range(len(sort_list))))]
[{'u': 5}, {'w': 4}, {'x': 3}, {'z': 1}, {'y': 2}]
</code></pre>
<p><strong>Related Questions on StackOverflow:</strong><br />
Many similar questions have been raised on StackOverflow, related to</p>
<ul>
<li><a href="https://stackoverflow.com/questions/72899/how-to-sort-a-list-of-dictionaries-by-a-value-of-the-dictionary-in-python">the sorting of a list of dictionaries by the value of the dictionary</a></li>
<li><a href="https://stackoverflow.com/questions/25624106/sort-list-of-dictionaries-by-another-list">the sorting of a list of dictionaries by their ID's</a></li>
<li><a href="https://stackoverflow.com/questions/12442830/how-does-sorteddict-dict-get-work-for-duplicate-values">the sort order when sorting dictionaries</a></li>
</ul>
<p>This specific case, especially including duplicates has not been discussed yet.</p>
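<p>The reason the dicts leak into the comparison: tuples compare element-wise, and whenever two first elements tie, <code>sorted</code> falls through to comparing the second elements, which are dicts. Restricting the key to the first element avoids the fallthrough entirely, and because <code>sorted</code> is stable, tied entries keep their original relative order:</p>

```python
sort_list = [5, 5, 3, 2, 1]
dicts_list = [{'z': 1}, {'y': 2}, {'x': 3}, {'w': 4}, {'u': 5}]

# Compare only the sort key; the dict rides along and is never compared
pairs = sorted(zip(sort_list, dicts_list), key=lambda p: p[0])
result = [d for _, d in pairs]
print(result)  # [{'u': 5}, {'w': 4}, {'x': 3}, {'z': 1}, {'y': 2}]
```

<p><code>operator.itemgetter(0)</code> works the same way as the lambda and is slightly faster.</p>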
|
<python><list><dictionary><sorting>
|
2023-05-24 14:28:27
| 3
| 1,155
|
M.G.Poirot
|
76,324,479
| 2,986,042
|
How to properly use subprocess.Popen thread's in python?
|
<p>I am writing a Python script which will execute some other scripts from <code>bash</code> terminals. I need <code>two bash terminals</code> and to execute some scripts in those terminals one by one. After executing the scripts, I will read the output messages into a <code>Tkinter text area</code>. I have designed a simple Python script which creates <code>2 processes</code>. I am using a <code>thread</code> to monitor stdout and stderr for each process. I am using a while loop to hold the process.</p>
<p><strong>Here is my script</strong></p>
<pre><code>import os
import tkinter as tk
from tkinter import *
import tkinter.scrolledtext as tkst
import subprocess
from subprocess import Popen
import threading


class TKValidator:
    def __init__(self):
        # create TK
        self.ws = Tk()
        self.ws.state('zoomed')
        self.flag = IntVar()
        # process for bash 1
        self.process1 = subprocess.Popen(["bash"], stderr=subprocess.PIPE, shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        # process for bash 2
        self.process2 = subprocess.Popen(["bash"], stderr=subprocess.PIPE, shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        exit = False
        # start threads (stdout and stderror monitor)
        threading.Thread(target=self.read_stdout_1).start()
        threading.Thread(target=self.read_stderror_1).start()
        threading.Thread(target=self.read_stdout_2).start()
        threading.Thread(target=self.read_stderror_2).start()
        self.frame2 = tk.Frame(self.ws, bg="#777474")
        self.frame2.grid(column=0, row=0, padx=(100,100), pady=(10, 10), sticky="W")
        # Start button
        self.btnVerify = Button(self.ws, text='Start', command=self.ButtonClick)
        self.btnVerify.grid(row=5, columnspan=3, pady=10)
        # output text area
        self.logArea = tkst.ScrolledText(self.ws, wrap=tk.WORD, width=80, height=20, name="logArea")
        self.logArea.grid(padx=(100,100), pady=(10, 10), row=10, sticky="W")

    def read_stdout_1(self):
        while not exit:
            msg = self.process1.stdout.readline()
            self.logArea.insert(INSERT, "process_1 logging start")
            self.logArea.insert(INSERT, msg.decode())
            self.logArea.insert(END, "\n logging end")

    def read_stderror_1(self):
        while not exit:
            msg = self.process1.stderr.readline()
            self.logArea.insert(INSERT, "Error")
            self.logArea.insert(INSERT, msg.decode())
            self.logArea.insert(END, "\n Error end")

    def read_stdout_2(self):
        while not exit:
            msg = self.process2.stdout.readline()
            self.logArea.insert(INSERT, "process_2 logging start")
            self.logArea.insert(INSERT, msg.decode())
            self.logArea.insert(END, "\n logging end")

    def read_stderror_2(self):
        while not exit:
            msg = self.process2.stderr.readline()
            self.logArea.insert(INSERT, "Error")
            self.logArea.insert(INSERT, msg.decode())
            self.logArea.insert(END, "\n Error end")

    def ButtonClick(self):
        print("Button clicked")
        # set flag to 1
        self.flag = 1
        while not exit:
            if self.flag == 1:
                print("Case 1")
                self.process1.stdin.write(r'python C:\path\test1.py'.encode())
                self.process1.stdin.flush()
                self.flag = 2
            elif self.flag == 2:
                print("Case 2")
                self.process1.stdin.write(r'python C:\path\test2.py'.encode())
                self.process1.stdin.flush()
                self.flag = 3
            elif self.flag == 3:
                print("Case 3")
                self.process2.stdin.write(r'python C:\path\test3.py'.encode())
                self.process2.stdin.flush()
                self.flag = 4
            elif self.flag == 4:
                print("Case 4")
                self.process2.stdin.write(r'python C:\path\test4.py'.encode())
                self.process2.stdin.flush()
                self.flag = 5
            else:
                break

    def run(self):
        self.ws.title('Test Script')
        self.ws.mainloop()


validator = TKValidator()
validator.run()
</code></pre>
<p>But after executing this script, I can see only output like</p>
<pre><code>Button clicked
</code></pre>
<p>I want the while loop to run its steps one by one, but it is not working. Can you please suggest what the problem is here and how I can fix this to work as I expected?</p>
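<p>Two things stand out in the script above: the commands written to bash's stdin have no trailing newline, so bash never actually executes them; and the <code>while</code> loop in <code>ButtonClick</code> runs on the Tk main thread, which freezes <code>mainloop()</code>. A minimal non-GUI sketch of just the stdin/stdout plumbing, showing the newline and a reader thread (assumes <code>bash</code> is available on the machine running it):</p>

```python
import subprocess
import threading

proc = subprocess.Popen(
    ["bash"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
)

lines = []
def reader():
    for raw in proc.stdout:          # returns when bash exits and closes stdout
        lines.append(raw.decode().rstrip())

t = threading.Thread(target=reader, daemon=True)
t.start()

# The trailing "\n" is what makes bash actually execute each line
proc.stdin.write(b"echo first\n")
proc.stdin.flush()
proc.stdin.write(b"echo second\n")
proc.stdin.flush()
proc.stdin.close()                   # EOF, so bash exits
proc.wait()
t.join()
print(lines)  # ['first', 'second']
```

<p>In the Tkinter version, advance <code>self.flag</code> from the reader threads (or via <code>widget.after(...)</code>) instead of looping inside the button handler, and marshal <code>logArea.insert</code> calls back to the main thread, since Tk widgets are not thread-safe.</p>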
|
<python><stdin><popen>
|
2023-05-24 14:09:12
| 1
| 1,300
|
user2986042
|
76,324,393
| 1,934,212
|
Heatmap based on DataFrame
|
<p>The Dataframe</p>
<pre><code>import pandas as pd
import plotly.express as px
df = pd.DataFrame({"A":[1,2,3],"B":[4,5,6]})
print(df)
A B
0 1 4
1 2 5
2 3 6
</code></pre>
<p>transformed into a heatmap using</p>
<pre><code>fig = px.density_heatmap(df)
fig.show()
</code></pre>
<p>results in</p>
<p><a href="https://i.sstatic.net/KfnE7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KfnE7.png" alt="enter image description here" /></a></p>
<p>What I actually want is the index (0,1,2) on the x-axis, column names (A,B) on the y-axis, and the actual cell values represented by colors as a third dimension. How can I get that representation?</p>
|
<python><pandas><plotly>
|
2023-05-24 14:03:08
| 2
| 9,735
|
Oblomov
|
76,324,263
| 11,540,781
|
Pandas/Dask read_parquet columns case insensitive
|
<p>Can I have a <em>columns</em> argument on pd.read_parquet() that filters columns but is case-insensitive? I have files with the same columns, but some are camel case, some are all capitals, some are lowercase; it is a mess, and I can't read all columns and filter afterwards, and sometimes I have to read directly into pandas.</p>
<p>I know read_csv has a usecols argument that can be callable, so when the files are csvs I can do this: <code>pd.read_csv(filepath, usecols=lambda col: col.lower() in cols)</code></p>
<p>But read_parquet <em>columns</em> argument can't be callable, how can I do something similar?</p>
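<p>Since <code>columns</code> only takes a list, one workaround is to read just the schema first (a cheap, metadata-only operation via <code>pyarrow.parquet.read_schema</code>) and build the list yourself. The name-matching helper is plain Python; the pyarrow/pandas calls are shown in comments and assume pyarrow is the engine in use:</p>

```python
def match_columns(available, wanted):
    """Case-insensitive selection: keep file columns whose lowercase name is wanted."""
    wanted_lower = {w.lower() for w in wanted}
    return [c for c in available if c.lower() in wanted_lower]

# With a real file it would look roughly like:
# import pyarrow.parquet as pq
# names = pq.read_schema(filepath).names
# df = pd.read_parquet(filepath, columns=match_columns(names, cols))

cols = match_columns(["CustomerID", "ORDER_DATE", "amount"], ["customerid", "Amount"])
```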
|
<python><pandas><dask><parquet><dask-dataframe>
|
2023-05-24 13:50:19
| 2
| 343
|
Ramon Griffo
|
76,324,247
| 6,195,489
|
get sqlalchemy with apscheduler multithreading to work
|
<p>I have a list of jobs I am adding to an <a href="https://apscheduler.readthedocs.io/en/3.x/" rel="nofollow noreferrer">apscheduler</a> BlockingScheduler with a ThreadPoolExecutor the same size as the number of jobs.</p>
<p>The jobs I am adding use sqlalchemy and interact with the same database, but I am getting errors:</p>
<p><code>sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) database is locked</code></p>
<p>I have used a scoped_session and a sessionmaker in my base sqlalchemy set-up.</p>
<pre><code> from os.path import join, realpath
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker
db_name = environ.get("DB_NAME")
db_path = realpath(join( "data", db_name))
engine = create_engine(f"sqlite:///{db_path}", pool_pre_ping=True)
session_factory = sessionmaker(bind=engine)
Session = scoped_session(session_factory)
Base = declarative_base()
</code></pre>
<p>Then an example of the scheduled job class I add to apscheduler I have something like this:</p>
<pre><code>from app.data_structures.base import Base, Session, engine
from app.data_structures.job import Job
from app.data_structures.scheduled_job import ScheduledJob
from app.data_structures.user import User


class AccountingProcessorJob(ScheduledJob):
    name: str = "Accounting Processor"

    def __init__(self, resources: AppResources, depends: List[str] = None) -> None:
        super().__init__(resources)

    def job_function(self) -> None:
        account_dir = realpath(environ.get("ACCOUNTING_DIRECTORY"))
        Base.metadata.create_all(engine, Base.metadata.tables.values(), checkfirst=True)
        session = Session()
        try:
            # do some stuff with the session here e.g.
            # with some variables that are setup
            user = User(user_name=user_name)
            session.add(user)
            user.extend(jobs)
            session.commit()
        except:
            session.rollback()
        finally:
            Session.remove()
</code></pre>
<p>I was under the impression that using a scoped_session and a session factory would start a new session for each thread, and make it thread-safe.</p>
<p>where the User and Job are sqlalchemy orm objects, eg:</p>
<pre><code>from sqlalchemy import Boolean, Column, ForeignKey, Integer, String
from sqlalchemy.orm import Mapped, mapped_column, relationship
from sqlalchemy.sql.expression import false
from app.data_structures.base import Base
from app.data_structures.job import Job
class User(Base):
    __tablename__ = "users"

    user_name: Mapped[str] = mapped_column(primary_key=True)
    employee_number = Column(Integer)
    manager = relationship("User", remote_side=[user_name], post_update=True)
    jobs: Mapped[list[Job]] = relationship()

    def __init__(
        self,
        user_name: str,
        employee_number: int = None,
        manager: str = None,
    ) -> None:
        self.user_name = user_name
        self.employee_number = employee_number
        self.manager = manager
</code></pre>
<p>Can anyone explain what I am doing wrong, and how to go about fixing it?</p>
|
<python><sqlite><sqlalchemy><apscheduler>
|
2023-05-24 13:48:40
| 1
| 849
|
abinitio
|
76,324,028
| 19,325,656
|
Request.endpoint is none when bearer token is present
|
<p>I have an app where, before each request, I check if a JWT token is present in the request. If the token is present, I check for the user, and if I get the user I want to redirect them to the URL they wanted to access; if the token isn't present, I redirect the user to the login page.</p>
<p>The problem is that when I'm trying to redirect, request.endpoint is None.</p>
<p>The problem goes away when I'm not passing a bearer token with the request.</p>
<p>code</p>
<pre><code>view_endpoints = ['views.task', 'views.tests']

@jwt_required(optional=True)
@app.before_request
def require_authorization():
    if optional_jwt():
        username = get_jwt_identity()
        if username:
            return redirect(request_flask.endpoint)
    if request_flask.endpoint not in view_endpoints:
        logged = current_user.is_authenticated
        if not logged:
            return redirect('/login')
</code></pre>
<p>I'm passing the token via the Postman auth tab.</p>
|
<python><flask><flask-jwt-extended><flask-jwt>
|
2023-05-24 13:22:26
| 0
| 471
|
rafaelHTML
|
76,323,649
| 11,466,416
|
PyBind11 - compilation errors from several library files
|
<p>I recently created code in C++ that I would like to use in Python, so I opted for PyBind11 as it seemed straightforward. As I have never worked with this tool, I first wanted to understand and try out the basic example given in the documentation:</p>
<p><a href="https://pybind11.readthedocs.io/en/latest/basics.html" rel="nofollow noreferrer">https://pybind11.readthedocs.io/en/latest/basics.html</a></p>
<pre><code>// file test.cpp
#include <pybind11/pybind11.h>
int add(int i, int j) {
    return i + j;
}

PYBIND11_MODULE(example, m) {
    m.doc() = "pybind11 example plugin"; // optional module docstring
    m.def("add", &add, "A function that adds two numbers");
}
</code></pre>
<p>I am working on Windows 10 with Anaconda, and I have Cygwin and Visual Studio 2022 installed. I created an env and installed all required packages with pip.</p>
<p>First, I wanted to use the gcc compiler (cygwin) and found this in order to set the compiler flags:
<a href="https://stackoverflow.com/questions/60699002/how-can-i-build-manually-c-extension-with-mingw-w64-python-and-pybind11">How can I build manually C++ extension with mingw-w64, Python and pybind11?</a></p>
<p>I tried to compile the code with the following command:
<code>c++ -shared -std=c++23 -fPIC -IC:\Users\blindschleiche\Anaconda3\envs\feb2023\Include -IC:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include -Wall -LC:\Users\blindschleiche\anaconda3\Lib test.cpp -LC:\Users\blindschleiche\Anaconda3\pkgs\python-3.8.16-h6244533_2\libs -o test.pyd -lPython38</code></p>
<p>But I get a bunch of errors stemming from Python library files. Of course, I never changed these files on my own. And some of the errors seem to be "incorrect". Here is the full error message:</p>
<pre><code>In file included from C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/Python.h:156,
from C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/detail/../detail/common.h:266,
from C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/detail/../attr.h:13,
from C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/detail/class.h:12,
from C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h:13,
from test.cpp:1:
C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/fileutils.h:79:5: error: '__int64' does not name a type; did you mean '__int64_t'?
79 | __int64 st_size;
| ^~~~~~~
| __int64_t
In file included from /usr/include/sys/stat.h:22,
from C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/pyport.h:245,
from C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/Python.h:63,
from C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/detail/../detail/common.h:266,
from C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/detail/../attr.h:13,
from C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/detail/class.h:12,
from C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h:13,
from test.cpp:1:
C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/fileutils.h:80:12: error: expected ';' at end of member declaration
80 | time_t st_atime;
| ^~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/fileutils.h:80:12: error: expected unqualified-id before '.' token
80 | time_t st_atime;
| ^~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/fileutils.h:82:12: error: expected ';' at end of member declaration
82 | time_t st_mtime;
| ^~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/fileutils.h:82:12: error: expected unqualified-id before '.' token
82 | time_t st_mtime;
| ^~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/fileutils.h:84:12: error: expected ';' at end of member declaration
84 | time_t st_ctime;
| ^~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/fileutils.h:84:12: error: expected unqualified-id before '.' token
84 | time_t st_ctime;
| ^~~~~~~~
In file included from test.cpp:1:
C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h: In member function 'char* pybind11::cpp_function::strdup_guard::operator()(const char*)':
C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h:76:36: error: 'strdup' was not declared in this scope; did you mean 'strcmp'?
76 | # define PYBIND11_COMPAT_STRDUP strdup
| ^~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h:324:23: note: in expansion of macro 'PYBIND11_COMPAT_STRDUP'
324 | auto *t = PYBIND11_COMPAT_STRDUP(s);
| ^~~~~~~~~~~~~~~~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h: In member function 'void pybind11::cpp_function::initialize_generic(pybind11::cpp_function::unique_function_record&&, const char*, const std::type_info* const*, pybind11::size_t)':
C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h:76:36: error: 'strdup' was not declared in this scope; did you mean 'strcmp'?
76 | # define PYBIND11_COMPAT_STRDUP strdup
| ^~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h:610:46: note: in expansion of macro 'PYBIND11_COMPAT_STRDUP'
610 | = signatures.empty() ? nullptr : PYBIND11_COMPAT_STRDUP(signatures.c_str());
| ^~~~~~~~~~~~~~~~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h: In member function 'pybind11::class_<type_, options>& pybind11::class_<type_, options>::def_property_static(const char*, const pybind11::cpp_function&, const pybind11::cpp_function&, const Extra& ...)':
C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h:76:36: error: there are no arguments to 'strdup' that depend on a template parameter, so a declaration of 'strdup' must be available [-fpermissiv
]
76 | # define PYBIND11_COMPAT_STRDUP strdup
| ^~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h:1781:33: note: in expansion of macro 'PYBIND11_COMPAT_STRDUP'
1781 | rec_fget->doc = PYBIND11_COMPAT_STRDUP(rec_fget->doc);
| ^~~~~~~~~~~~~~~~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h:76:36: note: (if you use
-fpermissive', G++ will accept your code, but allowing the use of an undeclared name is deprecated)
76 | # define PYBIND11_COMPAT_STRDUP strdup
| ^~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h:1781:33: note: in expansion of macro 'PYBIND11_COMPAT_STRDUP'
1781 | rec_fget->doc = PYBIND11_COMPAT_STRDUP(rec_fget->doc);
| ^~~~~~~~~~~~~~~~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h:76:36: error: there are no arguments to 'strdup' that depend on a template parameter, so a declaration of 'strdup' must be available [-fpermissiv
]
76 | # define PYBIND11_COMPAT_STRDUP strdup
| ^~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\lib\site-packages\pybind11\include/pybind11/pybind11.h:1789:33: note: in expansion of macro 'PYBIND11_COMPAT_STRDUP'
1789 | rec_fset->doc = PYBIND11_COMPAT_STRDUP(rec_fset->doc);
|
</code></pre>
<p>Just for example, focus on the errors mentioned for "fileutils.h":</p>
<pre><code>C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/fileutils.h:80:12: error: expected ';' at end of member declaration
80 | time_t st_atime;
| ^~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/fileutils.h:80:12: error: expected unqualified-id before '.' token
80 | time_t st_atime;
| ^~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/fileutils.h:82:12: error: expected ';' at end of member declaration
82 | time_t st_mtime;
| ^~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/fileutils.h:82:12: error: expected unqualified-id before '.' token
82 | time_t st_mtime;
| ^~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/fileutils.h:84:12: error: expected ';' at end of member declaration
84 | time_t st_ctime;
| ^~~~~~~~
C:\Users\blindschleiche\Anaconda3\envs\feb2023\Include/fileutils.h:84:12: error: expected unqualified-id before '.' token
84 | time_t st_ctime;
| ^~~~~~~~
</code></pre>
<p>and compare the errors with the actual code within the file:</p>
<pre><code>// fileutils.h, line 70 onwards
#ifdef MS_WINDOWS
struct _Py_stat_struct {
unsigned long st_dev;
uint64_t st_ino;
unsigned short st_mode;
int st_nlink;
int st_uid;
int st_gid;
unsigned long st_rdev;
__int64 st_size;
time_t st_atime;
int st_atime_nsec;
time_t st_mtime;
int st_mtime_nsec;
time_t st_ctime;
int st_ctime_nsec;
unsigned long st_file_attributes;
unsigned long st_reparse_tag;
};
#else
# define _Py_stat_struct stat
#endif
</code></pre>
<p>There is neither a missing semicolon nor a <code>.</code> token.
So I do not understand the reason for these errors, or whether someone has already encountered such problems. I tried to find information about this issue, but most questions on this topic are about a missing <code>Python.h</code> file rather than compilation errors like these.</p>
<p>Does anyone have an idea how to resolve these errors, or experience using pybind11 on a Windows system?</p>
|
<python><c++><compiler-errors><pybind11>
|
2023-05-24 12:40:36
| 1
| 456
|
Blindschleiche
|
76,323,628
| 13,506,329
|
Inconsistent behaviour between NumPy floats and integers
|
<p>Consider the following code</p>
<pre><code># Create the 5x3 array
array1 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [101, 110, 120], [13, 14, 15]])
# Create the 8x3 array
array2 = np.array([[1, 2, 3], [7, 8, 9], [4, 5, 6], [16, 17, 18], [19, 20, 21], [10, 11, 12], [22, 23, 24], [13, 14, 15]])
# Check equality of each element in array1 with every element in array2
result = np.any(array1[:, None] == array2, axis=2)
# Convert the resulting matrix to an nx1 Boolean matrix
final_result = np.squeeze(np.any(result, axis=1, keepdims=True))
</code></pre>
<p>This code behaves exactly as expected and outputs the following</p>
<pre><code>[ True True True False True]
</code></pre>
<p>However, when I try the same thing with an array of floats, the behaviour is different.</p>
<pre><code>array3 = np.array([[90., -40.8 , -1.35],[0., 0., -1.35],[0., 0.,-1.35], [100., 0.,-1.35], [-10., 50.8,-1.35], [100., -50.8,5.], [ 29.5, 0., 5. ],[ -2.89, -50.8,-1.35], [0., 0.,0.], [0., 0.,0.], [0., 0.,0.], [0., 0.,0.], [0., 0.,0.]], dtype=float)
array4 = np.array([[90, -50.8, -1.35], [90, -50.8, 5], [90, -50.8, -1.35], [-10, 50.8, -1.35], [-10, 50.8, 5], [90, -50.8, 5]], dtype=float)
# Check equality of each element in array4 with every element in array3
result = np.any(array4[:, None] == array3, axis=2)
# Convert the resulting matrix to an nx1 Boolean matrix
final_result = np.squeeze(np.any(result, axis=1, keepdims=True))
</code></pre>
<p>This outputs</p>
<pre><code>[ True True True True True True]
</code></pre>
<p>This is wrong, because only the element at <code>array4[3]</code> should be <code>True</code> while all others should be <code>False</code>. When printing out <code>result</code> I can see the following:</p>
<pre><code>[[ True True True True True True False True False False False False
False]
[ True False False False False True True True False False False False
False]
[ True True True True True True False True False False False False
False]
[ True True True True True False False True False False False False
False]
[False False False False True True True False False False False False
False]
[ True False False False False True True True False False False False
False]]
</code></pre>
<p>From here I can infer that the output is <code>True</code> whenever at least one element of a row matches. How can I modify the code so that <code>True</code> is returned only if an entire row of one array equals an entire row of the other?</p>
<p>The expected output must have the length of <code>array4</code> and look as follows:</p>
<pre><code>[False False False  True False False]
</code></pre>
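<p>By analogy with the elementwise check, my guess is that the per-row comparison should collapse axis 2 with <code>all</code> instead of <code>any</code>, though I'm not sure this is the idiomatic way:</p>

```python
import numpy as np

array3 = np.array([[90., -40.8, -1.35], [0., 0., -1.35], [0., 0., -1.35],
                   [100., 0., -1.35], [-10., 50.8, -1.35], [100., -50.8, 5.],
                   [29.5, 0., 5.], [-2.89, -50.8, -1.35], [0., 0., 0.],
                   [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]])
array4 = np.array([[90, -50.8, -1.35], [90, -50.8, 5], [90, -50.8, -1.35],
                   [-10, 50.8, -1.35], [-10, 50.8, 5], [90, -50.8, 5]], dtype=float)

# all() over axis 2 demands that every component of a row matches;
# any() over axis 1 then asks whether at least one row of array3 matched.
row_match = np.all(array4[:, None] == array3, axis=2)
final_result = np.any(row_match, axis=1)
print(final_result)  # [False False False  True False False]
```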
|
<python><python-3.x><numpy><vectorization><numpy-ndarray>
|
2023-05-24 12:37:38
| 1
| 388
|
Lihka_nonem
|
76,323,607
| 9,039,975
|
Django ORM : Filter to get the users whose birthday week is in n days
|
<p>I am trying to write a Django query that is supposed to give me the users whose birthday week is in n days.</p>
<p>I already tried to use the <code>__week</code> lookup, but it's not working as expected:</p>
<pre><code>now = timezone.now().date()
first_day_of_next_week = (now + timedelta(days=(7 - now.weekday())))

if now + relativedelta(days=n_days) == first_day_of_next_week:
    return qs.filter(birth_date__isnull=False, birth_date__week=first_day_of_next_week.strftime("%V"))
</code></pre>
<p>Indeed, for example, if a user was born on 24/06/1997, then 24/06 does not fall in the same ISO week number in 1997 as in 2023, so I'm getting unexpected results.</p>
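<p>What I'm really trying to express is a month/day comparison rather than an ISO week number; conceptually something like this plain-Python sketch (not yet an ORM filter, and ignoring the 29 February edge case):</p>

```python
from datetime import date, timedelta

def birthday_in_week(birth_date: date, week_start: date) -> bool:
    """True if the birthday's month/day falls inside the 7 days starting at week_start."""
    week_days = {(week_start + timedelta(days=i)).strftime("%m-%d") for i in range(7)}
    return birth_date.strftime("%m-%d") in week_days

print(birthday_in_week(date(1997, 6, 24), date(2023, 6, 19)))  # True
```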
<p>Could you help me with this?</p>
|
<python><django><datetime><orm>
|
2023-05-24 12:34:44
| 2
| 875
|
Artory
|
76,323,573
| 970,872
|
reading escaped sequences from sys.stdin, bytes after escape are delayed until the next keystroke using select
|
<p>I'm trying to process keystrokes on Linux so I can handle arrow keys as well as normal alphanumeric and other keys.
This potentially simple approach using select and stdin delivers all the keys, but after pressing (for example) up-arrow, I don't get the extra characters after escape until I press another key.</p>
<p>The extra characters are there if I read them, but if they aren't (as when just pressing escape on its own), then trying to read blocks the input, and when the next character does arrive I don't know whether it was part of an escape sequence or not (unless I look at timestamps).</p>
<p>I have tried other packages but they all have problems - requiring root, or only working if there is a screen present for example.</p>
<p>I want to run this code both directly on a PC, or over ssh to a raspberry pi with a server only build.</p>
<p>Here is a trivial test program:</p>
<pre><code>#!/usr/bin/env python
import sys
import tty
import termios
import select, time
old_settings = termios.tcgetattr(sys.stdin)
ts = time.time()
try:
    tty.setraw(sys.stdin.fileno())
    while True:
        rlist, _, _ = select.select([sys.stdin], [], [], 2)
        if rlist:
            # Read a single character
            char = sys.stdin.read(1)
            if ord(char) == 27:
                print('at %6.2f ESCAPE!' % (time.time()-ts), '\x0d')
                # at this point any other characters can be read, but you can't check to see if read will block!!!
            else:
                print('at %6.2f gotta' % (time.time()-ts), char if ord(char) > 32 else '{%d}' % ord(char), '\x0d')
                if char == 'x':
                    break
finally:
    termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)
    print('byeeeee')
</code></pre>
<p>If I type 'abcde' you can see the qualifying chars after esc for uparrow don't appear until I type e. If I run the same code without select, then the timings are correct, but the code blocks waiting for all keyboard input so I cannot respond to other events.</p>
<pre><code>$ python3 inputer3.py
at 1.38 gotta a
at 2.44 gotta b
at 3.72 gotta c
at 4.88 ESCAPE!
at 9.32 gotta d
at 10.51 ESCAPE!
at 13.42 gotta [
at 13.42 gotta A
at 13.42 gotta e
</code></pre>
<p>Commenting out the <code>rlist</code>/<code>select</code> lines, it all works as expected:
pressing up-arrow gives me <code>[A</code> with almost identical timestamps:</p>
<pre><code>$ python3 inputer3.py
at 1.33 gotta a
at 2.10 gotta b
at 2.93 gotta c
at 4.66 ESCAPE!
at 5.58 gotta d
at 7.88 ESCAPE!
at 7.88 gotta [
at 7.88 gotta A
at 10.44 gotta e
</code></pre>
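<p>My working theory is that <code>sys.stdin.read(1)</code> pulls the whole escape sequence into Python's text-layer buffer, so the next <code>select</code> on the underlying fd sees nothing even though the bytes are already in the process. Reading the fd directly with <code>os.read</code> should sidestep that; here is a sketch against a pipe standing in for the terminal:</p>

```python
import os
import select

def read_pending(fd, timeout=0.05):
    """Drain every byte currently pending on fd without blocking afterwards."""
    chunks = []
    while True:
        # First wait briefly for data; once we have some, only poll (timeout 0).
        rlist, _, _ = select.select([fd], [], [], timeout if not chunks else 0)
        if not rlist:
            break
        chunks.append(os.read(fd, 32))
    return b"".join(chunks)

r, w = os.pipe()
os.write(w, b"\x1b[A")  # an up-arrow arrives from the terminal as one burst
print(read_pending(r))  # b'\x1b[A'
```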
|
<python><linux><keyboard>
|
2023-05-24 12:30:17
| 1
| 557
|
pootle
|
76,323,478
| 5,340,217
|
Numpy indexing behavior with unexpected dimension ordering
|
<pre class="lang-py prettyprint-override"><code>>> np.arange(24).reshape(2,3,4)[0,:,[2,3]].shape
(2,3)
>> np.arange(24).reshape(2,3,4)[0,[1,2],:].shape
(2,4)
</code></pre>
<p>I understand the second one, but why is the first one not <code>(3,2)</code>?</p>
<p>In the first case, the first two indices (<code>0</code> and <code>:</code>) are basic indexing, and the last one (<code>[2,3]</code>) is advanced indexing.</p>
<p>According to the <a href="https://numpy.org/doc/stable/user/basics.indexing.html#combining-advanced-and-basic-indexing" rel="nofollow noreferrer">guide</a>, the basic indexing is dealt with first (producing a <code>(3,4)</code>-shaped view), and the advanced indexing would then yield a <code>(3,2)</code>-shaped copy. It looks like the single integer index (<code>0</code>) is treated as advanced indexing, which makes no sense to me.</p>
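<p>One thing I noticed while experimenting: splitting the indexing into two steps restores the order I expected, which makes me suspect the scalar <code>0</code> is being grouped with the advanced index rather than treated as purely basic indexing:</p>

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
print(a[0, :, [2, 3]].shape)   # (2, 3) -- everything in one indexing step
print(a[0][:, [2, 3]].shape)   # (3, 2) -- basic step first, then advanced
```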
|
<python><numpy>
|
2023-05-24 12:21:10
| 0
| 422
|
Brainor
|
76,323,287
| 8,477,566
|
Is it possible to stack multiple transformations/functions in PyTorch into a single function?
|
<p>Is it possible to stack multiple transformations/functions in PyTorch into a single function? I'm ideally looking for something like this (possibly with more care taken over tensor shapes):</p>
<pre class="lang-py prettyprint-override"><code>import torch
f_stack = torch.stack([lambda x: x+1, lambda x: x*2, lambda x: x*x])
f_stack(torch.tensor([0, 0, 0]))
# >>> torch.Tensor([1, 0, 0])
f_stack(torch.tensor([3, 3, 3]))
# >>> torch.Tensor([4, 6, 9])
f_stack(torch.tensor([3, 0, 0]))
# >>> torch.Tensor([4, 0, 0])
</code></pre>
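<p>For reference, the per-element loop I'm currently using and hoping to vectorize, as a minimal sketch:</p>

```python
import torch

fns = [lambda x: x + 1, lambda x: x * 2, lambda x: x * x]

def f_stack(t):
    # Apply the i-th function to the i-th element, then stack the results.
    return torch.stack([f(x) for f, x in zip(fns, t)])

print(f_stack(torch.tensor([3, 3, 3])))  # tensor([4, 6, 9])
```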
|
<python><function><deep-learning><pytorch><vectorization>
|
2023-05-24 11:59:54
| 1
| 1,950
|
Jake Levi
|
76,323,281
| 8,671,089
|
unable to write on kafka topic created on kafka container
|
<p>I am writing integration tests and created a Kafka topic in a GitHub workflow using the docker command <code>docker exec kafka-broker kafka-topics.sh --create --bootstrap-server localhost:9093 --partitions 1 --replication-factor 1 --topic test-topic</code>. The topic is created successfully. When I use the topic in a test case to write data to it,
I get the error <code>org.apache.kafka.common.KafkaException: Failed to construct kafka producer</code>.</p>
<p>Full error message:</p>
<blockquote>
<p>WARN ClientUtils: Couldn't resolve server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka
org.apache.kafka.common.KafkaException: Failed to construct kafka
producer at
org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:440)
at
org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:291)
at
org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:274)
at
org.apache.spark.sql.kafka010.producer.InternalKafkaProducerPool.createKafkaProducer(InternalKafkaProducerPool.scala:136)
at
org.apache.spark.sql.kafka010.producer.InternalKafkaProducerPool.$anonfun$acquire$1(InternalKafkaProducerPool.scala:83)
at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:86)
at
org.apache.spark.sql.kafka010.producer.InternalKafkaProducerPool.acquire(InternalKafkaProducerPool.scala:82)
at
org.apache.spark.sql.kafka010.producer.InternalKafkaProducerPool$.acquire(InternalKafkaProducerPool.scala:198)
at
org.apache.spark.sql.kafka010.KafkaWriteTask.execute(KafkaWriteTask.scala:49)
at
org.apache.spark.sql.kafka010.KafkaWriter$.$anonfun$write$2(KafkaWriter.scala:72)
at
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at
org.apache.spark.sql.kafka010.KafkaWriter$.$anonfun$write$1(KafkaWriter.scala:73)
at
org.apache.spark.sql.kafka010.KafkaWriter$.$anonfun$write$1$adapted(KafkaWriter.scala:70)
at
org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2(RDD.scala:1011)
at
org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2$adapted(RDD.scala:1011)
at
org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2278)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136) at
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829) Caused by:
org.apache.kafka.common.config.ConfigException: No resolvable
bootstrap urls given in bootstrap.servers at
org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:89)
at
org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48)
at
org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:414)</p>
</blockquote>
<p>I have a code as bellow</p>
<pre><code>df = ...  # df creation code

df.write.format("kafka") \
    .option("kafka.bootstrap.servers", "kafka:9092") \
    .option("kafka.security.protocol", "PLAINTEXT") \
    .option("topic", "test-topic") \
    .save()
</code></pre>
<p>I want to run the test cases in CI/CD. Can someone please help me fix this?</p>
|
<python><docker><pyspark><apache-kafka>
|
2023-05-24 11:59:26
| 2
| 683
|
Panda
|
76,323,278
| 20,770,190
|
StaleDataError: DELETE statement on table 'event_services' expected to delete 2 row(s); Only 3 were matched
|
<p>I have a model named <code>Event</code> and a model named <code>ServicePartner</code>; they have a many-to-many relationship with each other through a secondary join and a secondary table named <code>EventServices</code>. In the <code>event_services</code> table I have three rows related to a specific <code>event</code>: two of them refer to <code>service_partner.id=3</code> and one refers to <code>service_partner.id=4</code>. When I call <code>event.services</code>, it returns two objects, namely <code>service_partner.id=3</code> and <code>service_partner.id=4</code>. I want to remove those relations, i.e. delete all three rows in the secondary <code>event_services</code> table, using <code>event.services.clear()</code> and/or <code>event.event_services.clear()</code>. But after <code>session.commit()</code> I get the following error:</p>
<pre><code>StaleDataError: DELETE statement on table 'event_services' expected to delete 2 row(s); Only 3 were matched.
</code></pre>
<p>Here are my models:</p>
<pre class="lang-py prettyprint-override"><code>class Event(BaseModel):
    __tablename__ = "event"

    name = Column(String(1000))
    services: Mapped[list] = relationship(
        "ServicePartner",
        secondary="event_services",
        secondaryjoin="and_(ServicePartner.id==EventServices.service_partner_id, EventServices.deleted==None)",
        back_populates="events",
    )
    event_services: Mapped[list] = relationship("EventServices", viewonly=True)


class ServicePartner(BaseModel):
    __tablename__ = "service_partner"

    name = Column(String(1000))
    events = relationship(
        "Event",
        secondary="event_services",
        secondaryjoin="and_(Event.id==EventServices.event_id, EventServices.deleted==None)",
        back_populates="services",
    )


class EventServices(BaseModel):
    __tablename__ = 'event_services'

    event_id: Mapped[int] = Column(Integer, ForeignKey("event.id"))
    service_partner_id: Mapped[int] = Column(Integer, ForeignKey("service_partner.id"))
    price: Mapped[int] = Column(Integer, default=0, server_default="0")
    service_name: Mapped[str] = Column(String(255))
    service_partner = relationship("ServicePartner", viewonly=True)
</code></pre>
|
<python><sqlalchemy>
|
2023-05-24 11:59:16
| 1
| 301
|
Benjamin Geoffrey
|
76,323,081
| 577,647
|
PEP8 between specific lines
|
<p>I have some huge files in my codebase that have many PEP 8-related issues.</p>
<p>Is there any way to analyze only specific lines with pep8, something like this (hypothetical) invocation?</p>
<pre><code>pep8 input /path/to/my-code.py --lines=100-200
</code></pre>
<p>That way I could analyze just a specific part of the code.</p>
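<p>As a stopgap I've been filtering the checker's report by line number after the fact (assuming the usual <code>file:line:col: code message</code> output format), roughly like this:</p>

```python
import re

# Sample report lines as pep8/pycodestyle would print them:
report = [
    "my-code.py:42:1: E302 expected 2 blank lines, found 1",
    "my-code.py:150:80: E501 line too long (92 > 79 characters)",
    "my-code.py:310:5: E128 continuation line under-indented for visual indent",
]

def in_range(line, lo, hi):
    # Pull the line number out of "file:line:col: ..." and range-check it.
    m = re.match(r".+?:(\d+):", line)
    return bool(m) and lo <= int(m.group(1)) <= hi

print([l for l in report if in_range(l, 100, 200)])  # only the line-150 warning
```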
|
<python><django><pep8>
|
2023-05-24 11:33:39
| 1
| 2,888
|
tolga
|
76,322,954
| 2,137,570
|
python - Beautiful soup - get specific value in html not standard tag
|
<p>Fairly new to Beautiful Soup. Trying to parse this tag:</p>
<p>html</p>
<pre><code><score-bill scoreA="86" audiencestate="upright" data-qa="score-panel" data-scoresmanager="scorebill:scoreAction" id="scoreboard" mediatype="assetseries" rating="" skeleton="panel" scoreb="98" assetstate="active">
</code></pre>
<p>Desired Results</p>
<pre><code>86
</code></pre>
<p>Question</p>
<p>How do I extract the value of the <code>scoreA</code> attribute from this tag?</p>
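<p>My current attempt looks like the following; I'm unsure whether the attribute name should be looked up as written or lowercased (<code>html.parser</code> appears to lowercase attribute names such as <code>scoreA</code>):</p>

```python
from bs4 import BeautifulSoup

html = '<score-bill scoreA="86" scoreb="98" id="scoreboard"></score-bill>'
soup = BeautifulSoup(html, "html.parser")
score = soup.find("score-bill")["scorea"]  # note: lowercased key
print(score)  # 86
```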
<p>Any help would be greatly appreciated</p>
|
<python><beautifulsoup>
|
2023-05-24 11:16:22
| 1
| 5,998
|
Lacer
|
76,322,753
| 7,424,495
|
Disable google cloud authentication python when mocking google.cloud.storage
|
<p>With the following mock of google cloud storage</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud import storage
class MockBlob:
    def download_as_string(self) -> bytes:
        return bytes("\n".join(INPUT_IDS), "utf-8")


class MockBucket:
    def get_blob(self, path: str) -> MockBlob:
        return MockBlob()


class MockStorageClient(storage.Client):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def _require_client_info(self, client_info=None):
        pass

    def _require_virtual(self):
        pass

    def get_bucket(self, bucket_or_name: str):
        return MockBucket()
</code></pre>
<p>i get the error displayed below</p>
<pre class="lang-py prettyprint-override"><code>.venv/lib/python3.8/site-packages/google/cloud/storage/client.py:173: in __init__
super(Client, self).__init__(
.venv/lib/python3.8/site-packages/google/cloud/client/__init__.py:320: in __init__
_ClientProjectMixin.__init__(self, project=project, credentials=credentials)
.venv/lib/python3.8/site-packages/google/cloud/client/__init__.py:268: in __init__
project = self._determine_default(project)
.venv/lib/python3.8/site-packages/google/cloud/client/__init__.py:287: in _determine_default
return _determine_default_project(project)
.venv/lib/python3.8/site-packages/google/cloud/_helpers/__init__.py:152: in _determine_default_project
_, project = google.auth.default()
.venv/lib/python3.8/site-packages/google/auth/_default.py:615: in default
credentials, project_id = checker()
.venv/lib/python3.8/site-packages/google/auth/_default.py:608: in <lambda>
lambda: _get_explicit_environ_credentials(quota_project_id=quota_project_id),
.venv/lib/python3.8/site-packages/google/auth/_default.py:228: in _get_explicit_environ_credentials
credentials, project_id = load_credentials_from_file(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
filename = '', scopes = None, default_scopes = None, quota_project_id = None, request = None
def load_credentials_from_file(
filename, scopes=None, default_scopes=None, quota_project_id=None, request=None
):
"""Loads Google credentials from a file.
The credentials file must be a service account key, stored authorized
user credentials, external account credentials, or impersonated service
account credentials.
Args:
filename (str): The full path to the credentials file.
scopes (Optional[Sequence[str]]): The list of scopes for the credentials. If
specified, the credentials will automatically be scoped if
necessary
default_scopes (Optional[Sequence[str]]): Default scopes passed by a
Google client library. Use 'scopes' for user-defined scopes.
quota_project_id (Optional[str]): The project ID used for
quota and billing.
request (Optional[google.auth.transport.Request]): An object used to make
HTTP requests. This is used to determine the associated project ID
for a workload identity pool resource (external account credentials).
If not specified, then it will use a
google.auth.transport.requests.Request client to make requests.
Returns:
Tuple[google.auth.credentials.Credentials, Optional[str]]: Loaded
credentials and the project ID. Authorized user credentials do not
have the project ID information. External account credentials project
IDs may not always be determined.
Raises:
google.auth.exceptions.DefaultCredentialsError: if the file is in the
wrong format or is missing.
"""
if not os.path.exists(filename):
> raise exceptions.DefaultCredentialsError(
"File {} was not found.".format(filename)
)
E google.auth.exceptions.DefaultCredentialsError: File was not found.
.venv/lib/python3.8/site-packages/google/auth/_default.py:116: DefaultCredentialsError
</code></pre>
<p>The purpose of the test is to see whether the code utilizing the bucket works. The fetched blob contains a text file of IDs separated by "\n", which is why the download function returns a static byte string.
How do I disable the credentials so that they are not an issue?</p>
|
<python><google-cloud-storage>
|
2023-05-24 10:55:32
| 1
| 1,751
|
S.MC.
|
76,322,694
| 12,875,947
|
How to update multiple dictionary key-value without using for loop
|
<p>I have a list of dictionaries with the same keys but different values.
Example:</p>
<pre><code>[{ 'Price' : 100, 'Quantity' : 3 }, { 'Price' : 200, 'Quantity' : 5 }]
</code></pre>
<p>Is there a way to update the value of a particular key in all dictionaries in one go, without using a for loop?</p>
<p>That is, is there a way to make Quantity=0 for all dictionaries in the list in one go?</p>
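<p>For reference, the plain loop I'm hoping to beat; I realize any approach still has to touch every dictionary, so I'm mainly after lower constant overhead:</p>

```python
data = [{"Price": 100, "Quantity": 3}, {"Price": 200, "Quantity": 5}]

# Straightforward loop: set Quantity to 0 in every dictionary.
for d in data:
    d["Quantity"] = 0

print(data)  # [{'Price': 100, 'Quantity': 0}, {'Price': 200, 'Quantity': 0}]
```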
<p>I am looking for performance since I have a huge list of dictionaries, and I'm under the assumption that there may be a faster way to do this than a for loop. I have gone through multiple questions on Stack Overflow but did not find a satisfactory answer.</p>
|
<python><django><pandas><dataframe><numpy>
|
2023-05-24 10:48:39
| 2
| 1,886
|
Narendra Vishwakarma
|
76,322,597
| 1,506,850
|
prevent AssertionError: daemonic processes are not allowed to have children
|
<p>I am running stuff using <code>Pool</code>/<code>multiprocessing</code>.
Whenever I call <code>Pool</code> again (nested) within one of the child processes of the main process, this error is raised:</p>
<pre><code>AssertionError: daemonic processes are not allowed to have children
</code></pre>
<p>Is there a way to detect whether code is already running inside a <code>Pool</code> worker, so I can prevent this error?</p>
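<p>What I had in mind is something like checking the current process's <code>daemon</code> flag before creating a nested <code>Pool</code>, though I'm not sure how reliable this is:</p>

```python
import multiprocessing

def inside_daemonic_worker() -> bool:
    # Pool workers are daemonic child processes; the main process is not.
    return multiprocessing.current_process().daemon

print(inside_daemonic_worker())  # False when called from the main process
```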
|
<python><multiprocessing><pool>
|
2023-05-24 10:36:49
| 1
| 5,397
|
00__00__00
|
76,322,534
| 1,714,385
|
How to remove trailing rows that contain zero of pandas DataFrame
|
<p>I have a pandas dataframe with a single column, which ends with some values being zero, like so:</p>
<pre><code>index value
0 4.0
1 34.0
2 -2.0
3 15.0
... ...
96 0.0
97 45
98 0.0
99 0.0
100 0.0
</code></pre>
<p>I would like to strip away the trailing rows that contain the zero value, producing the following dataframe:</p>
<pre><code>index value
0 4.0
1 34.0
2 -2.0
3 15.0
... ...
96 0.0
97 45
</code></pre>
<p>How can I do it by leveraging pandas's functions?</p>
<p>I know that I can check the last value of the dataframe iteratively and remove it if it's zero, but I'd rather do it in a way that leverages pandas's built-in functions, because that would be much faster.</p>
<pre><code>while df.iloc[-1, 0] == 0:
    df.drop(df.tail(1).index, inplace=True)
</code></pre>
<p>EDIT: to be clear, the dataframe may or may not contain other zeros. However, I only want to strip trailing zeros, while the other zeros should stay untouched. I have edited the example accordingly.</p>
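<p>For illustration, a vectorized sketch I'm considering: find the last non-zero row with a reversed <code>idxmax</code> and slice up to it (this assumes at least one non-zero value exists):</p>

```python
import pandas as pd

df = pd.DataFrame({"value": [4.0, 34.0, -2.0, 15.0, 0.0, 45.0, 0.0, 0.0, 0.0]})

# Index label of the last row whose value is non-zero; intermediate zeros survive.
last_nonzero = df["value"].ne(0)[::-1].idxmax()
trimmed = df.loc[:last_nonzero]
print(trimmed["value"].tolist())  # [4.0, 34.0, -2.0, 15.0, 0.0, 45.0]
```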
|
<python><pandas>
|
2023-05-24 10:28:44
| 4
| 4,417
|
Ferdinando Randisi
|
76,322,516
| 4,929,646
|
Verify Apple's signature
|
<p>I'm trying to verify a signature according to the <a href="https://developer.apple.com/documentation/storekit/skadnetwork/verifying_an_install-validation_postback#3599761" rel="nofollow noreferrer">documentation</a>. Here is an example:</p>
<pre><code># cryptography==37.0.4
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
import base64
data = b"demo data"
signature = b"demo signature"
public_key_base64 = "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEWdp8GPcGqmhgzEFj9Z2nSpQVddayaPe4FMzqM9wib1+aHaaIzoHoLN9zW4K8y4SPykE3YVK3sVqW6Af0lfx3gg=="
public_key_bytes = base64.b64decode(public_key_base64)
apple_public_key = ec.EllipticCurvePublicKey.from_encoded_point(ec.SECP256R1(), public_key_bytes)
apple_public_key.verify(
signature,
data,
ec.ECDSA(hashes.SHA256())
)
</code></pre>
<p><code>from_encoded_point</code> generates:</p>
<pre><code> raise ValueError("Unsupported elliptic curve point type")
ValueError: Unsupported elliptic curve point type
</code></pre>
<p>I also tried different approaches suggested by ChatGPT, but none of them works. Could you provide a working example, please?</p>
|
<python><cryptography><x509><public-key>
|
2023-05-24 10:26:48
| 1
| 11,422
|
Danila Ganchar
|
76,322,463
| 8,930,395
|
How to initialize a global object or variable and reuse it in every FastAPI endpoint?
|
<p>I have a class that sends notifications. Initializing it involves making a connection to a notification server, which is time-consuming. I use a background task in FastAPI to send notifications, as I don't want to delay the response because of the notification. Below is the sample code:</p>
<p><strong>file1.py</strong></p>
<pre class="lang-py prettyprint-override"><code>noticlient = NotificationClient()
@app.post("/{data}")
def send_msg(somemsg: str, background_tasks: BackgroundTasks):
result = add_some_tasks(data, background_tasks, noticlient)
return result
</code></pre>
<p><strong>file2.py</strong></p>
<pre class="lang-py prettyprint-override"><code>def add_some_tasks(data, background_tasks: BackgroundTasks, noticlient):
background_tasks.add_task(noticlient.send, param1, param2)
result = some_operation
return result
</code></pre>
<p>Here, the notification client is declared globally. I could initialize it in <strong>file2.py</strong>, inside <code>add_some_tasks</code>, but then it would be initialized on every request, which takes time. Is there any way, for example via a middleware, to reuse it across requests so that it does not need to be initialized each time?</p>
<p>Or, another approach might be to initialize the notification client in the class definition:</p>
<p><strong>file1.py</strong></p>
<pre class="lang-py prettyprint-override"><code>class childFastApi(FastAPI):
noticlient = NotificationClient()
app = childFastApi()
@app.post("/{data}")
def send_msg(somemsg: str, background_tasks: BackgroundTasks):
result = add_some_tasks(data, background_tasks, app.noticlient)
return result
</code></pre>
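<p>A framework-agnostic sketch of the underlying pattern: build the expensive client once, in a cached factory, and have every caller share the instance. <code>NotificationClient</code> here is a hypothetical stand-in for the real class:</p>

```python
from functools import lru_cache

class NotificationClient:
    """Hypothetical stand-in for the real, expensive-to-construct client."""
    def __init__(self):
        self.connected = True  # imagine a slow server handshake here

@lru_cache(maxsize=None)
def get_noticlient() -> NotificationClient:
    # constructed once, on the first call; cached afterwards
    return NotificationClient()

# every caller (endpoint, background task, dependency) gets the same instance
assert get_noticlient() is get_noticlient()
```

<p>With FastAPI specifically, the same object can also be stored on <code>app.state</code> at startup and read inside endpoints; the cached factory above works as a <code>Depends()</code> dependency as well.</p>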
|
<python><global-variables><fastapi><background-task><starlette>
|
2023-05-24 10:20:42
| 1
| 4,606
|
LOrD_ARaGOrN
|
76,322,400
| 6,752,358
|
Reuse bigquery queryJob as base query to use for further operation
|
<p>As the title says, I don't know whether it is possible to reuse the query job obtained from the execution of a query to perform additional SQL operations. Below is an example of what I mean:</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud import bigquery
client = bigquery.Client()
myquery = "SELECT * FROM mytable"
qjob = client.query(myquery)
</code></pre>
<p>qjob contains the query results. I want to use this result to perform additional filtering, in pseudo code <code>...</code></p>
<pre class="lang-py prettyprint-override"><code>...
mynewquery = "SELECT c1 FROM <qjob> WHERE C1=1"
qjob2 = client.query(mynewquery)
</code></pre>
<p>Hope it is clear. This is somehow similar to the construct</p>
<pre class="lang-sql prettyprint-override"><code>WITH <query name> AS (SELECT ...)
SELECT a FROM <query name> WHERE...
</code></pre>
|
<python><sql><python-3.x><google-bigquery>
|
2023-05-24 10:12:38
| 1
| 359
|
lordcenzin
|
76,322,383
| 11,452,928
|
How does JAX use the LAX-backend implementation of functions?
|
<p>I need to compute the Kronecker product of two arrays, and I want to test whether doing it with JAX is faster than doing it with NumPy.</p>
<p>Now, in my NumPy code there is <code>res = numpy.kron(x1,x2)</code>; in JAX there is <code>jax.numpy.kron(x1,x2)</code>, but how can I use it properly?
My doubts are:</p>
<ul>
<li><p>is it sufficient to replace <code>numpy</code> with <code>jax.numpy</code> as follows: <code>res = jax.numpy.kron(x1,x2)</code>?</p>
</li>
<li><p>should I first sent x1 and x2 to the device using <code>x1_dev = jax.device_put(x1)</code> and after that run <code>res = jax.numpy.kron(x1_dev,x2_dev)</code>?</p>
</li>
<li><p>should I add <code>jax.block_until_ready()</code> to the <code>jax.numpy.kron()</code> call?</p>
</li>
</ul>
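<p>For context: <code>jax.numpy.kron</code> is designed as a drop-in for <code>numpy.kron</code>, so swapping the module prefix is enough for correctness; <code>device_put</code> is an optional optimization that avoids re-transferring host arrays, and <code>block_until_ready()</code> matters mainly when timing, because JAX dispatches work asynchronously. The NumPy semantics being mirrored, for reference:</p>

```python
import numpy as np

# kron scales each element of `a` by the whole of `b`
a = np.array([[1, 2]])
b = np.array([[0, 10]])
print(np.kron(a, b).tolist())  # [[0, 10, 0, 20]]
```

<p>When benchmarking the JAX version, call <code>.block_until_ready()</code> on the result inside the timed region so that dispatch time alone is not measured.</p>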
|
<python><jax>
|
2023-05-24 10:10:32
| 1
| 753
|
fabianod
|
76,322,334
| 10,413,428
|
typing.NamedTuple as type annonation for list does not work
|
<p>I thought I could specify a type for the elements of a list as follows:</p>
<pre class="lang-py prettyprint-override"><code>import typing
CustomType = typing.NamedTuple("CustomType", [("one", str), ("two", str)])
def test_function(some_list: list[CustomType]):
print(some_list)
if __name__ == "__main__":
test_list = list[
CustomType(one="test", two="test2"),
CustomType(one="Test", two="Test2")
]
test_function(some_list=test_list)
</code></pre>
<p>But this warns me, at the test_function call that <code>Expected type 'list[CustomType]', got 'Type[list]' instead</code>.</p>
<p>The following works, but I am not sure why it must be written this way:</p>
<pre class="lang-py prettyprint-override"><code>test_list: list[CustomType] = list([
CustomType(one="test", two="test2"),
CustomType(one="Test", two="Test2")
])
</code></pre>
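<p>The warning comes from the difference between subscription and a literal: <code>list[...]</code> subscripts the generic type, producing a <code>types.GenericAlias</code> even when instances are placed inside the brackets, whereas <code>[...]</code> on its own builds a list. A small sketch:</p>

```python
import types

alias = list[int]          # subscription builds a typing object, not a list
assert isinstance(alias, types.GenericAlias)

items: list = [1, 2]       # a list literal uses bare brackets
assert isinstance(items, list)
```

<p>So <code>test_list = [CustomType(...), CustomType(...)]</code> with plain brackets is what the annotation <code>list[CustomType]</code> expects; the <code>list([...])</code> form from the question works for the same reason, just with an extra copy.</p>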
|
<python><python-3.x>
|
2023-05-24 10:04:13
| 1
| 405
|
sebwr
|
76,322,177
| 2,966,197
|
streamlit markdown color change not working
|
<p>My Streamlit markdown text all renders white and I want it to be black. Here is my markdown code, but it just outputs everything as it is:</p>
<pre><code>st.markdown(
"""
<span style='color:black'>This is First page
You can:
- Say Hi
- Send email
- Contact via form </span>
"""
)
</code></pre>
<p>I also tried to use <code>:black[]</code> in the markdown text and it was still the same.
My Streamlit version is <code>1.21.0</code>.</p>
<p><strong>Note</strong>: I do have a background image on my frame</p>
|
<python><streamlit>
|
2023-05-24 09:46:12
| 1
| 3,003
|
user2966197
|
76,322,147
| 8,219,760
|
Overridden `Process.run` does not execute asynchronously
|
<p>Having subclassed <code>Process.run</code></p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing as mp
import time
DELAY = 2
class NewProcess(mp.get_context().Process):
def run(self) -> None:
# add new kwarg to item[4] slot
old_que = self._args[0]
new_que = mp.SimpleQueue()
while not old_que.empty():
item = old_que.get()
new_que.put(
(
item[0],
item[1],
item[2], # Function
item[3], # Arguments
item[4] # Keyword arguments
| {
"message": "Hello world!",
},
)
)
# Recreate args
self._args = new_que, *self._args[1:]
# Continue as normal
super().run()
def delay(*args, **kwargs):
time.sleep(DELAY)
return args, kwargs
if __name__ == "__main__":
context = mp.get_context()
context.Process = NewProcess
with context.Pool(2) as pool:
responses = []
start = time.perf_counter()
for _ in range(2):
resp = pool.apply_async(
func=delay,
args=tuple(range(3)),
kwds={},
)
responses.append(resp)
for resp in responses:
resp.wait()
responses = [resp.get() for resp in responses]
total = time.perf_counter() - start
assert total - DELAY < 1e-2, f"Expected to take circa {DELAY}s, took {total}s"
assert responses == (
expected := list(
(
(0, 1, 2),
{
"message": "Hello world!"
}
)
)
), f"{responses=}!={expected}"
</code></pre>
<p>I would expect the <code>delay</code> function to execute asynchronously, taking circa <code>DELAY</code> seconds in total. However, it does not. The script fails with:</p>
<pre><code>Traceback (most recent call last):
File "/home/vahvero/Desktop/tmp.py", line 54, in <module>
assert total - DELAY < 1e-2, f"Expected to take circa {DELAY}s, took {total}s"
AssertionError: Expected to take circa 2s, took 4.003754430001209s
</code></pre>
<p>Why my changes to <code>run</code> cause linear rather than parallel processing?</p>
|
<python><multiprocessing>
|
2023-05-24 09:42:41
| 1
| 673
|
vahvero
|
76,322,128
| 710,955
|
PyO3 - How to return enums to python module?
|
<p>I'm trying to build a Python package from Rust using PyO3. Right now I'm stuck trying to return a Rust <code>enum</code> type to Python.</p>
<p>I have a simple enum like so:</p>
<pre class="lang-rust prettyprint-override"><code>pub enum Lang {
Deu,
Eng,
Fra
}
</code></pre>
<p>And in <code>lib.rs</code></p>
<pre class="lang-rust prettyprint-override"><code>#[pyfunction]
fn detect_language(text: &str) -> PyResult<????> {
// Do some stuff ....
res:Lang = Do_some_stuff(text)
Ok(res)
}
#[pymodule]
fn pymylib(_py: Python, m: &PyModule) -> PyResult<()> {
m.add_function(wrap_pyfunction!(detect_language, m)?)?;
Ok(())
}
</code></pre>
<p>In Python code</p>
<pre class="lang-py prettyprint-override"><code>from pymylib import detect_language
res=detect_language('Ceci est un test')
print(res) # Lang:Fra ???
</code></pre>
|
<python><rust><enums><pyo3>
|
2023-05-24 09:40:12
| 2
| 5,809
|
LeMoussel
|
76,322,054
| 294,974
|
Updating a boolean property in SQLAlchemy(2.x) model while satisfying MyPy
|
<p>I am trying to update a boolean property in my SQLAlchemy model and I want to make sure MyPy is satisfied with the code as well. However, MyPy is giving me an error when I try to update the property. Here's the error message:</p>
<pre><code>dashing/db/dao/workspace_dao.py:69: error: Incompatible types in assignment
(expression has type "bool", variable has type "Column[bool]") [assignment]
workspace.is_archived = new_value
^~~~~~~~~
Found 1 error in 1 file (checked 57 source files)
</code></pre>
<p>This is the function I am using to update the property:</p>
<pre class="lang-py prettyprint-override"><code> async def archive_workspace(
self,
workspace_id: UUID,
new_value: bool,
) -> Optional[WorkspaceModel]:
workspace = await self.session.get(WorkspaceModel, workspace_id)
if workspace is None:
return None
workspace.is_archived = new_value # ← MyPy does not like this assignment
await self.session.commit()
return workspace
</code></pre>
<p>And here is my model definition:</p>
<pre class="lang-py prettyprint-override"><code>class WorkspaceModel(Base):
__tablename__ = "workspace"
...
is_archived = Column(Boolean, unique=False, default=False)
</code></pre>
<p>What is the correct way to update the boolean property in my SQLAlchemy model so that MyPy does not raise any errors?</p>
|
<python><sqlalchemy><fastapi><mypy>
|
2023-05-24 09:32:10
| 1
| 1,560
|
carloe
|
76,322,024
| 5,406,764
|
Sympy count_ops returning incorrect result?
|
<p>I'm doing a simple test with sympy (python=3.10, sympy=1.12) and I don't understand why the result seems wrong (results below):</p>
<p>Code:</p>
<pre><code>from sympy import *
x0, x1, x2 = symbols('x0 x1 x2')
print(count_ops(2 * (x0 + x1) * x2, visual=True))
</code></pre>
<p>Result:</p>
<pre><code>Add + 3*MUL
</code></pre>
<p>I would expect 2 multiplications and 1 add... Any ideas on what I'm missing? Thanks!</p>
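<p>A likely explanation (hedged, based on SymPy's default automatic evaluation): SymPy distributes a numeric coefficient over a sum when the expression is built, so <code>2*(x0 + x1)</code> is stored as <code>2*x0 + 2*x1</code>, and the full expression then contains three multiplications. A quick check:</p>

```python
from sympy import symbols, count_ops

x0, x1, x2 = symbols("x0 x1 x2")

inner = 2 * (x0 + x1)
print(inner)            # 2*x0 + 2*x1  (the coefficient is distributed automatically)

expr = inner * x2
print(count_ops(expr))  # 4  (1 ADD + 3 MUL)
```

<p>Building the expression with <code>evaluate=False</code> (e.g. via <code>Mul(2, x0 + x1, x2, evaluate=False)</code>) keeps the un-distributed form if the 2-multiplication count is needed.</p>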
|
<python><sympy><symbolic-math><algebra>
|
2023-05-24 09:28:22
| 2
| 1,825
|
user5406764
|
76,321,982
| 10,992,997
|
Issue using poetry to package python code (No file/folder found for package ...)
|
<p>I have written a number of functions that help to ingest raw data from a research device.</p>
<p>There are two groups of functions:</p>
<p>Those that help to normalise timestamps</p>
<p>Those that actually read in/reshape the data</p>
<p>I have set the project up in a git repo, and there are two subdirectories, one for each of the two groups
of functions:</p>
<pre><code>my_package
|- __init__.py
|- times
|  |- __init__.py
|  |- time_1.py
|  |- time_2.py
|- reshaping
|  |- __init__.py
|  |- read_raw.py
|  |- flatten.py
|- README.md
|- .gitattributes
|- .gitignore
|- licence
</code></pre>
<p>(sorry for the messy description of the folders; I don't know how to make the nice folder diagrams you see on sites like this)</p>
<p>I'd like to use poetry to package this up, and I've seen that poetry is a good tool for doing this, but when I go through the guides I can find online I get a set of errors and I can't seem to figure out what I'm doing wrong.</p>
<p>I cd into the my_package folder and run <code>poetry init</code> and go through the process. The only dependency is pandas and the only dev dependency is jupyter. This produces the pyproject.toml file without error. Then I can run <code>poetry install</code> which produces the <code>poetry.lock</code> file and seems to install the dependencies without issue.</p>
<p>However, when I try to run <code>poetry build</code> I get the error</p>
<pre><code>
building my_package(0.1.0)
No file/folder found for package my_package
</code></pre>
<p>even though I'm <em>in the folder</em> <code>~\github\my_package</code></p>
<p>I've tried finding the solution but a lot of the walkthroughs I've found aren't that helpful, and Friend_computer also hasn't been able to help.</p>
<p>Can anyone guide me on this? I'd really like to publish my first package.</p>
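<p>A hedged guess at the cause, given the layout above: <code>poetry build</code> looks for a folder or module named after the project <em>next to</em> <code>pyproject.toml</code>, and here <code>poetry init</code> was run inside the package folder itself, so there is no <code>my_package/</code> subdirectory for it to find. One way out is to keep <code>pyproject.toml</code> one level up, with the importable code in a <code>my_package/</code> subfolder, and declare the package explicitly:</p>

```toml
# pyproject.toml, at the project root (one level above the importable package)
[tool.poetry]
packages = [{ include = "my_package" }]
```

<p>An equivalent alternative is the src layout, declared with <code>packages = [{ include = "my_package", from = "src" }]</code>.</p>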
|
<python><package><python-poetry>
|
2023-05-24 09:23:51
| 0
| 581
|
KevOMalley743
|
76,321,902
| 3,835,843
|
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 871920D1991BC93C returns error
|
<p>In the <strong>Dockerfile</strong> it has written like this:</p>
<pre><code>FROM osgeo/gdal:ubuntu-small-3.6.3
RUN apt-get install --no-install-recommends -y gnupg
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 871920D1991BC93C
</code></pre>
<p>While I built, it shows this error:</p>
<pre><code> => [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.13kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/osgeo/gdal:ubuntu-small-3.6.3 4.8s
=> [auth] osgeo/gdal:pull token for registry-1.docker.io 0.0s
=> [ 1/13] FROM docker.io/osgeo/gdal:ubuntu-small-3.6.3@sha256:398d5aca0ca88295c13ada87e0f382ed409dac13a952461187be72ce3c376513 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 106.80kB 0.0s
=> CACHED [ 2/13] RUN apt-get install --no-install-recommends -y gnupg 0.0s
=> ERROR [ 3/13] RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 871920D1991BC93C
------
> [ 3/13] RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 871920D1991BC93C:
#7 0.565 Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
#7 0.586 Executing: /tmp/apt-key-gpghome.sBrxtWnhQq/gpg.1.sh --keyserver keyserver.ubuntu.com --recv-keys 871920D1991BC93C
#7 10.62 gpg: keyserver receive failed: Server indicated a failure
------
executor failed running [/bin/sh -c apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 871920D1991BC93C]: exit code: 2
</code></pre>
<p>[Note: I am using WINDOWS 11 Pro]</p>
<p>Does anybody have an idea how to fix it?</p>
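<p>For what it's worth, the failure (<code>gpg: keyserver receive failed: Server indicated a failure</code>) is the keyserver handshake failing, not a Dockerfile syntax problem. A commonly suggested workaround, hedged because it depends on the local network and firewall, is to reach the keyserver over the HKP port-80 endpoint:</p>

```dockerfile
# try the keyserver over port 80, which some firewalls/proxies let through
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 871920D1991BC93C
```

<p>If the build host sits behind a proxy, adding <code>--keyserver-options http-proxy=...</code> may also be needed.</p>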
|
<python><docker>
|
2023-05-24 09:15:02
| 1
| 6,588
|
Arif
|
76,321,801
| 8,512,941
|
Type hints for lxml.etree._Element
|
<p>I often work with <code>lxml</code>, and my IDE (PyCharm 2021.2.2) warns me about accessing a protected member of the module in my type hints, because many of my functions use <code>lxml.etree._Element</code> as inputs or outputs. But since many functions of <code>lxml.etree</code> return <code>_Element</code> objects, I think it must be normal to use them.</p>
<p>For example this code</p>
<pre><code>from lxml import etree
root = etree.parse(r"C:\xmlfile.xml").getroot()
print(type(root))
</code></pre>
<p>prints : <code><class 'lxml.etree._Element'></code></p>
<p><br/><br/></p>
<p>What should I do to get rid of those warnings? Should I configure my IDE to ignore them, or is there a cleaner way to write the type hints?</p>
<p><a href="https://i.sstatic.net/j3Qob.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j3Qob.jpg" alt="IDE warning" /></a></p>
|
<python><lxml><python-typing>
|
2023-05-24 09:03:52
| 0
| 349
|
Arlaf
|
76,321,764
| 188,331
|
How to return some float values with zip() in Python function? TypeError: 'float' object is not iterable
|
<p>I wrote a simple function that should return more than one float value at the same time:</p>
<pre><code>def test_zip_float():
return zip(1.234, 3.456)
print(test_zip_float())
</code></pre>
<p>It results in:</p>
<blockquote>
<p>TypeError: 'float' object is not iterable</p>
</blockquote>
<p>I can rewrite it as:</p>
<pre><code>def test_zip_float():
return {"val1": 1.234, "val2": 3.456}
print(test_zip_float())
</code></pre>
<p>but it looks clumsy.</p>
<p>What is the best way to return multiple float values in a function?</p>
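<p>For completeness, the idiomatic way to return several values is a plain tuple, which the caller can unpack; <code>zip</code> is for pairing iterables, not for returning values:</p>

```python
def test_vals():
    return 1.234, 3.456        # a tuple of floats

v1, v2 = test_vals()           # tuple unpacking at the call site
print(v1, v2)                  # 1.234 3.456
```

<p>The dict form from the question is still useful when the values deserve names at the call site; for more than a couple of fields, a <code>NamedTuple</code> or <code>dataclass</code> gives both names and unpacking.</p>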
|
<python><python-3.x><floating-point><return>
|
2023-05-24 08:59:34
| 1
| 54,395
|
Raptor
|
76,321,503
| 7,791,963
|
In Python, how to read and count all values from an Excel sheet only if the cell has no color?
|
<p>I have multiple Excel sheets whose structure I am not able to change. Each sheet contains multiple tables of different structures, so it's hard to parse them automatically. However, the cells of interest are the white cells in all these tables; all other cells, such as headers and extra metadata, are colored.</p>
<p>So, is there a way to extract all cells from an Excel sheet that fulfil the condition of being white, and count the occurrence of each value?</p>
|
<python><pandas><excel>
|
2023-05-24 08:27:01
| 2
| 697
|
Kspr
|
76,321,501
| 20,051,041
|
How to handle AioHttpClient in Pytest?
|
<p>I am writing my first test (a unit test with Pytest) for code that uses AioHttpClient with BasicAuth (with username and password).
My function's structure:</p>
<pre><code>async def example_function(some parameters):
(...)
try:
synth_url = os.getenv("SYNTH_URL", 'https://...')
async with AioHttpClient.session().post(synth_url, data=data_in, auth=aiohttp.BasicAuth(
'username', os.getenv("MY_PASSWORD"))) as resp:
if resp.ok:
return await resp.read()
else:
(...)
</code></pre>
<p>What is the best practice when testing such function? I have tried to use monkeypatch or mock the AioHttpClient but without any success so far. Thank you.</p>
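<p>One stdlib-only sketch (names are hypothetical, and it assumes the session object can be injected or monkeypatched in place of <code>AioHttpClient.session()</code>): since Python 3.8, <code>MagicMock</code> configures async magic methods such as <code>__aenter__</code> as <code>AsyncMock</code>, so <code>async with session.post(...)</code> just works against a mock:</p>

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

async def example_function(session, data_in):
    # simplified stand-in for the function under test
    async with session.post("https://synth.example", data=data_in) as resp:
        if resp.ok:
            return await resp.read()
        return None

# configure the fake response reached through post() -> __aenter__()
session = MagicMock()
resp = session.post.return_value.__aenter__.return_value
resp.ok = True
resp.read = AsyncMock(return_value=b"payload")

result = asyncio.run(example_function(session, data_in=b"{}"))
print(result)  # b'payload'
```

<p>With pytest, <code>monkeypatch.setattr</code> can install the mock session on <code>AioHttpClient.session</code>; the <code>aioresponses</code> library is another common option when the real <code>aiohttp.ClientSession</code> must stay in place.</p>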
|
<python><mocking><pytest><aiohttp><pytest-aiohttp>
|
2023-05-24 08:26:47
| 1
| 580
|
Mr.Slow
|
76,321,460
| 2,966,197
|
Llamaindex cannot persist index to Chroma DB and load later
|
<p>I am creating 2 apps using <code>Llamaindex</code>. One allows me to create and store indexes in <code>Chroma DB</code> and other allows me to later load from this storage and query.</p>
<p>Here is my code to load and persist data to ChromaDB:</p>
<pre><code>import chromadb
from chromadb.config import Settings
chroma_client = chromadb.Client(Settings(
chroma_db_impl="duckdb+parquet",
persist_directory=".chroma/" # Optional, defaults to .chromadb/ in the current directory
))
chroma_collection = chroma_client.get_or_create_collection("quickstart")
def chromaindex():
UnstructuredReader = download_loader("UnstructuredReader")
loader = UnstructuredReader()
documents = loader.load_data(file= Path())
# create chroma vector store
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = GPTVectorStoreIndex.from_documents(documents, storage_context=storage_context)
index.storage_context.persist(vector_store_fname = 'demo')
</code></pre>
<p>Here is my code to later load the storage context and query:</p>
<pre><code>import chromadb
from chromadb.config import Settings
chroma_client = chromadb.Client(Settings(
chroma_db_impl="duckdb+parquet",
persist_directory=".chroma/" # Optional, defaults to .chromadb/ in the current directory
))
chroma_collection = chroma_client.get_collection("quickstart")
def chroma_ans(question):
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
sc = StorageContext.from_defaults(vector_store=vector_store)
index2 = load_index_from_storage(sc)
query_engine = index2.as_query_engine()
response = query_engine.query("What did the author do growing up?")
return response
</code></pre>
<p>When I run the 2nd code to query, I get <code>ValueError: No index in storage context, check if you specified the right persist_dir.</code>. I am not sure where I am making the mistake. All I want to do is: in the first app, create the <code>storage context</code> and <code>index</code> and store them using <code>Chroma DB</code>, and in the second app load them again to query.</p>
<p>My <code>llamindex</code> version is <code>0.6.9</code></p>
|
<python><llama-index><chromadb>
|
2023-05-24 08:21:49
| 1
| 3,003
|
user2966197
|
76,321,441
| 18,987,572
|
How to open a Python 3.x instance in LabVIEW2016?
|
<p>For research purposes I have written a script in Python 3, and now this method should be included into the production cycle, which is running LabVIEW2016.</p>
<p>The script takes a matrix corresponding to a grayscale image and a float as command line arguments and prints a float after doing some image processing. Timing this with <code>cProfile</code>/<code>snakeviz</code> reveals that invoking the script takes around 1 second - 90% of this however is taken up by importing <code>numpy</code> and functions from <code>cv2</code> and <code>scikit-image</code>. This is crucial, because running this via command line from LabVIEW thousands of times is simply too slow. How can I avoid doing the imports every single time?</p>
<ul>
<li>The Python node in LabVIEW only exists from LabVIEW2018 onwards, and upgrading is out of the question at the moment</li>
<li><a href="http://docs.enthought.com/python-for-LabVIEW/" rel="nofollow noreferrer">Python Integration Toolkit</a> doesn't offer any new licences since it is end of life</li>
<li><a href="https://labpython.sourceforge.net/" rel="nofollow noreferrer">LabPython</a> is very old and only available for Python 2</li>
</ul>
<p>My next thought was to incorporate some precompiled version of the script, but that doesn't work with the command line arguments.</p>
<p>Is there some way to call a Python 3 script with inputs from LabVIEW2016 without having to run the imports every single time?</p>
|
<python><labview>
|
2023-05-24 08:19:46
| 0
| 445
|
king_of_limes
|
76,321,276
| 49,189
|
How can I update LinkedIn Basic profile in Python
|
<p>I am trying to update LinkedIn profile using this Python code :-</p>
<pre><code>import requests
access_token = "xxx"
profile_id = "me" # "me" refers to the currently authenticated user's profile
new_headline = "New Headline Text"
new_summary = "New Summary Text"
def update_profile():
endpoint_url = f"https://api.linkedin.com/v2/me"
headers = {
"Authorization": f"Bearer {access_token}",
"Content-Type": "application/json"
}
payload = {
"headline": new_headline,
"summary": new_summary
}
response = requests.patch(endpoint_url, headers=headers, json=payload)
if response.status_code == 200:
print("Profile updated successfully.")
else:
print("Error updating profile.")
print(response.text)
if __name__ == '__main__':
update_profile()
</code></pre>
<p>The authorisations I have are :-</p>
<p><a href="https://i.sstatic.net/RY854.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RY854.png" alt="enter image description here" /></a></p>
<p>But I get this error :-</p>
<pre><code>"message": "java.lang.IllegalArgumentException: No enum constant com.linkedin.restli.common.HttpMethod.PATCH",
</code></pre>
<p>How to fix this error ?</p>
<p>This is my Python environment</p>
<p><a href="https://i.sstatic.net/PMt9b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PMt9b.png" alt="enter image description here" /></a></p>
|
<python><rest><linkedin-api><rest.li>
|
2023-05-24 07:57:23
| 2
| 2,565
|
Chakra
|
76,321,217
| 1,574,551
|
Create a list of start date and end date in python with first day of month and last day of month
|
<p>I need a list of start date/end date pairs, each starting with the first day of a month and ending with the last day of that month. The output should look like below:</p>
<pre><code> start_date = datetime.date(2023, 1, 1)
 Output:
 2023-01-01 2023-01-31
 2023-02-01 2023-02-28
 2023-03-01 2023-03-31
 2023-04-01 2023-04-30
 2023-05-01 2023-05-31
</code></pre>
<p>Below is what I tried:</p>
<pre><code> from datetime import datetime
import datetime
start_date = datetime.date(2023, 1, 1)
today= datetime.date.today()
date_list = []
def date_range(start_date):
return [{"startDate": start_date.strftime("%Y-%m-%d"),
"endDate": (start_date +
datetime.timedelta(days=30.9)).strftime("%Y-%m-%d")}]
while start_date<today:
date_list.append(date_range(start_date))
start_date= (start_date + datetime.timedelta(days=30.9))
for date_item in date_list:
date_range = date_item[0]
startDate = date_range['startDate']
endDate = date_range['endDate']
print(startDate,endDate)
</code></pre>
<p>Thank you</p>
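<p>A sketch of one approach (not from the question) using <code>calendar.monthrange</code> to get the true last day of each month; the fixed 30.9-day step above drifts because month lengths vary:</p>

```python
import calendar
import datetime

def month_ranges(start_date, end_date):
    """Yield (first_day, last_day) pairs for each month in the span."""
    current = start_date.replace(day=1)
    while current <= end_date:
        # monthrange returns (weekday_of_first_day, number_of_days)
        last = current.replace(
            day=calendar.monthrange(current.year, current.month)[1]
        )
        yield current, last
        current = last + datetime.timedelta(days=1)  # first of the next month

for first, last in month_ranges(datetime.date(2023, 1, 1), datetime.date(2023, 5, 1)):
    print(first, last)  # e.g. 2023-01-01 2023-01-31
```

<p>If pandas is already a dependency, <code>pd.date_range(..., freq="MS")</code> paired with <code>freq="M"</code> (month end) gives the same pairs in two calls.</p>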
|
<python><list>
|
2023-05-24 07:48:51
| 4
| 1,332
|
melik
|
76,321,120
| 11,720,193
|
Error encountered with POST request to Botify
|
<p>I am trying to send a <code>POST</code> request to <code>Botify</code> and receive a response as described in the documentation <a href="https://developers.botify.com/docs/export-job-reference" rel="nofollow noreferrer">here</a>. However, the job keeps failing. I haven't used Botify before, so I'm requesting help in fixing the failure:</p>
<p><strong>Python program</strong>:</p>
<pre><code>import requests
import json
token = 'XXXXXXXXXXXXXXXXXXXXXXX'
url = "https://api.botify.com/v1/jobs"
json = {
"job_type": "export",
"payload": {
"username": "XYZ0",
"project": "XYZ.com",
"export_size": 50,
"formatter": "csv",
"formatter_config": {
"delimiter": ",",
"print_delimiter": False,
"print_header": True,
"header_format": "verbose"
},
"connector": "direct_download",
"extra_config": {},
"query": {
"collections": ["crawl.20230509"],
"query": {
"dimensions": ["url",
"crawl.20230509.date_crawled",
"crawl.20230509.content_type",
"crawl.20230509.http_code"
],
"metrics": [],
"sort": [1]
}
}
}
}
headers = {
# "accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Token {token}"
}
response = requests.post(url, json=json, headers=headers).json()
print(response)
</code></pre>
<p>Note: I'm unsure as to what to put in the <code>extra_config</code> element above, so I passed an empty dictionary.</p>
<p><strong>Error</strong>:</p>
<pre><code>{'status': 400, 'error': {'error_code': '1020', 'message': 'Badly formatted request', 'error_detail': {'payload': {'query': {'collections': ['Unknown collection "crawl.20230509".']}}}}}
</code></pre>
<p><strong>EDIT</strong>:
The following is a different but very similar sample that works. I am not sure where I am going wrong in my version:</p>
<pre><code>url = "https://api.botify.com/v1/jobs"
token = 'XXXXXXXXXXXXXXXXXXXXXXXX'
headers = {
#"accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Token {token}"
}
data = """
{
"job_type": "export",
"payload": {
"username": "XXXX0",
"project": "XXXXX.com",
"connector": "direct_download",
"formatter": "csv",
"formatter_config": {
"delimiter": ",",
"print_delimiter": false,
"print_header": true,
"header_format": "verbose"
},
"export_size": 50,
"query": {
"collections": ["crawl.20250515"],
"query": {
"dimensions": ["url",
"crawl.20221012.date_crawled",
"crawl.20221012.content_type",
"crawl.20221012.http_code"
],
"metrics": [],
"sort": [1]
}
</code></pre>
<p>Can someone please help me fix the issue? Any help is appreciated. Thanks.</p>
|
<python><python-requests>
|
2023-05-24 07:36:55
| 0
| 895
|
marie20
|
76,321,113
| 619,774
|
Access denied when trying to access USB HID device via pyusb
|
<p>I want to send data to a USB HID device with Python. Here's my script:</p>
<pre><code>import usb.core
import usb.util
# Device constants
VENDOR_ID = 0x1b1c
PRODUCT_ID = 0x0a6b
# Find our device
dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
# Set the active configuration.
# With no arguments, the first configuration will be the active one
dev.set_configuration()
</code></pre>
<p>On the last line (set_configuration) I get the following error:</p>
<blockquote>
<p>usb.core.USBError: [Errno 13] Access denied (insufficient permissions)</p>
</blockquote>
<p>I already created the following udev rule, but that did not help:</p>
<p><strong>99-corsair.py:</strong></p>
<pre><code># allow r/w access by all users
SUBSYSTEMS=="usb", ATTRS{idVendor}=="1b1c", ATTRS{idProduct}=="0a6b", MODE="0666"
</code></pre>
<p>The same error keeps coming up.
What else is necessary to allow USB-HID access via pyusb?</p>
|
<python><linux><usb><manjaro>
|
2023-05-24 07:36:08
| 1
| 9,041
|
Boris
|
76,321,038
| 2,717,424
|
Pandas: Referring to previous calculation results within the same calculation step
|
<p>I have a Pandas DataFrame and want to calculate a new column based on values of the current row and the previous one.
The following example, where I just add the current value and the previous value, works fine:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(data=[1,2,3,4,5], columns=["old"])
df.loc[:, ["new"]] = df["old"] + df["old"].shift()
df
</code></pre>
<p><a href="https://i.sstatic.net/XkhqC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XkhqC.png" alt="enter image description here" /></a></p>
<p>However, when I try to use a previous value of the new column for the calculation of further values of the new column, this only works for calculations that refer to values of the new column that exist before the calculation started:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(data=[1,2,3,4,5], columns=["old"])
df.loc[0, ["new"]] = 0
df.loc[1:, ["new"]] = df["old"] + df["new"].shift()
df
</code></pre>
<p><a href="https://i.sstatic.net/lVvqK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lVvqK.png" alt="enter image description here" /></a></p>
<p>As you can see, it is able to calculate the value of the second row, since I defined the value of <code>"new"</code> for the first row manually, but it is not able to refer to the result of the second row when calculating values for the third row and onwards.</p>
<p>How can I achieve this in the most elegant and efficient way? Basically, I am looking for a more elegant and efficient way of this:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(data=[1,2,3,4,5], columns=["old"])
df.loc[0, ["new"]] = 0
for i in range(1, len(df)):
df.loc[i:, ["new"]] = df["old"] + df["new"].shift()
</code></pre>
<p>This results in the correct output, but I think the loop is kind of inefficient.</p>
<p><a href="https://i.sstatic.net/MzkNr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MzkNr.png" alt="enter image description here" /></a></p>
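<p>For this particular recurrence (<code>new[i] = old[i] + new[i-1]</code> with <code>new[0] = 0</code>) the loop collapses into a cumulative sum, which pandas vectorizes. A sketch:</p>

```python
import pandas as pd

df = pd.DataFrame(data=[1, 2, 3, 4, 5], columns=["old"])
# new[i] = old[1] + old[2] + ... + old[i]  ==  cumsum(old)[i] - old[0]
df["new"] = df["old"].cumsum() - df["old"].iloc[0]
print(df["new"].tolist())  # [0, 2, 5, 9, 14]
```

<p>This works because the recurrence is linear with coefficient 1; recurrences like <code>new[i] = a*new[i-1] + old[i]</code> need other tools (e.g. <code>numba</code> or an explicit loop).</p>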
|
<python><pandas>
|
2023-05-24 07:25:16
| 1
| 1,029
|
Sebastian Dine
|
76,320,904
| 426,132
|
Python can't parse command line arguments
|
<p>I'm trying to get the arguments by using getopt</p>
<pre><code>import sys
import getopt
import time
from datetime import timedelta
start_time = time.monotonic()
filename = ''
startIndex = 1
debug = False
outputFile = 'output.csv'
try:
opts, args = getopt.getopt(sys.argv[1:], "hc:o:", ["help","counter=", "output="])
except getopt.GetoptError as err:
# print help information and exit:
print(err) # will print something like "option -a not recognized"
sys.exit(2)
for opt, arg in opts:
if opt == '-h':
print ('help info')
sys.exit()
elif opt in ("-o", "--output"):
outputfile = arg
elif opt in ("-c", "--counter"):
startIndex = arg
filename = sys.argv[1]
print(outputFile)
print(startIndex)
print(debug)
print(opts)
print(args)
</code></pre>
<p>Output:</p>
<pre><code> python pSRA.py rs.txt -o out.csv -c 3
output.csv
1
False
[]
['rs.txt', '-o', 'out.csv', '-c', '3']
</code></pre>
<p>When printing <code>args</code> they are all there; however, <code>opts</code> is an empty list.
What am I doing wrong?</p>
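<p>A likely explanation, for reference: Python's <code>getopt.getopt</code> stops parsing options at the first positional argument (<code>rs.txt</code> here), leaving everything after it in <code>args</code>; <code>getopt.gnu_getopt</code> permutes arguments instead. A sketch of the same command line handled by <code>argparse</code>, which also accepts interleaved options:</p>

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("filename")
parser.add_argument("-o", "--output", default="output.csv")
parser.add_argument("-c", "--counter", type=int, default=1)

# simulating: python pSRA.py rs.txt -o out.csv -c 3
ns = parser.parse_args(["rs.txt", "-o", "out.csv", "-c", "3"])
print(ns.filename, ns.output, ns.counter)  # rs.txt out.csv 3
```

<p>The minimal change to the original code would be swapping <code>getopt.getopt</code> for <code>getopt.gnu_getopt</code> with the same arguments.</p>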
|
<python><getopt>
|
2023-05-24 07:05:55
| 1
| 1,441
|
user426132
|
76,320,737
| 10,669,819
|
How to copy Flask Python Artifacts files to Windows Server using DEVOPS Pipelines
|
<p>I have a pipeline for a Python Flask project which has two stages: the first is build and test, and the other is deployment.</p>
<p>In the deployment stage I want to copy the artifact files to my remote Windows server, which can only be accessed by IP and port. How can I do it?</p>
<p>I have tried few things</p>
<ol>
<li><p>Window Machine File Copy. But it says cannot parse URL</p>
<pre><code> inputs:
SourcePath: '$(Build.ArtifactStagingDirectory)/flask-files.zip'
MachineNames: 'IP:Port' # Replace with the name or IP address of your Windows Server
AdminUserName: 'username' # Replace with the admin username to access the Windows Server
AdminPassword: 'password' # Replace with the admin password to access the Windows Server
TargetPath: 'C:\inetpub\wwwroot\sampletransfer'
displayName: 'Copy Artifact to Windows Server'
</code></pre>
</li>
<li><p>PowerShell@2</p>
</li>
</ol>
<pre><code> displayName: 'Copy artifact to remote server'
inputs:
targetType: 'filePath'
filePath: 'copy-artifact.ps1'
arguments: '-ArtifactStagingDirectory "$(Build.ArtifactStagingDirectory)"'
</code></pre>
<p><strong>.ps1 file has this code</strong></p>
<pre><code> [Parameter(Mandatory=$true)]
[string]$ArtifactStagingDirectory
)
$sourcePath = Join-Path -Path $ArtifactStagingDirectory -ChildPath "flask-files.zip"
$targetPath = "\\myip:port\c$\inetpub\wwwroot\sampletransfer"
$adminUsername = "myuser"
$adminPassword = ConvertTo-SecureString "password" -AsPlainText -Force
$credentials = New-Object System.Management.Automation.PSCredential($adminUsername, $adminPassword)
Copy-Item -Path $sourcePath -Destination $targetPath -ToSession (New-PSSession -ComputerName myip -Port myport-Credential $credentials)
</code></pre>
<p><strong>But its giving me this error</strong></p>
<pre><code>The WinRM client cannot process the request. If the authentication scheme is different from Kerberos, or if the client
computer is not joined to a domain, then HTTPS transport must be used or the destination machine must be added to the
TrustedHosts configuration setting. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts
list might not be authenticated. You can get more information about that by running the following command: winrm help ```
</code></pre>
|
<python><powershell><azure-pipelines><azure-pipelines-yaml>
|
2023-05-24 06:42:55
| 3
| 580
|
Usman Rafiq
|
76,320,554
| 11,357,695
|
Import errors after updating spyder and python
|
<p>--
EDIT - <code>conda list</code> output (top few lines)</p>
<pre><code># packages in environment at C:\ANACONDA3:
#
# Name Version Build Channel
_ipyw_jlab_nb_ext_conf 0.1.0 py39haa95532_0
alabaster 0.7.12 pyhd3eb1b0_0
altair 4.1.0 pypi_0 pypi
</code></pre>
<p>--</p>
<p>I was originally using <code>spyder v4</code> and <code>python v3.7</code>, but needed to update python to 3.9 for some software. I removed spyder (<code>conda remove spyder</code>), updated python (<code>conda install python=3.9</code>) and reinstalled spyder (<code>conda install spyder=4.2</code>). Upon starting spyder I had a dependency issue, so also updated that (<code>conda update nbconvert</code>). Everything starts fine - I have the correct spyder and python versions.</p>
<p>When working previously, I had a lot of packages installed with pip (<code>pip install X</code>). These packages are located in my anaconda directory (<code>C:\ANACONDA3\Lib\site-packages</code>). There were no issues with this before the updates, but now I get import errors - for example, I previously did <code>pip install primer3</code> (<a href="https://libnano.github.io/primer3-py/quickstart.html" rel="nofollow noreferrer">link</a>), and have used it successfully, but now I get:</p>
<pre><code>import primer3
Traceback (most recent call last):
Cell In[3], line 1
import primer3
File C:\ANACONDA3\lib\site-packages\primer3\__init__.py:40
from .bindings import (calcHairpin, calcHomodimer, calcHeterodimer,
File C:\ANACONDA3\lib\site-packages\primer3\bindings.py:40
from . import thermoanalysis
ImportError: cannot import name 'thermoanalysis' from partially initialized module 'primer3' (most likely due to a circular import) (C:\ANACONDA3\lib\site-packages\primer3\__init__.py)
</code></pre>
<p>I have heard that pip and conda don't play that well together, but I only heard this after I had been using pip for a while - I am very much an amateur. However, seeing as they were installed into the anaconda directory, I assume that I was using the conda pip rather than a standalone local pip, and this should be fine?</p>
<p>Can anyone advise what the issue is/how to fix it? Whilst this has been an interesting learning experience, I have a lot of broken code now :(</p>
<p>Cheers!
Tim</p>
|
<python><anaconda><conda><spyder><importerror>
|
2023-05-24 06:15:50
| 1
| 756
|
Tim Kirkwood
|
76,320,436
| 9,652,160
|
How can I handle different tensor sizes in the forward() method in PyTorch?
|
<p>I'm training an <a href="https://en.wikipedia.org/wiki/Long_short-term_memory" rel="nofollow noreferrer">LSTM</a> model, and I use a window size of 20. However, I need the output tensor to have a shorter length.</p>
<p>This small code fragment illustrates what I'm trying to do.</p>
<pre><code>def forward(self, x):
    x, _ = self.lstm(x)  # x.size() is [:, :20, :]
    x = self.linear(x)
    x = x[:, :10, :]  # I trim the result to 10.
    return x
</code></pre>
<p>This code works as intended, but is there a way to have <code>self.lstm()</code> directly produce the tensor of size [:, :10, :], instead of computing the full output and trimming it?</p>
<p>I suppose that the current implementation may slow the learning process.</p>
|
<python><pytorch><artificial-intelligence>
|
2023-05-24 05:53:19
| 0
| 505
|
chm
|
76,320,424
| 13,994,829
|
Python: memory leak with memory_profiler
|
<p>I want to use <code>memory_profiler</code> package to analyze memory usage.</p>
<p>However, I have some points of confusion:</p>
<h2>Example 1</h2>
<pre class="lang-py prettyprint-override"><code># test1.py
from memory_profiler import profile

class Object:
    def __init__(self):
        pass

l = []

@profile
def func():
    for i in range(100000):
        l.append(Object())

func()
</code></pre>
<h2>Results 1</h2>
<h4>First test</h4>
<pre><code>mprof: Sampling memory every 0.1s
running new process
running as a Python program...
Filename: test1.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
10 43.9 MiB 43.9 MiB 1 @profile
11 def func():
12 50.1 MiB 0.0 MiB 100001 for i in range(100000):
13 50.1 MiB 6.2 MiB 100000 l.append(Object())
</code></pre>
<p><a href="https://i.sstatic.net/R0Aq3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R0Aq3.png" alt="enter image description here" /></a></p>
<h4>Second test</h4>
<pre><code>mprof: Sampling memory every 0.1s
running new process
running as a Python program...
Filename: test1.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
10 44.2 MiB 44.2 MiB 1 @profile
11 def func():
12 50.3 MiB -2867.4 MiB 100001 for i in range(100000):
13 50.3 MiB -2861.3 MiB 100000 l.append(Object())
</code></pre>
<p><a href="https://i.sstatic.net/Zpsri.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zpsri.png" alt="enter image description here" /></a></p>
<h4>Question 1</h4>
<p>About <code>test1.py</code>:</p>
<ol>
<li>The reported memory usage <strong>differs</strong> between runs.</li>
<li>Is there a memory leak in this example? Python has a <strong>garbage collection</strong> mechanism, so <code>l</code> should eventually be reclaimed and not cause a memory leak. However, the <code>mprof plot</code> result shows usage increasing.</li>
</ol>
<hr />
<h2>Example 2</h2>
<pre class="lang-py prettyprint-override"><code># test2.py
from memory_profiler import profile

class Object:
    def __init__(self):
        pass

@profile
def func():
    global l
    l = []
    for i in range(100000):
        l.append(Object())

func()
</code></pre>
<h2>Results 2</h2>
<pre><code>mprof: Sampling memory every 0.1s
running new process
running as a Python program...
Filename: test2.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
7 44.2 MiB 44.2 MiB 1 @profile
8 def func():
9 global l
10 44.2 MiB 0.0 MiB 1 l = []
11 50.3 MiB -2905.3 MiB 100001 for i in range(100000):
12 50.3 MiB -2899.2 MiB 100000 l.append(Object())
</code></pre>
<p><a href="https://i.sstatic.net/RASNp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RASNp.png" alt="enter image description here" /></a></p>
<h4>Question 2</h4>
<p>About <code>test2.py</code>:</p>
<ol>
<li>I think this example will cause a memory leak. After the function ends, <code>l</code> as a <strong>global variable</strong> still holds a reference to the list, and the objects in it are not released, resulting in a memory leak.</li>
<li>But the <code>Increment</code> result was <code>-2899.2 MiB</code>, which seems to suggest that no memory leak happened.</li>
</ol>
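<p>Independent of <code>memory_profiler</code>, the reference-holding behavior in question can be checked with the stdlib alone. A small sketch (the names are made up) using <code>weakref</code> to observe exactly when the appended objects become collectable:</p>

```python
import gc
import weakref

class Object:
    pass

holder = []  # module-level list, like the global in the examples above

def func(n):
    for _ in range(n):
        holder.append(Object())

func(5)
probe = weakref.ref(holder[0])
gc.collect()
alive_while_referenced = probe() is not None  # True: the list keeps them alive

holder.clear()  # drop the only references
gc.collect()
alive_after_clear = probe() is not None       # False: now collectable
print(alive_while_referenced, alive_after_clear)
```

<p>So the objects stay alive exactly as long as the global list references them; the negative increments are widely reported quirks of memory_profiler's line-by-line sampling rather than evidence either way.</p>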
|
<python><memory-leaks><memory-profiling>
|
2023-05-24 05:50:19
| 0
| 545
|
Xiang
|
76,320,365
| 8,801,862
|
Pass file (model weight) as an argument to Docker Image
|
<p>I have a problem with loading a file to a Docker image. I would like to pass the file path as an argument and then be able to access (read) it from inside the image.</p>
<h3>File <em>model.py</em></h3>
<pre><code>import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--image-path', type=str)
    parser.add_argument('--model-dir', type=str)
    args, _ = parser.parse_known_args()

    sample_image = args.image_path
    print("Sample:", sample_image)
    print("Model:", args.model_dir)
    print("Done.")
</code></pre>
<h3>File <em>Dockerfile</em></h3>
<pre class="lang-yaml prettyprint-override"><code>FROM python:3.10
WORKDIR /usr/src/app
RUN apt update && apt install -y python3-pip
COPY model.py model.pth.tar ./
ENTRYPOINT ["python3", "./model.py", "--model_dir"]
CMD ["./model.pth.tar"]
</code></pre>
<p>From my understanding, the combination of these ENTRYPOINT and CMD should have worked. But I don't have any luck.</p>
<p>I managed to pass the required image (picture) by mounting:</p>
<p><code>docker build -t my_model --progress=plain .</code></p>
<p><code>docker run -v /tmp:/tmp/myfilesdocker my_model --image-path=happy.jpg</code></p>
<p>However, I don't want to do that with my <code>model.pth.tar</code> (model weight). I want it to be passed by default (or any other way that works and doesn't require mounting).</p>
|
<python><docker><machine-learning><dockerfile>
|
2023-05-24 05:38:42
| 3
| 401
|
user13
|
76,320,274
| 7,185,934
|
Post install script for pyproject.toml projects
|
<p>I am building a Python package using setuptools in a pyproject.toml file. The package is being installed locally using <code>pip install .</code></p>
<p>Is there a way to specify running a post-install Python script in the pyproject.toml file?</p>
|
<python><setuptools><pyproject.toml>
|
2023-05-24 05:19:38
| 0
| 815
|
David Skarbrevik
|
76,320,197
| 2,966,197
|
streamlit app not loading background image
|
<p>I am building a streamlit page with multipage support and want to put a background image and another banner image in it. When I run the app, I see the banner image but the background image is not showing. Here is my code:</p>
<pre><code>import os
import streamlit as st
import numpy as np
from PIL import Image

# Custom imports
from multipage import MultiPage
from pages import page1  # import your pages here
import base64

@st.cache_data
def get_base64_of_bin_file(bin_file):
    with open(bin_file, 'rb') as f:
        data = f.read()
    return base64.b64encode(data).decode()

def set_png_as_page_bg(png_file):
    bin_str = get_base64_of_bin_file(png_file)
    page_bg_img = '''
    <style>
    body {
    background-image: url("data:image/png;base64,%s");
    background-size: cover;
    }
    </style>
    ''' % bin_str
    st.markdown(page_bg_img, unsafe_allow_html=True)
    return

# This image doesn't show
set_png_as_page_bg('./image/log.jpg')

# Create an instance of the app
app = MultiPage()

# This image loads
display = Image.open('./image/log2.jpeg')
st.image(display, width=400)

col1, col2 = st.columns(2)
col2.title("Content Summarizer and Q&A")

# Add page
app.add_page("Welcome", page1.app)

# The main app
app.run()
</code></pre>
<p>The image I want to load for background is in the same code directory as other image but it doesn't show up on page and neither it throws any error. I have tried to move the background image code to <code>page.py</code> as well and same behavior there as well.</p>
|
<python><streamlit>
|
2023-05-24 04:59:32
| 1
| 3,003
|
user2966197
|
76,319,631
| 610,569
|
How can I use/load the downloaded Hugging Face models from snapshot_download?
|
<p>I have downloaded the model from <a href="https://en.wikipedia.org/wiki/Hugging_Face" rel="nofollow noreferrer">Hugging Face</a> using <code>snapshot_download</code>, e.g.,</p>
<pre><code>from huggingface_hub import snapshot_download
snapshot_download(repo_id="facebook/nllb-200-distilled-600M", cache_dir="./")
</code></pre>
<p>And when I list the directory, I see:</p>
<pre class="lang-none prettyprint-override"><code>ls ./models--facebook--nllb-200-distilled-600M/snapshots/bf317ec0a4a31fc9fa3da2ce08e86d3b6e4b18f1/
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>config.json@ README.md@ tokenizer_config.json@
generation_config.json@ sentencepiece.bpe.model@ tokenizer.json@
pytorch_model.bin@ special_tokens_map.json@
</code></pre>
<p>I can load the model locally, but I'll have to guess the snapshot hash, e.g.,</p>
<pre><code>from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained(
"./models--facebook--nllb-200-distilled-600M/snapshots/bf317ec0a4a31fc9fa3da2ce08e86d3b6e4b18f1/",
local_files_only=True
)
</code></pre>
<p>That works, but how do I load the Hugging Face model without guessing the hash?</p>
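<p>One way to avoid guessing the hash: the cache layout is predictable (<code>models--{org}--{name}/snapshots/{revision}/</code>), and newer <code>huggingface_hub</code> versions also simply return the snapshot path from <code>snapshot_download</code> itself. A small stdlib sketch (the helper name is made up) that resolves the snapshot directory, demonstrated against a fake cache tree:</p>

```python
import os
import tempfile

def latest_snapshot_dir(cache_dir, repo_id):
    """Resolve a hub snapshot folder without hard-coding the commit hash.

    Assumes the cache layout 'models--{org}--{name}/snapshots/{revision}/'.
    If several snapshots exist, the most recently modified one is returned.
    """
    snapshots = os.path.join(
        cache_dir, "models--" + repo_id.replace("/", "--"), "snapshots"
    )
    candidates = [os.path.join(snapshots, d) for d in os.listdir(snapshots)]
    return max(candidates, key=os.path.getmtime)

# Demo against a fake cache tree:
tmp = tempfile.mkdtemp()
fake = os.path.join(tmp, "models--facebook--nllb-200-distilled-600M",
                    "snapshots", "bf317ec0")
os.makedirs(fake)
resolved = latest_snapshot_dir(tmp, "facebook/nllb-200-distilled-600M")
print(resolved.endswith("bf317ec0"))  # True
```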
|
<python><machine-learning><huggingface-transformers><large-language-model><huggingface-hub>
|
2023-05-24 01:58:54
| 1
| 123,325
|
alvas
|
76,319,288
| 6,467,512
|
Can you use a custom trained model for feature extraction?
|
<p>I am trying to develop an algorithm using features extracted from images to find similar items.</p>
<p>I have this code I am working with:</p>
<pre><code>def get_pil_image_from_path(path):
    try:
        image = Image.open(path)
    except Exception as e:
        print(e)
    return image

def get_color_image(img):
    img = img.resize((224, 224))
    img = img.convert('RGB')
    return img

from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model

class VGFeatureExtractor:
    def __init__(self):
        # Use VGG-16 as the architecture and ImageNet for the weight
        base_model = VGG16(weights='imagenet')
        # Customize the model to return features from fully-connected layer
        self.model = Model(inputs=base_model.input, outputs=base_model.get_layer('fc1').output)

    def extract(self, img):
        img = get_color_image(img)
        # Reformat the image
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        x = preprocess_input(x)
        # Extract features
        feature = self.model.predict(x)[0]
        # Change shape to (1, 4096) as the model expects
        feature = feature.reshape(1, feature.shape[0])
        return feature / np.linalg.norm(feature)

image = get_pil_image_from_path(imagePath)
ve = VGFeatureExtractor()
features_vector = ve.extract(image)
</code></pre>
<p>Could I use my own <a href="https://en.wikipedia.org/wiki/Residual_neural_network" rel="nofollow noreferrer">ResNet</a> classification model I trained instead of the pretrained <a href="https://en.wikipedia.org/wiki/ImageNet" rel="nofollow noreferrer">ImageNet</a> model, and would that show improvement?</p>
<p>Currently, the similar items suggested are not the same color. What can I do to improve this code?</p>
<p>I am using <a href="https://github.com/facebookresearch/faiss/wiki" rel="nofollow noreferrer">Faiss</a> to search the features and currently it is not doing well.</p>
|
<python><tensorflow><machine-learning><keras><resnet>
|
2023-05-23 23:52:07
| 1
| 323
|
AynonT
|
76,319,286
| 4,117,496
|
Django template: loop through dict and render value below key
|
<p>I'm using a Django template to generate an HTML page in which I loop through a dict whose key is a URL to a picture and whose value is the caption for that image. What I'd like to render is, in each row, the caption directly below its image, with all entries of a dict (always four) in the same row. Here's my code so far:</p>
<pre><code><div class="grid grid-cols-2 gap-2 p-2">
{% for one_metric in url_list %}
<ul>
<tr>
{% for url, caption in one_metric.items %}
<td><img width="350" height="300" src="{{ url }}"></td>
<td>{{ caption }}</td>
{% endfor %}
</tr>
</ul>
{% endfor %}
</div>
</code></pre>
<p>What ended up rendering is the caption listed on the <em><strong>right</strong></em> of the picture instead of being <em><strong>below</strong></em> it, I've searched through the Internet but didn't figure it out, any advice would be greatly appreciated!</p>
|
<python><django><templates><django-views><django-templates>
|
2023-05-23 23:51:17
| 2
| 3,648
|
Fisher Coder
|
76,319,082
| 1,558,035
|
Why can't I find the redis module for langchain vectorstores?
|
<p>I am trying to just simply store some embeddings that I have already generated, using a Redis container. I get this error on import:</p>
<pre><code>from langchain.vectorstores.redis import Redis
</code></pre>
<hr />
<pre><code>ModuleNotFoundError: No module named 'langchain.vectorstores.redis'
</code></pre>
<p>This is from attempting to follow the tutorial listed here: <a href="https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html" rel="nofollow noreferrer">https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html</a></p>
<p>Also I am on python 3.8.0 if this matters.</p>
<p>Any insight would be appreciated, thanks :)</p>
|
<python><redis><langchain>
|
2023-05-23 22:46:37
| 1
| 1,276
|
Ostap Hnatyuk
|
76,319,055
| 758,811
|
Can SymPy combine an expression with `Max` and another operator like `|`?
|
<p>I'm trying to simplify a complex equation but can't seem to combine <code>Max</code> with bitwise operators. Here is a minimal example.</p>
<pre class="lang-py prettyprint-override"><code>from sympy import simplify, symbols
a,b,c = symbols("a,b,c")
simplify("Max(a, b) | c")
</code></pre>
<p>Gives the following error</p>
<pre><code>TypeError: unsupported operand type(s) for |: 'Max' and 'Symbol'
</code></pre>
<p>Is there any way to make this work? Or is it outside reasonable sympy constructs?</p>
|
<python><sympy>
|
2023-05-23 22:40:25
| 0
| 629
|
BrT
|
76,319,019
| 12,297,767
|
Cannot Create Path Hierarchy Tokenizer in Azure Cognitive Search
|
<p>I am creating an index for Azure Cognitive Search using the Python SDK. I am trying to create a custom analyzer using a custom tokenizer. My definitions are as follows</p>
<pre><code>tokenizer = [
{
"name": "taxonomy_delimiter",
"@odata.type": "#Microsoft.Azure.Search.path_hierarchy_v2",
"delimiter": " ● ",
"maxTokenLength": 10000,
},
]
analyzer = [
{
"name": "taxonomy",
"@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
"tokenizer": "taxonomy_delimiter",
"tokenFilters": [
"asciifolding",
"lowercase",
"trim"
]
}
]
</code></pre>
<p>When I try to run <code>SearchIndexClient(endpoint, AzureKeyCredential(key_admin1), api_version='2021-04-30-Preview').create_index(index)</code> I get the following warning and error</p>
<pre><code>Subtype value #Microsoft.Azure.Search.path_hierarchy_v2 has no mapping, use base class LexicalTokenizer.
() The request is invalid. Details: Cannot dynamically create an instance of type 'Microsoft.Azure.Search.Tokenizer'. Reason: Cannot create an abstract class.
</code></pre>
<p>I'm using the tokenizer as described <a href="https://learn.microsoft.com/en-us/azure/search/index-add-custom-analyzers#built-in-analyzers" rel="nofollow noreferrer">here</a>. What am I doing wrong?</p>
|
<python><sdk><tokenize><azure-cognitive-search>
|
2023-05-23 22:30:24
| 1
| 564
|
Fruity Medley
|
76,318,819
| 344,669
|
Python generate the ClientRequestException object for unit testing
|
<p>For my <code>Python</code> application unit test, I need to generate a <code>ClientRequestException</code> object to test the failure condition. I am using the code below to generate the exception object, but I am getting the error shown.</p>
<p><strong>Code:</strong></p>
<pre><code> from office365.runtime.client_request_exception import ClientRequestException
from requests import Response
e = Response()
e.args = []
e.response="hello"
e.status_code = 404
e.headers['Content-Type'] = 'application/json'
err = ClientRequestException(*e.args, response=e.response)
print(err)
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>self = ClientRequestException(), args = (), kwargs = {'response': 'hello'}
def __init__(self, *args, **kwargs):
super(ClientRequestException, self).__init__(*args, **kwargs)
> content_type = self.response.headers.get('Content-Type', '').lower().split(';')[0]
E AttributeError: 'str' object has no attribute 'headers'
..\..\venv\lib\site-packages\office365\runtime\client_request_exception.py:7: AttributeError
</code></pre>
<p>May I know the correct way to generate <code>ClientRequestException</code> object?</p>
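<p>As background on what the traceback is saying: the exception constructor reads <code>self.response.headers.get('Content-Type', ...)</code>, so whatever is passed as <code>response=</code> must expose a <code>.headers</code> mapping - a plain string will not do. A common stdlib pattern (a sketch, not office365-specific; the attribute access comes from the traceback above) is to hand it a mock carrying a real headers dict:</p>

```python
from unittest.mock import MagicMock

# Build a Response stand-in with the attributes the exception reads.
fake_response = MagicMock()
fake_response.status_code = 404
fake_response.headers = {"Content-Type": "application/json"}
fake_response.text = '{"error": {"message": "not found"}}'

# err = ClientRequestException(response=fake_response)  # in the real test
# The constructor's lookup now works because headers is a real mapping:
content_type = fake_response.headers.get("Content-Type", "").lower().split(";")[0]
print(content_type)  # application/json
```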
<p>Thanks</p>
|
<python><python-3.x><pytest>
|
2023-05-23 21:40:38
| 1
| 19,251
|
sfgroups
|
76,318,661
| 3,826,115
|
Holoviews interactive plot of gridded data with slider on top
|
<p>This code creates an interactive plot with Holoviews:</p>
<pre><code>import numpy as np
import xarray as xr
import holoviews as hv
hv.extension('bokeh')
# Create sample data
x = np.linspace(-10, 10, 10)
y = np.linspace(-10, 10, 10)
time = np.arange(0, 10, 1)
data = np.random.rand(len(time), len(x), len(y))
# Create xarray Dataset
ds = xr.DataArray(data, coords=[time, x, y], dims=['time', 'x', 'y'], name='data').to_dataset()
# Create Holoviews Dataset
hv_ds = hv.Dataset(ds, kdims=['time', 'y', 'x'], vdims=['data'])
# plot with slider
hv_ds.to(hv.Image, ['x', 'y'])
</code></pre>
<p>It creates a gridded plot of the data with a slider that moves through the time dimension. By default, the slider is placed on the right.</p>
<p>How can I move the slider so it is on top of the plot?</p>
|
<python><bokeh><holoviews>
|
2023-05-23 21:09:36
| 1
| 1,533
|
hm8
|
76,318,650
| 3,175,046
|
Adding a primitive type repeated field in a protobuf from python
|
<p>I have a proto2 file defined like the following:</p>
<pre><code>message X {
  repeated A a = 1;
  repeated B b = 2;
}

message BtoOtherBEntry {
  required uint64 parent_b = 1;   // key
  repeated uint64 children_b = 2; // value
}
</code></pre>
<p>Then I have Python code that maps a uint64 value to a list of uint64 values; let's say this map is called BtoBMap.</p>
<p>I am trying to populate X.b</p>
<pre><code>x = X()
for key, values in self.BtoBMap.items():
    if len(values) > 0:
        b_to_b_entry = x.b.add()
        b_to_b_entry.parent_b = key
        b_to_b_entry.children_b.extend(values)
</code></pre>
<p>I have tried looping and using <code>add</code>, and also tried <code>append</code>, which I don't think is defined in proto2. <code>extend</code> doesn't work. How else can I assign these values?</p>
|
<python><list><dictionary><protocol-buffers>
|
2023-05-23 21:08:00
| 1
| 1,015
|
Hadi
|
76,318,624
| 448,192
|
Type hint a field that can be None with a default value
|
<p>When using Python 3.10 in PyCharm and type hinting my methods I am noticing something that confuses me a bit and I am looking for the correct approach and also understanding what causes the following.</p>
<p>If I have this method:</p>
<pre><code>def my_method(value: str = None) -> None:
...
</code></pre>
<p>and I try to call it from a different place, placing the cursor inside the brackets in the method and using the CMD + P (MacOS - Main Menu | View | Code Editor View Actions | Code View Actions | Parameter Info shortcut) I am seeing that the parameter typing is automatically recognised as:</p>
<pre><code>value: str | None = None
</code></pre>
<p>From everything I've read online the correct way to achieve type hinting a parameter which is None or a string with None by default would be one of these:</p>
<pre><code>value: str | None = None # modern 3.10+ syntax
value: typing.Union[str, None] = None
value: typing.Optional[str] = None # alias for the previous one
</code></pre>
<p>Is this something that is PyCharm-specific in my case? How does it work? As far as I understand, they have their own engine that handles autocomplete, hinting, etc. Would it be incorrectly interpreted as a plain <code>str</code> in other editors? I also tested VS Code, where it gives me the method definition as <code>value: str = None</code>,
but I also don't get a warning if I define the method as
<code>value: str</code> and then call it with <code>method(None)</code>, which in PyCharm does show up as a warning.</p>
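<p>One thing that can be checked at runtime: the annotation itself stays exactly as written - the <code>str | None</code> display is the type checker applying the implicit-Optional convention (historically sanctioned by PEP 484 for <code>= None</code> defaults, and since deprecated), not a transformation of the function. A quick sketch:</p>

```python
def my_method(value: str = None) -> None:
    ...

# The stored annotation is the plain str class; editors like PyCharm
# layer the "str | None" interpretation on top of it.
annotation = my_method.__annotations__["value"]
print(annotation is str)  # True
```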
|
<python><pycharm><python-typing>
|
2023-05-23 21:02:30
| 1
| 16,160
|
DArkO
|
76,318,571
| 16,484,106
|
How do I prevent losing a row when extracting a table from a PDF that spans multiple pages?
|
<p>I have a PDF table with a total of 33 rows, however this number can change. The table expands onto a second page which means it looks like two separate tables.</p>
<p>My goal is to take all items in columns 0, 2, and 3 and add them to three separate lists. I have been able to get this working, but I noticed one row is missing from table 2, which is the very first row on the second page.</p>
<p>My current Python script looks like:</p>
<pre><code>import tabula
file_path = "address.pdf"
tables = tabula.read_pdf(file_path, pages="all", multiple_tables=True)
full_range_index = 0
full_range = []
starting_range_index = 2
starting_range = []
ending_range_index = 3
ending_range = []
table_one_row_count = 27
table_two_row_count = 6
# for i in range(table_one_row_count):
#     extracted_row = tables[0].iloc[i].values.tolist()
#     full_range.append(extracted_row[full_range_index])
#     starting_range.append(extracted_row[starting_range_index])
#     ending_range.append(extracted_row[ending_range_index])

for i in range(table_two_row_count):
    extracted_row = tables[1].iloc[i].values.tolist()
    full_range.append(extracted_row[full_range_index])
    starting_range.append(extracted_row[starting_range_index])
    ending_range.append(extracted_row[ending_range_index])
print(full_range)
</code></pre>
<p>An example of what <code>full_range</code> should look like is <code>['one', 'two', 'three', 'four', 'five', 'six']</code> however it looks like <code>[nan, 'two', 'three', 'four', 'five', 'six']</code>.</p>
<p>Is there something I can do to not lose the first row on the second page/table?</p>
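<p>A likely cause (worth verifying against the PDF) is that tabula infers a header row for every page, so page 2's first data row gets consumed as column names. If that is what's happening, one fix is to push the header back into the data; a pandas sketch with made-up values:</p>

```python
import pandas as pd

def demote_header(df):
    """Turn a DataFrame's (mis-inferred) header back into its first data row."""
    header_as_row = pd.DataFrame([df.columns.tolist()], columns=range(df.shape[1]))
    body = df.copy()
    body.columns = range(df.shape[1])
    return pd.concat([header_as_row, body], ignore_index=True)

# Pretend tabula consumed the row ('one', 1) as the header of the page-2 table:
page2 = pd.DataFrame([["two", 2], ["three", 3]], columns=["one", 1])
fixed = demote_header(page2)
print(fixed[0].tolist())  # ['one', 'two', 'three']
```

<p>If I recall the tabula-py API correctly, passing <code>pandas_options={"header": None}</code> to <code>read_pdf</code> should also stop any row from being promoted to a header in the first place.</p>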
|
<python><python-3.x><pdf><tabular>
|
2023-05-23 20:52:05
| 1
| 384
|
agw2021
|
76,318,541
| 2,930,456
|
C program crashes on Windows when run from a Python program
|
<p>I'm trying to run a C program from a Python program using <code>subprocess.run()</code> on Windows 11. However, when the C program is run using <code>subprocess.run()</code>, it does not run and returns <code>3221225781</code>. But when I run the C program directly, not from Python, it runs fine and prints "Hello World!". Looking online, <code>3221225781</code> means <code>STATUS_DLL_NOT_FOUND</code>. I've tried compiling the C program statically so it doesn't require any DLLs, using <code>gcc -static -static-libgcc -o test.exe test.c</code>, but I still get the same <code>STATUS_DLL_NOT_FOUND</code> error. The C program is compiled with the gcc installed from MinGW.</p>
<p>test.c</p>
<pre><code>#include <stdio.h>

int main() {
    printf("Hello World!");
    return 0;
}
</code></pre>
<p>test.py</p>
<pre><code>import subprocess

if __name__ == '__main__':
    prog = "./test.exe"
    result = subprocess.run(prog)
    print(result)
</code></pre>
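<p>A side note on decoding that exit status: on Windows, <code>subprocess</code> surfaces the process's NTSTATUS value as a large unsigned integer, and 3221225781 is <code>0xC0000135</code> (<code>STATUS_DLL_NOT_FOUND</code>). A tiny helper makes such codes searchable:</p>

```python
STATUS_DLL_NOT_FOUND = 0xC0000135

def decode_ntstatus(returncode):
    """Render a process return code as an unsigned 32-bit hex NTSTATUS."""
    return hex(returncode & 0xFFFFFFFF)

code = decode_ntstatus(3221225781)
print(code)                                   # 0xc0000135
print(int(code, 16) == STATUS_DLL_NOT_FOUND)  # True
```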
|
<python><c><python-3.x><windows><mingw>
|
2023-05-23 20:47:04
| 0
| 1,401
|
2trill2spill
|
76,318,261
| 1,264,933
|
LangChain - create_sql_agent prompt, thought, and observation output
|
<p>When creating a <code>create_sql_agent()</code> how do you get the prompt, thought, and observation?</p>
<p>I know how to get the final answer which is just the response of the <code>agent_executor.run</code> but I would like to get the various observations and graph the results.</p>
<p>Code example shows just the "final answer"</p>
<pre><code>dbsql = SQLDatabase.from_uri(database)
llm = OpenAI(temperature=0, verbose=True)
toolkit = SQLDatabaseToolkit(llm=llm,db=dbsql)
agent_executor = create_sql_agent(
llm=OpenAI(temperature=0),
toolkit=toolkit,
verbose=True
)
output = agent_executor.run("MY QUESTION")
print(f"Agent Executor output: {output}")
</code></pre>
|
<python><langchain><large-language-model>
|
2023-05-23 20:00:23
| 1
| 875
|
peterlandis
|
76,318,098
| 20,726,966
|
Could not build wheels for pycrypto, which is required to install pyproject.toml-based projects - ERROR
|
<p>I'm facing an error while deploying to Heroku: ERROR: Could not build wheels for pycrypto, which is required to install pyproject.toml-based projects. However, my project does not list pycrypto anywhere. What is causing this issue? My requirements.txt looks like:</p>
<ul>
<li>python==3.10.9</li>
<li>firebase_admin</li>
<li>pyrebase</li>
<li>pyrebase4</li>
<li>dash</li>
<li>dash_auth</li>
<li>dash_bootstrap_components</li>
<li>dash_daq</li>
<li>pandas</li>
<li>plotly</li>
</ul>
<pre><code>Heroku CLI output
Building wheel for pycrypto (setup.py): started
remote: Building wheel for pycrypto (setup.py): finished with status 'error'
remote: error: subprocess-exited-with-error
remote:
remote: × python setup.py bdist_wheel did not run successfully.
remote: │ exit code: 1
remote: ╰─> [71 lines of output]
remote: checking for gcc... gcc
remote: checking whether the C compiler works... yes
remote: checking for C compiler default output file name... a.out
remote: checking for suffix of executables...
remote: checking whether we are cross compiling... no
remote: checking for suffix of object files... o
remote: checking whether we are using the GNU C compiler... yes
remote: checking whether gcc accepts -g... yes
remote: checking for gcc option to accept ISO C89... none needed
remote: checking for __gmpz_init in -lgmp... yes
remote: checking for __gmpz_init in -lmpir... no
remote: checking whether mpz_powm is declared... yes
remote: checking whether mpz_powm_sec is declared... yes
remote: checking how to run the C preprocessor... gcc -E
remote: checking for grep that handles long lines and -e... /usr/bin/grep
remote: checking for egrep... /usr/bin/grep -E
remote: checking for ANSI C header files... yes
remote: checking for sys/types.h... yes
remote: checking for sys/stat.h... yes
remote: checking for stdlib.h... yes
remote: checking for string.h... yes
remote: checking for memory.h... yes
remote: checking for strings.h... yes
remote: checking for inttypes.h... yes
remote: checking for stdint.h... yes
remote: checking for unistd.h... yes
remote: checking for inttypes.h... (cached) yes
remote: checking limits.h usability... yes
remote: checking limits.h presence... yes
remote: checking for limits.h... yes
remote: checking stddef.h usability... yes
remote: checking stddef.h presence... yes
remote: checking for stddef.h... yes
remote: checking for stdint.h... (cached) yes
remote: checking for stdlib.h... (cached) yes
remote: checking for string.h... (cached) yes
remote: checking wchar.h usability... yes
remote: checking wchar.h presence... yes
remote: checking for wchar.h... yes
remote: checking for inline... inline
remote: checking for int16_t... yes
remote: checking for int32_t... yes
remote: checking for int64_t... yes
remote: checking for int8_t... yes
remote: checking for size_t... yes
remote: checking for uint16_t... yes
remote: checking for uint32_t... yes
remote: checking for uint64_t... yes
remote: checking for uint8_t... yes
remote: checking for stdlib.h... (cached) yes
remote: checking for GNU libc compatible malloc... yes
remote: checking for memmove... yes
remote: checking for memset... yes
remote: configure: creating ./config.status
remote: config.status: creating src/config.h
remote: In file included from /app/.heroku/python/include/python3.11/Python.h:86,
remote: from src/_fastmath.c:31:
remote: /app/.heroku/python/include/python3.11/cpython/pytime.h:208:60: warning: ‘struct timespec’ declared inside parameter list will not be visible outside of this definition or declaration
remote: 208 | PyAPI_FUNC(int) _PyTime_FromTimespec(_PyTime_t *tp, struct timespec *ts);
remote: | ^~~~~~~~
remote: /app/.heroku/python/include/python3.11/cpython/pytime.h:213:56: warning: ‘struct timespec’ declared inside parameter list will not be visible outside of this definition or declaration
remote: 213 | PyAPI_FUNC(int) _PyTime_AsTimespec(_PyTime_t t, struct timespec *ts);
remote: | ^~~~~~~~
remote: /app/.heroku/python/include/python3.11/cpython/pytime.h:217:63: warning: ‘struct timespec’ declared inside parameter list will not be visible outside of this definition or declaration
remote: 217 | PyAPI_FUNC(void) _PyTime_AsTimespec_clamp(_PyTime_t t, struct timespec *ts);
remote: | ^~~~~~~~
remote: src/_fastmath.c:33:10: fatal error: longintrepr.h: No such file or directory
remote: 33 | #include <longintrepr.h> /* for conversions */
remote: | ^~~~~~~~~~~~~~~
remote: compilation terminated.
remote: error: command '/usr/bin/gcc' failed with exit code 1
remote: [end of output]
remote:
remote: note: This error originates from a subprocess, and is likely not a problem with pip.
remote: ERROR: Failed building wheel for pycrypto
remote: Running setup.py clean for pycrypto
remote: Building wheel for sseclient (setup.py): started
remote: Building wheel for sseclient (setup.py): finished with status 'done'
remote: Created wheel for sseclient: filename=sseclient-0.0.27-py3-none-any.whl size=5565 sha256=30988661931e8740f4a7ee87948f5f69e2803cb6d31179cbe4cb6e9bbea1241e
remote: Stored in directory: /tmp/pip-ephem-wheel-cache-ivcb6vdl/wheels/7c/54/eb/a223b1599728ecaf0528281c17c96c503aa7d18a752a4e4e3a
remote: Building wheel for jwcrypto (setup.py): started
remote: Building wheel for jwcrypto (setup.py): finished with status 'done'
remote: Created wheel for jwcrypto: filename=jwcrypto-1.4.2-py3-none-any.whl size=90472 sha256=b9d97dca4df5d53e6f69d6e0c7c0406fdb519a521a08f8e15d02bcfff20c8cb3
remote: Stored in directory: /tmp/pip-ephem-wheel-cache-ivcb6vdl/wheels/42/b6/e3/23d953d3b1a939d81aa460121597ac050eaf99d04578eb4340
remote: Successfully built dash_daq gcloud sseclient jwcrypto
remote: Failed to build pycrypto
remote: ERROR: Could not build wheels for pycrypto, which is required to install pyproject.toml-based projects
remote: ! Push rejected, failed to compile Python app.
remote:
remote: ! Push failed
remote:
remote: Verifying deploy...
remote:
remote: ! Push rejected to acuradyne.
remote:
To ##app link
! [remote rejected] master -> master (pre-receive hook declined)
</code></pre>
<p>Any possible solutions or reasoning behind this problem will be helpful</p>
|
<python><heroku><plotly-dash>
|
2023-05-23 19:36:32
| 3
| 318
|
Homit Dalia
|
76,318,047
| 1,452,762
|
Complete sequence between a range in python
|
<p>I have a fixed range (e.g., 0 to 14). I have a list of lists, in ascending order, that already contains at least one run as a sublist (e.g., [[6, 7], [10, 11, 12]]). I need a function that completes the sequence from 0 to 14 while keeping the already included runs intact. For the running example, the output list of lists should be [[0], [1], [2], [3], [4], [5], [6, 7], [8], [9], [10, 11, 12], [13], [14]]. What is the most Pythonic way to do this?</p>
<pre><code>def complete_seq(range, listoflist):
return new_listoflist
completed_list = complete_seq((0, 14), [[6, 7], [10, 11, 12]])
</code></pre>
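One possible sketch (the implementation is my own; it maps each run's first element to the whole run, then walks the range):

```python
def complete_seq(rng, listoflist):
    """Fill rng with singleton lists, keeping the existing runs intact."""
    starts = {sub[0]: sub for sub in listoflist}  # first element -> whole run
    result, n = [], rng[0]
    while n <= rng[1]:
        if n in starts:
            result.append(starts[n])   # keep the run as-is
            n = starts[n][-1] + 1      # skip past the end of the run
        else:
            result.append([n])
            n += 1
    return result

completed_list = complete_seq((0, 14), [[6, 7], [10, 11, 12]])
```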
|
<python><sequence>
|
2023-05-23 19:28:32
| 2
| 315
|
Les_Salantes
|
76,317,946
| 2,717,424
|
Calculate Exponential Moving Average using Pandas DataFrame
|
<p>I want to calculate the exponential moving average (<strong>EMA</strong>) for a set of price data using Pandas.
I use the formula from <a href="https://plainenglish.io/blog/how-to-calculate-the-ema-of-a-stock-with-python" rel="nofollow noreferrer">this article</a> as well as the test data from its example calculation to validate my results:</p>
<p><a href="https://i.sstatic.net/Fsy8Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fsy8Z.png" alt="enter image description here" /></a></p>
<p>I found some previous posts that suggest using <code>ewm</code> and <code>mean</code> for this. Following the example data from the article mentioned above, the attempt would be something like this for <strong>EMA(5)</strong>:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(data=[10, 11, 11.5, 10.75, 12, 11.75, 12.25, 14, 16, 17, 15.6],columns=["price"])
df["ema_5"] = df.price.ewm(span=5, min_periods=5, adjust=False).mean()
</code></pre>
<p>Unfortunately, the results do not match the expected values for index 4 and onwards.</p>
<p>Therefore, I tried a more "manual" approach that follows the described formula, where I first calculate the <strong>SMA</strong> (simple moving average) for the first ever <strong>EMA</strong> (index 4) and then use the formula for every succeeding item.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(data=[10, 11, 11.5, 10.75, 12, 11.75, 12.25, 14, 16, 17, 15.6],columns=["price"])
df.loc[4, ["ema_5"]] = df.loc[:4, "price"].mean()
df.loc[5:, ["ema_5"]] = (df["price"] * (2/6)) + (df["ema_5"].shift(1) * (1 - (2/6)))
</code></pre>
<p>This attempt gives me the expected <strong>EMA(5)</strong> values for indices 4 and 5, but it does not continue calculating from index 6 onwards. How can I apply this formula to every item beyond index 5?</p>
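One way to carry the recursion past index 5 is a plain loop (a sketch in pure Python; the resulting list can be assigned back with `df["ema_5"] = ema`):

```python
# Seed index 4 with the SMA of the first five prices, then apply the
# recursive EMA formula to every later index.
prices = [10, 11, 11.5, 10.75, 12, 11.75, 12.25, 14, 16, 17, 15.6]
span = 5
alpha = 2 / (span + 1)  # 2/6, as in the question

ema = [None] * len(prices)
ema[span - 1] = sum(prices[:span]) / span  # SMA seeds the first EMA (11.05)
for i in range(span, len(prices)):
    ema[i] = prices[i] * alpha + ema[i - 1] * (1 - alpha)

# df["ema_5"] = ema  # assign back to the DataFrame if desired
```

The vectorized `df["ema_5"].shift(1)` version cannot work here because each EMA value depends on the EMA just computed, not on a pre-existing column, so the dependency has to be unrolled step by step.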
|
<python><pandas><moving-average><technical-indicator>
|
2023-05-23 19:11:57
| 1
| 1,029
|
Sebastian Dine
|
76,317,880
| 12,300,981
|
Why are my errors so high for my minimized solution?
|
<p>I have a couple of large datasets that I am attempting to generate a theoretical model of. I am able to find a global minima/solution for this fitting, but when I attempt to calculate errors, they come out to be large even though the landscape is well defined.</p>
<p>I have the adjustable parameter of the solution I'm trying to get, and 3 adjustable parameters to modify my "model" so it can match my experimental (these are just scaling factors, a single constant applied to the entire model).</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
def fun(k,io,exp_data,exp_error,model1,model2):
F=(np.sqrt((8*k[0]*io)+k[0]**2)-k[0])/4
C=(-np.sqrt(k[0])*np.sqrt((8*io)+k[0])+k[0]+(4*io))/4
fraction_F=np.tile(((np.array([F/io],dtype=float)).reshape((len(model1),1))),(len(model1[0]))) # this is a single value, but I need it to be applied to all datasets since the datasets are list of lists
fraction_C=np.tile(((np.array([C/io],dtype=float)).reshape((len(model1),2))),(len(model2[0])))
comb_model=(F*model1)+(C/2*model2)
resize_scaling_factor=np.tile(((np.array(k[1:])).reshape((len(comb_model),1))),(len(comb_model[0]))) #same as before a constant to be applied to the datasets
return ((np.sum(((exp_data-(resize_scaling_factor*comb_model))/exp_error)**2))/comb_model.size) #normalized chi2
minimize(fun,args=(io,exp_data,exp_error,model1,model2),bounds=((0,np.inf),)*4,method='Nelder-Mead',x0=[1000]+[1e-9]*3)
</code></pre>
<p>The solution for k, which is what I care about, is around 500-1000, and the scaling factors are all very small (1e-8 to 1e-11). With this setup, I can find the solution quite nicely. If I look at the chi2 landscape
<a href="https://i.sstatic.net/weIDQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/weIDQ.png" alt="enter image description here" /></a></p>
<p>We can see a nice minima can be found around ~400ish. I would assume then the errors for this solution should be small as well. However when I calculate the errors:</p>
<pre><code>from numpy.linalg import inv
from statsmodels.tools import approx_hess2
solution=minimize(fun,args=(io,exp_data,exp_error,model1,model2),bounds=((0,np.inf),)*4,method='Nelder-Mead',x0=[1000]+[1e-9]*3)
error=np.sqrt(np.diag(inv(approx_hess2(solution.x,fun,args=(io,exp_data,exp_error,model1,model2)))))
</code></pre>
<p>The errors are larger than the solution</p>
<pre><code>428+/-593 #first value is solution from minimization, 2nd value is the inverse Hessian diagonal
#albeit errors for scaling factors are better
7.91e-10+/-4.5e-11
1.72e-9+/-5.34e-11
3.52e-9+/-6.3e-11
#we can see all scaling factors have small errors relative to solution, so why is this not true of the other solution considering the well defined landscape
</code></pre>
<p>From my understanding, everything is set up correctly; my chi2 for my data is good (it's normalized to error, so it should be close to 1, which it is). The scaling factors have small errors. So I don't know why my other solution has such high errors (based on the graph it should not be +/-500; maybe more like +/-50).</p>
<p>Any help would greatly be appreciated!</p>
|
<python><numpy><scipy><statsmodels><scipy-optimize>
|
2023-05-23 19:01:07
| 0
| 623
|
samman
|
76,317,508
| 9,820,561
|
Method Resolution Order (MRO) in Python: super multiple inheritance
|
<p>I got the following example code. I understand that the code is failing since <code>super(A, self).__init__(attr_base1, attr_base2)</code> is calling the <code>__init__</code> of <code>B(Base)</code>, but I don't really understand why. Since I put <code>A</code> in <code>super(A, self)</code>, I thought that it would search for the parent class starting from <code>A</code>. What should I do if I want <code>A.__init__(self, attr1, "b1", "b2")</code> to reach the <code>Base</code> <code>__init__</code> via super?</p>
<pre><code>class Base(object):
def __init__(self, attr_base1, attr_base2):
self.attr_base1 = attr_base1
self.attr_base2 = attr_base2
def foo(self):
print ('A.foo()')
class A(Base):
def __init__(self, attr, attr_base1, attr_base2):
super(A, self).__init__(attr_base1, attr_base2)
self.attr1 = attr
class B(Base):
def __init__(self, attr):
self.attr2 = attr
def foo(self):
print ('B.foo()')
class C(A, B):
def __init__(self, attr1, attr2):
A.__init__(self, attr1, "b1", "b2")
        B.__init__(self, attr2)
c = C(1, 2)
</code></pre>
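For context, in `C(A, B)` the MRO is `C -> A -> B -> Base -> object`, so `super(A, self)` means "the class after A in the instance's MRO", which is `B`, not `Base`. One option (a sketch) is to name `Base` explicitly inside `A.__init__` instead of going through `super()`:

```python
class Base:
    def __init__(self, attr_base1, attr_base2):
        self.attr_base1 = attr_base1
        self.attr_base2 = attr_base2

class A(Base):
    def __init__(self, attr, attr_base1, attr_base2):
        # Calling Base directly bypasses the MRO, so B.__init__ is not
        # invoked here even when self is an instance of C(A, B).
        Base.__init__(self, attr_base1, attr_base2)
        self.attr1 = attr

class B(Base):
    def __init__(self, attr):
        self.attr2 = attr

class C(A, B):
    def __init__(self, attr1, attr2):
        A.__init__(self, attr1, "b1", "b2")
        B.__init__(self, attr2)

c = C(1, 2)
```

The trade-off is that explicit base calls opt out of cooperative multiple inheritance; the fully cooperative alternative would have every class call `super().__init__` and forward keyword arguments.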
|
<python><method-resolution-order>
|
2023-05-23 18:01:44
| 1
| 362
|
Mr.O
|
76,317,227
| 19,130,803
|
MyPy: module not found mypy-extension
|
<p>I am working on a Python app using Docker. I am using <code>mypy</code> as a type checker; during checking I got an error for <code>Callable[NamedArg]</code>.</p>
<p>version <code>mypy = "^1.2.0"</code></p>
<p>Upon this, I used <code>from mypy_extensions import NamedArg</code> and corrected the error.</p>
<pre><code>def bar(*, id: int) -> bool:
return True
foo: Callable[[NamedArg(int, "id")], bool] = bar
</code></pre>
<p>This time type checking <strong>passed</strong> and the Docker build process started, but during launch I am getting this error:</p>
<pre><code>ModuleNotFoundError: No module named 'mypy_extensions'
</code></pre>
<p>After, this I ran <code>poetry add mypy-extensions --group dev</code>, it installed successfully.</p>
<p>version <code>mypy-extensions = "^1.0.0"</code></p>
<p>Then I ran it again and am still <strong>getting the same error</strong>.</p>
<p>Please help.</p>
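One thing worth checking: since `mypy_extensions` is imported by the application code, it has to be a regular runtime dependency, not a dev-group one (dev groups are typically excluded from production Docker images). Alternatively, a callback protocol from `typing` expresses the same keyword-only signature with no runtime dependency at all — a sketch (the protocol name is mine):

```python
from typing import Protocol

class BarCallable(Protocol):
    # Structural type: any callable taking a keyword-only `id: int`
    # and returning bool matches this protocol.
    def __call__(self, *, id: int) -> bool: ...

def bar(*, id: int) -> bool:
    return True

foo: BarCallable = bar
```

`Protocol` is in the standard library from Python 3.8, so the container needs nothing beyond the interpreter.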
|
<python><docker><mypy>
|
2023-05-23 17:18:12
| 0
| 962
|
winter
|
76,317,200
| 20,292,449
|
Cannot open include file: 'sys/mman.h': No such file or directory
|
<p>I was trying to install the Raspberry Pi GPIO library with the command below on Windows, in the VS Code PowerShell terminal. I am trying to control the Raspberry Pi 3 pins by simulating the hardware design in Proteus 8.13, not on actual Raspberry Pi 3 hardware.</p>
<pre><code>pip install RPi.GPIO
</code></pre>
<p>but keeps getting the following error:-</p>
<pre><code>fatal error C1083: Cannot open include file: 'sys/mman.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.36.32532\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for RPI.GPIO
Failed to build RPI.GPIO
ERROR: Could not build wheels for RPI.GPIO, which is required to install pyproject.toml-based projects
</code></pre>
<p>I already downloaded Visual Studio 2022 with the <code>C++ build tools</code> and added the necessary environment variables, but I keep getting the error mentioned above.</p>
|
<python><pip><raspberry-pi3>
|
2023-05-23 17:13:21
| 1
| 532
|
ayex
|
76,317,082
| 12,091,935
|
How to get pythonpath to work for the python module in openmodelica Buildings running ubuntu 20
|
<p>I am having problems running my model in OpenModelica on Ubuntu 20. I tried exporting the path following the documentation and installed libpython3.8-dev.
<a href="https://build.openmodelica.org/Documentation/Buildings.Utilities.IO.Python_3_8.UsersGuide.html" rel="nofollow noreferrer">https://build.openmodelica.org/Documentation/Buildings.Utilities.IO.Python_3_8.UsersGuide.html</a>
I ran the model, but I am getting this error; I am not sure whether the problem is my Python code or something wrong with the PYTHONPATH.</p>
<pre class="lang-none prettyprint-override"><code>Simulation process failed. Exited with code 255.
> Simulation process failed. Exited with code 255.
/tmp/OpenModelica_sigi-laptop/OMEdit/peakShavingPythonModule/peakShavingPythonModule -port=36007 -logFormat=xmltcp -override=startTime=0,stopTime=10,stepSize=1,tolerance=1e-6,solver=dassl,outputFormat=mat,variableFilter=.* -r=/tmp/OpenModelica_sigi-laptop/OMEdit/peakShavingPythonModule/peakShavingPythonModule_res.mat -w -lv=LOG_STATS -inputPath=/tmp/OpenModelica_sigi-laptop/OMEdit/peakShavingPythonModule -outputPath=/tmp/OpenModelica_sigi-laptop/OMEdit/peakShavingPythonModule
The initialization finished successfully without homotopy method.
Failed to load "/home/sigi-laptop/.openmodelica/libraries/Buildings 9.1.0/Resources/Python-Sources/peak_shaving_no_soc.py". This may occur if you did not set the PYTHONPATH environment variable or if the Python module contains a syntax error. The error message is "(null)"
</code></pre>
<p>My modelica code</p>
<pre><code>model peakShavingPythonModule
Buildings.Utilities.IO.Python_3_8.Real_Real peak_shaving_test(functionName = "peak_shaving", moduleName = "/home/sigi-laptop/.openmodelica/libraries/Buildings 9.1.0/Resources/Python-Sources/peak_shaving_no_soc.py", nDblRea = 1, nDblWri = 1, samplePeriod = 1) annotation(
Placement(visible = true, transformation(origin = {-14, 30}, extent = {{-10, -10}, {10, 10}}, rotation = 0)));
Modelica.Blocks.Sources.CombiTimeTable combiTimeTable(table = [0, 0; 1, 100; 2, -100; 3, 150; 4, -150; 5, 75; 6, 30; 7, 10; 8, -5; 9, -32; 10, 42]) annotation(
Placement(visible = true, transformation(origin = {-64, 30}, extent = {{-10, -10}, {10, 10}}, rotation = 0)));
equation
connect(combiTimeTable.y, peak_shaving_test.uR) annotation(
Line(points = {{-52, 30}, {-26, 30}}, color = {0, 0, 127}, thickness = 0.5));
annotation(
uses(Buildings(version = "9.1.0"), Modelica(version = "4.0.0")),
experiment(StartTime = 0, StopTime = 10, Tolerance = 1e-6, Interval = 1),
__OpenModelica_commandLineOptions = "--matchingAlgorithm=PFPlusExt --indexReductionMethod=dynamicStateSelection -d=initialization,NLSanalyticJacobian",
__OpenModelica_simulationFlags(lv = "LOG_STATS", s = "dassl", variableFilter = ".*"));
end peakShavingPythonModule;
</code></pre>
<p>My python code (saved to /.openmodelica/libraries/Buildings 9.1.0/Resources/Python-Sources):</p>
<pre><code>def peak_shaving(net_load):
if (net_load <= -15) and (net_load >= -100):
ivt_ctrl = abs(net_load + 15)
elif (net_load <= -100):
ivt_ctrl = 100
elif (net_load >= 0) and (net_load < 100):
ivt_ctrl = -1*(net_load);
elif (net_load < 100):
ivt_ctrl = -100
else:
ivt_ctrl = 0
return int(ivt_ctrl)
</code></pre>
|
<python><modelica><pythonpath><openmodelica>
|
2023-05-23 16:55:07
| 1
| 435
|
Luis Enriquez-Contreras
|
76,317,031
| 4,858,605
|
How to handle multiple pytorch models with pytriton + sagemaker
|
<p>I am trying to adapt pytriton to host multiple models for a multi-model sagemaker setup.
In my case, I am trying to get it to load all models that are hosted in the SAGEMAKER_MULTI_MODEL_DIR folder.</p>
<p>I could not find any relevant example <a href="https://github.com/triton-inference-server/pytriton/tree/main/examples" rel="nofollow noreferrer">here</a> for a multi-model use case, so I am trying the code below. Is this the right approach?</p>
<pre><code>import logging
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton
logger = logging.getLogger("examples.multiple_models_python.server")
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(name)s: %(message)s")
# assume these are custom pytorch models
# loaded from SAGEMAKER_MULTI_MODEL_DIR using a custom function
models = [model1, model2]
@batch
def _infer(input, model):
# do processing
return [result]
with Triton() as triton:
logger.info("Loading models")
for model in models:
triton.bind(
model_name=model.name,
infer_func=_infer,
inputs=[
Tensor(name="multiplicand", dtype=np.float32, shape=(-1,)),
model
],
outputs=[
Tensor(name="product", dtype=np.float32, shape=(-1,)),
],
config=ModelConfig(max_batch_size=8),
)
triton.serve()
</code></pre>
<p>However, this does not work because the models do not exist at load time for pytriton. Is there any more documentation on using pytriton in a multi-model setup?</p>
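One pattern worth trying (a sketch under my own assumptions): `inputs` should only list `Tensor` specs, so instead of passing the model object through `inputs`, bind one inference callable per model by closing over the model. `FakeModel` below is a hypothetical stand-in for the PyTorch models loaded from `SAGEMAKER_MULTI_MODEL_DIR`; the actual `triton.bind` calls are left commented since they need a running Triton.

```python
class FakeModel:
    """Hypothetical stand-in for a loaded PyTorch model."""
    def __init__(self, name, factor):
        self.name, self.factor = name, factor

    def predict(self, x):
        return [v * self.factor for v in x]

def make_infer(model):
    # Each returned function captures its own model in a closure.
    def _infer(multiplicand):
        return [model.predict(multiplicand)]
    return _infer

models = [FakeModel("m1", 2), FakeModel("m2", 3)]
infer_funcs = {m.name: make_infer(m) for m in models}

# with Triton() as triton:
#     for model in models:
#         triton.bind(model_name=model.name,
#                     infer_func=infer_funcs[model.name],
#                     inputs=[Tensor(name="multiplicand", dtype=np.float32, shape=(-1,))],
#                     outputs=[Tensor(name="product", dtype=np.float32, shape=(-1,))],
#                     config=ModelConfig(max_batch_size=8))
#     triton.serve()
```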
|
<python><amazon-web-services><amazon-sagemaker><triton>
|
2023-05-23 16:48:20
| 1
| 2,462
|
toing_toing
|
76,316,915
| 1,445,660
|
pika.exceptions.StreamLostError python pika rabbitmq
|
<p>I get <code>pika.exceptions.StreamLostError: Stream connection lost: ConnectionResetError(10054...</code> when I call <code>connection.channel()</code>:</p>
<pre><code>params = pika.ConnectionParameters('localhost')
connection = pika.BlockingConnection(params)
channel = connection.channel()
...
</code></pre>
<p>RabbitMQ 3.11.16
Erlang 26.0</p>
|
<python><rabbitmq><pika>
|
2023-05-23 16:32:49
| 1
| 1,396
|
Rony Tesler
|
76,316,765
| 3,247,006
|
How to pass metadata to "Metadata" section in "Payments" on Stripe Dashboard after a payment on Stripe Checkout?
|
<p>With the Django code below, I'm trying to pass metadata to <strong>Metadata</strong> section in <strong>Payments</strong> on <strong>Stripe Dashboard</strong>:</p>
<pre class="lang-py prettyprint-override"><code># "views.py"
from django.shortcuts import redirect
import stripe
def test(request):
checkout_session = stripe.checkout.Session.create(
line_items=[
{
"price_data": {
"currency": "USD",
"unit_amount_decimal": 1000,
"product_data": {
"name": "T-shirt",
},
},
"quantity": 2,
}
],
metadata = { # Here
"name": "Joho Smith",
"age": "36",
"gender": "Male",
},
mode='payment',
success_url='http://localhost:8000',
cancel_url='http://localhost:8000'
)
return redirect(checkout_session.url, code=303)
</code></pre>
<p>So, I make a payment on <strong>Stripe Checkout</strong> as shown below:</p>
<p><a href="https://i.sstatic.net/0PZek.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0PZek.png" alt="enter image description here" /></a></p>
<p>Then, I click on the payment on <strong>Payments</strong> on <strong>Stripe Dashboard</strong> as shown below:</p>
<p><a href="https://i.sstatic.net/3gzui.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3gzui.png" alt="enter image description here" /></a></p>
<p>But, <strong>Metadata</strong> section doesn't have anything as shown below:</p>
<p><a href="https://i.sstatic.net/22sN0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/22sN0.png" alt="enter image description here" /></a></p>
<p>But actually, <strong>Events and logs</strong> section has the metadata as shown below:</p>
<p><a href="https://i.sstatic.net/vt1uN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vt1uN.png" alt="enter image description here" /></a></p>
<p>So, how can I pass metadata to <strong>Metadata</strong> section in <strong>Payments</strong> on <strong>Stripe Dashboard</strong> after a payment on <strong>Stripe Checkout</strong>?</p>
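A hedged guess at the cause: metadata passed at the top level of `Session.create` is stored on the Checkout Session object, while the Payments page shows the PaymentIntent's metadata. Stripe's `payment_intent_data` parameter forwards metadata to the underlying PaymentIntent. A sketch of the keyword structure only (the actual call is commented out since it needs a real API key):

```python
session_kwargs = {
    "mode": "payment",
    "success_url": "http://localhost:8000",
    "cancel_url": "http://localhost:8000",
    # Forwarded onto the PaymentIntent, which is what the
    # "Metadata" section on the Payments page displays.
    "payment_intent_data": {
        "metadata": {"name": "Joho Smith", "age": "36", "gender": "Male"},
    },
    # line_items as in the question ...
}
# checkout_session = stripe.checkout.Session.create(**session_kwargs)
```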
|
<python><django><stripe-payments><metadata><checkout>
|
2023-05-23 16:13:18
| 2
| 42,516
|
Super Kai - Kazuya Ito
|
76,316,606
| 8,214,951
|
Skorch NeuralNetRegressor and GridSearchCV - Custom Parameters
|
<p>I have the following model defined, that I would like to apply Hyperparameter tuning to. I want to use GridSearchCV and change the number of layers etc.</p>
<pre class="lang-py prettyprint-override"><code>class Regressor(nn.Module):
def __init__(self, n_layers=3, n_features=10, activation=nn.ReLU):
super().__init__()
self.layers = []
self.activation_functions = []
for i in range(n_layers):
self.layers.append(nn.Linear(n_features, n_features))
self.activation_functions.append(activation())
self.add_module(f"layer{i}", self.layers[-1])
self.add_module(f"act{i}", self.activation_functions[-1])
self.output = nn.Linear(n_features, 1)
def forward(self, x):
for layer, act in zip(self.layers, self.activation_functions):
x=act(layer(x))
x = self.output(x)
return x
</code></pre>
<p>I have defined the Skorch NeuralNetRegressor as follows:</p>
<pre class="lang-py prettyprint-override"><code>model = NeuralNetRegressor(
    module=Regressor,
max_epochs=100,
batch_size=10,
module__n_layers=2,
criterion=nn.MSELoss,
)
print(model.initialize())
</code></pre>
<p>My parameter grid is</p>
<pre class="lang-py prettyprint-override"><code>param_grid = {
'model__optimizer': [optim.Adam, optim.Adamax, optim.NAdam],
'model__max_epochs': list(range(30,40)), # Want to ramp between 10 and 100
'module__activation': [nn.Identity, nn.ReLU, nn.ELU, nn.ReLU6, nn.GELU, nn.Softplus, nn.Softsign, nn.Tanh,
nn.Sigmoid, nn.Hardsigmoid],
'model__batch_size': [10,12,15,20],
'model__n_layers': list(range(11,30)),
'model__lr': [0.0001, 0.0008, 0.009, 0.001, 0.002, 0.003, 0.004, 0.01],
}
</code></pre>
<p>When using the Pipeline:</p>
<pre class="lang-py prettyprint-override"><code>pipeline = Pipeline(steps=[('scaler', StandardScaler()),
('model', NeuralNetRegressor(module=Regressor, device='cuda'))])
grid = GridSearchCV(
estimator = pipeline,
param_grid=param_grid,
n_jobs=-1,
cv=3,
error_score='raise',
return_train_score=True,
verbose=3
)
</code></pre>
<p>I get the following error:</p>
<pre class="lang-py prettyprint-override"><code>Invalid parameter 'n_layers' for estimator <class 'skorch.regressor.NeuralNetRegressor'>[uninitialized](
module=<class '__main__.Regressor'>,
). Valid parameters are: ['module', 'criterion', 'optimizer', 'lr', 'max_epochs', 'batch_size', 'iterator_train', 'iterator_valid', 'dataset', 'train_split', 'callbacks', 'predict_nonlinearity', 'warm_start', 'verbose', 'device', 'compile', '_params_to_validate'].
</code></pre>
<p>Is there any way to use custom parameter names?</p>
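A hedged guess at the cause: inside a pipeline, parameters destined for the module need both prefixes — `model__` to route through the pipeline step, then `module__` so skorch forwards them to the module's `__init__`. `model__n_layers` strips `model__` and tries to set `n_layers` directly on `NeuralNetRegressor`, which is exactly the error shown. A sketch of the corrected key names (the optimizer/activation entries are placeholder strings here, just to keep the sketch self-contained):

```python
param_grid = {
    "model__optimizer": ["Adam", "Adamax", "NAdam"],      # placeholders for optim.* classes
    "model__max_epochs": list(range(30, 40)),
    "model__module__activation": ["ReLU", "ELU"],         # placeholders for nn.* classes
    "model__batch_size": [10, 12, 15, 20],
    "model__module__n_layers": list(range(11, 30)),       # note the module__ prefix
    "model__lr": [0.0001, 0.001, 0.01],
}
```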
|
<python><scikit-learn><pytorch><gridsearchcv><scikit-optimize>
|
2023-05-23 15:52:23
| 0
| 430
|
flying_loaf_3
|
76,316,414
| 386,861
|
Altair returns error with single selector
|
<p>Trying to create a slider on the gapminder dataset from vega in Altair.</p>
<pre><code>import pandas as pd
import altair as alt
from vega_datasets import data
df= data.gapminder()
yearslider = alt.selection_single(
name="Year",
field="year",
init={"year": 1955},
bind=alt.binding_range(min=1955, max=2005, step=5)
)
alt.Chart(df).mark_circle().encode(
alt.X("fertility:Q", title=None, scale=alt.Scale(zero=False)),
alt.Y("life_expect:Q", title="Life expectancy",
scale=alt.Scale(zero=False)),
alt.Size("pop:Q", scale=alt.Scale(range=[0, 1000]),
legend=alt.Legend(orient="bottom")),
alt.Color("cluster:N", legend=None),
alt.Shape("cluster:N"),
alt.Order("pop:Q", sort="descending"),
tooltip=['country:N'],
).configure_view(fill="#fff").properties(title="Number of children by mother").interactive().add_selector(yearslider).transform_filter(yearslider)
</code></pre>
<p>I don't know what the error means.</p>
<pre><code>altair.vegalite.v4.schema.core.SelectionDef->2->init, validating 'anyOf'
1955 is not valid under any of the given schemas
</code></pre>
|
<python><pandas><altair>
|
2023-05-23 15:30:31
| 1
| 7,882
|
elksie5000
|
76,316,395
| 12,169,964
|
converting curl url encoded data to python format
|
<p>I have the following working curl request which I use to update my apps manifest in slack:</p>
<pre><code>curl -D /dev/stderr -s -d app_id=<APP_ID> --data-urlencode manifest@manifest.json -H 'authorization: Bearer <CURRENT_TOKEN>' https://slack.com/api/apps.manifest.update
</code></pre>
<p>I am attempting to achieve the same result using python3 with the below:</p>
<pre><code>import requests
currentKey = <CURRENT_KEY>
headers = {
'authorization': 'Bearer ' + currentKey,
}
data = {
'app_id': '<APP_ID>',
}
response = requests.post('https://slack.com/api/apps.manifest.update', headers=headers, data=data)
print(response.text)
</code></pre>
<p>This code will currently return an error:</p>
<blockquote>
<p>{"ok":false,"error":"invalid_arguments","response_metadata":{"messages":["[ERROR]
missing required field: manifest"]}}</p>
</blockquote>
<p>as I haven't passed the manifest.json file which needs to be provided as url encoded data based on <a href="https://api.slack.com/methods/apps.manifest.update" rel="nofollow noreferrer">slacks docs</a>. Can someone confirm what the best approach is with encoding and passing my files data to the request?</p>
<p>I've attempted different revisions of the upload using <code>json</code> and <code>urllib</code> to try and read in the file, encode it, pass it to an object and include it within the 'data' dict but none seem to return an accepted format.</p>
<p>Is the process I'm trying to take correct, or should I be looking to pass in the file via the <code>files</code> option for <code>requests</code>?</p>
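One sketch of the approach (hedged): `requests` URL-encodes everything in the `data` dict automatically, which matches curl's `--data-urlencode` behaviour, so the manifest file can be read as a string and sent as an ordinary form field — no `files` upload needed. The sample manifest content below is hypothetical; in practice it would come from `open("manifest.json").read()`.

```python
import json

# Stand-in for: manifest = open("manifest.json").read()
manifest = json.dumps({"display_information": {"name": "demo-app"}})

data = {
    "app_id": "<APP_ID>",
    "manifest": manifest,  # plain string; requests form-encodes it for you
}
# response = requests.post("https://slack.com/api/apps.manifest.update",
#                          headers=headers, data=data)
```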
|
<python><curl><python-requests><slack-api>
|
2023-05-23 15:28:45
| 1
| 335
|
El_Birdo
|
76,316,261
| 10,335
|
How to edit an already created Python Script in PowerBI?
|
<p>In my Power BI dashboard, I created a Python Script that accesses an API and generates a Pandas data frame.</p>
<p>It works fine, but how can I edit the Python code?</p>
<p>I thought it would be something simple, but I can't really find how to find it in the interface. If I send the .pbix file to someone, they will receive an alert that a Python script is executing and display the code nicely formatted.</p>
<p>I can find the code if I go to "Model Exhibition -> Edit query -> Advanced Editor" (I'm translating the options from another language; they may differ slightly). It is M-language code, and the Python script is displayed as one long line, as in the image below:</p>
<p><a href="https://i.sstatic.net/NI6o4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NI6o4.png" alt="How python code is displayed in power bi" /></a></p>
<p>I believe it is possible to open a text box to edit the Python script, but can't really find it.</p>
|
<python><powerbi><powerquery><powerbi-desktop>
|
2023-05-23 15:13:03
| 2
| 40,291
|
neves
|
76,316,138
| 7,909,676
|
Invoking Amazon Lex using audio recorded from capacitor-voice-recorder
|
<p><strong>TLDR;</strong> How to convert <code>audio/webm;codecs=opus</code> to <code>audio/l16; rate=16000; channels=1</code> in python.</p>
<hr />
<h2>The issue</h2>
<p>I am using <code>capacitor-voice-recorder</code> to record audio in my Ionic application. In Firefox, it records audio in <code>audio/webm;codecs=opus</code> format. It seems Amazon Lex requires PCM format:</p>
<ul>
<li>PCM format, audio data must be in little-endian byte order.
<ul>
<li>audio/l16; rate=16000; channels=1</li>
<li>audio/x-l16; sample-rate=16000; channel-count=1</li>
<li>audio/lpcm; sample-rate=8000; sample-size-bits=16; channel-count=1; is-big-endian=false</li>
</ul>
</li>
</ul>
<p>I know nothing about audio, so apologies in advance.</p>
<p>I send this audio as a base64 encoded string to a Lambda function which is written in python, and invokes Lex which looks like this:</p>
<pre><code>def get_lex(recording):
try:
response = client.recognize_utterance(
botId='',
botAliasId='',
localeId='en_US',
sessionId=str(uuid.uuid4()),
requestContentType='audio/l16; rate=16000; channels=1',
responseContentType='text/plain; charset=utf-8',
inputStream=recording
)
return decode(response['interpretations'])
except Exception as e:
print(e)
def lambda_handler(event, context):
# Decode Base64 string into bytes
decoded_bytes = base64.b64decode(event['body'])
response = get_lex(decoded_bytes)
</code></pre>
<h2>The Question</h2>
<p>My question is, how can I convert the audio which I have to a format which is acceptable by Amazon Lex. I can do it in either my application in Javascript, or in my Lambda function in Python.</p>
<p>Anything that I have searched points me to ffmpeg but I don't believe that's a valid option for me in Lambda.</p>
|
<python><amazon-web-services><audio><amazon-lex><amazon-polly>
|
2023-05-23 14:59:21
| 1
| 20,464
|
Leeroy Hannigan
|
76,316,081
| 13,349,935
|
WebSocket connection to 'wss://127.0.0.1:8080/' failed
|
<h1>Problem Context</h1>
<p>I have an express server serving my webpage with HTTPS via the following code:</p>
<pre><code>const express = require("express");
const app = express();
const fs = require("fs");
const https = require("https");
const sslKey = fs.readFileSync(__dirname + "/certs/key.pem");
const sslCert = fs.readFileSync(__dirname + "/certs/cert.pem");
app.get("/", (req, res) => {
res.sendFile(__dirname + "/pages/index.html");
});
https
.createServer(
{
key: sslKey,
cert: sslCert,
},
app
)
.listen(3000);
</code></pre>
<p>The webpage is connecting to a Python websocket server running on my computer on port 8080 via the following code:</p>
<pre><code>const server = "127.0.0.1";
const ws = new WebSocket(`wss://${server}:8080`);
</code></pre>
<p>And my Python websocket server is running based on the following code:</p>
<pre><code>import asyncio
import websockets
import random
import string
import subprocess
import os
import logging
import ssl
import speech_recognition as sr
r = sr.Recognizer()
logging.basicConfig()
ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ssl_cert = "certs/cert.pem"
ssl_key = "certs/key.pem"
ssl_context.load_cert_chain(ssl_cert, keyfile=ssl_key)
async def server(websocket, path):
await call_some_custom_irrelevant_function_with_the_socket(websocket, path)
start_server = websockets.serve(
server, "127.0.0.1", 8080, ssl=ssl_context, origins="*")
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
</code></pre>
<p>There are separate TLS/SSL certificates being used by both, the express server, and the Python websocket server. I generated them and granted them trust on my local computer using <a href="https://github.com/FiloSottile/mkcert" rel="nofollow noreferrer">mkcert</a>, a way to generate and automatically grant certificates trusted authority on your local device. Certificates for both are placed inside the <code>certs</code> directory of each's project folder and appropriately referenced in the code.</p>
<p>The command I used to generate certificates for both was:</p>
<pre><code>mkcert localhost 127.0.0.1 ::1
</code></pre>
<p>I run the Python websocket server with <code>python app.py</code> in its directory successfully, and start my express app using <code>nodemon app.js</code> in its directory as well, being able to access <code>https://localhost:3000</code> in a secure manner.</p>
<h1>Problem</h1>
<p>When I open my webpage and pull the trigger to connect to the websocket server (Irrelevant code as it's just an event handler on a button that calls the websocket connection code I gave above and some other irrelevant socket event stuff), it waits for like 1 second and then gives out the following error:</p>
<pre><code>WebSocket connection to 'wss://127.0.0.1:8080/' failed:
</code></pre>
<p>I researched a bit regarding this and it <em>seems</em> that there may be some sort of problem with how I am integrating my TLS/SSL certificates, however, I am clueless as to exactly what. If someone could help me out in fixing this, or even point me in the right direction, I'd be grateful.</p>
|
<javascript><python><node.js><express><websocket>
|
2023-05-23 14:54:31
| 0
| 1,392
|
Syed M. Sannan
|
76,316,066
| 2,610,522
|
Xarray distance based sliding window in lat and lon system?
|
<p>I need to select all pixels within 50km of a central pixel in a lat and long coordinate system. I could do the loop but that is not ideal. Currently the <code>rolling</code> function in <code>xarray</code> is based on fixed window size. This doesn't work for me since the number of cells within 50 km depends on the latitude (more pixels in higher latitudes). Is there any efficient way to construct windows based on variable window size? Here is an example of fixed window:</p>
<pre><code>import xarray as xr
ds = xr.tutorial.load_dataset("ersstv5")
ds_roll = ds.rolling({"lat": 5, "lon": 5}, center=True).construct(
{"lat": "lat_dim", "lon": "lon_dim"}
)
</code></pre>
<p>I am looking for a way to make the window size dynamic based on a given distance from the central pixel.<br />
Thanks</p>
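To make the goal concrete, here is a sketch of the selection I have in mind, done with a haversine distance in plain NumPy (the function names <code>haversine_km</code> and <code>mask_within</code> are illustrative; this is not xarray API):

```python
import numpy as np

def haversine_km(lat0, lon0, lats, lons, radius_km=6371.0):
    """Great-circle distance in km from (lat0, lon0) to arrays of lat/lon (degrees)."""
    lat0, lon0, lats, lons = map(np.radians, (lat0, lon0, lats, lons))
    dlat = lats - lat0
    dlon = lons - lon0
    a = np.sin(dlat / 2) ** 2 + np.cos(lat0) * np.cos(lats) * np.sin(dlon / 2) ** 2
    return 2 * radius_km * np.arcsin(np.sqrt(a))

def mask_within(lat0, lon0, lat_grid, lon_grid, max_km=50.0):
    """Boolean (lat, lon) mask of grid cells within max_km of a central pixel."""
    lon2d, lat2d = np.meshgrid(lon_grid, lat_grid)
    return haversine_km(lat0, lon0, lat2d, lon2d) <= max_km
```

The mask automatically covers more longitude cells at high latitudes, which is the behaviour a fixed `rolling` window cannot give.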
|
<python><python-xarray>
|
2023-05-23 14:53:30
| 0
| 810
|
Ress
|
76,315,985
| 8,012,864
|
Python remove duplicates from multidimensional list
|
<p>I have a list in python that looks like this...</p>
<pre><code>[
{
"title": "Green Jacket",
"price": "18",
"instock": "yes",
},
{
"title": "Red Hat",
"price": "5",
"instock": "yes",
},
{
"title": "Green Jacket",
"price": "25",
"instock": "no",
},
{
"title": "Purple Pants",
"price": "100",
"instock": "yes",
},
]
</code></pre>
<p>I am trying to remove items from the list that have duplicate names, so in the example above the final list would look like this...</p>
<pre><code>[
{
"title": "Green Jacket",
"price": "18",
"instock": "yes",
},
{
"title": "Red Hat",
"price": "5",
"instock": "yes",
},
{
"title": "Purple Pants",
"price": "100",
"instock": "yes",
},
]
</code></pre>
<p>Is converting to a dict going to help me in this instance?</p>
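For reference, here is a sketch of the dict-based approach I am considering, using the sample data above and keeping the first occurrence of each title:

```python
items = [
    {"title": "Green Jacket", "price": "18", "instock": "yes"},
    {"title": "Red Hat", "price": "5", "instock": "yes"},
    {"title": "Green Jacket", "price": "25", "instock": "no"},
    {"title": "Purple Pants", "price": "100", "instock": "yes"},
]

# Keep the first item seen for each title; dict keys are unique,
# and insertion order is preserved (Python 3.7+).
seen = {}
for item in items:
    seen.setdefault(item["title"], item)
deduped = list(seen.values())
```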
|
<python>
|
2023-05-23 14:41:51
| 1
| 443
|
jsmitter3
|
76,315,907
| 3,007,075
|
Applying a function to itself multiple times in python
|
<p>According to the documentation of <a href="https://docs.python.org/3/library/functools.html#functools.reduce" rel="nofollow noreferrer">functools</a>, the <code>reduce</code> method is useful for</p>
<blockquote>
<p>Apply function of two arguments cumulatively to the items of iterable, from left to right, so as to reduce the iterable to a single value. For example, <code>reduce(lambda x, y: x+y, [1, 2, 3, 4, 5])</code> calculates <code>((((1+2)+3)+4)+5)</code>.</p>
</blockquote>
<p>Now, since I wanted a function of a single argument, I made the workaround of creating a lambda that ignores one input:</p>
<pre><code>from os.path import dirname
from functools import reduce
reduce(lambda d, _: dirname(d), range(3), __file__)
</code></pre>
<p>Is there an alternative to <code>reduce</code> that avoids this workaround? In Mathematica there is the <a href="https://reference.wolfram.com/language/ref/Nest.html" rel="nofollow noreferrer"><code>Nest</code></a> function, is there no analogue in python?</p>
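For comparison, a hand-rolled helper that plays the role of <code>Nest</code> would be trivial (the name <code>nest</code> is made up; it is not a stdlib function):

```python
import os.path

def nest(func, x, n):
    """Apply func to x, n times: nest(f, x, 3) == f(f(f(x)))."""
    for _ in range(n):
        x = func(x)
    return x

# Strips three path components, like the reduce() workaround above.
path = nest(os.path.dirname, "/a/b/c/d.py", 3)
```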
|
<python>
|
2023-05-23 14:33:21
| 2
| 1,166
|
Mefitico
|
76,315,710
| 577,647
|
Method takes 2 positional arguments but 3 were given
|
<p>My method actually gets 2 arguments other than self:</p>
<pre><code>class MyService(object):
def my_method(self, instance, **kwargs):
</code></pre>
<p>In the test code:</p>
<pre><code>my_service.my_method(old_instance, new_data)
</code></pre>
<p>and Python says that:</p>
<pre><code>takes 2 positional arguments but 3 were given
</code></pre>
<p>The method actually has <code>self</code> among its arguments, so I was expecting no problem.</p>
<p>What am I missing?</p>
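To make the situation concrete, a sketch with made-up names; my understanding is that <code>**kwargs</code> only collects <em>keyword</em> arguments, so a second positional value has nowhere to go:

```python
class MyService:
    def my_method(self, instance, **kwargs):
        return instance, kwargs

svc = MyService()

# svc.my_method("old", "new")  # TypeError: my_method() takes 2 positional
#                              # arguments but 3 were given
result = svc.my_method("old", new_data="new")  # extras must be keywords
```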
|
<python><python-3.x><django>
|
2023-05-23 14:11:18
| 2
| 2,888
|
tolga
|
76,315,624
| 2,479,038
|
Registering discriminated union automatically
|
<p>Using pydantic 1.10.7 and python 3.11.2</p>
<p>I have a recursive Pydantic model and I would like to deserialize each types properly using <a href="https://docs.pydantic.dev/latest/usage/types/#discriminated-unions-aka-tagged-unions" rel="nofollow noreferrer">discriminated union</a>.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Annotated, List, Literal, Union

from pydantic import BaseModel, Field

class Base(BaseModel):
    kind: str
    sub_models: Annotated[
        List[Union["A", "B"]],  # forward references, resolved below
        Field(
            default_factory=list,
            discriminator="kind"
        )
    ]

class A(Base):
    kind: Literal["a"]
    a_field: str

class B(Base):
    kind: Literal["b"]
    b_field: str

Base.update_forward_refs()  # resolve "A"/"B" now that they exist
</code></pre>
<p>I would like to automatically register the subclasses in a way Pydantic will be able to understand, like so</p>
<pre class="lang-py prettyprint-override"><code>from typing import Annotated, List, Literal, Set, Type, TypeVar, Union

from pydantic import BaseModel, Field

SubT = TypeVar("SubT", bound="Base")  # renamed so it doesn't shadow class B below

class Base(BaseModel):
    kind: str
    sub_models: Annotated[
        List[SubT],
        Field(
            default_factory=list,
            discriminator="kind"
        )
    ]
    _subs: Set[Type["Base"]] = set()

    def __init_subclass__(cls, /, **kwargs):
        Base._subs.add(cls)
        cls.__annotations__["kind"] = Literal[cls.__name__.lower()]  # <- works
        # list comprehension in a type definition is not valid:
        Base.__annotations__["sub_models"] = List[Union[subclass for subclass in Base._subs]]

class A(Base):
    a_field: str

class B(Base):
    b_field: str
</code></pre>
<p>Any idea how to have discriminated union configured dynamically?</p>
<p>I have tried registering the subclasses manually, but it involves a circular dependency of type hints, and I need to remember to add each new type to the union.</p>
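One runtime detail that seems relevant: <code>Union</code> can be subscripted with a tuple of types, which would avoid the invalid comprehension. A minimal sketch without pydantic (so the discriminator wiring is omitted; validator rebuilding would still be needed there):

```python
from typing import List, Literal, Union

class Base:
    _subs: list = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        Base._subs.append(cls)
        cls.__annotations__["kind"] = Literal[cls.__name__.lower()]
        # Union accepts a tuple of types at runtime, so no comprehension is needed:
        Base.__annotations__["sub_models"] = List[Union[tuple(Base._subs)]]

class A(Base):
    a_field: str

class B(Base):
    b_field: str
```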
|
<python><python-3.x><pydantic>
|
2023-05-23 14:02:16
| 1
| 1,211
|
abstrus
|