| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,315,486
| 6,930,340
|
Map values in a pandas series according to MultiIndex values
|
<p>I have a <code>pd.Series</code> with a multiindex.</p>
<pre><code>import pandas as pd

# Create the Series
data = {
    ("long_only", "Security_1"): -1,
    ("long_only", "Security_3"): 1,
    ("long_only", "Security_5"): 1,
    ("short_only", "Security_2"): -1,
    ("short_only", "Security_4"): 1,
    ("both", "Security_1"): -2,
    ("both", "Security_2"): -1,
    ("both", "Security_4"): 1,
}
signals = pd.Series(data, dtype="int8")
signals.index = signals.index.set_names(["direction", "symbol"])
print(signals)

direction   symbol
long_only   Security_1   -1
            Security_3    1
            Security_5    1
short_only  Security_2   -1
            Security_4    1
both        Security_1   -2
            Security_2   -1
            Security_4    1
dtype: int8
</code></pre>
<p>Now I would like to transform the integer values into strings according to the following <code>dict</code>:</p>
<pre><code># Define the mapping dictionary
trade_state_dict = {
    "long_only": {
        -1: "long -> flat",
        1: "flat -> long",
    },
    "short_only": {
        -1: "flat -> short",
        1: "short -> flat",
    },
    "both": {
        -2: "long -> short",
        -1: "long -> flat",
        1: "flat -> long",
        2: "short -> long",
    },
}
</code></pre>
<p>I need to use the "direction" level for finding the inner dictionary in <code>trade_state_dict</code>.</p>
<p>The expected result should look like this:</p>
<pre><code>direction   symbol
long_only   Security_1    "long -> flat"
            Security_3    "flat -> long"
            Security_5    "flat -> long"
short_only  Security_2    "flat -> short"
            Security_4    "short -> flat"
both        Security_1    "long -> short"
            Security_2    "long -> flat"
            Security_4    "flat -> long"
</code></pre>
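One possible sketch (not from the original post, and abbreviated data): pair each value with its <code>direction</code> index level and look the pair up in the nested dict.

```python
import pandas as pd

data = {
    ("long_only", "Security_1"): -1,
    ("long_only", "Security_3"): 1,
    ("short_only", "Security_2"): -1,
    ("both", "Security_1"): -2,
}
signals = pd.Series(data, dtype="int8")
signals.index = signals.index.set_names(["direction", "symbol"])

trade_state_dict = {
    "long_only": {-1: "long -> flat", 1: "flat -> long"},
    "short_only": {-1: "flat -> short", 1: "short -> flat"},
    "both": {-2: "long -> short", -1: "long -> flat",
             1: "flat -> long", 2: "short -> long"},
}

# Pair each value with its 'direction' level and look it up in the nested dict
mapped = pd.Series(
    [trade_state_dict[d][v] for d, v in
     zip(signals.index.get_level_values("direction"), signals)],
    index=signals.index,
)
```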
|
<python><pandas><dictionary>
|
2023-05-23 13:44:05
| 2
| 5,167
|
Andi
|
76,315,436
| 13,647,125
|
HTML iframe with dash output
|
<p>I have two pretty simple dashboards, and I would like to serve both of them with Flask, using main.py for routing.</p>
<p>app1.py</p>
<pre><code>import dash
from dash import html, dcc

app = dash.Dash(__name__)
app.layout = html.Div(
    children=[
        html.H1('App 1'),
        dcc.Graph(
            id='graph1',
            figure={
                'data': [{'x': [1, 2, 3], 'y': [4, 1, 2], 'type': 'bar', 'name': 'App 1'}],
                'layout': {
                    'title': 'App 1 Graph'
                }
            }
        )
    ]
)
</code></pre>
<p>and</p>
<p>app2.py</p>
<pre><code>import dash
from dash import html, dcc

app = dash.Dash(__name__)
app.layout = html.Div(
    children=[
        html.H1('App 2'),
        dcc.Graph(
            id='graph2',
            figure={
                'data': [{'x': [1, 2, 3], 'y': [2, 4, 1], 'type': 'bar', 'name': 'App 2'}],
                'layout': {
                    'title': 'App 2 Graph'
                }
            }
        )
    ]
)
</code></pre>
<p>main.py</p>
<pre><code># main_app.py
from flask import Flask, render_template
import app1
import app2

app = Flask(__name__)

@app.route('/')
def index():
    return 'Main App'

@app.route('/app1')
def render_dashboard1():
    return render_template('dashboard1.html')

@app.route('/app2')
def render_dashboard2():
    return render_template('dashboard2.html')

if __name__ == '__main__':
    app.run(debug=True)
</code></pre>
<p>dashboard1.html</p>
<pre><code><!-- dashboard1.html -->
<!DOCTYPE html>
<html>
<head>
<title>Dashboard 1</title>
</head>
<body>
<h1>Dashboard 1</h1>
<iframe src="/app1" width="1000" height="800"></iframe>
</body>
</html>
</code></pre>
<p>dashboard2.html</p>
<pre><code><!-- dashboard2.html -->
<!DOCTYPE html>
<html>
<head>
<title>Dashboard 2</title>
</head>
<body>
<h1>Dashboard 2</h1>
<iframe src="/app2" width="1000" height="800"></iframe>
</body>
</html>
</code></pre>
<blockquote>
<p>structure</p>
<pre><code>/
  app1.py
  app2.py
  main.py
  /templates
    dashboard1.html
    dashboard2.html
</code></pre>
</blockquote>
<p>But when I run main.py and open the route for app1, I can see the frame for app1, but there is no graph.
Could someone please explain how to use an iframe so that I can see the Dash output?</p>
|
<python><html><iframe><plotly-dash>
|
2023-05-23 13:38:13
| 1
| 755
|
onhalu
|
76,315,421
| 19,556,911
|
FME - How to manipulate attribute values of all features (in PythonCaller ?)
|
<p>I would like to manipulate a whole "column" (the list of attribute values) in a PythonCaller, but I don't really understand how I can do that. In the example below, I create a new attribute in <code>input</code> (i.e. iterating over each feature, right?). What I am trying to do:</p>
<ol>
<li><p>in <code>close</code> : to print all unique values of the newly created attribute
(WITHOUT iterating over each feature, so to work directly on the list of all attribute values that my data contain)</p>
</li>
<li><p>in <code>close</code> (or somewhere else) : to add directly a new attribute to my data by applying a function on the firstly created attribute (again WITHOUT iterating over each feature)</p>
</li>
</ol>
<p>Is it possible to do all these things in a PythonCaller?</p>
<pre><code>import fme
import fmeobjects

def processFeature(feature):
    pass

class FeatureProcessor(object):
    def __init__(self):
        pass

    def input(self, feature):
        progs_dict = globals()[fme.macroValues['VARNAME_DICT']]
        all_prog_cols = list(progs_dict.keys())
        all_col_attributes = filter(
            lambda x: x in all_prog_cols,
            feature.getAllAttributeNames())
        value_tmp = map(str, map(feature.getAttribute, all_col_attributes))
        value = [i for i in value_tmp if i == 'X']
        if len(value) == 0:
            value = [progs_dict['aucun']]
        feature.setAttribute('MYNEWATTR', ';'.join(value))
        self.pyoutput(feature)

    def close(self):
        #### EX 1) I WOULD LIKE TO PRINT ALL UNIQUE VALUES OF THE NEWLY CREATED ATTRIBUTE
        #print(', '.join(set(self.MYNEWATTR))) # <<< this does not work
        #### EX 2) WOULD IT BE POSSIBLE TO CREATE HERE A NEW ATTRIBUTE BASED ON THE NEWLY CREATED ONE WITHOUT ITERATING OVER ALL FEATURES
        #self.ANOTHERATTR = map(my_python_func, self.MYNEWATTR) # <<< does not work
        pass
</code></pre>
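For reference, a common pattern is to buffer the values (or the features themselves) in <code>input()</code> and post-process the whole "column" once in <code>close()</code>. The sketch below stubs out the FME-specific calls (<code>fme</code>/<code>fmeobjects</code>, <code>pyoutput</code>) so it runs anywhere; the attribute access on a plain dict stands in for <code>feature.getAttribute(...)</code>.

```python
class FeatureProcessor:
    """Sketch: accumulate per-feature values in input(), then work on the
    whole 'column' once in close(). In real FME, input() would also call
    self.pyoutput(feature), and features must be buffered in this class if
    you want to set new attributes on them from close()."""

    def __init__(self):
        self._values = []  # one entry per feature seen in input()

    def input(self, feature):
        # `feature` stands in for an FMEFeature; here it is just a dict
        self._values.append(feature["MYNEWATTR"])

    def close(self):
        # EX 1: all unique values, computed once over the full list
        self.unique_values = sorted(set(self._values))
        # EX 2: derive a new "column" from the accumulated one in a single pass
        self.derived = [v.upper() for v in self._values]

fp = FeatureProcessor()
for f in ({"MYNEWATTR": "a"}, {"MYNEWATTR": "b"}, {"MYNEWATTR": "a"}):
    fp.input(f)
fp.close()
```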
|
<python><fme>
|
2023-05-23 13:36:40
| 0
| 327
|
mazu
|
76,315,335
| 5,640,161
|
Can a Python package and its corresponding PyPI project have different names?
|
<p>For example, I'm wondering how it is possible that <code>scikit-learn</code> is the name of a PyPI package while the actual Python module is named <code>sklearn</code>. The reason I'm asking is that I have a local Python package <code>packageA</code> that I can't upload to PyPI, since that name happens to already be taken. I therefore wonder if I can upload it as <code>packageB</code> (which actually is available on PyPI)? If so, how can I do that?</p>
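For what it's worth, the <code>scikit-learn</code>/<code>sklearn</code> split works exactly this way: the distribution name on PyPI and the importable package name are declared independently. A minimal sketch with setuptools (the <code>packageA</code>/<code>packageB</code> names are the hypothetical ones from the question):

```toml
# pyproject.toml (sketch)
[project]
name = "packageB"            # the distribution name on PyPI: pip install packageB
version = "0.1.0"

[tool.setuptools]
packages = ["packageA"]      # the directory users import: import packageA
```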
|
<python><pip><pypi><python-packaging>
|
2023-05-23 13:28:06
| 1
| 863
|
Tfovid
|
76,315,248
| 10,413,428
|
Errors when using | instead of typing.Union in dataclasses type hints
|
<p>I am using Python 3.11.3 on Linux and currently for my union types in dataclasses I am using the following type hints:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import Union

import numpy as np

@dataclass
class Test:
    data: Union[np.array, list[tuple]]

if __name__ == "__main__":
    test = Test(data=np.array([2, 1]))
    test2 = Test(data=[(1, 2), (1, 2)])
</code></pre>
<p>Now I read that the <code>Union[]</code> syntax can be replaced with the more elegant <code>|</code> syntax since Python 3.10, but this does not work for me.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass

import numpy as np

@dataclass
class Test:
    data: np.array | list[tuple]

if __name__ == "__main__":
    test = Test(data=np.array([2, 1]))
    test2 = Test(data=[(1, 2), (1, 2)])
</code></pre>
<p>throws the following error:</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
  File "/home/user/test/test.py", line 7, in <module>
    class Test:
  File "/home/user/test/test.py", line 8, in Test
    data: np.array | list[tuple]
          ~~~~~~~~~^~~~~~~~~~~~~
TypeError: unsupported operand type(s) for |: 'builtin_function_or_method' and 'types.GenericAlias'
</code></pre>
<p>Am I missing something? Or does this not work for dataclasses?</p>
|
<python><python-3.x><python-3.11>
|
2023-05-23 13:18:28
| 0
| 405
|
sebwr
|
76,315,246
| 9,571,575
|
How to count results with SQLAlchemy Select API?
|
<p>All answers I could find to this question (i.e. counting results from queries in SQLAlchemy) were given for the Query API. But in my application I am using AsyncSession, which does not support <code>query</code>, so I have to use <code>select</code>. I have no idea how to get the count of all results from a query using the select API, as <code>count()</code> simply does not work.</p>
<p>My code so far:</p>
<pre><code>q = (
    select(User)
    .order_by(User.id)
)
if user_type:
    q = q.where(User.user_type == user_type)
if name:
    q = q.where(User.name.contains(name))
if surname:
    q = q.where(User.surname.contains(surname))
if email:
    q = q.where(User.email.contains(email))
result = await self.db_session.execute(q)
results = result.scalars()
return results
</code></pre>
<p>What I'd like to achieve is something like:</p>
<pre><code>count_results = results.count()
</code></pre>
<p>and return it alongside the list of retrieved objects. Thank you all in advance!</p>
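A common pattern with the 2.0-style select API is to wrap the filtered query in a subquery and select <code>func.count()</code> from it. A synchronous in-memory sketch (with <code>AsyncSession</code> the pattern is the same, just awaited; the <code>User</code> model here is a minimal stand-in):

```python
from sqlalchemy import Column, Integer, String, create_engine, func, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([User(name="alice"), User(name="bob")])
    session.commit()

    q = select(User).where(User.name.contains("ali"))
    # Count without fetching: SELECT count(*) FROM (<filtered query>)
    count_q = select(func.count()).select_from(q.subquery())
    total = session.execute(count_q).scalar_one()
    users = session.execute(q).scalars().all()
```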
|
<python><sqlalchemy><fastapi>
|
2023-05-23 13:18:26
| 1
| 831
|
ugabuga77
|
76,315,191
| 1,934,212
|
Read arrays from csv
|
<p>Parsing a csv file with the content</p>
<pre><code>time,X,Y,status1,status2
1659312306212,"[-53, -70]","[-1512, -1656]",-65,18.44140625
1659312421965,"[25, -34]","[-1532, -1520]",-71,18.43359375
</code></pre>
<p>using</p>
<pre><code>import pandas as pd
df = pd.read_csv("horrible.csv")
</code></pre>
<p>results in</p>
<pre><code>print(df)
time X Y status1 status2
0 1659312306212 [-53, -70] [-1512, -1656] -65 18.441406
1 1659312421965 [25, -34] [-1532, -1520] -71 18.433594
print(df.dtypes)
time int64
X object
Y object
status1 int64
status2 float64
dtype: object
</code></pre>
<p>What additional steps are necessary to</p>
<ol>
<li>interpret the objects in the X and Y columns as an array of numbers</li>
</ol>
<p>and</p>
<ol start="2">
<li>pivot them for each row</li>
</ol>
<p>such that the resulting df looks like</p>
<pre><code> time X Y status1 status2
0 1659312306212 -53 -1512 -65 18.441406
1 -70 -1656
2 1659312421965 25 -1532 -71 18.433594
3 -34 -1520
</code></pre>
<p>?</p>
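One way to sketch both steps (assuming pandas >= 1.3 for multi-column <code>explode</code>): parse the quoted lists with <code>ast.literal_eval</code> via <code>converters</code>, then explode X and Y in parallel.

```python
import ast
from io import StringIO

import pandas as pd

csv_text = """time,X,Y,status1,status2
1659312306212,"[-53, -70]","[-1512, -1656]",-65,18.44140625
1659312421965,"[25, -34]","[-1532, -1520]",-71,18.43359375
"""

# Step 1: interpret the quoted "[a, b]" strings as real Python lists
df = pd.read_csv(StringIO(csv_text),
                 converters={"X": ast.literal_eval, "Y": ast.literal_eval})

# Step 2: pivot each list element onto its own row (X and Y in lockstep)
out = df.explode(["X", "Y"], ignore_index=True)
```

Note that, unlike the expected output shown above, the time/status values are repeated on the extra rows rather than left blank.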
|
<python><pandas>
|
2023-05-23 13:11:47
| 1
| 9,735
|
Oblomov
|
76,315,110
| 736,312
|
Extraction of position of an image in a PDF file
|
<p>I am using the PyMuPDF library to extract images from a PDF file.
I want to get the position of the images (origin) and their size.<br>
I can get the sizes; however, I can't get the positions correctly using:</p>
<pre><code>def extract_images_from_pdf(_input_pdf_file_name, _output_folder):
    _pdf_file_document = fitz.open(_input_pdf_file_name)
    for _page_index, _page in enumerate(_pdf_file_document):  # Get the page itself
        _images_list = _pdf_file_document.get_page_images(pno=_page_index, full=True)  # Get image list for this page
        for _image_index, _image in enumerate(_images_list):
            _xref = _image[0]
            _base_image = _pdf_file_document.extract_image(_xref)
            _image_bytes = _base_image["image"]
            _image = PILImage.open(BytesIO(_image_bytes))
            _output_image_name = f"{_output_folder}/image_{_image_index + 1:04d}.png"
            _image.save(open(_output_image_name, "wb"))
</code></pre>
<p>I can process each image and extract it.<br>
However, I am having trouble retrieving the original position of those images.
I want to get each page as an image, get each image on that page, and then get the origin point and the size of those extracted images. I am using the following code to get the origin, but for some reason I am not getting the origin position correctly.</p>
<pre><code>def get_image_origins(_input_pdf_file_name, _page_index):
    _pdf_file_document = fitz.open(_input_pdf_file_name)
    _image_list = _pdf_file_document.get_page_images(pno=_page_index, full=True)
    _image_bounding_boxes = []
    for _image_index, _image_item in enumerate(_image_list):
        _image_code_name = _image_item[7]
        # The format of _image_bounding_box is (x_min, y_min, x_max, y_max) for each image inside the page.
        _image_rects = _pdf_file_document[_page_index].get_image_rects(_image_code_name, transform=True)
        _image_box = _pdf_file_document[_page_index].get_image_bbox(_image_item, transform=True)
        if len(_image_rects) > 0:
            _image_bounding_box, _ = _image_rects[0]
            _image_bounding_boxes.append(_image_bounding_box)
    return _image_bounding_boxes
</code></pre>
<p>Please help.</p>
|
<python><pymupdf>
|
2023-05-23 13:03:33
| 1
| 796
|
Toyo
|
76,314,996
| 1,632,519
|
Update requirements.txt without installing packages
|
<p>I have a <code>requirements.txt</code> with dependencies pinned using <code>==</code>. The pinning using <code>==</code> is a requirement to ensure reproducibility. I'd like to update all of them to the most recent version. I do not want to install any of them, I only want to modify <code>requirements.txt</code></p>
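A sketch of the rewrite step, kept offline and pure so it only touches the file's text. In practice the <code>latest</code> mapping would come from e.g. the PyPI JSON API (<code>https://pypi.org/pypi/&lt;name&gt;/json</code>) or <code>pip index versions &lt;name&gt;</code>; nothing gets installed at any point.

```python
import re

def bump_pins(requirements_text, latest):
    """Rewrite '==' pins given a {name: latest_version} mapping.
    Lines that are not pins, or whose package is not in `latest`,
    pass through unchanged."""
    out = []
    for line in requirements_text.splitlines():
        m = re.match(r"^([A-Za-z0-9_.\-]+)==(.+)$", line.strip())
        if m and m.group(1) in latest:
            out.append(f"{m.group(1)}=={latest[m.group(1)]}")
        else:
            out.append(line)
    return "\n".join(out)
```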
|
<python><pip><requirements.txt>
|
2023-05-23 12:56:25
| 1
| 2,126
|
Philippe
|
76,314,792
| 3,842,823
|
Python: Catching (and then re-throw) warnings from my code
|
<p>I want to catch and then re-throw warnings from my Python code, similarly to <code>try</code>/<code>except</code> clause. My purpose is to catch the warning and then re-throw it using my logger.</p>
<p>The warnings are issued from whatever packages I'm using, I would like something that is totally generic, exactly like the <code>try</code>/<code>except</code> clause.</p>
<p>How can I do that in Python >= v3.8?</p>
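Two generic options, sketched below: the stdlib's <code>logging.captureWarnings(True)</code> routes everything the <code>warnings</code> module emits to the <code>py.warnings</code> logger with no extra code, or you can record, log, and re-issue the warnings yourself:

```python
import logging
import warnings

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)

def run_and_log_warnings(func, *args, **kwargs):
    """Run func, log every warning it raised, then re-issue each warning."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")  # make sure nothing is swallowed
        result = func(*args, **kwargs)
    for w in caught:
        logger.warning("%s: %s", w.category.__name__, w.message)
        # re-issue with the original location information preserved
        warnings.warn_explicit(w.message, w.category, w.filename, w.lineno)
    return result
```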
|
<python><python-3.x><warnings>
|
2023-05-23 12:31:52
| 1
| 1,951
|
Xxxo
|
76,314,724
| 5,287,366
|
Annotations - a method is given a type as a generic argument. Annotate the return type to be exactly the same type as given in the argument
|
<p>Similar to a <code>classmethod</code>, there is a method that takes a generic type as the first argument (not an object of the type). I want to annotate its return type as precisely the same type that is given in the first argument.</p>
<pre><code>ChildT= typing.TypeVar("ChildT")
def load_child(child_type: ChildT, child_name: str) -> ChildT:
    # ...
    return child_type()
</code></pre>
<p>In the code below, the <code>load_child</code> shall be annotated to return the <code>child_type</code> type.</p>
<pre><code>test_frame = load_child(FrameType, "some_frame") # I want this to work
test_frame = load_child(FrameType(), "some_frame") # But, of course, only this works with the code above
</code></pre>
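The usual annotation for "a class object, not an instance" is <code>type[ChildT]</code> (or <code>typing.Type[ChildT]</code> before Python 3.9). A sketch, with <code>FrameType</code> as a stand-in class:

```python
import typing

ChildT = typing.TypeVar("ChildT")

def load_child(child_type: type[ChildT], child_name: str) -> ChildT:
    # child_type is the class itself; the return value is an instance of it
    return child_type()

class FrameType:  # stand-in for the question's FrameType
    pass

test_frame = load_child(FrameType, "some_frame")
```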
|
<python><python-typing>
|
2023-05-23 12:24:07
| 1
| 485
|
JD.
|
76,314,711
| 17,877,528
|
Best approach to fix this JSON file
|
<p>I have a huge JSON file that looks like this. I'm pasting an image because I think it's easier to see the problem.</p>
<p><a href="https://i.sstatic.net/XY1Md.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XY1Md.png" alt="enter image description here" /></a></p>
<p>It's not inside brackets and there's no comma separating one entry from the other.</p>
<p>How can I make this a valid JSON file using Python?</p>
<p>Thanks in advance.</p>
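Assuming the file is a stream of JSON objects back to back (as described: no brackets, no commas between entries), <code>json.JSONDecoder.raw_decode</code> can peel them off one at a time. A sketch:

```python
import json

def load_concatenated_json(text):
    """Parse a string of concatenated JSON values into a list."""
    decoder = json.JSONDecoder()
    objs, idx = [], 0
    text = text.strip()
    while idx < len(text):
        obj, end = decoder.raw_decode(text, idx)  # parse one value, get its end
        objs.append(obj)
        while end < len(text) and text[end].isspace():  # skip gap to next value
            end += 1
        idx = end
    return objs

data = load_concatenated_json('{"a": 1}\n{"b": 2}')
```

Writing the result back with <code>json.dump(objs, fh)</code> then produces one valid JSON array.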
|
<python><json><format>
|
2023-05-23 12:22:40
| 0
| 774
|
José Carlos
|
76,314,512
| 13,836,083
|
AsyncIO appears to be slower than multithreaded even if tasks are only bound to I/O
|
<p>I am creating 8,000 text files and writing 100 bytes of data to each file, in both an asyncio version and a multithreaded version. I was expecting the asyncio version of the code to complete before the multithreaded version, as all the tasks are I/O-bound and I had understood that asyncio would perform better, but it is the other way around. Asyncio is not only performing slowly, it is also taking more space than the threaded version of the code.</p>
<p>I must be doing something wrong; I have tried to find where my code falls short, but couldn't.</p>
<p><code>Async IO Version</code></p>
<pre><code>import time
import aiofiles
import asyncio
from timeit import timeit

DATA = "A" * 100

async def write_file(filename_plus_path):
    async with aiofiles.open(filename_plus_path, "w") as f:
        await f.write(DATA)

async def main():
    coro = [asyncio.create_task(write_file(f"./file/text{i}.txt")) for i in range(8000)]
    await coro[-1]

st = time.time()
print("Started at:", time.strftime("%X"))
asyncio.run(main())
print("Finished at:", time.strftime("%X"))
res = time.time() - st
final_res = res * 1000
print('Execution time:', final_res, 'milliseconds')
</code></pre>
<p><code>Output:</code></p>
<pre><code>Started at: 16:15:39
Finished at: 16:15:44
Execution time: 4243.027448654175 milliseconds
</code></pre>
<p><code>Multithread Version</code></p>
<pre><code>import time
import threading

DATA = "A" * 100

def write_file(filename_plus_path):
    with open(filename_plus_path, "w") as f:
        f.write(DATA)

print("Started at:", time.strftime("%X"))
st = time.time()
thrs = [threading.Thread(target=write_file, args=(f"./file/text{i}.txt",)) for i in range(8000)]
for th in thrs:
    th.start()
for th in thrs:
    th.join()
# print("Finished at:", time.strftime("%X"))
res = time.time() - st
final_res = res * 1000
print('Execution time:', final_res, 'milliseconds')
print("Finished at:", time.strftime("%X"))
</code></pre>
<p><code>output:</code></p>
<pre><code>Started at: 16:16:57
Execution time: 3494.1318035125732 milliseconds
Finished at: 16:17:01
</code></pre>
|
<python><multithreading><python-asyncio>
|
2023-05-23 12:00:11
| 1
| 540
|
novice
|
76,314,511
| 2,072,457
|
s3fs.put into empty and non-empty S3 folder
|
<p>I am copying folder to <code>S3</code> with <code>s3fs.put(..., recursive=True)</code> and I experience weird behavior. The code is:</p>
<pre><code>import s3fs
source_path = 'foo/bar' # there are some <files and subfolders> inside
target_path = 'S3://my_bucket/baz'
s3 = s3fs.S3FileSystem(anon=False)
s3.put(source_path, target_path, recursive=True)
</code></pre>
<p>First time I run the command (files and S3 "folders" are created), results end up like this:</p>
<pre><code>S3://my_bucket/baz/<files and subfolders>
</code></pre>
<p>Second time I run the command, the result looks like this</p>
<pre><code>S3://my_bucket/baz/bar/<files and subfolders>
</code></pre>
<p>I can probably check existence of the "folders" before, but that does not solve the problem that I do not want to see <code>bar</code> in the resulting tree structure. I tried to append <code>'/'</code> to <code>target_path</code> in line with the documentation, but it did not have any effect. Is there a way to force <code>s3fs</code> behave same way regardless of existing data in S3?</p>
|
<python><amazon-s3><python-s3fs>
|
2023-05-23 11:59:50
| 1
| 969
|
Pepacz
|
76,314,456
| 20,220,485
|
How do you add a condition to a generator function based on the previous output?
|
<p>I am trying to reconcile <code>id_label_token</code>, which is a list of tuples containing a tokenized string, label, and character index, with the original string <code>string</code>.</p>
<p>I have some working code that uses a generator to do this. However, it can't handle instances where there is a label for a token between parentheses in the original string. I am new to generators, and I am finding it difficult to implement a condition that produces my desired output.</p>
<p>How could I check the previous <code>token_type</code> against the current <code>token_type</code> so that I don't skip assigning a <code>token_type</code> when a token is between parentheses?</p>
<p>Some help would be appreciated. And if this question requires different framing, please don't hesitate to say so.</p>
<p>Data:</p>
<pre><code>id_label_token = [(0, 'O', '('),
                  (1, 'DATE-B', '6'),
                  (2, 'DATE-I', ')'),
                  (4, 'DATE-I', '13th'),
                  (9, 'DATE-B', 'February'),
                  (18, 'DATE-I', '1942'),
                  (23, 'O', '('),
                  (24, 'GPE-B', 'N.S.'),
                  (28, 'O', ')')]

string = "(6) 13th February 1942 (N.S.)"
</code></pre>
<p>Current code:</p>
<pre><code>def get_tokens(tokens):
    it = iter(tokens)
    _, token_type, next_token = next(it)
    word = yield
    while True:
        if next_token == word:
            word = yield next_token, token_type
            _, token_type, next_token = next(it)
        else:
            _, _, tmp = next(it)
            next_token += tmp

it = get_tokens(id_label_token)
next(it)
out = [it.send(w) for w in string.split()]
print(out)
</code></pre>
</code></pre>
<p>Current output:</p>
<pre><code>[('(6)', 'O'), ('13th', 'DATE-I'), ('February', 'DATE-B'), ('1942', 'DATE-I'), ('(N.S.)', 'O')]
</code></pre>
<p>Desired output:</p>
<pre><code>[('(6)', 'DATE-B'), ('13th', 'DATE-I'), ('February', 'DATE-B'), ('1942', 'DATE-I'), ('(N.S.)', 'GPE-B')]
</code></pre>
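One minimal change that produces the desired output here (a sketch, not necessarily robust to other data): when merging a parenthesized run of tokens, keep the first non-<code>'O'</code> label seen instead of the label of the opening <code>'('</code>.

```python
id_label_token = [(0, 'O', '('), (1, 'DATE-B', '6'), (2, 'DATE-I', ')'),
                  (4, 'DATE-I', '13th'), (9, 'DATE-B', 'February'),
                  (18, 'DATE-I', '1942'), (23, 'O', '('),
                  (24, 'GPE-B', 'N.S.'), (28, 'O', ')')]
string = "(6) 13th February 1942 (N.S.)"

def get_tokens(tokens):
    it = iter(tokens)
    _, token_type, next_token = next(it)
    word = yield
    while True:
        if next_token == word:
            word = yield next_token, token_type
            _, token_type, next_token = next(it)
        else:
            _, tmp_type, tmp = next(it)
            if token_type == 'O':  # prefer a real label over 'O' when merging
                token_type = tmp_type
            next_token += tmp

it = get_tokens(id_label_token)
next(it)
out = [it.send(w) for w in string.split()]
```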
|
<python><string><iteration><generator><tokenize>
|
2023-05-23 11:54:27
| 2
| 344
|
doine
|
76,314,345
| 10,755,032
|
TypeError: Input has ['int', 'str'] as feature name / column name types
|
<p>I am trying to fit a ML model and I am getting the following error:</p>
<p><code>TypeError: Feature names are only supported if all input features have string names, but your input has ['int', 'str'] as feature name / column name types. If you want feature names to be stored and validated, you must convert them all to strings, by using X.columns = X.columns.astype(str) for example. Otherwise you can remove feature / column names from your input data, or convert them all to a non-string data type.</code></p>
<p>My code:</p>
<pre><code>import pandas as pd, numpy as np
import csv
import warnings
from bs4 import BeautifulSoup, MarkupResemblesLocatorWarning
from sklearn.impute import SimpleImputer
from sklearn.exceptions import ConvergenceWarning
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LinearRegression, LogisticRegression, Perceptron
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import mean_squared_error, r2_score, accuracy_score, confusion_matrix, ConfusionMatrixDisplay
import seaborn as sns
import matplotlib.pyplot as plt
## Reading the data
train_url = 'https://github.com/Rakesh9100/ML-Project-Drug-Review-Dataset/raw/main/datasets/drugsComTrain_raw.tsv'
test_url = 'https://github.com/Rakesh9100/ML-Project-Drug-Review-Dataset/raw/main/datasets/drugsComTest_raw.tsv'
dtypes = { 'Unnamed: 0': 'int32', 'drugName': 'category', 'condition': 'category', 'review': 'category', 'rating': 'float16', 'date': 'string', 'usefulCount': 'int16' }
train_df = pd.read_csv(train_url, sep='\t', quoting=2, dtype=dtypes, parse_dates=['date'])
train_df = train_df.sample(frac=0.8, random_state=42)
test_df = pd.read_csv(test_url, sep='\t', quoting=2, dtype=dtypes, parse_dates=['date'])
## Extracting day, month, and year into separate columns
for df in [train_df, test_df]:
    df['day'] = df['date'].dt.day.astype('int8')
    df['month'] = df['date'].dt.month.astype('int8')
    df['year'] = df['date'].dt.year.astype('int16')
## Suppressing MarkupResemblesLocatorWarning, FutureWarning and ConvergenceWarning
warnings.filterwarnings('ignore', category=MarkupResemblesLocatorWarning)
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.filterwarnings("ignore", category=ConvergenceWarning)
## Defining function to decode HTML-encoded characters
def decode_html(text):
    decoded_text = BeautifulSoup(text, 'html.parser').get_text()
    return decoded_text
## Applying the function to the review column
train_df['review'], test_df['review'] = train_df['review'].apply(decode_html), test_df['review'].apply(decode_html)
## Dropped the original date column and removed the useless column
train_df, test_df = [df.drop('date', axis=1).drop(df.columns[0], axis=1) for df in (train_df, test_df)]
## Handling the missing values
train_imp, test_imp = [pd.DataFrame(SimpleImputer(strategy='most_frequent').fit_transform(df)) for df in (train_df, test_df)]
## Assigning old column names
train_imp.columns = ['drugName', 'condition', 'review', 'rating', 'usefulCount', 'day', 'month', 'year']
test_imp.columns = ['drugName', 'condition', 'review', 'rating', 'usefulCount', 'day', 'month', 'year']
## Converting the text in the review column to numerical data
vectorizer = TfidfVectorizer(stop_words='english', max_features=3000)
train_reviews = vectorizer.fit_transform(train_imp['review'])
test_reviews = vectorizer.transform(test_imp['review'])
## Replacing the review column with the numerical data
train_imp.drop('review', axis=1, inplace=True)
test_imp.drop('review', axis=1, inplace=True)
train_imp = pd.concat([train_imp, pd.DataFrame(train_reviews.toarray())], axis=1)
test_imp = pd.concat([test_imp, pd.DataFrame(test_reviews.toarray())], axis=1)
## Encoding the categorical columns
for i in ["drugName", "condition"]:
    train_imp[i] = LabelEncoder().fit_transform(train_imp[i])
    test_imp[i] = LabelEncoder().fit_transform(test_imp[i])
## Converting the data types of columns to reduce the memory usage
train_imp, test_imp = train_imp.astype('float16'), test_imp.astype('float16')
train_imp[['drugName', 'condition', 'usefulCount', 'year']] = train_imp[['drugName', 'condition', 'usefulCount', 'year']].astype('int16')
test_imp[['drugName', 'condition', 'usefulCount', 'year']] = test_imp[['drugName', 'condition', 'usefulCount', 'year']].astype('int16')
train_imp[['rating']] = train_imp[['rating']].astype('float16')
test_imp[['rating']] = test_imp[['rating']].astype('float16')
train_imp[['day', 'month']] = train_imp[['day', 'month']].astype('int8')
test_imp[['day', 'month']] = test_imp[['day', 'month']].astype('int8')
#print(train_imp.iloc[:,:15].dtypes)
#print(test_imp.iloc[:,:15].dtypes)
## Splitting the train and test datasets into feature variables
X_train, Y_train = train_imp.drop('rating', axis=1), train_imp['rating']
X_test, Y_test = test_imp.drop('rating', axis=1), test_imp['rating']
##### LinearRegression regression algorithm #####
linear=LinearRegression()
linear.fit(X_train, Y_train)
line_train=linear.predict(X_train)
line_test=linear.predict(X_test)
</code></pre>
<p>The error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-23-e787292883ec> in <cell line: 89>()
87
88 linear=LinearRegression()
---> 89 linear.fit(X_train, Y_train)
90 line_train=linear.predict(X_train)
91 line_test=linear.predict(X_test)
3 frames
/usr/local/lib/python3.10/dist-packages/sklearn/utils/validation.py in _get_feature_names(X)
1901 # mixed type of string and non-string is not supported
1902 if len(types) > 1 and "str" in types:
-> 1903 raise TypeError(
1904 "Feature names are only supported if all input features have string names, "
1905 f"but your input has {types} as feature name / column name types. "
TypeError: Feature names are only supported if all input features have string names, but your input has ['int', 'str'] as feature name / column name types. If you want feature names to be stored and validated, you must convert them all to strings, by using X.columns = X.columns.astype(str) for example. Otherwise you can remove feature / column names from your input data, or convert them all to a non-string data type.
</code></pre>
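The mix comes from concatenating the original frame (string column names) with the TF-IDF frame (integer column names 0..2999). The fix the error message itself suggests is enough; a minimal sketch of the situation:

```python
import pandas as pd

left = pd.DataFrame({"usefulCount": [3, 5]})    # string column names
right = pd.DataFrame([[0.1, 0.2], [0.3, 0.4]])  # integer column names, like the TF-IDF frame
X = pd.concat([left, right], axis=1)            # columns now mix str and int

X.columns = X.columns.astype(str)               # apply this before calling .fit()
```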
|
<python><machine-learning><scikit-learn>
|
2023-05-23 11:40:06
| 1
| 1,753
|
Karthik Bhandary
|
76,314,312
| 774,575
|
Is it possible to use 'sharey=row' with 'subplot_mosaic'
|
<p>Is there a way to use <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplots.html" rel="nofollow noreferrer">sharey='row'</a> with <code>plt.subplot_mosaic()</code>? I know I could <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.sharey.html" rel="nofollow noreferrer">share the axis for a pair of subplots</a>, but that requires a call for each pair of subplots individually, whereas <code>sharey='row'</code> is a single call.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = [A * np.sinc(np.linspace(0, 10, 100)) for A in (1,2)]
nrows = 1
ncols = 2
fig_kw = dict(figsize=(ncols*3, nrows*1.8), layout='constrained')
mosaic = [['t1', 't2']]
fig, axs = plt.subplot_mosaic(mosaic=mosaic, **fig_kw)
axs['t1'].plot(x[0])
axs['t2'].plot(x[1])
</code></pre>
<p><a href="https://i.sstatic.net/RqptD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RqptD.png" alt="enter image description here" /></a></p>
<p>Using <code>fig_kw = dict(figsize=(ncols*3, nrows*1.8), sharey= 'row', layout='constrained')</code> results in:</p>
<blockquote>
<p><code>'sharey' must be an instance of bool, not a str</code></p>
</blockquote>
<p>I tried using <code>fig_kw</code>, but I got the same error.</p>
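The error suggests that <code>subplot_mosaic</code>'s <code>sharey</code> parameter only accepts a bool (shared across all axes). One workaround, sketched below: create the mosaic unshared, then join each row's axes with <code>Axes.sharey</code> (available since Matplotlib 3.3).

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt
import numpy as np

x = [A * np.sinc(np.linspace(0, 10, 100)) for A in (1, 2)]

fig, axs = plt.subplot_mosaic([['t1', 't2']], layout='constrained')
axs['t1'].plot(x[0])
axs['t2'].plot(x[1])
axs['t2'].sharey(axs['t1'])  # emulate sharey='row' for this one row
```

For larger mosaics, a loop over the rows of the mosaic list can join each row the same way.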
|
<python><matplotlib>
|
2023-05-23 11:36:36
| 0
| 7,768
|
mins
|
76,314,287
| 12,415,855
|
Selenium / click on "View More"-button not possible?
|
<p>I am trying to click the "View More" button on this site:
<a href="https://www.hublot.com/en-ch/find-your-hublot/big-bang" rel="nofollow noreferrer">https://www.hublot.com/en-ch/find-your-hublot/big-bang</a></p>
<p><a href="https://i.sstatic.net/x698v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x698v.png" alt="enter image description here" /></a></p>
<p>using the following code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager

if __name__ == '__main__':
    options = Options()
    options.add_argument("start-maximized")
    options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
    options.add_experimental_option("excludeSwitches", ["enable-automation"])
    options.add_experimental_option('excludeSwitches', ['enable-logging'])
    options.add_experimental_option('useAutomationExtension', False)
    options.add_argument('--disable-blink-features=AutomationControlled')

    srv = Service(ChromeDriverManager().install())
    driver = webdriver.Chrome(service=srv, options=options)
    waitWD = WebDriverWait(driver, 10)

    link = "https://www.hublot.com/en-ch/find-your-hublot/big-bang"
    driver.get(link)

    waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[@id="onetrust-accept-btn-handler"]'))).click()
    waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[@class="modal__header_btn js_country_pop_up_close"]'))).click()
    waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@class,"pl_section__view-more")]'))).click()
</code></pre>
<p>But i get this error:</p>
<pre><code>$ python temp.py
Traceback (most recent call last):
File "C:\DEV\Fiverr\TRY\Lawfulwizard\temp.py", line 24, in <module>
waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@class,"pl_section__view-more")]'))).click()
File "C:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\support\wait.py", line 95, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
</code></pre>
<p>Why does clicking this button not work, and how can I do it?</p>
|
<python><selenium-webdriver>
|
2023-05-23 11:34:39
| 1
| 1,515
|
Rapid1898
|
76,314,278
| 4,045,275
|
Why is a SQLite Select statement so much slower than numpy.select? Any way to speed it up?
|
<h3>What I am trying to do</h3>
<p>I am trying to export and import tables between <code>pandas</code> and a <code>sqlite</code> database. The reasons for doing this are a combination of:</p>
<ul>
<li>needing to store certain data in sqlite format, and</li>
<li>finding the sqlite syntax clearer and easier to read than vectorised numpy operations, when doing things like creating new variables based on nesting multiple if / case when statements</li>
</ul>
<h3>My questions</h3>
<p>Running a select statement with an in-memory sqlite database is quite slow - it is about 3-5 times slower than using <code>numpy.select</code>. My questions are:</p>
<ul>
<li>what drives such a huge difference?</li>
<li>I understand that <code>numpy.select</code> is vectorised, whereas for example <code>pandas.DataFrame.apply()</code> is not. But isn't sqlite written in C? I would have expected speeds comparable to those of numpy.</li>
<li>Is there anything I can do to speed up my select statement in sqlite? I have tried creating indices but that actually slows things down even more</li>
<li>Specifically, when the table is small (ca. < 1,000 rows) sqlite is faster, but once it reaches 100,000 rows sqlite becomes slower than <code>numpy.select</code>.</li>
<li>I am using SQLAlchemy because with the sqlite3 package I was having trouble exporting from sql back into pandas</li>
</ul>
<p>With 100,000 rows the results are:
<a href="https://i.sstatic.net/iSFVU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iSFVU.png" alt="enter image description here" /></a></p>
<h3>Minimal, reproducible example</h3>
<p>I have a toy example below. Please note I time the <code>select</code> statement separately from the importing/exporting of the table between pandas and sqlite.</p>
<pre><code>import numpy as np
import pandas as pd
from sqlalchemy.engine import create_engine
from sqlalchemy import text
import time
start = time.time()
time_df = pd.DataFrame()
time_df['start'] = [start]
rng = np.random.default_rng()
myrows = int(100e3)
mycols = 20
df = pd.DataFrame(data=rng.integers(low=0, high=100, size=(myrows, mycols)))
df['my field'] = np.arange(0,myrows)
df['y'] = df['my field']*2
df['city'] = np.tile(['Paris','New York'],int(myrows/2))
df['mydate'] = pd.to_datetime("15-Jan-2023")
time_df['df creation'] = [time.time()]
df_date_cols = [col for col in df.columns if df[col].dtype == 'datetime64[ns]']
engine = create_engine('sqlite:///:memory:', echo=False)
conn_sqla = engine.connect()
df.to_sql('df', conn_sqla)
time_df['export to sql'] = [time.time()]
values = {'myx':1}
# conn_sqla.execute(text("CREATE INDEX idx_city ON df(city)"))
# conn_sqla.execute(text("CREATE INDEX idx_myfield ON df([my field])"))
conn_sqla.execute(text("""
CREATE TABLE df2 as SELECT m.*
, case when city = 'New York' then 'NY'
when [my field] > 50 then 'not NY; > 50'
else 'not NY; <= 50'
end as [New field]
from df m
where [my field] > :myx
"""), values)
time_df['run SQL select'] = [time.time()]
df_from_sql = pd.read_sql("df2", conn_sqla, parse_dates=df_date_cols)
conn_sqla.close()
time_df['import from SQL'] = [time.time()]
df_from_np = df.query("`my field` > 1").copy(deep=True)
conditions = {'NY': df_from_np['city'].eq("New York"),
'not NY; > 50': df_from_np['my field'].gt(50)}
df_from_np['new field'] = np.select(conditions.values(), conditions.keys(), "not NY; <= 50")
time_df['np select'] = [time.time()]
def myfunc(city, value):
    if city == 'New York':
        return 'NY'
    elif value > 50:
        return 'not NY; > 50'
    else:
        return 'not NY; <= 50'
df_apply = df.query("`my field` > 1").copy(deep=True)
df_apply['new field'] = df_apply.apply(lambda x: myfunc(x['city'], x['my field']), axis =1 )
time_df['df.apply'] = [time.time()]
time_df = time_df.transpose()
time_df['seconds elapsed'] = np.hstack([0, np.diff(time_df[0])])
print(time_df)
</code></pre>
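<p>As a side note on the SQLAlchemy point: below is a minimal sketch (using a hypothetical three-row toy table, not the benchmark data) showing that a plain <code>sqlite3</code> connection also round-trips to pandas via <code>pd.read_sql_query</code>:</p>

```python
import sqlite3

import pandas as pd

# Toy data standing in for the benchmark table (an assumption for illustration).
conn = sqlite3.connect(":memory:")
df = pd.DataFrame({"my field": [1, 60, 10], "city": ["New York", "Paris", "Paris"]})
df.to_sql("df", conn, index=False)

# The same CASE WHEN logic, read straight back into a DataFrame.
out = pd.read_sql_query(
    """SELECT *,
              CASE WHEN city = 'New York' THEN 'NY'
                   WHEN [my field] > 50 THEN 'not NY; > 50'
                   ELSE 'not NY; <= 50'
              END AS [new field]
       FROM df""",
    conn,
)
print(out["new field"].tolist())  # ['NY', 'not NY; > 50', 'not NY; <= 50']
conn.close()
```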
|
<python><pandas><performance><sqlite><sqlalchemy>
|
2023-05-23 11:33:17
| 1
| 9,100
|
Pythonista anonymous
|
76,314,273
| 9,589,875
|
Is it possible to install a single submodule from Matplotlib?
|
<p>I am using the pyplot submodule from Matplotlib and then packaging my app into an installer. I want to limit the size of this installer; is it possible to install this submodule on its own, or a subset of Matplotlib? Currently I'm including the entire Matplotlib library in my installer, which seems very excessive and redundant.</p>
|
<python><matplotlib><pip><package>
|
2023-05-23 11:33:01
| 1
| 905
|
MJ_Wales
|
76,314,229
| 7,800,760
|
How to download spaCy models in a Poetry managed environment
|
<p>I am writing a Python Jupyter notebook that does some NLP processing on Italian texts.</p>
<p>I have installed spaCy 3.5.3 via Poetry and then attempt to run the following code:</p>
<pre class="lang-py prettyprint-override"><code>import spacy
load_model = spacy.load('it_core_news_sm')
</code></pre>
<p>The <code>import</code> line works as expected, but running <code>spacy.load</code> produces the following error:</p>
<blockquote>
<p>OSError: [E050] Can't find model 'it_core_news_sm'. It doesn't seem to be a Python package or a valid path to a data directory.
The model name is correct as shown on <a href="https://spacy.io/models/it" rel="noreferrer">https://spacy.io/models/it</a></p>
</blockquote>
<p>After a web search, I see that a solution is to issue the following command:</p>
<pre class="lang-none prettyprint-override"><code>python3 -m spacy download it_core_news_sm
</code></pre>
<p>After running this command the above code works as expected, however, is there a more 'kosher' way of doing this via Poetry?</p>
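<p>One approach sometimes used (a sketch only; the wheel URL and the model/spaCy versions below are assumptions that must be kept in sync with your installed spaCy) is to declare the model wheel as a regular Poetry dependency in <code>pyproject.toml</code>, so <code>poetry install</code> pulls it in:</p>

```toml
[tool.poetry.dependencies]
python = "^3.10"
spacy = "^3.5"
# Hypothetical pin: adjust the version in the URL to match your spaCy release
it_core_news_sm = { url = "https://github.com/explosion/spacy-models/releases/download/it_core_news_sm-3.5.0/it_core_news_sm-3.5.0-py3-none-any.whl" }
```

<p>This keeps the model inside the Poetry lock file instead of requiring a separate <code>spacy download</code> step after every environment rebuild.</p>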
|
<python><nlp><spacy><python-poetry><virtual-environment>
|
2023-05-23 11:29:10
| 1
| 1,231
|
Robert Alexander
|
76,314,184
| 8,671,089
|
Unable to connect to kafka running in docker container
|
<p>I am unable to connect to kafka running in container.</p>
<p>I have .env file</p>
<pre><code>KAFKA_BROKER_ID=1
KAFKA_ENABLE_KRAFT=true
KAFKA_CFG_PROCESS_ROLES=broker,controller
KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9094
ALLOW_PLAINTEXT_LISTENER=yes
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT
KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,EXTERNAL://:9093,CONTROLLER://:9094
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://localhost:9093
KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
KAFKA_CFG_BROKER_ID=1
KAFKA_CFG_NODE_ID=1
KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
KAFKA_CFG_MESSAGE_MAX_BYTES=5242880
KAFKA_CFG_MAX_REQUEST_SIZE=5242880
BITNAMI_DEBUG=true
KAFKA_CFG_DELETE_TOPIC_ENABLE=true
</code></pre>
<p>and I run Docker with
<code>docker run --name kafka-broker --env-file ./container-env.env -d bitnami/kafka:latest</code>.
I can see the Kafka server has started properly.</p>
<p>I am trying to create a Kafka topic but am not able to connect to Kafka.</p>
<pre><code># assuming confluent-kafka, whose AdminClient takes a conf dict
from confluent_kafka.admin import AdminClient, NewTopic

client = AdminClient(conf={"bootstrap.servers": "localhost:9093"})
topic_name = "test-topic"
new_topic = NewTopic(
    topic=topic_name,
    num_partitions=1,
    replication_factor=1,
)
client.create_topics(new_topics=[new_topic])

topic_exists = False
while not topic_exists:
    cluster_metadata = client.list_topics()
    topics = cluster_metadata.topics
    topic_exists = (new_topic.topic in topics.keys())
</code></pre>
<p>Am I missing an environment variable, or am I using the wrong ports here? Help would be appreciated.</p>
|
<python><docker><apache-kafka><kafka-python>
|
2023-05-23 11:23:29
| 0
| 683
|
Panda
|
76,314,158
| 11,357,695
|
Spyder Kernels, anaconda and Python 3.9
|
<p>--</p>
<p>Edit - pip uninstall not working</p>
<pre><code>pip uninstall primer3
WARNING: Ignoring invalid distribution -umpy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umexpr (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -iopython (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -illow (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -cipy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umpy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umexpr (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -iopython (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -illow (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -cipy (c:\anaconda3\lib\site-packages)
WARNING: Skipping primer3 as it is not installed.
</code></pre>
<p>--</p>
<p>I am looking to use some software that requires Python v>=3.9. I downloaded it as described <a href="https://realpython.com/installing-python/#how-to-install-from-the-full-installer" rel="nofollow noreferrer">here</a> via default options to <code>C:\Users\u03132tk\AppData\Local\Programs\Python\Python39\python.exe</code>.</p>
<p>My IDE is Spyder, originally installed at <code>C:\ANACONDA3\Lib\site-packages\spyder</code>. The original shortcut I clicked, <code>C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Anaconda3 (64-bit)\Spyder.lnk</code>, executed these files:</p>
<pre><code>C:\ANACONDA3\pythonw.exe
C:\ANACONDA3\cwp.py
C:\ANACONDA3
C:\ANACONDA3\pythonw.exe
C:\ANACONDA3\Scripts\spyder-script.py
</code></pre>
<p>As it was not the default interpreter for my Spyder version, I then downloaded Spyder version 5 via <a href="https://docs.spyder-ide.org/current/installation.html" rel="nofollow noreferrer">windows installer</a> in the hope that it would use Python version 3.9 (it uses 3.8 :( ). Spyder v5 is installed at <code>C:\Program Files\Spyder</code>, and the shortcut I use to open it (<code>C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Spyder.lnk</code>) executes these files:</p>
<pre><code>C:\Program Files\Spyder\Python\pythonw.exe
C:\Program Files\Spyder\Spyder.launch.pyw
</code></pre>
<p>After reading around I found out <a href="https://stackoverflow.com/a/45882065/11357695">how to change the Python interpreter</a>. My current problem is this error in the console:</p>
<pre><code>The Python environment or installation whose interpreter is located at
C:\Users\u03132tk\AppData\Local\Programs\Python\Python39\python.exe
doesn't have the spyder‑kernels module or the right version of it installed (>= 2.4.0 and < 2.5.0). Without this module is not possible for Spyder to create a console for you.
You can install it by activating your environment first (if necessary) and then running in a system terminal:
conda install spyder-kernels=2.4
or
pip install spyder-kernels==2.4.*
</code></pre>
<p>I did the pip option and it seemed to go OK, apart from these errors (the full output is given under 'text dump'):</p>
<pre><code>ERROR: spyder 4.1.5 requires pyqt5<5.13; python_version >= "3", which is not installed.
ERROR: spyder 4.1.5 requires pyqtwebengine<5.13; python_version >= "3", which is not installed.
ERROR: spyder 4.1.5 has requirement spyder-kernels<1.10.0,>=1.9.4, but you'll have spyder-kernels 2.4.3 which is incompatible.
ERROR: ipykernel 6.16.2 has requirement traitlets>=5.1.0, but you'll have traitlets 4.3.2 which is incompatible.
</code></pre>
<p>Given that <code>spyder-kernels-2.4.3</code> seemed to download OK, I thought it might be fine, but I get the same console error after restarting spyder. I have tried setting the preferred interpreter to <code>C:\Users\u03132tk\AppData\Local\Programs\Python\Python39\python.exe</code>, and I also tried copying the <code>Python39</code> directory into <code>C:\Program Files\Spyder</code> and setting the preferred interpreter to <code>C:\Program Files\Spyder\Python39\python.exe</code> - neither worked.</p>
<p>I am not sure of best practices with this kind of thing at all - could the issue be that I have two Spyder versions? I have folders that seem to indicate the spyder kernels installed OK for both the old (<code>C:\ANACONDA3\Lib\site-packages\spyder_kernels-2.4.3.dist-info</code>, <code>C:\ANACONDA3\Lib\site-packages\spyder_kernels</code>) and new (<code>C:\Program Files\Spyder\pkgs\spyder_kernels-2.4.3.dist-info</code>, <code>C:\Program Files\Spyder\pkgs\spyder_kernels</code>) spyder versions.</p>
<p>Also, if my latest install of Spyder is outside the Anaconda ecosystem, will that make it harder to use packages (or can I just Pip install them as needed still)?</p>
<p>When I leave python and venture into my actual computer I am a complete amateur, so any help would be much appreciated!</p>
<p>Thanks,
Tim</p>
<p>Text dump:</p>
<pre><code>C:\Users\u03132tk>pip install spyder-kernels==2.4.*
Collecting spyder-kernels==2.4.*
Downloading https://files.pythonhosted.org/packages/d8/63/2b2202362cca689c38f5a63268596f6fff14bdfb4a539103e4073b7064b0/spyder_kernels-2.4.3-py2.py3-none-any.whl (98kB)
|████████████████████████████████| 102kB 6.4MB/s
Collecting pyzmq>=22.1.0; python_version >= "3" (from spyder-kernels==2.4.*)
Downloading https://files.pythonhosted.org/packages/40/dd/1fe1f11eb03d177a141188c80d1134f64146db3c5cecbcf4054817a2f8af/pyzmq-25.0.2-cp37-cp37m-win_amd64.whl (1.2MB)
|████████████████████████████████| 1.2MB ...
Collecting jupyter-client<9,>=7.4.9; python_version >= "3" (from spyder-kernels==2.4.*)
Downloading https://files.pythonhosted.org/packages/fd/a7/ef3b7c8b9d6730a21febdd0809084e4cea6d2a7e43892436adecdd0acbd4/jupyter_client-7.4.9-py3-none-any.whl (133kB)
|████████████████████████████████| 143kB 6.4MB/s
Collecting ipykernel<7,>=6.16.1; python_version >= "3" (from spyder-kernels==2.4.*)
Downloading https://files.pythonhosted.org/packages/95/46/505364588f6145f5edd29c1506b1964dd397a668c49f8bb42deffb6a0168/ipykernel-6.16.2-py3-none-any.whl (138kB)
|████████████████████████████████| 143kB ...
Requirement already satisfied: cloudpickle in c:\anaconda3\lib\site-packages (from spyder-kernels==2.4.*) (1.2.1)
Collecting ipython!=8.10.0,!=8.8.0,!=8.9.0,<9,>=7.31.1; python_version >= "3" (from spyder-kernels==2.4.*)
Downloading https://files.pythonhosted.org/packages/7c/6a/1f1365f4bf9fcb349fcaa5b61edfcefa721aa13ff37c5631296b12fab8e5/ipython-7.34.0-py3-none-any.whl (793kB)
|████████████████████████████████| 798kB 6.4MB/s
Requirement already satisfied: traitlets in c:\anaconda3\lib\site-packages (from jupyter-client<9,>=7.4.9; python_version >= "3"->spyder-kernels==2.4.*) (4.3.2)
Collecting jupyter-core>=4.9.2 (from jupyter-client<9,>=7.4.9; python_version >= "3"->spyder-kernels==2.4.*)
Downloading https://files.pythonhosted.org/packages/02/8f/0e0ad6804c0a021a27410ec997626a7a955c20916454d0205f16eb83de4b/jupyter_core-4.12.0-py3-none-any.whl (89kB)
|████████████████████████████████| 92kB 5.8MB/s
Collecting python-dateutil>=2.8.2 (from jupyter-client<9,>=7.4.9; python_version >= "3"->spyder-kernels==2.4.*)
Downloading https://files.pythonhosted.org/packages/36/7a/87837f39d0296e723bb9b62bbb257d0355c7f6128853c78955f57342a56d/python_dateutil-2.8.2-py2.py3-none-any.whl (247kB)
|████████████████████████████████| 256kB ...
Requirement already satisfied: entrypoints in c:\anaconda3\lib\site-packages (from jupyter-client<9,>=7.4.9; python_version >= "3"->spyder-kernels==2.4.*) (0.3)
Collecting tornado>=6.2 (from jupyter-client<9,>=7.4.9; python_version >= "3"->spyder-kernels==2.4.*)
Downloading https://files.pythonhosted.org/packages/f3/9e/225a41452f2d9418d89be5e32cf824c84fe1e639d350d6e8d49db5b7f73a/tornado-6.2.tar.gz (504kB)
|████████████████████████████████| 512kB 6.4MB/s
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Collecting nest-asyncio>=1.5.4 (from jupyter-client<9,>=7.4.9; python_version >= "3"->spyder-kernels==2.4.*)
Downloading https://files.pythonhosted.org/packages/e9/1a/6dd9ec31cfdb34cef8fea0055b593ee779a6f63c8e8038ad90d71b7f53c0/nest_asyncio-1.5.6-py3-none-any.whl
Requirement already satisfied: packaging in c:\anaconda3\lib\site-packages (from ipykernel<7,>=6.16.1; python_version >= "3"->spyder-kernels==2.4.*) (19.0)
Collecting matplotlib-inline>=0.1 (from ipykernel<7,>=6.16.1; python_version >= "3"->spyder-kernels==2.4.*)
Downloading https://files.pythonhosted.org/packages/f2/51/c34d7a1d528efaae3d8ddb18ef45a41f284eacf9e514523b191b7d0872cc/matplotlib_inline-0.1.6-py3-none-any.whl
Collecting debugpy>=1.0 (from ipykernel<7,>=6.16.1; python_version >= "3"->spyder-kernels==2.4.*)
Downloading https://files.pythonhosted.org/packages/af/7e/61738aba4da0de74e797ca71013977ebd8d35f58b18e674da045efb30487/debugpy-1.6.7-cp37-cp37m-win_amd64.whl (4.8MB)
|████████████████████████████████| 4.8MB 6.8MB/s
Requirement already satisfied: psutil in c:\anaconda3\lib\site-packages (from ipykernel<7,>=6.16.1; python_version >= "3"->spyder-kernels==2.4.*) (5.6.3)
Requirement already satisfied: backcall in c:\anaconda3\lib\site-packages (from ipython!=8.10.0,!=8.8.0,!=8.9.0,<9,>=7.31.1; python_version >= "3"->spyder-kernels==2.4.*) (0.1.0)
Requirement already satisfied: pickleshare in c:\anaconda3\lib\site-packages (from ipython!=8.10.0,!=8.8.0,!=8.9.0,<9,>=7.31.1; python_version >= "3"->spyder-kernels==2.4.*) (0.7.5)
Requirement already satisfied: setuptools>=18.5 in c:\anaconda3\lib\site-packages (from ipython!=8.10.0,!=8.8.0,!=8.9.0,<9,>=7.31.1; python_version >= "3"->spyder-kernels==2.4.*) (41.0.1)
Requirement already satisfied: jedi>=0.16 in c:\anaconda3\lib\site-packages (from ipython!=8.10.0,!=8.8.0,!=8.9.0,<9,>=7.31.1; python_version >= "3"->spyder-kernels==2.4.*) (0.17.1)
Requirement already satisfied: decorator in c:\anaconda3\lib\site-packages (from ipython!=8.10.0,!=8.8.0,!=8.9.0,<9,>=7.31.1; python_version >= "3"->spyder-kernels==2.4.*) (4.4.0)
Requirement already satisfied: pygments in c:\anaconda3\lib\site-packages (from ipython!=8.10.0,!=8.8.0,!=8.9.0,<9,>=7.31.1; python_version >= "3"->spyder-kernels==2.4.*) (2.4.2)
Requirement already satisfied: colorama; sys_platform == "win32" in c:\anaconda3\lib\site-packages (from ipython!=8.10.0,!=8.8.0,!=8.9.0,<9,>=7.31.1; python_version >= "3"->spyder-kernels==2.4.*) (0.4.1)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in c:\anaconda3\lib\site-packages (from ipython!=8.10.0,!=8.8.0,!=8.9.0,<9,>=7.31.1; python_version >= "3"->spyder-kernels==2.4.*) (2.0.9)
Requirement already satisfied: ipython-genutils in c:\anaconda3\lib\site-packages (from traitlets->jupyter-client<9,>=7.4.9; python_version >= "3"->spyder-kernels==2.4.*) (0.2.0)
Requirement already satisfied: six in c:\anaconda3\lib\site-packages (from traitlets->jupyter-client<9,>=7.4.9; python_version >= "3"->spyder-kernels==2.4.*) (1.12.0)
Requirement already satisfied: pywin32>=1.0; sys_platform == "win32" and platform_python_implementation != "PyPy" in c:\anaconda3\lib\site-packages (from jupyter-core>=4.9.2->jupyter-client<9,>=7.4.9; python_version >= "3"->spyder-kernels==2.4.*) (223)
Requirement already satisfied: pyparsing>=2.0.2 in c:\anaconda3\lib\site-packages (from packaging->ipykernel<7,>=6.16.1; python_version >= "3"->spyder-kernels==2.4.*) (2.4.0)
Requirement already satisfied: parso<0.8.0,>=0.7.0 in c:\anaconda3\lib\site-packages (from jedi>=0.16->ipython!=8.10.0,!=8.8.0,!=8.9.0,<9,>=7.31.1; python_version >= "3"->spyder-kernels==2.4.*) (0.7.0)
Requirement already satisfied: wcwidth in c:\anaconda3\lib\site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython!=8.10.0,!=8.8.0,!=8.9.0,<9,>=7.31.1; python_version >= "3"->spyder-kernels==2.4.*) (0.1.7)
Building wheels for collected packages: tornado
Building wheel for tornado (PEP 517) ... done
Stored in directory: C:\Users\u03132tk\AppData\Local\pip\Cache\wheels\4c\e2\a2\2a4d8b655b5018edd788430e998bfb2e15a2b7876dbee210e6
Successfully built tornado
ERROR: spyder 4.1.5 requires pyqt5<5.13; python_version >= "3", which is not installed.
ERROR: spyder 4.1.5 requires pyqtwebengine<5.13; python_version >= "3", which is not installed.
ERROR: spyder 4.1.5 has requirement spyder-kernels<1.10.0,>=1.9.4, but you'll have spyder-kernels 2.4.3 which is incompatible.
ERROR: ipykernel 6.16.2 has requirement traitlets>=5.1.0, but you'll have traitlets 4.3.2 which is incompatible.
Installing collected packages: pyzmq, jupyter-core, python-dateutil, tornado, nest-asyncio, jupyter-client, matplotlib-inline, debugpy, ipython, ipykernel, spyder-kernels
Found existing installation: pyzmq 18.0.0
Uninstalling pyzmq-18.0.0:
Successfully uninstalled pyzmq-18.0.0
Found existing installation: jupyter-core 4.6.3
Uninstalling jupyter-core-4.6.3:
Successfully uninstalled jupyter-core-4.6.3
Found existing installation: python-dateutil 2.8.0
Uninstalling python-dateutil-2.8.0:
Successfully uninstalled python-dateutil-2.8.0
Found existing installation: tornado 6.0.3
Uninstalling tornado-6.0.3:
Successfully uninstalled tornado-6.0.3
Found existing installation: jupyter-client 6.1.7
Uninstalling jupyter-client-6.1.7:
Successfully uninstalled jupyter-client-6.1.7
Found existing installation: ipython 7.6.1
Uninstalling ipython-7.6.1:
Successfully uninstalled ipython-7.6.1
Found existing installation: ipykernel 5.3.4
Uninstalling ipykernel-5.3.4:
Successfully uninstalled ipykernel-5.3.4
Found existing installation: spyder-kernels 1.9.4
Uninstalling spyder-kernels-1.9.4:
Successfully uninstalled spyder-kernels-1.9.4
Successfully installed debugpy-1.6.7 ipykernel-6.16.2 ipython-7.34.0 jupyter-client-7.4.9 jupyter-core-4.12.0 matplotlib-inline-0.1.6 nest-asyncio-1.5.6 python-dateutil-2.8.2 pyzmq-25.0.2 spyder-kernels-2.4.3 tornado-6.2
</code></pre>
|
<python><python-3.x><installation><anaconda><spyder>
|
2023-05-23 11:20:20
| 1
| 756
|
Tim Kirkwood
|
76,313,775
| 7,505,228
|
Combine more than two dict at once (summing the values that appear in more than one dict)
|
<p>Inspired by <a href="https://stackoverflow.com/questions/11011756/is-there-any-pythonic-way-to-combine-two-dicts-adding-values-for-keys-that-appe">this question</a></p>
<p>I have an arbitrary number of dictionaries (coming from a generator)</p>
<pre><code>a = {"a": 1, "b": 2, "c": 3}
b = {"c": 1, "d": 1}
c = {"a": 2, "b": 2}
...
</code></pre>
<p>I want to have a final dictionary that contains the following values for each key:</p>
<ul>
<li>If the key appears only in one dictionary, keep this value</li>
<li>If the key appears in multiple dictionaries, the final value is the sum of the values in the individual dicts.</li>
</ul>
<p>In my example, the result would be <code>{"a": 3, "b": 4, "c": 4, "d": 1}</code></p>
<p>Based on the answer of the question linked above, I can use <code>collections.Counter</code> when having a set number of dictionaries, like this:</p>
<pre><code>from collections import Counter
dict(Counter(a) + Counter(b) + Counter(c))
</code></pre>
<p>However, the number of dictionaries I have can be very large; is there any smart one-liner (or close) I can use to get this "sum" I am interested in?</p>
<p>Sadly, using <code>sum(Counter(d) for d in (a,b,c))</code> raises a <code>TypeError: unsupported operand type(s) for +: 'int' and 'Counter'</code></p>
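<p>For reference, the <code>TypeError</code> above comes from <code>sum()</code> starting at <code>0</code>; a sketch of one way around it is to pass an empty <code>Counter</code> as the start value:</p>

```python
from collections import Counter

a = {"a": 1, "b": 2, "c": 3}
b = {"c": 1, "d": 1}
c = {"a": 2, "b": 2}

# sum() defaults to start=0, so 0 + Counter fails; give it Counter() instead.
total = sum(map(Counter, (a, b, c)), Counter())
print(dict(total))  # {'a': 3, 'b': 4, 'c': 4, 'd': 1}
```

<p>Note that <code>Counter</code> addition drops keys whose summed value is zero or negative, which matters if the dicts can contain non-positive numbers.</p>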
|
<python><dictionary>
|
2023-05-23 10:33:19
| 3
| 2,289
|
LoicM
|
76,313,685
| 10,755,032
|
Python Pandas dataframe - KeyError: 'date'
|
<p>I have looked this one up: <a href="https://stackoverflow.com/questions/52341766/keyerror-date">KeyError: 'Date'</a> and this one as well: <a href="https://stackoverflow.com/questions/62889178/pandas-dataframe-keyerror-date">Pandas DataFrame - KeyError: 'date'</a> it did not help. I am getting KeyError: 'date' with no explanation.</p>
<p>Here is my code:</p>
<pre><code>import pandas as pd, numpy as np
import csv
import warnings
from bs4 import BeautifulSoup, MarkupResemblesLocatorWarning
from sklearn.impute import SimpleImputer
from sklearn.exceptions import ConvergenceWarning
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LinearRegression, LogisticRegression, Perceptron
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import mean_squared_error, r2_score, accuracy_score, confusion_matrix, ConfusionMatrixDisplay
import seaborn as sns
import matplotlib.pyplot as plt
## Reading the data
dtypes = { 'Unnamed: 0': 'int32', 'drugName': 'category', 'condition': 'category', 'review': 'category', 'rating': 'float16', 'date': 'category', 'usefulCount': 'int16' }
train_df = pd.read_csv('/content/drugsComTrain_raw.tsv', sep='\t', quoting=2, dtype=dtypes)
# Randomly selecting 80% of the data from the training dataset
train_df = train_df.sample(frac=0.8, random_state=42)
test_df = pd.read_csv('/content/drugsComTest_raw.tsv', sep='\t', quoting=2, dtype=dtypes)
print(train_df.head())
## Converting date column to datetime format
train_df['date'], test_df['date'] = pd.to_datetime(train_df['date'], format='%b %d, %Y'), pd.to_datetime(test_df['date'], format='%b %d, %Y')  # This is the line where I'm getting the error.
</code></pre>
<p>The last line is where Im getting the error</p>
<p>Error:</p>
<pre><code>KeyError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
3801 try:
-> 3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
4 frames
/usr/local/lib/python3.10/dist-packages/pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
/usr/local/lib/python3.10/dist-packages/pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'date'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
<ipython-input-17-056c9fab2e6c> in <cell line: 24>()
22 print(train_df.head())
23 ## Converting date column to datetime format
---> 24 train_df['date'], test_df['date'] = pd.to_datetime(train_df['date'], format='%b %d, %Y'), pd.to_datetime(test_df['date'], format='%b %d, %Y')
25
26 ## Extracting day, month, and year into separate columns
/usr/local/lib/python3.10/dist-packages/pandas/core/frame.py in __getitem__(self, key)
3805 if self.columns.nlevels > 1:
3806 return self._getitem_multilevel(key)
-> 3807 indexer = self.columns.get_loc(key)
3808 if is_integer(indexer):
3809 indexer = [indexer]
/usr/local/lib/python3.10/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
-> 3804 raise KeyError(key) from err
3805 except TypeError:
3806 # If we have a listlike key, _check_indexing_error will raise
KeyError: 'date'
</code></pre>
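<p>A common cause of this error (shown here as a sketch with hypothetical data, not the actual drug-review file) is stray whitespace in the header row, which makes the real column name <code>' date '</code> rather than <code>'date'</code>:</p>

```python
import io

import pandas as pd

# Hypothetical TSV whose header contains whitespace around "date".
raw = "drugName\tcondition\t date \nA\tB\tMay 01, 2020\n"
df = pd.read_csv(io.StringIO(raw), sep="\t")
print(list(df.columns))  # ['drugName', 'condition', ' date '] -> df['date'] raises KeyError

# Stripping the headers makes the lookup work again.
df.columns = df.columns.str.strip()
df["date"] = pd.to_datetime(df["date"], format="%b %d, %Y")
print(df["date"].iloc[0])  # 2020-05-01 00:00:00
```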
|
<python><pandas><dataframe><date>
|
2023-05-23 10:24:50
| 2
| 1,753
|
Karthik Bhandary
|
76,313,592
| 8,932,411
|
import langchain => Error : TypeError: issubclass() arg 1 must be a class
|
<p>I want to use langchain for my project.</p>
<p>so I installed it using the following command: <code>pip install langchain</code></p>
<p>but While importing "langchain" I am facing following Error:</p>
<pre><code>File /usr/lib/python3.8/typing.py:774, in _GenericAlias.__subclasscheck__(self, cls)
772 if self._special:
773 if not isinstance(cls, _GenericAlias):
--> 774 return issubclass(cls, self.__origin__)
775 if cls._special:
776 return issubclass(cls.__origin__, self.__origin__)
TypeError: issubclass() arg 1 must be a class
</code></pre>
<p>Can anyone help me solve this error?</p>
|
<python><nlp><data-science><chatbot><langchain>
|
2023-05-23 10:09:47
| 7
| 764
|
M. D. P
|
76,313,575
| 3,165,683
|
Widgets not rendering in Voila
|
<p>I have code which executes widgets fine in JupyterLab but does not render the widgets in Voila. See the image for the example from the Voila GitHub.
<a href="https://i.sstatic.net/1ISdd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1ISdd.png" alt="enter image description here" /></a></p>
<p>Python version 3.10.9</p>
<p>voila 0.3.6</p>
<p>jupyterlab 3.5.3</p>
<p>ipywidgets 7.6.5</p>
|
<python><jupyter-notebook><ipywidgets><voila>
|
2023-05-23 10:07:17
| 0
| 377
|
user3165683
|
76,313,341
| 5,462,551
|
Is it possible to use pytest fixture that returns a list as parameters list of a test?
|
<p>Consider the following code example, which is an abstraction of real code that I have:</p>
<pre class="lang-py prettyprint-override"><code>import os
import pytest
class Files:
    def __init__(self):
        self.file1 = "file1"
        self.file2 = "file2"
        self.file3 = "file3"
        self.file4 = "file4"
        self.file5 = "file5"


class MyRunner:
    def __init__(self, files_obj):
        self.files = files_obj


def get_files_to_copy(files_obj: Files):
    return [files_obj.file1, files_obj.file2, files_obj.file3]


@pytest.fixture(scope="class")
def runner():
    """ long time to run... """
    files_obj = Files()
    _runner = MyRunner(files_obj)
    return _runner


@pytest.fixture(scope="class")
def must_exist_files_list(runner):
    return get_files_to_copy(runner.files)


@pytest.mark.parametrize("file", must_exist_files_list)
def test_must_exist_files(file):
    assert os.path.exists(file), f"{file} does not exist"
</code></pre>
<p>My general idea is this: the <code>runner</code> fixture takes time to initialize, and is used for other tests (not displayed here). I need to use the <code>runner</code>'s <code>files</code> attribute to obtain a list of files for the <code>test_must_exist_files</code> test. I thought I could create a fixture that returns a list (<code>must_exist_files_list</code>) and use it as the parameter list for the aforementioned test, but it returns an error:</p>
<pre class="lang-none prettyprint-override"><code>TypeError: 'function' object is not iterable
</code></pre>
<p>Is there a way to still do it similarly? Otherwise, what's the best way to do this?</p>
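<p>For context, <code>parametrize</code> arguments are evaluated at collection time, before any fixture runs, which is why passing the fixture function fails. One workaround (a sketch with hypothetical stand-ins for the expensive runner) is to build the list with a plain function at import time:</p>

```python
import pytest

# Plain function instead of a fixture: parametrize needs a real iterable
# at collection time (hypothetical file names for illustration).
def get_files_to_copy():
    return ["file1", "file2", "file3"]

@pytest.mark.parametrize("file", get_files_to_copy())
def test_must_exist_files(file):
    # The real assertion would be os.path.exists(file)
    assert isinstance(file, str)
```

<p>If the list genuinely depends on the slow <code>runner</code> fixture, an alternative is a single test that loops over <code>must_exist_files_list</code>, at the cost of one combined test instead of one test per file.</p>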
|
<python><pytest>
|
2023-05-23 09:42:34
| 1
| 4,161
|
noamgot
|
76,313,229
| 11,155,419
|
Mock a function before importing a module
|
<p>I have an Airflow DAG along these lines:</p>
<pre><code># my_dag.py
from my_module import my_function
MY_VAR = 'Hello World'
with DAG(
schedule_interval=my_function(),
...
):
....
</code></pre>
<p>and then I have a test that imports from <code>my_dag</code>:</p>
<pre><code># test_my_dag.py
from my_dag import MY_VAR
...
</code></pre>
<hr />
<p>Problem is that <code>my_function()</code> is supposed to initiate a connection, so I would like to mock it: importing <code>MY_VAR</code> in the test loads the DAG file <code>my_dag.py</code>, which in turn runs the unmocked version of <code>my_function</code>.</p>
<p>How can I potentially mock <code>my_function</code> before importing from <code>my_dag</code> in tests?</p>
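<p>One pattern that works here (a sketch; the module names mirror the question and the "@daily" return value is an assumption) is to register a stub <code>my_module</code> in <code>sys.modules</code> before the first <code>import my_dag</code>, so the DAG file binds the mocked <code>my_function</code>:</p>

```python
import sys
import types
from unittest import mock

# Stand-in module registered before my_dag is ever imported.
fake = types.ModuleType("my_module")
fake.my_function = mock.Mock(return_value="@daily")
sys.modules["my_module"] = fake

# From here on, `from my_dag import MY_VAR` would execute the DAG file
# against the mocked my_function instead of opening a real connection.
print(fake.my_function())  # @daily
```

<p>The key point is ordering: the stub must be in <code>sys.modules</code> before <code>my_dag</code> is imported for the first time, since Python caches modules after the first import.</p>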
|
<python><airflow><pytest>
|
2023-05-23 09:29:40
| 2
| 843
|
Tokyo
|
76,313,015
| 4,495,790
|
How to interpret sklearn's NearestNeighbors' kneighbors on another array?
|
<p>I used to find nearest neighbours in data array <code>X</code> with <code>sklearn.neighbors.NearestNeighbors</code> like this:</p>
<pre><code>from sklearn.neighbors import NearestNeighbors
import numpy as np
nn = NearestNeighbors(n_neighbors=2).fit(X)
distances, indices = nn.kneighbors(X)
</code></pre>
<p>In this case, I get the two closest neighbours in <code>X</code> for each row of the same <code>X</code>; that's straightforward. However, I'm confused about applying the fitted <code>nn.kneighbors</code> to another array <code>Y</code>:</p>
<pre><code>distances, indices = nn.kneighbors(Y)
</code></pre>
<p>This gives 2 neighbours for each row in <code>Y</code>, but how should this be interpreted? Are the found neighbours from <code>X</code> or <code>Y</code>?
And how should I interpret <code>kneighbors([X, n_neighbors, return_distance])</code> on <a href="https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html" rel="nofollow noreferrer">Sklearn's documentation page</a>; what is <code>n_neighbors</code> here?</p>
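<p>For what it's worth, here is a tiny sketch (toy 1-D data, not from the question) illustrating that the returned indices always point into the fitted array <code>X</code>:</p>

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.array([[0.0], [1.0], [2.0]])   # fitted data
Y = np.array([[0.9]])                 # query points

nn = NearestNeighbors(n_neighbors=2).fit(X)
distances, indices = nn.kneighbors(Y)

# Indices are row numbers of X: 1.0 (row 1) is closest to 0.9, then 0.0 (row 0).
print(indices)  # [[1 0]]
```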
|
<python><scikit-learn><nearest-neighbor>
|
2023-05-23 09:05:36
| 1
| 459
|
Fredrik
|
76,312,965
| 17,659,993
|
PyInstaller app with cefpython throws ERROR:icu_util.cc(133)] Invalid file descriptor to ICU data receiv
|
<p>I've been developing a <strong>Tkinter</strong> GUI application with <strong>CEFPython</strong> for browser integration. My application runs smoothly when executed as a Python script. However, I've been running into issues when trying to compile it into a standalone executable using <strong>PyInstaller</strong>.</p>
<p>Here's the PyInstaller command I've been using: <code>pyinstaller -i assets/icon.ico -F -w bot.py</code></p>
<p>After running this command, PyInstaller successfully creates an executable in the dist directory. When I run the executable, however, I receive an error message in the debug.log file located in the same directory as the executable. The error message is as follows: <code>ERROR:icu_util.cc(133)] Invalid file descriptor to ICU data received</code>.</p>
<p>I've tried researching solutions online, but so far I've come up empty. I'd greatly appreciate any advice or suggestions on how to resolve this issue.</p>
|
<python><tkinter><pyinstaller><cefpython>
|
2023-05-23 09:00:52
| 1
| 333
|
Cassano
|
76,312,923
| 8,087,322
|
Include variable content inline in documentation
|
<p>I want to have something in my document like</p>
<blockquote>
<p>This module uses the constant of 150000 m/s as the speed of light.</p>
</blockquote>
<p>where the 150000 was generated programmatically from the module. This would reduce the maintenance of my documentation, since often only these numbers get updated.</p>
<p>For blocks, there is the <a href="https://stackoverflow.com/questions/7250659"><code>.. exec::</code> directive</a>; but is there something similar to do this inline?</p>
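One possible route (a sketch, assuming you can edit <code>conf.py</code> and that the constant is importable there; <code>SPEED_OF_LIGHT</code> is a placeholder name) is to generate a reStructuredText substitution in Sphinx's <code>rst_epilog</code>, which is appended to every source file, so the substitution works inline:

```python
# conf.py (sketch): build an inline substitution from the module's constant.
SPEED_OF_LIGHT = 150000  # in practice: from mymodule import SPEED_OF_LIGHT

rst_epilog = f"""
.. |speed_of_light| replace:: {SPEED_OF_LIGHT}
"""

# In any .rst file you can then write, inline:
#   This module uses the constant of |speed_of_light| m/s as the speed of light.
print(rst_epilog)
```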
|
<python><python-sphinx><restructuredtext>
|
2023-05-23 08:56:32
| 0
| 593
|
olebole
|
76,312,879
| 1,497,720
|
Avoid changing powershell script directory when running python virtualenv
|
<p>for the powershell script below</p>
<pre><code>function jl1 {
cd C:\Users\MyUser
Envs\huggingface\Scripts\activate
cd "D:\Working"
jupyter lab
}
Set-Alias jl jl1
</code></pre>
<p>I always run <code>jl</code> on my active directory</p>
<pre><code>D:\Working123 > jl
</code></pre>
<p>Now since I do a</p>
<pre><code>cd C:\Users\MyUser
Envs\huggingface\Scripts\activate
</code></pre>
<p>to activate my Python virtual environment "huggingface", I need to explicitly set the working directory back from <code>D:\Working</code> to <code>D:\Working123</code> afterwards.</p>
<p>Is there any way to activate the virtual environment without first changing the directory to <code>C:\Users\MyUser</code>?</p>
|
<python><python-3.x><powershell><virtualenv>
|
2023-05-23 08:51:24
| 1
| 18,765
|
william007
|
76,312,844
| 1,021,819
|
How can I exit Dask cleanly?
|
<p>I am starting Dask with a containerized LocalCluster but on closing the cluster and client I usually (but intermittently) receive a diverse range of exceptions - see for example the one below.</p>
<p>The cleanup code is:</p>
<pre class="lang-py prettyprint-override"><code>cluster.close()
client.close()
</code></pre>
<p>Is there an <code>Exception</code>-free way to close the Dask cluster, followed by closing the client(*)?</p>
<p>No solution I have found has resolved the issue for me. Surely it must be possible to exit without an <code>Exception</code>?</p>
<p>I would prefer a route that avoids use of the <code>with</code> statement, because the clean-up operation is embedded in a third-party class. If a context manager is the sole way to go, is it possible to call the relevant context-manager methods directly, without the <code>with</code>?</p>
<ul>
<li>PS Do I have it right that the cluster should be closed before the client (since the latter would presumably be required to achieve the former)? (Opinion seems to differ on the matter.)</li>
</ul>
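On the side question of calling a context manager without <code>with</code>: the protocol is just <code>__enter__</code>/<code>__exit__</code>, and <code>contextlib.ExitStack</code> wraps it cleanly. A stdlib-only sketch with dummy stand-ins for the cluster and client (so it says nothing about Dask's own teardown behaviour, only about the mechanics):

```python
from contextlib import ExitStack

closed = []

class Dummy:
    """Stand-in for a cluster/client supporting the context-manager protocol."""
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        closed.append(self.name)
        return False

# enter contexts without a `with` statement ...
stack = ExitStack()
cluster = stack.enter_context(Dummy("cluster"))
client = stack.enter_context(Dummy("client"))

# ... and tear them down explicitly; exits run in reverse order of entry,
# so the client is closed before the cluster
stack.close()
print(closed)
```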
<pre class="lang-py prettyprint-override"><code>2023-05-22 20:59:26,890 - distributed.client - ERROR -
ConnectionRefusedError: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/pip_packages/distributed/comm/core.py", line 291, in connect
comm = await asyncio.wait_for(
File "/usr/local/lib/python3.9/asyncio/tasks.py", line 481, in wait_for
return fut.result()
File "/pip_packages/distributed/comm/tcp.py", line 511, in connect
convert_stream_closed_error(self, e)
File "/pip_packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
distributed.comm.core.CommClosedError: in <distributed.comm.tcp.TCPConnector object at 0x40deb132b0>: ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/pip_packages/distributed/utils.py", line 742, in wrapper
return await func(*args, **kwargs)
File "/pip_packages/distributed/client.py", line 1298, in _reconnect
await self._ensure_connected(timeout=timeout)
File "/pip_packages/distributed/client.py", line 1328, in _ensure_connected
comm = await connect(
File "/pip_packages/distributed/comm/core.py", line 315, in connect
await asyncio.sleep(backoff)
File "/usr/local/lib/python3.9/asyncio/tasks.py", line 655, in sleep
return await future
asyncio.exceptions.CancelledError
2023-05-22 20:59:26 : ERROR : __exit__ : 768 :
Traceback (most recent call last):
File "/pip_packages/distributed/comm/tcp.py", line 225, in read
frames_nbytes = await stream.read_bytes(fmt_size)
tornado.iostream.StreamClosedError: Stream is closed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/pip_packages/distributed/client.py", line 1500, in _handle_report
msgs = await self.scheduler_comm.comm.read()
File "/pip_packages/distributed/comm/tcp.py", line 241, in read
convert_stream_closed_error(self, e)
File "/pip_packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Client->Scheduler local=tcp://127.0.0.1:49712 remote=tcp://127.0.0.1:43205>: Stream is closed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/pip_packages/distributed/utils.py", line 742, in wrapper
return await func(*args, **kwargs)
File "/pip_packages/distributed/client.py", line 1508, in _handle_report
await self._reconnect()
File "/pip_packages/distributed/utils.py", line 742, in wrapper
return await func(*args, **kwargs)
File "/pip_packages/distributed/client.py", line 1298, in _reconnect
await self._ensure_connected(timeout=timeout)
File "/pip_packages/distributed/client.py", line 1328, in _ensure_connected
comm = await connect(
File "/pip_packages/distributed/comm/core.py", line 315, in connect
await asyncio.sleep(backoff)
File "/usr/local/lib/python3.9/asyncio/tasks.py", line 655, in sleep
return await future
asyncio.exceptions.CancelledError
</code></pre>
<p>Thanks as ever!</p>
|
<python><dask><dask-distributed><contextmanager>
|
2023-05-23 08:45:41
| 1
| 8,527
|
jtlz2
|
76,312,641
| 6,145,828
|
Apache Beam: merge branches after write outputs
|
<p>I am trying to write an Apache Beam pipeline that divides into three branches, where each branch writes into BigQuery, and the branches then merge into one to write another BigQuery table for logging.</p>
<p>I am unable to merge the branches, here is the code:</p>
<pre><code>pipeline_options = PipelineOptions(None)
p = beam.Pipeline(options=pipeline_options)
ingest_data = (
p
| 'Start Pipeline' >> beam.Create([None])
)
p1 = (ingest_data | 'Read from and to date 1' >> beam.ParDo(OutputValueProviderFn('table1'))
| 'fetch API data 1' >> beam.ParDo(get_api_data())
| 'write into gbq 1' >> beam.io.gcp.bigquery.WriteToBigQuery(table='proj.dataset.table1',
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
custom_gcs_temp_location='gs://project/temp')
)
p2 = (ingest_data | 'Read from and to date 2' >> beam.ParDo(OutputValueProviderFn('table2'))
| 'fetch API data 2' >> beam.ParDo(get_api_data())
| 'write into gbq 2' >> beam.io.gcp.bigquery.WriteToBigQuery(table='proj.dataset.table2',
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
custom_gcs_temp_location='gs://proj/temp')
)
p3 = (ingest_data | 'Read from and to date 3' >> beam.ParDo(OutputValueProviderFn('table3'))
| 'fetch API data 3' >> beam.ParDo(get_api_data())
| 'write into gbq 3' >> beam.io.gcp.bigquery.WriteToBigQuery(table='proj.dataset.table3',
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
custom_gcs_temp_location='gs://proj/temp')
)
# Here I would like to merge the three branches into one: This doesn't work
merge = ((p1, p2, p3) | 'Write Log' >> beam.io.gcp.bigquery.WriteToBigQuery(table='proj.dataset.table_logging',
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
custom_gcs_temp_location='gs://proj/temp')
)
</code></pre>
<p>This causes the following error:</p>
<pre><code>AttributeError: Error trying to access nonexistent attribute `0` in write result. Please see __documentation__ for available attributes.
</code></pre>
<p>I don't care about the output of the three branches, I need to merge them just to be sure that the three previous writes are completed.</p>
<p>Beam inspector graph looks like this:</p>
<p><a href="https://i.sstatic.net/8bfEp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8bfEp.png" alt="enter image description here" /></a></p>
|
<python><google-cloud-dataflow><apache-beam>
|
2023-05-23 08:21:31
| 1
| 830
|
Francesco Pegoraro
|
76,312,424
| 19,950,360
|
latest version pyarrow can't serialize and deserialize? (module 'pyarrow' has no attribute 'serialize')
|
<p>I want to save my BigQuery table data to GCS as an Arrow file (<code>.arrow</code>):</p>
<pre class="lang-py prettyprint-override"><code>import pyarrow as pa
query = f"""
SELECT * FROM `{table_path}.{table_id}`
"""
query_results = b_client.query(query).result()
table = query_results.to_arrow()
serialized_table = pa.serialize(table).to_buffer().to_pybytes()
</code></pre>
<p>This is my original method but after upgrading my pyarrow version I get error for <code>serialize()</code>:</p>
<pre><code>AttributeError: module 'pyarrow' has no attribute 'serialize'
</code></pre>
<p>How can I resolve this?</p>
<p>Also, my Arrow file in GCS has 130,000 rows and 30 columns, and the <code>.arrow</code> file is 60 MB. When I receive a request I return data read from GCS, but it is too slow: it takes about 30 seconds.
How can I make it faster, e.g. around 10 seconds?</p>
|
<python><google-cloud-platform><serialization><google-cloud-storage><pyarrow>
|
2023-05-23 07:54:02
| 1
| 315
|
lima
|
76,312,410
| 18,551,983
|
How to generate a PSSM matrix using PSI BLAST from BioPython
|
<p>Is there any way to generate a PSSM matrix from PSI-BLAST using the Python package Biopython? I have 8,000 sequences in a .fasta file, and each sequence is also quite long.</p>
<p>I am using this below code:</p>
<pre><code>for fasta in files:
alignment = AlignIO.read(fasta, "fasta")
summary_align = AlignInfo.SummaryInfo(alignment)
consensus = summary_align.dumb_consensus()
my_pssm = summary_align.pos_specific_score_matrix(consensus, chars_to_ignore = ['N', '-'])
file_pssm = fasta+"pssm"
with open(file_pssm) as f:
f.write(my_pssm)
</code></pre>
<p>Is there a better way to do it? The matrix produced consists of 0s and 1s only, but I need actual PSSM scoring values (in normalized form).</p>
|
<python><python-3.x><biopython><ncbi>
|
2023-05-23 07:52:28
| 1
| 343
|
Noorulain Islam
|
76,312,254
| 2,132,157
|
How can I have a list of all the file used by a Jinja2 template?
|
<p>I have a jinja2 template that uses other templates with <code>include</code> and <code>extends</code> statements. I would like to retrieve a list of all the templates that are involved in the construction of the template.</p>
<pre><code>import jinja2
environment = jinja2.Environment()
template = environment.get_template("mynestedtemplate.html")
template.render(vartest="Hello")
</code></pre>
<p>I see only a function for listing the blocks:</p>
<pre><code>jinja2Template.blocks
</code></pre>
<p>Is there a way to list the templates used?</p>
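One possible approach (a sketch using <code>jinja2.meta.find_referenced_templates</code>, which reports the template names appearing in <code>include</code>/<code>extends</code>/<code>import</code> nodes of a parsed source) is to walk the references recursively. <code>DictLoader</code> is used here only to keep the example self-contained; with a file-based loader the same code applies:

```python
from jinja2 import DictLoader, Environment, meta

templates = {
    "base.html": "<html>{% block body %}{% endblock %}</html>",
    "snippet.html": "snippet",
    "mynestedtemplate.html": "{% extends 'base.html' %}{% include 'snippet.html' %}",
}
env = Environment(loader=DictLoader(templates))

def referenced_templates(env, name, seen=None):
    """Recursively collect every template pulled in via include/extends/import."""
    seen = set() if seen is None else seen
    source = env.loader.get_source(env, name)[0]
    for ref in meta.find_referenced_templates(env.parse(source)):
        if ref is not None and ref not in seen:  # ref is None for dynamic names
            seen.add(ref)
            referenced_templates(env, ref, seen)
    return seen

print(referenced_templates(env, "mynestedtemplate.html"))
```

Note that templates included via a runtime expression (e.g. <code>{% include some_var %}</code>) cannot be resolved statically; <code>find_referenced_templates</code> yields <code>None</code> for those.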
|
<python><templates><jinja2>
|
2023-05-23 07:31:45
| 1
| 22,734
|
G M
|
76,312,142
| 11,426,624
|
groupby with dictionary comprehension
|
<p>I have a dataframe</p>
<pre><code>df = pd.DataFrame({'id':[1,2,3,1, 1], 'time_stamp_date':['12','12', '12', '14', '14'], 'sth':['col1','col1', 'col2','col2', 'col3']})
   id time_stamp_date   sth
0 1 12 col1
1 2 12 col1
2 3 12 col2
3 1 14 col2
4 1 14 col3
</code></pre>
<p>and I would like to get the following dataframe. So for each column in <code>sht_list</code> (here <code>['col1', 'col2', 'col3', 'col4']</code>) I would like to check whether it appears in <code>sth</code> for a specific <code>id</code> and <code>time_stamp_date</code>.</p>
<pre><code>id time_stamp_date sth col1 col2 col3 col4
0 1 12 col1 1 0 0 0
1 2 12 col1 1 0 0 0
2 3 12 col2 0 1 0 0
3 1 14 col2 0 1 1 0
4 1 14 col3 0 1 1 0
</code></pre>
<p>I can do it like this</p>
<pre><code>df_out = df.assign(**{col: df.groupby(['id','time_stamp_date']).sth.transform(
lambda x: (x==col).any()).astype(int)
for col in sht_list})
</code></pre>
<p>but I would like to use only groupby (without transform and then having to use drop_duplicates()),
but the below doesn't work because all the resulting aggregations would be named the same.
The error that I get is <code>SpecificationError: Function names must be unique, found multiple named <lambda></code>.</p>
<pre><code>df_out = df.groupby(['id','time_stamp_date'])[['sth']].agg({lambda x: (x==col).any().astype(int)
for col in sht_list})
</code></pre>
<p>Is it possible to make the above code work?</p>
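This does not rescue the agg-of-lambdas version, but a different technique sidesteps the naming problem entirely (a sketch, assuming <code>sht_list = ['col1', 'col2', 'col3', 'col4']</code>): one-hot encode <code>sth</code>, take a single per-group <code>max()</code>, and join the flags back onto the rows:

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3, 1, 1],
                   'time_stamp_date': ['12', '12', '12', '14', '14'],
                   'sth': ['col1', 'col1', 'col2', 'col2', 'col3']})
sht_list = ['col1', 'col2', 'col3', 'col4']

# one-hot encode sth; reindex adds columns (like col4) that never occur
dummies = (pd.get_dummies(df['sth'])
             .reindex(columns=sht_list, fill_value=0)
             .astype(int))

# one aggregation per group, then broadcast back onto the rows via join
flags = dummies.groupby([df['id'], df['time_stamp_date']]).max()
df_out = df.join(flags, on=['id', 'time_stamp_date'])
print(df_out)
```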
|
<python><pandas><dataframe><group-by><aggregation>
|
2023-05-23 07:13:38
| 3
| 734
|
corianne1234
|
76,311,912
| 364,088
|
How to install wheel while using a python virtual environment?
|
<p>I followed instructions to install a wheel while my virtualenv was active. When I went to use it, I found it wasn't available; but when I deactivated the venv, it was. So it appears that although the venv was active during the install, the package was installed into 'plain old Python'.</p>
<p>How can I install a wheel and have it installed into the venv ?</p>
<p>The instructions I used are <a href="https://github.com/cloudflare/stpyv8#installing" rel="nofollow noreferrer">here</a> and this is the transcript of what I did ...</p>
<pre><code>(venv) glaucon@rome:/tmp/stpyv8$ ll
total 18380
drwxrwxr-x 2 glaucon glaucon 4096 May 23 17:49 ./
drwxrwxrwt 42 root root 16384 May 23 17:49 ../
-rw-rw-r-- 1 glaucon glaucon 18797973 May 23 17:49 stpyv8-ubuntu-20.04-python-3.9.zip
(venv) glaucon@rome:/tmp/stpyv8$ unzip stpyv8-ubuntu-20.04-python-3.9.zip
Archive: stpyv8-ubuntu-20.04-python-3.9.zip
inflating: stpyv8-ubuntu-20.04-3.9/icudtl.dat
inflating: stpyv8-ubuntu-20.04-3.9/stpyv8-11.3.244.11-cp39-cp39-linux_x86_64.whl
(venv) glaucon@rome:/tmp/stpyv8$ cd stpyv8-ubuntu-20.04-3.9/
(venv) glaucon@rome:/tmp/stpyv8/stpyv8-ubuntu-20.04-3.9$ ll
total 24384
drwxrwxr-x 2 glaucon glaucon 4096 May 23 17:50 ./
drwxrwxr-x 3 glaucon glaucon 4096 May 23 17:50 ../
-rw-r--r-- 1 glaucon glaucon 10542048 May 17 15:27 icudtl.dat
-rw-r--r-- 1 glaucon glaucon 14416203 May 17 15:27 stpyv8-11.3.244.11-cp39-cp39-linux_x86_64.whl
(venv) glaucon@rome:/tmp/stpyv8/stpyv8-ubuntu-20.04-3.9$ popd
/usr/share ~/dev/orientation-stpyv8-sandbox/stpyv8
(venv) glaucon@rome:/usr/share$ cd stpyv8/
(venv) 1 glaucon@rome:/usr/share/stpyv8$ sudo cp -v /tmp/stpyv8/stpyv8-ubuntu-20.04-3.9/icudtl.dat .
'/tmp/stpyv8/stpyv8-ubuntu-20.04-3.9/icudtl.dat' -> './icudtl.dat'
(venv) glaucon@rome:/usr/share/stpyv8$ ll
total 10312
drwxr-xr-x 2 root root 4096 May 23 17:51 ./
drwxr-xr-x 277 root root 12288 May 23 17:46 ../
-rw-r--r-- 1 root root 10542048 May 23 17:51 icudtl.dat
(venv) glaucon@rome:/usr/share/stpyv8$ popd
~/dev/orientation-stpyv8-sandbox/stpyv8
(venv) glaucon@rome:~/dev/orientation-stpyv8-sandbox/stpyv8$ sudo pip install --upgrade /tmp/stpyv8/stpyv8-ubuntu-20.04-3.9/stpyv8-11.3.244.11-cp39-cp39-linux_x86_64.whl
Processing /tmp/stpyv8/stpyv8-ubuntu-20.04-3.9/stpyv8-11.3.244.11-cp39-cp39-linux_x86_64.whl
Installing collected packages: stpyv8
Successfully installed stpyv8-11.3.244.11
</code></pre>
<p>When I then try to use it it's available without the venv being active ...</p>
<pre><code>glaucon@rome:~/dev/orientation-stpyv8-sandbox$ python3
Python 3.9.16 (main, Dec 7 2022, 01:11:51)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import STPyV8
>>> exit()
</code></pre>
<p>... but not with the venv being active ...</p>
<pre><code>glaucon@rome:~/dev/orientation-stpyv8-sandbox$ . ./venv/bin/activate
(venv) glaucon@rome:~/dev/orientation-stpyv8-sandbox$ python
Python 3.9.16 (main, Dec 7 2022, 01:11:51)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import STPyV8
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'STPyV8'
>>> exit()
(venv) glaucon@rome:~/dev/orientation-stpyv8-sandbox$
</code></pre>
<p>I presume there is a way to install wheels into venvs? Can anyone tell me how, please?</p>
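Worth noting (as a likely cause rather than a certainty): <code>sudo pip install</code> runs root's system pip with a reset environment, which ignores the active venv, so the wheel lands in the system site-packages. Installing with the venv's own interpreter, <code>python -m pip install &lt;wheel&gt;</code> while the venv is active and without <code>sudo</code>, targets the venv instead. A quick stdlib check of which environment the current interpreter belongs to:

```python
import sys

# Inside an active virtual environment sys.prefix points into the venv,
# while sys.base_prefix points at the underlying system installation.
in_venv = sys.prefix != sys.base_prefix
print(sys.executable)
print("running inside a venv:", in_venv)
```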
|
<python><ubuntu><virtualenv><python-3.9><python-wheel>
|
2023-05-23 06:40:10
| 1
| 8,432
|
shearichard
|
76,311,807
| 1,167,194
|
AttributeError: 'Adam' object has no attribute 'build' during unpickling
|
<p>I'm training a Keras model and saving it for later use using pickle.</p>
<p>When I unpickle I get this error:</p>
<p><code>AttributeError: 'Adam' object has no attribute 'build'</code></p>
<p>Here's the code:</p>
<pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import pickle
model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(3,)))
model.add(Dense(1, activation='linear')) # Use linear activation for regression
model.compile(loss='mean_squared_error', optimizer='adam')
pickle.dump(model, open("m.pkl", 'wb'))
loadedModel = pickle.load(open("m.pkl", 'rb'))
</code></pre>
<p>I get this error with TensorFlow 2.11.x and 2.13.0-rc0 on MacOS M1</p>
|
<python><tensorflow><keras><pickle>
|
2023-05-23 06:25:40
| 2
| 3,659
|
ColBeseder
|
76,311,676
| 1,804,024
|
Access gRPC python server with "localhost" security in WSL2 docker from windows
|
<p>The issue is this. I am getting a zip file made by pyinstaller. Inside the file there is a setting making the server listen only to localhost <code>server.add_insecure_port(f'localhost:{port_num}')</code>. I want to run this python file from a centos docker. Docker is running on WSL2.</p>
<p>So far I have been able to run the docker container, but the issue is as follows: I can make grpcurl calls to localhost:port from WSL2 (in my case Ubuntu). But when I try to make those calls from the Windows cmd I get the following error:</p>
<p><code>Failed to dial target host "localhost:9111": dial tcp [::1]:9111: connectex: No connection could be made because the target machine actively refused it.</code>
Or this <code>Failed to dial target host "127.0.0.1:9111": dial tcp 127.0.0.1:9111: connectex: No connection could be made because the target machine actively refused it.</code></p>
<p>I have started the docker container using <code>--network="host"</code> hoping it would help, but it only helps Ubuntu to make grpcurl calls, not Windows.</p>
<p>Any help would be appreciated. Note that I cannot change localhost to 0.0.0.0...</p>
|
<python><docker><grpc><windows-subsystem-for-linux>
|
2023-05-23 06:05:21
| 1
| 457
|
Andrey Dobrikov
|
76,311,461
| 11,098,908
|
Incorrect value returned by tkinter method canvas.winfo_width
|
<p>I've just started learning how to create a game with <code>tkinter</code>. I found it so difficult to find information for all the available methods in the module. This <a href="https://tcl.tk/man/tcl8.6/" rel="nofollow noreferrer">website</a> seemed comprehensive but was difficult to navigate to find relevant info for a specific method (for example <code>winfo_width</code>). I then tried to find the purpose of the method in question (<code>winfo_width</code>) by running an example. However, the returned value didn't make sense as shown below</p>
<pre><code>>>> import tkinter as tk
>>> root = tk.Tk()
>>> canvas = tk.Canvas(root, width=300, height=200)
>>> canvas.pack()
>>> rect = canvas.create_rectangle(50, 50, 150, 150, fill="red")
>>> print(canvas.winfo_width())
304 # should be 300, as specified in line 3
</code></pre>
<p>Could you please explain why the returned value was 304 instead of 300, and what is the best resource to find info for all available methods in the <code>tkinter</code>?</p>
|
<python><tkinter>
|
2023-05-23 05:16:33
| 1
| 1,306
|
Nemo
|
76,311,436
| 2,966,197
|
How does Llamaindex elasticsearch vector work
|
<p>I am building an app to use Opensearch as vecotr store with Llamaindex using <a href="https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/OpensearchDemo.html" rel="nofollow noreferrer">this</a> example. Here is the code I have:</p>
<pre><code> endpoint = getenv("OPENSEARCH_ENDPOINT", "http://localhost:9200")
idx = getenv("OPENSEARCH_INDEX", "gpt-index-demo")
UnstructuredReader = download_loader("UnstructuredReader")
loader = UnstructuredReader()
documents = loader.load_data(file=Path(file_name))
# OpensearchVectorClient stores text in this field by default
text_field = "content"
# OpensearchVectorClient stores embeddings in this field by default
embedding_field = "embedding"
# OpensearchVectorClient encapsulates logic for a
# single opensearch index with vector search enabled
client = OpensearchVectorClient(endpoint, idx, 1536, embedding_field=embedding_field, text_field=text_field)
# initialize vector store
vector_store = OpensearchVectorStore(client)
# initialize an index using our sample data and the client we just created
index = GPTVectorStoreIndex.from_documents(documents=documents)
</code></pre>
<p>The confusion I am having is that when we create <code>OpensearchVectorClient</code>, we are not passing any <code>documents</code>, and then when we initialize the index, we do not pass <code>vector_store</code>, nor is there an eligible field to pass this, judging by the definition of <code>GPTVectorStoreIndex.from_documents()</code>. So, how does the document get to the Elasticsearch index?</p>
|
<python><elasticsearch><amazon-opensearch><llama-index>
|
2023-05-23 05:10:20
| 1
| 3,003
|
user2966197
|
76,311,313
| 1,040,718
|
django: request.POST is empty
|
<p>I have the following rest API endpoint:</p>
<pre><code>def post(request, *args, **kwargs):
print(request.POST)
short_description = request.POST.get("short_description", "")
long_description = request.POST.get("long_description", "")
# rest of the code goes here
</code></pre>
<p>when I call</p>
<pre><code>response = client.post("/foo/bar/api/v1/error/1", {"short_description": "hello", "long_description": "world", format='json')
</code></pre>
<p>it gives me</p>
<p><code><QueryDict: {}></code></p>
<p>so both <code>short_description</code> and <code>long_description</code> is empty strings. How can I get that to pass the correct parameters in the POST?</p>
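A possible explanation (hedged, since it depends on how the test client encodes the request): with <code>format='json'</code> the payload is sent as a JSON request body, and <code>request.POST</code> only holds form-encoded data, so it stays empty; the data lives in <code>request.body</code> (or <code>request.data</code> with a DRF view). A stdlib sketch of decoding such a body:

```python
import json

# what a JSON-encoded request body looks like on the server side
body = b'{"short_description": "hello", "long_description": "world"}'

# equivalent of json.loads(request.body) inside the view
payload = json.loads(body)
print(payload["short_description"])
print(payload["long_description"])
```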
|
<python><django><rest><django-rest-framework><django-views>
|
2023-05-23 04:32:20
| 2
| 11,011
|
cybertextron
|
76,311,170
| 188,331
|
Compute corpus-level BLEU score for translations in Python via SacreBLEU
|
<p>I have more than 100K pairs of the parallel corpus. Samples:</p>
<pre><code>[
["How are you doing today", "comment allez-vous aujourd'hui"],
["Look out! He is a thief", "Chercher! C'est un voleur"],
...(and a lot more pairs of English-French translations)
]
</code></pre>
<p>From <code>evaluate</code> Python library, the sample code is as follow:</p>
<pre><code>import evaluate
predictions = ["hello there general kenobi", "foo bar foobar"]
references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]]
sacrebleu = evaluate.load("sacrebleu")
results = sacrebleu.compute(predictions=predictions, references=references)
print(results["score"])
</code></pre>
<p>which will print <code>100.0000004</code>, since there is an exact match of the predictions from the references.</p>
<p>I would like to obtain the corpus-level BLEU score of the above parallel dataset, in order to assess the quality of the translations. How can I adjust the code to apply it to the dataset? Thanks.</p>
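A sketch of reshaping the parallel corpus into the shapes <code>sacrebleu.compute</code> expects (<code>predictions</code>: one hypothesis per segment; <code>references</code>: a list of reference lists). Note the corpus pairs are source/target; the predictions must come from your own translation system, represented here by a hypothetical <code>translate</code> function:

```python
pairs = [
    ["How are you doing today", "comment allez-vous aujourd'hui"],
    ["Look out! He is a thief", "Chercher! C'est un voleur"],
]

def translate(text):
    """Hypothetical stand-in for the MT system being evaluated."""
    return text  # replace with the real model call

sources = [en for en, fr in pairs]
references = [[fr] for en, fr in pairs]      # one reference list per segment
predictions = [translate(src) for src in sources]

# then, exactly as in the snippet above:
# results = sacrebleu.compute(predictions=predictions, references=references)
print(len(predictions), len(references))
```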
|
<python><bleu>
|
2023-05-23 03:44:40
| 1
| 54,395
|
Raptor
|
76,311,127
| 6,824,121
|
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position
|
<p>I saw here <a href="https://stackoverflow.com/a/35651859/6824121">https://stackoverflow.com/a/35651859/6824121</a> that you can launch python script directly in your terminal in <strong>windows</strong> like this:</p>
<pre><code>> python -c exec("""import sys \nfor r in range(10): print('rob') """)
</code></pre>
<p>Which works perfectly.</p>
<p>I tried to launch this command:</p>
<pre><code>> python -c exec("""test = r'C:\Users\alexa\Downloads\_internal'""")
</code></pre>
<p>And I got error:</p>
<pre><code>File "<string>", line 1
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 11-12: truncated \UXXXXXXXX escape
</code></pre>
<p>As in this question <a href="https://stackoverflow.com/questions/1347791/unicode-error-unicodeescape-codec-cant-decode-bytes-cannot-open-text-file">"Unicode Error "unicodeescape" codec can't decode bytes... Cannot open text files in Python 3</a></p>
<p>But I used a <code>raw string</code>, so I don't know why I still get the error.</p>
<p>What am I missing here?</p>
<p>I'm using <code>Python 3.6.8</code>, even though I don't think this is related to the Python version.</p>
|
<python><windows><unicode>
|
2023-05-23 03:30:19
| 1
| 1,736
|
Lenny4
|
76,310,993
| 17,560,347
|
Pickling tuple of ndarrays which share memory consumes double space
|
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pickle
a = np.random.rand(4000)
b = a.T
assert np.shares_memory(a, b)
tup = (a, b)
s = pickle.dumps(tup)
print(len(s)) # 64187
s1 = pickle.dumps(a)
print(len(s1)) # 32151
</code></pre>
<p>Since <code>a</code> and <code>b</code> share the same memory, when pickling them, the result should be close to the result of pickling one of them.</p>
<p>Is there a way to achieve it?</p>
|
<python><numpy><serialization><pickle>
|
2023-05-23 02:44:18
| 0
| 561
|
吴慈霆
|
76,310,974
| 4,399,016
|
Extracting a CSV file from XML Response
|
<p>I have this code that returns an XML response.</p>
<pre><code>import requests
url = "https://www-genesis.destatis.de/genesisWS/web/ExportService_2010?method=TabellenExport&kennung=DEB924AL95&passwort=P@ssword123&name=42151-0002&bereich=Alle&format=csv&strukturinformation=false&komprimieren=false&transponieren=true&startjahr=&endjahr=&zeitscheiben=&regionalmerkmal=&regionalschluessel=&sachmerkmal=FAMSTD&sachschluessel=VERH&sachmerkmal2=&sachschluessel2=&sachmerkmal3=&sachschluessel3=&stand=&auftrag=false&sprache=en"
payload = "<soapenv:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\r\nxmlns:xsd=\"http://www.w3.org/2001/XMLSchema\"\r\nxmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\"\r\nxmlns:web=\"http://webservice_2010.genesis\">\r\n <soapenv:Header/>\r\n <soapenv:Body>\r\n <web:TabellenExport soapenv:encodingStyle=\"http://schemas.xmlsoap.org/soap/encoding/\">\r\n <kennung xsi:type=\"xsd:string\">DEB924AL95</kennung>\r\n <passwort xsi:type=\"xsd:string\">P@ssword123</passwort>\r\n <namen xsi:type=\"xsd:string\">42151-0002</namen>\r\n <bereich xsi:type=\"xsd:string\">alle</bereich>\r\n <format xsi:type=\"xsd:string\">csv</format>\r\n <strukturinformation xsi:type=\"xsd:boolean\">false</strukturinformation>\r\n <komprimieren xsi:type=\"xsd:boolean\">false</komprimieren>\r\n <transponieren xsi:type=\"xsd:boolean\">false</transponieren>\r\n\r\n <startjahr xsi:type=\"xsd:string\"></startjahr>\r\n <endjahr xsi:type=\"xsd:string\"></endjahr>\r\n <zeitscheiben xsi:type=\"xsd:string\"></zeitscheiben>\r\n <regionalmerkmal xsi:type=\"xsd:string\"></regionalmerkmal>\r\n <regionalschluessel xsi:type=\"xsd:string\"></regionalschluessel>\r\n <sachmerkmal xsi:type=\"xsd:string\">FAMSTD</sachmerkmal>\r\n <sachschluessel xsi:type=\"xsd:string\">VERH</sachschluessel>\r\n <sachmerkmal2 xsi:type=\"xsd:string\"></sachmerkmal2>\r\n <sachschluessel2 xsi:type=\"xsd:string\"></sachschluessel2>\r\n <sachmerkmal3 xsi:type=\"xsd:string\"></sachmerkmal3>\r\n <sachschluessel3 xsi:type=\"xsd:string\"></sachschluessel3>\r\n <stand xsi:type=\"xsd:string\"></stand>\r\n <auftrag xsi:type=\"xsd:boolean\">false</auftrag>\r\n <sprache xsi:type=\"xsd:string\">de</sprache>\r\n </web:TabellenExport>\r\n </soapenv:Body>\r\n</soapenv:Envelope>\r\n\r\n"
headers = {
'Content-Type': 'application/xml'
}
response = requests.request("GET", url, headers=headers, data=payload)
print(response.text)
</code></pre>
<p>There is CSV data within the response got. It is within the tag:</p>
<pre><code><tabellenDaten>
</code></pre>
<p>How can I extract this table and make a pandas DataFrame from it?</p>
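A sketch with a made-up miniature response (the real SOAP payload carries namespaces, which the <code>tag.endswith</code> match sidesteps; the <code>;</code> separator is an assumption about the Destatis CSV format — in the real code, pass <code>response.text</code> instead of <code>xml_text</code>):

```python
import io
import xml.etree.ElementTree as ET
import pandas as pd

# miniature stand-in for the SOAP response body
xml_text = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <tabellenDaten>a;b
1;2
3;4</tabellenDaten>
</soapenv:Envelope>"""

root = ET.fromstring(xml_text)
# find the element regardless of any namespace prefix on its tag
node = next(el for el in root.iter() if el.tag.endswith("tabellenDaten"))

df = pd.read_csv(io.StringIO(node.text), sep=";")
print(df)
```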
|
<python><pandas><csv><request><xml-parsing>
|
2023-05-23 02:36:38
| 2
| 680
|
prashanth manohar
|
76,310,847
| 815,653
|
Issues with "!pip install PyDither" on Google colab
|
<p>When I ran "!pip install PyDither" on Google Colab, I got the following error message. How can I fix it?</p>
<pre><code>Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting PyDither
Downloading PyDither-0.0.1.tar.gz (2.2 kB)
Preparing metadata (setup.py) ... done
Collecting numpy==1.19.0 (from PyDither)
Downloading numpy-1.19.0.zip (7.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.3/7.3 MB 93.7 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: opencv-python in /usr/local/lib/python3.10/dist-packages (from PyDither) (4.7.0.72)
INFO: pip is looking at multiple versions of opencv-python to determine which version is compatible with other requirements. This could take a while.
Collecting opencv-python (from PyDither)
Downloading opencv_python-4.7.0.68-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (61.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.8/61.8 MB 9.8 MB/s eta 0:00:00
Downloading opencv_python-4.6.0.66-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.9/60.9 MB 13.7 MB/s eta 0:00:00
Downloading opencv_python-4.5.5.64-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.5/60.5 MB 10.1 MB/s eta 0:00:00
Downloading opencv_python-4.5.5.62-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.4/60.4 MB 13.2 MB/s eta 0:00:00
Downloading opencv_python-4.5.4.60-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.3/60.3 MB 9.2 MB/s eta 0:00:00
Downloading opencv_python-4.5.4.58-cp310-cp310-manylinux2014_x86_64.whl (60.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.3/60.3 MB 10.9 MB/s eta 0:00:00
Downloading opencv-python-4.5.3.56.tar.gz (89.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 89.2/89.2 MB 7.7 MB/s eta 0:00:00
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
|
<python><opencv><pip>
|
2023-05-23 01:53:22
| 1
| 10,344
|
zell
|
76,310,718
| 3,427,777
|
pandas: mark duplicate rows using subset of MultiIndex levels, not columns
|
<p>I have a <code>df</code> with a many-leveled <code>MultiIndex</code>. Early on I need to mark certain rows to keep; in subsequent sorting and processing these rows will always be kept.</p>
<p>I have working code, but it's not very attractive and I'm wondering if there's a prettier / more efficient way to do it.</p>
<p>Given a <code>df</code> with a 3+ level <code>MultiIndex</code> and an arbitrary number of columns, I run this code to check for duplicates in the first 2 levels of the <code>MultiIndex</code>, and mark the first occurrence as the keeper:</p>
<pre><code>df['keeper'] = df.index.isin(df.assign(check=df.index.get_level_values(0), check2=df.index.get_level_values(1)).drop_duplicates(subset=['check', 'check2']).index)
</code></pre>
<p>Here's a toy <code>df</code> with resultant <code>keeper</code> col:</p>
<pre><code> 0 keeper
lev0 lev1 lev2
1 1 1 0.696469 True
2 NaN False
2 3 0.719469 True
2 0.980764 False
3 1 NaN True
</code></pre>
<p>I tried <code>reset_index</code> but in the end I need the MultiIndex to remain unchanged, and moving those levels to columns only to have to re-create the very large MultiIndex again afterwards seemed even less efficient than what I have.</p>
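<p>Update: a sketch of an alternative I'm considering (toy data reconstructed from the table above, values approximate), which builds a throwaway <code>MultiIndex</code> from just the first two levels and uses <code>Index.duplicated</code>, leaving the real index untouched:</p>

```python
import numpy as np
import pandas as pd

# Toy frame approximating the example above (values are placeholders)
idx = pd.MultiIndex.from_tuples(
    [(1, 1, 1), (1, 1, 2), (1, 2, 3), (1, 2, 2), (1, 3, 1)],
    names=["lev0", "lev1", "lev2"],
)
df = pd.DataFrame({0: [0.70, np.nan, 0.72, 0.98, np.nan]}, index=idx)

# Build a temporary MultiIndex from only the first two levels; duplicated()
# with keep="first" marks every repeat, so its negation flags the keepers.
subset = pd.MultiIndex.from_arrays(
    [df.index.get_level_values(0), df.index.get_level_values(1)]
)
df["keeper"] = ~subset.duplicated(keep="first")
print(df["keeper"].tolist())  # [True, False, True, False, True]
```

<p>This avoids the <code>assign</code>/<code>drop_duplicates</code> round trip, though I haven't benchmarked it on the full-size index.</p>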
|
<python><pandas>
|
2023-05-23 01:07:25
| 2
| 22,862
|
fantabolous
|
76,310,696
| 2,745,148
|
Django staticfiles.W004 warning when using os.path.join
|
<p>I'm getting this warning:</p>
<blockquote>
<p>?: (staticfiles.W004) The directory
'/home/user/Desktop/Projects/project/project/project/static' in the
STATICFILES_DIRS setting does not exist.</p>
</blockquote>
<p>But only when I use this definition for the staticfiles dir</p>
<pre><code>BASE_DIR = Path(__file__).resolve().parent.parent
STATICFILES_DIRS = [os.path.join(BASE_DIR, '/static')]
</code></pre>
<p>If I use this instead</p>
<pre><code>STATICFILES_DIRS = ['/home/user/Desktop/Projects/project/project/project/static']
</code></pre>
<p>I have no issue.</p>
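<p>For reference, I dug into how <code>os.path.join</code> handles absolute components; this minimal sketch (using <code>posixpath</code> so it behaves the same on any OS) seems to explain the warning:</p>

```python
import posixpath  # the POSIX flavour of os.path, for a platform-independent demo

base = "/home/user/Desktop/Projects/project/project/project"

# join() discards every component before one that is itself absolute,
# so the leading slash in '/static' throws BASE_DIR away entirely:
assert posixpath.join(base, "/static") == "/static"

# Without the leading slash, the pieces concatenate as intended:
assert posixpath.join(base, "static") == base + "/static"
```

<p>So it looks like the directory being checked is literally <code>/static</code>, not the project path; presumably dropping the slash (or using <code>BASE_DIR / 'static'</code>) would fix it, but I'd like confirmation.</p>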
|
<python><django><os.path>
|
2023-05-23 00:57:01
| 0
| 813
|
Chuox
|
76,310,662
| 3,476,463
|
calculate distance from address using geopandas
|
<p>I have the Python 3.7 code below. I'm using it to calculate the distance in miles between a point that I specify, the target_address, and a couple of points that I have in a pandas dataframe. The code uses the latitude and longitude of the street address to create a shapely point and then calculates the distance in miles. I'm getting the error message below when I run the code, and I can't understand why. The shapely Points all look pretty similar. I've provided sample data below; can anyone tell me what the issue is?</p>
<p>Sample data:</p>
<pre><code>print(gdf)
store address latitude longitude \
0 1 5101 Business Center Dr, Fairfield, CA 94534 38.216613 -122.144712
1 2 601 Jackson St, Fairfield, CA 94533 38.248419 -122.044867
geometry
0 POINT (-122.14471 38.21661)
1 POINT (-122.04487 38.24842)
</code></pre>
<p>Code:</p>
<pre><code>import geopandas as gpd
from geopy.distance import geodesic
from geopy.geocoders import Nominatim
import pandas as pd
import numpy as np
def calculate_distance(point1, point2):
return geodesic(point1, point2,'photon').miles
target_address = '1113 Capistrano Court Fairfield, CA 94534'
max_distance = 5
#Convert the target address to a point:
target_point = gpd.tools.geocode(target_address,'photon')
#Filter the GeoDataFrame based on the distance:
gdf['distance'] = gdf['geometry'].apply(lambda x: calculate_distance(x, target_point))
filtered_df = gdf[gdf['distance'] <= max_distance]
</code></pre>
<p>Error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~/anaconda3/envs/gpthackathon/lib/python3.7/site-packages/geopy/point.py in __new__(cls, latitude, longitude, altitude)
168 try:
--> 169 seq = iter(arg)
170 except TypeError:
TypeError: 'Point' object is not iterable
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
/var/folders/wd/yc91yn2s6r39blpvnppxwqy80000gp/T/ipykernel_88527/819773319.py in <module>
1 #Filter the GeoDataFrame based on the distance:
2
----> 3 gdf['distance'] = gdf['geometry'].apply(lambda x: calculate_distance(x, target_point))
4 filtered_df = gdf[gdf['distance'] <= max_distance]
~/anaconda3/envs/gpthackathon/lib/python3.7/site-packages/geopandas/geoseries.py in apply(self, func, convert_dtype, args, **kwargs)
572 @inherit_doc(pd.Series)
573 def apply(self, func, convert_dtype=True, args=(), **kwargs):
--> 574 result = super().apply(func, convert_dtype=convert_dtype, args=args, **kwargs)
575 if isinstance(result, GeoSeries):
576 if self.crs is not None:
~/anaconda3/envs/gpthackathon/lib/python3.7/site-packages/pandas/core/series.py in apply(self, func, convert_dtype, args, **kwargs)
4355 dtype: float64
4356 """
-> 4357 return SeriesApply(self, func, convert_dtype, args, kwargs).apply()
4358
4359 def _reduce(
~/anaconda3/envs/gpthackathon/lib/python3.7/site-packages/pandas/core/apply.py in apply(self)
1041 return self.apply_str()
1042
-> 1043 return self.apply_standard()
1044
1045 def agg(self):
~/anaconda3/envs/gpthackathon/lib/python3.7/site-packages/pandas/core/apply.py in apply_standard(self)
1099 values,
1100 f, # type: ignore[arg-type]
-> 1101 convert=self.convert_dtype,
1102 )
1103
~/anaconda3/envs/gpthackathon/lib/python3.7/site-packages/pandas/_libs/lib.pyx in pandas._libs.lib.map_infer()
/var/folders/wd/yc91yn2s6r39blpvnppxwqy80000gp/T/ipykernel_88527/819773319.py in <lambda>(x)
1 #Filter the GeoDataFrame based on the distance:
2
----> 3 gdf['distance'] = gdf['geometry'].apply(lambda x: calculate_distance(x, target_point))
4 filtered_df = gdf[gdf['distance'] <= max_distance]
/var/folders/wd/yc91yn2s6r39blpvnppxwqy80000gp/T/ipykernel_88527/442241440.py in calculate_distance(point1, point2)
2
3 def calculate_distance(point1, point2):
----> 4 return geodesic(point1, point2,'photon').miles
~/anaconda3/envs/gpthackathon/lib/python3.7/site-packages/geopy/distance.py in __init__(self, *args, **kwargs)
538 self.set_ellipsoid(kwargs.pop('ellipsoid', 'WGS-84'))
539 major, minor, f = self.ELLIPSOID
--> 540 super().__init__(*args, **kwargs)
541
542 def set_ellipsoid(self, ellipsoid):
~/anaconda3/envs/gpthackathon/lib/python3.7/site-packages/geopy/distance.py in __init__(self, *args, **kwargs)
274 elif len(args) > 1:
275 for a, b in util.pairwise(args):
--> 276 kilometers += self.measure(a, b)
277
278 kilometers += units.kilometers(**kwargs)
~/anaconda3/envs/gpthackathon/lib/python3.7/site-packages/geopy/distance.py in measure(self, a, b)
554
555 def measure(self, a, b):
--> 556 a, b = Point(a), Point(b)
557 _ensure_same_altitude(a, b)
558 lat1, lon1 = a.latitude, a.longitude
~/anaconda3/envs/gpthackathon/lib/python3.7/site-packages/geopy/point.py in __new__(cls, latitude, longitude, altitude)
170 except TypeError:
171 raise TypeError(
--> 172 "Failed to create Point instance from %r." % (arg,)
173 )
174 else:
TypeError: Failed to create Point instance from <shapely.geometry.point.Point object at 0x7f940800d690>.
</code></pre>
|
<python><dataframe><geopandas><geopy>
|
2023-05-23 00:43:48
| 1
| 4,615
|
user3476463
|
76,310,625
| 8,311,330
|
Loss function does not train
|
<p>We are training a QuestionAnswering model for the SQUAD v2 dataset.</p>
<p>A RoBERTa encoder, with a classifier on top. Predicting the answer span works perfectly. However, we wanted to add a front classifier to predict the answerability of a question (as suggested in the paper "Retrospective Reader for Machine Reading Comprehension").</p>
<p>Using the model below, the start and end loss are decreasing, however the answerable_loss does not train.</p>
<p>What we intend to do is:</p>
<ul>
<li>Get the first element of the encoded input</li>
<li>Predict (answerable, unanswerable) on this element</li>
<li>softmax(answerable, unanswerable) to get a certainty percentage</li>
<li>calculate the loss using cross entropy</li>
</ul>
<p>(PS: notice some tricky dimension stuff due to batches)</p>
<p>And to the best of my knowledge this is what we should do, and are doing in the code. But obviously it does not work...</p>
<p>I can't seem to find my error. Why might this be the case?</p>
<pre><code>class LSTMFrontVerifier(nn.Module):
name = "lstm-front-verifier"
def __init__(self, encoder):
super().__init__()
self.encoder = encoder
self.answerable = nn.Linear(encoder.config.hidden_size, 2)
self.classifier = nn.Linear(encoder.config.hidden_size, 2)
def forward(self, input_ids, attention_mask=None, start_positions=None, end_positions=None):
outputs = self.encoder(input_ids, attention_mask=attention_mask)
lstm_output = outputs.last_hidden_state
logits = self.classifier(lstm_output)
start_logits, end_logits = logits.split(1, dim=-1)
start_logits, end_logits = start_logits.squeeze(-1), end_logits.squeeze(-1)
# Given answer-ability
unanswerabe = torch.logical_and(start_positions == 0, end_positions == 0).float()
start_loss = F.cross_entropy(start_logits, start_positions)
end_loss = F.cross_entropy(end_logits, end_positions)
# Predict answerability
pred_answerable = self.answerable(lstm_output[:, 0])
answerable_pred = F.softmax(pred_answerable, dim=1)
answerable_loss = F.cross_entropy(answerable_pred[:, 0], 1-unanswerabe) + \
F.cross_entropy(answerable_pred[:, 1], unanswerabe)
print((start_loss + end_loss).item(), answerable_loss.item())
loss = start_loss + end_loss + answerable_loss
return loss
def forward_ex(self, example):
input_ids = example["input_ids"].to(device)
start_positions = example["start_positions"].to(device) if "start_positions" in example else None
end_positions = example["end_positions"].to(device) if "end_positions" in example else None
attention_mask = example["attention_mask"].to(device) if "attention_mask" in example else None
return self.forward(input_ids, attention_mask, start_positions, end_positions)
</code></pre>
<p>For reproducibility purposes, here is a minimal code example:</p>
<pre><code>import torch
from transformers import AutoTokenizer
from datasets import load_dataset
from torch.utils.data.dataloader import DataLoader
import torch.nn.functional as F
import torch.nn as nn
from transformers import AutoModel
max_train_examples = 1000
max_length = 384
stride = 128
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
def preprocess_examples(examples):
questions = [q.strip() for q in examples["question"]]
inputs = tokenizer(
questions,
examples["context"],
max_length=max_length,
truncation="only_second",
stride=stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
offset_mapping = inputs.pop("offset_mapping")
sample_map = inputs.pop("overflow_to_sample_mapping")
answers = examples["answers"]
start_positions = []
end_positions = []
for i, offset in enumerate(offset_mapping):
sample_idx = sample_map[i]
answer = answers[sample_idx]
if not answer["answer_start"]:
start_positions.append(0)
end_positions.append(0)
continue
start_char = answer["answer_start"][0]
end_char = answer["answer_start"][0] + len(answer["text"][0])
sequence_ids = inputs.sequence_ids(i)
idx = 0
while sequence_ids[idx] != 1:
idx += 1
context_start = idx
while sequence_ids[idx] == 1:
idx += 1
context_end = idx - 1
if offset[context_start][0] > start_char or offset[context_end][1] < end_char:
start_positions.append(0)
end_positions.append(0)
else:
idx = context_start
while idx <= context_end and offset[idx][0] <= start_char:
idx += 1
start_positions.append(idx - 1)
idx = context_end
while idx >= context_start and offset[idx][1] >= end_char:
idx -= 1
end_positions.append(idx + 1)
inputs["start_positions"] = start_positions
inputs["end_positions"] = end_positions
return inputs
def convert_to_tensors(examples):
return {k: torch.tensor([x[k] for x in examples]) for k in examples[0]}
squad = load_dataset('squad_v2')
squad["train"] = squad["train"].select(range(max_train_examples))
tokenized_datasets = squad.map(
preprocess_examples,
batched=True,
remove_columns=squad["train"].column_names,
)
train_dataloader = DataLoader(tokenized_datasets["train"], batch_size=8, collate_fn=convert_to_tensors, shuffle=True)
roberta = AutoModel.from_pretrained("roberta-base").to(device)
model = LSTMFrontVerifier(roberta).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, betas=(0.9, 0.999), eps=1e-8, weight_decay=1e-3)
model.train()
optimizer.zero_grad() # Reset gradients tensors
for batch, x in enumerate(train_dataloader):
# Compute prediction error
loss = model.forward_ex(x)
optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
</code></pre>
<p>Packages:</p>
<ol>
<li>Install pytorch with gpu: <a href="https://pytorch.org/get-started/locally/" rel="nofollow noreferrer">https://pytorch.org/get-started/locally/</a></li>
<li>pip install transformers datasets</li>
</ol>
<p>Some sample output from <code>print((start_loss + end_loss).item(), answerable_loss.item())</code>:</p>
<pre><code>11.970989227294922 16.621864318847656
11.742887496948242 16.640228271484375
11.448827743530273 16.66130828857422
11.063633918762207 16.665538787841797
10.49462890625 16.663175582885742
10.176043510437012 16.661867141723633
10.13321590423584 16.646312713623047
10.05152702331543 16.64898681640625
9.76708698272705 16.648393630981445
9.409551620483398 16.700483322143555
8.939659118652344 16.641441345214844
9.03275203704834 16.647899627685547
8.160870552062988 16.63787078857422
8.27975845336914 16.641223907470703
7.900142669677734 16.64410972595215
6.427922248840332 16.644954681396484
6.4332380294799805 16.643535614013672
6.626171112060547 16.642642974853516
4.79502010345459 16.640335083007812
6.948017120361328 16.641925811767578
5.472411632537842 16.642606735229492
6.458420753479004 16.63710594177246
6.552549362182617 16.637182235717773
4.95197868347168 16.637977600097656
5.235410690307617 16.643829345703125
4.700412750244141 16.63840675354004
4.11396598815918 16.646831512451172
5.13016414642334 16.643505096435547
3.7867109775543213 16.63637924194336
5.582259178161621 16.643115997314453
5.7655229568481445 16.64023208618164
5.085046768188477 16.63158416748047
4.153951644897461 16.63810920715332
4.100613594055176 16.644237518310547
4.206878662109375 16.636249542236328
3.450410842895508 16.635835647583008
4.827783584594727 16.63633918762207
2.2874913215637207 16.644474029541016
2.3297667503356934 16.647319793701172
3.4870200157165527 16.652259826660156
3.31907320022583 16.6363582611084
4.377845764160156 16.637659072875977
3.427989959716797 16.635705947875977
4.224106311798096 16.640310287475586
</code></pre>
|
<python><machine-learning><pytorch><nlp><classification>
|
2023-05-23 00:30:35
| 1
| 960
|
Daan Seuntjens
|
76,310,591
| 1,224,336
|
Why does VSCode try running wsl.exe when debugging an Azure Function?
|
<p>I'm trying to work through the Microsoft Azure Functions tutorial <a href="https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-python" rel="nofollow noreferrer">Quickstart: Create a function in Azure with Python using Visual Studio Code</a> and I'm getting stuck on the "Run the function locally" step.</p>
<p>When I press F5 for <strong>Run and Debug</strong>, I get this output in the Terminal pane:</p>
<pre><code> * Executing task: .venv\Scripts\python -m pip install -r requirements.txt
<3>WSL (9) ERROR: CreateProcessEntryCommon:577: execvpe .venv\Scripts\python failed 2
<3>WSL (9) ERROR: CreateProcessEntryCommon:586: Create process not expected to return
* The terminal process "C:\Windows\System32\wsl.exe -e .venv\Scripts\python -m pip install -r requirements.txt" terminated with exit code: 1.
* Terminal will be reused by tasks, press any key to close it.
</code></pre>
<p>Why does it keep trying to run WSL, and how do I make it not do that?</p>
<p>The only clue on the tutorial page for fixing this error is this sentence:</p>
<blockquote>
<p>If you have trouble running on Windows, make sure that the default
terminal for Visual Studio Code isn't set to WSL Bash.</p>
</blockquote>
<p>So in VSCode, I press Sh-Ctl-P > "Terminal:Set Default Profile" > Powershell. I definitely do <em>not</em> select WSL, nor Git Bash.</p>
<p>This is a Windows 10 PC. My Azure Function project is in its own folder, with its own <strong>.venv</strong>. Although I have WSL installed, there's no reason I can think of for it to be executed in this project. The command <code>Get-Command python</code> points to the <strong>python.exe</strong> in the <strong>.venv</strong>.</p>
<p>I start VSCode from PowerShell, changing to the project folder and executing the command <code>code .</code> ("code space dot").</p>
<p><strong>EDITED TO ADD:</strong> To the right of the Terminal pane is a note for "wsl task":
<a href="https://i.sstatic.net/5OaDj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5OaDj.png" alt="wsl task details in terminal pane" /></a></p>
|
<python><visual-studio-code><azure-functions>
|
2023-05-23 00:14:51
| 1
| 687
|
Ray Depew
|
76,310,575
| 3,247,006
|
How to make the urls with and without a language prefix work in Django i18n?
|
<p>This is my <code>django-project</code> as shown below. *I'm learning <a href="https://docs.djangoproject.com/en/4.2/topics/i18n/translation/" rel="nofollow noreferrer">Translation</a> with <strong>Django 4.2.1</strong>:</p>
<pre class="lang-none prettyprint-override"><code>django-project
|-core
| |-settings.py
| └-urls.py
|-app1
| |-models.py
| |-admin.py
| └-urls.py
|-app2
└-locale
└-en
└-LC_MESSAGES
|-django.mo
└-django.po
</code></pre>
<p>And, this is <code>core/settings.py</code> which sets <code>fr</code> to <a href="https://docs.djangoproject.com/en/4.2/ref/settings/#language-code" rel="nofollow noreferrer">LANGUAGE_CODE</a> as a default language code as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "core/settings.py"
MIDDLEWARE = [
# ...
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
# ...
]
# ...
LANGUAGE_CODE = 'fr' # Here
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
from django.utils.translation import gettext_lazy as _
LANGUAGES = (
('fr', _('Français')),
('en', _('Anglais')),
)
LOCALE_PATHS = [
BASE_DIR / 'locale',
]
</code></pre>
<p>And, this is <code>app1/views.py</code> which returns the default french word <code>Bonjour</code> from <code>test</code> view as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "app1/views.py"
from django.http import HttpResponse
from django.utils.translation import gettext as _
def test(request): # Here
return HttpResponse(_("Bonjour"))
</code></pre>
<p>And, this is <code>app1/urls.py</code> with the path set <code>test</code> view in <code>urlpatterns</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "app1/urls.py"
from django.urls import path
from . import views
app_name = "app1"
urlpatterns = [
path('', views.test, name="test") # Here
]
</code></pre>
<p>And, this is <code>core/urls.py</code> with <a href="https://docs.djangoproject.com/en/4.2/topics/i18n/translation/#language-prefix-in-url-patterns" rel="nofollow noreferrer">i18n_patterns()</a> set <code>admin</code> and <code>app1</code> paths in <code>urlpatterns</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "core/urls.py"
from django.contrib import admin
from django.urls import path, include
from django.conf.urls.i18n import i18n_patterns
urlpatterns = i18n_patterns(
path('admin/', admin.site.urls),
path('app1/', include('app1.urls'))
)
</code></pre>
<p>Then, <code>http://localhost:8000/fr/app1/</code> can show <code>Bonjour</code> as shown below:</p>
<p><a href="https://i.sstatic.net/fmdvG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fmdvG.png" alt="enter image description here" /></a></p>
<p>And, <code>http://localhost:8000/en/app1/</code> can show <code>Hello</code> as shown below:</p>
<p><a href="https://i.sstatic.net/sWT8U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sWT8U.png" alt="enter image description here" /></a></p>
<p>Now, I also want to show <code>Bonjour</code> with <code>http://localhost:8000/app1/</code> but it redirects to <code>http://localhost:8000/en/app1/</code> showing <code>Hello</code> as shown below:</p>
<p><a href="https://i.sstatic.net/sWT8U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sWT8U.png" alt="enter image description here" /></a></p>
<p>Actually, if I set <a href="https://docs.djangoproject.com/en/4.2/topics/i18n/translation/#language-prefix-in-url-patterns" rel="nofollow noreferrer">prefix_default_language=False</a> to <code>i18n_patterns()</code> in <code>core/urls.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "core/urls.py"
from django.contrib import admin
from django.urls import path, include
from django.conf.urls.i18n import i18n_patterns
urlpatterns = i18n_patterns(
path('admin/', admin.site.urls),
path('app1/', include('app1.urls')),
prefix_default_language=False # Here
)
</code></pre>
<p>Then, <code>http://localhost:8000/app1/</code> can show <code>Bonjour</code> as shown below:</p>
<p><a href="https://i.sstatic.net/NkpaB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NkpaB.png" alt="enter image description here" /></a></p>
<p>But, <code>http://localhost:8000/fr/app1/</code> doesn't work to show <code>Bonjour</code> as shown below:</p>
<p><a href="https://i.sstatic.net/Gbygr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gbygr.png" alt="enter image description here" /></a></p>
<p>So, how can I show <code>Bonjour</code> with both <code>http://localhost:8000/fr/app1/</code> and <code>http://localhost:8000/app1/</code>?</p>
|
<python><django><django-views><django-urls><django-i18n>
|
2023-05-23 00:09:22
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,310,548
| 12,548,458
|
argparse: How to use required positionals and subparsers together?
|
<p>I'm encountering a simple use-case that <code>argparse</code> surprisingly doesn't seem to handle. I would like to have a required positional argument in addition to having multiple subparsers. The rationale is that this CLI application supports one use-case very readily with a concise syntax, with more granular options being nested under subcommands. For this post, imagine that I am writing a CLI application for tagging files with the syntax:</p>
<pre><code># tag a file
argparse_test.py <file> [<tags>...]
# list tags
argparse_test.py tags <file>
</code></pre>
<p>Usage example:</p>
<pre><code>$ argparse_test.py ./cat.jpg cute funny
$ argparse_test.py tags ./cat.jpg
> cute, funny
</code></pre>
<p>My implementation is as follows:</p>
<pre class="lang-py prettyprint-override"><code>import argparse
# initialize main parser
parser = argparse.ArgumentParser()
parser.add_argument('file', help="The file to index.")
parser.add_argument('tags', nargs='*', help="Tags to append.")
# initialize `tags` subparser
subparsers = parser.add_subparsers()
tag_subparser = subparsers.add_parser("tags")
tag_subparser.add_argument('file', help="The file to list tags for.")
# 1. Failing test case
args = parser.parse_args(["./cat.jpg", "cute", "funny"])
print(args)
# 2. Failing test case (comment out #1 to reach this)
args = parser.parse_args(["tags", "./cat.jpg"])
print(args)
</code></pre>
<p>Surprisingly, this fails in both cases!</p>
<p>For the first test case:</p>
<pre><code>usage: argparse_test.py [-h] file [tags ...] {tags} ...
argparse_test.py: error: argument {tags}: invalid choice: 'funny' (choose from 'tags')
</code></pre>
<p>For the second test case:</p>
<pre><code>usage: argparse_test.py tags [-h] file
argparse_test.py tags: error: the following arguments are required: file
</code></pre>
<p>Commenting out either the main parser or the subparser will fix one (but not both) test case every time. It looks like this issue stems from some conflict <code>argparse</code> has in dealing with required positionals and subcommands.</p>
<p><strong>Is there any known way to implement support for this CLI syntax via <code>argparse</code> in Python 3.8+?</strong></p>
<p>(BTW, I've tested this on Python 3.9, 3.10, and 3.11, all of which share the same behavior).</p>
|
<python><argparse>
|
2023-05-22 23:57:16
| 1
| 3,289
|
dlq
|
76,310,533
| 3,482,266
|
How to fix "Trainer: evaluation requires an eval_dataset" in Huggingface Transformers?
|
<p>I’m trying to fine-tune a model without an evaluation dataset.
For that, I’m using the following code:</p>
<pre><code>training_args = TrainingArguments(
output_dir=resume_from_checkpoint,
evaluation_strategy="epoch",
per_device_train_batch_size=1,
)
def compute_metrics(pred: EvalPrediction):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
f1 = f1_score(labels, preds, average="weighted")
acc = accuracy_score(labels, preds, average="weighted")
return {"accuracy": acc, "f1": f1}
trainer = Trainer(
model=self.nli_model,
args=training_args,
train_dataset=tokenized_datasets,
compute_metrics=compute_metrics,
)
</code></pre>
<p>However, I get</p>
<pre><code>ValueError: Trainer: evaluation requires an eval_dataset
</code></pre>
<p>I thought that, by default, Trainer performs no evaluation; at least, that's the impression I got from the docs.</p>
|
<python><pytorch><huggingface-transformers><pre-trained-model><huggingface-trainer>
|
2023-05-22 23:54:01
| 5
| 1,608
|
An old man in the sea.
|
76,310,356
| 14,584,978
|
How do I refresh an excel query and get the results into a pandas datafame
|
<p>I need to access data in SharePoint files from Python using the machine user's access to the file.
I need pandas output and a reliable way to refresh the query. I am thinking of using Excel to run the SharePoint queries.</p>
<p>I cannot use GraphAPI to do so.</p>
<p>What are some options?</p>
|
<python><sharepoint><powerquery>
|
2023-05-22 23:00:55
| 1
| 374
|
Isaacnfairplay
|
76,310,336
| 11,001,751
|
What theme/styling is used in the Plotly Dash Documentation? Or: how to create a thoroughly "dark" Dash App
|
<p>Plotly's Dash has a very nice dark themed <a href="https://dash.plotly.com" rel="nofollow noreferrer">documentation</a>, whose styling however does not seem readily available to the user (the default Dash app is unstyled). I would like to create a dark themed Dash app whose components look pretty much like the Dash documentation. The closest easy solution I have come across is to use a dark theme from the the <a href="https://dash-bootstrap-components.opensource.faculty.ai/docs/themes/" rel="nofollow noreferrer">Dash Bootstrap Components Themes</a> i.e.</p>
<pre class="lang-py prettyprint-override"><code>import dash_bootstrap_components as dbc
app = Dash(__name__, external_stylesheets=[dbc.themes.CYBORG])
</code></pre>
<p>together with the <code>"plotly_dark"</code> <a href="https://plotly.com/python/templates/" rel="nofollow noreferrer">template</a> for graphs. However, this does not work for all components, in particular the <a href="https://dash.plotly.com/dash-core-components/tabs" rel="nofollow noreferrer"><code>dcc.Tabs()</code></a> or <a href="https://dash.plotly.com/dash-core-components/rangeslider" rel="nofollow noreferrer"><code>dcc.RangeSlider()</code></a> elements in my app are still of a different (light) style. The way they look in the dash documentation would be perfect for my app.</p>
<p>So the question is: is there an easy way to create a thoroughly dark themed app, where all core components are consistently styled? And further: is it possible to apply the same style as in the Dash <a href="https://dash.plotly.com" rel="nofollow noreferrer">documentation</a>?</p>
|
<python><plotly-dash>
|
2023-05-22 22:53:43
| 1
| 1,379
|
Sebastian
|
76,310,293
| 880,874
|
I moved a working Python script from one server to another, but now it's suddenly not working
|
<p>I moved a working Python script to a new server and I am suddenly getting errors.</p>
<p>I installed Python and all its dependencies on the new server, but for some reason I get this error:</p>
<pre><code>Exception has occurred: ObjectNotExecutableError
Not an executable object: 'EXECUTE dbo.sp_GatherInventory'
AttributeError: 'str' object has no attribute '_execute_on_connection'
The above exception was the direct cause of the following exception:
File "E:\Project\Python\WriteInventory.py", line 23, in <module>
rs = con.execute(qry)
^^^^^^^^^^^^^^^^
</code></pre>
<blockquote>
<p>sqlalchemy.exc.ObjectNotExecutableError: Not an executable object:
'EXECUTE dbo.sp_GatherInventory'</p>
</blockquote>
<p>Here is the problem part of the code:</p>
<pre><code># imports
from sqlalchemy import create_engine
from sqlalchemy.engine import URL
import pyodbc
import pandas as pd
import csv
import configparser
import sqlalchemy as sa
import xlwings as xw
import pysftp
connection_string = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=mainDB;DATABASE=Inventory;"
connection_url = URL.create(
"mssql+pyodbc", query={"odbc_connect": connection_string})
engine = sa.create_engine(connection_url)
qry = "EXECUTE dbo.sp_GatherInventory"
with engine.connect() as con:
rs = con.execute(qry)
df = pd.read_sql_query(qry, engine)
</code></pre>
<p>Nothing has changed on the database and that stored procedure still exists.</p>
<p>I'm not sure why it's not working on this new server.</p>
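<p>Update: one thing I'm now suspecting is that the new server pulled in a newer SQLAlchemy (2.x), where bare SQL strings are no longer executable. A sketch of the <code>text()</code> wrapper that is supposed to fix this, demonstrated against an in-memory SQLite engine since I can't share the real database:</p>

```python
import sqlalchemy as sa
from sqlalchemy import text

# In-memory stand-in engine; on the real server this would be the
# mssql+pyodbc URL from the question.
engine = sa.create_engine("sqlite://")

with engine.connect() as con:
    # SQLAlchemy 2.x requires an executable object, not a bare string:
    result = con.execute(text("SELECT 1")).scalar()
print(result)
```

<p>Presumably the same wrapping would apply to <code>EXECUTE dbo.sp_GatherInventory</code>, but I haven't confirmed that against SQL Server.</p>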
<p>Does anyone have any ideas?</p>
<p>Thanks!</p>
|
<python><python-3.x><sqlalchemy>
|
2023-05-22 22:40:01
| 1
| 7,206
|
SkyeBoniwell
|
76,310,224
| 21,420,742
|
Getting count of a specific value grouped by ID in pandas
|
<p>I am trying to get counts of everyone with a 1 in a specific column and tie the count to a manager.</p>
<p>DF</p>
<pre><code> ID Full-Time Manager ID
101 0 103
102 1 103
103 1 110
104 0 107
105 1 103
106 1 107
107 1 110
108 0 107
109 1 103
110 1 NaN
</code></pre>
<p>I just need a count of those with a 1 under <code>Full-Time</code> by <code>Manager ID</code></p>
<pre><code> Manager ID Full-Time count
103 3
107 1
110 2
</code></pre>
<p>I tried <code>df[df['Full-Time'].eq(1)].groupby('Manager ID')['Manager ID'].count()</code>
Any suggestions?</p>
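<p>For reference, here is the sample data rebuilt so the attempt can be run; counting the filtered <code>Full-Time</code> column per manager does seem to reproduce the expected table (rows with a NaN <code>Manager ID</code> drop out of the groupby):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [101, 102, 103, 104, 105, 106, 107, 108, 109, 110],
    "Full-Time": [0, 1, 1, 0, 1, 1, 1, 0, 1, 1],
    "Manager ID": [103, 103, 110, 107, 103, 107, 110, 107, 103, None],
})

# Keep only full-time rows, then count them per manager; the top manager's
# NaN Manager ID is excluded from the grouping automatically.
out = (
    df[df["Full-Time"].eq(1)]
    .groupby("Manager ID")["Full-Time"]
    .count()
    .rename("Full-Time count")
)
print(out)
```

<p>I'm mainly wondering whether there's a cleaner idiom than filter-then-count.</p>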
|
<python><python-3.x><pandas><dataframe><group-by>
|
2023-05-22 22:17:55
| 3
| 473
|
Coding_Nubie
|
76,310,185
| 3,595,231
|
selenium not able to click this dropdown link in my UI
|
<p>I have a selenium test case written in python, where the UI looks like this:
<a href="https://i.sstatic.net/Q6Hvl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q6Hvl.png" alt="enter image description here" /></a></p>
<p>As you may find, I want to click the drop-down button that is circled in red, the xpath value is:</p>
<pre><code>//div[contains(text(), 'Interface Settings - lan')]/../following-sibling::div[1]/div[1]/div[3]/div/div[2]//*[name()='svg']/*[name()='path']
</code></pre>
<p>And here is my python code to have it invoked.</p>
<pre><code>try:
dropdown = "//div[contains(text(), 'Interface Settings - lan')]/../following-sibling::div[1]/div[1]/div[3]/div/div[2]//*[name()='svg']/*[name()='path']"
WebDriverWait(self.driver, 60).until(
EC.presence_of_element_located((By.XPATH, dropdown))
).click()
...
except Exception as e:
self.logger.info(e)
raise
</code></pre>
<p>But somehow during the run-time, it keeps giving me such failure message:</p>
<pre><code>selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element <path fill="currentColor" d="M41 288h238c21.4 0 32.1 25.9 17 41L177 448c-9.4 9.4-24.6 9.4-33.9 0L24 329c-15.1-15.1-4.4-41 17-41zm255-105L177 64c-9.4-9.4-24.6-9.4-33.9 0L24 183c-15.1 15.1-4.4 41 17 41h238c21.4 0 32.1-25.9 17-41z"></path> is not clickable at point (1027, 920). Other element would receive the click: <div class="profile-save-section">...</div>
(Session info: chrome=113.0.5672.93)
Stacktrace:
</code></pre>
<p>Any idea how I might work around this?</p>
<p>Thanks,</p>
<p>Jack</p>
<p>UPDATE,</p>
<p>Following is a screenshot of the page. Basically, I need to click the "Mode" drop-down link under the "lan" section.</p>
<p><a href="https://i.sstatic.net/J3uAy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J3uAy.png" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver><selenium-chromedriver>
|
2023-05-22 22:07:19
| 1
| 765
|
user3595231
|
76,310,061
| 12,144,502
|
Increasing The Amount Of Neurons In Hidden Layer
|
<p>I am working to understand how to build my own ANN from scratch.</p>
<p>I have looked around and found a simple two layer architecture and the <code>init_parameters</code> function is like this.</p>
<pre class="lang-py prettyprint-override"><code>def init_parameters():
W1 = np.random.normal(size=(10, 784)) * np.sqrt(1./(784))
b1 = np.random.normal(size=(10, 1)) * np.sqrt(1./10)
W2 = np.random.normal(size=(10, 10)) * np.sqrt(1./20)
b2 = np.random.normal(size=(10, 1)) * np.sqrt(1./(784))
return W1, b1, W2, b2
</code></pre>
<p>This is following the MNIST data set. I understand that the input layer is <code>784</code> for each
<code>28x28</code> input image. For the hidden layer it is <code>10</code> if I am understanding the provided code. Lastly the output layer is <code>10</code> for each number from <code>0</code> to <code>9</code>.</p>
<p>My goal is to increase the number of neurons in the hidden layer.</p>
<p>This is what I have tried:</p>
<pre class="lang-py prettyprint-override"><code> def init_parameters():
W1 = np.random.normal(size=(20, 784)) * np.sqrt(1./(784))
b1 = np.random.normal(size=(10, 1)) * np.sqrt(1./10)
W2 = np.random.normal(size=(10, 20)) * np.sqrt(1./20)
b2 = np.random.normal(size=(10, 1)) * np.sqrt(1./(784))
return W1, b1, W2, b2
</code></pre>
<p>This gives me an error when using my <code>forward_prop</code> function:</p>
<pre class="lang-py prettyprint-override"><code>def forward_prop(W1, b1, W2, b2, X):
Z1 = W1.dot(X) + b1
A1 = ReLU(Z1)
Z2 = W2.dot(A1) + b2
A2 = softmax(Z2)
return Z1, A1, Z2, A2
</code></pre>
<p>Error:</p>
<pre><code>-> Z1 = W1.dot(X) + b1
ValueError: operands could not be broadcast together with shapes (20,29400) (10,1)
</code></pre>
<p>From the error it seems that the <code>b1</code> is not matched so I thought changing <code>b1</code> to <code>b1 = np.random.normal(size=(20, 1)) * np.sqrt(1./10)</code> will work and it did.</p>
<p>The new <code>init_parameters</code> is this:</p>
<pre class="lang-py prettyprint-override"><code>def init_parameters():
W1 = np.random.normal(size=(20, 784)) * np.sqrt(1./(784))
b1 = np.random.normal(size=(20, 1)) * np.sqrt(1./10)
W2 = np.random.normal(size=(10, 20)) * np.sqrt(1./20)
b2 = np.random.normal(size=(10, 1)) * np.sqrt(1./(784))
return W1, b1, W2, b2
</code></pre>
<p>Did I actually increase the number of neurons in the hidden layer, and is this the right approach to what I am trying to achieve?</p>
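<p>For reference, a minimal shape-check sketch (the hidden size and the sample count of 5 are arbitrary stand-ins) can confirm that every layer lines up after the change:</p>

```python
import numpy as np

def init_parameters(n_hidden=20, n_in=784, n_out=10):
    # the hidden size is the first dim of W1/b1 and the second dim of W2
    W1 = np.random.normal(size=(n_hidden, n_in)) * np.sqrt(1. / n_in)
    b1 = np.random.normal(size=(n_hidden, 1)) * np.sqrt(1. / n_hidden)
    W2 = np.random.normal(size=(n_out, n_hidden)) * np.sqrt(1. / n_hidden)
    b2 = np.random.normal(size=(n_out, 1)) * np.sqrt(1. / n_out)
    return W1, b1, W2, b2

W1, b1, W2, b2 = init_parameters(20)
X = np.random.rand(784, 5)            # 5 sample columns
Z1 = W1.dot(X) + b1                   # (20, 5) -- 20 hidden activations per sample
Z2 = W2.dot(np.maximum(Z1, 0)) + b2   # (10, 5) -- 10 output scores per sample
print(Z1.shape, Z2.shape)
```

If the shapes print as <code>(20, 5) (10, 5)</code>, the hidden layer really does have 20 neurons.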
|
<python><python-3.x><machine-learning><neural-network>
|
2023-05-22 21:38:25
| 1
| 400
|
zellez11
|
76,310,030
| 3,523,464
|
Fast Pandas column encoding
|
<p>I've been trying to speed up the following code that encodes Pandas columns as ints.</p>
<pre><code>
with cpu_pool as p:
results = p.amap(encode_int_column, jobs)
while not results.ready():
time.sleep(4)
tmp_df = pd.DataFrame({k: v for k, v in results.get()})
</code></pre>
<p>with <code>encode_int_column</code> being</p>
<pre><code>def encode_int_column(input_tuple: Tuple[str, Any]) -> Tuple[Any, List[int]]:
uvals = np.unique(input_tuple[1])
hmap = dict(zip(uvals, list(range(len(uvals)))))
return input_tuple[0], [hmap[x] for x in input_tuple[1]]
</code></pre>
<p>Wondering what is a faster way of doing this (maybe more Pandas-idiomatic?). Thanks!</p>
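<p>For comparison, a pandas-idiomatic sketch using <code>pd.factorize</code> (the sample frame is made up; <code>sort=True</code> makes the codes follow sorted unique values, matching the <code>np.unique</code>-based mapping above):</p>

```python
import pandas as pd

df = pd.DataFrame({"a": ["x", "y", "x", "z"], "b": [10, 10, 20, 30]})

# factorize(sort=True) assigns integer codes by sorted unique value,
# replacing the manual dict + list-comprehension loop
encoded = df.apply(lambda col: pd.factorize(col, sort=True)[0])
print(encoded)
```

This runs in vectorized C code per column, so it is typically much faster than building a Python dict and looping.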
|
<python><pandas>
|
2023-05-22 21:30:23
| 2
| 2,382
|
sdgaw erzswer
|
76,309,931
| 662,911
|
Understanding the results of Python's timeit module
|
<p>I'm new to using Python's timeit module to benchmark code, but the results I'm getting make me think that I'm misunderstanding how to interpret the results.</p>
<p>This question has two parts:</p>
<p>Part A. In the code below, I'm using timeit to measure the speed of Python's <code>sort()</code> method for lists. In the below example, I'm sorting an empty list:</p>
<pre><code>import timeit
setup_code = '''
array = []
'''
test_code = '''
array.sort()
'''
print(timeit.timeit(stmt=test_code, setup=setup_code, number=1))
</code></pre>
<p>I get results like <code>4.437984898686409e-06</code> and <code>2.2110179997980595e-06</code>. My understanding is that this is the number of seconds. But here's the thing: I get these results <em>instantaneously</em>. That is, as soon as I hit enter, that number appears on my screen. Shouldn't I, by definition, have to wait sometime between 2 and 4 seconds before I see those results?</p>
<p>Part B. Below, I measure the speed of sorting a list containing thousands of random integers:</p>
<pre><code>import timeit
setup_code = '''
import random
array = []
for i in range(100000):
n = random.randint(1, 100000)
array.append(n)
'''
test_code = '''
array.sort()
'''
print(timeit.timeit(stmt=test_code, setup=setup_code, number=1))
</code></pre>
<p>Here, I get what seem to be more accurate results like <code>0.02303651801776141</code>. But why is timeit telling me that sorting a large list is much faster than sorting an empty list?</p>
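<p>For reference, <code>4.437984898686409e-06</code> is scientific notation for roughly 4.4 <em>microseconds</em>, not 4.4 seconds, which is why the result appears instantly:</p>

```python
t = 4.437984898686409e-06   # seconds, as returned by timeit
print(f"{t * 1e6:.1f} microseconds")  # → 4.4 microseconds
```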
<p>Thank you in advance for your help! I'm using a Macbook Air, and getting the same results in both Python 2.7 and Python 3.11.</p>
|
<python><benchmarking><timeit>
|
2023-05-22 21:10:51
| 2
| 3,292
|
Rebitzele
|
76,309,737
| 10,258,933
|
Parametric optimization in Python
|
<p>I have the following function which I wish to maximize:</p>
<p>log L = ∑<sub>n,m</sub> log(1 + x<sub>nm</sub> θ<sub>n</sub> φ<sub>m</sub>)</p>
<p>where each x<sub>nm</sub> is a given number, and each θ<sub>n</sub> and φ<sub>m</sub> is a parameter constrained by θ<sub>n</sub>, φ<sub>m</sub> ∈ (0, 1).</p>
<p>How can I solve this using Python?</p>
<p>I have tried the following using Scipy:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.optimize import minimize
def objective_function(params, X):
theta, phi = params[:len(X)], params[len(X):]
return - np.sum(np.log(1 + X * theta.reshape(-1, 1) * phi.reshape(1, -1)))
def constraint(params):
return np.concatenate((1 - params, params))
N, M = A.shape
# Initial guess for parameters
initial_guess = np.ones(N + M)
# Define the bounds for the parameters
bounds = [(0, 1)] * (N + M)
# Define the constraints
constraints = [{'type': 'ineq', 'fun': constraint}]
# Solve the optimization problem
result = minimize(objective_function, initial_guess, args=(A,), bounds=bounds, constraints=constraints)
</code></pre>
<p>However, this code has been running for hours, and I am not sure how long I can expect it to take, or whether it will converge at all.</p>
<p>I suppose the problem is that, in my case, N=3000 and M=70, which makes it a rather complex optimisation problem. Are there more suitable libraries than Scipy for this?</p>
<p>Update:</p>
<p>My matrix A has shape (N, M) = (3000, 70) and contains the values -1 (6%), 0 (71%), and 1 (23%). The values are rather randomly distributed in the matrix.</p>
<p>For further explanation of the data and the context, please see the following <a href="https://stats.stackexchange.com/questions/616382/ranking-of-multiple-raters-with-partially-overlapping-assessments">question</a>.</p>
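<p>For what it's worth, here is a small-scale sketch (random stand-in data with the stated value distribution; sizes shrunk from (3000, 70)). The <code>'ineq'</code> constraints above merely duplicate the bounds, and dropping them lets SciPy use the bounds-only L-BFGS-B solver, which scales far better:</p>

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, M = 30, 7                                   # small stand-in for (3000, 70)
A = rng.choice([-1, 0, 1], size=(N, M), p=[0.06, 0.71, 0.23])

def neg_log_lik(params):
    theta, phi = params[:N], params[N:]
    # log1p is a numerically stable log(1 + x)
    return -np.sum(np.log1p(A * theta[:, None] * phi[None, :]))

# bounds alone encode theta, phi in (0, 1); the redundant 'ineq'
# constraints are dropped so L-BFGS-B can be used
eps = 1e-6
res = minimize(neg_log_lik, np.full(N + M, 0.5), method="L-BFGS-B",
               bounds=[(eps, 1 - eps)] * (N + M))
print(res.success, res.fun)
```

Supplying an analytic gradient via <code>jac=</code> would speed this up further, since otherwise SciPy estimates all N+M partial derivatives by finite differences at every step.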
|
<python><scipy><scipy-optimize><scipy-optimize-minimize>
|
2023-05-22 20:38:43
| 2
| 806
|
Filip
|
76,309,688
| 1,964,489
|
Pandas: Add column with values matching keys from a dictionary
|
<p>I have a Pandas data frame with several columns, including <code>date, address, value</code>, and <code>type</code>. I also have a dictionary with key: value pairs of the form <code>address: alias</code>. I want to create a new data frame with columns <code>date, address, value, type, alias</code>, such that each <code>alias</code> value is assigned to the row whose <code>address</code> matches the key. How can I do it in an easy way?</p>
<p>An example:
DataFrame</p>
<pre><code>| date | address | value | type |
| 01-01-23| n167867 | 10 | A |
| 03-02-23| b2567gh | 30 | A |
| 03-03-23| a78956b | 4 | B |
| 12-05-23| n1hd789 | 25 | C |
</code></pre>
<p>dictionary:</p>
<pre><code>{"n167867": "ABC",
"b2567gh": "XYZ",
"a78956b": "CGH",
"n1hd789": "ZET"}
</code></pre>
<p>new data frame:</p>
<pre><code>| date | address | value | type | alias |
| 01-01-23| n167867 | 10 | A | ABC |
| 03-02-23| b2567gh | 30 | A | XYZ |
| 03-03-23| a78956b | 4 | B | CGH |
| 12-05-23| n1hd789 | 25 | C | ZET |
</code></pre>
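<p>A minimal sketch of the usual approach with <code>Series.map</code> (the sample frame abbreviates the one above):</p>

```python
import pandas as pd

df = pd.DataFrame({"address": ["n167867", "b2567gh"], "value": [10, 30]})
alias_map = {"n167867": "ABC", "b2567gh": "XYZ"}

# map looks each address up in the dictionary; unmatched keys become NaN
df["alias"] = df["address"].map(alias_map)
print(df)
```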
|
<python><pandas>
|
2023-05-22 20:30:04
| 2
| 3,541
|
Ziva
|
76,309,664
| 11,666,502
|
How to import pandas df as variable into html
|
<p>I am using flask to write an app that displays a pandas df on part of a webpage. Here is my html template:</p>
<pre><code><!--user_input.html-->
<html>
<body>
<form action="" method="post">
<input type="text" name="user_input" value="{{ user_input }}">
<input type="submit" name="submit" value="submit">
</form>
<img src="/static/plot1.png" alt="Plot Image">
<img src="/static/plot2.png" alt="Plot Image">
<table cellspacing="0" border="1">
<tr>
{% for column in df.columns %}
<th style="width:500px;">{{ column }}</th>
{% endfor %}
</tr>
{% for row in df.itertuples() %}
<tr>
{% for value in row %}
<td style="width:500px;">{{ value }}</td>
{% endfor %}
</tr>
{% endfor %}
</table>
</body>
</html>
</code></pre>
<p>and here is the part of my python code that calls this template:</p>
<pre><code>app = Flask(__name__)
@app.route('/', methods=['GET', 'POST'])
def index():
if request.method == 'POST':
input_message = request.form['user_input']
df = pd.read_csv(f'data.csv')
return render_template('user_input.html', user_input=input_message, df=df)
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>I keep getting the error:</p>
<pre><code>jinja2.exceptions.UndefinedError: 'df' is undefined
</code></pre>
<p>Why is this, I thought I was passing <code>df</code> into the HTML template with <code>df=df</code>.</p>
|
<python><html><pandas><flask>
|
2023-05-22 20:26:23
| 0
| 1,689
|
connor449
|
76,309,647
| 8,584,998
|
PyAudio distorted recording when while loop too busy
|
<p>I have a Python script that monitors an audio stream in real time and uses a moving average to determine when there is audio and set start and stop points based on when it is above or below a given threshold. Because the script runs 24/7, I avoid excessive memory usage by removing part of the audio stream after about 4 hours of audio.</p>
<pre><code>def record(Jn):
global current_levels
global current_lens
device_name = j_lookup[Jn]['device']
device_index = get_index_by_name(device_name)
audio = pyaudio.PyAudio()
stream = audio.open(format=FORMAT, channels=CHANNELS, rate=RATE, input=True, input_device_index=device_index, frames_per_buffer=CHUNK)
recorded_frames = []
quantized_history = []
long_window = int(LONG_MOV_AVG_SECS*RATE/CHUNK) # converting seconds to while loop counter
avg_counter_to_activate_long_threshold = LONG_THRESH*long_window
safety_window = 1.5*avg_counter_to_activate_long_threshold
long_thresh_met = 0
long_start_selection = 0
while True:
data = stream.read(CHUNK, exception_on_overflow=False)
recorded_frames.append(data)
frame_data = struct.unpack(str(CHUNK) + 'h', data)
frame_data = np.array(frame_data)
sum_abs_frame = np.sum(np.abs(frame_data))
quantized_history.append(0 if sum_abs_frame < j_lookup[Jn]['NOISE_FLOOR'] else 1)
current_levels[Jn] = sum_abs_frame
counter = len(recorded_frames)
current_lens[Jn] = counter
if counter >= long_window:
long_movavg = sum(quantized_history[counter-long_window:counter])/long_window
if long_movavg >= LONG_THRESH and long_thresh_met != 1:
long_start_selection = int(max(counter - safety_window, 0))
long_thresh_met = 1
if long_movavg < LONG_THRESH and long_thresh_met == 1:
long_end = int(counter)
long_thresh_met = 2
save_to_disk(recorded_frames[long_start_selection:long_end], audio, Jn)
if counter > MAX_LOOKBACK_PERIOD: # don't keep endless audio history to avoid excessive memory usage
del recorded_frames[0]
del quantized_history[0]
long_start_selection = max(0, long_start_selection - 1) # since you deleted first element, the recording start index is now one less
</code></pre>
<p>What I have above works, but what I noticed is that once I hit the four hour mark (the <code>if counter > MAX_LOOKBACK_PERIOD</code> statement at the very end becomes true), any audio saved after that point starts to sound distorted. For example, before the four hour point, the audio looks like:</p>
<p><a href="https://i.sstatic.net/moOov.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/moOov.png" alt="enter image description here" /></a></p>
<p>after the four hour mark, it looks like:</p>
<p><a href="https://i.sstatic.net/cu938.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cu938.png" alt="enter image description here" /></a></p>
<p>You can see the distortion appearing as these vertical spikes on the spectrogram. I assume the <code>del</code> statement is just taking so long that the while loop can't keep up with the audio stream and this is somehow causing the distortion, but I'm not sure. It has to be related to <code>del</code> somehow, because the distortion only appears once the <code>counter > MAX_LOOKBACK_PERIOD</code> condition becomes true.</p>
<p>Any idea how to address this?</p>
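<p>For reference, a <code>collections.deque</code> with <code>maxlen</code> drops old frames in O(1), whereas <code>del list[0]</code> shifts every remaining element and is O(n) on a multi-hour frame list. A toy sketch (note the caveat that deques do not support slicing, so the <code>recorded_frames[start:end]</code> saves would need <code>itertools.islice</code> or a list conversion):</p>

```python
from collections import deque

MAX_LOOKBACK = 4                 # stand-in for the real frame budget
frames = deque(maxlen=MAX_LOOKBACK)

for i in range(10):
    frames.append(i)             # once full, the oldest entry is discarded in O(1)

print(list(frames))              # → [6, 7, 8, 9]
```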
|
<python><audio><real-time><pyaudio>
|
2023-05-22 20:23:08
| 1
| 1,310
|
EllipticalInitial
|
76,309,389
| 13,608,794
|
Python - tkinter.messagebox with DLL resource icons
|
<p>By default, <code>tkinter.messagebox</code> uses 4 icons located in <code>user32.dll</code> resource. (or other that look like that)</p>
<img src="https://i.sstatic.net/OdUik.png">
<hr />
<p>However, when browsing resource DLL files in Nirsoft IconsExtract utility, I found some fancy icons located in <code>comres.dll</code> file.</p>
<img src="https://i.sstatic.net/XDeVV.png">
<h2>Is there a way to change message box icons by resource and ID? <code><File>.dll,-<ID></code> (or other method)</h2>
<h4><em>But why, in the first place?</em></h4>
<p>Just for design purpose - there are thousands of icons in these files, and they may provide <em>a fresh look</em>, so why not?</p>
<pre class="lang-py prettyprint-override"><code>messagebox.showinfo(
title = "Success",
message = "File has been converted successfully.",
icon = "imageres.dll,-1405"
)
</code></pre>
<img src="https://i.sstatic.net/HCxXI.png">
<hr />
<p>Browsing tkinter source code is just a waste of time, since it contains a lot of references that don't lead anywhere, as shown below:</p>
<pre class="lang-py prettyprint-override"><code> s = master.tk.call(self.command, *master._options(self.options))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_tkinter.TclError: bad -icon value "imageres.dll,-1405": must be error, info, question, or warning
</code></pre>
<p>I believe that there is a way to do this with <code>win32api</code>, <code>ctypes</code> and more Windows-specific Python modules. <strong>How?</strong></p>
|
<python><tkinter><icons><messagebox>
|
2023-05-22 19:38:26
| 1
| 303
|
kubinka0505
|
76,309,287
| 4,759,176
|
Python Flask, display an image stored as a binary blob in MySQL database
|
<p>I'm trying to display an image that is stored in a <code>LONGBLOB</code> column in a <code>MySQL</code> database, using Flask:</p>
<p><strong>app.py</strong>:</p>
<pre><code>@app.route('/get_image', methods=['GET'])
def get_image():
args = request.args
image_id = args.get('image_id')
image = # code for getting image blob from database
return send_file(image, mimetype='image/png')
</code></pre>
<p><strong>HTML code</strong>:</p>
<pre><code><img src="http://127.0.0.1:5000/get_image?image_id=1"/>
</code></pre>
<p>When a request for the image is sent, I can see in debugger that the image bytes are retrieved from the database:</p>
<p><a href="https://i.sstatic.net/GZq3E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GZq3E.png" alt="enter image description here" /></a></p>
<p>However the image is not displayed:</p>
<p><a href="https://i.sstatic.net/tDDqJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tDDqJ.png" alt="enter image description here" /></a></p>
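<p>For reference, <code>send_file</code> expects a filename or a file-like object, not a raw <code>bytes</code> value; wrapping the blob in <code>io.BytesIO</code> is the usual fix (placeholder bytes below stand in for the database value):</p>

```python
import io

image_bytes = b"\x89PNG\r\n\x1a\n"  # placeholder for the LONGBLOB value
buf = io.BytesIO(image_bytes)       # file-like wrapper that send_file can stream

# in the route this would be: return send_file(buf, mimetype='image/png')
print(buf.getvalue() == image_bytes)  # → True
```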
|
<python><python-3.x><flask>
|
2023-05-22 19:18:49
| 1
| 5,239
|
parsecer
|
76,309,194
| 1,717,931
|
find subsets in python that sum less than or equal a target value
|
<p>I've seen different variations of this problem and I am unable to get a solution to this particular one: "Given a list of positive integers and a target t, find all maximal subsets whose sum is less than or equal to t. Each element must appear only as many times as they appear in the given input list"</p>
<p>For eg: If the list is <code>[1, 3, 5, 2, 2, 5, 3, 1]</code> and <code>target = 6</code>:</p>
<p>one possible output: <code>[[1,2,2,1], [3, 3], [5], [5]]</code>
another possible output: <code>[[1,2,3], [1,2,3], [5], [5]]</code></p>
<p>There may be other possibilities as well.</p>
<p>It looks like a nice greedy approach could solve this problem, but I am unable to write a solution in Python. By 'maximal' subset I mean the system must try to find subsets of the largest size possible, rather than choosing smaller-sized subsets. Any help?</p>
<ul>
<li>go through in a greedy manner and add elements in a bag as we see fit (as close to target value ...).</li>
<li>remove elements already bagged.</li>
<li>take a new bag now.</li>
<li>repeat the above two steps until all elements are bagged.</li>
</ul>
<p>NOTE: I am not looking for a computationally efficient solution; I am looking for 'a' solution that works. The given list is small in size, perhaps a maximum of 100 elements.</p>
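<p>A first-fit-decreasing greedy sketch along the lines described above (no optimality guarantee; the exact bag contents depend on the sort order):</p>

```python
def pack(nums, target):
    remaining = sorted(nums, reverse=True)
    bags = []
    while remaining:
        bag, total = [], 0
        for x in remaining[:]:        # iterate over a copy while removing
            if total + x <= target:
                bag.append(x)
                total += x
                remaining.remove(x)   # removes one equal element, so duplicates survive
        if not bag:                   # element larger than target: give it its own bag
            bag.append(remaining.pop(0))
        bags.append(bag)
    return bags

print(pack([1, 3, 5, 2, 2, 5, 3, 1], 6))
# one possible result: [[5, 1], [5, 1], [3, 3], [2, 2]]
```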
|
<python><algorithm>
|
2023-05-22 19:03:19
| 1
| 2,501
|
user1717931
|
76,309,157
| 9,049,108
|
How to make a list of unique sets in python?
|
<p>I wanted to make a list of empty sets for my program.
This works for making a list of unique zeros.</p>
<pre><code>x=5
l=[0]*x
l[0]=1
print(l)
#[1,0,0,0,0]
</code></pre>
<p>If you try the same with sets or lists of lists, it doesn't work.</p>
<pre><code>x=5
l=[set()]*x
l[0].add(1)
print(l)
#[{1},{1},{1},{1},{1}]
</code></pre>
<p>The same happens with lists.</p>
<pre><code>x=5
l=[[]]*x
l[0]+=[1]
print(l)
#[[1],[1],[1],[1],[1]]
</code></pre>
<p>The only workaround I've found is using a comprehension with <code>range()</code>. My question: is there a different way of doing this that I don't know about? It seems like I'm missing something.</p>
<pre><code>x=5
l=[set() for i in range(x)]
l[0].add(1)
print(l)
#[{1}, set(), set(), set(), set()]
</code></pre>
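<p>The behaviour above comes from the fact that <code>*</code> repeats the same object reference, while a comprehension evaluates <code>set()</code> once per element; an identity check makes this visible:</p>

```python
x = 5
l = [set()] * x                  # five references to ONE set object
l2 = [set() for _ in range(x)]   # five distinct set objects

print(all(s is l[0] for s in l))        # → True
print(any(s is l2[0] for s in l2[1:]))  # → False
```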
|
<python><list><set>
|
2023-05-22 18:56:10
| 0
| 576
|
Michael Hearn
|
76,308,912
| 3,045,182
|
Is it possible to get the entity from SQLAlchemys "do_orm_execute" event?
|
<p>We're trying to implement a History table for some select classes. All the queries are written for SQLAlchemy Core 1.x and passed to the <code>execute()</code> method on the <code>Session</code> object. That means we cannot use the more straightforward events such as "before_flush".</p>
<p>According to the <a href="https://docs.sqlalchemy.org/en/20/orm/session_events.html#session-execute-events" rel="nofollow noreferrer">docs</a> we should be able to access the type that is a target for an <code>insert()</code>, <code>update()</code> or <code>delete()</code></p>
<p>But that just throws <code>AttributeError: 'Update' object has no attribute 'column_descriptions'</code> (I'm guessing it's either due to it only being available on selects or only if the query originated from the ORM)</p>
<p>All the tables are declared in a file generated from sqlacodegen without any declarative base, like so:</p>
<pre><code>t_example = Table(
'examples', metadata,
Column('id', UUID, primary_key=True),
Column('name', Text, nullable=False),
)
</code></pre>
<p>Now we've newly migrated to SQLAlchemy 2.0 and implemented asyncio, implemented a declarative_base but since thousands of queries are written directly towards tables like:</p>
<pre><code>query = select(t_example.c.name).where(
t_example.c.id == some_id
)
</code></pre>
<p>And we don't want to spend weeks updating them but slowly migrating the tables to classes like so:</p>
<pre><code>class Example(Base, LogInsertMixin, LogUpdateMixin, LogDeleteMixin):
__table__ = t_example
id: Mapped[UUID]
name: Mapped[str]
</code></pre>
<p>As you can see we have some mixins that I thought could be declared on the models that require some logging to the database:</p>
<pre><code>class LogInsertMixin:
pass
class LogUpdateMixin:
pass
class LogDeleteMixin:
pass
</code></pre>
<p>But I can't find any way in the "do_orm_execute" event to make an <code>isinstance()</code> check against:</p>
<pre><code>class DatabaseLogger:
def orm_execute_event(self, orm_execute_state: ORMExecuteState):
if orm_execute_state.is_update and isinstance(*****what to check for? :(*****, LogUpdateMixin):
pass
</code></pre>
<p>I can see that the event fires but <code>ORMExecuteState</code> isn't well documented. Is this way even feasible? How can I find out what <code>Table()</code> was passed to an <code>update()</code> or <code>delete()</code>?</p>
|
<python><postgresql><sqlalchemy>
|
2023-05-22 18:15:16
| 1
| 477
|
HenrikM
|
76,308,874
| 1,232,087
|
VsCode - Python: Create Environment not available
|
<p>The question is about <a href="https://code.visualstudio.com/docs/python/environments" rel="nofollow noreferrer">Using Python environments in VS Code</a>.</p>
<p>The section <a href="https://code.visualstudio.com/docs/python/environments#_using-the-create-environment-command" rel="nofollow noreferrer">Using the Create Environment command</a> of the above link reads:</p>
<blockquote>
<p>In Command Palette (Ctrl+Shift+P), start typing the <strong>Python: Create Environment</strong> command to search, and then select the command.</p>
</blockquote>
<p>But when I type the above command in VsCode, as shown below, it says: <code>No matching commands</code>.</p>
<p><a href="https://i.sstatic.net/Tinlc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tinlc.png" alt="enter image description here" /></a></p>
<p>What may I be missing, and how can I fix the issue?</p>
<p>I have version 1.65.2 of <code>VsCode</code>, and version 3.10 of <code>python</code> installed on the latest version of <code>Windows 10 Pro</code></p>
|
<python><visual-studio-code><virtual-environment>
|
2023-05-22 18:09:37
| 1
| 24,239
|
nam
|
76,308,825
| 344,669
|
Python 3.10 on Windows: why does it show that the folder /home/appuser/temp exists?
|
<p>In my Python program, I want to check that a directory exists before using it. When I check the <code>/home/appuser/temp</code> folder, it returns <code>True</code>. Since it's a Linux path, I expect it to return <code>False</code>, and I'm not sure why it's returning <code>True</code>.</p>
<pre><code>>>> import os
>>> os.path.exists("/home/appuser/temp")
True
>>> from pathlib import Path
>>> Path("/home/appuser/temp").exists()
True
>>> import platform
>>> platform.system()
'Windows'
</code></pre>
<p>Why is it returning True? How will I locate this folder on Windows?</p>
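<p>For reference: on Windows, a path that begins with <code>/</code> but has no drive letter is interpreted relative to the current drive, so <code>/home/appuser/temp</code> resolves to something like <code>C:\home\appuser\temp</code>, which can genuinely exist. <code>os.path.abspath</code> shows how the path is resolved:</p>

```python
import os

# on Windows this prints something like C:\home\appuser\temp;
# on Linux it prints /home/appuser/temp
print(os.path.abspath("/home/appuser/temp"))
```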
<p>Thanks
SR</p>
|
<python><python-3.x>
|
2023-05-22 18:00:44
| 0
| 19,251
|
sfgroups
|
76,308,800
| 3,482,266
|
Merging/joining on candidates index in row
|
<p>In a pandas dataframe, there's a column called <code>candidates</code>. This column gives a list with several indices.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>Column 1</th>
<th>Candidates</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>34</td>
<td>[2,3,22]</td>
</tr>
<tr>
<td>1</td>
<td>42</td>
<td>[0,4,30,9]</td>
</tr>
<tr>
<td>2</td>
<td>231</td>
<td>...</td>
</tr>
<tr>
<td>3</td>
<td>123</td>
<td>...</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>I'm looking for joining/merging the dataframe with itself according to those candidates, just like in the example below...</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>Column 1</th>
<th>Index</th>
<th>Column 1</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>34</td>
<td>2</td>
<td>231</td>
</tr>
<tr>
<td>0</td>
<td>34</td>
<td>3</td>
<td>123</td>
</tr>
<tr>
<td>0</td>
<td>34</td>
<td>22</td>
<td>45</td>
</tr>
<tr>
<td>1</td>
<td>42</td>
<td>0</td>
<td>34</td>
</tr>
<tr>
<td>1</td>
<td>42</td>
<td>4</td>
<td>900</td>
</tr>
<tr>
<td>1</td>
<td>42</td>
<td>30</td>
<td>87</td>
</tr>
</tbody>
</table>
</div>
<p>How does one do this in pandas?</p>
<p>Note: Column 1_y values come from Column 1, in the original dataframe.</p>
|
<python><pandas>
|
2023-05-22 17:55:55
| 1
| 1,608
|
An old man in the sea.
|
76,308,776
| 11,021,252
|
How to extract the coordinates of the shaded region when LineString and circular Polygon geometries intersect?
|
<p>I have geometries, as shown in the figure:
three circular Polygon geometries (circle1, circle2 and circle3) and nineteen LineStrings (line0, line1, ..., line18).
I want to extract the coordinates of the grey-shaded regions (the area between circle1 and circle2, which has been equally divided by the passing LineStrings) in the figure. Here I only shaded three areas, but I want to extract the coordinates of all the similar regions individually as a list.</p>
<p>How can I do that? If the solution could use the functionalities of geopandas and shapely, it would be great.
I tried intersecting the LineStrings with the polygons, but it was of no use at all. Could you help me?
<a href="https://i.sstatic.net/Pimkl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pimkl.png" alt="enter image description here" /></a></p>
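<p>A sketch of one possible approach with shapely (toy geometries standing in for the real ones): union all the boundaries into noded linework, <code>polygonize</code> the faces, and keep the faces that lie between the two circles:</p>

```python
from shapely.geometry import Point, LineString
from shapely.ops import unary_union, polygonize

# toy stand-ins: an annulus between two circles, cut by four radial lines
ring = Point(0, 0).buffer(2).difference(Point(0, 0).buffer(1))
lines = [LineString([(0, 0), (3 * dx, 3 * dy)])
         for dx, dy in [(1, 0), (0, 1), (-1, 0), (0, -1)]]

# node all boundaries together, rebuild the faces, keep those inside the annulus
merged = unary_union([ring.boundary] + lines)
pieces = [p for p in polygonize(merged)
          if p.representative_point().within(ring)]

coords = [list(p.exterior.coords) for p in pieces]  # coordinates per region
print(len(pieces))
```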
|
<python><geometry><gis><geopandas><shapely>
|
2023-05-22 17:51:26
| 1
| 507
|
VGB
|
76,308,700
| 21,420,742
|
Merging dataframes on differently-named columns in pandas while only keeping a subset of columns
|
<p>I have 2 datasets and I need to merge over only specific columns but none of the fields have the same name.</p>
<p>DF1</p>
<pre><code> Name ID Job_Type Emp_Type
Adam 101 Full-Time Employee
Ben 102 Part-Time Contractor
Cathy 103 Part-Time Employee
Doug 104 Full-Time Contractor
Emily 105 Full-Time Employee
</code></pre>
<p>DF2</p>
<pre><code> hiring_manager_id hire_type hiring_status hiring_phase name_of_emp
101 Employee Pending Requested NaN
101 Employee Approved Hired Sam
105 Contractor Approved Approved NaN
113 Employee Approved Hired Gabe
119 Contractor Pending Interviewing NaN
</code></pre>
<p>I would like to take specific columns and add them to <code>df1</code> like so:</p>
<pre><code>Name ID Job_Type Emp_Type hire_type hiring_status hiring_phase
Adam 101 Full-Time Employee Employee Pending Requested
Adam 101 Full-Time Employee Contractor Approved Hired
Ben 102 Part-Time Contractor NaN NaN NaN
Cathy 103 Part-Time Employee NaN NaN NaN
Doug 104 Full-Time Contractor NaN NaN NaN
Emily 105 Full-Time Employee Contractor Approved Hired
</code></pre>
<p>I tried</p>
<pre class="lang-py prettyprint-override"><code>df1 = pd.merge(df1, df2[['ID','hire_type','hiring_status','hiring_phase']], on = 'ID', how= 'left')
</code></pre>
<p>This raised a <code>KeyError</code> when I tried it, since <code>df2</code> has no <code>ID</code> column.</p>
<p>Any Suggestions? Thank you!</p>
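<p>A sketch of the usual approach with <code>left_on</code>/<code>right_on</code>, since the key columns are named differently in the two frames (abbreviated frames below):</p>

```python
import pandas as pd

df1 = pd.DataFrame({"Name": ["Adam", "Ben"], "ID": [101, 102]})
df2 = pd.DataFrame({"hiring_manager_id": [101, 101],
                    "hiring_status": ["Pending", "Approved"]})

# merge on differently-named keys; unmatched left rows get NaN with how='left'
out = df1.merge(df2, left_on="ID", right_on="hiring_manager_id", how="left")
out = out.drop(columns="hiring_manager_id")  # drop the now-duplicate key column
print(out)
```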
|
<python><python-3.x><pandas><dataframe>
|
2023-05-22 17:40:52
| 2
| 473
|
Coding_Nubie
|
76,308,673
| 11,360,794
|
Kombu - RabbitMQ - Pytest - [Errno 104] Connection reset by peer
|
<p>I'm using Kombu to connect to RabbitMQ, and ran into some strange behavior. I have a generic producer function (see below) that I use throughout our code base. This works very well in most cases, but some of our unit tests started failing here. The connection is closed when opening the with block, and opening the connection manually results in a <code>kombu.exceptions.OperationalError: [Errno 104] Connection reset by peer</code>.</p>
<p>Everything is run with docker-compose; the tests are also run inside a container. Though the reset packet indicates a network issue, I don't have the feeling it's particularly network related, as I have no issues sending the message from other places in the code. I suppose it could have something to do with the testing setup, but I have no idea where to start looking.</p>
<ul>
<li>I validated that the message is properly serialized</li>
<li>I validated that the connection string is working</li>
<li>I validated that the two containers have a network link</li>
<li>I validated compatible versions (Kombu 5.2.3, AMQP 5.1.1, RabbitMQ 3.11.8)</li>
<li>There are no concurrent open connections</li>
<li>There are no messages in the queue</li>
<li>There are a lot of Stack Overflow and GitHub tickets that mention 'Connection reset by peer', but beyond the fact that a reset packet was sent from the remote side, relevant information is hard to find.</li>
</ul>
<p>The strange thing is really that this function works fine when called, but fails in a fraction of our tests. There's not much strange going on with our pytest setup though (init file at the bottom).</p>
<hr />
<p>The error traceback:</p>
<pre><code>platform/resources.py:39: in publish_event
send_amqp_message(event)
messagebus/producer.py:16: in send_amqp_message
conn.connect()
/usr/local/lib/python3.9/dist-packages/kombu/connection.py:275: in connect
return self._ensure_connection(
/usr/local/lib/python3.9/dist-packages/kombu/connection.py:434: in _ensure_connection
return retry_over_time(
/usr/local/lib/python3.9/dist-packages/kombu/utils/functional.py:312: in retry_over_time
return fun(*args, **kwargs)
/usr/local/lib/python3.9/dist-packages/kombu/connection.py:878: in _connection_factory
self._connection = self._establish_connection()
/usr/local/lib/python3.9/dist-packages/kombu/connection.py:813: in _establish_connection
conn = self.transport.establish_connection()
/usr/local/lib/python3.9/dist-packages/kombu/transport/pyamqp.py:201: in establish_connection
conn.connect()
/usr/local/lib/python3.9/dist-packages/amqp/connection.py:329: in connect
self.drain_events(timeout=self.connect_timeout)
/usr/local/lib/python3.9/dist-packages/amqp/connection.py:525: in drain_events
while not self.blocking_read(timeout):
/usr/local/lib/python3.9/dist-packages/amqp/connection.py:530: in blocking_read
frame = self.transport.read_frame()
/usr/local/lib/python3.9/dist-packages/amqp/transport.py:312: in read_frame
payload = read(size)
/usr/local/lib/python3.9/dist-packages/amqp/transport.py:627: in _read
s = recv(n - len(rbuf))
/usr/local/lib/python3.9/dist-packages/httpretty/core.py:697: in recv
return self.forward_and_trace('recv', buffersize, *args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <httpretty.core.fakesock.socket object at 0x402d1bf760>, function_name = 'recv', a = (1342177288,), kw = {}
function = <built-in method recv of socket object at 0x402d1b19a0>
callback = <built-in method recv of socket object at 0x402d1b19a0>
def forward_and_trace(self, function_name, *a, **kw):
if self.truesock and not self.__truesock_is_connected__:
self.truesock = self.create_socket()
### self.connect_truesock()
if self.__truesock_is_connected__:
function = getattr(self.truesock, function_name)
if self.is_http:
if self.truesock and not self.__truesock_is_connected__:
self.truesock = self.create_socket()
### self.connect_truesock()
if not self.truesock:
raise UnmockedError()
callback = getattr(self.truesock, function_name)
> return callback(*a, **kw)
E ConnectionResetError: [Errno 104] Connection reset by peer
/usr/local/lib/python3.9/dist-packages/httpretty/core.py:666: ConnectionResetError
</code></pre>
<p>The producer function:</p>
<pre><code>def send_amqp_message(message: BaseEvent, queue: str = "eventbus") -> None:
with Connection(MESSAGEBUS_CONNECTION_STRING) as conn:
if not conn.connected:
conn.connect()
queue = conn.SimpleQueue(queue)
queue.put({"event": message.name, "payload": asdict(message.data)}, serializer="uuid")
queue.close()
</code></pre>
<p>The RabbitMQ logs (on debug):</p>
<pre><code>rabbitmq-1 | 2023-05-22 17:18:53.633771+00:00 [info] <0.2691.0> accepting AMQP connection <0.2691.0> (172.26.0.5:32944 -> 172.26.0.4:5672)
rabbitmq-1 | 2023-05-22 17:18:53.633833+00:00 [error] <0.2691.0> closing AMQP connection <0.2691.0> (172.26.0.5:32944 -> 172.26.0.4:5672):
rabbitmq-1 | 2023-05-22 17:18:53.633833+00:00 [error] <0.2691.0> {bad_header,<<1,0,0,0,0,0,177,0>>}
rabbitmq-1 | 2023-05-22 17:18:55.687146+00:00 [info] <0.2701.0> accepting AMQP connection <0.2701.0> (172.26.0.5:32958 -> 172.26.0.4:5672)
rabbitmq-1 | 2023-05-22 17:18:55.693462+00:00 [warning] <0.2701.0> closing AMQP connection <0.2701.0> (172.26.0.5:32958 -> 172.26.0.4:5672):
rabbitmq-1 | 2023-05-22 17:18:55.693462+00:00 [warning] <0.2701.0> client unexpectedly closed TCP connection
rabbitmq-1 | 2023-05-22 17:18:55.695395+00:00 [info] <0.2706.0> accepting AMQP connection <0.2706.0> (172.26.0.5:32968 -> 172.26.0.4:5672)
rabbitmq-1 | 2023-05-22 17:18:55.695585+00:00 [error] <0.2706.0> closing AMQP connection <0.2706.0> (172.26.0.5:32968 -> 172.26.0.4:5672):
rabbitmq-1 | 2023-05-22 17:18:55.695585+00:00 [error] <0.2706.0> {bad_header,<<1,0,0,0,0,0,177,0>>}
rabbitmq-1 | 2023-05-22 17:18:59.759150+00:00 [info] <0.2718.0> accepting AMQP connection <0.2718.0> (172.26.0.5:45838 -> 172.26.0.4:5672)
rabbitmq-1 | 2023-05-22 17:18:59.765966+00:00 [warning] <0.2718.0> closing AMQP connection <0.2718.0> (172.26.0.5:45838 -> 172.26.0.4:5672):
rabbitmq-1 | 2023-05-22 17:18:59.765966+00:00 [warning] <0.2718.0> client unexpectedly closed TCP connection
rabbitmq-1 | 2023-05-22 17:18:59.767295+00:00 [info] <0.2723.0> accepting AMQP connection <0.2723.0> (172.26.0.5:45842 -> 172.26.0.4:5672)
rabbitmq-1 | 2023-05-22 17:18:59.767437+00:00 [error] <0.2723.0> closing AMQP connection <0.2723.0> (172.26.0.5:45842 -> 172.26.0.4:5672):
rabbitmq-1 | 2023-05-22 17:18:59.767437+00:00 [error] <0.2723.0> {bad_header,<<1,0,0,0,0,0,177,0>>}
</code></pre>
<p>And the docker-compose, for what it's worth:</p>
<pre><code>version: '2'
services:
rabbitmq:
image: rabbitmq:3-management-alpine
restart: always
environment:
RABBITMQ_DEFAULT_USER: guest
RABBITMQ_DEFAULT_PASS: guest
RABBITMQ_LOG_LEVEL: debug
ports:
- 5672:5672
- 15672:15672
volumes:
- ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
- ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
redis:
image: redis:alpine
test_container:
command: /src/bin/debug
depends_on:
- rabbitmq
- redis
environment:
CELERY_URI: amqp://guest:guest@rabbitmq
TRUSTED_DOMAINS: 'service;example.com;another-example.com;test'
MESSAGEBUS_CONNECTION_STR: amqp://guest:guest@rabbitmq
build:
context: .
dockerfile: Dockerfile
ports:
- "8000:8000"
volumes:
- '.:/src'
</code></pre>
<p>Pytest.ini</p>
<pre><code>[pytest]
norecursedirs = ve
testpaths = /src/tests
junit_family=xunit2
; addopts=--tb=short -n 5 --dist=loadscope
</code></pre>
|
<python><rabbitmq><celery><kombu>
|
2023-05-22 17:35:01
| 1
| 790
|
Niels Uitterdijk
|
76,308,664
| 19,968,680
|
Tables are different accessing database with SQLAlchemy and psql
|
<p>The tables in my PostgreSQL database look different depending on whether I connect with psql or with SQLAlchemy. A simple SELECT does not return the same results with the following two methods: psql shows an old version of the table with no rows, while through SQLAlchemy I can update the data and fetch the most recent rows.</p>
<p>psql method:</p>
<pre><code>bash $ psql $DATABASE_URL
user=> SELECT * FROM scraped_data; /* This results in an empty table! */
id | user_id | date | state | progress |
</code></pre>
<p>SQLAlchemy method:</p>
<pre><code># Where DATABASE_URL has asyncpg driver specified.
ENGINE = create_async_engine(os.environ["DATABASE_URL"], future=True)

async def get_scraped_data():
    async with ENGINE.connect() as conn:
        result = await conn.execute(
            text("SELECT * FROM scraped_data")
        )
        return result.all()  # This works, and I get the expected data!
</code></pre>
<p>When I query the database using SQLAlchemy, I am able to get the most recent data.</p>
<p>Why is there a difference in the tables when using psql and when using SQLAlchemy?</p>
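<p>A plausible explanation is that the two clients are not talking to the same database (different host, port, database name, or schema/<code>search_path</code>), e.g. because <code>DATABASE_URL</code> differs between the two environments. A quick stdlib sanity check that two connection strings select the same database (the URLs below are illustrative, not taken from the question):</p>
<pre><code>from urllib.parse import urlsplit

def db_identity(dsn):
    # ignore an SQLAlchemy "+driver" suffix so both URL styles compare
    u = urlsplit(dsn)
    return (u.scheme.split("+")[0], u.hostname, u.port, u.path)

psql_url = "postgresql://user:pw@db-host:5432/app"          # what psql used
sqla_url = "postgresql+asyncpg://user:pw@db-host:5432/app"  # what SQLAlchemy used
print(db_identity(psql_url) == db_identity(sqla_url))       # True
</code></pre>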
|
<python><database><postgresql><sqlalchemy><orm>
|
2023-05-22 17:34:03
| 0
| 322
|
gbiz123
|
76,308,628
| 12,098,671
|
How to download images attached to a tweet from the tweet link?
|
<p>Since the Twitter API is no longer free, how can I download images from tweet links using Python?</p>
|
<python><web-scraping>
|
2023-05-22 17:28:39
| 1
| 759
|
Yash Khasgiwala
|
76,308,514
| 5,252,492
|
How to implement MASE (Mean Absolute Scaled Error) in python
|
<p>I have Predicted values and Actual values, and I can calculate Mean Absolute Percentage Error by doing:</p>
<pre><code>abs(Predicted-Actual)/ Predicted *100
</code></pre>
<p>How do I calculate MASE with respect to my Predicted and Actual values?</p>
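<p>For reference, MASE scales the forecast MAE by the in-sample MAE of a naive one-step forecast (each value predicted by its predecessor). A minimal NumPy sketch; the function name, and the fallback of using the actuals as the scaling series when no training data is passed, are my own choices:</p>
<pre><code>import numpy as np

def mase(actual, predicted, train=None):
    # scale = MAE of the naive one-step forecast (x_t predicted by x_{t-1}),
    # computed on the training series if given, else on the actuals
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ref = np.asarray(train if train is not None else actual, dtype=float)
    scale = np.mean(np.abs(np.diff(ref)))
    return np.mean(np.abs(actual - predicted)) / scale

print(mase([10, 12, 14], [11, 13, 15]))  # 0.5
</code></pre>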
|
<python><statistics>
|
2023-05-22 17:13:05
| 2
| 6,145
|
azazelspeaks
|
76,308,496
| 4,035,257
|
Find the exact location of an element in pandas dataframe
|
<p>I have the following pandas dataframe:</p>
<p><code>df = pd.DataFrame({"A": [1,2,3], "B": [-2,8,1], "C": [-451,23,326]})</code></p>
<p>Is there any function that returns the exact location of an element, assuming the element exists in the table and has no duplicates? E.g. for <code>element = 326</code> it would return <code>row: 2, col: 2</code>.
Many thanks</p>
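<p>One way to get the positional (row, column) indices is to drop to NumPy and use <code>np.argwhere</code> on an equality mask (a sketch; it returns the first match):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [-2, 8, 1], "C": [-451, 23, 326]})

# positional (row, column) indices of the first matching cell
row, col = np.argwhere(df.to_numpy() == 326)[0]
print(row, col)  # 2 2
</code></pre>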
|
<python><pandas><dataframe>
|
2023-05-22 17:09:19
| 2
| 362
|
Telis
|
76,308,481
| 11,942,410
|
error while updating table in MySQL : mysql.connector.errors.DatabaseError: 1412 (HY000): Table definition has changed, please retry transaction
|
<p>I am working on a personal project.
There are multiple similar questions on this forum, but each one is case specific.
In my case, I create and load a temp table, join it with an existing table in the DB, and update that table based on the temp table's results. While executing <code>db.execute_insert_update()</code> it throws an exception: <code>mysql.connector.errors.DatabaseError: 1412 (HY000): Table definition has changed, please retry transaction</code></p>
<pre><code> > def _create_and_load_table(self, dataframe: DataFrame) -> str: #first_function
self._log_entry()
temp_table_name = self._generate_table_name()
table_schema = "partner_staging"
table_with_schema = f"{table_schema}.{temp_table_name}"
self.df_mgr.jdbc_write(dataframe=dataframe, dbtable=table_with_schema,
check_count_before_write=True)
self._log_exit()
return temp_table_name
def reconcile_partner_payments(self, dataframe: DataFrame) -> NoReturn: #second function
self.logger.info("Start reconcile_partner_payments")
self.logger.info(f"df_col:{dataframe.columns}")
partner_payment_ids = dataframe.select("partner_payment_id",
"effective_payroll_date").distinct()
self.logger.info(f"head:{partner_payment_ids.head(3)}")
self.logger.info(f"count:{partner_payment_ids.count()}")
self.logger.info(f"c:{partner_payment_ids.columns}")
temp_table_name = self._create_and_load_table(partner_payment_ids)
table_schema = "partner_staging"
update_query = """UPDATE abc.partner_payment as pp
JOIN partner_staging.{temp_table_name} as t on pp.id =
t.partner_payment_id
SET pp.reconciled = 1, pp.modify_ts = NOW() , pp.modify_user = 'ok
successfull'"""
self.db.execute_insert_update(query_sql=update_query)
self.db.drop_table(table_schema, temp_table_name)
self.logger.info("End reconcile_partner_payments")
def execute_insert_update(self, query_sql, args=()) -> None: #method being used in second_function
try:
cursor = self.get_cursor()
self.start_transaction()
cursor.execute(query_sql, args)
self._connection.handle_unread_result()
self.commit()
except Error as e:
stack = traceback.format_exc()
self.rollback()
print(stack)
logger_func = Cache()[keys.LOGGER].info if keys.LOGGER in Cache() else print
logger_func(f"stack_: {stack}")
raise e
finally:
self.close_cursor()
def _generate_table_name(self): #generate unique name
self.logger.info("Start _generate_table_name")
rand_str = random_str(length=4)
self.logger.info("End _generate_table_name")
return f"temp_{self.partner_name}_{self.company_name}_{self.payroll_name}_{rand_str}"
</code></pre>
<p>I tried defining the schema for the temp table explicitly, but I get the same error.</p>
|
<python><mysql><pyspark>
|
2023-05-22 17:06:25
| 1
| 326
|
vish
|
76,308,468
| 836,026
|
Moving class call using registry from module to Jupyter notebook
|
<p>I'm trying to test some code that was initially written to run as a class inside a module, but I'm moving it to run in a Jupyter notebook.
The code uses a class registry to create objects in <code>__init__.py</code> (see the code below). I believe I copied all the relevant classes, but I still get an error saying the object is missing from the registry.
Is there a way to fix the error, or to simply bypass the class registry?</p>
<pre><code># ********* inside __init__.py ************
from basic.utils import METRIC_REGISTRY
__all__ = ['calculate_psnr', 'calculate2']
def calculate_metric(data, opt):
opt = deepcopy(opt)
metric_type = opt.pop('type')
metric = METRIC_REGISTRY.get(metric_type)(**data, **opt)
return metric
</code></pre>
<p>Registry class:</p>
<pre><code>class Registry():

    def __init__(self, name):
self._name = name
self._obj_map = {}
def _do_register(self, name, obj, suffix=None):
if isinstance(suffix, str):
name = name + '_' + suffix
assert (name not in self._obj_map), (f"An object named '{name}' was already registered "
f"in '{self._name}' registry!")
self._obj_map[name] = obj
def register(self, obj=None, suffix=None):
if obj is None:
# used as a decorator
def deco(func_or_class):
name = func_or_class.__name__
self._do_register(name, func_or_class, suffix)
return func_or_class
return deco
# used as a function call
name = obj.__name__
self._do_register(name, obj, suffix)
def get(self, name, suffix='basicsr'):
ret = self._obj_map.get(name)
if ret is None:
ret = self._obj_map.get(name + '_' + suffix)
print(f'Name {name} is not found, use name: {name}_{suffix}!')
if ret is None:
raise KeyError(f"No object named '{name}' found in '{self._name}' registry!")
return ret
def __contains__(self, name):
return name in self._obj_map
def __iter__(self):
return iter(self._obj_map.items())
def keys(self):
return self._obj_map.keys()
LOSS_REGISTRY = Registry('loss')
METRIC_REGISTRY = Registry('metric')
</code></pre>
<p>Implementation:</p>
<pre><code>@METRIC_REGISTRY.register()
def calculate_ssim(val, val2, **kwargs):
# some implemantion
return somevalue
</code></pre>
<p>Call function:</p>
<pre><code>calculate_metric(metric_data, opt_)
</code></pre>
<p>Error:</p>
<pre><code>-> 2582 self.metric_results[name] += calculate_metric(metric_data, opt_)
2584 if use_pbar:
Input In [53], in calculate_metric(data, opt)
11 opt = deepcopy(opt)
12 metric_type = opt.pop('type')
---> 13 metric = METRIC_REGISTRY.get(metric_type)(**data, **opt)
14 return metric
Input In [54], in Registry.get(self, name, suffix)
1870 print(f'Name {name} is not found, use name: {name}_{suffix}!')
1871 if ret is None:
-> 1872 raise KeyError(f"No object named '{name}' found in '{self._name}' registry!")
1873 return ret
KeyError: "No object named 'calculate_psnr' found in 'metric' registry!"
</code></pre>
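<p>A likely cause: the <code>@METRIC_REGISTRY.register()</code> decorators ran against the module-level registry when the original package was imported, so re-defining <code>Registry</code> (and <code>METRIC_REGISTRY</code>) in the notebook creates a fresh, empty map in which <code>calculate_psnr</code> was never registered. Re-running the decorated function definitions after creating the notebook's registry instance should populate it. A stripped-down sketch of the registration order that works (names here are illustrative, not from basicsr):</p>
<pre><code># lookups only succeed for functions decorated AFTER this registry object exists
class MiniRegistry:
    def __init__(self):
        self._obj_map = {}

    def register(self, func):
        self._obj_map[func.__name__] = func
        return func

    def get(self, name):
        return self._obj_map[name]

METRICS = MiniRegistry()

@METRICS.register
def calculate_psnr(a, b):
    return abs(a - b)

print(METRICS.get("calculate_psnr")(3, 5))  # 2
</code></pre>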
|
<python><pytorch>
|
2023-05-22 17:04:31
| 0
| 11,430
|
user836026
|
76,308,393
| 1,471,980
|
How do you convert a dict to JSON in Python?
|
<pre><code>r3={'result':[{'serverId':101, 'serverName':'abc', 'data':[{'percentUsed':10, 'value':1, 'rvalue':20}]},{'serverId':102,'serverName':'eee', 'data':[{'percentUsed':30, 'value':3, 'rvalue':24}]}]}
print(type(r3))
dict
</code></pre>
<p>I need to convert this r3 to json</p>
<p>I have tried this</p>
<pre><code>json=json.loads(r3)
</code></pre>
<p>I get this error:</p>
<pre><code>'dict' object has no attribute 'loads'
</code></pre>
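<p>For the record, <code>json.loads</code> parses a JSON <em>string</em> into Python objects; the dict-to-string direction is <code>json.dumps</code>. The <code>'dict' object has no attribute 'loads'</code> message also suggests the name <code>json</code> was likely already rebound to a dict earlier in the session (e.g. by <code>json=json.loads(...)</code>), shadowing the module. A minimal sketch with a smaller dict:</p>
<pre><code>import json

r3 = {"result": [{"serverId": 101, "data": [{"percentUsed": 10}]}]}

s = json.dumps(r3)         # dict -> JSON string
roundtrip = json.loads(s)  # JSON string -> dict
print(roundtrip == r3)     # True
</code></pre>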
|
<python><json><dictionary>
|
2023-05-22 16:54:48
| 1
| 10,714
|
user1471980
|
76,308,299
| 2,221,360
|
PyQt/ PySide control widgets' actions from diffreent class or file
|
<p>I am working on a GUI project that was started as simple but growing in size now. Therefore, I have created multiple classes in multiple files to make the code easily understandable.</p>
<p>What I want is that each class should do specific tasks when clicking a particular button or widget.</p>
<p>Here is the simple app I have created using QDesigner which contains three buttons.</p>
<p><a href="https://i.sstatic.net/c96Zn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c96Zn.png" alt="enter image description here" /></a></p>
<p>Here is the code I am trying to implement.</p>
<pre><code>#!/bin/env python
import sys
from PyQt6 import QtWidgets
from ui_main import Ui_MainWindow
class Page2(QtWidgets.QMainWindow, Ui_MainWindow):
def __init__(self):
super().__init__()
self.setupUi(self)
# Keeping this action command within __init__ method does not work
self.btn2.clicked.connect(self.on_click_btn2) # <-- This does not work
def on_click_btn2(self):
print("Clicked Button 2")
class Page3(QtWidgets.QMainWindow, Ui_MainWindow):
def __init__(self):
super().__init__()
self.setupUi(self)
def on_click_btn3(self):
print("Clicked Button 3")
def clicks(self):
""" Calling this function from parent class works """
self.btn3.clicked.connect(self.on_click_btn3)
class MainWindow(Page2, Page3, QtWidgets.QMainWindow, Ui_MainWindow):
def __init__(self):
super().__init__()
self.setupUi(self)
Page3.clicks(self)
self.btn1.clicked.connect(self.on_click_btn1)
def on_click_btn1(self):
print("Clicked Button 1")
app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
</code></pre>
<p>My question is: why am I not able to perform any actions from the child class's <code>__init__</code> method?</p>
<p>Here is the content of ui_main.py file that was converted using uic6</p>
<pre><code>from PyQt6 import QtCore, QtGui, QtWidgets
from PyQt6.QtWidgets import QPushButton as PushButton
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(332, 360)
self.centralwidget = QtWidgets.QWidget(parent=MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.verticalLayout = QtWidgets.QVBoxLayout(self.centralwidget)
self.verticalLayout.setContentsMargins(-1, 45, -1, -1)
self.verticalLayout.setObjectName("verticalLayout")
self.btn1 = PushButton(parent=self.centralwidget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Policy.Minimum, QtWidgets.QSizePolicy.Policy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.btn1.sizePolicy().hasHeightForWidth())
self.btn1.setSizePolicy(sizePolicy)
self.btn1.setObjectName("btn1")
self.verticalLayout.addWidget(self.btn1)
self.btn2 = PushButton(parent=self.centralwidget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Policy.Minimum, QtWidgets.QSizePolicy.Policy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.btn2.sizePolicy().hasHeightForWidth())
self.btn2.setSizePolicy(sizePolicy)
self.btn2.setObjectName("btn2")
self.verticalLayout.addWidget(self.btn2)
self.btn3 = PushButton(parent=self.centralwidget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Policy.Minimum, QtWidgets.QSizePolicy.Policy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.btn3.sizePolicy().hasHeightForWidth())
self.btn3.setSizePolicy(sizePolicy)
self.btn3.setObjectName("btn3")
self.verticalLayout.addWidget(self.btn3)
MainWindow.setCentralWidget(self.centralwidget)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.btn1.setText(_translate("MainWindow", "Page 1"))
self.btn2.setText(_translate("MainWindow", "Page 2"))
self.btn3.setText(_translate("MainWindow", "Page 3"))
</code></pre>
<p>Any help is appreciated.</p>
|
<python><pyqt><pyside><pyside6><pyqt6>
|
2023-05-22 16:40:24
| 1
| 3,910
|
sundar_ima
|
76,308,180
| 8,040,369
|
pymodbus: get data from input registers in parallel using multiprocessing in Python
|
<p>I am using <strong>pymodbus</strong> library to get data from sensors using the below multiprocessing code</p>
<pre><code>def Get_Latest_Sensor_Data(tInput_Device_ID_List):
sub_df = tInput_df[tInput_df["device_id"] == tInput_Device_ID_List]
sub_df = sub_df.reset_index()
tIP_Addr = sub_df.iloc[0]['ip_addr']
tIP_Port = sub_df.iloc[0]['ip_port']
tDevice_ID = sub_df.iloc[0]['device_id']
tActive_Energy_Addr = sub_df.iloc[0]['active_energy_addr']
client = ModbusTcpClient(tIP_Addr, port=int(tIP_Port), timeout=10)
connection = client.connect()
tData_Len = 2
request = client.read_input_registers(int(tActive_Energy_Addr), tData_Len, unit=int(tDevice_ID))
result = request.registers
decoder = BinaryPayloadDecoder.fromRegisters(request.registers, byteorder=Endian.Big, wordorder=Endian.Big)
active_power_w = decoder.decode_32bit_int()
return tFinal_df
tInput_List = {
"ip_addr": ["<IP_ADDR>", "<IP_ADDR>"],
"ip_port": [502, 502],
"device_id":[51, 52],
"active_energy_addr": [220, 220]
}
tInput_df = pd.DataFrame(tInput_List)
tInput_IP_List = tInput_df['ip_addr'].tolist()
tInput_Device_ID_List = tInput_df['device_id'].tolist()
if __name__ == "__main__":
pool = Pool(processes=2)
tResult = pool.map(Get_Latest_Sensor_Data, tInput_Device_ID_List)
pool.close()
pool.join()
print(tResult)
</code></pre>
<p>I can see that this code works, and I am able to get the energy meter readings as well.</p>
<p>But since <strong>Get_Latest_Sensor_Data()</strong> runs twice, the <strong>client connection is established twice</strong>, and the number of connections grows as I increase the number of processes.
I tried creating the connection before calling <code>Pool.map()</code>, but it throws an error if more than one device id is used.</p>
<p>Is there a way to establish the connection to the IP_ADDR only once and read the energy meters for any number of device ids?</p>
<p>Is this possible with <code>Pool.map()</code>, or do I need to use something else?</p>
<p>Any help is much appreciated.</p>
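<p>One common pattern is to open one connection per <em>worker process</em> (not per task) using <code>Pool</code>'s <code>initializer</code>, so each mapped call reuses its process-global connection. A hedged sketch with a placeholder standing in for <code>ModbusTcpClient</code> (the real client would be created and connected inside <code>init_worker</code>):</p>
<pre><code>from multiprocessing import Pool

_client = None  # one connection object per worker process

def init_worker(ip, port):
    global _client
    # placeholder: in real code, _client = ModbusTcpClient(ip, port=port); _client.connect()
    _client = (ip, port)

def read_device(device_id):
    # reuses the process-global connection for every device id
    return (_client, device_id)

if __name__ == "__main__":
    with Pool(processes=2, initializer=init_worker, initargs=("192.0.2.1", 502)) as pool:
        print(pool.map(read_device, [51, 52]))
</code></pre>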
<p>Thanks,</p>
|
<python><multiprocessing><pymodbus>
|
2023-05-22 16:23:02
| 0
| 787
|
SM079
|
76,308,144
| 817,824
|
Having trouble creating a virtual environment with python3.9
|
<p>I have installed python3.9 through apt in my Ubuntu 18.04.6 LTS. Then I was trying to create a new virtual environment that runs on 3.9. This is the process I did:</p>
<pre><code>$ python3.9 -m venv /home/user1/py3.9
Error: Command '['/home/user1/py3.9/bin/python3.9', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1.
</code></pre>
<p>To remedy this I tried upgrading pip, which raised an error regarding distutils. Here was the output of that:</p>
<pre><code>$ python3.9 -m pip install --upgrade pip
Traceback (most recent call last):
File "/usr/lib/python3.9/runpy.py", line 188, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/lib/python3.9/runpy.py", line 147, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "/usr/lib/python3.9/runpy.py", line 111, in _get_module_details
__import__(pkg_name)
File "/usr/lib/python3/dist-packages/pip/__init__.py", line 29, in <module>
from pip.utils import get_installed_distributions, get_prog
File "/usr/lib/python3/dist-packages/pip/utils/__init__.py", line 23, in <module>
from pip.locations import (
File "/usr/lib/python3/dist-packages/pip/locations.py", line 9, in <module>
from distutils import sysconfig
ImportError: cannot import name 'sysconfig' from 'distutils' (/usr/lib/python3.9/distutils/__init__.py)
</code></pre>
<p>I then installed distutils with <code>sudo apt install python3.9-distutils</code>, which was successful, and after that I could upgrade pip.
But I still get the same <code>returned non-zero exit status 1</code> error when I try to create the virtual environment. I'm not sure what I did wrong; I did install python3.11 in the past and, IIRC, could create a virtual environment the same way. I'm not sure what to do at this point.</p>
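<p>For what it's worth, on Debian/Ubuntu the <code>ensurepip</code> pieces that <code>venv</code> relies on ship in a separate per-version package, so the non-zero exit status from <code>ensurepip</code> is often fixed by installing it (a guess consistent with the distutils symptom above):</p>
<pre><code>sudo apt install python3.9-venv
python3.9 -m venv ~/py3.9
</code></pre>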
|
<python><python-3.x><virtualenv><python-venv>
|
2023-05-22 16:19:02
| 1
| 477
|
ponir
|
76,308,129
| 15,055
|
Is there a way to serialize and resume async functions between interpreter restarts?
|
<p>Say I have some async code like so:</p>
<pre><code> async def start(self):
name = await self.text_prompt("Hi there! What's your name?")
await self.send_message(f"Hello, {name}! How are you?")
# --- crash ---
await asyncio.sleep(1)
await self.send_message("Lol")
</code></pre>
<p>I'm looking to make my runner code robust between interpreter crashes. In an ideal world, if the Python interpreter crashes right after the <code>send_message</code> and I restart it, I would be able to execute some magical series of steps such that the async routine here resumes right at the <code># --- crash ---</code> point.</p>
<p>I could definitely make it work by changing how I write the code, for example if I re-wrote it like this:</p>
<pre><code> async def start(self):
self.execute_steps([
Step(self.text_prompt, "Hi there! What's your name?", store='name_response'),
Step(self.send_message_name_response),
Step(asyncio.sleep, 1),
Step(self.send_message, "Lol"),
])
</code></pre>
<p>Now I can serialize the Steps, my runner can keep track where it is, rebuild all needed data structures, etc.</p>
<p>My question is if this is possible just with the plain <code>async def</code> style?</p>
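<p>As a point of comparison, the explicit-steps style above can be made restart-safe with very little machinery by persisting the index of the last completed step; plain <code>async def</code> coroutine frames, by contrast, are not picklable, which is the core obstacle. A sketch (the class name and JSON checkpoint format are my own, and it is not crash-atomic):</p>
<pre><code>import asyncio
import json
import os
import tempfile

class StepRunner:
    """Persist the index of the last completed step so a restarted
    process skips straight past it."""

    def __init__(self, state_path):
        self.state_path = state_path

    def _done(self):
        if os.path.exists(self.state_path):
            with open(self.state_path) as f:
                return json.load(f)["done"]
        return 0

    async def run(self, steps):
        for i in range(self._done(), len(steps)):
            await steps[i]()
            with open(self.state_path, "w") as f:
                json.dump({"done": i + 1}, f)

# usage: a step re-runs only if it was never checkpointed
out = []
async def greet(): out.append("hello")
async def laugh(): out.append("lol")

path = os.path.join(tempfile.mkdtemp(), "progress.json")
asyncio.run(StepRunner(path).run([greet, laugh]))
asyncio.run(StepRunner(path).run([greet, laugh]))  # resumes: runs nothing
print(out)  # ['hello', 'lol']
</code></pre>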
|
<python><serialization><async-await>
|
2023-05-22 16:16:14
| 0
| 230,589
|
Claudiu
|
76,308,118
| 10,266,106
|
Finding Percentiles and Values From Calculated Gamma Distribution
|
<p><strong>Background</strong></p>
<p>I am working on computing a series of best-fit gamma curves for a 2-D dataset in Numpy (ndarray), a prior question for the genesis of this can be found <a href="https://stackoverflow.com/q/75598344/10266106">here</a>.</p>
<p>SciPy was previously utilized (<code>scipy.stats.gamma</code>); however, this library is not optimized for multi-dimensional arrays, and a barebones function was written to meet the objective. I've successfully fit (albeit not as cleanly as SciPy) a curve to the dataset, which is provided below.</p>
<p><strong>Current Issue</strong></p>
<p>I want to obtain the percentile of a given value and vice versa along the calculated gamma distribution. However, I'm not obtaining expected values off the fitted curve. For example, providing the 50th percentile yields a value of 4.471, which does not match up with the curve fit shown below. What modifications or wholesale alterations can be made to yield both percentiles and values from supplied data?</p>
<p><strong>Graph</strong></p>
<p><a href="https://i.sstatic.net/QchWe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QchWe.png" alt="Gamma Fitting Output" /></a></p>
<p><strong>Code</strong></p>
<pre><code>import sys, os, math
import numpy as np
import scipy as sci
import matplotlib.pyplot as plt
data = np.array([0.00, 0.00, 11.26399994, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 17.06399918, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 8.33279991, 0.00, 7.54879951, 0.00, 0.00, 0.00, 4.58799982, 7.9776001, 0.00, 0.00, 0.00, 0.00, 11.45040035, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 18.73279953, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 8.94559956, 0.00, 7.73040009, 0.00, 0.00, 0.00, 5.03599977, 8.62639999, 0.00, 0.00, 0.00, 0.00, 11.11680031, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 14.37839985, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 8.16479969, 0.00, 7.30719948, 0.00, 0.00, 0.00, 3.41039991, 7.17280006, 0.00, 0.00, 0.00, 0.00, 10.0099199963, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 13.97839928, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 7.6855998, 0.00, 6.86559963, 0.00, 0.00, 0.00, 3.21600008, 7.93599987, 0.00, 0.00, 0.00, 0.00, 11.55999947, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 18.76399994, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 10.0033039951, 0.00, 8.10639954, 0.00, 0.00, 0.00, 4.76480007, 6.87679958, 0.00, 0.00, 0.00, 0.00, 11.42239952, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 19.42639732, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 10.0052400017, 0.00, 8.2567997, 0.00, 0.00, 0.00, 5.08239985, 7.9776001, 0.00, 0.00, 0.00, 0.00, 10.0099839973, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 11.5855999, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 7.88399982, 0.00, 5.96799994, 0.00, 0.00, 0.00, 3.07679987, 7.81360006, 0.00, 0.00, 0.00, 0.00, 11.51119995, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 20.0030959892, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 10.0050879955, 0.00, 8.20479965, 0.00, 0.00, 0.00, 5.51599979, 9.02879906, 0.00, 0.00])
def scigamma(data):
param = sci.stats.gamma.fit(data)
x = np.linspace(0, np.max(data), 250)
cdf = sci.stats.gamma.cdf(x, *param)
value = np.round((sci.stats.gamma.cdf(0.10, *param) * 100), 2)
percentile = np.round((sci.stats.gamma.ppf(50.00, *param) * 100), 2)
return cdf
scicdf = scigamma(data)
# Method of Moments estimation
mean = np.mean(data)
variance = np.var(data)
alpha = mean**2 / variance
beta = variance / mean
# Generate x-axis values for the curves
x = np.linspace(0, np.max(data), 250)
# Calculate the gamma distribution PDF values
pdf = (x ** (alpha - 1) * np.exp(-x / beta)) / (beta ** alpha * math.gamma(alpha))  # math.gamma: np.math was removed in NumPy 2.0
# Calculate the gamma distribution CDF values
cdf = np.zeros_like(x)
cdf[x > 0] = np.cumsum(pdf[x > 0]) / np.sum(pdf[x > 0])
# Estimate the probability of zero values
num_zeros = np.count_nonzero(data == 0)
zero_probability = np.count_nonzero(data == 0) / len(data)
# Calculate the PDF and CDF values at zero
pdf_zero = zero_probability / (beta ** alpha * math.gamma(alpha))
cdf_zero = zero_probability
value = 2.50
percentile = 0.50
index = np.argmax(pdf >= value)
# Calculate the percentile using numerical integration
pct = np.trapz(pdf[:index+1], dx=1) + (value - pdf[index]) * (cdf[index] - cdf[index-1]) / (pdf[index-1] - pdf[index])
index = np.argmax(cdf >= percentile)
# Calculate the value using numerical integration
val = np.trapz(cdf[:index+1], dx=1) + (percentile - cdf[index-1]) * (pdf[index] - pdf[index-1]) / (cdf[index] - cdf[index-1])
# Plot the data histogram
plt.hist(data, bins=30, density=True, alpha=0.5, label='data')
# Plot the gamma distribution CDF curve
plt.plot(x, cdf, 'b', label='Gamma CDF | Custom Fit')
plt.plot(x, scicdf, 'k', label='Gamma CDF | SciPy Fit')
# Set plot labels and legend
plt.xlabel('data')
plt.ylabel('Probability')
plt.legend()
</code></pre>
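<p>One concrete issue in <code>scigamma</code> above: <code>scipy.stats.gamma.ppf</code> expects a <em>probability</em> in [0, 1], so passing <code>50.00</code> puts the argument outside the valid range (SciPy returns NaN there). A quick round-trip check of the correct usage on synthetic data:</p>
<pre><code>import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.gamma(shape=2.0, scale=1.5, size=2000)

a, loc, scale = stats.gamma.fit(sample, floc=0)
median = stats.gamma.ppf(0.50, a, loc=loc, scale=scale)  # probability, not "50.00"
p = stats.gamma.cdf(median, a, loc=loc, scale=scale)
print(round(p, 6))  # 0.5
</code></pre>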
|
<python><numpy><cdf><gamma-distribution>
|
2023-05-22 16:15:20
| 2
| 431
|
TornadoEric
|
76,307,932
| 468,455
|
Python class method not running
|
<p>I'm new to Python. I'm confused as to why the method <code>innerMethod</code> in the code below does not run when called from another method in the class or from the instantiated object. I don't get an error, but the print command is never executed.</p>
<pre><code># create a class
class Room:
length = 0.0
breadth = 0.0
# method to calculate area
def calculate_area(self):
print("Area of Room =", self.length * self.breadth)
self.innerMethod
def innerMethod():
print("testing")
# create object of Room class
study_room = Room()
# assign values to all the attributes
study_room.length = 42.5
study_room.breadth = 30.8
# access method inside class
study_room.calculate_area()
</code></pre>
|
<python><class>
|
2023-05-22 15:52:45
| 0
| 6,396
|
PruitIgoe
|
76,307,855
| 3,231,250
|
Why does pandas not use NumPy correlation method?
|
<p>Recently I realised that NumPy's correlation function is much faster than pandas'.</p>
<p>If I perform pair-wise correlation on ~18k features, NumPy is about 100x faster.</p>
<pre><code>%timeit np.corrcoef(df.values)
5.17 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
%timeit df.T.corr()
8min 49s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
</code></pre>
<p>Why don't they just use NumPy for this? I have checked both source codes: NumPy uses vectorization, while pandas prefers loops, which makes it slower.</p>
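<p>Part of the answer is that the two are not equivalent: <code>DataFrame.corr</code> does pairwise NaN handling (and supports Spearman/Kendall), which rules out a single dense matrix computation, while <code>np.corrcoef</code> assumes complete data. On complete numeric data they agree:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.default_rng(0).normal(size=(6, 10)))

a = np.corrcoef(df.values)   # row-wise correlations
b = df.T.corr().values       # the same matrix via pandas
print(np.allclose(a, b))     # True
</code></pre>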
|
<python><pandas><numpy>
|
2023-05-22 15:42:01
| 1
| 1,120
|
Yasir
|
76,307,777
| 3,453,901
|
How to use GitHub fine-grained access tokens in production across many repos when the max expiration is a year?
|
<p>I am trying to transition from the GitHub <code>Personal Access Tokens</code> classic version to the currently beta version <code>fine-grained tokens</code>. My issue is the token expiration max age is 1 year, whereas before the classic tokens could be set to not have any expiration. I know the simplest solution is just use the classic tokens, but if I've learned anything from GitHub's beta programs is that they will deprecate the previous option eventually. The <code>fine-grained tokens</code> are also appealing given they limit access to 1+ specific repositories.</p>
<p>I understand the reasons behind the expiration but am having an issue reconciling how to handle it. My issue is I have python packages hosted on GitHub using private repositories. I have a <code>requirements.txt</code> using the token to install the package(s) in the private repository(s) in multiple applications/scripts being hosted on the cloud or ran locally on multiple machines. If I only had one repository it would be a simple update, but there are many programs consuming these private packages. So, when the token(s) inevitably expires (max age is currently a year) I will then need to update the <code>requirements.txt</code> for all repositories that use the private repository package(s).</p>
<p>I realize I could store the token in a separate file and reference the token. However, this doesn't seem like a proper approach given the syntax appears so convoluted.</p>
<pre><code>-e git+https://github.com/USERNAME/REPOSITORY.git@TAG#egg=PACKAGE_NAME&
subdirectory=PACKAGE_DIRECTORY&oauth_token=$(cat PATH/TO/FILE.TXT)
</code></pre>
<p>This solution also doesn't solve the case where I have an CD <code>GitHub Action</code> that deploys an application to AWS that needs to install the private repositories' packages. I would need to store the <code>fine-grained token(s)</code> as a Secret for the repository to be used in the <code>GitHub Action</code>. Then when the token inevitably updates I will have to go to every repository and update it.</p>
<p>I created the <code>fine-grained access</code> tokens and began using them but ran into the issues described above.</p>
|
<python><github><requirements.txt><personal-access-token><github-fine-grained-tokens>
|
2023-05-22 15:31:25
| 1
| 2,274
|
Alex F
|
76,307,761
| 6,011,446
|
Time-series plot doesn't look right
|
<p>I downloaded data from INTERMAGNET for my research and I've been trying to plot the data between 1 Jan 2013 to 31 Dec 2013. The plot generated by the INTERMAGNET site looks like this:</p>
<p><a href="https://i.sstatic.net/OtLXk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OtLXk.png" alt="enter image description here" /></a></p>
<p>My plot on the other hand looks like this:</p>
<p><a href="https://i.sstatic.net/JqPFF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JqPFF.png" alt="enter image description here" /></a></p>
<p>This is the code I used to plot my graph:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
# read csv file
df = pd.read_csv('data/thl_data.csv')
# convert day, month, year, hour, minute columns to datetime
df['date'] = pd.to_datetime(df[['year', 'month', 'day', 'hour', 'minute']])
# set date as index
df = df.set_index('date')
# filter data to include only between 1 Jan 2013 to 31 Dec 2013
start_date = '2013-01-01'
end_date = '2013-12-31'
df = df.loc[start_date:end_date]
# plot x, y, z values
plt.figure(figsize=(18,12))
plt.plot(df['x'], label='x')
plt.xlabel('Time')
plt.ylabel('X (nT)')
plt.title('THL (Qaanaaq (Thule), Greenland)')
plt.legend()
plt.show()
</code></pre>
<p>I even tried resampling with a daily average, but the plot still looks similar to the unresampled version. No matter what I do, I can't get my plot to look like the one from INTERMAGNET's site. How do I go about solving this issue?</p>
<p>The data looks like this:</p>
<pre><code>day,month,year,hour,minute,x,y,z
1,1,2013,0,1,26104,-31575,562205
1,1,2013,0,2,26105,-31584,562201
1,1,2013,0,3,26109,-31593,562197
1,1,2013,0,4,26115,-31597,562197
1,1,2013,0,5,26113,-31611,562190
1,1,2013,0,6,26112,-31605,562195
1,1,2013,0,7,26106,-31604,562195
1,1,2013,0,8,26111,-31594,562202
1,1,2013,0,9,26111,-31596,562203
1,1,2013,0,10,26116,-31598,562202
1,1,2013,0,11,26113,-31596,562203
1,1,2013,0,12,26114,-31599,562202
1,1,2013,0,13,26110,-31604,562201
1,1,2013,0,14,26114,-31598,562206
1,1,2013,0,15,26124,-31587,562211
...
</code></pre>
|
<python><pandas><matplotlib>
|
2023-05-22 15:29:29
| 0
| 1,660
|
Nikhil Raghavendra
|
76,307,724
| 8,052,809
|
Numerical algorithm to decide whether a matrix is nilpotent
|
<p>I am looking for a numerically stable algorithm which decides whether a matrix is nilpotent
(an <code>n x n</code> matrix <code>A</code> is nilpotent if and only if <code>A^n</code> is the zero matrix).</p>
<p>(I have a similar question about generating nilpotent matrices here: <a href="https://stackoverflow.com/q/76307680/8052809">Numerical algorithm for generating nilpotent matrices</a>)</p>
<p>So far I have tried computing the eigenvalues, but this seems to be very unstable, since the accuracy of the computed eigenvalues depends on the condition number of the eigenvector matrix of the input matrix.</p>
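<p>For concreteness, one alternative I considered is to avoid eigenvalues entirely and test the defining property directly: raise <code>A</code> to the <code>n</code>-th power and compare its norm against a scaled tolerance. This is only a sketch, assuming NumPy; the scaling by <code>norm(A)**n</code> is a heuristic choice, not a rigorous bound:</p>

```python
import numpy as np

def is_nilpotent(A, tol=1e-12):
    # An n x n matrix is nilpotent iff A**n is the zero matrix, so raise A
    # to the n-th power and compare its norm to a scaled tolerance.
    n = A.shape[0]
    power = np.linalg.matrix_power(A, n)
    # Heuristic scaling (assumption): grow the tolerance with norm(A)**n
    scale = max(1.0, np.linalg.norm(A, "fro") ** n)
    return bool(np.linalg.norm(power, "fro") <= tol * scale)

# Strictly upper-triangular matrices are nilpotent
print(is_nilpotent(np.triu(np.ones((4, 4)), k=1)))  # True
print(is_nilpotent(np.eye(3)))                      # False
```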
<p>Any other ideas?</p>
|
<python><r><math><matrix><numerical-methods>
|
2023-05-22 15:25:37
| 0
| 778
|
tommsch
|
76,307,666
| 785,404
|
How can I check if a Python package provides another package?
|
<p>The <a href="https://packaging.python.org/en/latest/overview/" rel="nofollow noreferrer">Python docs say</a></p>
<blockquote>
<p>Python and PyPI support multiple distributions providing different implementations of the same package. For instance the unmaintained-but-seminal PIL distribution provides the PIL package, and so does Pillow, an actively-maintained fork of PIL!</p>
</blockquote>
<p>What is this paragraph talking about exactly? How can I check whether some Python package (Pillow in this example) provides a different package (PIL in this example)?</p>
|
<python><pypi><python-packaging>
|
2023-05-22 15:19:22
| 0
| 2,085
|
Kerrick Staley
|
76,307,373
| 926,918
|
dask frame: efficiently drop rows that do not have minimum count of the row's corresponding reference element
|
<p>I have a dataframe of integers with no missing values. Due to the size of the problem, I need a Dask-based solution, which I have so far been unable to work out, for the following problem.</p>
<p>Using the reference value under the column ('ref') and a cut-off k, retain only those rows that have the row's corresponding element at least k times (excluding 'ref' itself). The pandas solution that I have is:</p>
<pre><code>>>> import numpy as np
>>> import pandas as pd
>>>
>>> arr_random = np.random.randint(low=1, high=5, size=(15,7))
>>> df = pd.DataFrame(arr_random, columns=["ref","v1","v2","v3","v4","v5","v6"])
df
ref v1 v2 v3 v4 v5 v6
0 3 3 4 4 4 1 4
1 4 4 1 3 3 4 1
2 3 1 1 4 3 4 4
3 2 1 2 3 2 3 1
4 3 1 2 3 2 1 4
5 2 3 2 1 3 4 1
6 2 1 2 4 4 1 3
7 1 1 3 1 3 1 4
8 2 4 3 3 1 1 3
9 2 1 3 4 1 1 1
10 3 3 1 3 4 1 1
11 4 4 1 1 2 1 3
12 2 2 1 1 2 4 1
13 2 1 2 1 3 2 4
14 4 1 3 3 3 4 2
>>> occ = list()
>>> for i in range(len(df)):
... row = df.iloc[[i]]
... ref_val = list(row['ref'])[0]
... occ.append(sum([1 for x in row.isin([ref_val]).values[0].tolist() if x == True]) - 1)
...
>>> df['tot'] = occ
>>> df = df[df['tot'] > 1].drop('tot',axis=1)
>>> df
ref v1 v2 v3 v4 v5 v6
0 3 1 4 3 1 1 3
1 3 3 3 4 1 2 3
2 3 4 3 2 3 1 4
4 4 2 2 3 4 3 4
7 3 4 3 1 4 3 1
8 4 4 3 2 3 4 1
9 1 1 3 2 4 3 1
12 3 3 3 3 4 3 4
14 1 4 2 3 4 1 1
>>>
</code></pre>
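<p>For comparison, the row-by-row loop above can be vectorized in plain pandas; the same elementwise comparison should also work per-partition in Dask (e.g. via <code>map_partitions</code>) since it needs no cross-row state. This is only a sketch, with <code>k=2</code> mirroring the <code>tot > 1</code> cutoff above:</p>

```python
import numpy as np
import pandas as pd

def filter_by_ref_count(df, k=2):
    # Count, per row, how many non-'ref' columns equal that row's 'ref' value,
    # then keep only rows reaching the cutoff k.
    counts = df.drop(columns="ref").eq(df["ref"], axis=0).sum(axis=1)
    return df[counts >= k]

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 5, size=(15, 7)),
                  columns=["ref", "v1", "v2", "v3", "v4", "v5", "v6"])
print(filter_by_ref_count(df))
```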
|
<python><pandas><dask>
|
2023-05-22 14:47:27
| 0
| 1,196
|
Quiescent
|