| QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
76,085,684
| 678,491
|
Using matplotlib cmap function outside of matplotlib
|
<p>I'm trying to use the same mapping function as <strong>matplotlib</strong> <strong>cmap</strong> but outside of <strong>matplotlib</strong>. E.g.:</p>
<p>Given:</p>
<pre class="lang-py prettyprint-override"><code>plt.imshow(image, cmap="viridis")
</code></pre>
<p>Manually replicate the same result, such that every value of <code>image[x,y]</code> → <code>RGB</code> triplet as per the <code>viridis</code> color map.</p>
<p>Does anyone know the way to get at the function that does this?</p>
<p>I'll be digging through the source in the meantime...</p>
<p>Thanks!</p>
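A sketch of one possible approach (not from the original post): matplotlib exposes each colormap as a callable `Colormap` object, so the same viridis mapping can be applied to a bare NumPy array without plotting anything. The `Normalize` step mirrors the default vmin/vmax autoscaling that `imshow` performs; the 4x4 `image` is a stand-in.

```python
import numpy as np
import matplotlib
from matplotlib.colors import Normalize

image = np.random.rand(4, 4)                          # stand-in for the real image
cmap = matplotlib.colormaps["viridis"]                # plt.get_cmap("viridis") on older versions
norm = Normalize(vmin=image.min(), vmax=image.max())  # imshow's default autoscaling

rgba = cmap(norm(image))   # shape (4, 4, 4): RGBA floats in [0, 1]
rgb = rgba[..., :3]        # keep just the RGB triplet per pixel
```

Passing the normalized array to the `Colormap` object is exactly the lookup `imshow` does internally; multiply by 255 and cast if 8-bit channels are needed.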
|
<python><matplotlib>
|
2023-04-23 15:20:00
| 0
| 1,313
|
k1m190r
|
76,085,647
| 1,668,622
|
In Python, is it technically possible to access a function's local variables after it has been executed?
|
<p>I guess the answer is "no", since <a href="https://stackoverflow.com/questions/35754307/how-do-i-access-a-local-variable-in-my-decorated-function-from-my-decorator">someone else asked something similar</a> a while ago, but a lot can happen in seven years.</p>
<p>In a quite non-Pythonic experiment I'd like to write a decorator which calls its wrapped function and reacts to what happens inside that function.</p>
<p>Currently one detail bugs me:</p>
<pre><code>def my_decorator(fn):
    print(execute_and_get_value(fn, "some_value"))
    return fn


@my_decorator
def foo():
    some_value = 23
</code></pre>
<p><code>my_decorator</code> should run <code>fn()</code> on module load (that's the part which works) and then <em>somehow</em> get the value of <code>some_value</code> defined in <code>foo()</code>. All I want to know is whether that's technically possible at all.</p>
<p>One approach would be to let <code>foo()</code> write to a member of some global object instead of a local variable and just read that global object afterwards. But that's cheating.</p>
<p>Is it possible, or, conversely, is there a way to prove it impossible?</p>
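It is technically possible. A sketch using `sys.settrace` to snapshot the wrapped function's locals just before its frame is torn down; note that `execute_and_get_value` here is my own illustration of the helper named in the question, not a standard API.

```python
import sys

def execute_and_get_value(fn, name):
    """Call fn() and return the value its local variable `name` held on return."""
    captured = {}

    def tracer(frame, event, arg):
        if frame.f_code is fn.__code__:
            if event == "return":
                # the frame is about to be destroyed; copy its locals now
                captured.update(frame.f_locals)
            return tracer
        return None

    sys.settrace(tracer)
    try:
        fn()
    finally:
        sys.settrace(None)
    return captured[name]

def foo():
    some_value = 23

print(execute_and_get_value(foo, "some_value"))  # prints 23
```

Tracing slows the wrapped call down considerably, so this is suited to experiments rather than production code.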
|
<python><python-3.x><python-decorators>
|
2023-04-23 15:11:55
| 0
| 9,958
|
frans
|
76,085,625
| 17,936,708
|
Insert error using SQLAlchemy to transfer data from SQL Server to MariaDB
|
<p>Using SQLAlchemy, I am trying to move data from a SQL Server table to a MariaDB table. The tables are almost the same, as you can see in this code:</p>
<p>MariaDB table:</p>
<pre class="lang-sql prettyprint-override"><code>CREATE TABLE IF NOT EXISTS customers
(
id INT PRIMARY KEY,
first_name VARCHAR(50) NOT NULL,
last_name VARCHAR(50) NOT NULL,
email VARCHAR(50) NOT NULL UNIQUE,
phone_number VARCHAR(15) NOT NULL,
address VARCHAR(150) NOT NULL,
city VARCHAR(50) NOT NULL,
state VARCHAR(50) NOT NULL,
zip_code VARCHAR(10) NOT NULL,
create_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
update_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
</code></pre>
<p>SQL Server table:</p>
<pre class="lang-sql prettyprint-override"><code>CREATE TABLE customers
(
id INT PRIMARY KEY IDENTITY (1,1),
first_name VARCHAR(50) NOT NULL,
last_name VARCHAR(50) NOT NULL,
email VARCHAR(50) NOT NULL UNIQUE,
phone_number VARCHAR(15) NOT NULL,
address VARCHAR(150) NOT NULL,
city VARCHAR(50) NOT NULL,
state VARCHAR(50) NOT NULL,
zip_code VARCHAR(10) NOT NULL,
create_date DATETIME DEFAULT CURRENT_TIMESTAMP,
update_date DATETIME DEFAULT CURRENT_TIMESTAMP
);
</code></pre>
<p>And my code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>def main():
    # Connection strings
    sql_server_conn_str = CONFIG['connectionStrings']['sqlServer']
    maria_conn_str = CONFIG['connectionStrings']['mariaDb']

    # create SQLAlchemy engines for SQL Server and MariaDB
    sql_server_engine = create_engine(sql_server_conn_str)
    maria_engine = create_engine(maria_conn_str)

    # create connections for both databases
    sql_server_conn = sql_server_engine.connect()
    maria_conn = maria_engine.connect()

    # create SQLAlchemy MetaData objects for SQL Server and MariaDB
    sql_server_metadata = MetaData()
    maria_metadata = MetaData()

    # reflect the SQL Server database schema into the MetaData object
    sql_server_metadata.reflect(bind=sql_server_engine)

    # create Table objects for each SQL Server table
    customers_sql_server = Table('customers', sql_server_metadata, autoload=True, autoload_with=sql_server_engine)

    # reflect the MariaDB database schema into the MetaData object
    maria_metadata.reflect(bind=maria_engine)

    # create Table objects for each MariaDB table
    customers_maria = Table('customers', maria_metadata, autoload=True, autoload_with=maria_engine)

    # select all rows from the customers table in SQL Server
    select_customers_sql_server = select(customers_sql_server)

    # execute the select query and fetch all rows
    result_proxy = sql_server_conn.execute(select_customers_sql_server)
    customers_data = result_proxy.fetchall()

    # insert the rows into the customers table in MariaDB
    tuples_to_insert = [tuple(row) for row in customers_data]
    maria_conn.execute(customers_maria.insert(), tuples_to_insert)
</code></pre>
<p>I have this error I couldn't solve:</p>
<pre><code>Traceback (most recent call last):
  File "...\main.py", line 48, in <module>
    main()
  File "...\main.py", line 35, in main
    maria_conn.execute(customers_maria.insert(), tuples_to_insert)
  File "...\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 1413, in execute
    return meth(
           ^^^^^
  File "...\venv\Lib\site-packages\sqlalchemy\sql\elements.py", line 483, in _execute_on_connection
    return connection._execute_clauseelement(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 1613, in _execute_clauseelement
    keys = sorted(distilled_parameters[0])
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '<' not supported between instances of 'str' and 'int'
</code></pre>
<p>Sorry, I am new to Python ^_^</p>
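A sketch of the likely fix (not from the original post): with 1.4/2.0-style SQLAlchemy, the "executemany" parameters passed to `Connection.execute` must be a list of mappings (dicts), but `tuple(row)` produces plain tuples, so SQLAlchemy ends up trying to sort a tuple of mixed str/int column values, which raises exactly this `TypeError`. Each fetched `Row` exposes a `._mapping` view that converts cleanly to a dict. Below, two in-memory SQLite engines stand in for the real SQL Server and MariaDB connections (an assumption for illustration only):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, insert, select)

# in-memory SQLite engines stand in for the real SQL Server / MariaDB ones
src_engine = create_engine("sqlite://")
dst_engine = create_engine("sqlite://")

metadata = MetaData()
customers = Table("customers", metadata,
                  Column("id", Integer, primary_key=True),
                  Column("first_name", String(50)))
metadata.create_all(src_engine)
metadata.create_all(dst_engine)

with src_engine.begin() as conn:
    conn.execute(insert(customers), [{"id": 1, "first_name": "Ada"}])

with src_engine.connect() as src, dst_engine.begin() as dst:
    rows = src.execute(select(customers)).fetchall()
    # the fix: executemany parameters must be mappings, not tuples
    dst.execute(customers.insert(), [dict(r._mapping) for r in rows])

with dst_engine.connect() as conn:
    copied = conn.execute(select(customers)).fetchall()
print(copied)
```

Note also that with SQLAlchemy 2.0 a bare `connect()` does not autocommit, so the original code would additionally need `maria_conn.commit()` (or a `with maria_engine.begin() as conn:` block, as above) for the insert to persist.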
|
<python><sql-server><sqlalchemy><mariadb>
|
2023-04-23 15:07:32
| 1
| 390
|
Oleg Ivsv
|
76,085,260
| 599,912
|
Define constraints with or-tools for a carpooling problem
|
<p>I have a real-life situation: I am trying to solve a school carpooling problem. I thought it would be good to try or-tools and learn in the process.</p>
<p>The school is divided into two locations; some children go to location 1 and some to location 2 (for the entire week).</p>
<p>So far, I have defined the model:</p>
<pre class="lang-py prettyprint-override"><code>from ortools.sat.python import cp_model


def main():
    # Data.
    num_parents = 12
    num_children = 20
    num_days = 5
    num_locations = 2
    all_parents = range(num_parents)
    all_children = range(num_children)
    all_days = range(num_days)
    all_locations = range(num_locations)

    # Creates the model.
    model = cp_model.CpModel()

    # Creates ride variables.
    # rides[(n, d, c, l)]: parent 'n' takes child 'c' on day 'd' to location 'l'.
    rides = {}
    for n in all_parents:
        for d in all_days:
            for c in all_children:
                for l in all_locations:
                    rides[(n, d, c, l)] = model.NewBoolVar('ride_n%id%ic%il%i' % (n, d, c, l))
</code></pre>
<p>Now I am trying to define the constraints. Here they are:</p>
<ol>
<li>Each parent can take up to 4 kids in a ride (it would be cool to have that number adjustable per parent, as some of them have bigger cars, but let's assume 4)</li>
<li>Each parent should only drive to one of the locations during a ride</li>
<li>Maximum 2 rides per parent per day (pickup and drop-off, but can't do more than that)</li>
<li>Parent always drives at least one of their children in a ride (at least one, but not necessarily all of them)</li>
<li>Each child either goes to location 1 or to location 2. It never changes.</li>
<li>I am trying to make it fair so that the work is spread across everyone</li>
</ol>
<p>Could someone please give me some examples of how to model these constraints? Also, I haven't yet thought about how to model the parent-child relationship in constraint #4 or the child-location relationship in constraint #5.</p>
|
<python><constraints><linear-programming><or-tools><cp-sat>
|
2023-04-23 13:55:17
| 3
| 22,977
|
Michael
|
76,085,253
| 3,900,373
|
Call AWS API Gateway from Flask server
|
<p>I am using <a href="https://mblackgeo.github.io/flask-cognito-lib/" rel="nofollow noreferrer">flask-cognito-lib</a> and AWS Cognito to authenticate users. I am able to follow the example code to secure Flask endpoints.</p>
<pre><code>from flask import Flask, jsonify, redirect, session, url_for
from flask_cognito_lib import CognitoAuth
from flask_cognito_lib.decorators import (
    auth_required,
    cognito_login,
    cognito_login_callback,
    cognito_logout,
)

app = Flask(__name__)

# Configuration required for CognitoAuth
app.config["AWS_REGION"] = "eu-west-1"
app.config["AWS_COGNITO_USER_POOL_ID"] = "eu-west-1_qwerty"
app.config["AWS_COGNITO_DOMAIN"] = "https://app.auth.eu-west-1.amazoncognito.com"
app.config["AWS_COGNITO_USER_POOL_CLIENT_ID"] = "asdfghjkl1234asdf"
app.config["AWS_COGNITO_USER_POOL_CLIENT_SECRET"] = "zxcvbnm1234567890"
app.config["AWS_COGNITO_REDIRECT_URL"] = "https://example.com/postlogin"
app.config["AWS_COGNITO_LOGOUT_URL"] = "https://example.com/postlogout"

auth = CognitoAuth(app)


@app.route("/login")
@cognito_login
def login():
    # A simple route that will redirect to the Cognito Hosted UI.
    # No logic is required as the decorator handles the redirect to the Cognito
    # hosted UI for the user to sign in.
    pass


@app.route("/postlogin")
@cognito_login_callback
def postlogin():
    # A route to handle the redirect after a user has logged in with Cognito.
    # This route must be set as one of the User Pool client's Callback URLs in
    # the Cognito console and also as the config value AWS_COGNITO_REDIRECT_URL.
    # The decorator will store the validated access token in an HTTP-only cookie
    # and the user claims and info are stored in the Flask session:
    # session["claims"] and session["user_info"].
    # Do anything after the user has logged in here, e.g. a redirect
    return redirect(url_for("claims"))


@app.route("/claims")
@auth_required()
def claims():
    # This route is protected by the Cognito authorisation. If the user is not
    # logged in at this point or their token from Cognito is no longer valid,
    # a 401 Authentication Error is thrown, which is caught here and redirected
    # to login.
    # If their session is valid, the current session will be shown, including
    # their claims and user_info extracted from the Cognito tokens.
    return jsonify(session)


@app.route("/admin")
@auth_required(groups=["admin"])
def admin():
    # This route will only be accessible to a user who is a member of all of the
    # groups specified in the "groups" argument on the auth_required decorator.
    # If they are not, a CognitoGroupRequiredError is raised, which is handled
    # below.
    return jsonify(session["claims"]["cognito:groups"])


@app.route("/logout")
@cognito_logout
def logout():
    # Log out of the Cognito User pool and delete the cookies that were set
    # on login.
    # No logic is required here as it simply redirects to Cognito.
    pass


@app.route("/postlogout")
def postlogout():
    # This is the endpoint Cognito redirects to after a user has logged out;
    # handle any logic here, like returning to the homepage.
    # This route must be set as one of the User Pool client's Sign Out URLs.
    return redirect(url_for("home"))


if __name__ == "__main__":
    app.run()
</code></pre>
<p>Now I have created an AWS Lambda with API Endpoint trigger</p>
<pre><code>API endpoint: https://<endpoint ID>.execute-api.<region>.amazonaws.com/default/<endpoint-name>
Details
API type: HTTP
Authorization: JWT
Authorizer ID: <auth-ID>
CORS: No
Detailed metrics enabled: No
Method: ANY
Resource path: /<endpoint-name>
Service principal: apigateway.amazonaws.com
Stage: default
Statement ID: <stmt-ID>
</code></pre>
<p>How can I call this endpoint from Flask/Python?</p>
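One hedged sketch: since the HTTP API uses a JWT authorizer, the gateway only needs the Cognito token in the `Authorization` header, so a plain `requests` call from the Flask side should work. The URL is hypothetical, and how you obtain the token server-side is an assumption; with flask-cognito-lib the access token normally lives in an HTTP-only cookie on the client, so you may need to forward it with the request to Flask.

```python
API_URL = "https://abc123.execute-api.eu-west-1.amazonaws.com/default/my-endpoint"  # hypothetical

def auth_headers(access_token: str) -> dict:
    # The HTTP API's JWT authorizer validates the token it finds in the
    # Authorization header, so forward the Cognito access token as a Bearer token.
    return {"Authorization": f"Bearer {access_token}"}

def call_api(access_token: str) -> dict:
    import requests  # deferred so the header helper is importable on its own
    resp = requests.get(API_URL, headers=auth_headers(access_token), timeout=10)
    resp.raise_for_status()
    return resp.json()
```

The JWT authorizer must be configured with the same Cognito user pool as its issuer and the app client ID in its audience, or the gateway will return 401 even with a valid token.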
|
<python><amazon-web-services><flask><aws-lambda><jwt>
|
2023-04-23 13:54:01
| 0
| 2,316
|
alagris
|
76,085,135
| 4,225,430
|
How to beautify jupyter output, especially with alignment?
|
<p>I'm learning Python pandas programming with Jupyter Notebook. I see from video clips that the output of a pandas Series is neat, with column alignment, such as the one below:</p>
<p><a href="https://i.sstatic.net/Irzcn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Irzcn.png" alt="enter image description here" /></a></p>
<p>But when I learn from my own jupyter, the output is rather messy:</p>
<p><a href="https://i.sstatic.net/3db1O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3db1O.png" alt="enter image description here" /></a></p>
<p>May I know how to change the fonts and style? I like the layout shown in the teaching video. Thank you for answering.</p>
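Two things usually explain the difference, sketched below: Jupyter renders a DataFrame/Series as an aligned HTML table only when it is the last expression of a cell (not inside `print`), and pandas' own display options control the plain-text layout. The specific option values here are just examples; a monospaced notebook font is also assumed for the `print` output to line up.

```python
import pandas as pd

s = pd.Series([1.0, 2.5, 3.25], index=["alpha", "beta", "gamma"])

pd.set_option("display.float_format", "{:.2f}".format)  # consistent decimals
pd.set_option("display.max_rows", 20)                   # rows shown before truncation

print(s)   # plain monospaced text, aligned by pandas itself
s          # as the last expression of a cell, Jupyter shows a styled HTML table
```

If the notebook's output font has been changed to a proportional one, even pandas' own alignment will look ragged, so checking the Jupyter theme/CSS is worth a try too.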
|
<python><pandas><dataframe><jupyter>
|
2023-04-23 13:29:45
| 0
| 393
|
ronzenith
|
76,085,064
| 15,295,149
|
LangChain gpt-3.5-turbo model reading files - problem
|
<p>I am making a really simple (and just-for-fun) LangChain project.</p>
<p>The model can read a PDF file and I can then ask it questions about that specific PDF file.</p>
<p>Everything works fine (<strong>this is a working example</strong>):</p>
<pre><code>from PyPDF2 import PdfReader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import ElasticVectorSearch, Pinecone, Weaviate, FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
import os
os.environ["OPENAI_API_KEY"] = ""
reader = PdfReader('./2023_GPT4All_Technical_Report.pdf')
raw_text = ''
for i, page in enumerate(reader.pages):
    text = page.extract_text()
    if text:
        raw_text += text

raw_text[:100]

text_splitter = CharacterTextSplitter(
    separator="\n",
    chunk_size=1000,
    chunk_overlap=200,
    length_function=len,
)
texts = text_splitter.split_text(raw_text)
embeddings = OpenAIEmbeddings(model='gpt-3.5-turbo')
docsearch = FAISS.from_texts(texts, embeddings)
chain = load_qa_chain(OpenAI(), chain_type="stuff")
query = "Who is the author of the book?"
docs = docsearch.similarity_search(query)
res = chain.run(input_documents=docs, question=query)
print(res)
</code></pre>
<p><strong>What do I see as a problem:</strong></p>
<p>If I ask some simple question like "what is 2+2?" it doesn't know the answer. How did I lose all of the model's knowledge? Is there a workaround so that the model keeps its existing knowledge and I just add the knowledge from a specific PDF file?</p>
<p>Thanks to everyone for answers, and I hope a good conversation will start from my question.</p>
<p>Suggestions would also be awesome!</p>
|
<python><openai-api><langchain>
|
2023-04-23 13:16:49
| 2
| 746
|
devZ
|
76,084,999
| 12,398,468
|
Getting Unknown opcode error even though Python/TF versions for train and test are the same
|
<p>I'm trying to create a Siamese network following this tutorial <a href="https://pyimagesearch.com/2020/11/30/siamese-networks-with-keras-tensorflow-and-deep-learning/" rel="nofollow noreferrer">https://pyimagesearch.com/2020/11/30/siamese-networks-with-keras-tensorflow-and-deep-learning/</a></p>
<p>The only thing I've modified is the number of epochs in the <code>config.py</code>. Below are the commands I ran and logs generated.</p>
<pre><code>(tf-23) D:\Code\burgle\ml>python train_siamese_network.py
2.3.0
[]
[INFO] loading MNIST dataset...
[INFO] preparing positive and negative pairs...
[INFO] building siamese network...
2023-04-23 04:58:24.093663: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
[INFO] compiling model...
[INFO] training model...
Epoch 1/10
1875/1875 [==============================] - 78s 41ms/step - loss: 0.6933 - accuracy: 0.4952 - val_loss: 0.6933 - val_accuracy: 0.4991
Epoch 2/10
1875/1875 [==============================] - 76s 41ms/step - loss: 0.6932 - accuracy: 0.4999 - val_loss: 0.6932 - val_accuracy: 0.4990
Epoch 3/10
1875/1875 [==============================] - 77s 41ms/step - loss: 0.6932 - accuracy: 0.4995 - val_loss: 0.6933 - val_accuracy: 0.4816
Epoch 4/10
1875/1875 [==============================] - 75s 40ms/step - loss: 0.6932 - accuracy: 0.4982 - val_loss: 0.6933 - val_accuracy: 0.4845
Epoch 5/10
1875/1875 [==============================] - 75s 40ms/step - loss: 0.6932 - accuracy: 0.4990 - val_loss: 0.6932 - val_accuracy: 0.4978
Epoch 6/10
1875/1875 [==============================] - 76s 40ms/step - loss: 0.6932 - accuracy: 0.4999 - val_loss: 0.6933 - val_accuracy: 0.5000
Epoch 7/10
1875/1875 [==============================] - 77s 41ms/step - loss: 0.6932 - accuracy: 0.4996 - val_loss: 0.6932 - val_accuracy: 0.5000
Epoch 8/10
1875/1875 [==============================] - 77s 41ms/step - loss: 0.6932 - accuracy: 0.4983 - val_loss: 0.6933 - val_accuracy: 0.4997
Epoch 9/10
1875/1875 [==============================] - 77s 41ms/step - loss: 0.6932 - accuracy: 0.4976 - val_loss: 0.6933 - val_accuracy: 0.5000
Epoch 10/10
1875/1875 [==============================] - 77s 41ms/step - loss: 0.6932 - accuracy: 0.4983 - val_loss: 0.6932 - val_accuracy: 0.5000
[INFO] saving siamese model...
WARNING:tensorflow:From C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\training\tracking\tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
2023-04-23 05:11:11.403233: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:tensorflow:From C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\training\tracking\tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
[INFO] plotting training history...
(tf-23) D:\Code\burgle\ml>python test_siamese_network.py --input examples
2.3.0
[INFO] loading test dataset...
[INFO] loading siamese model...
2023-04-23 15:27:34.627626: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
XXX lineno: 43, opcode: 47
Traceback (most recent call last):
  File "test_siamese_network.py", line 27, in <module>
    model = load_model(config.MODEL_PATH)
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\keras\saving\save.py", line 187, in load_model
    return saved_model_load.load(filepath, compile, options)
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py", line 120, in load
    model = tf_load.load_internal(
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\saved_model\load.py", line 632, in load_internal
    loader = loader_cls(object_graph_proto, saved_model_proto, export_dir,
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py", line 194, in __init__
    super(KerasObjectLoader, self).__init__(*args, **kwargs)
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\saved_model\load.py", line 130, in __init__
    self._load_all()
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py", line 221, in _load_all
    self._finalize_objects()
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py", line 530, in _finalize_objects
    self._reconstruct_all_models()
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py", line 548, in _reconstruct_all_models
    self._reconstruct_model(model_id, model, layers)
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py", line 588, in _reconstruct_model
    created_layers) = functional_lib.reconstruct_from_config(
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 1214, in reconstruct_from_config
    process_node(layer, node_data)
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 1162, in process_node
    output_tensors = layer(input_tensors, **kwargs)
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 925, in __call__
    return self._functional_construction_call(inputs, args, kwargs,
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1117, in _functional_construction_call
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "C:\Users\heihachi\anaconda3\envs\tf-23\lib\site-packages\tensorflow\python\keras\layers\core.py", line 903, in call
    result = self.function(inputs, **kwargs)
  File "D:/Code/burgle/ml/pyimagesearch/utils.py", line 43, in euclidean_distance
    (featsA, featsB) = vectors
SystemError: unknown opcode
</code></pre>
<p>I'm using TF 2.3 as suggested by the author of the tutorial and Python3.8. Both the scripts are run in the same environment. Below is the output of my <code>conda list</code>:</p>
<pre><code>(tf-23) D:\Code\burgle\ml>conda list
# packages in environment at C:\Users\heihachi\anaconda3\envs\tf-23:
#
# Name Version Build Channel
_tflow_select 2.3.0 eigen
abseil-cpp 20211102.0 hd77b12b_0
absl-py 1.3.0 py38haa95532_0
aiohttp 3.8.3 py38h2bbff1b_0
aiosignal 1.2.0 pyhd3eb1b0_0
appdirs 1.4.4 pyhd3eb1b0_0
astor 0.8.1 py38haa95532_0
astunparse 1.6.3 py_0
async-timeout 4.0.2 py38haa95532_0
attrs 22.1.0 py38haa95532_0
blas 1.0 mkl
blinker 1.4 py38haa95532_0
brotli 1.0.9 h2bbff1b_7
brotli-bin 1.0.9 h2bbff1b_7
brotlipy 0.7.0 py38h2bbff1b_1003
c-ares 1.19.0 h2bbff1b_0
ca-certificates 2022.12.7 h5b45459_0 conda-forge
cachetools 4.2.2 pyhd3eb1b0_0
certifi 2022.12.7 pyhd8ed1ab_0 conda-forge
cffi 1.15.1 py38h2bbff1b_3
charset-normalizer 2.0.4 pyhd3eb1b0_0
click 8.0.4 py38haa95532_0
colorama 0.4.6 py38haa95532_0
contourpy 1.0.5 py38h59b6b97_0
cryptography 39.0.1 py38h21b164f_0
cycler 0.11.0 pyhd3eb1b0_0
eigen 3.4.0 h2d74725_0 conda-forge
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.12.1 ha860e81_0
frozenlist 1.3.3 py38h2bbff1b_0
gast 0.4.0 pyhd3eb1b0_0
giflib 5.2.1 h8cc25b3_3
glib 2.69.1 h5dc1a3c_2
google-auth 2.6.0 pyhd3eb1b0_0
google-auth-oauthlib 0.4.4 pyhd3eb1b0_0
google-pasta 0.2.0 pyhd3eb1b0_0
grpc-cpp 1.48.2 hf108199_0
grpcio 1.48.2 py38hf108199_0
gst-plugins-base 1.18.5 h9e645db_0
gstreamer 1.18.5 hd78058f_0
h5py 2.10.0 py38h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2022.1.0 h6049295_2
icu 58.2 ha925a31_3
idna 3.4 py38haa95532_0
importlib-metadata 6.0.0 py38haa95532_0
importlib_resources 5.2.0 pyhd3eb1b0_1
imutils 0.5.4 py38haa244fe_3 conda-forge
intel-openmp 2021.4.0 haa95532_3556
jpeg 9e h2bbff1b_1
keras-applications 1.0.8 py_1
keras-preprocessing 1.1.2 pyhd3eb1b0_0
kiwisolver 1.4.4 py38hd77b12b_0
krb5 1.19.4 h5b6d351_0
lerc 3.0 hd77b12b_0
libbrotlicommon 1.0.9 h2bbff1b_7
libbrotlidec 1.0.9 h2bbff1b_7
libbrotlienc 1.0.9 h2bbff1b_7
libclang 14.0.6 default_hb5a9fac_1
libclang13 14.0.6 default_h8e68704_1
libdeflate 1.17 h2bbff1b_0
libffi 3.4.2 hd77b12b_6
libiconv 1.16 h2bbff1b_2
libogg 1.3.5 h2bbff1b_1
libpng 1.6.39 h8cc25b3_0
libprotobuf 3.20.3 h23ce68f_0
libtiff 4.5.0 h6c2663c_2
libvorbis 1.3.7 he774522_0
libwebp 1.2.4 hbc33d0d_1
libwebp-base 1.2.4 h2bbff1b_1
libxml2 2.10.3 h0ad7f3c_0
libxslt 1.1.37 h2bbff1b_0
lz4-c 1.9.4 h2bbff1b_0
markdown 3.4.1 py38haa95532_0
markupsafe 2.1.1 py38h2bbff1b_0
matplotlib 3.7.1 py38haa244fe_0 conda-forge
matplotlib-base 3.7.1 py38hf11a4ad_1
mkl 2021.4.0 haa95532_640
mkl-service 2.4.0 py38h2bbff1b_0
mkl_fft 1.3.1 py38h277e83a_0
mkl_random 1.2.2 py38hf11a4ad_0
multidict 6.0.2 py38h2bbff1b_0
munkres 1.1.4 py_0
numpy 1.23.5 py38h3b20f71_0
numpy-base 1.23.5 py38h4da318b_0
oauthlib 3.2.2 py38haa95532_0
opencv 4.6.0 py38h104de81_2
openssl 1.1.1t h2bbff1b_0
opt_einsum 3.3.0 pyhd3eb1b0_1
packaging 23.0 py38haa95532_0
pcre 8.45 hd77b12b_0
pillow 9.5.0 pypi_0 pypi
pip 23.0.1 py38haa95532_0
ply 3.11 py38_0
pooch 1.4.0 pyhd3eb1b0_0
protobuf 3.20.3 py38hd77b12b_0
pyasn1 0.4.8 pyhd3eb1b0_0
pyasn1-modules 0.2.8 py_0
pycparser 2.21 pyhd3eb1b0_0
pyjwt 2.4.0 py38haa95532_0
pyopenssl 23.0.0 py38haa95532_0
pyparsing 3.0.9 py38haa95532_0
pyqt 5.15.7 py38hd77b12b_0
pyqt5-sip 12.11.0 py38hd77b12b_0
pyreadline 2.1 py38_1
pysocks 1.7.1 py38haa95532_0
python 3.8.16 h6244533_3
python-dateutil 2.8.2 pyhd3eb1b0_0
python_abi 3.8 2_cp38 conda-forge
qt-main 5.15.2 he8e5bd7_8
qt-webengine 5.15.9 hb9a9bb5_5
qtwebkit 5.212 h2bbfb41_5
re2 2022.04.01 hd77b12b_0
requests 2.28.1 py38haa95532_1
requests-oauthlib 1.3.0 py_0
rsa 4.7.2 pyhd3eb1b0_1
scipy 1.10.1 py38h321e85e_0
setuptools 66.0.0 py38haa95532_0
sip 6.6.2 py38hd77b12b_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.41.2 h2bbff1b_0
tensorboard 2.10.0 py38haa95532_0
tensorboard-data-server 0.6.1 py38haa95532_0
tensorboard-plugin-wit 1.8.1 py38haa95532_0
tensorflow 2.3.0 mkl_py38h8c0d9a2_0
tensorflow-base 2.3.0 eigen_py38h75a453f_0
tensorflow-estimator 2.6.0 pyh7b7c402_0
termcolor 2.1.0 py38haa95532_0
tk 8.6.12 h2bbff1b_0
toml 0.10.2 pyhd3eb1b0_0
tornado 6.2 py38h2bbff1b_0
urllib3 1.26.15 py38haa95532_0
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
werkzeug 2.2.3 py38haa95532_0
wheel 0.38.4 py38haa95532_0
win_inet_pton 1.1.0 py38haa95532_0
wrapt 1.14.1 py38h2bbff1b_0
xz 5.2.10 h8cc25b3_1
yarl 1.8.1 py38h2bbff1b_0
zipp 3.11.0 py38haa95532_0
zlib 1.2.13 h8cc25b3_0
zstd 1.5.5 hd43e919_0
</code></pre>
<p>What seems to be the issue? Everything I've found regarding this error relates to mismatched Python versions. I've also tried using the latest TF version; however, the same error arises, with additional information as seen below:</p>
<pre><code>XXX lineno: 43, opcode: 47
Traceback (most recent call last):
  File "D:\Code\burgle\ml\test_siamese_network.py", line 27, in <module>
    model = load_model(config.MODEL_PATH)
  File "C:\Users\heihachi\anaconda3\envs\tf\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "D:/Code/burgle/ml/pyimagesearch/utils.py", line 43, in euclidean_distance
    (featsA, featsB) = vectors
SystemError: Exception encountered when calling layer "lambda" (type Lambda).

unknown opcode

Call arguments received by layer "lambda" (type Lambda):
  • inputs=['tf.Tensor(shape=(None, 48), dtype=float32)', 'tf.Tensor(shape=(None, 48), dtype=float32)']
  • mask=None
  • training=False
</code></pre>
<p>Could this github issue (<a href="https://github.com/tensorflow/tensorflow/issues/18660" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/18660</a>) be related?</p>
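A possible explanation and workaround, hedged: "unknown opcode" almost always means CPython is executing bytecode compiled by a different interpreter version. A Keras `Lambda` layer (the tutorial's `euclidean_distance`) is saved by serializing the function's raw bytecode, so loading the model under a different Python minor version than the one that saved it breaks exactly like this even when the TF versions match. One common workaround is to save only the weights and rebuild the model from source before loading them; the tiny architecture below is a toy stand-in for the tutorial's build code, not the real one.

```python
import os
import tempfile

import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, layers

def euclidean_distance(vectors):
    # re-declared in code, so nothing has to be deserialized from the model file
    feats_a, feats_b = vectors
    sum_squared = tf.reduce_sum(tf.square(feats_a - feats_b), axis=1, keepdims=True)
    return tf.sqrt(tf.maximum(sum_squared, 1e-7))

def build_siamese():
    # toy stand-in for the tutorial's build_siamese_model (assumption)
    inp = layers.Input(shape=(8,))
    embedder = Model(inp, layers.Dense(4)(inp))
    img_a, img_b = layers.Input(shape=(8,)), layers.Input(shape=(8,))
    dist = layers.Lambda(euclidean_distance)([embedder(img_a), embedder(img_b)])
    return Model([img_a, img_b], layers.Dense(1, activation="sigmoid")(dist))

weights_path = os.path.join(tempfile.mkdtemp(), "siamese.weights.h5")

m_train = build_siamese()
m_train.save_weights(weights_path)   # weights only: no Python bytecode stored

m_test = build_siamese()             # rebuilt from source by the loading process
m_test.load_weights(weights_path)

x = [np.ones((1, 8), "float32"), np.zeros((1, 8), "float32")]
assert np.allclose(m_train(x), m_test(x))
```

The alternative is simply to run `train_siamese_network.py` and `test_siamese_network.py` under the exact same Python minor version, which the conda env above may not guarantee if the model file was produced elsewhere.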
|
<python><tensorflow><machine-learning><keras><deep-learning>
|
2023-04-23 13:03:18
| 1
| 453
|
J Xkcd
|
76,084,995
| 16,363,897
|
Numpy Discrete Fourier Transform (fft) on multiple arrays
|
<p>This is a follow-up question to this <a href="https://stackoverflow.com/questions/76081149/polyfit-for-a-seasonality-model">other question</a></p>
<p>I have several quite large pandas DataFrames and I need to apply NumPy's fft function to denoise each row separately.</p>
<p>What I'm doing right now is iterating over the rows of the DataFrame and applying the fft to each row:</p>
<pre><code>df = pd.DataFrame(np.random.rand(10000, 52), columns=range(1, 53))
output = np.empty((0, 52))
for index, row in df.iterrows():
    spectrum = np.fft.rfft(row)
    spectrum[6:] = 0
    verified = np.fft.irfft(spectrum)
    output = np.vstack((output, verified))
</code></pre>
<p>This is the output:</p>
<pre><code> 0 1 2 3 4 5 6 \
0 0.476861 0.449378 0.445224 0.458605 0.480861 0.504024 0.523671
1 0.569499 0.642474 0.691314 0.703127 0.673323 0.607825 0.522125
2 0.229334 0.206852 0.194395 0.186826 0.181918 0.181320 0.189769
3 0.542612 0.485116 0.454579 0.454857 0.480331 0.517998 0.551836
4 0.350204 0.428149 0.532144 0.627614 0.683061 0.680347 0.620361
... ... ... ... ... ... ... ...
9995 0.241540 0.247316 0.296193 0.381337 0.487676 0.595264 0.683786
9996 0.433201 0.386454 0.346898 0.324144 0.324614 0.349595 0.394791
9997 0.585794 0.503882 0.450025 0.438172 0.469075 0.529971 0.598605
9998 0.364178 0.363996 0.400953 0.465722 0.540743 0.605164 0.640928
9999 0.720946 0.693376 0.642622 0.577479 0.510498 0.454305 0.418147
7 8 9 ... 42 43 44 \
0 0.540032 0.557019 0.579607 ... 0.482889 0.561783 0.642733
1 0.437323 0.374242 0.347437 ... 0.294238 0.296055 0.301317
2 0.212893 0.254505 0.314413 ... 0.484699 0.568643 0.623170
3 0.567988 0.559083 0.526314 ... 0.490357 0.514340 0.561120
4 0.522275 0.416836 0.336087 ... 0.326105 0.452154 0.574466
... ... ... ... ... ... ... ...
9995 0.737062 0.746413 0.712124 ... 0.364438 0.427846 0.487851
9996 0.451591 0.509691 0.560211 ... 0.432432 0.453183 0.472881
9997 0.650264 0.665516 0.636145 ... 0.545284 0.532642 0.546533
9998 0.638102 0.597791 0.531701 ... 0.548747 0.573257 0.603792
9999 0.405732 0.414925 0.439214 ... 0.765117 0.746037 0.711731
45 46 47 48 49 50 51
0 0.709887 0.749437 0.753514 0.722475 0.664799 0.594611 0.527631
1 0.305615 0.308494 0.314105 0.329759 0.362792 0.416867 0.489137
2 0.638698 0.614139 0.556706 0.479495 0.397632 0.324239 0.267452
3 0.622705 0.685017 0.731861 0.749860 0.732776 0.683743 0.614625
4 0.658297 0.680878 0.638835 0.549281 0.444148 0.359660 0.324439
... ... ... ... ... ... ... ...
9995 0.531525 0.548069 0.531840 0.484457 0.415239 0.339668 0.276112
9996 0.491950 0.509969 0.524766 0.532484 0.528706 0.510272 0.477096
9997 0.589529 0.652623 0.717826 0.764054 0.774347 0.741951 0.673153
9998 0.630760 0.642783 0.630788 0.591811 0.531082 0.461475 0.400280
9999 0.674761 0.647708 0.639035 0.650343 0.675875 0.704429 0.722997
</code></pre>
<p>On my PC this script takes 5-6 seconds. Considering I have hundreds of DataFrames to run the script on, the whole process would take a lot of time.
Is there a way to apply the fft function to the whole DataFrame, or any way to make the script faster?</p>
<p>Thanks</p>
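A sketch of the vectorized version (not from the original post): `np.fft.rfft` and `np.fft.irfft` accept 2-D input plus an `axis` argument, so the whole frame can be transformed in a single call, removing both the Python loop and the repeated `np.vstack` copies while reproducing the loop's output exactly.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(10000, 52), columns=range(1, 53))

values = df.to_numpy()
spectrum = np.fft.rfft(values, axis=1)   # one FFT per row, all at once
spectrum[:, 6:] = 0                      # zero the high-frequency bins, as before
output = np.fft.irfft(spectrum, n=52, axis=1)
```

The `n=52` argument pins the inverse-transform length explicitly, which matters for even/odd row widths; wrapping `output` back into a DataFrame with `df.index`/`df.columns` restores the original labels if needed.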
|
<python><pandas><numpy>
|
2023-04-23 13:01:51
| 1
| 842
|
younggotti
|
76,084,776
| 7,115,122
|
How to implement stripe subscription with three months initial upfront payment
|
<p>I'm building a checkout page where a user subscribes to a $10 /month product. However, when the user signs up, they are required to pay for the first three months upfront (that's $30). If they wish to auto renew their subscription thereafter, we'll fallback to them paying $10 each month.</p>
<p>The frontend is a Single Page React app while the backend is built with python (flask). I'm collecting their payment details on the frontend using the <code>CardElement</code>. Next I create their payment method and send the details to the backend as follows</p>
<pre class="lang-js prettyprint-override"><code>const stripe = useStripe();
const elements = useElements();
...
const { paymentMethod } = await stripe.createPaymentMethod({
  element: elements.getElement("card")!,
});

// this is a post request to the backend
const subscriptionRes = await ApiRequestAction.subscribeAccount({
  paymentMethodId: paymentMethod.id,
  ...
});
</code></pre>
<p>On the backend, I start by creating the customer with the payment details as follows</p>
<pre class="lang-py prettyprint-override"><code>...
request_data = request.get_json()
payment_method_id = request_data["paymentMethodId"]  # The payment method id sent from the frontend
...
stripe_customer = stripe.Customer.create(
    email=...,
    name=...,
    payment_method=payment_method_id,
    invoice_settings={"default_payment_method": payment_method_id},
)
</code></pre>
<p>This is where my nightmare begins. My first approach to solving the problem is to create a <code>Subscription Schedule</code> with two phases. The first phase has an <code>iteration</code> of <strong>3</strong> and the second phase has an <code>iteration</code> of <strong>1</strong> as follows</p>
<pre class="lang-py prettyprint-override"><code>...
subscription_sch = stripe.SubscriptionSchedule.create(
customer=stripe_customer.id,
start_date='now',
end_behavior='release',
phases=[
{
'items': [{'price': stripe_price_id, 'quantity': 3}],
'iterations': 3,
},
{
'items': [{'price': stripe_price_id, 'quantity': 1}],
'iterations': 1,
},
],
)
</code></pre>
<p>But with this approach,</p>
<ul>
<li>I'm not sure how to finalize the payment on the frontend. If I were to create a more traditional <code>Subscription</code> instance, I'll retrieve the <code>latest_invoice</code> and send the <code>PaymentIntent</code> secret of that invoice to the frontend for the payment to be completed. <a href="https://stripe.com/docs/billing/subscriptions/overview#invoice-lifecycle" rel="nofollow noreferrer">However, from what I read</a>, the user will only be charged one hour after the first invoice is generated which is not ideal for my use-case.</li>
<li>I'm also not sure how to complete the payment on the frontend</li>
</ul>
<p>The second approach was to create a one time $30 invoice pay-off for those first three months and then create a <code>Subscription</code> for them thereafter but I'm not sure how to go about that.</p>
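<p>To be concrete about that second approach, here's the shape of the parameters I think I'd pass to <code>stripe.InvoiceItem.create</code> / <code>stripe.Subscription.create</code> — an untested sketch with hypothetical IDs, using <code>trial_end</code> to delay the recurring billing, and a 90-day approximation of "three months":</p>

```python
from datetime import datetime, timedelta, timezone

PRICE_CENTS = 1000  # the $10/month price, in cents

def upfront_invoice_item_params(customer_id):
    # one-time $30 charge covering the first three months
    return {
        "customer": customer_id,
        "amount": 3 * PRICE_CENTS,
        "currency": "usd",
        "description": "First 3 months (upfront)",
    }

def subscription_params(customer_id, price_id, now):
    # normal $10/month subscription whose billing only starts once the
    # prepaid period ends, by setting trial_end roughly 3 months out
    trial_end = now + timedelta(days=90)
    return {
        "customer": customer_id,
        "items": [{"price": price_id}],
        "trial_end": int(trial_end.timestamp()),
    }

params = subscription_params("cus_123", "price_123", datetime.now(timezone.utc))
print(upfront_invoice_item_params("cus_123")["amount"])  # 3000
```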
<p>I'd love to know your thoughts on how you implemented something similar, or how you think this should be properly implemented. Thanks in advance.</p>
|
<python><reactjs><stripe-payments>
|
2023-04-23 12:15:46
| 1
| 1,354
|
Cels
|
76,084,731
| 5,016,440
|
Returning function from compiled string in Python
|
<p>I am trying to create a function from a string and return it from another Python function. Example below</p>
<pre><code>import typing as t

def compile_functions() -> t.Callable:
code = """def f():\n\treturn 1"""
compiled = compile(code, "<string>", "exec")
exec(code)
return f
</code></pre>
<p>However this produces <code>NameError: name 'f' is not defined</code>. What am I missing?</p>
|
<python>
|
2023-04-23 12:06:10
| 1
| 455
|
nestor556
|
76,084,676
| 10,313,194
|
Django Cannot keep last 3 row of database
|
<p>I want to keep the latest 3 rows for each key and delete the oldest rows if a key has more than 3 rows. I have sample data like this.</p>
<pre><code>id value key created
1 a1 001 2023-04-23 01:01:00 <= delete oldest of key 001
2 a2 001 2023-04-23 01:02:00
3 a3 001 2023-04-23 01:03:00
4 a4 001 2023-04-23 01:04:00
5 a5 002 2023-04-23 01:05:00 <= delete oldest of key 002
6 a6 002 2023-04-23 01:06:00
7 a5 002 2023-04-23 01:07:00
8 a6 002 2023-04-23 01:08:00
</code></pre>
<p>I get the latest 3 rows ordered by created and delete the oldest with this code.</p>
<pre><code>if Key.objects.filter(key=key).exists():
objects_to_keep = Data.objects.filter(key=key).order_by('-created').values_list("id", flat=True)[:3]
objects_to_keep = list(objects_to_keep)
Data.objects.exclude(pk__in=objects_to_keep).delete()
</code></pre>
<p>If I add a new row with key=001 it removes all data of key 002, and likewise when adding a new row with key=002 it removes all data of key 001. The output should be like this.</p>
<pre><code>id value key created
2 a2 001 2023-04-23 01:02:00
3 a3 001 2023-04-23 01:03:00
4 a4 001 2023-04-23 01:04:00
6 a6 002 2023-04-23 01:06:00
7 a5 002 2023-04-23 01:07:00
8 a6 002 2023-04-23 01:08:00
</code></pre>
<p>How to fix it?</p>
|
<python><django><django-views>
|
2023-04-23 11:54:28
| 1
| 639
|
user58519
|
76,084,565
| 1,145,011
|
Unable to connect to Snowflake to Python
|
<p>Hi,</p>
<p>I'm unable to connect to Snowflake using Python. Via browser I'm able to connect using the same URL and password.
Python version: 3.11
Snowflake connector version: 3.0.2</p>
<p>I installed snowcd to investigate further and got the error below, but I'm not able to decode it.</p>
<pre><code>Check for 4 hosts failed, display as follow:

Host: api-35a58de5.duosecurity.com
Port: 443
Type: DUO_SECURITY
Failed Check: HTTP checker
Error: http check timeout
Suggestion: Check the connection to your http host or transparent Proxy

==============================================
Host: o.ss2.us
Port: 80
Type: OCSP_RESPONDER
Failed Check: HTTP checker
Error: http check timeout
Suggestion: Check the connection to your http host or transparent Proxy

==============================================
Host: app.snowflake.com
Port: 443
Type: SNOWSIGHT_DEPLOYMENT
Failed Check: HTTP checker
Error: http check timeout
Suggestion: Check the connection to your http host or transparent Proxy

==============================================
Host: apps-api.c1.us-west-2.aws.app.snowflake.com
Port: 443
Type: SNOWSIGHT_DEPLOYMENT
Failed Check: Certificate Check
Error: certificate checker timeout
Suggestion: Check your connection to apps-api.c1.us-west-2.aws.app.snowflake.com
</code></pre>
|
<python><snowflake-cloud-data-platform>
|
2023-04-23 11:30:20
| 1
| 1,551
|
user166013
|
76,084,523
| 3,668,129
|
How to downcast sample rate of ndarray (without saving it as wav file)
|
<p>I have a <code>ndarray</code> (which I got from microphone) with samples (sample rate = 48000).</p>
<p>I want to create a new <code>ndarray</code> with lower sample rate (16000).</p>
<p>One option is to save this <code>ndarray</code> as <code>wav</code> file and reload it with different sample rate (via <code>librosa</code>).</p>
<p>Is there a simple way to do it without saving the <code>ndarray</code> as a wav file?</p>
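<p>For reference, since 48000 is an exact multiple of 16000, a naive in-memory sketch is just decimation (numpy only; a proper solution should low-pass filter first to avoid aliasing, which is part of why I'm asking):</p>

```python
import numpy as np

sr_in, sr_out = 48000, 16000
factor = sr_in // sr_out  # 3, since 48000 is an exact multiple of 16000

samples = np.random.randn(sr_in)   # one second of fake microphone audio
decimated = samples[::factor]      # keep every 3rd sample -- no anti-aliasing!

print(decimated.shape)  # (16000,)
```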
|
<python><python-3.x><librosa>
|
2023-04-23 11:18:55
| 1
| 4,880
|
user3668129
|
76,084,358
| 19,318,120
|
Django VideoField
|
<p>I want to make a custom FileField that only allows video uploads.
I'm trying to use pymediainfo to make sure the uploaded file is a video, but I keep getting FileNotFoundError.</p>
<p>here's my code</p>
<pre><code>from django.core.exceptions import ValidationError
from django.db import models
from pymediainfo import MediaInfo
from django.db.models.fields.files import FieldFile
class VideoField(models.FileField):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def validate(self, file : FieldFile, model_instance):
super().validate(file, model_instance)
media_info = MediaInfo.parse(file.path)
if media_info.tracks[0].track_type != 'Video':
raise ValidationError('The file must be a video.')
</code></pre>
<p>and here's the model code</p>
<pre><code>def lecture_video_handler(instance, filename):
    return f"chapters/{instance.chapter_id}/lectures/{instance.id}/{filename}"
class Lecture(models.Model):
chapter = models.ForeignKey('course.Chapter', models.CASCADE)
video = VideoField(upload_to=lecture_video_handler)
</code></pre>
<p>in settings.py I'm using the default file storage not a custom one</p>
<pre><code>MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
</code></pre>
<p>what am I doing wrong?</p>
|
<python><django>
|
2023-04-23 10:37:38
| 2
| 484
|
mohamed naser
|
76,084,214
| 8,445,442
|
What is the recommended number of threads for PyTorch relative to available CPU cores?
|
<p>First I want to say that I don't have much experience with pytorch, ML, NLP and other related topics, so I may confuse some concepts. Sorry.</p>
<p>I downloaded few models from Hugging Face, organized them in one Python script and started to perform benchmark to get overview of performance. During benchmark I monitored CPU usage and saw that only 50% of CPU was used. I have 8 vCPU, but only 4 of them are loaded at 100% at the same time. The load is jumping, i.e. there may be cores 1, 3, 5, 7 that are loaded at 100%, then cores 2, 4, 6, 8 that are loaded at 100%. But in total CPU load never raises above 50%, it also never goes below 50%. This 50% load is constant.</p>
<p>After quick googling I found the <a href="https://pytorch.org/docs/stable/torch.html#parallelism" rel="nofollow noreferrer">parallelism doc</a>. I called <code>get_num_threads()</code> and <code>get_num_interop_threads()</code> and the output was <code>4</code> for both calls. That's only 50% of the available CPU cores, which kind of explains why CPU load was at 50%.</p>
<p>Then I called <code>set_num_threads(8)</code> and <code>set_num_interop_threads(8)</code>, and then performed the benchmark. CPU usage was at a constant 100%. In general performance was a bit faster, but some models started to work a bit more slowly than at 50% of CPU.</p>
<p>So I wonder why pytorch by default uses only half of CPU? It is optimal and recommended way? Should I manually call <code>set_num_threads()</code> and <code>set_num_interop_threads()</code> with all available CPU cores if I want to achieve best performance?</p>
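<p>For context, this stdlib-only sketch shows the arithmetic behind my numbers: <code>os.cpu_count()</code> reports logical cores (8 in my case), while PyTorch's default seems to track physical cores — on a hyper-threaded machine that is typically half (the halving here is my assumption, not something I verified in PyTorch's source):</p>

```python
import os

logical = os.cpu_count() or 1
# heuristic: with SMT/hyper-threading, physical cores ~= logical // 2,
# which would match the 4 threads torch reported on my 8-vCPU machine
physical_guess = max(1, logical // 2)

print(logical, physical_guess)
```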
<p>Edit.</p>
<p>I made an additional benchmarks:</p>
<ul>
<li>one pytorch process with 50% of vCPU is a bit faster than one pytorch process with 100% of vCPU. Earlier it was vice versa, so I think it depends on models that are being used.</li>
<li>two pytorch concurrent processes with 50% of vCPU will handle more inputs than one pytorch process with 50% of vCPU, but it is not 2x increase, it is ~1.2x increase. Process time of one input is much slower than with one pytorch process.</li>
<li>two pytorch concurrent processes with 100% of vCPU can't complete even one input. I guess CPU is constantly switching between these processes.</li>
</ul>
<p>So thank you to Phoenix's answer, I think it is completely reasonable to use pytorch default settings which sets number of threads according to number of physical (not virtual) cores.</p>
<p>Edit.</p>
<p>pytorch documentation about this - <a href="https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html</a></p>
|
<python><pytorch><parallel-processing><huggingface-transformers><huggingface>
|
2023-04-23 10:02:53
| 1
| 1,043
|
Amaimersion
|
76,084,159
| 12,466,687
|
Unable to change default selection in streamlit multiselect()
|
<p>I am new to <code>streamlit</code> apps and facing an <strong>issue</strong>: I am <strong>not able to select any other <code>option</code></strong> than the <strong>default</strong> one when using <code>st.multiselect()</code>. I have also tried <code>st.selectbox</code> but faced an issue with that as well.</p>
<p><strong>Demo web app</strong> link for issue: <a href="https://party-crime-record.streamlit.app/" rel="nofollow noreferrer">https://party-crime-record.streamlit.app/</a></p>
<p><strong>Web page Code</strong> on github: <a href="https://github.com/johnsnow09/Party_Criminal_Records/blob/main/1_Analysis_doubt_github.py" rel="nofollow noreferrer">Code link</a></p>
<p><code>multiselect</code> option code chunk starts from <strong>line number: 46</strong></p>
<p><strong>Snapshot</strong> of <code>multiselect</code> <strong>code chunk</strong> is provided below:</p>
<pre><code>with st.sidebar:
State_List = df.lazy().select(pl.col('State')).unique().collect().to_series().to_list()
# State_Selected = st.selectbox(label="Select State",
# options = State_List)
State_Selected = st.multiselect(label="Select State",
options = State_List,
default = ["Uttar Pradesh"], # Delhi West Bengal
# default = State_List[-1],
max_selections=1
)
</code></pre>
|
<python><streamlit><python-polars>
|
2023-04-23 09:51:31
| 3
| 2,357
|
ViSa
|
76,084,157
| 6,223,328
|
Python inherit methods depending on condition in init
|
<p>Let's say I have two classes:</p>
<pre class="lang-py prettyprint-override"><code>class ImgClass():
"""
A class based on an image
"""
def __init__(self):
...
def i_am_an_img(self):
...
class PdfClass():
"""
A class based on a pdf
"""
def __init__(self):
...
def i_am_a_pdf(self):
...
</code></pre>
<p>I want to create a simple wrapper that goes something like this:</p>
<pre class="lang-py prettyprint-override"><code>class Wrapper():
"""
The main wrapper that decides the bases class
"""
def __init__(self, file):
file = Path(file)
if file.suffix == '.jpeg':
ImgClass.__init__(self)
else:
PdfClass.__init__(self)
</code></pre>
<p>but this does not give me the methods from the parent class (obviously, since I do not inherit). If I use inheritance (of both), I get all methods.
So I'm looking for a way to <strong>init and inherit</strong> just one of the two base classes during wrapper init. Is this possible?</p>
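<p>To illustrate the "inheritance of both" situation I mentioned, here's a trimmed-down runnable sketch (the real classes do more):</p>

```python
from pathlib import Path

class ImgClass:
    def i_am_an_img(self):
        return "img"

class PdfClass:
    def i_am_a_pdf(self):
        return "pdf"

class Wrapper(ImgClass, PdfClass):
    """Inherits from BOTH bases, so every instance gets every method."""
    def __init__(self, file):
        self.file = Path(file)

w = Wrapper("photo.jpeg")
print(w.i_am_an_img(), w.i_am_a_pdf())  # both available, regardless of suffix
```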
|
<python><class><subclass>
|
2023-04-23 09:50:50
| 1
| 357
|
Max
|
76,084,128
| 11,246,056
|
How to update a single value in Polars dataframe?
|
<p>In Pandas, you can update a value with the <strong><a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.at.html" rel="nofollow noreferrer">at</a></strong> property, like this:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({"col1": [1, 2, 3], "col2": [4, 5, 6]})
df_pd = df.to_pandas()
df_pd.at[2, "col2"] = 99
pl.from_pandas(df_pd)
</code></pre>
<pre><code>shape: (3, 2)
┌──────┬──────┐
│ col1 ┆ col2 │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞══════╪══════╡
│ 1 ┆ 4 │
│ 2 ┆ 5 │
│ 3 ┆ 99 │
└──────┴──────┘
</code></pre>
<p>What's the idiomatic way to do that in Polars?</p>
|
<python><dataframe><python-polars><setvalue>
|
2023-04-23 09:43:10
| 2
| 13,680
|
Laurent
|
76,084,095
| 7,331,538
|
Airflow with docker not finding local python scripts and directories (ModuleNotFoundError)
|
<p>I am using Airflow with the default <code>docker-compose.yaml</code> file. I have my own source directories which I want to run from <code>DAGs</code>. Structure is like so:</p>
<pre><code>root_package
├── __init__.py
├── __pycache__
├── foo1.py
├── dags
│ ├── __init__.py
│ ├── __pycache__
│ └── update.py
├── subdir
│ ├── __init__.py
│ ├── __pycache__
│ └── foo2.py
└── plugins
</code></pre>
<p>First thing I have to do is add this root directory as volume in the docker-compose.yaml file. This is already done by default for the <code>/dags</code> and <code>/plugins</code> directory. I just added the full root as well like so:</p>
<pre><code>volumes:
- ${AIRFLOW_PROJ_DIR:-.}/dags:/opt/airflow/dags
- ${AIRFLOW_PROJ_DIR:-.}/logs:/opt/airflow/logs
- ${AIRFLOW_PROJ_DIR:-.}/plugins:/opt/airflow/plugins
- ${AIRFLOW_PROJ_DIR:-.}:/opt/airflow
</code></pre>
<p>Secondly, I need to add my python directories to <code>PYTHONPATH</code> because I need to import them from inside a <code>dag</code> (ex: update.py):</p>
<pre><code>from root_package import foo1
with DAG(
dag_id="id123", schedule_interval=None, start_date=datetime(2023, 4, 21)
) as dag:
# call foo1 functions from a PythonOperator
...
</code></pre>
<p>Currently I am getting <code>ModuleNotFoundError: No module named 'root_package'</code> as a webserver error.</p>
<p>How can I add PYTHONPATH within the docker-compose file so that Airflow picks up my root directory?</p>
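<p>For completeness, this is the kind of override I imagine adding to the compose file — an untested sketch, and I don't know whether this is the right place for it:</p>

```yaml
# docker-compose.yaml excerpt -- untested sketch
x-airflow-common:
  environment:
    # make /opt/airflow importable so `from root_package import foo1` resolves
    PYTHONPATH: /opt/airflow
```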
|
<python><docker><airflow><python-module>
|
2023-04-23 09:35:24
| 1
| 2,377
|
bcsta
|
76,084,046
| 14,562,965
|
Terraform showing only default workspace in Jenkins pipeline
|
<p>I have a Jenkins pipeline I am using to perform terraform actions. The Jenkins pipeline runs python scripts to perform those actions (and some more). One of those actions is <code>terraform workspace select</code>, and the python function is this:</p>
<pre><code>def terraform_plan(options):
    provider = get_cluster_info(options)['provider'].upper()  # returns "AWS"
    workspace = get_cluster_info(options)['terraform_workspace']  # returns abc
    working_dir = f'{REPO_PATH}/{provider}/K8S'  # returns "./repo_path/AWS/K8S"
    run_subprocess(f"/usr/local/bin/terraform -chdir={working_dir} workspace list")
    run_subprocess(f"/usr/local/bin/terraform -chdir={working_dir} workspace select {workspace}")
</code></pre>
<p>(NOTE: <code>run_subprocess</code> only executes the command in the parenthesis and prints some other lines).</p>
<p>The pipeline also runs <code>run_subprocess(f"/usr/local/bin/terraform -chdir={working_dir} init")</code> before the function above, and the init is run successfully (according to the output.</p>
<p>If I run the pipeline, the output says that the <code>abc</code> workspace does not exist, and the <code>terraform workspace list</code> before that shows only the default workspace.
However, if I run <code>terraform workspace list</code> in the terminal (in the exact same directory where the pipeline runs), I get all of the workspaces as normal.</p>
<p>I tried running the <code>terraform workspace</code> commands in different order, made sure the init part is successful, and made sure all the files are present.</p>
<p>How can I fix this? Why is this even happening?</p>
|
<python><jenkins><terraform><pipeline>
|
2023-04-23 09:20:42
| 0
| 302
|
Tom Sebty
|
76,084,030
| 16,363,897
|
Get the (dynamic) nth column value for each row in pandas dataframe
|
<p>I have the following dataframe "df1":</p>
<pre><code> reference
date
2023-01-01 1
2023-01-02 2
2023-01-03 3
2023-01-04 4
</code></pre>
<p>I have another dataframe "df2":</p>
<pre><code> 1 2 3 4
date
2023-01-01 9 7 9 3
2023-01-02 6 5 5 6
2023-01-03 9 3 7 5
2023-01-04 2 4 0 4
</code></pre>
<p>I want to add a new column in df1 where, for each row, I need to point to the cell value from df2 where the row index is the same as df1 (so the date) and the column is the reference column from df1.
This is the expected output:</p>
<pre><code> reference output
date
2023-01-01 1 9
2023-01-02 2 5
2023-01-03 3 7
2023-01-04 4 4
</code></pre>
<p>For example, 9 on 2023-01-01 is the value of column "1", row "2023-01-01" from df2.</p>
<p>5 on 2023-01-02 is the value of column "2", row "2023-01-02" from df2, and so on.</p>
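<p>For a runnable version of the sample data, this positional-indexing attempt gives the output I described — though I don't know whether it's the idiomatic way, hence the question:</p>

```python
import numpy as np
import pandas as pd

dates = pd.to_datetime(["2023-01-01", "2023-01-02", "2023-01-03", "2023-01-04"])
df1 = pd.DataFrame({"reference": [1, 2, 3, 4]}, index=dates)
df2 = pd.DataFrame(
    [[9, 7, 9, 3], [6, 5, 5, 6], [9, 3, 7, 5], [2, 4, 0, 4]],
    index=dates, columns=[1, 2, 3, 4],
)

# align rows by date and columns by each row's reference, then fancy-index
row_idx = df2.index.get_indexer(df1.index)
col_idx = df2.columns.get_indexer(df1["reference"])
df1["output"] = df2.to_numpy()[row_idx, col_idx]

print(df1["output"].tolist())  # [9, 5, 7, 4]
```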
<p>Any ideas of how to do it?</p>
|
<python><pandas>
|
2023-04-23 09:16:07
| 2
| 842
|
younggotti
|
76,084,026
| 15,742,150
|
How to mock method in another class method?
|
<p>I've got a method in the <code>notifier</code> file that I want to test:</p>
<pre><code>def notify():
    Class_A.another_class_method()


class Class_A:
    @classmethod
    def prepare(cls):
        helper_method()

    @classmethod
    def another_class_method(cls):
        cls.prepare()
</code></pre>
<p>I need to mock helper_method.
How can I do this?</p>
<p>I'm trying to patch it like:</p>
<pre><code>with patch(
'notifier.notify.Class_A.another_class_method.prepare.helper_method',
return_value=True
)
</code></pre>
<p>but it does not work.</p>
|
<python><unit-testing><python-unittest>
|
2023-04-23 09:15:10
| 1
| 539
|
akonovalov
|
76,084,000
| 5,319,180
|
Asyncio async_wrap to convert sync functions to async in python. How does it work?
|
<p>I have a huge chunk of legacy sync code (a huge function which calls other sync functions, includes sync HTTP API calls with the <code>requests</code> library, and so on). This function is run as part of a queue worker (which picks up tasks from the queue and executes this function).
Facing performance issues, moving to an async function seemed like a way out.</p>
<p>To save time and avoid converting every single function to <code>async def</code>, I used <code>anyio.to_thread.run_sync</code>. Performance looked great until it broke midway (works 9/10 times) without any errors being thrown. The queue worker times out with no errors caught.</p>
<p>I'm planning to move to <a href="https://dev.to/0xbf/turn-sync-function-to-async-python-tips-58nn" rel="nofollow noreferrer">https://dev.to/0xbf/turn-sync-function-to-async-python-tips-58nn</a> asyncio (although I don't expect much, since anyio already uses asyncio as a backend).</p>
<p>But help me understand this. <strong>How is converting a sync function to run in a thread speeding up my worker? Will the GIL not block the main thread when the new thread is run to completion?</strong> Or perhaps when the second thread is idle doing I/O, it gets context switched with the main thread?</p>
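<p>For concreteness, here is the stdlib equivalent of what I'm doing with anyio — <code>asyncio.to_thread</code> likewise just pushes the sync call onto a worker thread, and the loop keeps scheduling other tasks while those threads block:</p>

```python
import asyncio
import time

def legacy_sync(x):
    # stands in for my real sync function (blocking I/O, requests calls, ...)
    time.sleep(0.01)
    return x * 2

async def main():
    # while a worker thread blocks in time.sleep, the GIL is released,
    # so the event loop can keep driving the other calls concurrently
    return await asyncio.gather(*(asyncio.to_thread(legacy_sync, i) for i in range(3)))

print(asyncio.run(main()))  # [0, 2, 4]
```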
<p>As for my concrete problem, I believe it's happening because of the increasing number of total threads in the system. Is it fair to assume so, since I've assigned just 0.25 vCPU to the worker container? Also, are threads CPU intensive or memory intensive? In the context of Python (GIL), it doesn't make sense to have threads for a CPU-intensive workload, right? Does it mean more threads implies more load on memory and not on CPU per se?</p>
|
<python><multithreading><python-asyncio><python-anyio>
|
2023-04-23 09:08:52
| 1
| 429
|
D.B.K
|
76,083,975
| 4,780,574
|
SQLAlchemy declarative dataclass mapping: How to create non-required columns?
|
<p>I am trying to figure out how to use declarative mapping with dataclasses in SQLAlchemy.</p>
<p>Setup:</p>
<pre><code>
from dotenv import load_dotenv
import os
from sqlalchemy.engine import URL
load_dotenv()
url_object = URL.create(
"postgresql+psycopg2",
username=os.environ.get('DBUSER'),
password=os.environ.get('DBPASS'), # plain (unescaped) text
host=os.environ.get('DBIP'),
database=os.environ.get('DBNAME'),
port=os.environ.get('DBPORT')
)
from sqlalchemy import create_engine, Table, Column, ForeignKey, Integer, String, Boolean
engine = create_engine(url_object,
echo=True)
from typing_extensions import Annotated
from typing import List
from typing import Optional
from sqlalchemy import ForeignKey
from sqlalchemy import String
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import MappedAsDataclass
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
from sqlalchemy.orm import registry
reg = registry()
class Base(MappedAsDataclass, DeclarativeBase):
"""subclasses will be converted to dataclasses"""
</code></pre>
<p>I can make a simple mapped class and associated table like this:</p>
<pre><code>class Test(Base):
__tablename__ = 'test'
id: Mapped[int] = mapped_column(init=False, primary_key=True)
intVar: Mapped[int]
strVar: Mapped[str]
Test.__table__
Base.metadata.create_all(engine)
</code></pre>
<p>The table is created as expected, with columns for 'id', 'intVar', and 'strVar'</p>
<p>However, when I try to create variables with defaults, those columns are not created. When I look at the table test2, I only see the first four columns, not the last three. All of the variables whose columns were not created have default values:</p>
<pre><code>import datetime
class Test2(Base):
__tablename__ = 'test2'
id: Mapped[int] = mapped_column(init=False, primary_key=True)
intVar: Mapped[int]
strVar: Mapped[str]
anotherVar: Mapped[str]
optVar: Mapped[Optional[str]] = ''
yaVAR: Mapped[str] = ''
date: Mapped[datetime.datetime] = datetime.datetime.now()
Test2.__table__
Base.metadata.create_all(engine)
</code></pre>
<p>However, as far as I can tell, I need default values for variables that are optional, otherwise if I try to create an instance of the class without the "optional" variable, it complains about missing positional arguments:</p>
<pre><code>import datetime
class Test3(Base):
__tablename__ = 'test3'
id: Mapped[int] = mapped_column(init=False, primary_key=True)
intVar: Mapped[int]
strVar: Mapped[str]
anotherVar: Mapped[str]
optVar: Mapped[Optional[str]]
yaVAR: Mapped[str]
date: Mapped[datetime.datetime] = datetime.datetime.now()
Test3.__table__
Base.metadata.create_all(engine)
t3 = Test3(intVar = 0, strVar = 'abc', anotherVar = 'xyz', optVar = "bs", yaVAR='morebs', date = datetime.datetime.now())
t4 = Test3(intVar = 0, strVar = 'abc', anotherVar = 'xyz', yaVAR='morebs', date = datetime.datetime.now())
t4 = Test3(intVar = 0, strVar = 'abc', anotherVar = 'xyz', yaVAR='morebs', date = datetime.datetime.now())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() missing 1 required positional argument: 'optVar'
</code></pre>
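<p>The missing-positional-argument part at least seems to be plain dataclass semantics, independent of SQLAlchemy, as this stdlib-only mock shows (hypothetical names):</p>

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Mock:
    intVar: int
    optVar: Optional[str] = None  # a default makes it omittable in __init__

m = Mock(intVar=1)  # no TypeError, optVar falls back to its default
print(m)  # Mock(intVar=1, optVar=None)
```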
<h3>So how do I make a dataclass with optional variables and have the columns get created when the table is created?</h3>
|
<python><sqlalchemy><orm><default-value><python-dataclasses>
|
2023-04-23 09:03:50
| 1
| 814
|
Stonecraft
|
76,083,927
| 4,405,160
|
FastAPI returns 404 'detail not found' when using multiple Optional path parameters
|
<p>I'm trying to create this FastAPI app using three optional path parameters.
I tried expanding the code into the three optional inputs shown below, but it now forces me to input all three; otherwise, a 404 'detail not found' error is raised.</p>
<pre><code>import json
from datetime import date
from typing import Optional
@app.get("/order/{country}/{menu}/{dt}")
async def ppp(country: Optional[str] = None, menu: Optional[str] = None, dt: Optional[date] = None):
a = {}
if country is not None:
a["a"] = country
else:
return "nothing inside"
if menu is not None:
a["b"] = menu
else:
return a
if dt is not None:
a["c"] = dt
return a
</code></pre>
<p><strong>Expected result:</strong></p>
<p><strong>url input:</strong>
<a href="http://127.0.0.1:8000/order/usa/vegan/2023-01-01" rel="nofollow noreferrer">http://127.0.0.1:8000/order/usa/vegan/2023-01-01</a></p>
<p><strong>correct output:</strong>
{"a":"usa","b":"vegan","c":"2023-01-01"}</p>
<p><strong>However:</strong></p>
<hr />
<p>(1/3)</p>
<p><strong>url input:</strong> <a href="http://127.0.0.1:8000/order/usa/vegan/" rel="nofollow noreferrer">http://127.0.0.1:8000/order/usa/vegan/</a></p>
<p><strong>output:</strong> {"detail":"Not Found"}</p>
<p><strong>expected output:</strong> {"a":"usa","b":"vegan"}</p>
<hr />
<p>(2/3)</p>
<p><strong>url input:</strong> <a href="http://127.0.0.1:8000/order/usa/" rel="nofollow noreferrer">http://127.0.0.1:8000/order/usa/</a></p>
<p><strong>output:</strong> {"detail":"Not Found"}</p>
<p><strong>expected output:</strong> {"a":"usa"}</p>
<hr />
<p>(3/3)</p>
<p><strong>url input:</strong> <a href="http://127.0.0.1:8000/order/" rel="nofollow noreferrer">http://127.0.0.1:8000/order/</a></p>
<p><strong>output:</strong> {"detail":"Not Found"}</p>
<p><strong>expected output:</strong> "nothing inside"</p>
<hr />
<p>All three errors show '404 Not Found' in my uvicorn terminal.</p>
|
<python><fastapi><optional-parameters>
|
2023-04-23 08:51:01
| 0
| 574
|
beavis11111
|
76,083,905
| 19,325,656
|
AttributeError: 'ContactSerializer' object has no attribute 'get_status'
|
<p>Hi all, I have this error when I'm trying to save my serializer:</p>
<pre><code>AttributeError: 'ContactSerializer' object has no attribute 'get_status'
</code></pre>
<p>I render my serializer as a form, so I've created a custom template for forms just to add new classes.</p>
<pre><code>class ContactSerializer(serializers.ModelSerializer):
status = serializers.SerializerMethodField(read_only=True)
name = serializers.CharField(
style={
"template": TEMPLATE_PATH,
"class": "custom_form_control",
"input_type": "input"
}
)
email = serializers.CharField(
style={
"template": TEMPLATE_PATH,
"class": "custom_form_control",
"input_type": "input"
}
)
subject = serializers.CharField(
style={
"template": TEMPLATE_PATH,
"class": "custom_form_control",
"input_type": "select",
"choice": Contact.SUBJECT
}
)
message = serializers.CharField(
style={
"template": TEMPLATE_PATH,
"input_type": "textarea",
}
)
class Meta:
model = Contact
fields = "__all__"
</code></pre>
<p>And everything works great, but when I'm trying to POST it (save it) I get the attribute error.</p>
<p>My guess is that this is related to the status field that is read-only and defaults to "New".</p>
<p><strong>models</strong></p>
<pre><code>class Contact(models.Model):
APP = 'app support'
PAY = 'payment support'
HR = 'HR/Jobs'
OTHER = 'other'
SUBJECT = [
(APP, _('App Support')),
(PAY, _('Payment Support')),
(HR, _('HR & Jobs')),
(OTHER, _('Non related (Other)'))
]
STATUS = (
('New', _('New')),
('In Progres', _('In Progres - somone is taking an action')),
('Resolved', _('Resolved - action was made')),
)
name = models.CharField(blank=False, max_length=50, validators=[MinLengthValidator(4)])
email = models.EmailField(max_length=55, blank=False)
subject = models.CharField(choices=SUBJECT, max_length=15, blank=False)
message = models.TextField(max_length=500, blank=False)
status = models.CharField(choices=STATUS, max_length=15, default=STATUS[0][0])
def __str__(self):
return str(f'{self.subject} is {self.status}')
</code></pre>
<p><strong>views</strong></p>
<pre><code>class ContactAPIView(APIView):
renderer_classes = [TemplateHTMLRenderer]
template_name = 'contact.html'
def get(self, request):
serializer = ContactSerializer()
return Response({'serializer': serializer})
def post(self, request):
serializer = ContactSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_200_OK)
        return Response(serializer.errors, status=status.HTTP_422_UNPROCESSABLE_ENTITY)
</code></pre>
|
<python><django><django-rest-framework><django-serializer>
|
2023-04-23 08:47:20
| 1
| 471
|
rafaelHTML
|
76,083,492
| 7,133,942
|
Tensorflow does not detect GPU altough the GPU driver and Cuda are installed
|
<p>I have an Nvidia GPU (GeForce RTX 3090) and the driver is displayed in the Nvidia Control Panel. I also have installed the latest version of CUDA. However, when using the following code in Python with TensorFlow:</p>
<pre><code>gpus = tf.config.list_physical_devices('GPU')
if not gpus:
print("No GPUs detected")
else:
print("GPUs detected:")
for gpu in gpus:
print(gpu)
</code></pre>
<p>It always shows me that no GPU is detected. Can you tell me what I have to do in order to make TensorFlow use the GPU?</p>
<p><strong>EDIT</strong>: I am using PyCharm and downloaded Python directly (so I don't use something like Anaconda).</p>
<p><strong>Update</strong>: Here is the nvidia-smi output from the cmd:</p>
<pre><code>U:\>nvidia-smi
Wed Jul 12 09:13:40 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 531.14 Driver Version: 531.14 CUDA Version: 12.1 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3090 WDDM | 00000000:65:00.0 On | N/A |
| 0% 36C P8 13W / 350W| 2085MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 3252 C+G ...m Files\Mozilla Firefox\firefox.exe N/A |
| 0 N/A N/A 4364 C+G ...h2txyewy\InputApp\TextInputHost.exe N/A |
| 0 N/A N/A 11312 C+G ...soft Office\root\Office16\EXCEL.EXE N/A |
| 0 N/A N/A 19072 C+G ...5n1h2txyewy\ShellExperienceHost.exe N/A |
| 0 N/A N/A 21476 C+G ...m Files\Mozilla Firefox\firefox.exe N/A |
| 0 N/A N/A 23832 C+G ....Search_cw5n1h2txyewy\SearchApp.exe N/A |
| 0 N/A N/A 24544 C+G ..._8wekyb3d8bbwe\Microsoft.Photos.exe N/A |
| 0 N/A N/A 25932 C+G ...x64__8wekyb3d8bbwe\ScreenSketch.exe N/A |
| 0 N/A N/A 33528 C+G ...ekyb3d8bbwe\PhoneExperienceHost.exe N/A |
| 0 N/A N/A 36580 C+G ...on 2022.3.1\jbr\bin\jcef_helper.exe N/A |
| 0 N/A N/A 42128 C+G ...cal\Microsoft\OneDrive\OneDrive.exe N/A |
+---------------------------------------------------------------------------------------+
</code></pre>
<p><strong>Update</strong>: I downgraded to TensorFlow 2.10 and get some new error messages:</p>
<pre><code>2023-07-15 15:15:23.440924: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2023-07-15 15:15:23.441186: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
</code></pre>
|
<python><tensorflow><gpu>
|
2023-04-23 06:50:13
| 4
| 902
|
PeterBe
|
76,083,485
| 2,793,602
|
SHAP - instances that have more than one dimension
|
<p>I am very new to SHAP and I would like to give it a try, but I am having some difficulty.</p>
<p>The model is already trained and seems to perform well. I then use the training data to test SHAP with. It looks like so:</p>
<pre><code> var_Braeburn var_Cripps Pink var_Dazzle var_Fuji var_Granny Smith \
0 1 0 0 0 0
1 0 1 0 0 0
2 0 1 0 0 0
3 0 1 0 0 0
4 0 1 0 0 0
var_Other Variety var_Royal Gala (Tenroy) root_CG202 root_M793 \
0 0 0 0 0
1 0 0 1 0
2 0 0 1 0
3 0 0 0 0
4 0 0 0 0
root_MM106 ... frt_BioRich Organic Compost_single \
0 1 ... 0
1 0 ... 0
2 0 ... 0
3 1 ... 0
4 1 ... 0
frt_Biomin Boron_single frt_Biomin Zinc_single \
0 0 1
1 0 0
2 0 0
3 0 0
4 0 0
frt_Fertco Brimstone90 sulphur_single frt_Fertco Guano _single \
0 0 0
1 0 0
2 0 0
3 0 0
4 0 0
frt_Gro Mn_multiple frt_Gro Mn_single frt_Organic Mag Super_multiple \
0 0 0 0
1 1 0 1
2 1 0 1
3 1 0 1
4 1 0 1
frt_Organic Mag Super_single frt_Other Fertiliser
0 0 0
1 0 0
2 0 0
3 0 0
4 0 0
</code></pre>
<p>I then do <code>explainer = shap.Explainer(model)</code> and <code>shap_values = explainer(X_train)</code></p>
<p>This runs without error and <code>shap_values</code> gives me this:</p>
<pre><code>.values =
array([[[ 0.00775555, -0.00775555],
[-0.03221035, 0.03221035],
[-0.0027203 , 0.0027203 ],
...,
[ 0.00259787, -0.00259787],
[-0.00459262, 0.00459262],
[-0.0303394 , 0.0303394 ]],
[[-0.00068313, 0.00068313],
[-0.03006355, 0.03006355],
[-0.00245706, 0.00245706],
...,
[-0.00418809, 0.00418809],
[-0.00088372, 0.00088372],
[-0.00030019, 0.00030019]],
[[-0.00068313, 0.00068313],
[-0.03006355, 0.03006355],
[-0.00245706, 0.00245706],
...,
[-0.00418809, 0.00418809],
[-0.00088372, 0.00088372],
[-0.00030019, 0.00030019]],
...,
</code></pre>
<p>However, when I then run <code>shap.plots.beeswarm(shap_values)</code>, I get the following error:</p>
<p><code>ValueError: The beeswarm plot does not support plotting explanations with instances that have more than one dimension!</code></p>
<p>What am I doing wrong here?</p>
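<p>In case a self-contained illustration helps (this is my guess at the cause, not something I can verify against the shap docs here): binary classifiers typically yield explanations shaped <code>(n_samples, n_features, n_classes)</code>, and beeswarm needs a 2-d <code>(n_samples, n_features)</code> explanation, so selecting one class, e.g. <code>shap.plots.beeswarm(shap_values[..., 1])</code>, collapses the extra axis. A numpy-only sketch of that selection:</p>

```python
import numpy as np

# Hypothetical stand-in for shap_values.values from a binary classifier:
# one SHAP column per class, which is why each feature shows the
# mirrored pair (v, -v) in the output above.
values = np.zeros((5, 3, 2))  # (n_samples, n_features, n_classes)

# Selecting the positive class drops the class axis, leaving the
# 2-d (n_samples, n_features) array that beeswarm expects.
positive_class = values[..., 1]
print(positive_class.shape)  # (5, 3)
```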
|
<python><machine-learning><shap>
|
2023-04-23 06:49:07
| 1
| 457
|
opperman.eric
|
76,083,450
| 691,197
|
Segmentation fault: 11 when loading Haystack DensePassageRetriever
|
<p>I am trying to run this Question Answering model using this doc <a href="https://haystack.deepset.ai/tutorials/12_lfqa" rel="nofollow noreferrer">https://haystack.deepset.ai/tutorials/12_lfqa</a>, but I am running into a segmentation fault as shown below. I am using Python 3.9.17. My laptop has 64 GB RAM, and when I checked Activity Monitor while running the code below, memory used was around 16.8 GB. So I am not sure what is causing this issue.</p>
<pre><code>>>> retriever = DensePassageRetriever(
... document_store=document_store,
... query_embedding_model="vblagoje/dpr-question_encoder-single-lfqa-wiki",
... passage_embedding_model="vblagoje/dpr-ctx_encoder-single-lfqa-wiki",
... )
/opt/anaconda3/lib/python3.9/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
Segmentation fault: 11
/opt/anaconda3/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
bash-3.2$
</code></pre>
<p>Can anyone help me with this issue?</p>
|
<python><segmentation-fault><nlp-question-answering><haystack>
|
2023-04-23 06:35:24
| 0
| 1,009
|
user691197
|
76,083,428
| 11,482,269
|
ImportError: cannot import name 'json_normalize' from 'pandas.io.json'
|
<pre><code>python 3.9.2-3
pandas 2.0.0
pandas-io 0.0.1
</code></pre>
<pre><code>Error:
from pandas.io.json import json_normalize
ImportError: cannot import name 'json_normalize' from 'pandas.io.json' (/home/casaos/.local/lib/python3.9/site-packages/pandas/io/json/__init__.py)
</code></pre>
<p>Apparently this was a problem early on in the pre-1.x days of pandas, but it seems to have resurfaced.
Suggestions?</p>
<p>I'm running a script which was functional previously, but migrating it to a new host.
It errors out on the line:</p>
<pre><code>from pandas.io.json import json_normalize
</code></pre>
<p>and throws the error</p>
<pre><code>ImportError: cannot import name 'json_normalize' from 'pandas.io.json' (/home/casaos/.local/lib/python3.9/site-packages/pandas/io/json/__init__.py)
</code></pre>
<p>I've attempted to reinstall pandas (the 'install' option), remove and reinstall, and 'install --force-reinstall', all performed as root so that the system-wide Python 3 installation has it, as opposed to a single user's.</p>
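<p>For what it's worth, a sketch of what I believe is the actual fix rather than a reinstall: <code>json_normalize</code> has been exposed at the pandas top level since 1.0, and the old <code>pandas.io.json</code> location appears to have been removed in 2.0 (which matches the pandas 2.0.0 shown above), so updating the import should resolve the error:</p>

```python
import pandas as pd

# Import from the top level instead of pandas.io.json:
from pandas import json_normalize  # equivalent to pd.json_normalize

# Tiny illustrative payload (made up for this sketch).
data = [{"id": 1, "info": {"name": "a"}},
        {"id": 2, "info": {"name": "b"}}]
df = json_normalize(data)
print(sorted(df.columns))  # ['id', 'info.name']
```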
|
<python><json><pandas><importerror><json-normalize>
|
2023-04-23 06:27:35
| 2
| 351
|
Joe Greene
|
76,083,413
| 13,330,700
|
Running an application via docker container on http://localhost:8000/ returns Bad Request, but http://127.0.0.1:8000/ works
|
<p>Disclaimer: I am a docker noob and I'm trying to learn. I am also running this on Windows 10</p>
<p>Here's my <code>Dockerfile.yaml</code></p>
<pre><code>FROM python:3.11
# setup env variables
ENV PYTHONUNBUFFERED=1
ENV DockerHOME=/app/django-app
# Expose port
EXPOSE 8000
# create work dir
RUN mkdir -p $DockerHOME
# set work dir
WORKDIR $DockerHOME
# copy code to work dir
COPY . $DockerHOME
# install dependencies
RUN pip install -r requirements.txt
# move working dir to where manage.py is
WORKDIR $DockerHOME/flag_games
# set default command (I think)
ENTRYPOINT ["python"]
# run commands for app to run
CMD ["manage.py", "collectstatic", "--noinput"]
CMD ["manage.py", "runserver", "0.0.0.0:8000"]
</code></pre>
<p>Here are the commands I use (I run <code>make docker-build</code> and <code>make docker-run</code>)</p>
<pre><code>docker-build:
docker build --tag docker-django .
docker-run:
docker run -d -p 8000:8000 --name flag-game docker-django
</code></pre>
<p>My container runs fine</p>
<pre><code>(venv) PS C:\Users\Admin\Projects\flag-games> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
175c38b4ea9e docker-django "python manage.py ru…" 26 minutes ago Up 26 minutes 0.0.0.0:8000->8000/tcp flag-game
</code></pre>
<p>When I try to hit the website I get:</p>
<pre><code>(venv) PS C:\Users\Admin\Projects\flag-games> curl.exe -v localhost:8000
* Trying 127.0.0.1:8000...
* Connected to localhost (127.0.0.1) port 8000 (#0)
> GET / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/8.0.1
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Date: Sun, 23 Apr 2023 06:16:31 GMT
< Server: WSGIServer/0.2 CPython/3.11.3
< Content-Type: text/html
< X-Content-Type-Options: nosniff
< Referrer-Policy: same-origin
< Cross-Origin-Opener-Policy: same-origin
< Connection: close
<
<!doctype html>
<html lang="en">
<head>
<title>Bad Request (400)</title>
</head>
<body>
<h1>Bad Request (400)</h1><p></p>
</body>
</html>
* Closing connection 0
</code></pre>
<p>Even worse for <code>0.0.0.0:8000</code></p>
<pre><code>(venv) PS C:\Users\Admin\Projects\flag-games> curl.exe -v 0.0.0.0:8000
* Trying 0.0.0.0:8000...
* connect to 0.0.0.0 port 8000 failed: Address not available
* Failed to connect to 0.0.0.0 port 8000 after 0 ms: Couldn't connect to server
* Closing connection 0
curl: (7) Failed to connect to 0.0.0.0 port 8000 after 0 ms: Couldn't connect to server
</code></pre>
<p>but <code>http://127.0.0.1:8000/</code> works fine.</p>
<pre><code> curl.exe -v http://127.0.0.1:8000/
* Trying 127.0.0.1:8000...
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/8.0.1
> Accept: */*
>
< HTTP/1.1 200 OK
</code></pre>
<p>Am I misunderstanding how Docker works, and how all the ports/networking communicate with each other?</p>
<p>EDIT: Adding part of <code>settings.py</code></p>
<pre><code>ALLOWED_HOSTS = ['127.0.0.1', '0.0.0.0']
# Application definition
INSTALLED_APPS = [
'world_flags.apps.WorldFlagsConfig',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
"whitenoise.runserver_nostatic",
'django.contrib.staticfiles',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
"whitenoise.middleware.WhiteNoiseMiddleware",
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
ROOT_URLCONF = 'flag_games.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'flag_games.wsgi.application'
</code></pre>
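<p>One observation that may explain the symptom: Django answers with a bare 400 Bad Request when the <code>Host</code> header is not in <code>ALLOWED_HOSTS</code>, and <code>'localhost'</code> is missing from the list above, which would make <code>localhost:8000</code> fail while <code>127.0.0.1:8000</code> works. A stdlib-only sketch of that check (<code>host_allowed</code> is a made-up helper mimicking the behavior; the actual fix would be adding <code>'localhost'</code> to <code>ALLOWED_HOSTS</code>):</p>

```python
# Sketch: Django rejects any request whose Host header (minus the port)
# is not listed in ALLOWED_HOSTS, returning 400 Bad Request.
ALLOWED_HOSTS = ['127.0.0.1', '0.0.0.0', 'localhost']  # 'localhost' added

def host_allowed(host_header: str) -> bool:
    host = host_header.rsplit(':', 1)[0]  # strip the port, as Django does
    return host in ALLOWED_HOSTS

print(host_allowed('localhost:8000'))   # True once 'localhost' is listed
print(host_allowed('127.0.0.1:8000'))   # True
```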
|
<python><django><docker><containers><docker-desktop>
|
2023-04-23 06:20:43
| 1
| 499
|
mike_gundy123
|
76,083,347
| 2,454,357
|
hover not working correctly when using different kinds of subplots in plotly
|
<p>I am trying to generate a plotly-figure that contains one <a href="https://plotly.com/python-api-reference/generated/plotly.graph_objects.Scattermapbox.html" rel="nofollow noreferrer">Scattermapbox</a> and multiple other types of plots (in this case line plots generated with <a href="https://plotly.com/python-api-reference/generated/plotly.graph_objects.Scatter.html" rel="nofollow noreferrer">Scatter</a>). Everything looks good, but when I hover over the line plots, no hover info is displayed.</p>
<p>If I do not create the Scattermapbox plot, the hover over the line plots works correctly (in the minimal example below you can test this by un-commenting the <code>fig.show()</code> and <code>exit()</code> lines).</p>
<p>Am I doing something wrong and/or is there any way to fix this?</p>
<pre><code>import numpy as np
from plotly.subplots import make_subplots
import plotly.graph_objects as go
fig = make_subplots(
rows=2, cols=2,
specs=[
[{'rowspan':2, 'type' :'mapbox'},{'type' : 'scatter'}],
[None,{'type' : 'scatter'}],
],
print_grid=True,
)
x = np.linspace(0,2*np.pi,100)
y1 = np.sin(x)
y2 = np.cos(x)
fig.add_trace(
go.Scatter(
x=x,
y=y1,
),
row=1, col=2,
)
fig.add_trace(
go.Scatter(
x=x,
y=y2,
),
row=2, col=2,
)
# UNCOMMENT THESE TWO LINES TO TEST WITHOUT SCATTERMAPBOX
#fig.show()
#exit()
coordinates = {
'Berlin' : (52.5200, 13.4050),
'Munich' : (48.1351, 11.5820),
'Hamburg' : (53.5488, 9.9872),
'Frankfurt' : (50.1109, 8.6821),
'Dresden' : (51.0504, 13.7373),
}
z, lat, lon, names = zip(*[
(i,v[0],v[1],k) for i,(k,v) in enumerate(coordinates.items())])
fig.add_trace(
go.Scattermapbox(
lat = lat,
lon = lon,
mode = 'markers+text',
text = names,
below = '',
marker_size=12,
marker_color=z,
name = 'town',
),
col=1, row=1,
)
fig.update_layout(margin=dict(l=20, r=0, t=40, b=40))
fig.update_layout(
mapbox1=dict(
zoom=4, style='carto-positron',
center={k:v for k,v in zip(['lat','lon'],coordinates['Frankfurt'])},
),
)
fig.show()
</code></pre>
|
<python><plotly>
|
2023-04-23 05:56:50
| 1
| 9,890
|
Thomas Kühn
|
76,083,008
| 15,599,248
|
How to install 'tensorflow_examples' in colab
|
<p>I am running the <a href="https://www.tensorflow.org/tutorials/images/segmentation" rel="nofollow noreferrer">Image segmentation tutorial from TensorFlow Core</a> in Google Colab.
When running</p>
<pre class="lang-bash prettyprint-override"><code>!pip install git+https://github.com/tensorflow/examples.git
</code></pre>
<p>I get an error:</p>
<pre><code>Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting git+https://github.com/tensorflow/examples.git
Cloning https://github.com/tensorflow/examples.git to /tmp/pip-req-build-u7_jdkn8
Running command git clone --filter=blob:none --quiet https://github.com/tensorflow/examples.git /tmp/pip-req-build-u7_jdkn8
Resolved https://github.com/tensorflow/examples.git to commit e3849ccb6d981aad311f0174281c8575d5e21646
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Preparing metadata (setup.py) ... error
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>I want to use 'tensorflow_examples' like</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow_examples.models.pix2pix import pix2pix
</code></pre>
|
<python><tensorflow><google-colaboratory>
|
2023-04-23 03:36:29
| 1
| 351
|
cc01cc
|
76,082,807
| 4,281,353
|
Why numpy array is not of Sequence type?
|
<p>Help me understand why a numpy array is not a Sequence-type object.</p>
<pre><code>from typing import Sequence
import numpy as np
x = np.array([1,2,3])
isinstance(x, Sequence)
---
False
</code></pre>
<ul>
<li><a href="https://docs.python.org/3/glossary.html#term-sequence" rel="noreferrer">sequence</a></li>
</ul>
<blockquote>
<p>An iterable which supports efficient element access using integer indices via the <code>__getitem__()</code> special method and defines a <code>__len__()</code> method that returns the length of the sequence. Some built-in sequence types are list, str, tuple, and bytes. Note that dict also supports <code>__getitem__()</code> and <code>__len__()</code>, but is considered a mapping rather than a sequence because the lookups use arbitrary immutable keys rather than integers.</p>
<p>The <a href="https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence" rel="noreferrer">collections.abc.Sequence</a> abstract base class defines a much richer interface that goes beyond just <code>__getitem__()</code> and <code>__len__()</code>, adding count(), index(), <code>__contains__()</code>, and <code>__reversed__()</code>.</p>
</blockquote>
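<p>A small demonstration of the nominal-vs-structural distinction (numpy assumed installed): <code>isinstance</code> only reflects inheritance or explicit ABC registration, and <code>ndarray</code> does neither, even though the structural methods exist:</p>

```python
from collections.abc import Sequence

import numpy as np

x = np.array([1, 2, 3])

# isinstance checks *nominal* typing: ndarray neither inherits from
# Sequence nor is registered with it (it also lacks Sequence's index()
# and count() methods), so the check is False even though the
# structural pieces are present.
print(isinstance(x, Sequence))                              # False
print(hasattr(x, '__getitem__') and hasattr(x, '__len__'))  # True
```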
|
<python><numpy><python-typing>
|
2023-04-23 02:11:14
| 1
| 22,964
|
mon
|
76,082,804
| 6,222,331
|
how to convert 'dask_cudf' column to datetime?
|
<p>How can we convert a <strong>dask_cudf</strong> column of string or nanoseconds to a datetime object? <code>to_datetime</code> is available in pandas and cudf. See sample data below</p>
<pre><code>import pandas as pd
import cudf
# with pandas
df = pandas.DataFrame( {'city' : ['Dallas','Bogota','Chicago','Juarez'],
'timestamp' : [1664828099973725440,1664828099972763136,1664828094775313920,1664828081313273856]})
df['datetime'] = pd.to_datetime(df['timestamp'])
# with cudf
cdf = cudf.DataFrame( {'city' : ['Dallas','Bogota','Chicago','Juarez'],
'timestamp' : [1664828099973725440,1664828099972763136,1664828094775313920,1664828081313273856]})
cdf['datetime'] = cudf.to_datetime(cdf['timestamp'])
print(df)
print(cdf)
</code></pre>
<p>in either case, the result is the same:</p>
<pre><code> city timestamp datetime
0 Dallas 1664828099973725440 2022-10-03 20:14:59.973725440
1 Bogota 1664828099972763136 2022-10-03 20:14:59.972763136
2 Chicago 1664828094775313920 2022-10-03 20:14:54.775313920
3 Juarez 1664828081313273856 2022-10-03 20:14:41.313273856
</code></pre>
<p><a href="https://stackoverflow.com/questions/75889803/dask-cudf-dataframe-convert-column-of-datetime-string-to-column-of-datetime-obje">This recent SO question</a> suggests using dask:</p>
<pre><code>import dask_cudf
from dask import dataframe as dd
ddf = dask_cudf.from_cudf(cdf, npartitions=2)
dd.to_datetime(ddf['timestamp']).head()
</code></pre>
<p>produces an error. I am creating a dask_cudf from a large number of csv files in one directory.</p>
|
<python><dask><rapids><cudf>
|
2023-04-23 02:10:18
| 0
| 2,314
|
dleal
|
76,082,746
| 1,601,580
|
How to transform a quickdraw image to 84 by 84 in pytorch using the learn2learn library?
|
<p>I was trying to use learn2learn's QuickDraw but it seems like I'm getting errors when I attempt to apply transformations (resizing to 84x84, random crop) on it. The issue stems from when l2l's quickdraw library attempts to apply transformations onto a quickdraw image (that is apparently in the form of a np.memmap/.npy record that PIL can't understand) and so I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/pzy2/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/maml_patricks_l2l.py", line 2300, in <module>
loop_through_l2l_indexable_benchmark_with_model_test()
File "/home/pzy2/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/maml_patricks_l2l.py", line 2259, in loop_through_l2l_indexable_benchmark_with_model_test
for benchmark in [quickdraw_l2l_tasksets()]: #hdb8_l2l_tasksets(),hdb9_l2l_tasksets(), delaunay_l2l_tasksets()]:#[dtd_l2l_tasksets(), cu_birds_l2l_tasksets(), fc100_l2l_tasksets()]:
File "/home/pzy2/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/maml_patricks_l2l.py", line 2216, in quickdraw_l2l_tasksets
_transforms: tuple[TaskTransform, TaskTransform, TaskTransform] = get_task_transforms_quickdraw(_datasets,
File "/home/pzy2/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/maml_patricks_l2l.py", line 2184, in get_task_transforms_quickdraw
train_transforms: TaskTransform = DifferentTaskTransformIndexableForEachDataset(train_dataset,
File "/home/pzy2/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/common.py", line 130, in __init__
self.indexable_dataset = MetaDataset(indexable_dataset)
File "learn2learn/data/meta_dataset.pyx", line 59, in learn2learn.data.meta_dataset.MetaDataset.__init__
File "learn2learn/data/meta_dataset.pyx", line 96, in learn2learn.data.meta_dataset.MetaDataset.create_bookkeeping
File "learn2learn/data/meta_dataset.pyx", line 65, in learn2learn.data.meta_dataset.MetaDataset.__getitem__
File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/learn2learn/vision/datasets/quickdraw.py", line 511, in __getitem__
image = self.transform(image)
File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/torchvision/transforms/transforms.py", line 60, in __call__
img = t(img)
File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/torchvision/transforms/transforms.py", line 900, in forward
i, j, h, w = self.get_params(img, self.scale, self.ratio)
File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/torchvision/transforms/transforms.py", line 859, in get_params
width, height = F._get_image_size(img)
File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/torchvision/transforms/functional.py", line 67, in _get_image_size
return F_pil._get_image_size(img)
File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/torchvision/transforms/functional_pil.py", line 26, in _get_image_size
raise TypeError("Unexpected type {}".format(type(img)))
TypeError: Unexpected type <class 'numpy.memmap'>
</code></pre>
<p>original:</p>
<ul>
<li><a href="https://github.com/learnables/learn2learn/issues/392" rel="nofollow noreferrer">https://github.com/learnables/learn2learn/issues/392</a></li>
<li><a href="https://discuss.pytorch.org/t/how-to-transform-a-quickdraw-image-to-84-by-84-in-pytorch-using-the-learn2learn-library/178292" rel="nofollow noreferrer">https://discuss.pytorch.org/t/how-to-transform-a-quickdraw-image-to-84-by-84-in-pytorch-using-the-learn2learn-library/178292</a></li>
</ul>
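<p>A sketch of one possible workaround (assumptions on my part: the records are flat 28x28 uint8 bitmaps, and <code>memmap_to_pil</code> is a hypothetical helper you would prepend to the transform pipeline): convert the memmap to a PIL image before the torchvision transforms see it:</p>

```python
import numpy as np
from PIL import Image

def memmap_to_pil(img):
    # Materialize the memmap as a regular array, reshape a flat record,
    # and hand torchvision a PIL Image it knows how to measure and crop.
    arr = np.asarray(img, dtype=np.uint8)
    if arr.ndim == 1:                 # flat 28*28 QuickDraw bitmap (assumed)
        arr = arr.reshape(28, 28)
    return Image.fromarray(arr)

fake_record = np.zeros(28 * 28, dtype=np.uint8)  # stand-in for one record
pil = memmap_to_pil(fake_record)
print(pil.size)  # (28, 28)
```

<p>With that in place, something like <code>transforms.Compose([memmap_to_pil, transforms.Resize(84), transforms.RandomCrop(84)])</code> should at least get past the <code>Unexpected type</code> error, though I have not verified it against learn2learn itself.</p>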
|
<python><machine-learning><deep-learning><pytorch><learn2learn>
|
2023-04-23 01:45:24
| 4
| 6,126
|
Charlie Parker
|
76,082,734
| 4,281,353
|
How to check if an object can slice
|
<p>Is this the right way to check if a Python object can be sliced or are there better/correct ways?</p>
<pre><code>import typing
def can_slice(object) -> bool:
return "__getitem__" in dir(object)
</code></pre>
<p>I am looking for a way not using exception to test if an object can be sliced.</p>
<h2>Update</h2>
<p>The above <code>can_slice</code> is incorrect because a dictionary has <code>__getitem__</code> but cannot be sliced.</p>
<p>I cannot use <code>typing.Sequence</code> because a numpy array is not an instance of <code>Sequence</code>.</p>
<pre><code>from typing import Sequence
isinstance(np.array([1,2,3]), Sequence)
---
False
</code></pre>
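<p>A heuristic sketch without exceptions (<code>can_slice</code> below is my guess at a reasonable duck-typing rule, not a guarantee; only actually performing <code>obj[a:b]</code> can truly prove it):</p>

```python
from collections.abc import Mapping

def can_slice(obj) -> bool:
    # Look up __getitem__ on the type (special methods are resolved on
    # the type, not the instance) and exclude mappings, since dict has
    # __getitem__ but rejects slice keys.
    return hasattr(type(obj), '__getitem__') and not isinstance(obj, Mapping)

print(can_slice([1, 2, 3]))  # True
print(can_slice({'a': 1}))   # False
print(can_slice('abc'))      # True
```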
|
<python><slice>
|
2023-04-23 01:42:28
| 2
| 22,964
|
mon
|
76,082,697
| 6,676,101
|
How do we remove duplicates from an iterator and put it in sorted order?
|
<p>How do we remove duplicates from an iterator and put it in sorted order?</p>
<p>Sample input:</p>
<pre><code>it = iter(["black", "aqua", "aqua", "black", "black", "blue", "aqua", "blue", "fuchsia", "gray", "gray", "green", "green", "aqua", "lime", "lime"])
</code></pre>
<p>Expected output:</p>
<pre><code>it = iter(["aqua", "black", "blue", "fuchsia", "gray", "green", "lime",])
</code></pre>
<p>Similar questions have been asked before:</p>
<ul>
<li><p>someone previously asked <a href="https://stackoverflow.com/questions/32012878/iterator-object-for-removing-duplicates-in-python">how to remove duplicates from an iterator</a> about implementing the iterator protocol and their own class with <code>__iter__</code> and <code>__next__</code> methods defined.</p>
</li>
<li><p>someone previously asked <a href="https://stackoverflow.com/questions/43623304/python-iterate-through-list-and-remove-duplicates-without-using-set">how to remove duplicates from an iterator</a> restricted to <em>not</em> using Python's built-in <code>set</code> class.</p>
</li>
</ul>
<p>There are almost no restrictions on this question other than it be written in Python.</p>
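<p>A minimal sketch of one straightforward way (it consumes the input iterator eagerly, which is unavoidable when sorting):</p>

```python
def unique_sorted(it):
    # set() consumes the iterator and drops duplicates, sorted() orders
    # the survivors, and iter() hands back an iterator as requested.
    return iter(sorted(set(it)))

it = iter(["black", "aqua", "aqua", "black", "black", "blue", "aqua",
           "blue", "fuchsia", "gray", "gray", "green", "green", "aqua",
           "lime", "lime"])
result = list(unique_sorted(it))
print(result)  # ['aqua', 'black', 'blue', 'fuchsia', 'gray', 'green', 'lime']
```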
|
<python>
|
2023-04-23 01:27:03
| 1
| 4,700
|
Toothpick Anemone
|
76,082,693
| 1,330,719
|
VS Code: Connecting to a python interpreter in docker container without using remote containers
|
<p>I know it is generally possible to connect to a container's python interpreter with:</p>
<ul>
<li>remote containers</li>
<li>remote ssh</li>
</ul>
<p>The problems I have with these solutions:</p>
<ul>
<li>it opens a new window where I need to install/specify all extensions again</li>
<li>it opens a new window per container. I am working in a monorepo where each services's folder is mounted in a different container (connected via docker compose)</li>
</ul>
<p>Is there a solution that allows me to specify a remote container to connect to simply for the python interpreter (and not for an entirely new workspace)?</p>
|
<python><docker><visual-studio-code>
|
2023-04-23 01:25:21
| 1
| 1,269
|
rbhalla
|
76,082,604
| 3,116,936
|
Python regular expression to capture phrases?
|
<p>I want to figure out a regex pattern capable of capturing the operation and the two registers or addresses it acts on for a given snippet of assembly code. This is what I have so far:</p>
<pre><code>import re
assembly_code = """
lea r8, [rcx + 8*rax]
movsd xmm0, qword ptr [rcx + 8*rax] ## xmm0 = mem[0],zero
mov rcx, r9
xor edi, edi
.p2align 4, 0x90
LBB0_12: ## Parent Loop BB0_2 Depth=1
## Parent Loop BB0_3 Depth=2
## Parent Loop BB0_4 Depth=3
## Parent Loop BB0_10 Depth=4
## Parent Loop BB0_11 Depth=5
## => This Inner Loop Header: Depth=6
movsd xmm1, qword ptr [r13 + 8*rdi] ## xmm1 = mem[0],zero
mulsd xmm1, qword ptr [rcx]
addsd xmm0, xmm1
movsd qword ptr [r8], xmm0
add rcx, 2048
lea r12, [rsi + rdi]
add r12, 1
add rdi, 1
cmp r12, r14
jl LBB0_12
## %bb.13: ## in Loop: Header=BB0_11 Depth=5
add rax, 1
add r9, 8
cmp rax, rbx
jl LBB0_11
"""
pattern = r"\b(mov|movaps|movups|movaps|movss|movsd|movlps|movhps|movlpd|movhpd|movd|movq)\b\s+(\S+)\s*,\s*(\S+(\s*\[.*?\])?)"
matches = re.findall(pattern, assembly_code)
for match in matches:
print("Instruction: ", match[0])
print("Operand 1: ", match[1])
print("Operand 2: ", match[2])
print("---")
</code></pre>
<p>But the output looks as follows:</p>
<pre><code>Instruction: movsd
Operand 1: xmm0
Operand 2: qword
---
Instruction: mov
Operand 1: rcx
Operand 2: r9
---
Instruction: movsd
Operand 1: xmm1
Operand 2: qword
---
</code></pre>
<p>I am targeting patterns like <code>qword ptr [r13 + 8*rdi]</code> in their complete form. How can I modify the pattern to make it capture the full string properly?</p>
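<p>From experimenting, the problem seems to be that <code>(\S+)</code> stops at the first space inside <code>qword ptr [...]</code>. A sketch that allows a memory operand on either side (tuned only for the shapes in this snippet; other operand forms may need more work):</p>

```python
import re

# "<size> ptr [<expr>]" as one alternative, a plain register/immediate
# as the other.  No capture groups inside `mem`, so findall still
# returns (instruction, operand1, operand2) triples.
mem = r"\w+\s+ptr\s*\[[^\]]+\]"
pattern = (rf"\b(mov|movaps|movups|movss|movsd|movlps|movhps|movlpd|movhpd"
           rf"|movd|movq)\b\s+((?:{mem})|[^,\s]+)\s*,\s*((?:{mem})|\S+)")

sample = ("movsd xmm1, qword ptr [r13 + 8*rdi]\n"
          "movsd qword ptr [r8], xmm0\n"
          "mov rcx, r9\n")
matches = re.findall(pattern, sample)
for instr, op1, op2 in matches:
    print(instr, "|", op1, "|", op2)
```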
|
<python><regex>
|
2023-04-23 00:50:25
| 1
| 506
|
user3116936
|
76,082,551
| 11,740,629
|
Scrape all Word Document files from a website using Python or R
|
<p>I'm trying to find a way to download all the docx files from the following URL, using Python or R:</p>
<p><a href="https://www.microsoft.com/en-us/Investor/annual-reports.aspx" rel="nofollow noreferrer">https://www.microsoft.com/en-us/Investor/annual-reports.aspx</a></p>
<p>I've looked into similar questions (<a href="https://stackoverflow.com/questions/74102567/downloading-pdfs-from-a-website-using-python">here</a>, <a href="https://stackoverflow.com/questions/54616638/download-all-pdf-files-from-a-website-using-python">here</a>, and <a href="https://stackoverflow.com/questions/69733921/r-code-for-downloading-all-the-pdfs-given-on-a-site-web-scraping">here</a>; I'm sorry, I noticed the files on the website are not PDFs but docx files), but none of the code worked for me. Essentially, I'd like to download all the annual reports at the same time.</p>
<p>Thanks in advance</p>
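<p>A sketch of the usual Python approach, shown on a made-up static snippet (assumption worth flagging: the Microsoft investor page appears to build its links with JavaScript, so a plain <code>requests.get</code> may not see them and Selenium, or the page's underlying data endpoint, could be required):</p>

```python
from urllib.parse import urljoin

from bs4 import BeautifulSoup

# Static stand-in for the fetched page HTML (the real hrefs will differ).
html = """<a href="/files/ar2022.docx">FY22</a>
<a href="/files/ar2021.docx">FY21</a>
<a href="/en-us/about.aspx">About</a>"""

soup = BeautifulSoup(html, "html.parser")
base = "https://www.microsoft.com"
links = [urljoin(base, a["href"]) for a in soup.find_all("a", href=True)
         if a["href"].lower().endswith(".docx")]
print(links)
# Each collected URL could then be saved with requests.get(url).content.
```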
|
<python><r><web-scraping>
|
2023-04-23 00:28:04
| 1
| 583
|
lovestacksflow
|
76,082,526
| 7,776,212
|
CrossEntropyLoss using weights gives RuntimeError: expected scalar type Float but found Long neural network
|
<p>I am using a Feedforward neural network for a classification task with 4 classes. The classes are imbalanced and hence, I want to use a weight with the CrossEntropyLoss as mentioned <a href="https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss" rel="nofollow noreferrer">here</a>.</p>
<p>Here is my neural network:</p>
<pre class="lang-py prettyprint-override"><code>class FeedforwardNeuralNetModel(nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim):
super(FeedforwardNeuralNetModel, self).__init__()
# Linear function
self.fc1 = nn.Linear(input_dim, hidden_dim)
# Non-linearity
self.relu = nn.ReLU()
# Linear function (readout)
self.fc2 = nn.Linear(hidden_dim, output_dim)
def forward(self, x):
# Linear function
out = self.fc1(x)
# Non-linearity
out = self.relu(out)
# Linear function (readout)
out = self.fc2(out)
return out
</code></pre>
<p>And here is how I am using it:</p>
<pre class="lang-py prettyprint-override"><code>learning_rate = 0.1
batch_size = 64
def train_loop(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
for batch, (X, y) in enumerate(dataloader):
# Compute prediction and loss
pred = model(X)
print(pred, y.long())
loss = loss_fn(pred, y.long())
print(loss)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), (batch + 1) * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
ffn_model = FeedforwardNeuralNetModel(input_dim=32, hidden_dim=128, output_dim=4)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(ffn_model.parameters(), lr=0.1)
num_epochs = 10
for t in range(num_epochs):
print(f"Epoch {t+1}\n-------------------------------")
train_loop(train_dataloader, ffn_model, loss_fn, optimizer)
print("Done!")
</code></pre>
<p>The above works fine. However, when I try to use weights in the CrossEntropyLoss as follows:</p>
<pre class="lang-py prettyprint-override"><code>loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([1, 1000, 1000, 1000]))
</code></pre>
<p>It gives the following error:</p>
<pre class="lang-bash prettyprint-override"><code>RuntimeError Traceback (most recent call last)
Cell In[91], line 13
11 for t in range(num_epochs):
12 print(f"Epoch {t+1}\n-------------------------------")
---> 13 train_loop(train_dataloader, ffn_model, loss_fn, optimizer)
14 print("Done!")
Cell In[90], line 10, in train_loop(dataloader, model, loss_fn, optimizer)
8 pred = model(X)
9 print(pred, y.long())
---> 10 loss = loss_fn(pred, y.long())
11 print(loss)
13 # Backpropagation
File ~/miniconda3/envs/gnn/lib/python3.9/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/miniconda3/envs/gnn/lib/python3.9/site-packages/torch/nn/modules/loss.py:1174, in CrossEntropyLoss.forward(self, input, target)
1173 def forward(self, input: Tensor, target: Tensor) -> Tensor:
-> 1174 return F.cross_entropy(input, target, weight=self.weight,
1175 ignore_index=self.ignore_index, reduction=self.reduction,
1176 label_smoothing=self.label_smoothing)
File ~/miniconda3/envs/gnn/lib/python3.9/site-packages/torch/nn/functional.py:3029, in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
3027 if size_average is not None or reduce is not None:
3028 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3029 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: expected scalar type Float but found Long
</code></pre>
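<p>A sketch of the likely fix (my reading of the error: the integer literals make the weight tensor int64/Long, while <code>cross_entropy</code> wants a floating dtype for the weights):</p>

```python
import torch
import torch.nn as nn

# Float literals (or .float()) give the weight tensor the float32 dtype
# that cross_entropy expects; [1, 1000, 1000, 1000] would be int64.
weights = torch.tensor([1.0, 1000.0, 1000.0, 1000.0])
loss_fn = nn.CrossEntropyLoss(weight=weights)

pred = torch.randn(8, 4)            # (batch, n_classes) logits
target = torch.randint(0, 4, (8,))  # class indices, int64
loss = loss_fn(pred, target)
print(weights.dtype, loss.dtype)  # torch.float32 torch.float32
```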
|
<python><machine-learning><pytorch><neural-network><cross-entropy>
|
2023-04-23 00:14:54
| 1
| 779
|
diviquery
|
76,082,452
| 7,448,860
|
How to get the current stock price using fidelity's screener?
|
<p>I am trying to get the current stock price using fidelity's screener. For example, the current price of AAPL is <code>$165.02</code> on <a href="https://digital.fidelity.com/prgw/digital/research/quote/dashboard/summary?symbol=AAPL" rel="nofollow noreferrer">https://digital.fidelity.com/prgw/digital/research/quote/dashboard/summary?symbol=AAPL</a>.</p>
<p>When I inspect the webpage, the price is shown here: <code><div _ngcontent-cxa-c16="" class="nre-quick-quote-price">$165.02</div></code>.</p>
<p>I used this code:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
def stock_price(symbol: str = "AAPL") -> str:
url = f"https://digital.fidelity.com/prgw/digital/research/quote/dashboard/summary?symbol={symbol}"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")
price_tag = soup.find('div', class_='nre-quick-quote-price')
current_price = price_tag['value']
return current_price
</code></pre>
<p>But got this error:</p>
<pre><code>Traceback (most recent call last):
File "get_price.py", line 160, in <module>
print(f"Current {symbol:<4} stock price is {stock_price(symbol):>8}")
File "get_price.py", line 63, in stock_price
current_price = price_tag['value']
TypeError: 'NoneType' object is not subscriptable
</code></pre>
<p>I also used this code:</p>
<pre><code>from selenium import webdriver
def stock_price(symbol: str = "AAPL") -> str:
driver = webdriver.Chrome()
url = "https://digital.fidelity.com/prgw/digital/research/quote/dashboard/summary?symbol=" + symbol
driver.get(url)
current_price = driver.find_element('div.nre-quick-quote-price').text
return current_price
</code></pre>
<p>But got this error:</p>
<pre><code>Traceback (most recent call last):
File "get_price.py", line 103, in <module>
price_tag = driver.find_element('div.nre-quick-quote-price')
File "C:\Users\X\miniconda3\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 831, in find_element
return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"]
File "C:\Users\X\miniconda3\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 440, in execute
self.error_handler.check_response(response)
File "C:\Users\X\miniconda3\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 245, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.InvalidArgumentException: Message: invalid argument: invalid locator
</code></pre>
<p>Please help!</p>
|
<python><selenium-webdriver><web-scraping><beautifulsoup><python-requests>
|
2023-04-22 23:42:44
| 1
| 1,250
|
Beginner
|
76,082,425
| 6,339,279
|
Is it possible to create an attribute of an instance of a Python class outside of the __init__ method?
|
<p>All of the Python 3.x documentation that I can find implies that an instance attribute can be created <em>only</em> in the <code>__init__</code> method of a class. I have not been able to find documentation (or examples) suggesting that an instance attribute can be created in other instance methods of a class. I must be missing something, because the code example below works. I'd expect it to throw an <code>AttributeError</code>.</p>
<ol>
<li><p>Why does this work?</p>
</li>
<li><p>Is it documented somewhere?</p>
</li>
<li><p>To me it seems like a bad feature because if, in an instance method, one were to mistype the name of an attribute declared in <code>__init__</code>, then one accidentally would be creating a new instance attribute. Instead, I would expect an error to the be thrown to tell the programmer that they've mistyped.</p>
</li>
</ol>
<pre><code>class Something:
def __init__(self):
self.some_value = 1
def store_values(self, new_value):
self.some_value = new_value
self.other_value = new_value + 2 # Shouldn't this be an `AttributeError` because there is no such attribute on `self`?
something = Something()
something.store_values(8)
print(something.some_value, something.other_value) # prints 8 10
</code></pre>
<hr />
<p><strong>EDIT AFTER ANSWER ACCEPTED</strong></p>
<p>@Carcigenicate very helpfully points to <code>__slots__</code> as contemplating otherwise unlimited flexibility in creating instance attributes.</p>
<p>@jonrsharpe points to an excellent <a href="https://docs.python-guide.org/writing/style/#we-are-all-responsible-users" rel="nofollow noreferrer">blog post</a> about code style in Python.</p>
<p>@user2357112 points to an explicit reference in <a href="https://docs.python.org/3/tutorial/classes.html#instance-objects" rel="nofollow noreferrer">Section 9.3.3 of the Python tutorial</a>. There, an example is provided in which an instance attribute is created, briefly used, and then deleted. That example demonstrates the ability to arbitrarily add and delete instance attributes. It seems to suggest that an instance's attributes dictionary should be used like a scratchpad for holding state.</p>
<p>Still, I do not yet see any examples in Python documentation or tutorials that demonstrate a use case for creating instance attributes outside of <code>__init__</code> where the use case could not be achieved in a much cleaner, clearer way without the arbitrary attribute creation. I'm hopeful that someone reading this will point out a good use case or two.</p>
<p>The Python Wiki's entry on <a href="https://wiki.python.org/moin/UsingSlots" rel="nofollow noreferrer">UsingSlots</a> discusses some aspects of the under-the-hood workings of <code>__dict__</code>. It observes, "The <code>__dict__</code> attribute is a dictionary whose keys are the variable names and whose values are the variable values. This allows for dynamic variable creation but can also lead to uncaught errors. For example, with the default <code>__dict__</code>, a misspelled variable name results in the creation of a new variable, but with <code>__slots__</code> it raises in an <code>AttributeError</code>."</p>
<p>It is worth noting that the code example given, above, only works if the <code>store_values</code> method is called before trying to access the value of the extra attribute. So, for example, if the next-to-last line is removed, the last line will raise an <code>AttributeError</code>. For an inexperienced Python user (me), debugging that scenario was very confusing. Sometimes the method creating the extra attribute was being called, so the extra attribute was present, and subsequent code that relied upon that attribute worked just fine. Other times the method was not called, so the extra attribute was not created, and the subsequent code crashed.</p>
<p>Best practice: confine instance attribute declarations to <code>__init__</code>.</p>
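<p>As a sketch of the <code>__slots__</code> behavior discussed above (class and attribute names illustrative):</p>

```python
class Slotted:
    __slots__ = ("some_value",)  # only this attribute may exist on instances

    def __init__(self):
        self.some_value = 1


obj = Slotted()
obj.some_value = 8          # fine: declared in __slots__

caught = False
try:
    obj.some_valeu = 8      # deliberate misspelling of "some_value"
except AttributeError:
    caught = True           # raised instead of silently creating a new attribute
print(caught)               # → True
```

<p>With the default <code>__dict__</code>, the misspelled assignment above would have quietly created a second attribute; with <code>__slots__</code> it raises immediately.</p>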
|
<python>
|
2023-04-22 23:31:24
| 1
| 651
|
Matthew Rips
|
76,082,398
| 6,439,229
|
How to make a QIcon switch appearance without resetting it on the widget?
|
<p>Now that dark mode has finally come to Windows with Qt 6.5, I noticed that a lot of my icons don't look too well on a dark background. So I'd like to use different icons for light and dark mode. And, to make things difficult, have the icons change (their appearance) when the user switches mode on his OS.</p>
<p>In order to avoid having to <code>setIcon()</code> on all kinds of widgets all over the place, I thought I'd subclass <code>QIcon</code> and have it change its pixmap on the <code>colorSchemeChanged</code> signal.</p>
<pre><code>class ThemeAwareIcon(QIcon):
def __init__(self, dark_pixmap, light_pixmap, *args):
super().__init__(*args)
self.dark_pm = dark_pixmap
self.light_pm = light_pixmap
self.app = QApplication.instance()
self.app.styleHints().colorSchemeChanged.connect(self.set_appropriate_pixmap)
self.set_appropriate_pixmap()
def set_appropriate_pixmap(self):
current_scheme = self.app.styleHints().colorScheme()
pm = self.dark_pm if current_scheme == Qt.ColorScheme.Dark else self.light_pm
self.addPixmap(pm, QIcon.Mode.Normal, QIcon.State.On)
</code></pre>
<p>This works almost as intended; pixmaps are changed upon the signal. It's just that the changed pixmap isn't displayed on the widget that the icon is set on. The only way I found to make the change visible is to reset the icon on the widget, and that is what I was trying to avoid in the first place.</p>
<p>So, can my icon class be rescued somehow or is what I want just not possible this way?</p>
|
<python><pyqt><qicon>
|
2023-04-22 23:22:25
| 1
| 1,016
|
mahkitah
|
76,082,137
| 1,934,510
|
Routes from new blueprint not found and getting 404
|
<p>I'm building a new blueprint for my Flask app, and its routes are returning 404 errors.</p>
<p>This is the new Blueprint (shopify) views.py file:</p>
<pre><code>...
shopify = Blueprint('shopify', __name__,
template_folder='templates', url_prefix='/shopify')
# @admin.before_request
@login_required
@role_required('admin')
def before_request():
""" Protect all of the admin endpoints. """
pass
@shopify.route('/app_launched', methods=['GET'])
# @helpers.verify_web_call
def app_launched():
print("-----------------INICIO----------------")
shop = request.args.get('shop')
global ACCESS_TOKEN, NONCE
if ACCESS_TOKEN:
return render_template('welcome.html', shop=shop)
# The NONCE is a single-use random value we send to Shopify so we know the next call from Shopify is valid (see #app_installed)
# https://en.wikipedia.org/wiki/Cryptographic_nonce
NONCE = uuid.uuid4().hex
redirect_url = helpers.generate_install_redirect_url(
shop=shop, scopes=SCOPES, nonce=NONCE, access_mode=ACCESS_MODE)
return redirect(redirect_url, code=302)
...
</code></pre>
<p>This is my app.py:</p>
<pre><code>...
def create_app(settings_override=None):
"""
Create a Flask application using the app factory pattern.
:param settings_override: Override settings
:return: Flask app
"""
app = Flask(__name__, instance_relative_config=True)
dev_env = os.getenv('DEV_ENV')
print("DEV ENV------"+str(dev_env))
if dev_env == 'DEV':
app.config.from_object('config.settings')
app.config.from_pyfile('settings.py', silent=True)
elif dev_env == 'PROD':
app.config.from_object('config.settings_prod')
app.config.from_pyfile('settings_prod.py', silent=True)
else:
app.config.from_object('config.settings_prod')
app.config.from_pyfile('settings_prod.py', silent=True)
stripe.api_key = app.config.get('STRIPE_SECRET_KEY')
stripe.api_version = app.config.get('STRIPE_API_VERSION')
middleware(app)
error_templates(app)
exception_handler(app)
app.register_blueprint(admin)
app.register_blueprint(page)
app.register_blueprint(contact)
app.register_blueprint(user)
app.register_blueprint(billing)
app.register_blueprint(stripe_webhook)
app.register_blueprint(shopify)
# app.register_blueprint(copy_writer)
# app.register_blueprint(bet)
template_processors(app)
extensions(app)
authentication(app, User)
return app
...
</code></pre>
<p>This is the app folder structure:</p>
<pre><code>/app
/blueprints
/shopify
/templates
welcome.html
__init__.py
views.py
app.py
</code></pre>
<p>Every time a route inside the shopify blueprint is requested I get a 404 error, while routes in the other blueprints work fine.</p>
<pre><code> [22/Apr/2023:21:45:43 +0000] "GET /app_launchedHTTP/1.1" 404 11057 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36" in 264289µs
</code></pre>
<p>How can I fix it?</p>
|
<python><flask>
|
2023-04-22 22:06:43
| 1
| 8,851
|
Filipe Ferminiano
|
76,082,017
| 18,949,720
|
Loading train/val/test datasets with images in separate folders using Pytorch
|
<p>For my first PyTorch project, I have to perform image classification using a dataset containing jpg images of clouds. I am struggling with data importation, because the train/validation/test sets are not separated and the images are located in different folders according to their class. So, the folder structure looks like this:</p>
<pre><code>-dataset_folder
-Class_1
img1
img2
...
-Class_2
img1
img2
...
-Class_3
img1
img2
...
-Class_4
img1
img2
...
</code></pre>
<p>I saw that the ImageFolder() class could handle this kind of folder structure, but I have no idea how to combine this with separating the dataset into 3 parts.</p>
<p>Can someone please show me a way to do this?</p>
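<p>A common approach is <code>torchvision.datasets.ImageFolder</code> combined with <code>torch.utils.data.random_split</code>. The split-size arithmetic can be sketched without the libraries installed (the fractions below are illustrative):</p>

```python
def split_lengths(n_total, fracs=(0.7, 0.15, 0.15)):
    """Turn fractions into integer lengths that sum exactly to n_total,
    as required by torch.utils.data.random_split."""
    lengths = [int(n_total * f) for f in fracs]
    lengths[0] += n_total - sum(lengths)  # give the rounding remainder to train
    return lengths


train_n, val_n, test_n = split_lengths(1003)
print(train_n, val_n, test_n)  # → 703 150 150

# With torch/torchvision installed, this would be used roughly as:
#   dataset = torchvision.datasets.ImageFolder("dataset_folder", transform=...)
#   train_ds, val_ds, test_ds = torch.utils.data.random_split(
#       dataset, split_lengths(len(dataset)))
```

<p>Each resulting subset can then be wrapped in its own <code>DataLoader</code>; the class labels are inferred from the per-class folder names automatically.</p>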
|
<python><import><pytorch><classification>
|
2023-04-22 21:31:24
| 3
| 358
|
Droidux
|
76,081,685
| 764,195
|
Post list field In urllib3
|
<p>I'm creating a simple AWS Lambda function to post some data to an API, but the API expects a list and I can't get it to work. I've got the following code that I'm testing with:</p>
<pre class="lang-py prettyprint-override"><code>import urllib3
import json
http = urllib3.PoolManager()
file_path = 'file.png'
with open(file_path, 'rb') as fp:
file_data = fp.read()
payload = {
'from': ['a', 'b', 'c'],
'file': ('file.png', file_data),
}
r = http.request(
'POST',
'http://httpbin.org/post',
fields=payload
)
json.loads(r.data.decode('utf-8'))
</code></pre>
<p>But this returns an error about expecting the field to be a certain type and basically not being allowed to be a list which is fine, but how can I make this work.</p>
<p>I'm connecting to a Rails-based server, and in order for a form-encoded submission to be a list, I need to send multiple params whose name ends in <code>[]</code>. But of course a dict can't hold duplicate keys — all but the last are discarded — so I'm not really sure how to handle this.</p>
<p>I'm pretty new to python so sorry if this is a stupid question</p>
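<p>urllib3 also accepts <code>fields</code> as a sequence of <code>(name, value)</code> tuples, which allows the repeated Rails-style <code>from[]</code> keys that a dict can't express. A sketch verifying this locally with <code>encode_multipart_formdata</code> (field names illustrative; the same list can be passed as <code>fields=</code> to <code>http.request</code>):</p>

```python
import urllib3

# A list of tuples instead of a dict: the same name may repeat.
fields = [
    ("from[]", "a"),
    ("from[]", "b"),
    ("from[]", "c"),
    # ("file", ("file.png", file_data)),  # file tuples work the same way
]

# Encode locally to inspect the multipart body urllib3 would send.
body, content_type = urllib3.encode_multipart_formdata(fields)
print(body.count(b'name="from[]"'))  # → 3
```

<p>So instead of building a dict, keep <code>payload</code> as a list of tuples and pass it straight to <code>http.request('POST', url, fields=payload)</code>; Rails should then parse <code>from[]</code> back into an array.</p>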
|
<python><urllib3>
|
2023-04-22 20:00:07
| 1
| 4,448
|
PaReeOhNos
|
76,081,557
| 4,404,805
|
Getting duplicates after list comprehension
|
<p>I have two lists of dictionaries such as:</p>
<pre><code>l1 = [{"ticket_type": "A1", "ticket_count": 30, "ticket_price": 1000.0},
{"ticket_type": "A2", "ticket_count": 70, "ticket_price": 500.0},
{"ticket_type": "A3", "ticket_count": 5, "ticket_price": 1200.0}]
l2 = [{"ticket_type": "A2", "ticket_booked": 2},
{"ticket_type": "A1", "ticket_booked": 2},
{"ticket_type": "A3", "ticket_booked": 2}]
</code></pre>
<p>If the key <code>ticket_type</code> in both l1 and l2 match, then subtract corresponding <code>ticket_booked</code> in l2 from <code>ticket_count</code> in l1. The desired output is:</p>
<pre><code>[{'ticket_type': 'A1', 'ticket_count': 28, 'ticket_price': 1000.0},
{'ticket_type': 'A2', 'ticket_count': 68, 'ticket_price': 500.0},
{'ticket_type': 'A3', 'ticket_count': 3, 'ticket_price': 1200.0}]
</code></pre>
<p>This is what I have tried:</p>
<pre><code>for x in l1:
for y in l2:
if x["ticket_type"] == y["ticket_type"]:
x["ticket_count"] -= y["ticket_booked"]
</code></pre>
<p>I am trying to optimize this method using list comprehension such as:</p>
<pre><code>[{**x, **{"ticket_count": x["ticket_count"] - y["ticket_booked"]}} if x["ticket_type"] == y["ticket_type"] else x for x in l1 for y in l2]
</code></pre>
<p>But I am getting duplicate entries in the final result:</p>
<pre><code>[{'ticket_type': 'A1', 'ticket_count': 30, 'ticket_price': 1000.0},
{'ticket_type': 'A1', 'ticket_count': 28, 'ticket_price': 1000.0},
{'ticket_type': 'A1', 'ticket_count': 30, 'ticket_price': 1000.0},
{'ticket_type': 'A2', 'ticket_count': 68, 'ticket_price': 500.0},
{'ticket_type': 'A2', 'ticket_count': 70, 'ticket_price': 500.0},
{'ticket_type': 'A2', 'ticket_count': 70, 'ticket_price': 500.0},
{'ticket_type': 'A3', 'ticket_count': 5, 'ticket_price': 1200.0},
{'ticket_type': 'A3', 'ticket_count': 5, 'ticket_price': 1200.0},
{'ticket_type': 'A3', 'ticket_count': 3, 'ticket_price': 1200.0}]
</code></pre>
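<p>One way to avoid both the nested loop and the duplicates is to index <code>l2</code> by <code>ticket_type</code> first, then emit exactly one row per entry of <code>l1</code> — a sketch using the lists above:</p>

```python
l1 = [{"ticket_type": "A1", "ticket_count": 30, "ticket_price": 1000.0},
      {"ticket_type": "A2", "ticket_count": 70, "ticket_price": 500.0},
      {"ticket_type": "A3", "ticket_count": 5, "ticket_price": 1200.0}]
l2 = [{"ticket_type": "A2", "ticket_booked": 2},
      {"ticket_type": "A1", "ticket_booked": 2},
      {"ticket_type": "A3", "ticket_booked": 2}]

# Index l2 once, then adjust each l1 row in a single pass (O(n+m), no duplicates).
booked = {d["ticket_type"]: d["ticket_booked"] for d in l2}
result = [
    {**x, "ticket_count": x["ticket_count"] - booked.get(x["ticket_type"], 0)}
    for x in l1
]
print(result[0])  # → {'ticket_type': 'A1', 'ticket_count': 28, 'ticket_price': 1000.0}
```

<p>The double comprehension duplicated rows because it produced one output element per <code>(x, y)</code> pair; the dict lookup keeps the comprehension over <code>l1</code> only.</p>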
|
<python>
|
2023-04-22 19:31:55
| 2
| 1,207
|
Animeartist
|
76,081,104
| 5,034,651
|
Proper way to pass datetime from pyspark to pandas
|
<p>I'm trying to convert a spark dataframe to pandas but it is erroring out on new versions of pandas and warns the user on old versions of pandas.</p>
<p>On python==3.9, pyspark==3.4.0, and pandas==1.5.3, the warning looks as follows:</p>
<pre><code>/Users/[me]/miniconda3/envs/py39/lib/python3.9/site-packages/pyspark/sql/pandas/conversion.py:251: FutureWarning: Passing unit-less datetime64 dtype to .astype is deprecated and will raise in a future version. Pass 'datetime64[ns]' instead
series = series.astype(t, copy=False)
</code></pre>
<p>I don't have a copy of the exact error on pandas==2.0.0, but it is basically the same thing: it wants you to pass datetime64[ns].</p>
<p>Here is the <a href="https://github.com/apache/spark/blob/c4f7f137ed2262a782414e217520b29bca77f960/python/pyspark/sql/pandas/conversion.py#L251" rel="nofollow noreferrer">line in pyspark</a> that is throwing this error.</p>
<p>Here is an example of what I'm trying to do that is causing this error:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql.types import StructType,StructField, StringType, IntegerType, TimestampType, FloatType
from pyspark.sql import SparkSession
spark = (
SparkSession
.builder
.getOrCreate()
)
testAggData = [
{
'postId': '1234567',
'title': 'Test1',
'createdTSUTC': datetime.strptime('2023-04-19 03:14:30', '%Y-%m-%d %H:%M:%S'),
},
{
'postId': '1234568',
'title': 'Test2',
'createdTSUTC': datetime.strptime('2023-04-20 03:14:30', '%Y-%m-%d %H:%M:%S'),
}
]
testSchema = StructType([
StructField("postId",StringType(),False),
StructField("title",StringType(),False),
StructField("createdTSUTC", TimestampType(), False),
])
testAggDataDf = spark.createDataFrame(testAggData, testSchema).toPandas()
</code></pre>
|
<python><pandas><pyspark>
|
2023-04-22 17:51:01
| 1
| 616
|
Ken Myers
|
76,080,957
| 2,476,219
|
SQLAlchemy 2 generic mixin
|
<p>My goal is to create a generic mixin, a concept best explained by an example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Generic
from sqlalchemy.orm import Mapped, mapped_column, MappedAsDataclass, DeclarativeBase
from sqlalchemy import create_engine
T = TypeVar("T")
class MyGenericMixin(Generic[T]):
id: Mapped[int] = mapped_column(primary_key=True, unique=True, autoincrement=True, nullable=False)
my_data: Mapped[T] = mapped_column(nullable=False)
class Base(MappedAsDataclass, DeclarativeBase):
pass
class MyIntImpl(Base, MyGenericMixin[int]):
__tablename__ = "my_int_impl"
class MyStrImpl(Base, MyGenericMixin[str]):
__tablename__ = "my_str_impl"
engine = create_engine("sqlite:///example.db")
Base.metadata.create_all(engine)
</code></pre>
<p>Is something similar possible in SQLAlchemy 2?</p>
|
<python><generics><sqlalchemy>
|
2023-04-22 17:17:33
| 1
| 3,688
|
Aart Stuurman
|
76,080,822
| 348,183
|
wxPython: Automatically resize the frame to keep free space out
|
<p>I have a very simple wxPython app which consist of a <code>frame</code> containing:</p>
<ol>
<li><code>control1</code> that keeps its aspect ratio on resize.</li>
<li><code>control2</code> that keeps its height constant and fills the entire width of the <code>frame</code>.</li>
</ol>
<p>I'm using a <code>BoxSizer</code> with the <code>SHAPED</code> attribute set for <code>control1</code>.</p>
<hr />
<p>It looks fine initially, <code>control1</code> is in dark-gray, <code>control2</code> is in black:
<a href="https://i.sstatic.net/rxQBS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rxQBS.png" alt="initial layout of the frame" /></a></p>
<p>However, when I resize the frame horizontally (by dragging the left or right border of the window), as the <code>frame</code>'s height is kept constant, and <code>control1</code> keeps its aspect ratio, there is empty space on the right (in red):
<a href="https://i.sstatic.net/neRvw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/neRvw.png" alt="frame resized horizontally" /></a></p>
<p>I would like it to resize <em>vertically</em> at the same time as I am resizing it horizontally, so that there is no empty space, i.e.:
<a href="https://i.sstatic.net/UsnhX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UsnhX.png" alt="frame resized horizontally with automatic height adjustment" /></a></p>
<p>Of course, the same problem happens when resizing it vertically (by dragging the top or bottom borders of the <code>frame</code>).</p>
<hr />
<p>Here is the source code:</p>
<pre><code>import wx
app = wx.App()
frame = wx.Frame(None)
frame.SetBackgroundColour(wx.Colour(0xFF, 0, 0))
sizer = wx.BoxSizer(wx.VERTICAL)
control1 = wx.Panel(frame, size=(640, 360))
control1.SetBackgroundColour(wx.Colour(0x40, 0x40, 0x40))
sizer.Add(control1, 0, wx.EXPAND | wx.SHAPED)
control2 = wx.Panel(frame, size=(32, 32))
control2.SetBackgroundColour(wx.Colour(0, 0, 0))
sizer.Add(control2, 0, wx.EXPAND)
frame.SetSizerAndFit(sizer)
frame.Show()
app.MainLoop()
</code></pre>
|
<python><layout><wxpython><wxwidgets>
|
2023-04-22 16:51:50
| 1
| 12,646
|
Albus Dumbledore
|
76,080,764
| 480,118
|
Is there any way to get the class instance given the member variable?
|
<p>While this might seem like a silly example, it's the simplest way I can think of to explain what I'm after...let's say I have:</p>
<pre><code>class MyClass:
def __init__(self) -> None:
self.table_df = pd.DataFrame()
self.table_df2 = pd.DataFrame()
</code></pre>
<p>I can do something like this</p>
<pre><code>inst=MyClass()
eval('print(table_df)', inst.__dict__)
</code></pre>
<p>But what I really want to do is, given a pointer/reference to a property of an instance, be able to get its parent instance and do something like this (which of course doesn't work):</p>
<pre><code>inst=MyClass()
table_ref = inst.table_df
eval('print(table_df2)', table_ref.parent.__dict__)
</code></pre>
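<p>Plain attribute values don't carry a pointer back to their owner, so one workaround is to record the owner explicitly on the child object. A sketch with an illustrative stand-in class (this is not a pandas feature — a real <code>pd.DataFrame</code> would need something like <code>df.attrs</code> or a wrapper); a weak reference is used so the child doesn't keep the parent alive artificially:</p>

```python
import weakref


class Table:
    """Stand-in for pd.DataFrame that remembers its owning instance."""
    def __init__(self, name):
        self.name = name
        self._parent = None  # weakref.ref set by the owner

    @property
    def parent(self):
        return self._parent() if self._parent is not None else None


class MyClass:
    def __init__(self):
        self.table_df = Table("table_df")
        self.table_df2 = Table("table_df2")
        # Record the back-reference on each child we care about.
        for t in (self.table_df, self.table_df2):
            t._parent = weakref.ref(self)


inst = MyClass()
table_ref = inst.table_df
print(table_ref.parent.table_df2.name)  # → table_df2
```

<p>There is no general-purpose reverse lookup in Python (an object can be referenced from many places), so an explicit back-reference like this is the usual pattern.</p>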
|
<python>
|
2023-04-22 16:40:30
| 0
| 6,184
|
mike01010
|
76,080,711
| 15,098,472
|
Stack arrays on specified dimension with arbitrary dimension size
|
<p>Consider the following data:</p>
<pre><code>data = np.array([[i for i in range(3)] for _ in range(9)])
print(data)
print(f'data has shape {data.shape}')
[[0 1 2]
[0 1 2]
[0 1 2]
[0 1 2]
[0 1 2]
[0 1 2]
[0 1 2]
[0 1 2]
[0 1 2]]
data has shape (9, 3)
</code></pre>
<p>And some parameter, let's call it <code>history</code>. The functionality of <code>history</code> is that it stacks <code>history</code>-many arrays <code>[0 1 2]</code> along the first dimension. As an example, consider 1 iteration of that process with <code>history=2</code></p>
<pre><code>history = 2
data = np.array([[[0, 1, 2], [0, 1, 2]]])
print(f'data has now shape {data.shape}')
data has now shape (1, 2, 3)
</code></pre>
<p>Now, let's consider 2 iterations:</p>
<pre><code>history = 2
data = np.array([[[0, 1, 2], [0, 1, 2]],[[0, 1, 2], [0, 1, 2]]])
print(f'data has now shape {data.shape}')
data has now shape (2, 2, 3)
</code></pre>
<p>This process should be repeated until the data is fully processed. That implies that we might lose some data at the end, because <code>data.shape[0]/history % 2 != 0</code>.
The final result for <code>history=2</code> would thus be</p>
<pre><code> ([[[0, 1, 2],
[0, 1, 2]],
[[0, 1, 2],
[0, 1, 2]],
[[0, 1, 2],
[0, 1, 2]],
[[0, 1, 2],
[0, 1, 2]]])
</code></pre>
<p>How can this be done efficiently?</p>
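<p>A sketch of one fast approach: trim the incomplete tail and use a single <code>reshape</code>, with no Python-level looping:</p>

```python
import numpy as np

data = np.array([[i for i in range(3)] for _ in range(9)])  # shape (9, 3)
history = 2

# Drop the rows that don't fill a complete group, then reshape.
# reshape on a contiguous slice returns a view, so no data is copied.
usable = (data.shape[0] // history) * history
stacked = data[:usable].reshape(-1, history, data.shape[1])
print(stacked.shape)  # → (4, 2, 3)
```

<p>If overlapping windows were wanted instead of disjoint groups, <code>numpy.lib.stride_tricks.sliding_window_view</code> would be the analogous zero-copy tool.</p>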
|
<python><arrays><numpy>
|
2023-04-22 16:28:25
| 1
| 574
|
kklaw
|
76,080,692
| 12,411,536
|
Dash multipage "stored component" values
|
<p>I am trying to implement a multipage Dash web app and I am having a hard time "storing" changes in components when switching back and forth across pages. Namely, each time I move to a new page, the layout is refreshed.</p>
<p>Below is a (minimal?) code example for reproducibility purposes. On Page 1 there is an HTML table whose first row allows for user input plus an add button; inserted rows can then be removed as well.
Page 2 is just a placeholder for switching back and forth.</p>
<p>I guess one strategy could be to have different <code>dcc.Store</code>s and record what happens, and populate pages from that when switching. But I hope there is a better strategy tbh :)</p>
<p><strong>index.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import dash
from dash import html, dcc
import dash_bootstrap_components as dbc
# Dash app
app = dash.Dash(
__name__,
external_stylesheets=[
dbc.themes.FLATLY,
dbc.icons.BOOTSTRAP,
],
use_pages=True,
)
sidebar = dbc.Col(
id="sidebar",
children=[
dbc.Nav(
id="sidebar-nav",
children=[
dbc.NavLink(children="page1", id="page1", href="/page1", active="exact"),
dbc.NavLink(children="page2", id="page2", href="/page2", active="exact")
],
),
],
)
main_content = dbc.Col(
id="main-content",
className="main-content-expanded",
children=dash.page_container,
)
page = dbc.Row(
id="page",
children=[
sidebar,
main_content,
],
)
url = dcc.Location(id="url", refresh=False)
# App layout
app.layout = html.Div(
id="layout",
children=[page, url],
)
if __name__ == "__main__":
app.run_server(
debug=True,
host="0.0.0.0",
port=8080,
)
</code></pre>
<p><strong>pages/page1.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import dash
import dash_bootstrap_components as dbc
from dash import MATCH, Input, Output, Patch, State, callback, html, dcc
from dash.exceptions import PreventUpdate
dash.register_page(__name__, path="/page1", name="Page1")
def make_filled_row(button_clicked: int, name: str) -> html.Tr:
"""
Creates a filled row in the table, with dynamic id using the button_clicked
parameter.
"""
return html.Tr(
[
html.Td(name),
html.Td(
html.Button(
"Delete",
id={"index": button_clicked, "type": "delete"},
className="delete-user-btn",
)
),
],
id={"index": button_clicked, "type": "table-row"},
)
header = [
html.Thead(
html.Tr(
[
html.Th("First Name"),
html.Th(""),
]
),
id="table-header",
)
]
add_row = html.Tr([
html.Td(dcc.Input(id="name-input")),
html.Td(html.Button("Add", id="add-user-btn",)),
])
body = [html.Tbody([add_row], id="table-body")]
table = html.Table(header + body)
# Layout of the page
layout = dbc.Container([
dbc.Row([dbc.Col([table])]),
])
# List of callbacks for the page
@callback(
Output("table-body", "children", allow_duplicate=True),
Output("name-input", "value"),
Input("add-user-btn", "n_clicks"),
State("name-input", "value"),
prevent_initial_call=True,
)
def on_add(n_clicks: int, name: str):
"""Adds a new row to the table, and clears the input fields"""
table_body = Patch()
table_body.append(make_filled_row(n_clicks, name))
return table_body, ""
@callback(
Output({"index": MATCH, "type": "table-row"}, "children"),
Input({"index": MATCH, "type": "delete"}, "n_clicks"),
prevent_initial_call=True,
)
def on_delete(n_clicks: int) -> html.Tr:
"""Deletes the row from the table"""
if not n_clicks:
raise PreventUpdate
return html.Tr([])
</code></pre>
<p><strong>pages/page2.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import dash
import dash_bootstrap_components as dbc
from dash import html
dash.register_page(__name__, path="/page2", name="Page2")
# Layout of the page
layout = dbc.Container([
dbc.Row([dbc.Col(html.Div("You are on page 2"))]),
])
</code></pre>
|
<python><plotly-dash>
|
2023-04-22 16:24:20
| 2
| 6,614
|
FBruzzesi
|
76,080,627
| 8,267,332
|
Elasticsearch Docker Authentication Exception with Python client
|
<p>Elasticsearch Docker version: 8.5.0<br />
Python version: 3.10<br />
elastic-transport==8.4.0<br />
elasticsearch==8.7.0</p>
<p>Elastic Search Docker Run command:
<code>docker run -d -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=true" -e "ELASTIC_PASSWORD=my_pass" docker.elastic.co/elasticsearch/elasticsearch:8.5.0</code></p>
<p>Python connection code:</p>
<pre><code>from elasticsearch import Elasticsearch, helpers
es = Elasticsearch(
['http://localhost:9200'],
basic_auth=('elastic', 'my_pass')
)
print(es.ping())
print(es.info())
</code></pre>
<p>I'm always getting:</p>
<p><code>elasticsearch.AuthenticationException: AuthenticationException(401, 'security_exception', 'unable to authenticate user [elastic] for REST request [/]') </code></p>
<p>Changing password, or also setting the user per ENV Variable in the docker run, makes no difference, tried all variations. Always the same error. Elastic Logs are nowhere to be found, and default logs do not help. What might be the issue?</p>
|
<python><docker><elasticsearch>
|
2023-04-22 16:10:38
| 0
| 369
|
PssstZzz
|
76,080,581
| 11,246,056
|
How to sum durations in Polars dataframe?
|
<p>I have the following dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import polars as pl
df = pl.DataFrame(
{
"idx": [259, 123],
"timestamp": [
[
datetime.datetime(2023, 4, 20, 1, 45),
datetime.datetime(2023, 4, 20, 1, 51, 7),
datetime.datetime(2023, 4, 20, 2, 29, 50),
],
[
datetime.datetime(2023, 4, 19, 6, 0, 1),
datetime.datetime(2023, 4, 19, 6, 0, 17),
datetime.datetime(2023, 4, 19, 6, 0, 26),
datetime.datetime(2023, 4, 19, 19, 53, 29),
datetime.datetime(2023, 4, 19, 19, 54, 4),
datetime.datetime(2023, 4, 19, 19, 57, 52),
],
],
}
)
</code></pre>
<pre><code>print(df)
# Output
shape: (2, 2)
┌─────┬───────────────────────────────────────────────────────────────────┐
│ idx ┆ timestamp │
│ --- ┆ --- │
│ i64 ┆ list[datetime[μs]] │
╞═════╪═══════════════════════════════════════════════════════════════════╡
│ 259 ┆ [2023-04-20 01:45:00, 2023-04-20 01:51:07, 2023-04-20 02:29:50] │
│ 123 ┆ [2023-04-19 06:00:01, 2023-04-19 06:00:17, … 2023-04-19 19:57:52] │
└─────┴───────────────────────────────────────────────────────────────────┘
</code></pre>
<p>I want to know the total duration of each id, so I do:</p>
<pre class="lang-py prettyprint-override"><code>df = df.with_columns(
pl.col("timestamp")
.map_elements(lambda x: [x[i + 1] - x[i] for i in range(len(x)) if i + 1 < len(x)])
.alias("duration")
)
</code></pre>
<p>Which gives me:</p>
<pre class="lang-py prettyprint-override"><code>shape: (2, 2)
┌─────┬─────────────────────┐
│ idx ┆ duration │
│ --- ┆ --- │
│ i64 ┆ list[duration[μs]] │
╞═════╪═════════════════════╡
│ 259 ┆ [6m 7s, 38m 43s] │
│ 123 ┆ [16s, 9s, … 3m 48s] │
└─────┴─────────────────────┘
</code></pre>
<p>Now, in Pandas, I would have used <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.total_seconds.html" rel="nofollow noreferrer">total_seconds</a> when calling <code>apply</code> and <code>sum</code> the list, like this:</p>
<pre class="lang-py prettyprint-override"><code>df["duration"] = (
df["timestamp"]
.apply(
lambda x: sum(
[(x[i + 1] - x[i]).total_seconds() for i in range(len(x)) if i + 1 < len(x)]
)
)
.astype(int)
)
</code></pre>
<p>Which would give me the <strong>expected result</strong>:</p>
<pre class="lang-py prettyprint-override"><code>print(df[["idx", "duration"]])
# Output
idx duration
0 259 2690
1 123 50271
</code></pre>
<p>What would be the equivalent, idiomatic way, to do this in Polars?</p>
|
<python><dataframe><duration><python-polars>
|
2023-04-22 16:02:04
| 1
| 13,680
|
Laurent
|
76,080,418
| 19,130,803
|
MyPy: Incompatible arg-type for a property
|
<p>I am working on a Python web app. I have defined an <code>enum</code> as below:</p>
<pre><code>class AccountTypes(StrEnum):
A: str = "aaa"
B: str = "bbb"
</code></pre>
<p>Then, I have a <code>class</code> as below:</p>
<pre><code>class Account:
_types: type[AccountTypes] = AccountTypes
@classmethod
def types(cls) -> type[AccountTypes]:
return cls._types
</code></pre>
<p>Then, I have defined <code>usecase</code> as below:</p>
<pre><code>class AccountUsecase:
@classmethod
def types(cls) -> type[AccountTypes]:
return Account.types()
</code></pre>
<p>Then, I have defined a <code>service</code> as below:</p>
<pre><code>class AccountService:
Types = property(fget=AccountUsecase.types)
</code></pre>
<p>I am using <code>service</code> inside <code>infra/web</code> as below:</p>
<pre><code>if account_type == AccountService.Types.A:
# some code to execute
</code></pre>
<p>On running the code, I am getting an error in the <code>service</code> file as below:</p>
<pre><code> error: Argument "fget" to "property" has incompatible type "Callable[[], Type[AccountTypes]]"; expected "Optional[Callable[[Any], Any]]" [arg-type]
</code></pre>
<p>What I am missing?</p>
|
<python>
|
2023-04-22 15:26:42
| 0
| 962
|
winter
|
76,080,358
| 13,099,604
|
Why does conditioning on a dataframe return NaN?
|
<p>I have some coordinate data, and it seems the values are very dispersed. So, I want to keep only the values that are within 2 standard deviations of the mean. There are two problems:</p>
<pre><code># Load the data
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
df = pd.read_csv("map.csv")
# calculate mean and std of longitude columns
lat_mean = df[['pickup_latitude', 'dropoff_latitude']].mean()
lat_std = df[['pickup_latitude', 'dropoff_latitude']].std()
lon_mean = df[['pickup_longitude', 'dropoff_longitude']].mean()
lon_std = df[['pickup_longitude', 'dropoff_longitude']].std()
</code></pre>
<p>Then I want to keep everything within 2 standard deviations of the mean:</p>
<pre><code>df[np.abs(df[['pickup_latitude', 'dropoff_latitude']] - lat_mean) < 2 * lat_std]
</code></pre>
<p>But I get <code>NAN</code>! Why??</p>
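<p>The NaNs appear because indexing a DataFrame with a same-shaped boolean DataFrame behaves like <code>.where()</code>: the frame's shape is preserved and cells where the mask is False become NaN. Filtering rows needs a 1-D mask instead. A small sketch with toy data (column names illustrative):</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 10, 2], "b": [3, 4, 50]})
mask = df < 5                 # DataFrame of booleans, same shape as df

print(df[mask])               # same as df.where(mask): NaN where mask is False
rows = df[mask.all(axis=1)]   # 1-D mask: keep rows where *all* columns pass
print(rows)
```

<p>So for the coordinate data, something like <code>df[(np.abs(df[cols] - mean) < 2 * std).all(axis=1)]</code> would keep whole rows rather than produce NaN-filled cells.</p>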
|
<python><pandas><dataframe>
|
2023-04-22 15:14:52
| 1
| 325
|
Hemfri
|
76,080,237
| 5,168,463
|
Delete 80K images from google drive at once
|
<p>I accidentally unzipped a folder with 80K images in my main Google Drive folder <strong>MyDrive</strong>. And now I am trying to delete these files. I found a solution at this <a href="https://stackoverflow.com/questions/62496307/remove-files-on-google-drive-using-google-colab">link</a> and I am using the following code:</p>
<pre><code>import os
import glob
fileList = glob.glob('/content/drive/MyDrive/*.png')
print("Number of files: ",len(fileList))
for filePath in fileList:
try:
os.remove(filePath)
except:
print("Error while deleting file : ", filePath)
</code></pre>
<p>The problem here is that the mounted drive only shows and deletes a few images at a time (between 1,000 and 1,500). I have to manually unmount, restart the runtime and mount again to get the next batch of images to delete. Doing this manually will take a lot of time. Is there a way to delete all of them at once?</p>
|
<python><python-3.x><google-drive-api><google-colaboratory>
|
2023-04-22 14:47:12
| 2
| 515
|
DumbCoder
|
76,079,894
| 16,363,897
|
Expanding standard deviation of all columns
|
<p>I have the following dataframe:</p>
<pre><code> a b c
day
1 2 2 8
2 1 2 2
3 7 2 3
4 2 9 7
5 4 6 4
</code></pre>
<p>I want to get a new column ("std") with the expanding standard deviation of all column values. NaNs should be ignored.
This is the expected output:</p>
<pre><code> a b c std
day
1 2 2 8 2.828427
2 1 2 2 2.339278
3 7 2 3 2.346524
4 2 9 7 2.782635
5 4 6 4 2.542090
</code></pre>
<p>For example, 2.339278 is equal to <code>numpy.std([2,2,8,1,2,2])</code></p>
<p>I tried the following:</p>
<pre><code>df['std'] = df.reset_index().groupby('day').expanding().std()
</code></pre>
<p>but resulted in the following TypeError: incompatible index of inserted column with frame index</p>
<p>Any help? Thanks</p>
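<p>A sketch of one way to get the expected numbers: pool the flattened values of all rows up to and including each day, and take the population standard deviation (<code>ddof=0</code>, which is what <code>numpy.std</code> uses); <code>np.nanstd</code> ignores NaNs:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"a": [2, 1, 7, 2, 4], "b": [2, 2, 2, 9, 6], "c": [8, 2, 3, 7, 4]},
    index=pd.Index([1, 2, 3, 4, 5], name="day"),
)

# Pool all values seen up to each day and take the population std
# (ddof=0, matching numpy.std); np.nanstd skips NaNs.
vals = df.to_numpy(dtype=float)
df["std"] = [np.nanstd(vals[: i + 1].ravel()) for i in range(len(vals))]
```

<p>This is O(n²) in the number of rows; for a large frame, keeping running counts, sums, and sums of squares would give the same result in one pass.</p>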
|
<python><pandas><numpy>
|
2023-04-22 13:28:28
| 3
| 842
|
younggotti
|
76,079,841
| 3,204,942
|
run `python.exe` to run the python shell interactively
|
<p>I am trying to use <code>exec.Command</code> to run <code>python.exe</code> as an interactive Python shell. How can I type into it?</p>
<pre><code>package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    fmt.Println("Before Python shell:")
    cmd := exec.Command("python.exe")
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    _ = cmd.Run()
}
</code></pre>
|
<python><shell><go>
|
2023-04-22 13:18:22
| 1
| 2,349
|
Gary
|
76,079,813
| 3,323,526
|
Python paramiko SSH: why doesn't reading of stdout free some buffer space?
|
<p>I'm writing test code to check how the SSH buffer works, and exactly how many bytes of unread output block the remote program from proceeding. Here is the code:</p>
<p>Client side:</p>
<pre><code>from paramiko import SSHClient, AutoAddPolicy
from datetime import datetime

def log(msg, *args, **kwargs):
    dt_str = datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')
    print(f"[{dt_str}] {msg}", *args, **kwargs)

def main():
    ssh_client = SSHClient()
    ssh_client.set_missing_host_key_policy(AutoAddPolicy())
    ssh_client.connect(hostname='127.0.0.1', username='my_user', password='my_password', port=3322)
    stdin, stdout, stderr = ssh_client.exec_command('python3 /home/my_user/long_output.py', timeout=30)
    log('Print stdout')
    for line in stdout:
        log(line, end='')
    log('Print stderr')
    for line in stderr:
        print(line, end='')

if __name__ == '__main__':
    main()
</code></pre>
<p>The client side code works fine with scripts that generate ordinary amounts of output. But I want to test exactly how many bytes of output block the program, so I made the following server side script (save it on the SSH server under $HOME as 'long_output.py'):</p>
<pre><code>import sys
import string
from time import sleep

def main():
    print('started', flush=True)  # [a] 8 bytes to <stdout>
    sleep(2)
    s = string.ascii_letters + string.digits + string.punctuation + '\n'  # length of each line: 95 bytes
    # target size: 2 MiB = 2 * 1024 * 1024 = 2097152 bytes
    for _ in range(22075):
        # 22075 * 95 = 2097125 bytes of content is written to <stderr>
        print(s, end='', file=sys.stderr)
    # there are still 2097152 - 2097125 = 27 bytes to the target size
    print('123456789' + '\n', end='', file=sys.stderr)  # [b] but actually you can add at most 9 bytes only
    sleep(2)
    print('finished', flush=True)  # [c] 9 bytes to <stdout>

if __name__ == '__main__':
    main()
</code></pre>
<p>My client side code reads <strong>stdout</strong> first, so if there is too much <strong>stderr</strong>, which fills up the buffer, the client side code won't get a chance to read any more data from <strong>stdout</strong>. The size of the buffer is 2 MiB, i.e. 2,097,152 bytes, but I found that my client side code hangs when the server side <strong>stderr</strong> is more than 2,097,134 bytes, which is 18 bytes less than the limit. You can try it by modifying line [b] in the server side code: if the character '9' is removed, the client side works fine; otherwise it blocks.</p>
<p>I thought those 18 bytes were used by some overhead data, such as control flags. But soon I found that by removing some characters from lines [a] and [c], the client code works again. So I think the buffer is shared by <strong>stdout</strong> and <strong>stderr</strong>, and the limit is exactly 2 MiB - 1 byte.</p>
<p>But it brings me a new question: I've read the <strong>stdout</strong> in the client side code! Why doesn't the 8 bytes + 9 bytes of <strong>stdout</strong> get freed from the buffer?</p>
<p>As per documentation of paramiko:</p>
<blockquote>
<p>Because SSH2 has a windowing kind of flow control, if you stop reading
data from a Channel and its buffer fills up, the server will be unable
to send you any more data until you read some of it.</p>
</blockquote>
<p>(<a href="https://docs.paramiko.org/en/stable/api/channel.html#paramiko.channel.Channel" rel="nofollow noreferrer">https://docs.paramiko.org/en/stable/api/channel.html#paramiko.channel.Channel</a>)</p>
<p>I have "read some of it" (the <strong>stdout</strong>), why can't the server side script write more <strong>stderr</strong> data to the buffer?</p>
<p>Platform information:</p>
<ul>
<li>Client side: Windows 10, Python 3.8.10, paramiko 3.1.0</li>
<li>Server side: Ubuntu 20.04.2 LTS, Python 3.8.10, OpenSSH_8.2p1 Ubuntu-4ubuntu0.2, OpenSSL 1.1.1f 31 Mar 2020</li>
</ul>
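<p>For what it's worth, the usual way to sidestep this deadlock, whatever the exact window bookkeeping, is to drain <strong>stdout</strong> and <strong>stderr</strong> concurrently instead of sequentially. A minimal sketch of the pattern; in-memory streams stand in for the channel's file objects here so it runs standalone:</p>

```python
import io
import threading

def drain(stream, chunks):
    # Keep reading so the remote side's flow-control window keeps reopening
    for chunk in iter(lambda: stream.read(4096), b""):
        chunks.append(chunk)

# Stand-ins for the stdout/stderr file objects returned by exec_command
out_stream = io.BytesIO(b"started\nfinished\n")
err_stream = io.BytesIO(b"x" * 100_000)

out_chunks, err_chunks = [], []
threads = [
    threading.Thread(target=drain, args=(out_stream, out_chunks)),
    threading.Thread(target=drain, args=(err_stream, err_chunks)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

<p>With paramiko, the same two threads would be handed the real <code>stdout</code> and <code>stderr</code> objects, so neither stream can fill the shared window while the other is being read.</p>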
|
<python><ssh><paramiko><openssh>
|
2023-04-22 13:12:21
| 0
| 3,990
|
Vespene Gas
|
76,079,804
| 15,528,750
|
Singularity: Python package not available even though it is in docker
|
<p>I built myself a docker image from this docker file:</p>
<pre><code>FROM ubuntu:22.04
RUN apt-get update
RUN apt-get upgrade -y
# Install python:
RUN apt-get install software-properties-common -y
RUN add-apt-repository ppa:deadsnakes/ppa -y
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install python3.11 -y
RUN python3 --version
RUN apt-get upgrade
# Install pip:
RUN apt-get install python3-pip -y
# Install disba (https://github.com/keurfonluu/disba/):
RUN pip install disba[full] --user
</code></pre>
<p>(<code>disba</code> is a Python package used for the modeling of surface wave dispersions, and I need this package).</p>
<p>I noticed that when I try importing <code>disba</code> in python3 from a docker container, it works, but when I try to do the same thing in singularity (which is installed on our HPC cluster), it complains:</p>
<pre><code>Singularity> python3
Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import disba
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'disba'
</code></pre>
<p>(The way I built the singularity container is by pushing my docker image to dockerhub and then doing</p>
<pre><code>singularity build <container_name> docker://<docker_url>
</code></pre>
<p>followed by</p>
<pre><code>singularity run <container_name>
</code></pre>
<p>)
Unfortunately, I'm at a loss to understand why I cannot import the module I need... I'd appreciate any hints or ideas.</p>
<p>EDIT: Commenting on the answer by @Henrique Andrade:
i) When I type <code>python3.11</code> in singularity and then try importing <code>disba</code>, the error persists.
ii) Also in docker, when I type <code>python3</code>, Python 3.10.6 is started, but importing <code>disba</code> works, so it doesn't seem to be an issue with the Python version.</p>
|
<python><docker><singularity-container>
|
2023-04-22 13:09:44
| 1
| 566
|
Imahn
|
76,079,787
| 904,992
|
Making moviepy concatenations faster
|
<p>I'm adding clips to a main clip:</p>
<pre><code>final_clip = VideoFileClip('video.mp4')
secondary_clip = VideoFileClip('video2.mp4')

for tr in time_ranges:
    clip = secondary_clip.subclip(tr['start'], tr['end']).set_position((0, 0))
    final_clip = concatenate_videoclips([final_clip.subclip(
        0, tr['end']), clip, final_clip.subclip(tr['end'], final_clip.duration)], method="compose")
</code></pre>
<p>The problem is that the video is 1GB+ and 35 minutes long, so it barely renders on my laptop.
Any suggestions to make it faster?</p>
<p>Thoughts I had:</p>
<ul>
<li>Try to use something different than "compose" (I need the secondary clip to be fullscreen).</li>
<li>Parallelize it and concat the operations later.</li>
</ul>
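<p>One idea in the direction of the first thought: each pass through the loop re-composes everything concatenated so far, so the work grows quadratically. Building one flat segment plan and calling <code>concatenate_videoclips</code> exactly once avoids that. A sketch of the planning step, assuming the time ranges refer to positions in the original main clip:</p>

```python
def plan_segments(main_duration, time_ranges):
    """Flatten the inserts into one ordered list of (source, start, end)."""
    segments, cursor = [], 0.0
    for tr in sorted(time_ranges, key=lambda t: t["end"]):
        if tr["end"] > cursor:
            segments.append(("main", cursor, tr["end"]))  # main clip up to the insert point
            cursor = tr["end"]
        segments.append(("secondary", tr["start"], tr["end"]))
    if cursor < main_duration:
        segments.append(("main", cursor, main_duration))  # tail of the main clip
    return segments

segments = plan_segments(10.0, [{"start": 1, "end": 2}, {"start": 3, "end": 4}])
```

<p>Each <code>("main", s, e)</code> then maps to <code>final_clip.subclip(s, e)</code>, each <code>("secondary", s, e)</code> to <code>secondary_clip.subclip(s, e).set_position((0, 0))</code>, and the whole list is concatenated once with <code>method="compose"</code>.</p>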
|
<python><moviepy>
|
2023-04-22 13:04:09
| 1
| 8,294
|
funerr
|
76,079,776
| 1,999,585
|
How can I compute the value of softmax function on only a couple of Dataframe rows?
|
<p>I have the following Dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>Plassering</th>
<th>Average FGrating</th>
</tr>
</thead>
<tbody>
<tr>
<td>22943</td>
<td>1</td>
<td>100.43</td>
</tr>
<tr>
<td>22944</td>
<td>2</td>
<td>93.5</td>
</tr>
<tr>
<td>22945</td>
<td>3</td>
<td>104.6</td>
</tr>
<tr>
<td>22746</td>
<td>4</td>
<td>101.3</td>
</tr>
<tr>
<td>22947</td>
<td>1</td>
<td>102.05</td>
</tr>
<tr>
<td>22948</td>
<td>2</td>
<td>107.35</td>
</tr>
<tr>
<td>22949</td>
<td>3</td>
<td>109.12</td>
</tr>
</tbody>
</table>
</div>
<p>I am trying to apply the <code>softmax</code> function for the <code>Average FGrating</code> column, for the entire DataFrame, while the <code>Plassering</code> values are increasing. This means that I want to apply <code>softmax</code> for the first four rows in the DataFrame, then for the next 3 rows, separately, and so on.</p>
<p>The entire DataFrame, having about 5000 rows, is structured like this.</p>
<p>My first attempt is to cycle through the rows of this DataFrame, using <code>iterrows()</code> and, while <code>Plassering</code> is increasing, the <code>Average FGrating</code> value is added to a list. When the <code>Plassering</code> value is smaller that the value from the previous row, I compute the <code>softmax</code> passing the list as a parameter, then empty the list and the cycle goes on. However, I read <a href="https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe-in-pandas">here</a> that it is not a good idea, performance-wise.</p>
<p>Do you have any better ideas than mine?</p>
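<p>A vectorised alternative that avoids <code>iterrows()</code> entirely: flag a new group wherever <code>Plassering</code> stops increasing, turn the flags into a group id with <code>cumsum</code>, and apply softmax per group with <code>groupby().transform</code>. A sketch on the sample rows:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Plassering": [1, 2, 3, 4, 1, 2, 3],
    "Average FGrating": [100.43, 93.5, 104.6, 101.3, 102.05, 107.35, 109.12],
})

# A group starts wherever Plassering is <= its previous value
group_id = (df["Plassering"].diff().fillna(0) <= 0).cumsum()

def softmax(x):
    e = np.exp(x - x.max())  # subtract the max for numerical stability
    return e / e.sum()

df["softmax"] = df.groupby(group_id)["Average FGrating"].transform(softmax)
```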
|
<python><pandas><dataframe><loops>
|
2023-04-22 13:02:03
| 2
| 2,424
|
Bogdan Doicin
|
76,079,763
| 7,987,455
|
How do I scrape data from a div-container?
|
<p>I'm trying to scrape app names (which appear at the bottom of the page) from <a href="https://www.workato.com/integrations/salesforce" rel="nofollow noreferrer">this website</a> using requests_html and CSS selectors, but it returns an empty list. Can you please explain why?
The code:</p>
<pre><code>import requests_html
from requests_html import HTMLSession
s = HTMLSession()
headers = {
"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
}
url = 'https://www.workato.com/integrations/salesforce'
r = s.get(url, headers=headers)
r.html.render(sleep=4)
apps = r.html.find('#__layout > div > div > div > div > div > main > article.apps-page__section.apps-page__section_search > div > div > div.apps-page__integrations > div > ul')
print(apps)
</code></pre>
<p>I tried the following:</p>
<pre><code>for app in apps:
print(app)
</code></pre>
<p>and I also used <code>.text</code></p>
<p>but the output always says:</p>
<pre><code>[]
</code></pre>
|
<python><web-scraping><beautifulsoup><css-selectors><python-requests-html>
|
2023-04-22 12:58:50
| 1
| 315
|
Ahmad Abdelbaset
|
76,079,748
| 7,906,206
|
Can't import ydata_profiling into my Google Colab environment
|
<p>I can't import ydata_profiling into my Python environment. I do understand that pandas_profiling has been deprecated. Is there anything wrong with my code?</p>
<p><a href="https://i.sstatic.net/tMQhf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tMQhf.png" alt="enter image description here" /></a></p>
|
<python>
|
2023-04-22 12:56:16
| 1
| 1,216
|
Immortal
|
76,079,700
| 9,524,424
|
ModuleNotFoundError: No module named 'tensorflow.compat' / Keras compability issue?
|
<p>I have Python 3.11.3 (Windows) installed and use tensorflow and keras (both version 2.12.0). I keep getting the error:</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'tensorflow.compat'</p>
</blockquote>
<p>There is a lot of discussion about the error but I could not find a proper solution so far.</p>
<p>The error occurs at the first import from Keras:</p>
<pre><code>from keras.preprocessing.image import ImageDataGenerator
</code></pre>
<p>The traceback is:</p>
<pre><code> File "C:\Python3\Lib\site-packages\keras\__init__.py", line 21, in <module>
from keras import models
File "C:\Python3\Lib\site-packages\keras\models\__init__.py", line 18, in <module>
from keras.engine.functional import Functional
File "C:\Python3\Lib\site-packages\keras\engine\functional.py", line 24, in <module>
import tensorflow.compat.v2 as tf
ModuleNotFoundError: No module named 'tensorflow.compat'
</code></pre>
<p>When I dig into the Keras files where the error occurs I see that the problem occurs in <code>functional.py</code> where <code>import tensorflow.compat.v2 as tf</code> is called.</p>
<p>When I look into the tensorflow installation, I see that there is a folder</p>
<pre><code>...\Lib\site-packages\tensorflow\_api\v2\compat
</code></pre>
<p>I can import</p>
<pre><code>import tensorflow._api.v2 as tf
</code></pre>
<p>but what will not work is</p>
<pre><code>import tensorflow.compat.v2 as tf
</code></pre>
<p>I suppose this is a compatibility issue with Keras. Has anyone solved this problem so far? I think replacing the import might be a solution. However, I would prefer not to mess around in the Keras installation. What are my options here?</p>
|
<python><tensorflow><keras>
|
2023-04-22 12:45:06
| 1
| 2,453
|
Peter
|
76,079,575
| 8,324,480
|
Move the pointer in a bytearray as seek does for a BinaryIO
|
<p>If I have a binary file, I can open it in mode <code>rb</code> and move the pointer with <code>.seek()</code>:</p>
<pre><code>with open(fname, "rb") as fid:
    fid.seek(101)
</code></pre>
<p>But this is not possible with a <code>bytearray</code>: <code>bytearray(10).seek(1)</code> raises an <code>AttributeError</code>.</p>
<hr />
<p>Does a <code>bytearray</code> which supports <code>seek</code> exist?</p>
<p>I have 2 almost identical code snippets reading data from a binary file/buffer that I would like to merge, one reading from a binary file and one from a byte array. The reading operation is done with <code>numpy</code>, with either <code>numpy.fromfile</code> or <code>numpy.frombuffer</code>. Both accept an argument <code>offset</code> to control the pointer position, but in a slightly different manner. <code>fromfile</code> defines the offset from the <em>current</em> position while <code>frombuffer</code> defines the offset from the beginning of the buffer.</p>
<p>Any idea on which object I could use instead of <code>bytearray</code> to be able to run the same reader code snippet on either an opened binary file <code>fid</code> or on a <code>bytearray-like</code> buffer?</p>
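<p>One candidate is <code>io.BytesIO</code>: it wraps a bytes-like object in the file protocol, so <code>seek</code> and <code>read</code> behave exactly as on a file opened with <code>"rb"</code>. A sketch of a single reader serving both cases; it uses <code>read</code> plus <code>numpy.frombuffer</code> rather than <code>fromfile</code>, since <code>fromfile</code> needs a real file descriptor:</p>

```python
import io
import numpy as np

def read_block(fid, dtype, count, offset):
    # seek() behaves identically on open(..., "rb") objects and io.BytesIO,
    # so the same reader serves a file and an in-memory buffer
    fid.seek(offset)
    raw = fid.read(count * np.dtype(dtype).itemsize)
    return np.frombuffer(raw, dtype=dtype)

data = bytes(range(16))
block = read_block(io.BytesIO(data), np.uint8, 4, offset=4)
```

<p>An existing <code>bytearray</code> <code>b</code> is wrapped the same way with <code>io.BytesIO(b)</code> (note that construction copies the data).</p>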
|
<python><python-3.x><byte><binaryfiles><bytebuffer>
|
2023-04-22 12:14:58
| 1
| 5,826
|
Mathieu
|
76,079,388
| 610,569
|
How to use cross-encoder with Huggingface transformers pipeline?
|
<p>There're a set of models on huggingface hubs that comes from the <code>sentence_transformers</code> library, e.g. <a href="https://huggingface.co/cross-encoder/mmarco-mMiniLMv2-L12-H384-v1" rel="nofollow noreferrer">https://huggingface.co/cross-encoder/mmarco-mMiniLMv2-L12-H384-v1</a></p>
<p>The suggested usage examples are:</p>
<pre><code># Using sentence_transformers
from sentence_transformers import CrossEncoder
model_name = 'cross-encoder/mmarco-mMiniLMv2-L12-H384-v1'
model = CrossEncoder(model_name)
scores = model.predict([
['How many people live in Berlin?', 'How many people live in Berlin?'],
['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.']
])
scores
</code></pre>
<p>[out]:</p>
<pre><code>array([ 0.36782095, -4.2674575 ], dtype=float32)
</code></pre>
<p>Or</p>
<pre><code># From transformers.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
import torch
# cross-encoder/ms-marco-MiniLM-L-12-v2
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/mmarco-mMiniLMv2-L12-H384-v1')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/mmarco-mMiniLMv2-L12-H384-v1')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'],
['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'],
padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
    scores = model(**features).logits
print(scores)
</code></pre>
<p>[out]:</p>
<pre><code>tensor([[10.7615],
[-8.1277]])
</code></pre>
<p>If a user wants to use the <code>transformers.pipeline</code> on these cross-encoder model, it throws an error:</p>
<pre><code>from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
import torch
# cross-encoder/ms-marco-MiniLM-L-12-v2
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/mmarco-mMiniLMv2-L12-H384-v1')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/mmarco-mMiniLMv2-L12-H384-v1')
pipe = pipeline(model=model, tokenizer=tokenizer)
</code></pre>
<p>It throws an error:</p>
<pre><code>---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_108/785368641.py in <module>
----> 1 pipe = pipeline(model=model, tokenizer=tokenizer)
/opt/conda/lib/python3.7/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
711 if not isinstance(model, str):
712 raise RuntimeError(
--> 713 "Inferring the task automatically requires to check the hub with a model_id defined as a `str`."
714 f"{model} is not a valid model_id."
715 )
RuntimeError: Inferring the task automatically requires to check the hub with a model_id defined as a `str`.
</code></pre>
<p><strong>Q: How to use cross-encoder with Huggingface transformers pipeline?</strong></p>
<p><strong>Q: If a model_id is needed, is it possible to add the model_id as an <code>args</code> or <code>kwargs</code> in <code>pipeline</code>?</strong></p>
<p>There's a similar question <a href="https://stackoverflow.com/questions/74425802/error-inferring-the-task-automatically-requires-to-check-the-hub-with-a-model-i">Error: Inferring the task automatically requires to check the hub with a model_id defined as a `str`. AraBERT model</a> but I'm not sure it's the same issue, since the other question is on <code>'aubmindlab/bert-base-arabertv02'</code> but not the cross-encoder class of models from <code>sentence_transformers</code>.</p>
|
<python><nlp><huggingface-transformers><sentence-transformers><large-language-model>
|
2023-04-22 11:23:03
| 1
| 123,325
|
alvas
|
76,079,353
| 1,112,097
|
Plotly choropleth not working as expected
|
<p>I am trying to understand why no data is showing up on a plotly choropleth map</p>
<p>The full code is in <a href="https://www.kaggle.com/code/andrewstaroscik/plotly-express-choropleth-not-working-as-expected?scriptVersionId=126678658" rel="nofollow noreferrer">this kaggle notebook</a></p>
<p>I am trying to graph some summary data by state using a choropleth map with Plotly Express. I've recreated the problem in the notebook above using county-level population data from the US Census Bureau.</p>
<p>The raw data has multiple entries per state, which I am using groupby to summarize by state.</p>
<pre class="lang-py prettyprint-override"><code> df = df_raw.groupby('State')['Population'].agg(['count','sum','mean','median','max','min']).reset_index()
</code></pre>
<p>Then graphing it with plotly</p>
<pre class="lang-py prettyprint-override"><code>fig = px.choropleth(
df,
locations='State',
color = 'sum',
locationmode='USA-states',
scope='usa'
)
fig.show()
</code></pre>
<p>The chart renders and the scale to the right is scaled properly, but there is no color fill in the state shapes.</p>
<p><a href="https://i.sstatic.net/s6RWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s6RWk.png" alt="enter image description here" /></a></p>
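<p>A common cause of exactly this symptom (a correctly scaled colorbar but empty shapes) is that <code>locationmode='USA-states'</code> expects two-letter postal abbreviations, while Census files typically carry full state names. Whether that is what is happening here is a guess, but the fix is a simple mapping (dict truncated for brevity):</p>

```python
import pandas as pd

# Partial name-to-postal-code map; extend to all states in real use
abbrev = {"Alabama": "AL", "Alaska": "AK", "Arizona": "AZ",
          "California": "CA", "New York": "NY", "Texas": "TX"}

df = pd.DataFrame({"State": ["California", "Texas", "New York"],
                   "sum": [39_500_000, 29_100_000, 20_200_000]})
df["State"] = df["State"].map(abbrev)
```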
|
<python><plotly><choropleth>
|
2023-04-22 11:15:54
| 1
| 2,704
|
Andrew Staroscik
|
76,079,104
| 1,096,660
|
Can you have multiple python versions with Conda and Apache2?
|
<p>I'm trying to wrap my head around configuring Apache2 to run my Django project, which uses a conda environment. After switching recently from venv, the goal is to run multiple projects, each with its own environment.</p>
<p>So far so good and there are multiple tutorials how to set this up.</p>
<p>What I do not understand is the relationship between the different python versions and mod_wsgi. I can compile mod_wsgi either on the system or in a conda env, but then it will always be tied to a specific version.</p>
<p>Does that mean I have to have all my conda envs run the same python version?
Or is it possible to create multiple mod_wsgi modules for every desired python version?</p>
<p>Thank you for your help.</p>
|
<python><python-3.x><apache2><conda><mod-wsgi>
|
2023-04-22 10:15:32
| 1
| 2,629
|
JasonTS
|
76,078,821
| 3,886,898
|
pyinstaller: doesn't save the video from the webcam
|
<p>I have a python script which records the webcam and saves the video using cv2.VideoWriter. The code works with no issues when I run it directly. After creating an exe file on windows using pyinstaller, it opens the webcam and creates output.avi, but the file doesn't open. This is my sample.py script:</p>
<pre><code>import cv2

# Set up video capture
video_capture = cv2.VideoCapture(1)
fourcc = cv2.VideoWriter_fourcc(*'XVID')
video_out = cv2.VideoWriter('output.avi', fourcc, 12.0, (640, 480))

# Record video and audio
while True:
    ret, frame = video_capture.read()
    video_out.write(frame)
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release resources
video_out.release()
video_capture.release()
cv2.destroyAllWindows()
</code></pre>
<p>Then I ran the following command in terminal:</p>
<pre><code>pyinstaller --onefile sample.py
</code></pre>
<p>I also added opencv_videoio_ffmpeg412_64.dll to the folder next to sample.exe, as others suggested, but the same issue persists. I even tried this command:</p>
<pre><code>pyinstaller --onefile sample2.py --add-binary opencv_videoio_ffmpeg412_64.dll;.
</code></pre>
<p>still no luck. Any suggestions please?</p>
|
<python><opencv><ffmpeg><pyinstaller>
|
2023-04-22 09:00:51
| 0
| 1,108
|
Mohammad
|
76,078,707
| 2,998,077
|
Pandas to add new column to indicate if the row fits multiple condition
|
<p>I have a simple dataframe and want to use <code>df.loc</code> to add a new column (green in the screenshot) indicating whether each row fits the condition, like:</p>
<p><a href="https://i.sstatic.net/9QtxC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QtxC.png" alt="enter image description here" /></a></p>
<pre><code>import pandas as pd
from io import StringIO

csvfile_now = StringIO(
"""
Name A B C D
Mike 5 11 50 10
Kate 60 9 20 19
Jane 11 14 20 12
""")
df = pd.read_csv(csvfile_now, sep = '\t', engine='python')
</code></pre>
<p>I've tried:</p>
<pre><code>df_new = df.loc[(df['B'] > (df['A'] * 0.5)) & (df['D'] > (df['C'] * 0.5)), 'Ind'] = 'Yes'
</code></pre>
<p>Also tried:</p>
<pre><code>df['Ind'] = df.loc[:,(df['B'] > (df['A'] * 0.5)) & (df['D'] > (df['C'] * 0.5))]
</code></pre>
<p>Neither of them worked.</p>
<p>What's the good way to do so?</p>
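<p>The first attempt is close: <code>df.loc[mask, 'Ind'] = 'Yes'</code> already mutates <code>df</code> in place, so its result should not be assigned to <code>df_new</code> (the chained assignment just makes <code>df_new</code> the string <code>'Yes'</code>). A sketch, with <code>np.where</code> as an alternative that fills both branches at once:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Name": ["Mike", "Kate", "Jane"],
    "A": [5, 60, 11], "B": [11, 9, 14],
    "C": [50, 20, 20], "D": [10, 19, 12],
})

mask = (df["B"] > df["A"] * 0.5) & (df["D"] > df["C"] * 0.5)

# Option 1: in-place; rows that don't match keep NaN in 'Ind'
df.loc[mask, "Ind"] = "Yes"

# Option 2: fill both branches at once
df["Ind"] = np.where(mask, "Yes", "No")
```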
|
<python><pandas><dataframe>
|
2023-04-22 08:31:34
| 1
| 9,496
|
Mark K
|
76,078,688
| 16,383,578
|
Python turn a list of numbers into a Counter of ranges
|
<p>Given a <code>list</code> of numbers (<code>int</code>s or <code>float</code>s, usually <code>float</code>s), and a number <code>step</code>, what is the best way to turn the <code>list</code> into a <code>Counter</code>, so that the keys are two-element <code>tuple</code>s, each a multiple of <code>step</code>, and the values are the counts of numbers that fall into the range greater than the first number and less than the second number for the respective key?</p>
<p>I have written a minimal reproducible example, to illustrate what I intend to achieve. I already found a solution, but I don't think it is the most efficient way.</p>
<pre class="lang-py prettyprint-override"><code>import random
from bisect import bisect
from collections import Counter

def get_sample(n=256, high=20):
    return [random.random()*high for _ in range(n)]

sample = get_sample()

def analyze_sample(sample, step):
    ceiling = round(max(sample) / step) + 1
    steps = [i*step for i in range(ceiling)]
    bands = [(a, b) for a, b in zip(steps, steps[1:])]
    result = Counter()
    for n in sample:
        result[bands[bisect(steps, n)-1]] += 1
    return Counter({k: v for k, v in sorted(result.items())})

analyze_sample(sample, 0.5)
</code></pre>
<p>What is a better way?</p>
<hr />
<h2>Update</h2>
<p>I just turned my code into a linear scan; it is much faster, but still slow.</p>
<pre><code>def linear_analyze_sample(sample, step):
    sample = sorted(sample)
    ceiling = round(sample[-1] / step) + 1
    steps = [i*step for i in range(ceiling)]
    bands = [(a, b) for a, b in zip(steps, steps[1:])]
    result = dict()
    i = 0
    it = iter(sample)
    n = next(it)
    for step in steps[1:]:
        count = 0
        if n == 1e309:
            break
        while n <= step:
            count += 1
            n = next(it, 1e309)
        result[bands[i]] = count
        i += 1
    return {k: v for k, v in sorted(result.items())}
</code></pre>
<pre><code>In [120]: linear_analyze_sample(sample, 0.5) == analyze_sample(sample, 0.5)
Out[120]: True
In [121]: %timeit analyze_sample(sample, 0.5)
179 µs ± 2.25 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [122]: %timeit linear_analyze_sample(sample, 0.5)
62.5 µs ± 380 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
</code></pre>
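<p>If floating-point behaviour at the band edges is acceptable (bisect and floor division agree on exact multiples of <code>step</code> here, but rounding can differ in the last bit), the binning reduces to one floor division per value, with no sort and no <code>bisect</code>:</p>

```python
from collections import Counter

def floor_analyze_sample(sample, step):
    # One floor division maps each value straight to its band index: O(n)
    counts = Counter(int(n // step) for n in sample)
    return {(i * step, (i + 1) * step): counts[i] for i in sorted(counts)}

result = floor_analyze_sample([0.1, 0.4, 0.6, 1.2], 0.5)
```

<p>Like the bisect version, bands with no values are simply absent; the return value is a plain sorted dict, so wrap it in <code>Counter</code> if the counter interface is needed.</p>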
|
<python><python-3.x><grouping>
|
2023-04-22 08:25:32
| 2
| 3,930
|
Ξένη Γήινος
|
76,078,448
| 3,541,631
|
Retry failed downloads (using multiple workers) and check the parent if multiple urls with the same parent fail
|
<p>I'm creating a download worker using queues.</p>
<pre><code>class Item:
    url: str
    download_path: str
    is_downloaded: bool
</code></pre>
<p>The worker code:</p>
<pre><code>SENTINEL = "END"

def download_item(to_download: Queue, downloaded: Queue):
    item: Item = to_download.get()
    while not (item == SENTINEL):
        try:
            response = request.urlopen(item.url)
        except error.HTTPError as e:
            item = to_download.get()
            continue
        item = write_to_file(item.download_path, content=response.read())
        downloaded.put(item)
        item = to_download.get()
    to_download.put(SENTINEL)
</code></pre>
<p>I want to introduce a retry mechanism and, in addition, a parent-folder check.
For example:</p>
<pre><code>bam/a - fail
ram/s - ok
bam/c - fail
bam/d - fail
</code></pre>
<p>If more than a given count (2 files) that have "bam" as parent fail, check whether the parent exists.
If the parent doesn't exist, all subsequent urls from the queue that have "bam" in them automatically fail without a download attempt.</p>
<p>The complexity comes from the fact that there are multiple workers:</p>
<pre><code>with concurrent.futures.ThreadPoolExecutor(max_workers=50) as executor:
    executor.submit(download_item, to_download_q, downloaded_q)
</code></pre>
<p>I'm looking for a solution, code or pseudo-code to approach this problem. Also I can't use third party packages in the environment.</p>
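<p>A sketch of the shared bookkeeping, using only the standard library; all names (<code>FAIL_THRESHOLD</code>, <code>record_failure</code>, the <code>parent_exists</code> probe) are hypothetical, and the lock keeps the 50 threads consistent:</p>

```python
import posixpath
import threading
from collections import defaultdict
from urllib.parse import urlparse

FAIL_THRESHOLD = 2  # more than this many failures triggers the parent check

_lock = threading.Lock()
_fail_counts = defaultdict(int)   # parent path -> failure count
_dead_parents = set()             # parents confirmed missing

def parent_of(url):
    return posixpath.dirname(urlparse(url).path)

def should_skip(url):
    """Workers call this before downloading; dead parents fail instantly."""
    with _lock:
        return parent_of(url) in _dead_parents

def record_failure(url, parent_exists):
    """Call from the except branch; parent_exists is the result of probing the parent."""
    parent = parent_of(url)
    with _lock:
        _fail_counts[parent] += 1
        if _fail_counts[parent] > FAIL_THRESHOLD and not parent_exists:
            _dead_parents.add(parent)

# Demo: three failures under /bam with a missing parent blacklists it
for _ in range(3):
    record_failure("http://host/bam/a", parent_exists=False)
```

<p>In the worker loop, <code>should_skip(item.url)</code> is checked before <code>urlopen</code>, and <code>record_failure(item.url, ...)</code> is called inside the <code>except error.HTTPError</code> branch; the actual existence probe (e.g. a HEAD request to the parent URL) is left out, since it depends on the server.</p>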
|
<python><multithreading><python-3.8>
|
2023-04-22 07:13:51
| 0
| 4,028
|
user3541631
|
76,078,303
| 18,349,319
|
async sqlalchemy 'NoneType' object has no attribute 'send'
|
<p>I have a problem with async sqlalchemy. I am trying to create a record in my table in the database but it is not being created.</p>
<p>database connection</p>
<pre><code>from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.orm import sessionmaker
from contextlib import asynccontextmanager
DATABASE_URL = "postgresql+asyncpg://postgres:postgres@localhost:5432/asyncalchemystudy"
engine = create_async_engine(DATABASE_URL, pool_size=20, max_overflow=0)
Base = declarative_base()
async_session = sessionmaker(
engine, class_=AsyncSession, expire_on_commit=False, autocommit=False, autoflush=False
)
@asynccontextmanager
async def get_session() -> AsyncSession:
    async with async_session() as session:
        yield session

async def init_models():
    async with engine.begin() as conn:
        # await conn.run_sync(Base.metadata.drop_all)
        await conn.run_sync(Base.metadata.create_all)
</code></pre>
<p>sqlalchemy model:</p>
<pre><code>class users(Base):
    __tablename__ = "users"
    password = Column(TEXT)
    username = Column(TEXT)
    id = Column(Integer, primary_key=True)
</code></pre>
<p>Code for create user</p>
<pre><code>async def create_user(session: AsyncSession, username: str, password: str) -> users | None:
    user = users(username=username, password=password)
    try:
        session.add(user)
        await session.commit()  # <- problem here
        await session.refresh(user)
        return user
    except Exception as ex:
        print(ex)
        await session.rollback()
</code></pre>
<p>After which the created user is used in the aiohttp session:</p>
<pre><code>async with get_session() as sql_session:
    task = await queries.create_user(sql_session, username, password)
    async with aiohttp.ClientSession() as session:
</code></pre>
<p>commit exception:</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'send'
</code></pre>
<p>Traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Python\Lib\asyncio\base_events.py", line 761, in call_soon
self._check_closed()
File "C:\Python\Lib\asyncio\base_events.py", line 519, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\user\PycharmProjects\pythonProject\main.py", line 68, in <module>
asyncio.run(telegrator_parser())
File "C:\Python\Lib\asyncio\runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Python\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\user\PycharmProjects\pythonProject\main.py", line 40, in telegrator_parser
task = await queries.create_task(sql_session)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\PycharmProjects\pythonProject\sql\queries.py", line 23, in create_task
raise ex
File "C:\Users\user\PycharmProjects\pythonProject\sql\queries.py", line 18, in create_task
await session.commit()
File "C:\Python\Lib\site-packages\sqlalchemy\ext\asyncio\session.py", line 578, in commit
return await greenlet_spawn(self.sync_session.commit)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Lib\site-packages\sqlalchemy\util\_concurrency_py3k.py", line 134, in greenlet_spawn
result = context.throw(*sys.exc_info())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Lib\site-packages\sqlalchemy\orm\session.py", line 1431, in commit
self._transaction.commit(_to_root=self.future)
File "C:\Python\Lib\site-packages\sqlalchemy\orm\session.py", line 829, in commit
self._prepare_impl()
File "C:\Python\Lib\site-packages\sqlalchemy\orm\session.py", line 808, in _prepare_impl
self.session.flush()
File "C:\Python\Lib\site-packages\sqlalchemy\orm\session.py", line 3363, in flush
self._flush(objects)
File "C:\Python\Lib\site-packages\sqlalchemy\orm\session.py", line 3502, in _flush
with util.safe_reraise():
File "C:\Python\Lib\site-packages\sqlalchemy\util\langhelpers.py", line 70, in __exit__
compat.raise_(
File "C:\Python\Lib\site-packages\sqlalchemy\util\compat.py", line 207, in raise_
raise exception
File "C:\Python\Lib\site-packages\sqlalchemy\orm\session.py", line 3463, in _flush
flush_context.execute()
File "C:\Python\Lib\site-packages\sqlalchemy\orm\unitofwork.py", line 456, in execute
rec.execute(self)
File "C:\Python\Lib\site-packages\sqlalchemy\orm\unitofwork.py", line 630, in execute
util.preloaded.orm_persistence.save_obj(
File "C:\Python\Lib\site-packages\sqlalchemy\orm\persistence.py", line 244, in save_obj
_emit_insert_statements(
File "C:\Python\Lib\site-packages\sqlalchemy\orm\persistence.py", line 1237, in _emit_insert_statements
result = connection._execute_20(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Lib\site-packages\sqlalchemy\engine\base.py", line 1620, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Lib\site-packages\sqlalchemy\sql\elements.py", line 325, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Lib\site-packages\sqlalchemy\engine\base.py", line 1487, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Lib\site-packages\sqlalchemy\engine\base.py", line 1851, in _execute_context
self._handle_dbapi_exception(
File "C:\Python\Lib\site-packages\sqlalchemy\engine\base.py", line 2036, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File "C:\Python\Lib\site-packages\sqlalchemy\util\compat.py", line 207, in raise_
raise exception
File "C:\Python\Lib\site-packages\sqlalchemy\engine\base.py", line 1808, in _execute_context
self.dialect.do_execute(
File "C:\Python\Lib\site-packages\sqlalchemy\engine\default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "C:\Python\Lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py", line 479, in execute
self._adapt_connection.await_(
File "C:\Python\Lib\site-packages\sqlalchemy\util\_concurrency_py3k.py", line 76, in await_only
return current.driver.switch(awaitable)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Lib\site-packages\sqlalchemy\util\_concurrency_py3k.py", line 129, in greenlet_spawn
value = await result
^^^^^^^^^^^^
File "C:\Python\Lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py", line 408, in _prepare_and_execute
await adapt_connection._start_transaction()
File "C:\Python\Lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py", line 716, in _start_transaction
self._handle_exception(error)
File "C:\Python\Lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py", line 684, in _handle_exception
raise error
File "C:\Python\Lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py", line 714, in _start_transaction
await self._transaction.start()
File "C:\Python\Lib\site-packages\asyncpg\transaction.py", line 138, in start
await self._connection.execute(query)
File "C:\Python\Lib\site-packages\asyncpg\connection.py", line 317, in execute
return await self._protocol.query(query, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "asyncpg\protocol\protocol.pyx", line 338, in query
File "asyncpg\protocol\protocol.pyx", line 331, in asyncpg.protocol.protocol.BaseProtocol.query
File "asyncpg\protocol\coreproto.pyx", line 1078, in asyncpg.protocol.protocol.CoreProtocol._simple_query
File "asyncpg\protocol\protocol.pyx", line 929, in asyncpg.protocol.protocol.BaseProtocol._write
File "C:\Python\Lib\asyncio\proactor_events.py", line 365, in write
self._loop_writing(data=bytes(data))
File "C:\Python\Lib\asyncio\proactor_events.py", line 401, in _loop_writing
self._write_fut = self._loop._proactor.send(self._sock, data)
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'send'
Process finished with exit code 1
</code></pre>
|
<python><postgresql><sqlalchemy>
|
2023-04-22 06:26:02
| 1
| 345
|
TASK
|
76,078,240
| 3,517,025
|
is there a way to force a cluster to go on top of the diagram in diagrams
|
<p>I'm using diagrams and I'm trying to force a cluster to go on top.</p>
<p>Apologies for the long code example; I didn't manage to render anything less complex that conveyed the same issue.</p>
<pre class="lang-py prettyprint-override"><code>import os
from diagrams import Cluster, Diagram, Edge
from diagrams.onprem.client import User
with Diagram("test", show=False) as diag:
a = User("a")
b = User("b")
c = User("c")
with Cluster("THIS CLUSTER SHOULD BE ON TOP"):
d1 = User("d1")
d2 = User("d2")
d3 = User("d3")
d1 - Edge(style="invis") - d2
with Cluster("E"):
e1 = User("e1")
e2 = User("e2")
e1 >> e2
e_anchor = User("")
e_anchor >> d1
f = User("f")
with Cluster("G"):
with Cluster("H"):
g1 = User("g1")
g2 = User("g2")
g3 = User("g3")
g_anchor = User("")
g_anchor >> d2
g4 = User("g4")
g5 = User("g5")
g1 >> g2
g2 >> g3
g3 >> g5
g4 >> g5
g6 = User("g6")
with Cluster("I"):
i1 = User("i1")
i2 = User("i2")
i3 = User("i3")
Cs = [i1, i2, i3]
with Cluster("J"):
j1 = User("j1")
k = User("k")
c << d3
k >> d3
a >> b
b >> c
c >> e1
e2 >> f
f >> g1
f >> g4
g5 >> g6
g6 >> Cs
for ccc in Cs:
ccc >> j1
j1 >> k
</code></pre>
<p>Which renders:
<a href="https://i.sstatic.net/piWPY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/piWPY.png" alt="enter image description here" /></a></p>
<p>Note how, if the highlighted cluster were forced to the top/bottom of the diagram, the edges to/from it would pollute it much less.</p>
<p><a href="https://i.sstatic.net/EltOr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EltOr.png" alt="enter image description here" /></a></p>
<p>Any ideas? ChatGPT4 hallucinates this to no end...</p>
|
<python><graphviz><diagram>
|
2023-04-22 06:01:26
| 2
| 5,409
|
Joey Baruch
|
76,078,003
| 252,060
|
Regex works on regextester, but not python
|
<p>I tried this</p>
<pre><code>
pattern_time = (
r"\b\d{1,2}([: .]?)(\d{2})?(\s?)((P|p)?)(.?)((M|m)?)(.?)((next|this)?)(\s?)((tomorrow|today|day|evening|morning|(mid?)night|((after|before)?)(noon))?)\b"
)
test_string = "The meeting is at 3pm today or 5 tomorrow or 7 this afternoon or 00:00 midnight. Let's meet at 11.30 p.M. We can also do 8:45 pm or 1200 hr or 00hr."
matches = re.findall(pattern_time, test_string, re.IGNORECASE)
print(matches)
</code></pre>
<p>and obtained in python</p>
<pre><code>[('', '', '', 'p', 'p', 'm', '', '', ' ', '', '', '', 'today', 'today', '', '', '', ''), (' ', '', '', '', '', '', '', '', '', '', '', '', 'tomorrow', 'tomorrow', '', '', '', ''), (' ', '', '', '', '', '', '', '', '', 'this', 'this', ' ', 'afternoon', 'afternoon', '', 'after', 'after', 'noon'), (':', '00', ' ', '', '', '', '', '', '', '', '', '', 'midnight', 'midnight', 'mid', '', '', ''), ('.', '30', ' ', 'p', 'p', '.', 'M', 'M', '.', '', '', ' ', '', '', '', '', '', ''), (':', '45', ' ', 'p', 'p', 'm', '', '', ' ', '', '', '', '', '', '', '', '', ''), ('', '00', ' ', '', '', 'h', '', '', 'r', '', '', ' ', '', '', '', '', '', ''), ('', '', '', '', '', 'h', '', '', 'r', '', '', '', '', '', '', '', '', '')]
</code></pre>
<p>However, regextester shows that the pattern matches correctly.</p>
<p>How can we rectify this in Python?</p>
<p>Thank you for your help.</p>
<p><a href="https://i.sstatic.net/x3SNx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x3SNx.png" alt="enter image description here" /></a></p>
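The discrepancy is that `re.findall` returns tuples of the *captured groups* whenever the pattern contains capturing groups, while regex testers highlight the full match (group 0). Using `re.finditer` with `m.group(0)`, or replacing capturing groups with non-capturing `(?:...)` groups, recovers the full match text. A sketch with a deliberately simplified pattern (not the asker's original one):

```python
import re

# re.findall returns tuples of *captured groups* when the pattern has
# capturing groups; regex testers highlight the full match (group 0).
# A deliberately simplified time pattern (NOT the original one):
pattern = r"\b\d{1,2}[: .]?\d{0,2}\s?(?:[Pp]\.?[Mm]\.?)?"
text = "The meeting is at 3pm today or 11.30 p.M. We can also do 8:45 pm tomorrow."

# finditer + group(0) gives the full match text for each hit
matches = [m.group(0) for m in re.finditer(pattern, text)]
print(matches)  # ['3pm', '11.30 p.M.', '8:45 pm']
```

The same effect can be achieved with `findall` alone by making every group non-capturing, since `findall` falls back to returning whole matches when no capturing groups remain.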
|
<python><regex>
|
2023-04-22 04:17:52
| 1
| 921
|
Ursa Major
|
76,077,950
| 5,583,772
|
How do you copy a dataframe in polars?
|
<p>In polars, what is the way to make a copy of a dataframe? In pandas it would be:</p>
<pre><code>df_copy = df.copy()
</code></pre>
<p>But what is the syntax for polars?</p>
|
<python><dataframe><python-polars>
|
2023-04-22 03:55:15
| 2
| 556
|
Paul Fleming
|
76,077,814
| 1,938,410
|
The simplest and most pythonic way to test if a variable is between two other variables?
|
<p>I have two variables <code>n1</code> and <code>n2</code>, without knowing which is bigger. I want to test if <code>n</code> is between <code>n1</code> and <code>n2</code>.</p>
<p>While I can definitely test <code>if min(n1, n2) < n < max(n1, n2)</code>, I am just curious if there is any more pythonic way to do it.</p>
<p>I am thinking something like <code>if n in range(sorted(n1, n2))</code>, but this does not work because I can't put a list in <code>range( )</code>.</p>
<p>I also found that there is a <code>between()</code> function in <code>pandas</code>, but it is far more powerful than what I need.</p>
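For comparison, the `sorted`-based idea from the question can be made to work by unpacking the sorted pair rather than passing it to `range` (a minimal sketch; whether it is more pythonic than `min`/`max` is a matter of taste):

```python
def is_between(n, n1, n2):
    """Return True if n lies between n1 and n2, whichever is larger."""
    lo, hi = sorted((n1, n2))  # sorted() takes one iterable, hence the tuple
    return lo <= n <= hi

print(is_between(5, 10, 1))   # True
print(is_between(11, 10, 1))  # False
```

Note that `range()` would only work for integers and excludes its endpoint, so the chained comparison is the more general choice.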
|
<python>
|
2023-04-22 03:03:56
| 3
| 507
|
SamTest
|
76,077,754
| 11,737,958
|
How to pack a button in custom area of window
|
<p>I am new to Python and I am using Python 3.10. I created a window with two buttons using tkinter. I need to place the buttons in a custom area rather than on any of the sides tkinter chooses. I don't want to use the "place" method. How can I place the buttons at specific coordinates using pack?</p>
<p>Thanks in advance!</p>
<p><a href="https://i.sstatic.net/TBAZq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TBAZq.png" alt="enter image description here" /></a></p>
<pre><code>import tkinter as tk
from tkinter import *
core = Tk()
core.geometry("640x360")
core.title("txteditor")
diffbtn = Button(text = "Check Difference")
# If i use place, then i can choose a custom area in the window
# diffbtn.place(x = 200, y = 5, height = 30, width = 150)
diffbtn.pack(padx = 200, pady = 20)
diffbtn1 = Button(text = "Check Difference")
# diffbtn.place(x = 200, y = 5, height = 30, width = 150)
diffbtn1.pack(side = TOP,pady = 20)
core.mainloop()
</code></pre>
<p>Edit:</p>
<p>When the tkinter window is expanded to the right side, the button does not move with the window when using place. How can I pin the button to the window during expansion or collapse of the window?</p>
<p><a href="https://i.sstatic.net/XCzpE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XCzpE.png" alt="enter image description here" /></a></p>
|
<python><tkinter>
|
2023-04-22 02:35:24
| 1
| 362
|
Kishan
|
76,077,714
| 10,173,383
|
install_python() raises Error in setwd(root) : cannot change working directory
|
<p>I was following the code at <a href="https://tensorflow.rstudio.com/install/" rel="nofollow noreferrer">https://tensorflow.rstudio.com/install/</a>. Then I encountered the following error that stopped me from continue.</p>
<pre><code>> library(reticulate)
> path_to_python <- install_python()
Error in setwd(root) : cannot change working directory
</code></pre>
<p>Here is my <code>sessionInfo()</code>.</p>
<pre><code>> sessionInfo()
R version 4.3.0 (2023-04-21 ucrt)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 11 x64 (build 22621)
Matrix products: default
locale:
[1] LC_COLLATE=English_United States.utf8
[2] LC_CTYPE=English_United States.utf8
[3] LC_MONETARY=English_United States.utf8
[4] LC_NUMERIC=C
[5] LC_TIME=English_United States.utf8
time zone: America/Chicago
tzcode source: internal
attached base packages:
[1] stats graphics grDevices utils datasets methods
[7] base
other attached packages:
[1] reticulate_1.28
loaded via a namespace (and not attached):
[1] compiler_4.3.0 Matrix_1.5-4 tools_4.3.0 rappdirs_0.3.3
[5] Rcpp_1.0.10 grid_4.3.0 jsonlite_1.8.4 png_0.1-8
[9] lattice_0.21-8
</code></pre>
<p>How can I fix it and move on? Thanks.</p>
<hr />
<p>Update on April 23:</p>
<p>The question was posted on <a href="https://github.com/rstudio/tensorflow.rstudio.com/issues/56" rel="nofollow noreferrer">https://github.com/rstudio/tensorflow.rstudio.com/issues/56</a>.</p>
|
<python><r><reticulate>
|
2023-04-22 02:19:46
| 0
| 435
|
J.Z.
|
76,077,605
| 754,136
|
Handling exceptions and CTRL+C in python multiprocessing and multithread
|
<p>My code does the following:</p>
<ul>
<li>Starts processes to collect data</li>
<li>Starts processes to test model</li>
<li>One thread takes care of aggregating the data to train the model</li>
<li>One thread aggregates the test results</li>
</ul>
<p>I need to handle exceptions such that if the training thread fails everything terminates, but if the testing thread fails training continues.</p>
<p>Below is a MWE (there is no model, but how I manage processes and threads is the same).</p>
<p>It works, but I cannot catch CTRL+C. If I press it, I get a long stream of messages, but the program keeps running.<br />
I have read that I should set the main thread as <code>daemon = True</code> but nothing changes. I have tried placing a <code>try except</code> around the main call to <code>run</code> to no avail.</p>
<p>How can I gracefully end the program with CTRL+C?</p>
<p>And other questions:</p>
<ul>
<li>Is there a better way to end the program when the training thread ends? I am using the variable <code>terminate</code> since <code>daemon = True</code> doesn't work. I have read about using <code>concurrent.future</code> but I cannot find a simple guide for that.</li>
<li>Is it better to use one pool, as in my example, or two (one for the collect processes, one for the training processes)? Would there be any difference?</li>
<li>How can I make each thread to catch the right exception? My code raises exceptions in <code>collect</code> and <code>test</code>, which are run by the processes. However, an exception raised by <code>collect</code> must be caught by <code>run_train</code>, while an exception raised by <code>test</code> must be caught by <code>run_test</code>.</li>
</ul>
<pre><code>import threading
import logging
import traceback
import torch
import time
from torch import multiprocessing as mp
try:
mp.set_start_method('spawn')
except:
pass
shandle = logging.StreamHandler()
log = logging.getLogger('rl')
log.propagate = False
log.addHandler(shandle)
log.setLevel(logging.INFO)
def collect(id, queue, data_collect):
log.info('Collect %i started ...', id)
try:
while True:
idx = queue.get()
if idx is None:
break
data_collect[idx] = torch.rand(1)
queue.task_done()
# actually do something meaningful
except Exception as e:
log.error('Exception in collect process %i', id)
traceback.print_exc()
raise e
def test(id, queue, data_test):
log.info('Test %i started ...', id)
try:
while True:
idx = queue.get()
if idx is None:
break
data_test[idx] = torch.rand(1)
queue.task_done()
# actually do something meaningful
except Exception as e:
log.error('Exception in test process %i', id)
traceback.print_exc()
raise e
def run():
steps = 0
num_collect_procs = 3
num_test_procs = 2
max_steps = 10
terminate = False
data_collect = torch.zeros(num_collect_procs).share_memory_()
data_test = torch.zeros(num_test_procs).share_memory_()
manager = mp.Manager()
pool = mp.Pool()
collect_queue = manager.JoinableQueue()
test_queue = manager.JoinableQueue()
# Start collection and testing processes
for i in range(num_collect_procs):
pool.apply_async(collect, args=(i, collect_queue, data_collect))
for i in range(num_test_procs):
pool.apply_async(test, args=(i, test_queue, data_test))
# Define target function for the learning thread
def run_train():
nonlocal steps, terminate
log.info('Training thread started ...')
while steps < max_steps and not terminate:
try:
for idx in range(num_collect_procs):
collect_queue.put(idx)
collect_queue.join()
time.sleep(0.1)
log.info('Training, %i %f', steps, data_collect.sum())
steps += 1
except:
terminate = True
for idx in range(num_collect_procs):
collect_queue.put(None)
break
log.info('Training done')
# Define target function for the testing thread
def run_test():
nonlocal steps, terminate
log.info('Testing thread started ...')
while steps < max_steps and not terminate:
try:
for idx in range(num_test_procs):
test_queue.put(idx)
test_queue.join()
time.sleep(0.1)
log.info('Testing, %i %f', steps, data_test.sum())
except:
for idx in range(num_test_procs):
test_queue.put(None)
break
log.info('Testing done')
learning_thread = threading.Thread(target=run_train, name='train')
learning_thread.start()
testing_thread = threading.Thread(target=run_test, name='test')
testing_thread.start()
learning_thread.join()
testing_thread.join()
collect_queue.join()
test_queue.join()
pool.terminate()
pool.join()
if __name__ == '__main__':
run()
</code></pre>
|
<python><multithreading><exception><multiprocessing>
|
2023-04-22 01:29:38
| 2
| 5,474
|
Simon
|
76,077,561
| 469,224
|
CPython: Understanding the difference between tp_dealloc and tp_finalize
|
<p>After reading the documentation for both <a href="https://docs.python.org/3.11/c-api/typeobj.html#c.PyTypeObject.tp_dealloc" rel="nofollow noreferrer">tp_dealloc</a> and <a href="https://docs.python.org/3.11/c-api/typeobj.html#c.PyTypeObject.tp_finalize" rel="nofollow noreferrer">tp_finalize</a>, and the finalization and deallocation <a href="https://docs.python.org/3.11/extending/newtypes.html#finalization-and-de-allocation" rel="nofollow noreferrer">tutorial</a>, it's still not clear to me what to move to finalize and what to keep in dealloc. The <a href="https://peps.python.org/pep-0442/" rel="nofollow noreferrer">PEP</a> talks about implementation, and less about how to use the finalizers.</p>
<p>What I understood: the actual object freeing must be kept in dealloc (and therefore dealloc is a required method), the finalizer will only be called once, and there is a suggestion to move "Any call to a non-trivial object or API" to finalize. But what is non-trivial here?</p>
<ul>
<li>is calling C functions (that don't call back into Python) trivial?</li>
<li>is calling any CPython function non-trivial?</li>
<li>and is it safe to change the object state in the finalizer to an invalid state (violating whatever properties the object is expected to maintain during normal life), i.e. is the finalizer guaranteed to be followed by a dealloc?</li>
</ul>
<p>On a more practical basis, summing all the above: let's consider a simple type that has an attribute that (always) points to a Python object. When tearing down, should the DECREF for that attribute be done in dealloc, or in finalize?</p>
|
<python>
|
2023-04-22 01:09:09
| 1
| 373
|
iustin
|
76,077,346
| 8,954,291
|
Python semisort list of objects by attribute
|
<p>I've got a list of an object:</p>
<pre class="lang-py prettyprint-override"><code>class packet():
def __init__(self, id, data):
self.id, self.data = id, data
my_list = [packet(1,"blah"),packet(2,"blah"),packet(1,"blah"),packet(3,"blah"),packet(4,"blah")]
</code></pre>
<p>I want to extract all objects with a certain <code>id</code> (in the case of the above, I've made it 1) and paste them to the end. The solution I've come up with is:</p>
<pre class="lang-py prettyprint-override"><code>def semisort(data, id):
    return [o for o in data if o.id != id] + [o for o in data if o.id == id]
semisort(my_list, 1)
</code></pre>
<p>It works and is relatively clear. I just feel like there's a way I could use <code>sort</code> to do this that would make it more pythonic.</p>
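One way to lean on `sort`, as the question suggests, is a stable sort on a boolean key: `False` (non-matching ids) sorts before `True`, and Python's sort is stable, so the relative order within each partition is preserved. A self-contained sketch using the question's class:

```python
class packet:
    def __init__(self, id, data):
        self.id, self.data = id, data

my_list = [packet(1, "blah"), packet(2, "blah"), packet(1, "blah"),
           packet(3, "blah"), packet(4, "blah")]

def semisort(data, id):
    # False sorts before True, and Python's sort is stable,
    # so the relative order within each partition is preserved
    return sorted(data, key=lambda o: o.id == id)

result = [p.id for p in semisort(my_list, 1)]
print(result)  # [2, 3, 4, 1, 1]
```

This makes a single pass over the data instead of two comprehensions, though for readability the two-comprehension version is arguably just as clear.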
|
<python><sorting><data-partitioning>
|
2023-04-21 23:41:11
| 1
| 1,351
|
Jakob Lovern
|
76,077,335
| 13,079,519
|
Gurobi variable creation giving different name than what I input
|
<p>I am trying to figure out why my gurobi variable name is different from what I input.</p>
<p>Here is my code:</p>
<pre><code>v_dc_mfc_wc = m.addVars(total_wc, name=v_dc_mfc_weightcube, vtype=GRB.BINARY)
</code></pre>
<p>where</p>
<pre><code>v_dc_mfc_weightcube = ['v1_Pennsylvania_BKN-2']
total_wc = len(v_dc_mfc_weightcube)
</code></pre>
<p>It is a list because the variable sometimes has multiple items in it.</p>
<p>The result I am getting:</p>
<pre><code>v_dc_mfc_wc = {0: <gurobi.Var v1_Pennsylvania_BKN-2[0]>}
</code></pre>
<p>For some reason, even though the name doesn't have '[0]' at the end, the variable name keeps having it. My observation is that this happens when the list only has one item. Is there a way to avoid it?</p>
|
<python><optimization><gurobi>
|
2023-04-21 23:38:32
| 0
| 323
|
DJ-coding
|
76,077,205
| 15,525,854
|
Join inner directories with parent directory
|
<p>Given this directory structure:</p>
<pre><code>└── names
├── name1
│ └── subdir1
│ ├── subnames
│ │ └── cool
│ │ ├── another-subdir1
│ │ │ ├── file
│ │ │ │ ├── file.txt
│ │ │ ├── subnames
│ │ │ │ ├── awesome
│ │ │ │ │ └── another-subdir2
│ │ │ │ │ ├── files
│ │ │ │ │ │ ├── file.txt
│ │ │ │ │ ├── subname
│ │ │ │ │ │ └── super-awesome
│ │ │ │ │ │ └── another-subdir3
│ │ │ │ │ │ ├── file
│ │ │ │ │ │ │ ├── file.txt
│ │ │ │ └── great
│ │ │ │ └── great-subdir1
│ │ │ │ ├── random-name
│ │ │ │ │ ├── file.txt
└── name2
└── subdir2
├── subnames
│ └── nice
│ ├── file.txt
</code></pre>
<p>I am trying to iterate through the whole directory structure, but I am having a hard time making it work.
I have tried the code below, but it does not give exactly what I want:</p>
<pre><code>import os
def get_full_names(parent_dir):
names = []
for dir_name in os.scandir(parent_dir):
if dir_name.is_dir():
names.append(dir_name.name)
subname_dir = os.path.join(parent_dir, dir_name.name)
for root, dirs, files in os.walk(subname_dir):
if 'subnames' in dirs:
subnames = get_full_names(os.path.join(root, 'subnames'))
for subname in subnames:
names.append(f"{subname}-{dir_name.name}")
return names
parent_dir = '/path/to/names'
names = get_full_names(parent_dir)
print(names)
</code></pre>
<p>This code gives:</p>
<pre><code>['name1', 'cool-name1', 'great-cool-name1', 'awesome-cool-name1', 'super-awesome-awesome-cool-name1', 'super-awesome-cool-name1', 'great-name1', 'awesome-name1', 'super-awesome-awesome-name1', 'super-awesome-name1', 'name2', 'nice-name2']
</code></pre>
<p>Expected output:</p>
<pre><code>['name1', 'cool-name1', 'awesome-cool-name1', 'super-awesome-awesome-cool-name1', 'great-cool-name1', 'name2', 'nice-name2']
</code></pre>
<p>In this case it should exclude <code>'super-awesome-cool-name1', 'great-name1', 'awesome-name1', 'super-awesome-awesome-name1' 'super-awesome-name1'</code> since:</p>
<p>super-awesome is not directly a subname/cool subdirectory</p>
<p>great is not directly a names/name1 subdirectory</p>
<p>awesome is not directly a names/name1 subdirectory</p>
<p>and so on...</p>
<p>The pattern will always be <code>names/parent-names-that-will-change/any-name-that-might-change/subnames/sub-parent-name/any-name-that-might-change/subnames/another-sub-parent-name/any-name-that-might-change/last-sub-dir-name</code></p>
<p>Above example should become <code>last-sub-dir-name/another-sub-parent-name/sub-parent-name/parent-names-that-will-change</code>
Note that I have used slashes "/" just to differentiate the "-".</p>
<p>Basically, I just want to join the subnames according to its direct parent subname directory up to the root parent directory (name1 and name2).</p>
|
<python><file><filesystems>
|
2023-04-21 23:00:02
| 3
| 384
|
ayuuk ja'ay
|
76,077,166
| 6,296,626
|
Python Tkinter listbox color individual word
|
<p>From <a href="https://stackoverflow.com/questions/5348454">this answer</a> I know how to color individual items in a <code>listbox</code>. However, is it possible to color individual words/characters inside the item of a Tkinter <code>listbox</code>?</p>
<p>If yes, how to achieve that?</p>
<p>If no, is there a way to work around the limitations, such as transforming a Tkinter TextBox into a ListBox (so you can click on individual lines), while still being able to color anything and everything?</p>
|
<python><tkinter><textbox><listbox><customtkinter>
|
2023-04-21 22:45:20
| 1
| 1,479
|
Programer Beginner
|
76,077,118
| 16,363,897
|
Fast way to implement a polynomial regression on each pandas dataframe row
|
<p>I have the following pandas dataframe:</p>
<pre><code>df = pd.DataFrame({0: [11, 12, 31], 1: [6, 14, 27], 2: [11, 24, 21], 3: [1, 24, 20]})
0 1 2 3
0 11 6 11 1
1 12 14 24 24
2 31 27 21 20
</code></pre>
<p>For each row at the time, I want to implement a polynomial regression, where column names are X and row values are Y.</p>
<p>I know I can use iterrows:</p>
<pre><code>x=(df.columns).to_numpy()
for index, row in df.iterrows():
print(np.polyfit(x,row,2))
</code></pre>
<p>which produces:</p>
<pre><code>[-1.25 1.25 9.75]
[-0.5 6.1 11.1]
[ 0.75 -6.15 31.35]
</code></pre>
<p>but this can take a long time on large dataframes.
Is there a faster way to do this? Thanks</p>
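One vectorised option worth noting: `np.polyfit` accepts a 2-D `y` whose columns are independent datasets, so all rows can be fitted in a single call by transposing the frame. A sketch reproducing the example data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({0: [11, 12, 31], 1: [6, 14, 27], 2: [11, 24, 21], 3: [1, 24, 20]})
x = df.columns.to_numpy()

# np.polyfit accepts a 2-D y whose *columns* are independent datasets,
# so transposing the frame fits every row in one call
coeffs = np.polyfit(x, df.to_numpy().T, 2).T  # one coefficient row per df row
print(coeffs)
# [[-1.25   1.25   9.75]
#  [-0.5    6.1   11.1 ]
#  [ 0.75  -6.15  31.35]]
```

This replaces the Python-level `iterrows` loop with one LAPACK-backed least-squares solve, which is typically much faster on large frames.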
|
<python><pandas><numpy>
|
2023-04-21 22:30:12
| 1
| 842
|
younggotti
|
76,077,047
| 8,340,881
|
looping through each row of snowpark/pyspark dataframe and concatenating list of dataframes
|
<p>I have a PySpark/Snowpark dataframe called df_meta. I want to loop through each row of the <code>df_meta</code> dataframe, create a new dataframe based on the query, and append it to an empty list called new_dfs. Once the looping is complete, I want to concatenate that list of dataframes. I have the code below, which works fine when all dataframes in the list have the same columns, but it fails when, say, one dataframe returns fewer or more columns.</p>
<pre><code>new_dfs = []
for row in df_meta:
database = row["db"]
schema = row["schema"]
table = row["src_tb_nm"]
col_name = row["src_col_nm"]
query = f"""SELECT {col_name} FROM {database}.{schema}.{table}"""
new_df = session.sql(query)
new_dfs.append(new_df)
df_out = reduce(DataFrame.unionAll, new_dfs)
</code></pre>
|
<python><apache-spark><pyspark><snowflake-cloud-data-platform>
|
2023-04-21 22:15:28
| 1
| 1,255
|
Shanoo
|
76,076,846
| 13,448,665
|
How can I use the Python debugger (with breakpoints) while using the interactive mode in VS Code?
|
<p>Here is a simplified example to show the problem I have in VS Code:</p>
<pre class="lang-py prettyprint-override"><code>#%% cell 1
import time
def myfunc():
a = 5 # <-- Put breakpoint here
b = 0
return a/b
#%% cell 2
t = time.time()
print("started at", t)
myfunc()
</code></pre>
<p>I put a breakpoint at <code>a = 5</code> then I press <code>SHIFT+ENTER</code> twice to execute both cells.
The interactive window shows the print, but then show the exception: <code>ZeroDivisionError: division by zero</code> instead of activating my breakpoint.</p>
<p><strong>How can I activate the debugger without losing the current variables</strong> (like <code>t</code>) from the interactive session?</p>
<p>I want to make it stop at the breakpoint the next time I press <code>SHIFT+ENTER</code> and be able to step through <code>myfunc</code>.</p>
<p><strong>EDIT:</strong></p>
<p>As @MingJie-MSFT pointed out, I can use <code>CTRL+SHIFT+P</code> > <code>Jupyter: Debug Current Cell</code> command to enable the debugger.</p>
<p>Is it normal that when I invoke this command, I immediately see line 3445 of interactiveshell.py, instead of breaking into my code?
<a href="https://i.sstatic.net/ZnUmy.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZnUmy.jpg" alt="Internal debugger code" /></a></p>
|
<python><python-3.x><visual-studio-code><vscode-debugger>
|
2023-04-21 21:35:55
| 1
| 361
|
BinaryMonkey
|
76,076,783
| 651,174
|
Set operations for filter values
|
<p>Let's say I have a list column and I want to apply a filter to it. Here are the values in the column:</p>
<pre><code>row People
--------------------
1 ['Veli']
2 ['David']
3 ['David', 'Veli']
4 ['George']
5 ['David', 'George']
</code></pre>
<p>The user selects the values <code>Veli</code> and <code>David</code> in the options list of what they want to check for. Then, here are the various filter operations they can do with the corresponding rows that should be returned:</p>
<ol>
<li>People contains any value from list --> 1, 2, 3, 5</li>
<li>People contains no value from list --> 4</li>
<li>People contains all values from list --> 3</li>
<li>People contains only values from list --> 1, 2, 3</li>
</ol>
<p>How would these four operations be expressed with sets? For example, the first one might be:</p>
<pre><code>rows = (set(['Veli']), set(['David']), set(['Veli', 'David']), set(['George']), set(['George', 'David']))
criteria = {'David', 'Veli'}
filter(lambda row: row&criteria, rows)
# (set(['Veli']), set(['David']), set(['Veli', 'David']), set(['George', 'David']))
</code></pre>
<p>How would I express the other three operations?</p>
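A sketch of all four operations using set operators (intersection, `isdisjoint`, and the subset comparisons), assuming the rows and criteria from the question:

```python
rows = [{'Veli'}, {'David'}, {'David', 'Veli'}, {'George'}, {'David', 'George'}]
criteria = {'David', 'Veli'}

# 1. contains any value from the list: intersection is non-empty
any_match = [r for r in rows if r & criteria]
# 2. contains no value from the list: the sets are disjoint
no_match = [r for r in rows if r.isdisjoint(criteria)]
# 3. contains all values from the list: criteria is a subset of the row
all_match = [r for r in rows if criteria <= r]
# 4. contains only values from the list: the row is a subset of criteria
only_match = [r for r in rows if r <= criteria]

print(any_match)   # rows 1, 2, 3, 5
print(no_match)    # row 4
print(all_match)   # row 3
print(only_match)  # rows 1, 2, 3
```

Note that `r.isdisjoint(criteria)` is equivalent to `not (r & criteria)` but avoids building the intermediate intersection set.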
|
<python><set>
|
2023-04-21 21:24:03
| 2
| 112,064
|
David542
|
76,076,695
| 6,218,849
|
Given events with start and end timex, how merge events closer than a threshold?
|
<p>I have a data frame <code>capsules</code> that hold events localized in time:</p>
<pre><code> Start End
0 2022-01-05 04:35:00 2022-01-05 04:45:00
1 2022-02-04 21:05:00 2022-02-04 21:15:00
2 2022-03-09 04:35:00 2022-03-09 04:45:00
3 2022-03-09 04:35:00 2022-03-09 04:45:00
4 2022-03-09 20:15:00 2022-03-09 20:25:00
5 2022-03-09 20:25:00 2022-03-09 21:15:00
6 2022-04-27 17:05:00 2022-04-27 17:25:00
7 2022-04-27 17:05:00 2022-04-27 17:15:00
8 2022-04-27 21:05:00 2022-04-27 21:55:00
9 2022-04-27 21:05:00 2022-04-27 21:15:00
10 2022-05-06 12:45:00 2022-05-06 12:55:00
11 2022-05-06 12:45:00 2022-05-06 12:55:00
12 2022-05-06 13:15:00 2022-05-06 13:25:00
13 2022-05-06 13:45:00 2022-05-06 13:55:00
14 2022-05-06 16:50:00 2022-05-06 16:50:00
15 2022-05-06 17:35:00 2022-05-06 17:55:00
16 2022-05-06 22:45:00 2022-05-06 22:55:00
17 2022-05-07 00:45:00 2022-05-07 00:55:00
18 2022-05-07 02:15:00 2022-05-07 02:25:00
19 2022-06-21 06:25:00 2022-06-21 06:35:00
20 2022-06-21 19:25:00 2022-06-21 19:35:00
21 2022-06-21 21:35:00 2022-06-21 21:45:00
22 2022-06-22 15:25:00 2022-06-22 15:55:00
23 2022-06-22 16:15:00 2022-06-22 16:25:00
24 2022-06-22 18:30:00 2022-06-22 18:55:00
25 2022-06-22 19:25:00 2022-06-22 19:35:00
26 2022-06-22 21:05:00 2022-06-22 21:15:00
27 2022-06-23 07:35:00 2022-06-23 07:45:00
28 2022-06-23 07:35:00 2022-06-23 07:45:00
29 2022-07-31 18:35:00 2022-07-31 18:45:00
30 2022-07-31 19:05:00 2022-07-31 19:15:00
31 2022-07-31 19:25:00 2022-07-31 19:35:00
32 2022-07-31 22:00:00 2022-07-31 22:00:00
33 2022-07-31 23:55:00 2022-08-01 00:05:00
34 2022-08-03 23:35:00 2022-08-03 23:45:00
35 2022-08-06 07:20:00 2022-08-06 07:20:00
36 2022-08-08 04:35:00 2022-08-08 04:40:00
37 2022-10-17 12:05:00 2022-10-17 12:15:00
38 2022-10-21 19:05:00 2022-10-21 19:15:00
39 2022-10-22 17:35:00 2022-10-22 17:45:00
40 2022-10-23 07:25:00 2022-10-23 07:35:00
41 2022-11-01 18:25:00 2022-11-01 18:45:00
42 2022-11-01 18:25:00 2022-11-01 18:35:00
43 2022-11-01 23:05:00 2022-11-01 23:15:00
44 2022-11-01 23:05:00 2022-11-01 23:25:00
45 2022-11-02 02:35:00 2022-11-02 03:25:00
46 2022-11-02 03:15:00 2022-11-02 03:25:00
47 2022-11-30 23:45:00 2022-11-30 23:55:00
48 2022-11-30 23:45:00 2022-11-30 23:55:00
49 2022-12-01 00:15:00 2022-12-01 00:35:00
50 2022-12-01 00:55:00 2022-12-01 01:05:00
51 2022-12-01 01:15:00 2022-12-01 01:25:00
52 2022-12-01 03:15:00 2022-12-01 03:25:00
53 2022-12-01 03:35:00 2022-12-01 03:45:00
54 2022-12-01 03:45:00 2022-12-01 03:55:00
55 2022-12-01 04:35:00 2022-12-01 05:15:00
56 2022-12-01 05:25:00 2022-12-01 05:35:00
57 2022-12-01 22:05:00 2022-12-01 22:15:00
58 2022-12-01 23:05:00 2022-12-01 23:15:00
59 2022-12-09 07:45:00 2022-12-09 07:55:00
60 2022-12-09 08:05:00 2022-12-09 08:15:00
61 2022-12-09 08:05:00 2022-12-09 08:15:00
62 2022-12-11 15:15:00 2022-12-11 15:35:00
</code></pre>
<p>Some of these overlap, or are closer than 10 minutes apart. What I want to achieve is to merge these events into a single event, so that a new <code>DataFrame</code> is generated where no event overlaps with any adjacent event.</p>
<p>I try to identify which time window (capsule) should be merged with the next, but how do I actually merge them?</p>
<pre class="lang-py prettyprint-override"><code>capsules["Gap to Next"] = -(capsules.End - capsules.Start.shift(-1)) / pd.Timedelta(minutes=1)
capsules["Merge with Next"] = capsules["Gap to Next"].abs() <= THRS_MERGE
</code></pre>
<p>Executable code with a binder badge for this example can be found at this gist: <a href="https://gist.github.com/Brakjen/92b7196eed941f093fcafc0a9f4e8ef9" rel="nofollow noreferrer">https://gist.github.com/Brakjen/92b7196eed941f093fcafc0a9f4e8ef9</a></p>
<p><a href="https://mybinder.org/v2/gist/Brakjen/92b7196eed941f093fcafc0a9f4e8ef9/HEAD" rel="nofollow noreferrer"><img src="https://mybinder.org/badge_logo.svg" alt="Binder" /></a></p>
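One common pattern for this: sort by `Start`, flag rows whose gap from the running maximum `End` exceeds the threshold, and `cumsum` those flags into group ids that `groupby` can aggregate. A small self-contained sketch (using a reduced, hypothetical subset of the data, not the full frame above):

```python
import pandas as pd

THRS_MERGE = 10  # minutes; same threshold the question uses

# A reduced, hypothetical subset of the capsules data
capsules = pd.DataFrame({
    "Start": pd.to_datetime(["2022-03-09 04:35", "2022-03-09 04:40",
                             "2022-03-09 20:15", "2022-03-09 20:25"]),
    "End":   pd.to_datetime(["2022-03-09 04:45", "2022-03-09 04:50",
                             "2022-03-09 20:25", "2022-03-09 21:15"]),
}).sort_values("Start").reset_index(drop=True)

# A new group starts whenever the gap from the running maximum End so far
# to the next Start exceeds the threshold; cumsum turns those break points
# into group ids
gap = capsules["Start"] - capsules["End"].shift().cummax()
group = (gap > pd.Timedelta(minutes=THRS_MERGE)).cumsum()

merged = capsules.groupby(group).agg(Start=("Start", "min"), End=("End", "max"))
print(merged)  # two rows: 04:35-04:50 and 20:15-21:15
```

The `cummax` on the shifted `End` handles the case where one long event fully contains a later, shorter one, which a plain `shift()` comparison would miss.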
|
<python><pandas>
|
2023-04-21 21:04:52
| 1
| 710
|
Yoda
|
76,076,610
| 16,179,502
|
Elastic Beanstalk ModuleNotFoundError Flask App
|
<p>I am trying to deploy my flask app to EB. My current structure is as follows:</p>
<pre><code>requirements.txt
backend/
__init__.py
application.py
configurations/
__init__.py
</code></pre>
<p>With the core part of my <code>application.py</code> looking something like</p>
<pre><code>from flask import Flask
from backend import configurations
application = Flask(__name__)
</code></pre>
<p>I have also set the <code>WSGI</code> path within beanstalk to point to my <code>backend/application.py</code> file as such<a href="https://i.sstatic.net/JA8hB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JA8hB.png" alt="enter image description here" /></a></p>
<p>However, my issue occurs when deploying. EB logs the following error when building which causes my app to be degraded</p>
<pre><code>File "/usr/lib64/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'backend'
</code></pre>
<p>It looks like a problem with module imports, but I'm not sure where to go from here, since running something like <code>FLASK_APP=backend.application python3 -m flask run</code> works locally.</p>
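<p>A sketch of the underlying mechanics (the directory layout is assumed from the question): <code>import backend</code> only succeeds when the <em>parent</em> of the <code>backend/</code> package is on <code>sys.path</code>. Locally, <code>flask run</code> is launched from the project root, so the parent is on the path; the Elastic Beanstalk WSGI process may start with a different path, so the same import fails. A common workaround is to make sure the project root ends up on the path, for example via a thin entry-point module at the repository root or a <code>PYTHONPATH</code> environment property.</p>

```python
import importlib
import os
import sys
import tempfile

# Recreate the layout from the question in a temp directory.
project_root = tempfile.mkdtemp()
os.makedirs(os.path.join(project_root, "backend"))
open(os.path.join(project_root, "backend", "__init__.py"), "w").close()

# Without the project root on sys.path, the package is invisible.
try:
    importlib.import_module("backend")
    found_before = True
except ModuleNotFoundError:
    found_before = False

# With the project root on sys.path, the import succeeds.
sys.path.insert(0, project_root)
backend = importlib.import_module("backend")
print(found_before, backend.__file__)
```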
|
<python><flask><amazon-elastic-beanstalk>
|
2023-04-21 20:48:19
| 0
| 349
|
Malice
|
76,076,547
| 7,384,676
|
Glue Spark Converting string to datetime - Unable to create Parquet converter for data type "timestamp" whose Parquet type is optional binary
|
<p>I am trying to read in partitioned parquet files and convert a string column in this format, <code>2023-12-10T11:00:00.826+0000</code>, to a timestamp. I am using this command:</p>
<pre><code>df = df.withColumn(
"etaTs",
to_timestamp(col("etaTs"), "yyyy-MM-dd'T'HH:mm:ss.SSSZ")
)
</code></pre>
<p>There are nulls in some of the parquet partitions. I keep getting this error:</p>
<pre><code>An error occurred while calling o119.collectToPython. Unable to create Parquet converter for data type "timestamp" whose Parquet type is optional binary etaTs (UTF8)
</code></pre>
<p>I have logged the types of all of these columns and they are all strings. How do I convert these strings to timestamps, and why am I getting this error?</p>
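<p>This error usually means the Parquet partitions disagree: some files physically store the column as a real timestamp while others store it as a string, so the merged schema says <code>timestamp</code> but a reader hits <code>binary (UTF8)</code>. A common workaround (an assumption here, not confirmed by the post) is to read with an explicit schema that declares the column as <code>StringType</code> — e.g. <code>spark.read.schema(schema).parquet(path)</code> — and only then apply <code>to_timestamp</code>. The format string itself is not the problem: the <code>+0000</code> offset parses cleanly, as this plain-Python check of the equivalent pattern shows.</p>

```python
from datetime import datetime, timezone

# Plain-Python equivalent of the "yyyy-MM-dd'T'HH:mm:ss.SSSZ" pattern,
# applied to the sample value from the question.
s = "2023-12-10T11:00:00.826+0000"
dt = datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f%z")
print(dt.isoformat())
```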
|
<python><apache-spark><pyspark><apache-spark-sql><aws-glue>
|
2023-04-21 20:36:08
| 1
| 3,634
|
Ian Wesley
|
76,076,494
| 3,380,902
|
apply custom color scale to pydeck h3hexagon layer
|
<p>I am using pydeck to render a visualization using spatial data. I was hoping to use a custom color scale and apply the gradient to the hexagons based on counts.</p>
<p>here's some <code>json</code> data:</p>
<pre><code>[{"count": "12", "hexIds": ["82beeffffffffff"]}, {"count": "35", "hexIds": ["82be77fffffffff"]}, {"count": "51", "hexIds": ["82b917fffffffff"]}, {"count": "32", "hexIds": ["82bf4ffffffffff"]}, {"count": "93", "hexIds": ["82be67fffffffff"]}, {"count": "51", "hexIds": ["82c997fffffffff"]}, {"count": "13", "hexIds": ["82be5ffffffffff"]}, {"count": "11", "hexIds": ["82bed7fffffffff"]}, {"count": "52", "hexIds": ["82be47fffffffff"]}, {"count": "9", "hexIds": ["82c987fffffffff"]}, {"count": "13", "hexIds": ["82b9a7fffffffff"]}, {"count": "26", "hexIds": ["82a737fffffffff"]}, {"count": "38", "hexIds": ["82be8ffffffffff"]}, {"count": "3", "hexIds": ["829d77fffffffff"]}, {"count": "85", "hexIds": ["82be0ffffffffff"]}, {"count": "12", "hexIds": ["82b9b7fffffffff"]}, {"count": "23", "hexIds": ["82be6ffffffffff"]}, {"count": "2", "hexIds": ["82b84ffffffffff"]}, {"count": "6", "hexIds": ["829d4ffffffffff"]}, {"count": "6", "hexIds": ["82b85ffffffffff"]}, {"count": "7", "hexIds": ["82bec7fffffffff"]}, {"count": "32", "hexIds": ["82be57fffffffff"]}, {"count": "2", "hexIds": ["82a7affffffffff"]}, {"count": "30", "hexIds": ["82a727fffffffff"]}, {"count": "6", "hexIds": ["82a787fffffffff"]}, {"count": "21", "hexIds": ["82bee7fffffffff"]}, {"count": "10", "hexIds": ["82b847fffffffff"]}, {"count": "5", "hexIds": ["82a617fffffffff"]}, {"count": "6", "hexIds": ["82a6a7fffffffff"]}, {"count": "7", "hexIds": ["8294effffffffff"]}, {"count": "17", "hexIds": ["82bef7fffffffff"]}, {"count": "1", "hexIds": ["8294e7fffffffff"]}, {"count": "6", "hexIds": ["82a78ffffffffff"]}, {"count": "13", "hexIds": ["82a79ffffffffff"]}, {"count": "3", "hexIds": ["82b877fffffffff"]}, {"count": "5", "hexIds": ["82a797fffffffff"]}, {"count": "28", "hexIds": ["82be4ffffffffff"]}, {"count": "7", "hexIds": ["829487fffffffff"]}, {"count": "4", "hexIds": ["82bedffffffffff"]}, {"count": "2", "hexIds": ["82945ffffffffff"]}, {"count": "10", "hexIds": ["82b997fffffffff"]}, {"count": "4", "hexIds": 
["82b9affffffffff"]}, {"count": "9", "hexIds": ["829c27fffffffff"]}, {"count": "16", "hexIds": ["82a707fffffffff"]}, {"count": "3", "hexIds": ["829d07fffffffff"]}, {"count": "8", "hexIds": ["82c9b7fffffffff"]}, {"count": "2", "hexIds": ["8294affffffffff"]}, {"count": "5", "hexIds": ["829d5ffffffffff"]}, {"count": "5", "hexIds": ["829d57fffffffff"]}, {"count": "1", "hexIds": ["82b80ffffffffff"]}, {"count": "11", "hexIds": ["82beaffffffffff"]}, {"count": "2", "hexIds": ["82b8b7fffffffff"]}, {"count": "1", "hexIds": ["829497fffffffff"]}, {"count": "7", "hexIds": ["829d27fffffffff"]}, {"count": "2", "hexIds": ["82a7a7fffffffff"]}, {"count": "6", "hexIds": ["82b887fffffffff"]}, {"count": "7", "hexIds": ["829457fffffffff"]}, {"count": "4", "hexIds": ["82c99ffffffffff"]}, {"count": "2", "hexIds": ["8294cffffffffff"]}, {"count": "4", "hexIds": ["82b88ffffffffff"]}, {"count": "3", "hexIds": ["82b98ffffffffff"]}, {"count": "7", "hexIds": ["82b837fffffffff"]}, {"count": "9", "hexIds": ["829d0ffffffffff"]}, {"count": "2", "hexIds": ["8294c7fffffffff"]}, {"count": "6", "hexIds": ["829d2ffffffffff"]}, {"count": "2", "hexIds": ["829d47fffffffff"]}, {"count": "3", "hexIds": ["82b867fffffffff"]}, {"count": "1", "hexIds": ["82b807fffffffff"]}, {"count": "5", "hexIds": ["82b8a7fffffffff"]}, {"count": "2", "hexIds": ["829d67fffffffff"]}, {"count": "1", "hexIds": ["82a717fffffffff"]}, {"count": "2", "hexIds": ["82b82ffffffffff"]}, {"count": "1", "hexIds": ["829c6ffffffffff"]}, {"count": "2", "hexIds": ["829c2ffffffffff"]}, {"count": "1", "hexIds": ["8294dffffffffff"]}, {"count": "1", "hexIds": ["82d897fffffffff"]}, {"count": "8", "hexIds": ["82b86ffffffffff"]}, {"count": "1", "hexIds": ["82b91ffffffffff"]}, {"count": "3", "hexIds": ["82948ffffffffff"]}, {"count": "3", "hexIds": ["829c4ffffffffff"]}, {"count": "5", "hexIds": ["82b897fffffffff"]}, {"count": "1", "hexIds": ["82b89ffffffffff"]}, {"count": "1", "hexIds": ["829c07fffffffff"]}, {"count": "1", "hexIds": ["82b937fffffffff"]}, 
{"count": "1", "hexIds": ["82949ffffffffff"]}, {"count": "1", "hexIds": ["82b99ffffffffff"]}, {"count": "1", "hexIds": ["82b987fffffffff"]}, {"count": "1", "hexIds": ["8294d7fffffffff"]}, {"count": "1", "hexIds": ["82b8dffffffffff"]}, {"count": "1", "hexIds": ["829ce7fffffffff"]}, {"count": "15", "hexIds": ["82becffffffffff"]}, {"count": "13", "hexIds": ["82be1ffffffffff"]}, {"count": "1", "hexIds": ["82b827fffffffff"]}]
</code></pre>
<pre><code>import pandas as pd
import pydeck
df = pd.read_json('aus_h3.duckgl.json')
h3_layer = pydeck.Layer(
"H3ClusterLayer",
df,
pickable=True,
stroked=True,
filled=True,
extruded=False,
get_hexagons="hexIds",
get_fill_color="[255, (1 - count / 500) * 255, 0]",
get_line_color=[255, 255, 255],
line_width_min_pixels=2,
)
view_state = pydeck.ViewState(latitude=-25.7773677126431,
longitude=135.084939479828,
zoom=4,
bearing=0,
pitch=45)
pydeck.Deck(
layers=[h3_layer],
initial_view_state=view_state,
tooltip={"text": "Density: {count}"}
).to_html("aus_h3.duckgl.html")
</code></pre>
<p>How do I specify a custom color scale instead on <code>[255, (1 - count / 500) * 255, 0]</code> in <code>get_fill_color</code> ? For example, I'd like to use 6-class color scale: <a href="https://colorbrewer2.org/#type=sequential&scheme=YlOrRd&n=6" rel="nofollow noreferrer">https://colorbrewer2.org/#type=sequential&scheme=YlOrRd&n=6</a></p>
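<p>One possible approach (a sketch; the hex codes below are assumed to be ColorBrewer's 6-class YlOrRd — verify against colorbrewer2.org): precompute an <code>[r, g, b]</code> list per row in pandas, then point the layer at that column with <code>get_fill_color="fillColor"</code>.</p>

```python
import pandas as pd

# Assumed 6-class YlOrRd palette from ColorBrewer.
YLORRD_6 = ["#ffffb2", "#fed976", "#feb24c", "#fd8d3c", "#f03b20", "#bd0026"]

def hex_to_rgb(h):
    h = h.lstrip("#")
    return [int(h[i:i + 2], 16) for i in (0, 2, 4)]

df = pd.DataFrame({"count": [12, 35, 51, 93, 1]})  # stand-in for the JSON data

# pd.cut splits the count range into six equal-width bins labelled 0..5.
bins = pd.cut(df["count"].astype(int), bins=len(YLORRD_6), labels=False)
df["fillColor"] = [hex_to_rgb(YLORRD_6[b]) for b in bins]
```

<p>In the layer, the expression is then replaced by the column name: <code>get_fill_color="fillColor"</code>. For skewed counts, quantile breaks via <code>pd.qcut</code> may look better than equal-width bins.</p>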
|
<python><geospatial><deck.gl><pydeck>
|
2023-04-21 20:21:46
| 1
| 2,022
|
kms
|
76,076,330
| 7,700,802
|
Cannot import Forecaster from greykite
|
<p>I try running this:</p>
<pre><code>from greykite.framework.templates.forecaster import Forecaster
</code></pre>
<p>and I get this error</p>
<pre><code>ImportError: cannot import name 'Literal' from 'statsmodels.compat.python' (/home/user/anaconda3/lib/python3.8/site-packages/statsmodels/compat/python.py)
</code></pre>
<p>I could not find anything online on how to fix this issue.</p>
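<p>This likely points at a version mismatch rather than broken code: greykite imports <code>Literal</code> from statsmodels' internal compat module, and newer statsmodels releases no longer ship it there (the exact compatible pairing is an assumption; check greykite's requirements). A small check like this reports what is installed, after which pinning a compatible statsmodels version in the environment usually resolves the import.</p>

```python
from importlib.metadata import PackageNotFoundError, version

# Report installed versions of the two packages involved, so a compatible
# pair can be pinned in requirements.
report = []
for pkg in ("greykite", "statsmodels"):
    try:
        report.append(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        report.append(f"{pkg} not installed")
print(report)
```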
|
<python><time-series>
|
2023-04-21 19:50:03
| 0
| 480
|
Wolfy
|