| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,409,051
| 1,801,125
|
mlflow artifacts not showing up in databricks
|
<p>When I use the code below, the metrics and parameters are successfully listed as artifacts in the user interface (UI). However, the training data is not listed in the UI. Any ideas on what I am doing wrong? Thank you!</p>
<pre><code>import mlflow
import mlflow.sklearn
import os
import pandas as pd
import matplotlib.pyplot as plt
import tempfile
from numpy import savetxt
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

db = load_diabetes()
X = db.data
y = db.target
X_train, X_test, y_train, y_test = train_test_split(X, y)

with mlflow.start_run():
    # Set the model parameters.
    n_estimators = 100
    max_depth = 6
    max_features = 3
    # Create and train model.
    rf = RandomForestRegressor(n_estimators=n_estimators, max_depth=max_depth, max_features=max_features)
    rf.fit(X_train, y_train)
    # Use the model to make predictions on the test dataset.
    predictions = rf.predict(X_test)
    # Log the model parameters used for this run.
    mlflow.log_param("num_trees", n_estimators)
    mlflow.log_param("maxdepth", max_depth)
    mlflow.log_param("max_feat", max_features)
    # Define a metric to use to evaluate the model.
    mse = mean_squared_error(y_test, predictions)
    # Log the value of the metric from this run.
    mlflow.log_metric("mse", mse)
    # Log the model created by this run.
    mlflow.sklearn.log_model(rf, "random-forest-model")
    # Log the training data.
    train = pd.concat([X_train, y_train], axis=1)
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, 'train.csv')
        train.to_csv(path)
        mlflow.log_artifacts(tmp)
mlflow.end_run()
</code></pre>
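<p>A likely reason the CSV never appears is that <code>pd.concat</code> is called on raw numpy arrays (<code>train_test_split</code> returns ndarrays, not DataFrames), so the logging branch fails before <code>log_artifacts</code> runs. A minimal sketch of the wrapping step; the mlflow call is left as a comment since it needs an active run:</p>

```python
import os
import tempfile

import numpy as np
import pandas as pd

# Stand-ins for the arrays produced by train_test_split in the question.
X_train = np.random.rand(10, 4)
y_train = np.random.rand(10)

# pd.concat expects pandas objects; wrap the numpy arrays first.
train = pd.concat(
    [pd.DataFrame(X_train), pd.Series(y_train, name="target")],
    axis=1,
)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "train.csv")
    train.to_csv(path, index=False)
    # Inside an active run you would then log the directory:
    # mlflow.log_artifacts(tmp)
    assert os.path.exists(path)
```
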
|
<python><databricks><mlflow>
|
2023-06-05 18:20:55
| 1
| 307
|
tom
|
76,408,950
| 7,362,388
|
SQLAlchemy <=> Pydantic relationships field required despite init=False
|
<p>I have defined a model with SQLAlchemy that is based on a <code>Base</code> class using the pydantic dataclass as its <code>dataclass_callable</code>.</p>
<pre><code>class Base(
    MappedAsDataclass,
    DeclarativeBase,
    dataclass_callable=pydantic.dataclasses.dataclass,
):
    class Config:
        arbitrary_types_allowed = True


class Transcript(Base):
    __tablename__ = "Transcript"

    id: Mapped[uuid.UUID] = mapped_column(
        "id",
        UUID(as_uuid=True),
        primary_key=True,
        nullable=False,
        index=True,
        server_default=sa.text("gen_random_uuid()"),
        default=None,
    )
    # many to one relationship
    owner_id = mapped_column(
        "owner_id",
        UUID(as_uuid=True),
        ForeignKey("User.id"),
    )
    owner: Mapped[User] = relationship(
        "User",
        back_populates="transcripts",
        init=False,
    )
</code></pre>
<p>I use the flag <code>init=False</code> as described in the docs <a href="https://docs.sqlalchemy.org/en/20/orm/dataclasses.html" rel="nofollow noreferrer">here</a>. However, I still get the following error:</p>
<pre><code>web_1 | pydantic.error_wrappers.ValidationError: 1 validation error for Transcript
web_1 | owner
web_1 | field required (type=value_error.missing)
</code></pre>
<p>How can I make pydantic aware that <code>owner</code> is a relationship field that is only available after the model has been created?</p>
|
<python><sqlalchemy><pydantic>
|
2023-06-05 18:06:11
| 1
| 1,573
|
siva
|
76,408,930
| 13,892,301
|
TypeError: 'int' object is not an iterator
|
<p>I want to get the RAM usage from a Windows machine with Python. For this I'm using the pysnmp library. When executing the following code I get an error message saying:</p>
<pre class="lang-none prettyprint-override"><code> line 38, in getRamUsage
error_indication, error_status, error_index, var_binds = next(iterator)
TypeError: 'int' object is not an iterator
</code></pre>
<pre class="lang-py prettyprint-override"><code>from pysnmp.hlapi.asyncore import *
def getRamUsage():
    iterator = getCmd(SnmpEngine(),
                      CommunityData('public'),
                      UdpTransportTarget(('192.168.1.10', 161)),
                      ContextData(),
                      ObjectType(ObjectIdentity('1.3.6.1.2.1.25.2.3.1.6.2')),
                      lexicographicMode=False)
    error_indication, error_status, error_index, var_binds = next(iterator)
    if error_indication:
        print('Error: %s' % error_indication)
    elif error_status:
        print('Error: %s at %s' % (error_status.prettyPrint(), error_index and var_binds[int(error_index) - 1][0] or '?'))
    else:
        for var_bind in var_binds:
            print('RAM usage: %s' % var_bind.prettyPrint())
</code></pre>
<p>I researched on Stack Overflow and other sites but found no solution for this kind of problem. I want to get a response from the SNMP agent.</p>
<p>Does anyone know how to fix this error?</p>
|
<python><snmp><pysnmp>
|
2023-06-05 18:02:50
| 0
| 505
|
Niklas
|
76,408,850
| 13,764,814
|
drf_spectacular.utils.PolymorphicProxySerializer.__init__() got an unexpected keyword argument 'context'
|
<p>I am using a PolymorphicProxySerializer provided by <a href="https://drf-spectacular.readthedocs.io/en/latest/drf_spectacular.html#drf_spectacular.utils.PolymorphicProxySerializer" rel="nofollow noreferrer">drf_spectacular</a> but I am getting a strange error when attempting to load the schema</p>
<p>usage</p>
<pre><code> @extend_schema(
parameters=[NoteQueryParameters],
responses=PolymorphicProxySerializer(
component_name="NoteSerializer",
serializers=[NoteSerializer, NoteSerializerWithJiraData],
resource_type_field_name=None,
),
)
def list(self, request, *args, **kwargs):
return super().list(request, *args, **kwargs)
</code></pre>
<p>serializers</p>
<pre><code>
class CleanedJiraDataSerializer(serializers.Serializer):
    key = serializers.CharField(max_length=20, allow_null=True)


class BugSerializer(serializers.Serializer):
    failures = serializers.CharField(max_length=10, required=False, allow_null=True)
    suite = serializers.CharField(max_length=100, required=False, allow_null=True)
    notes = serializers.CharField(max_length=1000, required=False, allow_null=True)
    tags = StringListField(required=False, allow_null=True, allow_empty=True)
    testCaseNames = StringListField(required=False, allow_null=True, allow_empty=True)
    testCaseIds = StringListField(required=False, allow_null=True, allow_empty=True)
    jira = CleanedJiraDataSerializer(required=False, allow_null=True)


class BugSerializerWithJiraData(BugSerializer):
    jira = serializers.DictField()


class NoteSerializer(serializers.ModelSerializer):
    bug = serializers.ListField(child=BugSerializer())

    class Meta:
        model = Notes
        fields = "__all__"


class NoteSerializerWithJiraData(serializers.ModelSerializer):
    bug = serializers.ListField(child=BugSerializerWithJiraData())

    class Meta:
        model = Notes
        fields = "__all__"
</code></pre>
<p>Basically, if a boolean query parameter is added to the request, I inject some dynamic data fetched from the Jira API. I am trying to update the API docs to represent the two distinct possible schemas, but loading the schema fails with:</p>
<pre><code>PolymorphicProxySerializer.__init__() got an unexpected keyword argument 'context'
</code></pre>
|
<python><django><django-rest-framework><openapi><drf-spectacular>
|
2023-06-05 17:48:28
| 1
| 311
|
Austin Hallett
|
76,408,801
| 7,516,523
|
Using the predict() methods of fitted models with gekko
|
<p>Many model-fitting Python packages have a <code>predict()</code> method, which outputs a prediction of the fitted model given observations of the predictor(s).</p>
<p><strong>Question:</strong> How would I use these <code>predict()</code> methods to predict a single value when the observation is a variable in a <code>gekko</code> model?</p>
<p>Below is a very simple reproducible example:</p>
<p><em><strong>Note:</strong> The actual model that I am fitting is a B-spline using <code>statsmodels.gam.smooth_basis.BSplines</code> and <code>statsmodels.gam.generalized_additive_model.GLMGam</code>. However, I am hoping that this simple example with <code>sklearn.linear_model.LinearRegression</code> will translate to more complex model classes from other packages.</em></p>
<pre><code>from sklearn.linear_model import LinearRegression
import numpy as np
import matplotlib.pyplot as plt
from gekko import GEKKO
# create example data
x = np.arange(100)[:, np.newaxis]
y = np.arange(100) * 2 + 10
plt.plot(x, y) # plot x vs y data
# plt.show()
model = LinearRegression() # instantiate linear model
model.fit(x, y) # fit model
x_predict = np.arange(100, 200)[:, np.newaxis] # create array of predictor observations
y_predict = model.predict(x_predict) # use model to make prediction
plt.plot(x_predict, y_predict) # plot prediction
# plt.show()
m = GEKKO() # instantiate gekko model
x2 = m.FV() # instantiate free variable
x2.STATUS = 1 # make variable available for solver
y2 = 50 # true value
# place x2 variable in numpy array to adhere to predict()'s argument requirements
x2_arr = np.array(x2).reshape(1, -1)
# minimize squared error between the true value and the model's prediction
m.Minimize((y2 - model.predict(x2_arr)) ** 2)
m.options.IMODE = 3
# solve for x2
m.solve(disp=True)
print(f"x2 = {x2.value[0]:3f}")
</code></pre>
<p>I get the following sequential errors:</p>
<p><code>TypeError: float() argument must be a string or a real number, not 'GK_FV'</code></p>
<p><code>ValueError: setting an array element with a sequence.</code></p>
<p>My first thought is that I would have to create a wrapper class around the <code>gekko.gk_parameter.GK_FV</code> class to modify the <code>float()</code> method, but that's where my knowledge and skills end.</p>
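<p>One way around this (a sketch, not necessarily the only approach) is to avoid calling <code>predict()</code> on a gekko variable altogether: extract the fitted model's parameters and rebuild the prediction as a symbolic gekko expression. For a <code>LinearRegression</code> that is just <code>coef_</code> and <code>intercept_</code>; a B-spline model would need its basis functions expressed similarly:</p>

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Fit the same toy model as in the question.
x = np.arange(100)[:, np.newaxis]
y = np.arange(100) * 2 + 10
model = LinearRegression().fit(x, y)

# Pull the fitted parameters out of sklearn...
a = float(model.coef_[0])
b = float(model.intercept_)

# ...and use them to build the gekko objective symbolically, e.g.
# (shown as a comment so this sketch runs without gekko installed):
#   m.Minimize((y2 - (a * x2 + b)) ** 2)

# For this linear model the optimum is also recoverable in closed form:
y2 = 50
x2_opt = (y2 - b) / a
```

Because gekko only ever sees plain floats and its own variables, no wrapper around <code>GK_FV</code> is needed.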
|
<python><machine-learning><scikit-learn><predict><gekko>
|
2023-06-05 17:38:03
| 1
| 345
|
Florent H
|
76,408,751
| 1,711,271
|
How to refit a model on multiple train sets, without changing the hyperparameters fit with RandomizedSearchCV
|
<p>I find the best hyperparameters for a model using <code>RandomizedSearchCV</code>:</p>
<pre><code>from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import uniform
iris = load_iris()
logistic = LogisticRegression(solver='saga', tol=1e-2, max_iter=200, random_state=0)
distributions = dict(C=uniform(loc=0, scale=4), penalty=['l2', 'l1'])
clf = RandomizedSearchCV(logistic, distributions, random_state=0)
model = clf.fit(iris.data, iris.target)
</code></pre>
<p>Now I want to refit the same model to different datasets (having of course the same input and output features) <em>keeping all hyperparameters fixed</em>: the model weights instead change, because of the refitting. How can I do that? I don't think using the <code>fit</code> method of <code>model</code> would work, because that's the <code>fit</code> method of a <code>RandomizedSearchCV</code> object, thus it would refit the hyperparameters too...</p>
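<p>A common pattern (sketched below) is to take <code>best_estimator_</code> from the finished search, which is a plain <code>LogisticRegression</code> carrying the winning hyperparameters, and <code>clone()</code> it for each new dataset, so refitting touches only the weights, never the hyperparameter search:</p>

```python
from scipy.stats import uniform
from sklearn.base import clone
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

iris = load_iris()
logistic = LogisticRegression(solver='saga', tol=1e-2, max_iter=200, random_state=0)
distributions = dict(C=uniform(loc=0, scale=4), penalty=['l2', 'l1'])
clf = RandomizedSearchCV(logistic, distributions, random_state=0, n_iter=3)
clf.fit(iris.data, iris.target)

# best_estimator_ is an ordinary LogisticRegression with the tuned C/penalty.
tuned = clf.best_estimator_

# clone() copies hyperparameters but not learned weights; fit() then
# learns fresh weights on the new dataset (a slice here, for illustration).
new_model = clone(tuned).fit(iris.data[:100], iris.target[:100])

assert new_model.get_params() == tuned.get_params()
```

(`n_iter=3` is only to keep this sketch fast; the question's default search works the same way.)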
|
<python><scikit-learn><hyperparameters>
|
2023-06-05 17:29:36
| 0
| 5,726
|
DeltaIV
|
76,408,545
| 11,426,624
|
pairwise reshaping of a dataframe
|
<p>I have a dataframe</p>
<pre><code>df = pd.DataFrame({'col1': ['a', 'a', 'a', 'b', 'b', 'c', 'c'],
                   'col2': ['a', 'b', 'c', 'b', 'c', 'a', 'c'],
                   'count': [12, 2, 45, 4, 6, 17, 20]})
</code></pre>
<pre><code> col1 col2 count
0 a a 12
1 a b 2
2 a c 45
3 b b 4
4 b c 6
5 c a 17
6 c c 20
</code></pre>
<p>and I would like to reshape it into a pairwise matrix as below (if a combination of <code>col1</code> and <code>col2</code> does not appear as a row in the dataframe above, the entry should be NaN). So it would look like this:</p>
<pre><code> a b c
a 12 NaN 17.0
b 2 4.0 NaN
c 45 6.0 20.0
</code></pre>
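<p>Judging from where 17 and 2 land in the desired output, the rows are indexed by <code>col2</code> and the columns by <code>col1</code>, which a single <code>pivot</code> can produce (a sketch):</p>

```python
import pandas as pd

df = pd.DataFrame({'col1': ['a', 'a', 'a', 'b', 'b', 'c', 'c'],
                   'col2': ['a', 'b', 'c', 'b', 'c', 'a', 'c'],
                   'count': [12, 2, 45, 4, 6, 17, 20]})

# Rows come from col2, columns from col1; any (col2, col1) pair that has
# no row in df becomes NaN automatically.
matrix = df.pivot(index='col2', columns='col1', values='count')
```

Append <code>.fillna(0)</code> if zeros are preferred over NaN, or <code>.T</code> if the orientation should be swapped.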
|
<python><pandas><dataframe>
|
2023-06-05 16:57:42
| 1
| 734
|
corianne1234
|
76,408,538
| 7,362,388
|
SQLAlchemy + Pydantic: set id field in db
|
<p>I want to define a model using SQLAlchemy and use it with Pydantic.</p>
<p>Model definition:</p>
<pre><code>from sqlalchemy.orm import DeclarativeBase, MappedAsDataclass, sessionmaker
import pydantic
class Base(
    MappedAsDataclass,
    DeclarativeBase,
    dataclass_callable=pydantic.dataclasses.dataclass,
):
    class Config:
        arbitrary_types_allowed = True


class Transcript(Base):
    __tablename__ = "Transcript"

    id: Mapped[uuid.UUID] = mapped_column(
        "id",
        UUID(as_uuid=True),
        primary_key=True,
        index=True,
        server_default=sa.text("gen_random_uuid()"),
    )
    text: Mapped[str] = mapped_column("text", Text)
    display_name: Mapped[str] = mapped_column("display_name", String)
</code></pre>
<p>I run into a problem when I want to create a Transcript object.</p>
<pre><code>transcript = {"text": "hello", "display_name": "world"}  # dummy stand-in for the pydantic object
db_transcript = models.Transcript(**transcript)
</code></pre>
<p>Error: <code>TypeError: Transcript.__init__() missing 1 required positional argument: 'id'</code></p>
<p>I can't figure out how to not require the field "id" of the SQLAlchemy model in the init function. Instead the id should be set in the db when the object is created for the first time.</p>
<p>Ideally I want the "id" to be None per default in the init function so that it gets overridden when the object is stored in the db.</p>
|
<python><sqlalchemy><pydantic>
|
2023-06-05 16:56:11
| 1
| 1,573
|
siva
|
76,408,485
| 4,439,019
|
Python - replace integer in json body
|
<p>I need to replace an integer in a json body:</p>
<pre><code>file1.json
{
    "id": 0,
    "col_1": "some value",
    "col_2": "another value"
}
</code></pre>
<p>I know to replace a string, I would use:</p>
<pre><code>import json
with open('file1.json') as f:
data = json.load(f)
data["col_1"] = data["col_1"].replace("some value", "new value")
</code></pre>
<p>But how would I replace the <code>id</code>, to the number 5 for example?</p>
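<p>Since <code>json.load</code> already gives ordinary Python ints, no string <code>replace()</code> is needed; assign the new number directly. A minimal sketch, using an in-memory string in place of <code>file1.json</code>:</p>

```python
import json

# Stand-in for open('file1.json'); real code would use json.load(f).
raw = '{"id": 0, "col_1": "some value", "col_2": "another value"}'
data = json.loads(raw)

# Plain assignment replaces the integer; no .replace() needed.
data["id"] = 5

# Serialize it back out (json.dump(data, f) for a real file).
updated = json.dumps(data)
```
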
|
<python><json>
|
2023-06-05 16:48:23
| 1
| 3,831
|
JD2775
|
76,408,483
| 5,287,011
|
How to plot a Gantt chart
|
<p>This topic was addressed in other questions but none of the recommendations worked in my case.</p>
<p>I have a macOS M1 machine.</p>
<pre><code>#Manage the data
import pandas as pd
import numpy as np
#Graphic tools
import matplotlib
import matplotlib.pyplot as plt
import plotly.figure_factory as ff
from plotly.figure_factory import create_gantt
from datetime import date
# Make data for chart
df = [dict(Task="Job A", Start='2009-01-01',
           Finish='2009-02-30', Complete=10),
      dict(Task="Job B", Start='2009-03-05',
           Finish='2009-04-15', Complete=60),
      dict(Task="Job C", Start='2009-02-20',
           Finish='2009-05-30', Complete=95)]

# Create a figure with Plotly colorscale
fig = create_gantt(df, colors='Blues', index_col='Complete',
                   show_colorbar=True, bar_width=0.5,
                   showgrid_x=True, showgrid_y=True)
fig.show()
</code></pre>
<p>After running the code (Jupyter notebook), there are no errors. However, the output cell is just blank. Nothing to see.</p>
<p>I checked ALL previous answers. Unfortunately nothing works so far.</p>
<p>The use of <code>jupyter labextension install jupyterlab-plotly</code> does not work, because the lab extension is deprecated according to my output.</p>
<p>Totally confused. matplotlib works fine with other programs on the same notebook.</p>
<p>UPDATE:</p>
<p>I tried this and it worked:</p>
<pre><code>import plotly.express as px
fig = px.bar(x=["a", "b", "c"], y=[1, 3, 2])
import plotly.graph_objects as go
fig_widget = go.FigureWidget(fig)
fig_widget
</code></pre>
<p>So there is a situation where plotly alone does not work on MY system (while it does in other configs; see Trenton's response), BUT it works via <code>graph_objects</code>.</p>
<p>What do you suggest?</p>
<p>UPDATE</p>
<p>It worked after I added the last two lines of code. Still, it should have worked with <code>show()</code>:</p>
<pre><code># Make data for chart
import plotly.express as px
df = [dict(Task="Job A", Start='2009-01-01',
           Finish='2009-02-30', Complete=10),
      dict(Task="Job B", Start='2009-03-05',
           Finish='2009-04-15', Complete=60),
      dict(Task="Job C", Start='2009-02-20',
           Finish='2009-05-30', Complete=95)]

# Create a figure with Plotly colorscale
fig2 = create_gantt(df, colors='Blues', index_col='Complete',
                    show_colorbar=True, bar_width=0.5,
                    showgrid_x=True, showgrid_y=True)
#fig2.show()
fig_widget2 = go.FigureWidget(fig2)
fig_widget2
</code></pre>
|
<python><jupyter-notebook><plotly><gantt-chart>
|
2023-06-05 16:48:12
| 0
| 3,209
|
Toly
|
76,408,272
| 10,452,700
|
How can communicate between HTML docs using Django server running via Google Colab notebook?
|
<p>I want to experiment with this <a href="https://youtu.be/WNvxR8RFzBg" rel="nofollow noreferrer">tutorial</a> and reflect inputs of <code>Home.html</code>, which is the summation of them in <code>result.html</code> using Django Server running in Google Colab notebook.</p>
<p>I can run the Django server successfully in Google Colab medium, as shown in this <a href="https://i.imgur.com/zEY1CbN.jpg" rel="nofollow noreferrer">screenshot</a>
You just need to <strong>manually</strong> set <code>ALLOWED_HOSTS = ['colab.research.google.com']</code> in <code>settings.py</code> once you have installed the <a href="/questions/tagged/django" class="post-tag" title="show questions tagged 'django'" aria-label="show questions tagged 'django'" rel="tag" aria-labelledby="tag-django-tooltip-container">django</a> library in your cluster.</p>
<p>The problem is I can't communicate and reflect the results of <code>Home.html</code> and the summation of them in <code>result.html</code>.</p>
<p>I saw in the tutorial that the browser of the local machine, when opening the HTML docs, shows:
<img src="https://i.imgur.com/V8pFpun.png" alt="img" /></p>
<p>I tried following the setup, but I still could not figure out how to communicate between HTML docs served by Django in the Google Colab medium. I think I'm close to a solution, but I still can't manage it.</p>
<pre class="lang-py prettyprint-override"><code>ALLOWED_HOSTS = ['127.0.0.1', 'localhost']
</code></pre>
<p>Please feel free to access the <a href="https://colab.research.google.com/drive/12zdPUgaMxj-ilv6n_VWT9YmQiXVysBBC?usp=sharing" rel="nofollow noreferrer">notebook</a> for quick troubleshooting.</p>
<p>You can quickly create HTML docs using Win <a href="/questions/tagged/notepad" class="post-tag" title="show questions tagged 'notepad'" aria-label="show questions tagged 'notepad'" rel="tag" aria-labelledby="tag-notepad-tooltip-container">notepad</a> and save them with <code>.html</code>.</p>
<p><code>home.html</code> or download from this <a href="https://drive.google.com/file/d/1xFKNJpWsNrdHVfGHpGWrhVdN7Dt-Pw64/view?usp=sharing" rel="nofollow noreferrer">link</a>:</p>
<pre class="lang-html prettyprint-override"><code><h2> Welcome to my Web </h2>
<form action="result">
<input type="number" name="no1" placeholder="input1"/>
<input type="number" name="no2" placeholder="input2"/>
<input type="submit"/>
</form>
</code></pre>
<hr />
<p><code>result.html</code> or download from this <a href="https://drive.google.com/file/d/12bC7heFKD_UDqRvwTbxmThpDi_Wrtwul/view?usp=sharing" rel="nofollow noreferrer">link</a>:</p>
<pre class="lang-html prettyprint-override"><code><h1>{{ answer }}</h1>
</code></pre>
<hr />
<p>The following code is used in the Google Colab notebook to reflect inputs of <code>Home.html</code> after summation into <code>result.html</code> using Django Server running in the Google Colab cluster. The HTML docs are also updated and available in the cluster before running the Pythonic code.</p>
<pre class="lang-py prettyprint-override"><code>from django.shortcuts import render, HttpResponse

# Create your views here
def home(request):
    return render(request, 'http://127.0.0.1:8000/home.html')

def result(request):
    no1 = int(request.GET.get('no1'))
    no2 = int(request.Get.get('no2'))
    answer = no1 + no2
    return render(request, 'http://127.0.0.1:8000/result.html', {'answer': answer})
</code></pre>
<p>After running this cell:</p>
<pre class="lang-py prettyprint-override"><code>!python /content/portfolio/manage.py runserver 8000
#Watching for file changes with StatReloader
#Performing system checks...
#System check identified no issues (0 silenced).
#You have 18 unapplied migration(s). Your project may not work properly #until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
#Run 'python manage.py migrate' to apply them.
#June 23, 2023 - 13:23:03
#Django version 4.2.2, using settings 'portfolio.settings'
#Starting development server at http://127.0.0.1:8000/
#Quit the server with CONTROL-C.
#[23/Jun/2023 13:23:03] "GET / HTTP/1.1" 200 10664
#Not Found: /favicon.ico
#[23/Jun/2023 13:23:04] "GET /favicon.ico HTTP/1.1" 404 2124
#[23/Jun/2023 13:28:07] "GET / HTTP/1.1" 200 10664
#Not Found: /favicon.ico
#[23/Jun/2023 13:28:08] "GET /favicon.ico HTTP/1.1" 404 2124
</code></pre>
<p>when I click on <a href="http://127.0.0.1:8000/" rel="nofollow noreferrer">http://127.0.0.1:8000/</a> gives me</p>
<blockquote>
<p>This site can’t be reached 127.0.0.1 refused to connect.</p>
</blockquote>
<p>I doubled checked proxy settings to ensure that I'm not using a proxy server via: Go to the Chrome menu > Settings > Show advanced settings… > Change proxy settings… > LAN Settings and <strong>deselect</strong> "Use a proxy server for your LAN".</p>
<p>If I click on generated address by the previous line:</p>
<pre class="lang-py prettyprint-override"><code>from google.colab.output import eval_js
print(eval_js("google.colab.kernel.proxyPort(8000)"))
#https://XXXXXXXX-XXXXXXXXXXXXX-8000-colab.googleusercontent.com/
</code></pre>
<p>I have:
<img src="https://i.imgur.com/deQokYB.jpg" alt="img" /></p>
<p>I tried to reflect summation inputs (10 + 20) into <code>result.html</code> unsuccessfully:
<code>http://127.0.0.1:8000/result?no1=10&no2=20</code> and get again:</p>
<blockquote>
<p>This site can’t be reached 127.0.0.1 refused to connect.</p>
</blockquote>
<p>I would be thankful if someone could guide me on how to practice Django in the <a href="https://colab.research.google.com/" rel="nofollow noreferrer">Google Colab</a> medium. I found this <a href="https://stackoverflow.com/q/60571301/10452700">post</a>, but I am not sure whether it applies!</p>
<p>Edit1: today, I attempted using this <a href="https://stackoverflow.com/a/46026551/10452700">answer</a> by setting <code>settings.py</code> unsuccessfully:</p>
<pre class="lang-py prettyprint-override"><code>ALLOWED_HOSTS = ['*']
# start Django server using
#!python manage.py runserver 127.0.0.1:8000
!python manage.py runserver 0.0.0.0:8000
</code></pre>
<hr />
<p><img src="https://i.imgur.com/xaauLO5.jpg" alt="img" /></p>
<p>PS: I also activated Localhost on my machine: <a href="https://i.imgur.com/O3IxG24.jpg" rel="nofollow noreferrer">screenshot</a></p>
<hr />
<p>Edit2: I feel that probably, instead of inserting <code>127.0.0.1:8000</code> in my browser, I need to insert <code>https://XXXXXXXX-XXXXXXXXXXXXX-8000-colab.googleusercontent.com/</code>. I think I also need to include some <a href="https://developer.mozilla.org/en-US/docs/Learn/Server-side/Django/Home_page#url_mapping" rel="nofollow noreferrer">URL mapping</a> in urls.py using <a href="https://docs.djangoproject.com/en/2.0/topics/class-based-views/" rel="nofollow noreferrer"><code>TemplateView</code></a> based on this <a href="https://stackoverflow.com/a/50047075/10452700">answer</a>:
<img src="https://developer.mozilla.org/en-US/docs/Learn/Server-side/Django/Home_page/basic-django.png" alt="img" /></p>
<pre class="lang-py prettyprint-override"><code>from django.contrib import admin
from django.views.generic import TemplateView
from django.urls import path

urlpatterns = [
    path('admin/', admin.site.urls),
    path('home/', TemplateView.as_view(template_name="home.html")),
]
</code></pre>
<p>But still, I couldn't manage to get this working with the HTML pages so far, and I get a <strong>301 HTTP error</strong> in the Django server status.</p>
|
<python><html><django><google-colaboratory>
|
2023-06-05 16:16:39
| 1
| 2,056
|
Mario
|
76,408,258
| 8,957,308
|
Python | Create a Dictionary with Values as expressions without evaluating or treating it as string
|
<p>I need to create a dynamic synthetic pandas DataFrame in Python and am using Faker and random functions to create it.
To do so, I am creating a dictionary as the source to build the pandas DataFrame, where some columns will have static values and some columns' values are generated through expressions or Faker objects, based on user input.</p>
<p>for eg:</p>
<p>If the user wants to create a synthetic dataframe of size 5 with 2 columns, the user provides two inputs as arguments:</p>
<pre><code>col1 = 1
col2 = tuple('1', '2')
</code></pre>
<p>so the dictionary should ideally be constructed as:</p>
<pre class="lang-python prettyprint-override"><code>dict1 = {
    'col1': 1,
    'col2': fake.random_element(elements=('1','2'))
}
</code></pre>
<p>And the Dataframe can be constructed by running the expressions within the dictionary in a loop, as:</p>
<pre class="lang-python prettyprint-override"><code>
from faker import Faker
import pandas as pd

fake = Faker()

def generate_fake_data(numRows, inputDict):
    output = [
        inputDict for x in range(numRows)
    ]
    return output

num_records = 5
df_sythetic = pd.DataFrame(generate_fake_data(num_records, dict1))
</code></pre>
<p>The problem is that if I try to create the dictionary without letting it evaluate the expression for col2 beforehand, it binds the value as a string, like:</p>
<pre class="lang-python prettyprint-override"><code>dict1 = {
    'col1': 1,
    'col2': "fake.random_element(elements=('1','2'))"
}
</code></pre>
<p>How can I create the dictionary without evaluating the expression columns and also not treating them as string ?</p>
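<p>One way (a sketch using the stdlib <code>random</code> module in place of Faker, whose <code>random_element</code> plays the same role) is to store callables in the dictionary and invoke them once per generated row, so the expression is evaluated lazily rather than once or never:</p>

```python
import random

import pandas as pd

# Callables are stored unevaluated; the lambda defers the expression.
spec = {
    'col1': 1,                                  # static value
    'col2': lambda: random.choice(('1', '2')),  # evaluated per row
}

def generate_fake_data(num_rows, input_dict):
    rows = []
    for _ in range(num_rows):
        # Call each callable now; copy static values through unchanged.
        rows.append({k: (v() if callable(v) else v) for k, v in input_dict.items()})
    return rows

df_synthetic = pd.DataFrame(generate_fake_data(5, spec))
```

With Faker the entry would be e.g. <code>'col2': lambda: fake.random_element(elements=('1', '2'))</code>.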
|
<python><python-3.x><dictionary>
|
2023-06-05 16:14:49
| 2
| 544
|
ABusy_developerT
|
76,408,247
| 19,189,069
|
Is there a language detection that detects Arabic and Persian languages?
|
<p>I have a dataset of Twitter texts. Most of the tweets in this dataset are in Persian and some of them are in Arabic.
I want to find the Arabic tweets.
Is there an API or a tool that can do this for me?
To put it another way, I want language detection that classifies tweets as Persian or Arabic.
Thanks.</p>
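<p>Because Persian and Arabic share most of their script, even a small character-based heuristic goes a long way: the letters پ چ ژ گ exist only in Persian, while ة and the Arabic letter forms ي and ك are characteristic of Arabic. A stdlib-only sketch (a rough heuristic, not a substitute for a trained detector):</p>

```python
# Letters used in Persian but absent from standard Arabic orthography.
PERSIAN_ONLY = set("پچژگ")
# Letters/forms characteristic of Arabic rather than Persian.
ARABIC_ONLY = set("ةيك")

def guess_language(text: str) -> str:
    """Very rough Persian-vs-Arabic guess from script-specific letters."""
    fa = sum(ch in PERSIAN_ONLY for ch in text)
    ar = sum(ch in ARABIC_ONLY for ch in text)
    if fa > ar:
        return "fa"
    if ar > fa:
        return "ar"
    return "unknown"
```

For production use, a trained identifier such as langdetect or fastText's language-identification model will be more robust, especially on short tweets.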
|
<python><nlp><arabic><farsi><language-detection>
|
2023-06-05 16:13:24
| 3
| 385
|
HosseinSedghian
|
76,408,188
| 1,967,239
|
Construct a complex MagicMock in Python from code with nested calls
|
<p>I would like to build a MagicMock for a nested call. Not quite sure how to do this. Please could you advise?</p>
<p>Many Thanks</p>
<p>Here is my code :</p>
<pre><code>def kms_aliases(ids_include):
    client = boto3.client('kms')
    paginator = client.get_paginator('list_aliases')
    response_iterator = paginator.paginate(PaginationConfig={'MaxItems': 100})
    aliases = response_iterator.build_full_result()  # I would like to mock this one
</code></pre>
<p>Here is my test:</p>
<pre><code>class ReportTests(unittest.TestCase):
    def test_kms_aliases(self):
        # arrange
        boto3.client = MagicMock()
        ids_include = ["string1", "string2"]
        kms_aliases = {blah}
        # here we go... how do I mock the call to response_iterator.build_full_result()
        # wrong!
        response_iterator.build_full_result() = MagicMock(return_value=kms_aliases)
        expected = {blah}
        # act
        actual = info.kms_aliases(ids_include)
        # assert
        assert expected == actual
</code></pre>
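<p>Every attribute access on a <code>MagicMock</code> returns another mock, so the whole nested call chain can be configured through chained <code>return_value</code> attributes; no intermediate objects need to exist. A self-contained sketch (the function under test is inlined with the client factory injected, so it runs without boto3; in the real test you would patch <code>boto3.client</code> the same way):</p>

```python
from unittest.mock import MagicMock

# Stand-in for the real function, which calls boto3.client('kms') internally.
def kms_aliases(client_factory):
    client = client_factory('kms')
    paginator = client.get_paginator('list_aliases')
    response_iterator = paginator.paginate(PaginationConfig={'MaxItems': 100})
    return response_iterator.build_full_result()

mock_client = MagicMock()
expected = {'Aliases': [{'AliasName': 'alias/example'}]}  # hypothetical payload

# Walk the chain: client() -> get_paginator() -> paginate(),
# then set what build_full_result() should return.
chain = mock_client.return_value.get_paginator.return_value.paginate.return_value
chain.build_full_result.return_value = expected

actual = kms_aliases(mock_client)
assert actual == expected
```
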
|
<python><python-unittest><python-unittest.mock>
|
2023-06-05 16:03:55
| 1
| 1,680
|
Banoona
|
76,408,114
| 4,485,898
|
how to make abstract class that forces implementors to be dataclass
|
<p>I have an abstract class, which for any <code>T</code> that implements it, lets you convert a <code>List[T]</code> into a csv file string. It looks like this:</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
from dataclasses import dataclass, fields
from typing import List, Type, TypeVar

T = TypeVar('T')

class CsvableDataclass(ABC):
    @classmethod
    def to_csv_header(cls) -> str:
        return ','.join(f.name for f in fields(cls))

    def to_csv_row(self) -> str:
        return ','.join(self.format_field(f.name) for f in fields(self))

    @abstractmethod
    def format_field(self, field_name: str) -> str:
        pass

    # actually make a csv string using the methods above
    @staticmethod
    def to_csv_str(t: Type[T], data: List[T]):
        return '\n'.join([t.to_csv_header()] + [r.to_csv_row() for r in data])

@dataclass
class A(CsvableDataclass):
    x: int
    y: int

    def format_field(self, field_name: str) -> str:
        if field_name == 'x': return str(self.x)
        if field_name == 'y': return str(self.y)
        raise Exception(f"invalid field_name: {field_name}")

CsvableDataclass.to_csv_str(A, [A(1, 2), A(3, 4)])
# "x,y\n1,2\n3,4"
</code></pre>
<p>I'm using <code>fields()</code> from the <code>dataclasses</code> module to get the fields to make the header row and all other rows of the csv. <code>fields()</code> only works on instances or classes that are <code>@dataclass</code>.</p>
<p>How do I enforce (in the type annotations) that a class that extends <code>CsvableDataclass</code> must be a <code>@dataclass</code>?</p>
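<p>Static type annotations can't quite express "subclasses must be decorated with <code>@dataclass</code>": the decorator runs after class creation, so even <code>__init_subclass__</code> fires too early to see it. A common compromise (a sketch, one of several options) is a runtime check with <code>dataclasses.is_dataclass</code> at instantiation time:</p>

```python
from abc import ABC
from dataclasses import dataclass, is_dataclass


class CsvableDataclass(ABC):
    def __init__(self, *args, **kwargs):
        # By instantiation time the @dataclass decorator (if any) has run,
        # so is_dataclass() gives a reliable answer here. Decorated
        # subclasses get a generated __init__ and never reach this check;
        # undecorated subclasses inherit it and fail fast.
        if not is_dataclass(type(self)):
            raise TypeError(
                f"{type(self).__name__} must be decorated with @dataclass"
            )
        super().__init__(*args, **kwargs)


@dataclass
class Good(CsvableDataclass):
    x: int


class Bad(CsvableDataclass):  # forgot @dataclass
    pass


g = Good(x=1)  # constructs fine

try:
    Bad()
except TypeError as exc:
    print(f"caught: {exc}")
```

Note that <code>is_dataclass</code> also returns True for undecorated subclasses of a dataclass, so the check only guards the first undecorated level, which is usually the mistake being caught.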
|
<python><abstract-class><python-dataclasses>
|
2023-06-05 15:52:54
| 1
| 540
|
Captain_Obvious
|
76,408,051
| 357,313
|
Do failures seeking backwards in a gzip.GzipFile mean it's broken?
|
<p>I have files with a small header (8 bytes, say <code>zrxxxxxx</code>), followed by a gzipped stream of data. Reading such files works fine most of the time. However in very specific cases, seeking backwards fails. This is a simple way to reproduce:</p>
<pre><code>from gzip import GzipFile
f = open('test.bin', 'rb')
f.read(8) # Read zrxxxxxx
h = GzipFile(fileobj=f, mode='rb')
h.seek(8192)
h.seek(8191) # gzip.BadGzipFile: Not a gzipped file (b'zr')
</code></pre>
<p>Unfortunately I cannot share my file, but it looks like <em>any</em> similar file will do.</p>
<p>Debugging the situation, I noticed that DecompressReader.seek (in Lib/_compression.py) sometimes rewinds the original <em>file</em>, which I suspect causes the issue:</p>
<pre><code>#...
# Rewind the file to the beginning of the data stream.
def _rewind(self):
    self._fp.seek(0)

#...
def seek(self, offset, whence=io.SEEK_SET):
    #...
    # Make it so that offset is the number of bytes to skip forward.
    if offset < self._pos:
        self._rewind()
    else:
        offset -= self._pos
    #...
</code></pre>
<p>Is this a bug? Or is it me doing it wrong?</p>
<p>Any simple workaround?</p>
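<p>The rewind to position 0 does indeed land on the 8-byte header rather than on the gzip stream, so a backward seek re-reads <code>zr...</code> as a gzip magic number. One workaround (a sketch) is to hand <code>GzipFile</code> a thin wrapper whose position 0 maps to the start of the compressed data:</p>

```python
import gzip
import io


class OffsetFile:
    """File-object wrapper whose position 0 maps to `offset` in `fp`."""

    def __init__(self, fp, offset):
        self._fp = fp
        self._offset = offset
        fp.seek(offset)

    def read(self, size=-1):
        return self._fp.read(size)

    def seek(self, pos, whence=io.SEEK_SET):
        if whence == io.SEEK_SET:
            pos += self._offset
        return self._fp.seek(pos, whence) - self._offset

    def seekable(self):
        return True


# Build a sample "file": 8-byte header followed by a gzip stream.
payload = bytes(range(256)) * 64  # 16 KiB of data
buf = io.BytesIO(b'zrxxxxxx' + gzip.compress(payload))

buf.read(8)  # consume the zrxxxxxx header, as in the question
h = gzip.GzipFile(fileobj=OffsetFile(buf, 8), mode='rb')
h.seek(8192)
h.seek(8191)  # the backward seek now rewinds to offset 8, not 0
data = h.read(1)
assert data == payload[8191:8192]
```

An even simpler alternative, when the data fits in memory, is to copy everything after the header into a fresh <code>BytesIO</code> and give that to <code>GzipFile</code>.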
|
<python><gzip><seek>
|
2023-06-05 15:44:20
| 1
| 8,135
|
Michel de Ruiter
|
76,407,974
| 6,552,836
|
Pyomo CBC non-zero return code (-6) error
|
<p>I have an optimization script which works fine in a Jupyter notebook; however, when I converted it into a Python script, I get this CBC error. I've tried many things but still can't figure out what's causing this unusual error.</p>
<pre><code>ERROR: Solver (cbc) returned non-zero return code (-6)
Traceback (most recent call last):
  File "path/t/file.py", line 109, in <module>
    opt.solve(model, strategy='GOA', mip_solver='cbc', threads=4)
  File "/opt/anaconda3/envs/conda/lib/python3.10/site-packages/pyomo/contrib/mindtpy/MindtPy.py", line 113, in solve
    return SolverFactory(_supported_algorithms[config.strategy][0]).solve(
  File "/opt/anaconda3/envs/conda/lib/python3.10/site-packages/pyomo/contrib/mindtpy/algorithm_base_class.py", line 2804, in solve
    self.MindtPy_iteration_loop(config)
  File "/opt/anaconda3/envs/conda/lib/python3.10/site-packages/pyomo/contrib/mindtpy/algorithm_base_class.py", line 2889, in MindtPy_iteration_loop
    main_mip, main_mip_results = self.solve_main(config)
  File "/opt/anaconda3/envs/conda/lib/python3.10/site-packages/pyomo/contrib/mindtpy/algorithm_base_class.py", line 1670, in solve_main
    main_mip_results = mainopt.solve(
  File "/opt/anaconda3/envs/conda/lib/python3.10/site-packages/pyomo/opt/base/solvers.py", line 627, in solve
    raise ApplicationError("Solver (%s) did not exit normally" % self.name)
pyomo.common.errors.ApplicationError: Solver (cbc) did not exit normally
</code></pre>
|
<python><exe><pyomo><coin-or-cbc>
|
2023-06-05 15:35:52
| 0
| 439
|
star_it8293
|
76,407,896
| 9,874,393
|
regex: remove keyword(s) at start but not in all of the string
|
<p>A path name starts with one or two occurrences of the same folder name <code>pictures</code>. I need to remove these, but keep any later folders in the path that are also named <code>pictures</code>. I came up with this solution:</p>
<pre><code>import re
path_name = 'd:\pictures\pictures\hallo\pictures\\'
cleaned_path_name = re.sub(r'^pictures\\', '', re.sub(r'^.:\\pictures\\', '', path_name))
print(path_name)
print(cleaned_path_name)
</code></pre>
<pre><code>d:\pictures\pictures\hallo\pictures\
hallo\pictures\
</code></pre>
<p>Is there a way to do this in one regex expression?</p>
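<p>A sketch of one way to collapse the two nested <code>re.sub</code> calls into a single pattern: <code>{1,2}</code> strips one or two leading <code>pictures\</code> folders after an optional drive prefix, leaving any later <code>pictures</code> folders intact.</p>

```python
import re

# One pattern instead of two nested re.sub calls:
#   (?:.:\\)?        -- optional drive prefix such as "d:\"
#   (?:pictures\\){1,2}  -- one or two leading "pictures\" folders
path_name = 'd:\\pictures\\pictures\\hallo\\pictures\\'
cleaned_path_name = re.sub(r'^(?:.:\\)?(?:pictures\\){1,2}', '', path_name)
print(cleaned_path_name)  # hallo\pictures\
```

Because the pattern is anchored with <code>^</code>, the <code>pictures</code> folder deeper in the path is never touched.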
|
<python><regex>
|
2023-06-05 15:25:13
| 2
| 3,555
|
Bruno Vermeulen
|
76,407,887
| 15,959,591
|
Error while creating a pandas dataframe out of numpy ndarrays
|
<p>I'm trying to concatenate numpy ndarrays of size (19200,) to make a data frame. Each 1D array would be a row in my data frame. My code looks like this:</p>
<pre><code>new_array_1 = pd.DataFrame(new_array_1, index=['new_array_1'])
new_array_2 = pd.DataFrame(new_array_2, index=['new_array_2'])
new_array_3 = pd.DataFrame(new_array_3, index=['new_array_3'])
df = pd.concat([df, new_array_1, new_array_2, new_array_3])
</code></pre>
<p>but I got the error:</p>
<pre><code>Shape of passed values is (19200, 1), indices imply (1, 1)
</code></pre>
<p>But then after I add the square brackets around the new_arrays like this:</p>
<pre><code>new_array_1 = pd.DataFrame([new_array_1], index=['new_array_1'])
df = pd.concat([df, new_array_1])
</code></pre>
<p>and I got the error:</p>
<pre><code>Must pass 2-d input. shape=(1, 1, 19200)
</code></pre>
<p>What should I do to solve the problem, please?
Please note that I cannot add all my arrays at the same time; I update my data frame by adding rows of data whenever I get the data.</p>
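<p>A sketch of one way to make each 1-D array a single row (illustrated with a length-5 stand-in for the (19200,) arrays): <code>reshape(1, -1)</code> turns the array into shape <code>(1, n)</code>, so the single index label matches the single row and the shape/index mismatch errors go away.</p>

```python
import numpy as np
import pandas as pd

# Stand-ins for the real (19200,) arrays.
new_array_1 = np.arange(5, dtype=float)
new_array_2 = np.ones(5)

# reshape(1, -1) makes each 1-D array a one-row 2-D array,
# so one index label per DataFrame is exactly right.
row1 = pd.DataFrame(new_array_1.reshape(1, -1), index=['new_array_1'])
row2 = pd.DataFrame(new_array_2.reshape(1, -1), index=['new_array_2'])

df = pd.concat([row1, row2])
print(df.shape)  # (2, 5)
```

The same <code>pd.concat([df, row])</code> call then works incrementally as each new array arrives.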
|
<python><pandas><dataframe><numpy><numpy-ndarray>
|
2023-06-05 15:24:05
| 1
| 554
|
Totoro
|
76,407,806
| 8,195,331
|
Get all mailboxes with Python exchangelib
|
<p>I'm using the Python exchangelib library for loading information from Microsoft Exchange. I need to load all mailboxes from the Exchange server.</p>
<p>Can I get it with exchangelib?</p>
|
<python><exchange-server><exchangewebservices><exchangelib><mailbox>
|
2023-06-05 15:12:06
| 1
| 703
|
grasdy
|
76,407,803
| 10,018,352
|
define an output schema for a nested json in langchain
|
<p>What's the recommended way to define an output schema for nested JSON? The method I use doesn't feel ideal.</p>
<pre><code># adding to planner -> from langchain.experimental.plan_and_execute import load_chat_planner
refinement_response_schemas = [
ResponseSchema(name="plan", description="""{'1': {'step': '','tools': [],'data_sources': [],'sub_steps_needed': bool},
'2': {'step': '','tools': [<empty list>],'data_sources': [<>], 'sub_steps_needed': bool},}"""),] #define json schema in description, works but doesn't feel proper
refinement_output_parser = StructuredOutputParser.from_response_schemas(refinement_response_schemas)
refinement_format_instructions = refinement_output_parser.get_format_instructions()
refinement_output_parser.parse(output)
</code></pre>
<p>gives:</p>
<pre><code>{'plan': {'1': {'step': 'Identify the top 5 strikers in La Liga',
'tools': [],
'data_sources': ['sports websites', 'official league statistics'],
'sub_steps_needed': False},
'2': {'step': 'Identify the top 5 strikers in the Premier League',
'tools': [],
'data_sources': ['sports websites', 'official league statistics'],
'sub_steps_needed': False},
...
'6': {'step': 'Given the above steps taken, please respond to the users original question',
'tools': [],
'data_sources': [],
'sub_steps_needed': False}}}
</code></pre>
<p>It works, but I want to know if there's a better way to go about this.</p>
|
<python><openai-api><langchain><py-langchain>
|
2023-06-05 15:11:34
| 1
| 549
|
Zizi96
|
76,407,801
| 670,446
|
FTP_TLS client hangs even though curl is working
|
<p>We are able to connect to a server which is using explicit FTPS using curl with the following command:</p>
<pre><code>> curl --ssl --list-only --user USER:PASSWD ftp://ftp.COMPANY.com:port/public
</code></pre>
<p>Yet this code hangs for a bit after logging "prot_p complete", and then returns "Connection timed out".</p>
<pre><code>import ftplib
with ftplib.FTP_TLS() as ftps:
print('Connecting')
ftps.connect(URL, PORT)
print('Connected')
print('Logging in')
ftps.login(LOGIN, PASSWD)
print('Logged in')
ftps.prot_p()
print('prot_p complete')
ftps.retrlines('LIST')
</code></pre>
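<p>Since login and <code>prot_p()</code> succeed and only the data-channel <code>LIST</code> stalls, a first diagnostic step (a sketch, not a fix) is to let ftplib echo the whole FTP conversation; the trace shows whether the server ever answers the data-connection commands that <code>LIST</code> triggers.</p>

```python
import ftplib

# Diagnostic sketch: ftplib can print the raw control-channel dialogue,
# which shows whether the hang is at PASV/EPSV (data port blocked by a
# firewall/NAT) or after the data connection opens.
ftps = ftplib.FTP_TLS()
ftps.set_debuglevel(2)   # echo every FTP command and reply
ftps.set_pasv(True)      # passive mode (the default), stated explicitly
# ftps.connect(URL, PORT); ftps.login(LOGIN, PASSWD); ftps.prot_p()
# ftps.retrlines('LIST')  # the debug trace shows where this stalls
```

If the trace stops right after <code>PASV</code>/<code>EPSV</code>, the server may be handing out a data port your network can't reach, which could also explain why curl (negotiating its own data connection) succeeds.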
|
<python><ftps>
|
2023-06-05 15:11:11
| 1
| 3,739
|
bpeikes
|
76,407,773
| 8,360,167
|
Installation error with pycURL on Python 3.10.6 (Ubuntu 22.04.2)
|
<p>After running <code>pip install pycurl</code>, I keep getting the following output:</p>
<pre><code>Defaulting to user installation because normal site-packages is not writeable
Collecting pycurl
Using cached pycurl-7.45.2.tar.gz (234 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [23 lines of output]
Traceback (most recent call last):
File "/tmp/pip-install-02m9ugoh/pycurl_396654a7451a42138d4e279114b11e59/setup.py", line 229, in configure_unix
p = subprocess.Popen((self.curl_config(), '--version'),
File "/usr/lib/python3.10/subprocess.py", line 969, in _init_
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.10/subprocess.py", line 1845, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'curl-config'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-02m9ugoh/pycurl_396654a7451a42138d4e279114b11e59/setup.py", line 970, in <module>
ext = get_extension(sys.argv, split_extension_source=split_extension_source)
File "/tmp/pip-install-02m9ugoh/pycurl_396654a7451a42138d4e279114b11e59/setup.py", line 634, in get_extension
ext_config = ExtensionConfiguration(argv)
File "/tmp/pip-install-02m9ugoh/pycurl_396654a7451a42138d4e279114b11e59/setup.py", line 93, in _init_
self.configure()
File "/tmp/pip-install-02m9ugoh/pycurl_396654a7451a42138d4e279114b11e59/setup.py", line 234, in configure_unix
raise ConfigurationError(msg)
_main_.ConfigurationError: Could not run curl-config: [Errno 2] No such file or directory: 'curl-config'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
</code></pre>
<p>I have tried installing <code>python-setuptools</code> and <code>ez_setup</code> as well as upgrading <code>setuptools</code> and <code>pip</code>, as suggested <a href="https://askubuntu.com/questions/975523/pip-install-gives-command-python-setup-py-egg-info-failed-with-error-code-1">here</a>, all to no avail.</p>
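<p>The root cause in the traceback is that <code>curl-config</code> is not on <code>PATH</code>: pycurl's <code>setup.py</code> runs it to discover libcurl's build flags. On Ubuntu the tool ships with the libcurl development headers (a package such as <code>libcurl4-openssl-dev</code> — the exact name depends on which libcurl variant you want, so treat it as an assumption). A quick check:</p>

```python
import shutil

# pycurl's setup.py shells out to `curl-config`; the FileNotFoundError
# in the traceback means it is not on PATH.  Installing the libcurl dev
# package (e.g. `sudo apt install libcurl4-openssl-dev` on Ubuntu --
# package name is an assumption about your libcurl variant) provides it.
print(shutil.which('curl-config'))  # None -> the pip build will keep failing
```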
|
<python><python-3.x><ubuntu><pycurl><ubuntu-22.04>
|
2023-06-05 15:07:54
| 1
| 2,022
|
liam
|
76,407,515
| 8,869,570
|
Attribute not found when using python3.7 from terminal but not within ipython
|
<p>I have a script in <code>test.py</code>, which has the following:</p>
<pre><code>import custom_lib
a = custom_lib.mod.value()
</code></pre>
<p>When I run <code>python3.7 test.py</code>, I get the error:</p>
<pre><code>AttributeError: module 'custom_lib' has no attribute 'mod'
</code></pre>
<p>But when I go into <code>ipython</code> and copy and paste <code>test.py</code> into it, I get no such error.</p>
<p>The python executable that is used within <code>ipython</code> is the same as the one <code>python3.7</code> points to. I checked by doing <code>which python3.7</code> and in ipython, I did</p>
<pre><code>import sys
print(sys.executable)
</code></pre>
<p>and both point to the same <code>python3.7</code>. What could be causing this error?</p>
|
<python><ipython>
|
2023-06-05 14:34:10
| 0
| 2,328
|
24n8
|
76,407,241
| 1,473,517
|
Why is cython so much slower than numba for this simple loop?
|
<p>I have a simple loop that just sums the second row of a numpy array. In numba I need only do:</p>
<pre><code>from numba import njit
@njit('float64(float64[:, ::1])', fastmath=True)
def fast_sum_nb(array_2d):
s = 0.0
for i in range(array_2d.shape[1]):
s += array_2d[1, i]
return s
</code></pre>
<p>If I time the code I get:</p>
<pre><code>In [3]: import numpy as np
In [4]: A = np.random.rand(2, 1000)
In [5]: %timeit fast_sum_nb(A)
305 ns ± 7.81 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
</code></pre>
<p>To do the same thing in cython I first need to make a setup.py which has:</p>
<pre><code>from setuptools import setup
from Cython.Build import cythonize
from setuptools.extension import Extension
ext_modules = [
Extension(
'test_sum',
language='c',
sources=['test.pyx'], # list of source files
extra_compile_args=['-Ofast', '-march=native'], # example extra compiler arguments
)
]
setup(
name = "test module",
ext_modules = cythonize(ext_modules, compiler_directives={'language_level' : "3"})
)
</code></pre>
<p>I am using the most aggressive compilation options. The cython summation code is then:</p>
<pre><code>#cython: language_level=3
from cython cimport boundscheck
from cython cimport wraparound
@boundscheck(False)
@wraparound(False)
def fast_sum(double[:, ::1] arr):
cdef int i=0
cdef double s=0.0
for i in range(arr.shape[1]):
s += arr[1, i]
return s
</code></pre>
<p>I compile it with:</p>
<pre><code>python setup.py build_ext --inplace
</code></pre>
<p>If I now time this I get:</p>
<pre><code>In [2]: import numpy as np
In [3]: A = np.random.rand(2, 1000)
In [4]: %timeit fast_sum(A)
564 ns ± 1.25 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
</code></pre>
<p>Why is the cython version so much slower?</p>
<hr />
<p>The annotated C code from cython looks like this:</p>
<p><a href="https://i.sstatic.net/Y5YP2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y5YP2.png" alt="enter image description here" /></a></p>
<p>The assembly produced by numba seems to be:</p>
<pre><code>vaddpd -96(%rsi,%rdx,8), %ymm0, %ymm0
vaddpd -64(%rsi,%rdx,8), %ymm1, %ymm1
vaddpd -32(%rsi,%rdx,8), %ymm2, %ymm2
vaddpd (%rsi,%rdx,8), %ymm3, %ymm3
addq $16, %rdx
cmpq %rdx, %rcx
jne .LBB0_5
vaddpd %ymm0, %ymm1, %ymm0
vaddpd %ymm0, %ymm2, %ymm0
vaddpd %ymm0, %ymm3, %ymm0
vextractf128 $1, %ymm0, %xmm1
vaddpd %xmm1, %xmm0, %xmm0
vpermilpd $1, %xmm0, %xmm1
vaddsd %xmm1, %xmm0, %xmm0
</code></pre>
<p>I don't know how to get the assembly for the cython code. The C file it produces is huge and the <code>.so</code> file disassembles to a large file as well.</p>
<p>This speed difference persists (in fact it increases) if I increase the number of columns in the 2d array so it doesn't seem to be a calling overhead issue.</p>
<p>I am using Cython version 0.29.35 and numba version 0.57.0 on Ubuntu.</p>
|
<python><gcc><clang><cython><numba>
|
2023-06-05 14:05:00
| 2
| 21,513
|
Simd
|
76,407,144
| 1,105,011
|
Python class hierarchy dynamic argument before args
|
<p>I have two classes</p>
<pre><code>class A(Base):
@classmethod
def which_active(cls, request: HttpRequest, *variants: str):
super().which_active(...)
class B(Base):
@classmethod
def which_active(cls, *variants: str):
super().which_active(...)
</code></pre>
<p>The logic for both <code>which_active</code> methods is very similar and I want to abstract it into a super class like this:</p>
<pre><code>class Base:
@classmethod
def which_active(cls, x: Any, *variants: str):
if cls.is_eligible(x):
do_this()
cls.assign(x)
....
@classmethod
def is_eligible(cls, x:Any):
....
@classmethod
    def assign(cls, x:Any):
....
....
</code></pre>
<p>This will cause an error for class B because its <code>which_active</code> signature doesn't match the one in the base class. How do I design this so the subclasses can use the same method name but different arguments, with the possibility of omitting the first argument?</p>
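<p>One common pattern (a sketch, not the only possible design) is to keep the shared logic behind a fully generic first parameter in the base class, and have the subclass with no natural first argument pass a sentinel:</p>

```python
from typing import Any


class Base:
    @classmethod
    def which_active(cls, x: Any, *variants: str) -> str:
        # Shared logic lives here; subclasses decide what `x` means.
        return f"{cls.__name__}:{x}:{','.join(variants)}"


class A(Base):
    @classmethod
    def which_active(cls, request: object, *variants: str) -> str:
        return super().which_active(request, *variants)


class B(Base):
    @classmethod
    def which_active(cls, *variants: str) -> str:
        # B has no per-call object, so it passes a sentinel (here None).
        return super().which_active(None, *variants)


print(A.which_active("req", "v1"))  # A:req:v1
print(B.which_active("v1", "v2"))   # B:None:v1,v2
```

Note that static type checkers will still flag B's narrower signature as a Liskov violation; if that matters, giving every override a <code>*args: Any</code> signature sidesteps it at the cost of less precise typing.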
|
<python><oop><inheritance><overriding><class-method>
|
2023-06-05 13:54:03
| 2
| 4,098
|
danial
|
76,407,128
| 1,892,584
|
Why is platform.processor() returning the wrong value when python is run from my editor?
|
<p>I occasionally like to use python from within <a href="https://orgmode.org/" rel="nofollow noreferrer">org-mode</a> in <a href="https://www.gnu.org/software/emacs/" rel="nofollow noreferrer">emacs</a>. However, I've recently switched from intel architecture to m1 and this isn't working very well.</p>
<p>The pydantic module imports fine from my system python - but when I try to import it from within my editor, python for some reason seems to be picking the wrong architecture.</p>
<p>From within emacs:</p>
<pre><code>#+begin_src python :results output
import sys
print(sys.executable)
import pydantic
#+end_src
: /Library/Developer/CommandLineTools/usr/bin/python3
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
ImportError: dlopen(/Users/tom/Library/Python/3.9/lib/python/site-packages/pydantic/__init__.cpython-39-darwin.so, 0x0002): tried: '/Users/tom/Library/Python/3.9/lib/python/site-packages/pydantic/__init__.cpython-39-darwin.so' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/tom/Library/Python/3.9/lib/python/site-packages/pydantic/__init__.cpython-39-darwin.so' (no such file), '/Users/tom/Library/Python/3.9/lib/python/site-packages/pydantic/__init__.cpython-39-darwin.so' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64'))
[ Babel evaluation exited with code 1 ]
</code></pre>
<p>From a terminal, everything works fine:</p>
<pre><code>python3
Python 3.9.6 (default, Mar 10 2023, 20:16:38)
[Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Could not open PYTHONSTARTUP
FileNotFoundError: [Errno 2] No such file or directory: '/Users/USER/.pythonrc.py'
>>> import sys
>>> print(sys.executable)
/Library/Developer/CommandLineTools/usr/bin/python3
>>> import pydantic
>>>
</code></pre>
<p>(Note that the python executable is the same)</p>
<p>I did some digging: within my editor <code>platform.processor()</code> is returning <code>i386</code>, while from a python shell it is returning <code>arm</code>, which is confusing.</p>
<p><strong>Why is the processor wrong from within my editor?</strong></p>
<p>My suspicion is that it's something to do with some sort of path be it the system path, the library path, or python's path</p>
<h1>Approaches tried</h1>
<ul>
<li><a href="https://stackoverflow.com/questions/72308682/mach-o-file-but-is-an-incompatible-architecture-have-x86-64-need-arm64e">This question</a> seems like it might be relevant?</li>
<li><code>platform</code> is implemented in python. Looking at the source code, it calls <code>os.uname()</code> directly, and this seems to be lying about the architecture in my editor</li>
</ul>
<pre><code>#+begin_src python :results output
import os
print(os.uname())
#+end_src
#+RESULTS:
: posix.uname_result(sysname='Darwin', nodename='Danielas-MacBook-Air.local', release='22.4.0', version='Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:41 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T8103', machine='x86_64')
</code></pre>
<p><code>os.uname</code> appears to be a builtin and I assume that it is implemented in C, which makes it difficult to debug... but hoping that the shell <code>uname</code> command works in the same way as <code>os.uname</code>, it looks like <code>uname -a</code> is lying as well.</p>
<pre><code>#+begin_src shell :results output
uname -a
#+end_src
#+RESULTS:
: Darwin computer.local 22.4.0 Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:41 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T8103 x86_64
</code></pre>
<p>I'm suspicious that some sort of emulation is going on...</p>
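<p>macOS exposes a sysctl, <code>sysctl.proc_translated</code>, that reports whether the current process is running under Rosetta 2 translation. A hedged helper (returns False off macOS or when the sysctl is absent):</p>

```python
import subprocess
import sys


def running_under_rosetta() -> bool:
    """True if this process is x86_64-translated on Apple silicon (macOS only)."""
    if sys.platform != "darwin":
        return False
    try:
        out = subprocess.run(
            ["sysctl", "-n", "sysctl.proc_translated"],
            capture_output=True, text=True, check=False,
        ).stdout.strip()
    except OSError:
        return False
    return out == "1"


print(running_under_rosetta())
```

If this prints True from an org-mode source block but False in the terminal, the editor (or the shell it inherits) was launched under Rosetta, which would explain <code>machine='x86_64'</code> from <code>os.uname()</code> and the arm64/x86_64 import failure.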
|
<python><macos><apple-m1>
|
2023-06-05 13:52:20
| 0
| 1,947
|
Att Righ
|
76,407,120
| 12,214,867
|
The same code snippet works well as a standalone script but gets stuck when running in a Jupyter Notebook cell. How do I fix it?
|
<p>I'm using OpenCV 4.7.0 on macOS 13 with an M1 chip. The following Python code works well as a standalone script, where it captures camera input and renders it on the screen. Pressing 'q' quits the program:</p>
<pre class="lang-py prettyprint-override"><code>import cv2
cap = cv2.VideoCapture(0)
while cap.isOpened():
ret, frame = cap.read()
cv2.imshow('OpenCV Feed', frame)
if cv2.waitKey(10) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>However, when I try running the same code in a Jupyter Notebook cell, it gets stuck indefinitely after rendering the camera feed when I press 'q'.</p>
<p><a href="https://i.sstatic.net/YEy1X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YEy1X.png" alt="enter image description here" /></a></p>
<p>I have considered a few potential factors causing this issue:</p>
<ol>
<li><p>Jupyter Notebook: There could be a compatibility issue or a bug specific to Jupyter Notebook that causes the code to get stuck when 'q' is pressed.</p>
</li>
<li><p>OpenCV: There might be an issue with OpenCV. I have already fetched the latest code from GitHub, built and installed it.</p>
</li>
<li><p>Incorrect usage: It's possible that there is an error in how I'm using the code in the Jupyter Notebook environment.</p>
</li>
</ol>
<p>Any suggestions on where I should dig deeper to troubleshoot this issue would be appreciated.</p>
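<p>One frequently reported workaround (an assumption on my part — not verified on this exact macOS/Jupyter combination) is that HighGUI's window teardown needs the event loop pumped after <code>destroyAllWindows()</code>; a plain script gets this implicitly when the process exits, but a long-lived notebook kernel does not:</p>

```python
# Workaround sketch: replace the tail of the question's snippet with this,
# pumping the HighGUI event loop so the destroy events are processed.
cap.release()
cv2.destroyAllWindows()
for _ in range(5):
    cv2.waitKey(1)
```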
|
<python><macos><opencv><jupyter-notebook>
|
2023-06-05 13:51:19
| 0
| 1,097
|
JJJohn
|
76,406,958
| 6,562,240
|
Python: When using a for loop to create list, if blank insert value?
|
<p>I have the following code:</p>
<pre><code> for car in car_list:
specs = car.find_elements(By.XPATH, ".//li[@class='sc-kcuKUB bMKvpn']")
car_specs = []
for spec in specs:
entry = spec.get_attribute('innerHTML')
entry = spec.text
car_specs.append(entry)
all_specs.append(car_specs)
</code></pre>
<p>Which correctly produces output such as the following:</p>
<p><code>print(all_specs)</code>:</p>
<blockquote>
<p>['2021 (21 reg)', 'SUV', '35,008 miles', '395BHP', 'Automatic',
'Electric', '1 owner'] [['2021 (21 reg)', 'SUV', '35,008 miles',
'395BHP', 'Automatic', 'Electric', '1 owner']]</p>
</blockquote>
<p>However, sometimes the specs for a car are blank, resulting in the following:</p>
<blockquote>
<p>['2021 (21 reg)', 'SUV', '35,008 miles', '395BHP', 'Automatic',
'Electric', '1 owner'] [['2021 (21 reg)', 'SUV', '35,008 miles',
'395BHP', 'Automatic', 'Electric', '1 owner'] []]</p>
</blockquote>
<p>(Note the empty list at the end)</p>
<p>When I come to concat this dataframe to another, there is a row transposition issue as there is a blank list in this dataframe, but not in the other.</p>
<p>How can I insert a text value to at least populate the list if spec or specs are blank?</p>
<p>I have tried the following:</p>
<pre><code>for spec in specs:
if spec is None:
car_specs.append('texthere')
else:
entry = spec.get_attribute('innerHTML')
entry = spec.text
car_specs.append(entry)
</code></pre>
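<p>The guard needs to be on the list, not on the items: when <code>specs</code> is empty the inner <code>for</code> loop body never runs, so <code>if spec is None</code> is never even evaluated. Checking <code>car_specs</code> after the loop covers that case — a selenium-free sketch:</p>

```python
# Stand-in data: the third "car" has no spec elements at all.
specs_per_car = [['2021 (21 reg)', 'SUV'], ['2020 (20 reg)'], []]

all_specs = []
for specs in specs_per_car:
    car_specs = [spec for spec in specs]
    if not car_specs:              # empty list -> the inner loop never ran
        car_specs.append('texthere')
    all_specs.append(car_specs)

print(all_specs)  # [['2021 (21 reg)', 'SUV'], ['2020 (20 reg)'], ['texthere']]
```

Because every sublist now has at least one entry, row alignment is preserved when concatenating with the other dataframe.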
|
<python><list>
|
2023-06-05 13:33:25
| 2
| 705
|
Curious Student
|
76,406,866
| 9,758,790
|
Selenium: Get the URL that Javascript codes redircted to without changing current page
|
<p>I want to get the URL I am redirected to when I click a button on the website, while still staying on the same page instead of being redirected. The button is not a simple hyperlink and will trigger a complex Javascript function.</p>
<h3>click() does not work for me</h3>
<p>I refer to <a href="https://stackoverflow.com/questions/74396709/selenium-get-url-of-a-tag-without-href-attribute/74401877#74401877">Selenium get URL of "a" Tag without href attribute</a> and <a href="https://stackoverflow.com/questions/49588021/selenium-getting-ultimate-href-link-without-clicking-on-it-when-its-a-javascri">Selenium: Getting ultimate href/link without clicking on it when it's a javascript call</a>, which show that the button has to be clicked to get the URL. Of course, I can click() the button and use 'driver.current_url', but in this way I am redirected to a new URL. I do not want the redirection since I have more things to do on the current page and want to stay on the same page. I cannot use <kbd>CTRL</kbd> + <kbd>click</kbd> to open the new page in a new tab, either.</p>
<h3>My idea</h3>
<p>I read about how to find the source Javascript code (<a href="https://stackoverflow.com/questions/23472334/how-to-find-what-code-is-run-by-a-button-or-element-in-chrome-using-developer-to">How to find what code is run by a button or element in Chrome using Developer Tools</a>), but I found the code too complex for me. Should I find the code that produces the URL?</p>
<p>I wonder, can I just "fork" the current page? Or something like simulating/executing the Javascript code in a separate sandbox? I am using python.</p>
|
<javascript><python><html><selenium-webdriver>
|
2023-06-05 13:22:54
| 2
| 3,084
|
hellohawaii
|
76,406,816
| 22,009,322
|
How to set broken bar order after grouping the dataframe
|
<p>The code example below draws a broken barh diagram with a list of persons who joined and left a music band during a period of time:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
result = pd.DataFrame([['Bill', 1972, 1974],
['Bill', 1976, 1978],
['Bill', 1967, 1971],
['Danny', 1969, 1975],
['Danny', 1976, 1977],
['James', 1971, 1972],
['Marshall', 1967, 1975]],
columns=['Person', 'Year_start', 'Year_left'])
fig, ax = plt.subplots()
names = sorted(result['Person'].unique())
colormap = plt.get_cmap('plasma')
slicedColorMap = colormap(np.linspace(0, 1, len(names)))
height = 0.5
for y, (name, g) in enumerate(result.groupby('Person')):
ax.broken_barh(list(zip(g['Year_start'],
g['Year_left']-g['Year_start'])),
(y-height/2, height),
facecolors=slicedColorMap[y]
)
ax.set_ylim(0-height, len(names)-1+height)
ax.set_xlim(result['Year_start'].min()-1, result['Year_left'].max()+1)
ax.set_yticks(range(len(names)), names)
ax.grid(True)
plt.show()
</code></pre>
<p>The output result is this diagram:
<a href="https://i.sstatic.net/zDabL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zDabL.png" alt="enter image description here" /></a></p>
<p>I need to order the bars (along with the Persons in y axis) by 'Year_start' and 'Year_left', both in ascending order.</p>
<p>I know how to aggregate and order values in the dataframe after the data is grouped, and that I should reset the index afterwards:</p>
<pre><code>sorted_result = result.groupby('Person').agg({'Year_start': min, 'Year_left': max})
sorted_result = sorted_result.sort_values(['Year_start', 'Year_left'], ascending=[True, True]).reset_index()
print(sorted_result)
</code></pre>
<p>But I am having a hard time embedding this sorting into the existing "for in" loop that draws with ax.broken_barh (also because, as I understand it, it is not possible to perform "sort_values" with "groupby" using "agg" in a single iteration).
Is this sorting possible in this script at all, or should I completely reconsider the script structure?
Many thanks!</p>
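<p>A sketch of one way to do it without restructuring the script: compute the desired order first, then drive the existing loop from that list instead of <code>groupby</code>'s alphabetical order (the matplotlib calls are unchanged, so they are only hinted at here):</p>

```python
import pandas as pd

result = pd.DataFrame([['Bill', 1972, 1974], ['Bill', 1976, 1978],
                       ['Bill', 1967, 1971], ['Danny', 1969, 1975],
                       ['Danny', 1976, 1977], ['James', 1971, 1972],
                       ['Marshall', 1967, 1975]],
                      columns=['Person', 'Year_start', 'Year_left'])

# Decide the y-axis order once, before plotting ...
order = (result.groupby('Person')
               .agg(Year_start=('Year_start', 'min'),
                    Year_left=('Year_left', 'max'))
               .sort_values(['Year_start', 'Year_left'])
               .index.tolist())

# ... then iterate over that order instead of groupby's alphabetical one.
for y, name in enumerate(order):
    g = result[result['Person'] == name]
    # ax.broken_barh(..., (y - height/2, height), ...)  # unchanged

print(order)  # ['Marshall', 'Bill', 'Danny', 'James']
```

With the loop driven by <code>order</code>, replacing <code>names</code> with <code>order</code> in <code>ax.set_yticks(range(len(order)), order)</code> keeps the labels in step with the bars.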
|
<python><pandas><matplotlib>
|
2023-06-05 13:17:56
| 2
| 333
|
muted_buddy
|
76,406,637
| 2,263,683
|
How to add custom HTML content to FastAPI Swagger UI docs?
|
<p>I need to add a custom button in the Swagger UI of my FastAPI application. I found <a href="https://stackoverflow.com/questions/74661044/add-a-custom-javascript-to-the-fastapi-swagger-ui-docs-webpage-in-python/75022153#75022153">this answer</a>, which suggests a good solution for adding custom javascript to Swagger UI, along with <a href="https://fastapi.tiangolo.com/ru/advanced/extending-openapi/" rel="nofollow noreferrer">this documentation</a> from FastAPI. But this solution only works for adding custom javascript code. I tried to add some HTML code for a new button using the Swagger UI Authorise button style:</p>
<pre><code>custom_html = '<div class="scheme-containerr"><section class="schemes wrapper block col-12"><div class="auth-wrapper"><button class="btn authorize"><span>Authorize Google</span><svg width="20" height="20"><use href="#unlocked" xlink:href="#unlocked"></use></svg></button></div></section></div>'
@app.get("/docs", include_in_schema=False)
async def custom_swagger_ui_html():
return get_swagger_ui_html(
openapi_url=app.openapi_url,
title=app.title + " - Swagger UI",
oauth2_redirect_url=app.swagger_ui_oauth2_redirect_url,
swagger_js_url="/static/swagger-ui-bundle.js",
swagger_css_url="/static/swagger-ui.css",
custom_js_url=google_custom_button,
custom_html=custom_html,
)
def get_swagger_ui_html(
*,
...
custom_html: Optional[str] = None,
) -> HTMLResponse:
...
html = f"""
<!DOCTYPE html>
<html>
<head>
<link type="text/css" rel="stylesheet" href="{swagger_css_url}">
<link rel="shortcut icon" href="{swagger_favicon_url}">
<title>{title}</title>
</head>
<body>
<div id="swagger-ui">
{custom_html if custom_html else ""} # <-- I added the HTML code here
</div>
"""
....
</code></pre>
<p>But it looks like whatever I put between <code><div id="swagger-ui"></div></code> gets overwritten somehow and won't make it into the Swagger UI.</p>
<p>How to add custom HTML (in this case, buttons like Swagger's Authorise button) for specific needs in Swagger UI using FastAPI?</p>
<p><strong>Update</strong></p>
<p>If I add the custom HTML outside of the <code><div id="swagger-ui"></div></code> I can see my custom button in Swagger UI like this:</p>
<p><a href="https://i.sstatic.net/dSZD4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dSZD4.png" alt="enter image description here" /></a></p>
<p>But I would like to add my button where the original Authorise button is.</p>
|
<python><swagger><fastapi><swagger-ui><openapi>
|
2023-06-05 12:56:00
| 1
| 15,775
|
Ghasem
|
76,406,611
| 6,562,240
|
IndexError: list index out of range error: Python
|
<p>A basic error, I know, but I'm unsure why it's occurring after troubleshooting.</p>
<p>I have <code>all_specs</code> a list of lists:</p>
<pre><code>[['2020 (70 reg)', 'SUV', '34,181 miles', '395BHP', 'Automatic', 'Electric', '1 owner'], ['2019 (69 reg)', 'SUV', '43,243 miles', '400PS', 'Automatic'], ['2019 (19 reg)', 'SUV', '62,300 miles', '400PS', 'Automatic', 'Electric', '1 owner'], ['2018 (68 reg)', 'SUV', '26,850 miles', '400PS', 'Automatic', 'Electric', 'Full dealership history'], ['2020 (20 reg)', 'SUV', '46,000 miles', '395BHP', 'Automatic', 'Electric', '1 owner'], ['2020 (20 reg)', 'SUV', '34,000 miles', '400PS', 'Automatic', 'Electric'], ['2019 (68 reg)', 'SUV', '31,559 miles', '395BHP', 'Automatic', 'Electric'], ['2018 (68 reg)', 'SUV', '70,000 miles', '400PS', 'Automatic', 'Electric', '1 owner', 'Full dealership history'], ['2019 (19 reg)', 'SUV', '30,000 miles', '395BHP', 'Automatic', 'Electric', '1 owner'], ['2019 (19 reg)', 'SUV', '48,800 miles', '400PS', 'Automatic', 'Electric', '1 owner'], ['2020 (20 reg)', 'SUV', '18,043 miles', '395BHP', 'Automatic', 'Electric', '1 owner'], ['2019 (69 reg)', 'SUV', '39,000 miles', '400PS', 'Automatic', 'Electric', '1 owner', 'Full dealership history'], ['2022 (22 reg)', 'SUV', '13,733 miles', '299PS', 'Automatic', 'Electric', '1 owner'], [], ['2019 (69 reg)', 'SUV', '24,418 miles', '395BHP', 'Automatic', 'Electric', '1 owner'], ['2020 (20 reg)', 'SUV', '35,700 miles', '395BHP', 'Automatic', 'Electric', '1 owner'], ['2019 (19 reg)', 'SUV', '22,611 miles', '400PS', 'Automatic', 'Electric', 'Full service history'], ['2019 (69 reg)', 'SUV', '39,000 miles', '400PS', 'Automatic', 'Electric', '1 owner', 'Full dealership history'], ['2020 (20 reg)', 'SUV', '41,434 miles', '400PS', 'Automatic'], ['2019 (69 reg)', 'SUV', '41,019 miles', '400PS', 'Automatic', 'Electric'], ['2018 (18 reg)', 'SUV', '30,688 miles', '400PS', 'Automatic', 'Electric'], ['2019 (19 reg)', 'SUV', '26,393 miles', '400PS', 'Automatic', 'Electric'], ['2018 (68 reg)', 'SUV', '21,038 miles', '400PS', 'Automatic', 'Electric'], ['2019 (19 reg)', 'SUV', '64,000 miles', '400PS', 'Automatic', 
'Electric', '1 owner', 'Full dealership history'], ['2020 (70 reg)', 'SUV', '34,600 miles', '400PS', 'Automatic', 'Electric', '2 owners'], ['2019 (69 reg)', 'SUV', '43,243 miles', '400PS', 'Automatic'], ['2020 (20 reg)', 'Saloon', '22,691 miles', '241BHP', 'Automatic', 'Electric']]
</code></pre>
<p>I am trying to create a list containing the first element of each of the sublists above. Using <a href="https://stackoverflow.com/questions/25050311/extract-first-item-of-each-sublist-in-python">this Stackoverflow question here</a> I've tried the following:</p>
<pre><code>all_ages = [item[0] for item in all_specs]
</code></pre>
<p>But that is generating the error:</p>
<blockquote>
<p>IndexError: list index out of range</p>
</blockquote>
<p>I'm unsure why, because if I do a print of all_specs it appears to work:</p>
<pre><code>[print(item[0]) for item in all_specs]
</code></pre>
<p>Outputs:</p>
<blockquote>
<p>2020 (70 reg) 2019 (69 reg) 2019 (19 reg) 2018 (68 reg) 2020 (20 reg)
2020 (20 reg) 2019 (68 reg) 2018 (68 reg)</p>
</blockquote>
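<p>The failure comes from the empty sublist (<code>[]</code> has no element <code>0</code>); the <code>print</code> version prints the first elements up to that point and then raises the same error, which is why it only "appears" to work. Guarding against empty sublists fixes both forms — a small sketch:</p>

```python
all_specs = [['2020 (70 reg)', 'SUV'], [], ['2019 (69 reg)']]

# Skip empty sublists entirely ...
all_ages = [item[0] for item in all_specs if item]
print(all_ages)          # ['2020 (70 reg)', '2019 (69 reg)']

# ... or keep row alignment with a placeholder instead:
all_ages_aligned = [item[0] if item else None for item in all_specs]
print(all_ages_aligned)  # ['2020 (70 reg)', None, '2019 (69 reg)']
```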
|
<python><index-error>
|
2023-06-05 12:53:13
| 3
| 705
|
Curious Student
|
76,406,487
| 16,306,516
|
search Method Optimization for searching a field in odoo
|
<p>Actually I have a field in sale order which I update in an onchange with a datetime field, and that method has to be optimized. I am unable to figure out what to do, so I need help. Here is the method:</p>
<p>Here the <code>time_slot</code> field is a selection field with AM and PM values.</p>
<pre><code> def test(self):
tomorrow = datetime.datetime.today() + datetime.timedelta(days=1)
date_list = [(tomorrow + datetime.timedelta(days=x)).date() for x in range(30)]
technician_line = self.env["allocated_technician_line"].search(
[
("service_type_id", "=", self.service_type_id.id),
("time_slot", "=", self.time_slot),
(
"technician_id.user_id",
"in",
self.area_id.technicians.ids,
),
(
"allocated_technician_id.date",
"in",
date_list,
),
]
)
allocated_technician_schedule_ids = []
for line in technician_line:
allocated_sale_order = self.env["sale.order"].search(
[
(
"preferred_service_schedule",
"=",
line.allocated_technician_id.id,
),
("service_type_id", "=", line.service_type_id.id),
("time_slot", "=", line.time_slot),
("state", "!=", "cancel"),
]
)
if self.env.context.get('active_id'):
allocated_sale_order = allocated_sale_order.filtered(
lambda r: r.id != self.env.context.get('active_id')
)
available_allocation = line.allocation - len(allocated_sale_order)
if available_allocation > 0:
allocated_technician_schedule_ids.append(line.allocated_technician_id.id)
allocated_technician_schedule_recs = self.env["allocated_technician"].browse(allocated_technician_schedule_ids)
user_ids = allocated_technician_schedule_recs.mapped('user_id').ids
if (self.assigned_asc.id not in user_ids):
self["assigned_asc"] = False
</code></pre>
<p>I need help optimizing this method, specifically to move the <code>allocated_sale_order</code> search out of the for loop.</p>
|
<python><odoo><odoo-15>
|
2023-06-05 12:36:39
| 1
| 726
|
Sidharth Panda
|
76,406,440
| 14,224,948
|
I cannot pass the values between the methods in classBasedView
|
<p>I need to pass the bundle and the message values to the get_context_data() method, but I cannot figure out how to do it.
In this instance the form is valid when I submit it (I can add better error handling once I figure out why the data gets updated in the post method but not in get_context_data()).
The form has just 1 field and it takes a file.</p>
<p>Please help.</p>
<pre><code>class FirmwareView(FormView, TemplateView):
template_name = "dev_tools/firmware_cbv2.html"
form_class = forms.InitialFirmwareForm2
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.bundle = {}
self.message = {}
def get_success_url(self):
return self.request.path
def post(self, request, *args, **kwargs):
form = self.get_form()
if form.is_valid():
if request.FILES:
if "file" in request.FILES:
file = request.FILES["file"]
try:
content = json.loads(file.read().decode('utf-8'))
folder_name = utils.format_folder_name(content["title"])
units = []
for unit in content["units"]:
units.append(unit)
self.bundle["units"] = units
self.bundle["file"] = {
"name": folder_name,
"content": json.dumps(content)
}
self.message = {
"type": "info",
"content": "The form was parsed successfully"
}
except Exception as e:
print("there was an error", e)
return self.form_valid(form)
else:
print("the form is invalid")
return self.form_invalid(form)
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context["bundle"] = self.bundle
context["msg"] = self.message
return context
def form_valid(self, form):
if isinstance(form, forms.InitialFirmwareForm2):
return super().form_valid(form)
</code></pre>
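One likely cause (an assumption worth checking, not a certainty): `form_valid()` in a `FormView` returns a redirect to `get_success_url()`, and Django builds a brand-new view instance for the follow-up GET request, so any attributes set on `self` during the POST are gone by the time `get_context_data()` runs. A framework-free sketch of that lifecycle:

```python
class FakeFormView:
    """Minimal stand-in for a Django class-based view: Django creates a NEW
    instance of the view class for every request."""

    def __init__(self):
        self.bundle = {}
        self.message = {}

    def post(self):
        # What the question's post() does: mutate instance state...
        self.bundle["units"] = ["unit-1"]
        self.message = {"type": "info", "content": "parsed"}
        # ...then form_valid() returns a redirect to get_success_url().
        return "302 redirect"


# Request 1: the POST mutates *this* instance, then redirects.
view_for_post = FakeFormView()
view_for_post.post()

# Request 2: the browser follows the redirect with a GET; Django builds a
# brand-new instance, so the state set during the POST is gone.
view_for_get = FakeFormView()
print(view_for_get.bundle)   # {} -- which is why get_context_data() sees nothing
```

If that is indeed the issue, two standard Django remedies: in `post()`, return `self.render_to_response(self.get_context_data(form=form))` instead of `self.form_valid(form)` so the same instance renders the template, or store the parsed data in `request.session` before redirecting.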
|
<python><django><class><django-class-based-views>
|
2023-06-05 12:31:47
| 1
| 1,086
|
Swantewit
|
76,406,418
| 20,920,790
|
How to delete white stripes and unite the legends?
|
<p>I got this data:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">reg_month</th>
<th style="text-align: right;">no_ses_users</th>
<th style="text-align: right;">ses_users</th>
<th style="text-align: right;">no_ses_users_ptc</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2021-01-01</td>
<td style="text-align: right;">29</td>
<td style="text-align: right;">101</td>
<td style="text-align: right;">22.31</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">2021-02-01</td>
<td style="text-align: right;">48</td>
<td style="text-align: right;">188</td>
<td style="text-align: right;">20.34</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">2021-03-01</td>
<td style="text-align: right;">86</td>
<td style="text-align: right;">303</td>
<td style="text-align: right;">22.11</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">2021-04-01</td>
<td style="text-align: right;">111</td>
<td style="text-align: right;">381</td>
<td style="text-align: right;">22.56</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">2021-05-01</td>
<td style="text-align: right;">141</td>
<td style="text-align: right;">479</td>
<td style="text-align: right;">22.74</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: left;">2021-06-01</td>
<td style="text-align: right;">177</td>
<td style="text-align: right;">564</td>
<td style="text-align: right;">23.89</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td style="text-align: left;">2021-07-01</td>
<td style="text-align: right;">224</td>
<td style="text-align: right;">661</td>
<td style="text-align: right;">25.31</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td style="text-align: left;">2021-08-01</td>
<td style="text-align: right;">257</td>
<td style="text-align: right;">746</td>
<td style="text-align: right;">25.62</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td style="text-align: left;">2021-09-01</td>
<td style="text-align: right;">289</td>
<td style="text-align: right;">832</td>
<td style="text-align: right;">25.78</td>
</tr>
<tr>
<td style="text-align: right;">9</td>
<td style="text-align: left;">2021-10-01</td>
<td style="text-align: right;">319</td>
<td style="text-align: right;">934</td>
<td style="text-align: right;">25.46</td>
</tr>
<tr>
<td style="text-align: right;">10</td>
<td style="text-align: left;">2021-11-01</td>
<td style="text-align: right;">341</td>
<td style="text-align: right;">1019</td>
<td style="text-align: right;">25.07</td>
</tr>
<tr>
<td style="text-align: right;">11</td>
<td style="text-align: left;">2021-12-01</td>
<td style="text-align: right;">384</td>
<td style="text-align: right;">1111</td>
<td style="text-align: right;">25.69</td>
</tr>
<tr>
<td style="text-align: right;">12</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">422</td>
<td style="text-align: right;">1203</td>
<td style="text-align: right;">25.97</td>
</tr>
<tr>
<td style="text-align: right;">13</td>
<td style="text-align: left;">2022-02-01</td>
<td style="text-align: right;">451</td>
<td style="text-align: right;">1292</td>
<td style="text-align: right;">25.87</td>
</tr>
<tr>
<td style="text-align: right;">14</td>
<td style="text-align: left;">2022-03-01</td>
<td style="text-align: right;">482</td>
<td style="text-align: right;">1377</td>
<td style="text-align: right;">25.93</td>
</tr>
<tr>
<td style="text-align: right;">15</td>
<td style="text-align: left;">2022-04-01</td>
<td style="text-align: right;">518</td>
<td style="text-align: right;">1468</td>
<td style="text-align: right;">26.08</td>
</tr>
<tr>
<td style="text-align: right;">16</td>
<td style="text-align: left;">2022-05-01</td>
<td style="text-align: right;">544</td>
<td style="text-align: right;">1553</td>
<td style="text-align: right;">25.94</td>
</tr>
<tr>
<td style="text-align: right;">17</td>
<td style="text-align: left;">2022-06-01</td>
<td style="text-align: right;">584</td>
<td style="text-align: right;">1633</td>
<td style="text-align: right;">26.34</td>
</tr>
<tr>
<td style="text-align: right;">18</td>
<td style="text-align: left;">2022-07-01</td>
<td style="text-align: right;">620</td>
<td style="text-align: right;">1722</td>
<td style="text-align: right;">26.47</td>
</tr>
<tr>
<td style="text-align: right;">19</td>
<td style="text-align: left;">2022-08-01</td>
<td style="text-align: right;">651</td>
<td style="text-align: right;">1813</td>
<td style="text-align: right;">26.42</td>
</tr>
<tr>
<td style="text-align: right;">20</td>
<td style="text-align: left;">2022-09-01</td>
<td style="text-align: right;">662</td>
<td style="text-align: right;">1847</td>
<td style="text-align: right;">26.39</td>
</tr>
</tbody>
</table>
</div>
<p>I make graph with second axis with plt.bar (y axis), plt.plot (second y axis):</p>
<pre><code> import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
fig, ax = plt.subplots(figsize=(10, 6))
ax1 = plt.bar(
x=df2_3['reg_month'],
height=df2_3['ses_users'],
label='Users with sessions',
edgecolor = 'black',
linewidth = 0,
width=20,
color = '#3049BF'
)
ax2 = plt.bar(
x=df2_3['reg_month'],
height=df2_3['no_ses_users'],
bottom=df2_3['ses_users'],
label='Users without sessions',
edgecolor = 'black',
linewidth = 0,
width=20,
color = '#BF9530'
)
secax = ax.twinx()
secax.set_ylim(min(df2_3['no_ses_users_ptc'])*0.5, max(df2_3['no_ses_users_ptc'])*1.1)
ax3 = plt.plot(
df2_3['reg_month'],
df2_3['no_ses_users_ptc'],
color = '#E97800',
label='Users without sessions, ptc'
)
ax.legend()
secax.legend()
plt.show()
</code></pre>
<p>How can I remove the white stripes and combine the two legends into one?</p>
<p>Output:
<a href="https://i.sstatic.net/OENjH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OENjH.png" alt="plt.show" /></a></p>
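A possible approach (a sketch with made-up data standing in for `df2_3`; the column names are taken from the question): the white gaps plausibly come from `width=20` being narrower than the roughly 30-day monthly spacing, so widening the bars closes them, and the two legends can be merged by collecting handles and labels from both axes into a single `ax.legend` call.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt
import pandas as pd

# Toy stand-in for the question's df2_3 (assumed column names).
df = pd.DataFrame({
    "reg_month": pd.date_range("2021-01-01", periods=6, freq="MS"),
    "ses_users": [101, 188, 303, 381, 479, 564],
    "no_ses_users": [29, 48, 86, 111, 141, 177],
    "no_ses_users_ptc": [22.31, 20.34, 22.11, 22.56, 22.74, 23.89],
})

fig, ax = plt.subplots(figsize=(10, 6))
# Width ~27 days (close to the monthly spacing) closes the white gaps
# that width=20 leaves between bars.
ax.bar(df["reg_month"], df["ses_users"], width=27, label="Users with sessions")
ax.bar(df["reg_month"], df["no_ses_users"], bottom=df["ses_users"],
       width=27, label="Users without sessions")

secax = ax.twinx()
secax.plot(df["reg_month"], df["no_ses_users_ptc"],
           color="#E97800", label="Users without sessions, ptc")

# One combined legend: gather handles/labels from BOTH axes.
handles1, labels1 = ax.get_legend_handles_labels()
handles2, labels2 = secax.get_legend_handles_labels()
ax.legend(handles1 + handles2, labels1 + labels2, loc="upper left")
```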
|
<python><matplotlib>
|
2023-06-05 12:28:23
| 1
| 402
|
John Doe
|
76,406,390
| 9,786,534
|
What is the behaviour of xarray when multiplying data arrays?
|
<p>I would like to multiply two data arrays of same dimensions:</p>
<pre class="lang-py prettyprint-override"><code>print(data1)
<xarray.DataArray 'var' (lat: 2160, lon: 4320)>
[9331200 values with dtype=int8]
Coordinates:
* lon (lon) float64 -180.0 -179.9 -179.8 -179.7 ... 179.8 179.9 180.0
* lat (lat) float64 89.96 89.88 89.79 89.71 ... -89.79 -89.88 -89.96
print(data2)
<xarray.DataArray 'var' (lat: 2160, lon: 4320)>
[9331200 values with dtype=float32]
Coordinates:
* lon (lon) float64 -180.0 -179.9 -179.8 -179.7 ... 179.8 179.9 180.0
* lat (lat) float64 89.96 89.88 89.79 89.71 ... -89.79 -89.87 -89.96
</code></pre>
<p><code>data1 * data2</code> returns this error:</p>
<pre><code>ValueError: Cannot apply_along_axis when any iteration dimensions are 0
</code></pre>
<p>Note that following <a href="https://stackoverflow.com/questions/49186609/how-to-multiply-python-xarray-datasets">this thread</a>, I made sure to have consistent dimensions and re-indexed both data arrays.</p>
<p>Since both arrays have different <code>dtype</code>, I have tried <code>data1.astype(np.float64) * data2</code>, but that returned the same error.</p>
<p>On the other hand, this returned an empty array:</p>
<pre class="lang-py prettyprint-override"><code>data3 = data1.astype(np.float64) * data2.astype(np.float64)
print(data3)
<xarray.DataArray 'var' (lat: 0, lon: 0)>
array([], shape=(0, 0), dtype=float64)
Coordinates:
* lon (lon) float64
* lat (lat) float64
</code></pre>
<p>The only way I found to achieve this multiplication was to get the underlying np data:</p>
<pre class="lang-py prettyprint-override"><code>data3 = data1.data * data2.data
</code></pre>
<p>Although this works for my need, I am still curious to understand why the pure xarray method fails. Can anyone inform me or point me towards a part of the documentation I might have missed?</p>
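One thing worth noticing in the printed output: the `lat` coordinates of the two arrays are not identical (`-89.88` vs `-89.87` near the end). xarray arithmetic automatically aligns operands on their coordinate *labels* and keeps only the intersection, so tiny floating-point mismatches silently drop rows, and a full mismatch yields the empty `(0, 0)` result. The label-matching behaviour can be illustrated without xarray at all:

```python
# xarray arithmetic first ALIGNS the operands on their coordinate labels
# and keeps only the intersection -- illustrated here with plain lists.
lat1 = [89.96, 89.88, 89.79, -89.88]
lat2 = [89.96, 89.88, 89.79, -89.87]   # note: the last label differs slightly

common = [v for v in lat1 if v in lat2]
print(common)   # [89.96, 89.88, 89.79] -- the mismatched row is dropped
```

If the grids are meant to be identical, forcing one array's coordinates onto the other before multiplying should restore the expected behaviour, e.g. `data2 = data2.assign_coords(lat=data1.lat, lon=data1.lon)` or `xr.align(data1, data2, join="override")` (both standard xarray APIs).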
|
<python><python-3.x><numpy><geospatial><python-xarray>
|
2023-06-05 12:24:39
| 1
| 324
|
e5k
|
76,406,296
| 9,072,753
|
how to chain first and last from subchains to chains properly in airflow?
|
<p>Consider the following:</p>
<pre><code>sod = (
DummyOperator(task_id="sod")
>> DummyOperator(task_id="sod_do_this")
>> DummyOperator(task_id="sod_last")
)
(
BranchPythonOperator(
task_id="should_run_sod",
python_callable=lambda: "sod", # no_sod
)
>> [sod, DummyOperator(task_id="no_sod")]
>> DummyOperator(task_id="end")
)
</code></pre>
<p>The idea is to "connect" the subchain "sod" in between. However, <code>__rshift__</code> returns the last operator in the chain, so <code>sod = DummyOperator(task_id="sod_last")</code> and the dependencies get mixed up: <code>should_run_sod</code> is connected to <code>sod_last</code>, not to <code>sod</code>.</p>
<p>Can I write this in some simple way, other than assigning variables everywhere? I would like the same result as the following, which however requires separate variables for the first and last tasks and becomes more convoluted:</p>
<pre><code>sod = DummyOperator(task_id="sod")
sod_do_this = DummyOperator(task_id="sod_do_this")
sod_last = DummyOperator(task_id="sod_last")
sod >> sod_do_this >> sod_last
should_run_sod = BranchPythonOperator(
task_id="should_run_sod",
python_callable=lambda: "sod", # no_sod
)
no_sod = DummyOperator(task_id="no_sod")
should_run_sod >> [sod, no_sod]
end = DummyOperator(task_id="end")
sod_last >> end
no_sod >> end
</code></pre>
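The behaviour in question can be reproduced with a tiny stand-in class, which makes clear why the chain expression evaluates to the last task rather than the first:

```python
class Op:
    """Tiny stand-in for an Airflow operator, enough to show what >> returns."""
    def __init__(self, task_id):
        self.task_id = task_id
        self.downstream = []

    def __rshift__(self, other):
        self.downstream.append(other)
        return other          # like Airflow, >> returns the RIGHT operand

# The whole chain expression therefore evaluates to the LAST operator:
sod = Op("sod") >> Op("sod_do_this") >> Op("sod_last")
print(sod.task_id)   # sod_last -- not "sod"
```

So some reference to both ends of the subchain is unavoidable with bare `>>`. Two Airflow-native options that may reduce the boilerplate (sketched from my understanding, worth verifying against your Airflow 2.x version): `airflow.models.baseoperator.chain(...)` with named first/last variables, or wrapping the subchain in a `TaskGroup`, which lets upstream/downstream edges attach to the group as a single node.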
|
<python><airflow><airflow-2.x>
|
2023-06-05 12:13:28
| 2
| 145,478
|
KamilCuk
|
76,406,138
| 3,829,004
|
Why Python round-off is different from the excel round off? How to get the same result
|
<p>I am facing an issue with python round-off.</p>
<p>Input value = 3.7145,
which I have tried in Excel with the</p>
<pre><code>formula: =round(C9/1000,6)
</code></pre>
<p>and I get the
<strong>Output = 0.003715,</strong></p>
<p>Same thing when I tried with Python with the below code:</p>
<p><strong>Python Code:</strong></p>
<pre><code>number = 3.7145
result = round(number / 1000, 6)
print(result)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>0.003714
</code></pre>
<p>How can I ensure that I get exactly the same value? My requirement is high precision, so even a 0.000001 variation is not allowed.
I have tried the same with other numbers, where Excel and Python do agree:</p>
<pre><code>Input : 1.0127, 2.3216, 5.4938
Output: 0.001013, 0.002322, 0.005494 (Same in python & excel)
</code></pre>
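The discrepancy comes from binary floating point: the float closest to `3.7145 / 1000` is slightly *below* `0.0037145`, so Python's `round()` goes down, while Excel rounds the exact decimal value half away from zero. Doing the arithmetic with the `decimal` module reproduces Excel's result (a sketch):

```python
from decimal import Decimal, ROUND_HALF_UP

def excel_round(value, divisor, places):
    """Round like Excel's ROUND(): exact decimal arithmetic, halves away from zero."""
    quantum = Decimal(1).scaleb(-places)   # e.g. Decimal('0.000001') for places=6
    return (Decimal(str(value)) / Decimal(str(divisor))).quantize(
        quantum, rounding=ROUND_HALF_UP)

print(excel_round(3.7145, 1000, 6))   # 0.003715  (matches Excel)
print(round(3.7145 / 1000, 6))        # 0.003714  (binary float is just below the half)
```

Note that `Decimal(str(value))` converts via the decimal literal; feeding the raw float to `Decimal()` would preserve the binary inaccuracy and defeat the purpose.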
|
<python><excel><rounding>
|
2023-06-05 11:50:41
| 1
| 2,646
|
coder3521
|
76,406,024
| 15,219,428
|
Display both axes in sorted order for non numerical data
|
<p>How to achieve the correct order for both axes:</p>
<ul>
<li>a-b-c instead of c-a-b</li>
<li>x-y-z instead of y-z-x</li>
</ul>
<pre><code>import matplotlib.pyplot as plt
categories_x = ["c", "a", "b", "c", "b"]
categories_y = ["y", "z", "y", "x", "z"]
plt.scatter(categories_x, categories_y)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/6UkVL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6UkVL.png" alt="This plot is created by above code" /></a></p>
<p>There are a lot of solutions on SO, that rely on either of two properties:</p>
<ol>
<li>The data can be cast to numerical, e.g. ("1", "0", ...) -> cast to numerical</li>
<li>Only one axis has the wrong order -> sort the two arrays by this axis (the reason why this works is, that the axis-ticks are ordered by first occurrence)</li>
</ol>
<p>But for my example, neither of these solutions works.</p>
<p>I'm looking for a solution, of <strong>how to get this to work in matplotlib</strong>. I am aware, that there are other probably even better ways to convey the same message, or maybe other libraries that don't have this issue.</p>
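One matplotlib-only approach (a sketch): map each category to its index in the sorted order, scatter the numeric positions, and then relabel the ticks with the category names. This controls the order on both axes regardless of first occurrence.

```python
import matplotlib
matplotlib.use("Agg")   # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

categories_x = ["c", "a", "b", "c", "b"]
categories_y = ["y", "z", "y", "x", "z"]

# Map each category to its position in the sorted order...
x_order = sorted(set(categories_x))            # ['a', 'b', 'c']
y_order = sorted(set(categories_y))            # ['x', 'y', 'z']
x_pos = [x_order.index(v) for v in categories_x]
y_pos = [y_order.index(v) for v in categories_y]

# ...plot numerically, then put the category names back on the ticks.
fig, ax = plt.subplots()
ax.scatter(x_pos, y_pos)
ax.set_xticks(range(len(x_order)))
ax.set_xticklabels(x_order)
ax.set_yticks(range(len(y_order)))
ax.set_yticklabels(y_order)
```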
|
<python><matplotlib>
|
2023-06-05 11:36:03
| 4
| 1,609
|
MangoNrFive
|
76,406,021
| 1,523,874
|
PermissionError: [Errno 13] Permission denied: '/home/ubuntu/.keras/models/efficientnetb0.h5'
|
<p>I am getting this error while training an EfficientNet model.</p>
<pre><code>Downloading data from https://storage.googleapis.com/keras-applications/efficientnetb0.h5
Traceback (most recent call last):
File "efficient_net.py", line 45, in <module>
base_model = EfficientNetB0(include_top=True, weights='imagenet', input_shape=input_shape)
File "/home/ubuntu/projects/XXXX/vevn37/lib/python3.7/site-packages/tensorflow/python/keras/applications/efficientnet.py", line 540, in EfficientNetB0
**kwargs)
File "/home/ubuntu/projects/XXXX/vevn37/lib/python3.7/site-packages/tensorflow/python/keras/applications/efficientnet.py", line 406, in EfficientNet
file_hash=file_hash)
File "/home/ubuntu/projects/XXXX/vevn37/lib/python3.7/site-packages/tensorflow/python/keras/utils/data_utils.py", line 275, in get_file
urlretrieve(origin, fpath, dl_progress)
File "/usr/lib/python3.7/urllib/request.py", line 257, in urlretrieve
tfp = open(filename, 'wb')
PermissionError: [Errno 13] Permission denied: '/home/ubuntu/.keras/models/efficientnetb0.h5'
</code></pre>
<p>I have tried to change the permission of my python code:</p>
<pre><code>chmod 777 code.py
</code></pre>
<p>It did not work.</p>
|
<python><permission-denied><efficientnet>
|
2023-06-05 11:35:53
| 0
| 2,419
|
Mustafa Celik
|
76,405,650
| 225,396
|
Dataflow - add JSON file to BigQuery
|
<p>I'm doing some POC with GCP Dataflow and add some JSON object to BigQuery.</p>
<pre><code>import apache_beam as beam
import apache_beam.io.gcp.bigquery as b_query
p1 = beam.Pipeline()
trips_schema = 'trip_id:INTEGER,vendor_id:INTEGER,trip_distance:FLOAT,fare_amount:STRING,store_and_fwd_flag:STRING'
freq = (
p1
| beam.Create(["{\"vendor_id\":33,\"trip_id\": 1000474,\"trip_distance\": 2.3999996185302734,\"fare_amount\": 42.13,\"store_and_fwd_flag\": \"Y\"}"])
| beam.Map(print)
| 'Write Record to BigQuery' >> b_query.WriteToBigQuery(table='trips2', dataset='my_poc',
custom_gcs_temp_location='gs://XXXX-stage'
'-xxxx/temp',
schema=trips_schema, project='xxxxxxxx-xxx-2',
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED, )
)
p1.run()
</code></pre>
<p>Now when I'm running this code, I'm getting the following error:</p>
<pre><code>RuntimeError: BigQuery job beam_bq_job_LOAD_AUTOMATIC_JOB_NAME_LOAD_STEP_957_6db0f0222c18bf6ef55dfb301cf9b7b2_2e2519daf47e48888bd08fc7661da2e6 failed. Error Result: <ErrorProto
location: 'gs://xxxx-stage-xxx/temp/bq_load/4b49a738d79b47ba8b188f04d10bf8f0/xxxx-dev-x.my_poc.trips2/258cab64-9d98-4fbc-8c16-087bdd0ea93c'
message: 'Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1; errors: 1. Please look into the errors[] collection for more details. File: gs://xxxx-stage-xxx/temp/bq_load/4b49a738d79b47ba8b188f04d10bf8f0/xxxxx-dev-x.my_poc.trips2/258cab64-9d98-4fbc-8c16-087bdd0ea93c'
reason: 'invalid'> [while running 'Write Record to BigQuery/BigQueryBatchFileLoads/TriggerLoadJobsWithoutTempTables/ParDo(TriggerLoadJobs)']
</code></pre>
<p>The file in the staging bucket referred to in the error contains <code>null</code>.</p>
<p>Please help.</p>
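Two things stand out in the pipeline (my reading, worth verifying): `beam.Map(print)` emits the return value of `print`, which is `None` for every element — consistent with the staged load file containing `null` — and `WriteToBigQuery` expects `dict` rows rather than raw JSON strings. A minimal parse step that could replace `beam.Map(print)`:

```python
import json

def to_row(line):
    """Turn one JSON string into the dict row WriteToBigQuery expects."""
    return json.loads(line)

# In the pipeline, `beam.Map(print)` (print returns None, which is what ends
# up as null in the staged load file) would become `beam.Map(to_row)`.
row = to_row('{"vendor_id": 33, "trip_id": 1000474, "trip_distance": 2.4, '
             '"fare_amount": "42.13", "store_and_fwd_flag": "Y"}')
print(row["vendor_id"])   # 33
```

If element-by-element debugging output is still wanted, a separate `beam.Map(lambda x: (print(x), x)[1])`-style tap or logging inside `to_row` avoids swallowing the elements.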
|
<python><google-cloud-platform><google-cloud-dataflow><apache-beam>
|
2023-06-05 10:43:06
| 1
| 18,649
|
danny.lesnik
|
76,405,496
| 1,040,718
|
Serving http over my laptop public ip address - Python
|
<p>I'm testing a feature locally, and instead of using an EC2 instance, I want to use my own laptop:</p>
<pre><code>python3 -m http.server
Serving HTTP on :: port 8000 (http://[::]:8000/) ...
</code></pre>
<p>if I do <code>curl http://localhost:8000</code> it will work and the webpage is fetched. However if I do <code>curl http://<my-ip-address>:8000</code> I'm getting:</p>
<pre><code><html><head><title>504 Gateway Timeout</title></head>
<body><h1>Gateway Timeout</h1>
<p>Server error - server <my-ip-address> is unreachable at this moment.<br><br>Please retry the request or contact your administrator.<br></p>
</body></html>
</code></pre>
<p>How can I serve the page through the public IP address assigned to my laptop, so that it is reachable in a browser?</p>
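A hedged observation: `python3 -m http.server` already binds all interfaces by default (the `http://[::]:8000/` line shows that), and the "504 Gateway Timeout" HTML is generated by an intermediate proxy, not by Python's server — so the request to the public IP is most likely being intercepted by a corporate proxy, firewall, or the router's NAT (hairpinning). One way to confirm the server itself is fine is to bind explicitly and fetch via a non-`localhost` address (a sketch; the port is arbitrary):

```python
import http.server
import socketserver
import threading
import urllib.request

PORT = 8765  # arbitrary test port (assumption)

# Bind explicitly to all interfaces and serve in a background thread.
httpd = socketserver.TCPServer(("0.0.0.0", PORT),
                               http.server.SimpleHTTPRequestHandler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# Fetching via an address other than "localhost" confirms the bind is fine;
# if this works but the public IP does not, the blocker is outside Python
# (proxy, firewall, or router NAT) -- e.g. retry with `curl --noproxy '*'`.
status = urllib.request.urlopen("http://127.0.0.1:%d/" % PORT).status
print(status)   # 200
httpd.shutdown()
```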
|
<python><python-3.x><apache><http>
|
2023-06-05 10:24:38
| 1
| 11,011
|
cybertextron
|
76,405,424
| 7,405,974
|
How to update/add the new/existing contents of a dictionary inside a list from a list reference
|
<p>I have a dynamic list whose contents vary over time. Based on the contents of this list, I need to add/update dictionaries inside another list.</p>
<pre><code>list1 = ["data1", "data2", "data3", ..... "dataN"]
list_dict1 = [{"Id" : "data1"}, {"Id": "data2"}]
</code></pre>
<p>As you can see, <code>list1</code> contains "N" items; it is not static, so both the values and the number of items change over time. <strong>list_dict1</strong> depends on <strong>list1</strong> for adding/updating its dictionary data. Right now <strong>list_dict1</strong> only has entries for <code>data1</code> and <code>data2</code>; the requirement is to automatically add/update the dictionaries inside <strong>list_dict1</strong> so they match the count and values of the items in <strong>list1</strong>.</p>
<p>I can do it the following way:</p>
<pre><code># Example if list1 has only one element
count = len(list1)
if count == 1:
list_dict1 = [{"Id" : list1[0]}]
# Example if list1 have only two elements
count = len(list1)
if count == 2:
list_dict1 = [{"Id" : list1[0]},{"Id" : list1[1]}]
# Example if list1 have only four elements
count = len(list1)
if count == 2:
list_dict1 = [{"Id" : list1[0]},{"Id" : list1[1]},{"Id" : list1[2]},{"Id" : list1[3]}]
....
</code></pre>
<p>But this way, if I have 100 elements in the list, I need to write 100 if conditions and manually update <code>list_dict1</code> accordingly. Is there any way to automatically build/update <strong>list_dict1</strong> with the count of items and the respective values from <strong>list1</strong>?</p>
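The whole ladder of `if count == ...` branches collapses into a single list comprehension that works for any length of `list1`:

```python
list1 = ["data1", "data2", "data3", "data4"]   # any length, any values

# One comprehension replaces all the per-count if branches:
list_dict1 = [{"Id": item} for item in list1]
print(list_dict1)
# [{'Id': 'data1'}, {'Id': 'data2'}, {'Id': 'data3'}, {'Id': 'data4'}]
```

Rebuilding `list_dict1` from scratch like this keeps it in sync with `list1` each time the source list changes.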
|
<python><dictionary>
|
2023-06-05 10:15:55
| 1
| 509
|
karthik
|
76,405,296
| 1,581,090
|
How to read telnet output with telnetlib3 and python?
|
<p>Using Python 3.8.10 and telnetlib3, I want to connect to a host and just print out the message the host sends upon connection. I have been trying the following code:</p>
<pre><code>import telnetlib3
import asyncio
async def run_telnet_session():
total_output = ""
# Establish a Telnet connection with timeout
reader, writer = await telnetlib3.open_connection("192.168.200.10", 9000)
try:
# Read the Telnet output
while True:
output = await reader.read(4096)
if output:
print(output)
total_output += output
else:
break
finally:
writer.close()
await writer.wait_closed()
return total_output
output = asyncio.run(run_telnet_session())
print(output)
</code></pre>
<p>I get the expected output, but the code blocks: the function never returns. How can I change the code so that the read times out and the function returns the collected string?</p>
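The loop blocks because `reader.read(4096)` simply waits for more data while the connection stays open; it only returns an empty result at EOF. Wrapping each read in `asyncio.wait_for` gives an idle timeout. The pattern is sketched below with plain asyncio streams standing in for the telnet host (an assumption: telnetlib3's reader is awaited the same way, which is worth verifying against your server):

```python
import asyncio

async def read_until_quiet(reader, idle_timeout=0.5):
    """Collect output until the peer closes OR no data arrives for idle_timeout s."""
    chunks = []
    while True:
        try:
            data = await asyncio.wait_for(reader.read(4096), idle_timeout)
        except asyncio.TimeoutError:
            break                  # connection still open, but gone quiet
        if not data:
            break                  # EOF: peer closed the connection
        chunks.append(data)
    return b"".join(chunks)

async def main():
    async def handler(reader, writer):
        writer.write(b"welcome\r\n")
        await writer.drain()
        await asyncio.sleep(10)    # keep the connection open, like the telnet host

    # Local stand-in server on an OS-assigned port.
    server = await asyncio.start_server(handler, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    banner = await read_until_quiet(reader)
    writer.close()
    server.close()
    return banner

print(asyncio.run(main()))   # b'welcome\r\n'
```

With telnetlib3 the same `read_until_quiet(reader)` call would replace the bare `while True` read loop; note telnetlib3's reader yields `str` rather than `bytes`, so the `b""` join would become `""`.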
|
<python><python-3.x><telnet><telnetlib3>
|
2023-06-05 09:59:21
| 1
| 45,023
|
Alex
|
76,405,023
| 13,023,224
|
Merge columns with lists into one
|
<p>I have this dataframe</p>
<pre><code>df = pd.DataFrame({
'c1':['a','f,g,e','a,f,e,h','g,h','b,c,g,h',],
'c2':['1','1,1,0.5','1,2,2.5,1','3,1','2,-1,0.5,-1'],
'c3':['0.05','0.01,0.001,>0.5','>0.9,>0.9,0.01,0.002','>0.9,>0.9','0.05,0.1,<0.01,0.1'],
})
</code></pre>
<p>yielding</p>
<pre><code>c1 c2 c3
a 1 0.05
f,g,e 1,1,0.5 0.01,0.001,>0.5
a,f,e,h 1,2,2.5,1 >0.9,>0.9,0.01,0.002
g,h 3,1 >0.9,>0.9
b,c,g,h 2,-1,0.5,-1 0.05,0.1,<0.01,0.1
</code></pre>
<p>I would like to combine c1,c2 and c3 to create new column c4 (see desired result below)</p>
<pre><code>c1 c2 c3 c4
a 1 0.05 a(1|0.05)
f,g,e 1,1,0.5 0.01,0.001,>0.5 f(1|0.01),g(1|0.001),e(0.5|>0.5)
a,f,e,h 1,2,2.5,1 >0.9,>0.9,0.01,0.002 a(1|>0.9),f(2|>0.9),e(2.5|0.01),h(1|0.02)
g,h 3,1 >0.9,>0.9 g(3|>0.9),h(1|>0.9)
b,c,g,h 2,-1,0.5,-1 0.05,0.1,<0.01,0.1 b(2|0.05),c(-1|0.1),g(0.5<0.01),h(-1|0.1)
</code></pre>
<p>I tried adapting the answers to <a href="https://stackoverflow.com/questions/33098383/merge-multiple-column-values-into-one-column-in-python-pandas">this question</a> and <a href="https://stackoverflow.com/questions/53875780/merge-lists-is-multiple-columns-of-a-pandas-dataframe-into-a-sigle-list-in-a-col">this question</a>, but they did not work for my case.</p>
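One way to build `c4` (a sketch matching the desired output): split each cell on commas, zip the pieces of the three columns together, and format each triple. The core logic is a plain function, shown here without pandas so it is easy to test:

```python
def combine(c1, c2, c3):
    """Zip the comma-separated pieces of three cells into 'name(v1|v2)' tokens."""
    return ",".join(
        "%s(%s|%s)" % (a, b, c)
        for a, b, c in zip(c1.split(","), c2.split(","), c3.split(","))
    )

print(combine("f,g,e", "1,1,0.5", "0.01,0.001,>0.5"))
# f(1|0.01),g(1|0.001),e(0.5|>0.5)
```

Applied per row of the question's frame it would look like `df["c4"] = df.apply(lambda r: combine(r["c1"], r["c2"], r["c3"]), axis=1)`.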
|
<python><pandas><list><multiple-columns>
|
2023-06-05 09:20:22
| 1
| 571
|
josepmaria
|
76,404,986
| 11,636,033
|
'Incorrect type warnings' for TypedDicts in unittest assertDictEqual
|
<p>I'm writing some unit tests for a function I've made and the <code>self.assertDictEqual(expected, actual)</code> is giving me a warning for incorrect typing.</p>
<pre><code>Expected type 'Mapping[Any, object]', got 'CustomFooBarDict' instead
</code></pre>
<p>The unit tests run fine but I was wondering if there's a way to tell unittest that this is a <code>TypedDict</code> so not to worry, it's still a mapping, but it's from <code>string</code> to <code>string</code>s, <code>int</code>s and a couple of <code>Enum</code>s which are all par for the course with your standard <code>dict</code>.</p>
<p>For example, in TypeScript I would be able to use the <code>as</code> keyword to tell the transpiler not to worry.</p>
<p>Example here:</p>
<pre class="lang-py prettyprint-override"><code>import typing
import unittest
class FooBarStruct(typing.TypedDict):
foo: str
bar: int
def create_foo_bar_struct() -> FooBarStruct:
return {'foo': 'baz', 'bar': 1}
class TestFooBarStruct(unittest.TestCase):
def test_foo_bar(self):
expected: FooBarStruct = {'foo': 'baz', 'bar': 1}
self.assertDictEqual(expected, create_foo_bar_struct())
</code></pre>
<p>I've verified this minimum reproducible example in VSCode. As I mentioned, the tests run, but the warnings are there and when I am running my pipeline I don't want any warnings.</p>
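Python's closest analogue to TypeScript's `as` is `typing.cast`: it is a no-op at runtime and only changes the type the checker sees, so casting the `TypedDict` to a plain `dict` should silence the warning (a sketch):

```python
import typing

class FooBarStruct(typing.TypedDict):
    foo: str
    bar: int

def create_foo_bar_struct() -> FooBarStruct:
    return {"foo": "baz", "bar": 1}

expected: FooBarStruct = {"foo": "baz", "bar": 1}

# cast() does nothing at runtime; it only changes the type the checker sees,
# much like TypeScript's `as`.
as_mapping = typing.cast(typing.Dict[str, object], create_foo_bar_struct())
assert as_mapping == expected
```

In the test that would be `self.assertDictEqual(typing.cast(dict, expected), typing.cast(dict, create_foo_bar_struct()))`; alternatively, plain `self.assertEqual` also performs a dict comparison and may avoid the `Mapping` signature mismatch entirely.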
|
<python><python-unittest><python-typing>
|
2023-06-05 09:13:03
| 0
| 374
|
Ash Oldershaw
|
76,404,918
| 2,532,296
|
extracting columns, skipping certain rows in a file for data processing
|
<p>I am trying to process the <code>input.txt</code> using the <code>test.py</code> script to extract specific information as shown in the expected output. I have got the basic stub, but the regex apparently is not extracting the specific column details I am expecting. I have shown the expected output for your reference.</p>
<p>In general, I am looking for a <code>[XXXYY] {TAG}</code> pattern; once I find it, if the next row's first column starts with <code>J</code>, I extract column 1, column 2 and the first 3 characters of column 3. I am also interested in how to skip the lines after <code>[00033] GND</code> (and <code>[00272] POS_3V3</code>) until I see the next <code>[XXXYY] {TAG}</code> pattern. I am restricted to Python 2.7.5 and the <code>re</code> and <code>csv</code> libraries, and cannot use pandas.</p>
<h5>input.txt</h5>
<pre><code><<< Test List >>>
Mounting Hole MH1 APBC_MH_3.2x7cm
Mounting Hole MH2 APBC_MH_3.2x7cm
Mounting Hole MH3 APBC_MH_3.2x7cm
Mounting Hole MH4 APBC_MH_3.2x7cm
[00001] DEBUG_SCAR_RX
J1 B30 PIO37 PASSIVE TRA6-70-01.7-R-4-7-F-UG
R2 2 2 PASSIVE 4.7kR
[00002] DEBUG_SCAR_TX
J1 B29 PIO36 PASSIVE TRA6-70-01.7-R-4-7-F-UG
[00003] DYOR_DAT_0
J2 B12 APB10_CC_P PASSIVE TRA6-70-01.7-R-4-7-F-UG
[00033] GND
DP1 5 5 PASSIVE MECH, DIP_SWITCH, FFFN-04F-V
DP1 6 6 PASSIVE MECH, DIP_SWITCH, FFFN-04F-V
DP1 7 7 PASSIVE MECH, DIP_SWITCH, FFFN-04F-V
[00271] POS_3.3V_INH
Q2 3 DRAIN PASSIVE 2N7002
R34 2 2 PASSIVE 4.7kR
[00272] POS_3V3
J1 B13 FETO_FAT PASSIVE TRA6-70-01.7-R-4-7-F-UG
J1 B14 FETO_FAT PASSIVE TRA6-70-01.7-R-4-7-F-UG
J2 B59 FETO_HDB PASSIVE TRA6-70-01.7-R-4-7-F-UG
</code></pre>
<h5>test.py</h5>
<pre><code>import re
# Read the input file
with open('input.txt', 'r') as file:
content = file.readlines()
# Process the data and extract the required information
result = []
component_name = ""
for line in content:
line = line.strip()
if line.startswith("["):
s = re.sub(r"([\[0-9]+\]) (\w+)$", r"\2", line)
elif line.startswith("J"):
sp = re.sub(r"^(\w+)\s+(\w+)\s+(\w+)", r"\1\2", line)
print("%s\t%s" % (s, sp))
</code></pre>
<h5>output</h5>
<pre><code>DEBUG_SCAR_RX J1B30 PASSIVE TRA6-70-01.7-R-4-7-F-UG
DEBUG_SCAR_TX J1B29 PASSIVE TRA6-70-01.7-R-4-7-F-UG
DYOR_DAT_0 J2B12 PASSIVE TRA6-70-01.7-R-4-7-F-UG
POS_3V3 J1B13 PASSIVE TRA6-70-01.7-R-4-7-F-UG
POS_3V3 J1B14 PASSIVE TRA6-70-01.7-R-4-7-F-UG
POS_3V3 J2B59 PASSIVE TRA6-70-01.7-R-4-7-F-UG
</code></pre>
<h5>expected</h5>
<pre><code>DEBUG_SCAR_RX J1 B30 PIO
DEBUG_SCAR_TX J1 B29 PIO
DYOR_DAT_0 J2 B12 APB
</code></pre>
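A sketch of one way to do it (runs on both 2.7 and 3.x): capture the tag with a regex *group* instead of `re.sub`, skip whole sections whose tag is in a skip set, and slice the third column rather than rewriting it with a substitution.

```python
import re

SKIP_TAGS = {"GND", "POS_3V3"}           # sections whose rows are dropped
TAG_RE = re.compile(r"\[\d+\]\s+(\S+)")  # matches lines like "[00001] DEBUG_SCAR_RX"

def extract(lines):
    rows, tag = [], None
    for line in lines:
        m = TAG_RE.match(line.strip())
        if m:
            tag = m.group(1)             # remember the current section's tag
            continue
        cols = line.split()
        if tag and tag not in SKIP_TAGS and cols and cols[0].startswith("J"):
            rows.append("%s\t%s %s %s" % (tag, cols[0], cols[1], cols[2][:3]))
    return rows

sample = [
    "[00001] DEBUG_SCAR_RX",
    "J1 B30 PIO37 PASSIVE TRA6-70-01.7-R-4-7-F-UG",
    "R2 2 2 PASSIVE 4.7kR",
    "[00272] POS_3V3",
    "J1 B13 FETO_FAT PASSIVE TRA6-70-01.7-R-4-7-F-UG",
]
print(extract(sample))   # ['DEBUG_SCAR_RX\tJ1 B30 PIO']
```

Header lines before the first tag (e.g. the "Mounting Hole" rows) are ignored because `tag` is still `None` there.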
|
<python><regex>
|
2023-06-05 09:05:01
| 2
| 848
|
user2532296
|
76,404,911
| 1,343,979
|
Union type passed as the only argument to isinstance() in Python
|
<p>Consider I want to check whether a variable <code>var</code> is an instance of either type <code>A</code> or type <code>B</code>. There's the canonical way to do so, discussed in detail <a href="https://stackoverflow.com/questions/2184955">here</a>:</p>
<pre class="lang-py prettyprint-override"><code>result: bool = isinstance(var, (A, B))
</code></pre>
<p>The following appears to work just as well, however (see <a href="https://peps.python.org/pep-0604/" rel="nofollow noreferrer">PEP 604</a>):</p>
<pre class="lang-py prettyprint-override"><code>result: bool = isinstance(var, A | B)
</code></pre>
<p>The question is: are the above two code fragments equivalent?</p>
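For `isinstance` checks the two spellings give the same answer on Python 3.10+, where PEP 604 made `isinstance` accept `A | B` (a `types.UnionType`); on earlier versions `A | B` between classes raises `TypeError`. A version-guarded sketch:

```python
import sys

class A: ...
class B: ...

var = A()
tuple_form = isinstance(var, (A, B))
print(tuple_form)   # True

if sys.version_info >= (3, 10):
    # PEP 604 unions are accepted by isinstance() from 3.10 onward
    # and give the same answer as the tuple form.
    assert isinstance(var, A | B) == tuple_form
```

One shared caveat: parameterized generics (e.g. `list[int] | str`) are rejected by `isinstance` in both spellings.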
|
<python><types>
|
2023-06-05 09:03:35
| 0
| 5,460
|
Андрей Щеглов
|
76,404,811
| 13,086,128
|
AttributeError: 'DataFrame' object has no attribute 'iteritems'
|
<p>I am using pandas to read csv on my machine then I create a pyspark dataframe from pandas dataframe.</p>
<pre><code>df = spark.createDataFrame(pandas_df)
</code></pre>
<p>I updated my pandas from version <code>1.3.0</code> to <code>2.0</code></p>
<p>Now, I am getting this error:</p>
<p><a href="https://i.sstatic.net/qzYhj.png" rel="noreferrer"><img src="https://i.sstatic.net/qzYhj.png" alt="enter image description here" /></a></p>
<hr />
<p><a href="https://i.sstatic.net/OSKFH.png" rel="noreferrer"><img src="https://i.sstatic.net/OSKFH.png" alt="enter image description here" /></a></p>
<pre><code>AttributeError: 'DataFrame' object has no attribute 'iteritems'
</code></pre>
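The cause: pandas 2.0 removed `DataFrame.iteritems()`, which older PySpark releases still call inside `createDataFrame`. The durable fixes are upgrading to a PySpark release that supports pandas 2.x or pinning `pandas<2`; as a stopgap, a small shim restores the old name (a sketch, labeled as a temporary workaround):

```python
import pandas as pd

# pandas 2.0 removed DataFrame.iteritems(); older PySpark still calls it
# inside createDataFrame.  Until PySpark is upgraded, restore the old name
# as an alias of the identical items() method (stopgap, not a real fix):
if not hasattr(pd.DataFrame, "iteritems"):
    pd.DataFrame.iteritems = pd.DataFrame.items

pandas_df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
cols = [name for name, _series in pandas_df.iteritems()]
print(cols)   # ['a', 'b']
```

With the shim in place, `spark.createDataFrame(pandas_df)` should run again under pandas 2.x.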
|
<python><python-3.x><pandas><dataframe><pyspark>
|
2023-06-05 08:50:36
| 1
| 30,560
|
Talha Tayyab
|
76,404,712
| 1,432,980
|
Decimal precision exceeds max precision despite decimal having the correct size and precision
|
<p>I have this code</p>
<pre><code>spark = (
SparkSession.builder
.appName("pyspark-sandbox")
.getOrCreate()
)
spark.conf.set("spark.sql.parquet.outputTimestampType", "TIMESTAMP_MICROS") # type: ignore
faker = Faker()
value = faker.pydecimal(left_digits=28, right_digits=10)
print(value) # 8824750032877062776842530687.8719544506
df = spark.createDataFrame([[value]], schema=['DecimalItem'])
df = df.withColumn('DecimalItem', col('DecimalItem').cast(DecimalType(38, 10)))
df.show()
</code></pre>
<p>But on <code>show</code> I get this error</p>
<pre><code>org.apache.spark.SparkArithmeticException: [DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION] Decimal precision 46 exceeds max precision 38.
</code></pre>
<p>The value <code>8824750032877062776842530687.8719544506</code> seems to fit into <code>DecimalType</code>, yet it fails. What is the problem?</p>
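A plausible explanation (worth verifying for your Spark version): when no schema is given, `createDataFrame` infers `DecimalType(38, 18)` for Python `Decimal` values. The sample value has 28 integer digits, and holding 28 integer digits at scale 18 needs 28 + 18 = 46 digits of precision — the exact number in the error — before the later `cast(DecimalType(38, 10))` ever runs. The digit arithmetic, checked with the stdlib:

```python
from decimal import Decimal

value = Decimal("8824750032877062776842530687.8719544506")
t = value.as_tuple()
total_digits = len(t.digits)                        # 38
fractional_digits = -t.exponent                     # 10
integer_digits = total_digits - fractional_digits   # 28

# If Spark infers DecimalType(38, 18) for Python Decimals (assumption for
# this sketch), storing 28 integer digits at scale 18 requires
# 28 + 18 = 46 digits of precision -- more than the maximum of 38.
needed_precision = integer_digits + 18
print(needed_precision)   # 46 -- exactly the number in the error message
```

If that is the cause, declaring the schema up front — e.g. `StructType([StructField("DecimalItem", DecimalType(38, 10))])` passed to `createDataFrame` — reads the value at scale 10 directly and avoids the overflowing inferred type.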
|
<python><apache-spark><pyspark>
|
2023-06-05 08:34:15
| 1
| 13,485
|
lapots
|
76,404,516
| 11,611,246
|
Execute Python script to shut down computer: Somewhat unexpected behaviour
|
<p>When I write some file with .py extension and containing the following code:</p>
<pre><code>import subprocess, platform
cmd = "echo Hi"
subprocess.call(cmd, shell = True)
</code></pre>
<p>it will return "Hi" when I double-click the file in Windows or when I write a .bat file which makes Python execute the script.</p>
<p>However, what I actually want to do is to shut down the computer (by executing the Python script from a batch file).</p>
<p>To this end, I wrote the following code:</p>
<pre><code>import subprocess, platform
cmd = "shutdown /s /t 1" if platform.system() == "Windows" \
else "systemctl poweroff"
subprocess.call(cmd, shell = True)
</code></pre>
<p>However, this will start to endlessly run the script without shutting down the computer. I.e., double-clicking the .py file opens a prompt which is continuously populated with the line <code>C:\Users\... py myscript.py</code>.</p>
<p>If I open a command prompt and enter</p>
<pre><code>> py
> import subprocess
> subprocess.call("shutdown /s /t 1")
</code></pre>
<p>the computer is shut down as expected.</p>
<p>So, how is this different from executing the script? Is this some security issue? And how can I get this to work?</p>
<p>I also tried <code>os.system()</code>, as well as entering the subprocess call as a list (setting <code>cmd = ["shutdown", "-s", "-t", "1"]</code>). All behaved similarly: The code works when executed from a Python prompt but it does not when I run Python from a batch file or by double-click. In those instances, the call to <code>echo</code>, however, does work.</p>
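One plausible cause of the endless relaunch (an assumption, not a diagnosis): when launched from the batch context, the bare name <code>shutdown</code> may resolve via PATH to something other than the system binary — for instance a batch file or script of the same name that re-invokes itself. Invoking the command by absolute path sidesteps PATH lookup entirely, which is a cheap way to rule this out:

```python
import platform
import subprocess

def shutdown_command():
    """Build the command with an absolute path so PATH lookup cannot pick up
    a same-named script/batch file (a plausible cause of the relaunch loop)."""
    if platform.system() == "Windows":
        return [r"C:\Windows\System32\shutdown.exe", "/s", "/t", "1"]
    return ["systemctl", "poweroff"]

cmd = shutdown_command()
print(cmd)
# To actually power off (left commented so the sketch is safe to run):
# subprocess.call(cmd)
```

Passing the command as a list without `shell=True` also avoids an extra shell layer that could apply its own PATH or alias resolution.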
|
<python><subprocess>
|
2023-06-05 08:00:41
| 0
| 1,215
|
Manuel Popp
|
76,404,397
| 1,138,523
|
Plotly plot single trace with 2 yaxis
|
<p>I have a Plotly graph objects bar chart for which I want to display 2 y-axes (different currencies, so the conversion factor is constant).</p>
<p>Currently I plot one trace per axis; for the second one I set opacity to 0 and disable the legend and hoverinfo. This hack works, but is ugly to maintain.</p>
<p>I'm aware of <a href="https://plotly.com/python/multiple-axes/" rel="noreferrer">https://plotly.com/python/multiple-axes/</a></p>
<p>my current solution looks like this</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import plotly.graph_objects as go
from plotly.subplots import make_subplots
# make up some data
dates = pd.DataFrame(pd.date_range('1/1/2023','1/7/2023'), columns=['date'])
dates["key"] = 0
items = pd.DataFrame(["A","B","C"], columns=['items'])
items["key"] = 0
df = dates.merge(items,on="key",how="outer").drop("key",axis=1)
df['price_USD'] = np.random.randint(0, 100, df.shape[0])
df['price_EURO'] = df['price_USD']/1.5
fig = make_subplots(specs=[[{"secondary_y": True}]])
for item, _df in df.groupby("items",sort=True):
## we may set the colors of the individual items manually here
fig.add_trace(
go.Bar(
x=_df["date"],
y=_df["price_USD"],
showlegend=True,
name=item,
opacity=1.0,
#color=color,
legendgroup=item
),
secondary_y=False,
)
# invisible trace
fig.add_trace(
go.Bar(
x=_df["date"],
y=_df["price_EURO"],
showlegend=False,
opacity=0.0,
name="",
hoverinfo="skip",
legendgroup=item
),
secondary_y=True,
)
fig.update_layout(barmode="stack")
fig.update_yaxes(title_text="<b>Cost USD", secondary_y=False)
fig.update_yaxes(title_text="<b>Cost Euro", showgrid=False, secondary_y=True)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/zMF6w.png" rel="noreferrer"><img src="https://i.sstatic.net/zMF6w.png" alt="enter image description here" /></a></p>
<p>Is there a cleaner way to do this?</p>
|
<python><plotly><visualization><multiple-axes>
|
2023-06-05 07:42:06
| 1
| 27,285
|
Raphael Roth
|
76,404,284
| 15,969,427
|
QCompleter in QLineEdit has no useable height and not showing matches
|
<p>First time I'm trying to use a <code>QCompleter</code> in PyQt5.</p>
<p>My data is a list of dictionary objects with two text fields. I'm using a subclassed <code>QAbstractTableModel</code> to return a combined string in column 0. I'm giving this to a <code>QCompleter</code> in a <code>QLineEdit</code>. But when I try to complete, it gives me an empty popup. When I press up/down, it does go through the options (I have checked using <code>print()</code> statements in the <code>data()</code> method of my <code>QAbstractTableModel</code>) but only occasionally does it put a completed string in the text of the <code>QLineEdit</code>.
<a href="https://i.sstatic.net/lALn3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lALn3.png" alt="enter image description here" /></a></p>
<p>Note the popup stays empty, even when it puts a completed string in the text.
<a href="https://i.sstatic.net/CiaZ6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CiaZ6.png" alt="enter image description here" /></a></p>
<p>I want a "normal" popup list, like all the examples I can find online show. What am I doing wrong?</p>
<pre><code>allSubjects = []

def loadAllSubjects():
    global allSubjects
    allSubjects = []
    cur.execute('''
        select trim(sub_code), trim(sub_desc)
        from subtab
        where cmpy_code='01' and active_flg='Y'
        order by sub_desc, sub_code
    ''')
    row = cur.fetchone()
    while row:
        (sub_code, sub_desc) = row
        allSubjects.append({
            "code": sub_code,
            "desc": sub_desc,
        })
        row = cur.fetchone()

class SubjectTableModel(QtCore.QAbstractTableModel):
    def __init__(self):
        super(SubjectTableModel, self).__init__()

    def rowCount(self, index) -> int:
        global allSubjects
        return len(allSubjects)

    def columnCount(self, index) -> int:
        return 4

    def data(self, index, role):
        # print(f"data({index.row()},{index.column()})")
        global allSubjects
        sub = allSubjects[index.row()]
        # print(sub)
        colidx = index.column()
        if colidx == 0:
            return sub["desc"] + " (" + sub["code"] + ")"
        elif colidx == 1:
            return sub
        elif colidx == 2:
            return sub["code"]
        elif colidx == 3:
            return sub["desc"]

    def headerData(self, colidx, orientation, role):
        if colidx == 0:
            return "Search"
        elif colidx == 1:
            return "Code"
        elif colidx == 2:
            return "Description"

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.resize(1000, 800)
        subLabel = QLabel("Subject:")
        subSearch = QLineEdit()
        subCompleter = QCompleter()
        subCompleter.setModel(SubjectTableModel())
        subCompleter.setCaseSensitivity(Qt.CaseSensitivity.CaseInsensitive)
        # subCompleter.setModelSorting(QCompleter.CaseInsensitivelySortedModel)
        subSearch.setCompleter(subCompleter)
        subPart = QHBoxLayout()
        subPart.addWidget(subLabel)
        subPart.addWidget(subSearch)
        (...)
        widget = QWidget()
        widget.setLayout(layout)
        self.setCentralWidget(widget)

app = QApplication(sys.argv)
window = MainWindow()
window.show()
loadAllSubjects()
app.exec()
</code></pre>
<p><strong>Edit:</strong> I know it has something to do with using a <code>QAbstractTableModel</code> for my data. If I just use a list of strings instead, it works fine.</p>
|
<python><pyqt5><qlineedit><qcompleter>
|
2023-06-05 07:26:25
| 1
| 375
|
Ian Bailey-Mortimer
|
76,404,268
| 2,586,955
|
MacOS pycharm can not install python Mediapipe version 0.10.0 with pip
|
<p>I am trying to install the latest version of mediapipe, 0.10.0, using <code>pip install mediapipe==0.10.0</code>, but I always get the same error:
<em>ERROR: Could not find a version that satisfies the requirement mediapipe==0.10.0 (from versions: 0.8.3.1, 0.8.4.2, 0.8.5, 0.8.6.2, 0.8.7.1, 0.8.7.2, 0.8.7.3, 0.8.8, 0.8.8.1, 0.8.9, 0.8.9.1, 0.8.10, 0.8.10.1, 0.8.11, 0.9.0, 0.9.0.1, 0.9.1.0)
ERROR: No matching distribution found for mediapipe==0.10.0</em></p>
<p>I tried Python versions 3.8, 3.9, and 3.10, and none of them worked, although the previous version always installs without error.</p>
<p>I used pip from the terminal and from PyCharm's package manager.</p>
|
<python><pip><mediapipe>
|
2023-06-05 07:24:35
| 2
| 399
|
user2586955
|
76,404,004
| 6,414,887
|
Python pivot on dataframe and return values as strings
|
<p>I have a DataFrame with UserID, Item, and Score columns. I would like to get a pivot on Item and have the top-scored Items in the values field. Is it possible to get the Item strings in the pivot result? Scores I can get with max/mean and similar methods, but I couldn't figure out how to get string values.</p>
<p><strong>Here is my Dataframe:</strong></p>
<p><a href="https://i.sstatic.net/xJSuP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xJSuP.png" alt="My Dataframe" /></a></p>
<p><strong>This is what I'm trying to achive:</strong></p>
<p><a href="https://i.sstatic.net/eo2bF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eo2bF.png" alt="What I want to achive" /></a></p>
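<p>Since the screenshots may not load for everyone, here is a rough, hedged sketch of the shape I am after. The column names and data below are made up; the idea is to rank items per user by score and then pivot the item <em>strings</em> (not the scores) into columns:</p>

```python
import pandas as pd

# Hypothetical data standing in for the screenshot (column names are assumptions).
df = pd.DataFrame({
    "UserID": [1, 1, 1, 2, 2],
    "Item":   ["okta", "slack", "zoom", "slack", "zoom"],
    "Score":  [0.9, 0.5, 0.7, 0.8, 0.3],
})

# Rank items per user by score (1 = best), then pivot the strings into columns.
df["rank"] = df.groupby("UserID")["Score"].rank(ascending=False).astype(int)
wide = df.pivot(index="UserID", columns="rank", values="Item")
print(wide)
```

<p>Because <code>pivot</code> (unlike <code>pivot_table</code>) does not aggregate, it happily carries string values, so no max/mean workaround is needed.</p>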
|
<python><pivot-table>
|
2023-06-05 06:34:53
| 1
| 309
|
Zeir
|
76,403,950
| 16,396,496
|
how to update the value of the "last_dt" col when using "UPDATE" or "INSERT IGNORE INTO ~ ON DUPLICATE KEY UPDATE ~" statements
|
<p>The last_dt column holds the last date the data was changed. It is managed separately from the insertion date, which is held in create_dt.</p>
<p>I want to change the last-updated value only when there is a change in one of the other columns, excluding the update_dt column itself.</p>
<p>So GPT gave me this:</p>
<pre class="lang-sql prettyprint-override"><code>UPDATE your_table
SET column1 = %s, column2 = %s, column3 = %s,
    update_date = IF(
        column1 <> %s OR column2 <> %s OR column3 <> %s,
        NOW(), update_date
    )
WHERE id = %s
</code></pre>
<p>But I think this method is a bit odd. I actually tested it and got an error (though it is also possible that I wrote the code incorrectly).</p>
<p>Is there a better way? Is the answer provided by GPT correct? I imagine many DB admins have solved the same problem; I'm asking because I couldn't find a suitable answer.</p>
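<p>For the <code>INSERT ... ON DUPLICATE KEY UPDATE</code> variant mentioned in the title, MySQL's <code>VALUES()</code> function can express the same idea in one statement. This is only a sketch with placeholder table and column names, not tested against my schema:</p>

```sql
INSERT INTO your_table (id, column1, column2, column3, last_dt)
VALUES (%s, %s, %s, %s, NOW())
ON DUPLICATE KEY UPDATE
    -- last_dt must be assigned FIRST: MySQL evaluates these assignments
    -- left to right, so the comparison still sees the old column values.
    last_dt = IF(column1 <> VALUES(column1)
                 OR column2 <> VALUES(column2)
                 OR column3 <> VALUES(column3),
                 NOW(), last_dt),
    column1 = VALUES(column1),
    column2 = VALUES(column2),
    column3 = VALUES(column3)
```

<p>Note that <code>VALUES()</code> in this position is deprecated in MySQL 8.0.20+ in favour of row aliases (<code>AS new ... new.column1</code>), so which spelling applies depends on the server version.</p>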
|
<python><mysql><sql-update><sql-insert><pymysql>
|
2023-06-05 06:23:29
| 1
| 341
|
younghyun
|
76,403,945
| 16,383,578
|
How to turn a video into an image slideshow in Python?
|
<p>The title might be misnamed, since English isn't my first language.</p>
<p>Long story short, I need to analyze the debug information of some VPN application. It doesn't keep log files locally; it only has a diagnostics information window, so I can't get a raw text dump. There is a save-to-file button, but it is useless because every time I try to connect to the VPN the window resets, and I am controlling the application programmatically.</p>
<p>So I can only screen record it and analyze the debug information using the video.</p>
<p>I used the following commands to turn the captured videos to images:</p>
<pre><code>gci C:\Users\Xeni\Documents -filter *.wmv | %{
    $folder = 'C:\Users\Xeni\Desktop\' + $($_.name -replace '.wmv')
    md $folder;
    D:\ffmpeg\ffmpeg.exe -hwaccel cuda -i $_.fullname -vf fps=25 "$folder\%05d.bmp"
}
</code></pre>
<p>The videos have a framerate of 25 and resolution of 1920x1080. And I need the output images to be lossless because the debug information is textual.</p>
<p>There are thousands of images in each of the folders, and I quickly realized I cannot hope to use the Photos app to do the job.</p>
<p>So I want to turn it into a slide show and found a bunch of questions here: <a href="https://stackoverflow.com/questions/60477816/how-to-create-a-fast-slideshow-in-python">How to create a fast slideshow in python?</a>, <a href="https://stackoverflow.com/questions/59132423/image-slide-show-using-tkinter-and-imagetk">image slide show using tkinter and ImageTk</a>, <a href="https://stackoverflow.com/questions/65256826/python-auto-slideshow-with-pil-tkinter-with-no-saving-images">Python: auto slideshow with PIL/TkInter with no saving images</a>, <a href="https://stackoverflow.com/questions/72758528/trying-to-make-image-slideshow-with-tkinter-python3">Trying to make image slideshow with Tkinter (Python3)</a> ...</p>
<p>But none of them actually solves the problem, despite being ostensibly relevant.</p>
<p>First, they load all images upfront; this cannot work here, as each image is exactly 5.9326171875 MiB in size and each folder measures dozens of gibibytes, while I have only 16 GiB of RAM. There simply isn't enough memory for it.</p>
<p>Second, they just show the images one by one for a fixed period of time, and you cannot control what is being displayed.</p>
<p>None of them is the case here, and that's why the title is a misnomer.</p>
<p>What I want is very simple: first, the application should take the path of the folder containing such images as input, then scan the directory for all files that match <code>'\d{5}.bmp'</code> and store the <code>list</code> of file names in memory; that list is all that should stay in memory.</p>
<p>When an image is about to be displayed, only then will it be loaded into memory, and it should stay on screen indefinitely until manually switched. After being switched it should either be unloaded from memory immediately, or be unloaded after its index's distance from the currently displayed image's index becomes larger than a fixed small value.</p>
<p>Then there should be a timeline. The timestamp corresponding to each image is easy to calculate: the timestamp in seconds is simply the filename divided by 25. I should be able to control what is displayed by manipulating the timeline.</p>
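<p>The filename-to-timestamp mapping described above is pure arithmetic; a tiny sketch of both directions (the frame rate of 25 comes from my capture settings):</p>

```python
FPS = 25  # capture frame rate

def frame_to_seconds(filename: str) -> float:
    """Map a frame filename like '00050.bmp' to its timestamp in seconds."""
    frame_number = int(filename.split(".")[0])
    return frame_number / FPS

def seconds_to_frame(t: float) -> str:
    """Inverse mapping: a timeline position in seconds back to a filename."""
    return f"{round(t * FPS):05d}.bmp"

print(frame_to_seconds("00050.bmp"))  # 2.0
print(seconds_to_frame(2.0))          # 00050.bmp
```

<p>With these two helpers, a timeline widget only needs to track a position in seconds and ask for the matching filename on demand.</p>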
<p>So this sounds like a video player but it is not a video player, because in video players the next frame is displayed automatically, whereas in this case it <em><strong>MUST NOT</strong></em>, the frames should only be switched manually.</p>
<p>So then why don't I just pause the video? Because I need frame by frame precision, I need to be able to see the next frame by pressing right arrow key and the frame before by pressing left arrow key. Most video players don't have this functionality.</p>
<p>I guess what I have described can be easily done by some expensive video editor, but I don't have enough money and I don't want to buy them. I am very experienced in Python but I didn't write many GUI programs. I am sorry I didn't provide enough code.</p>
<p>How can this be done?</p>
<hr />
<h2><strong>Update</strong></h2>
<p>The suggested examples were very bad, but I quickly came up with the following code that does part of what I wanted in under 10 minutes:</p>
<pre><code>import cv2
import re
import sys
from pathlib import Path

def slideshow(folder):
    # Raw string and escaped dot so '.' only matches the literal extension dot
    images = [str(file) for file in Path(folder).iterdir()
              if re.match(r'\d{5}\.bmp', file.name)]
    total = len(images)
    index = 0
    key = -1
    while key != 27:
        img = cv2.imread(images[index])
        cv2.imshow('', img)
        key = cv2.waitKeyEx(0)
        if key == 2555904:
            index += 1
        elif key == 2424832:
            index -= 1
        index %= total
        del img
    cv2.destroyAllWindows()

if __name__ == '__main__':
    slideshow(sys.argv[1])
</code></pre>
<p>I struggled for a bit because I had to identify the key pressed, and every Google search points to <code>cv2.waitKey</code>, which is useless in this regard: it can't identify non-letter keys. I only stumbled upon <a href="https://stackoverflow.com/a/66573722/16383578"><code>cv2.waitKeyEx</code></a> by chance.</p>
<p>But this isn't quite what I wanted: it isn't fullscreen and doesn't have any GUI elements at all, so there is no timeline.</p>
<p>Now how do I add timeline to this thing?</p>
|
<python><opencv>
|
2023-06-05 06:22:51
| 1
| 3,930
|
Ξένη Γήινος
|
76,403,287
| 2,769,240
|
St.spinner doesn't show (most times)
|
<p>I have a text area on a streamlit app where I ask a user to enter the query.</p>
<p>Once they do that and click Button or press enter, a function call is made to give a response.</p>
<p>To let a user know that the response is getting processed, I want to add a spinner.</p>
<p>However, in many cases I see the spinner doesn't show on the app, and only the default "Running" indicator at the top right of Streamlit appears.</p>
<p>Here's the code I used for the same.</p>
<pre><code>with st.container():
    query = st.text_area("**Ask a question here:**", height=100, key="query_text")
    button = st.button("Submit", key="button")
    if query:
        with st.spinner('Fetching Answer...'):
            response = custom_qa.qa.run(query)
        st.info(response.strip())
</code></pre>
<p><strong>Edit: Full code</strong></p>
<pre><code>if uploaded_file is not None:
    display_pdf(uploaded_file)
    print(st.session_state.uploaded_file.name)

    if st.session_state.qa is None:
        with tempfile.NamedTemporaryFile(delete=False) as temp_file:
            temp_file.write(uploaded_file.getvalue())
            temp_file_path = temp_file.name
        print(temp_file_path)
        # Passing the doc for chunking, indexing and retrieving
        st.session_state.qa = call_custom_qna(temp_file_path, model)
        print(f'QA Status Post Fn Call: {st.session_state.qa}')

    custom_qa = st.session_state.qa

    with st.container():
        query = st.text_area("**Ask a question here:**", height=100, key="query_text")
        button = st.button("Submit", key="button")
        if query:
            with st.spinner('Fetching Answer...'):
                response = custom_qa.qa.run(query)
            st.info(response.strip())
</code></pre>
|
<python><streamlit>
|
2023-06-05 03:05:41
| 0
| 7,580
|
Baktaawar
|
76,403,216
| 2,539,954
|
How can I generate a sine wave with consistent "vibrato"
|
<p>I am trying to create a .wav file which contains a 440Hz sine wave tone, with 10Hz vibrato that varies the pitch between 430Hz and 450Hz. Something must be wrong with my approach, because when I listen to the generated .wav file, it sounds like the "amplitude" of the vibrato (e.g. the highest/lowest pitch reached by the peaks and troughs of the waveform of the vibrato) just progressively increases over time, instead of staying between 430-450Hz. What is wrong with my approach here? Here is some minimal python code which illustrates the issue:</p>
<pre><code>import math
import wave
import struct

SAMPLE_RATE = 44100
NOTE_PITCH_HZ = 440.0        # Note pitch, Hz
VIBRATO_HZ = 10.0            # Vibrato frequency, Hz
VIBRATO_VARIANCE_HZ = 10.0   # Vibrato +/- variance from note pitch, Hz
NOTE_LENGTH_SECS = 2.0       # Length of .wav file to generate, in seconds

NUM_SAMPLES = int(SAMPLE_RATE * NOTE_LENGTH_SECS)

# Generates a single point on a sine wave
def _sine_sample(freq: float, sine_index: int):
    return math.sin(2.0 * math.pi * float(freq) * (float(sine_index) / SAMPLE_RATE))

samples = []
for i in range(NUM_SAMPLES):
    # Generate sine point for vibrato, map to range -VIBRATO_VARIANCE_HZ:VIBRATO_VARIANCE_HZ
    vibrato_level = _sine_sample(VIBRATO_HZ, i)
    vibrato_change = vibrato_level * VIBRATO_VARIANCE_HZ

    # Modify note pitch based on vibrato state
    note_pitch = NOTE_PITCH_HZ + vibrato_change
    sample = _sine_sample(note_pitch, i) * 32767.0

    # Turn amplitude down to 80%
    samples.append(int(sample * 0.8))

# Create mono .wav file with a 2 second 440Hz tone, with 10Hz vibrato that varies the
# pitch by +/- 10Hz (between 430Hz and 450Hz)
with wave.open("vibrato.wav", "w") as wavfile:
    wavfile.setparams((1, 2, SAMPLE_RATE, NUM_SAMPLES, "NONE", "not compressed"))
    for sample in samples:
        wavfile.writeframes(struct.pack('h', sample))
</code></pre>
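<p>For contrast, here is the phase-accumulation approach I have seen described for frequency modulation: instead of computing sin(2π·f(t)·t), keep a running phase and advance it by the instantaneous frequency each sample. I am not asserting this is the fix, but it avoids the f(t)·t product growing with t:</p>

```python
import math

SAMPLE_RATE = 44100
NOTE_PITCH_HZ = 440.0
VIBRATO_HZ = 10.0
VIBRATO_VARIANCE_HZ = 10.0

phase = 0.0
samples = []
for i in range(SAMPLE_RATE):  # one second of audio
    # Instantaneous frequency stays within 430-450 Hz by construction.
    vibrato = VIBRATO_VARIANCE_HZ * math.sin(2.0 * math.pi * VIBRATO_HZ * i / SAMPLE_RATE)
    freq = NOTE_PITCH_HZ + vibrato
    # Advance the accumulated phase by this sample's frequency increment.
    phase += 2.0 * math.pi * freq / SAMPLE_RATE
    samples.append(math.sin(phase))
```

<p>The phase increment per sample is bounded by the instantaneous frequency, so the pitch excursion cannot grow over time the way the original f(t)·t formulation allows.</p>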
|
<python><signal-processing><waveform><sine-wave>
|
2023-06-05 02:33:41
| 2
| 1,297
|
Erik Nyquist
|
76,403,200
| 3,571,110
|
Interact with Chrony using Unix Sockets in Python
|
<p>I have a python application where I need to be able to dynamically add an NTP server to Chrony. From the command line I can do:</p>
<pre><code>sudo chronyc add server time.google.com
</code></pre>
<p>My understanding is that chronyc interacts with /var/run/chrony/chronyd.sock to dynamically change chronyd. Looking at the <a href="https://github.com/mlichvar/chrony/blob/master/client.c" rel="nofollow noreferrer">source code</a> I think I should be doing something like:</p>
<pre><code>import socket
client = socket.socket( socket.AF_UNIX, socket.SOCK_STREAM )
client.bind('/tmp/my_chrony_sock.sock')
client.connect('/var/run/chrony/chronyd.sock')
client.send(b'add server time.google.com\n')
data = client.recv(4096)
</code></pre>
<p>But that just hangs, never receiving a response.</p>
|
<python><unix-socket><chrony>
|
2023-06-05 02:26:20
| 1
| 677
|
proximous
|
76,403,155
| 1,008,636
|
Does it make sense to decorate a @staticmethod with an @lru_cache?
|
<p>I have this in Python 3:</p>
<pre><code>class MyClass:
    @lru_cache(maxsize=None)
    @staticmethod
    def run_expensive_computation() -> bool:
        return expensive_function("hard_coded_string")
</code></pre>
<p>Assuming:</p>
<ul>
<li><p>expensive_function() itself is NOT cached</p>
</li>
<li><p>"hard_coded_string" will never change, so there is no need to make it an input to <code>run_expensive_computation</code></p>
</li>
</ul>
<p>Is it a waste of time to put <code>@lru_cache(maxsize=None)</code> here if <code>run_expensive_computation</code> takes no inputs? What will be the key saved in the cache in this case?</p>
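<p>Ignoring the <code>@staticmethod</code> wrinkle for a moment, the caching behaviour of a zero-argument function can be checked empirically with <code>cache_info()</code>. As far as I can tell the key is built from the (empty) argument tuple, so there is exactly one cache entry; a quick sketch:</p>

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def zero_arg() -> int:
    """Stand-in for the expensive computation; counts real invocations."""
    global calls
    calls += 1
    return 42

zero_arg()          # first call: a cache miss, body runs
zero_arg()          # second call: a cache hit, body skipped
info = zero_arg.cache_info()
print(calls, info)  # body ran only once; one entry in the cache
```

<p>So the cache is not a waste: even with no inputs it prevents the expensive body from running more than once per process.</p>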
|
<python><python-3.x>
|
2023-06-05 02:08:26
| 3
| 3,245
|
user1008636
|
76,402,886
| 13,494,917
|
SQL Statement execution successful but not committing in python script
|
<p>I've got an SQL statement that executes just fine when I run it in SSMS:</p>
<pre class="lang-sql prettyprint-override"><code>DECLARE @sql NVARCHAR(max) = ''
SELECT @sql += ' Drop table ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME) + '; '
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
Exec Sp_executesql @sql
</code></pre>
<p>Basically just drops all tables in the database. However, when I go to try and execute this in a python script, it succeeds, but the tables don't actually end up dropping. I'm establishing a connection to my database with an account that has permissions to do so, so that shouldn't be a problem.</p>
<p>Here's my engine creation and the part of the code where I may be doing something wrong:</p>
<pre class="lang-py prettyprint-override"><code>engine_azure2 = create_engine(conn_str2,echo=True)
conn2 = engine_azure2.connect()
deleteQuery = text("DECLARE @sql NVARCHAR(max)='' SELECT @sql += ' Drop table ' + QUOTENAME(TABLE_SCHEMA) + '.'+ QUOTENAME(TABLE_NAME) + '; ' FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE' Exec Sp_executesql @sql")
conn2.execute(deleteQuery)
</code></pre>
<p>I've also tried encapsulating it inside of a BEGIN...END, no luck.</p>
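<p>My suspicion is a transaction issue rather than the SQL itself. The principle can be illustrated with stdlib <code>sqlite3</code> (not SQL Server, so only an analogy): work done on one connection is invisible to another until <code>commit()</code> is called.</p>

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn_a = sqlite3.connect(path)
conn_a.execute("CREATE TABLE t (x INTEGER)")
conn_a.commit()

conn_a.execute("INSERT INTO t VALUES (1)")   # executed, but NOT committed yet

conn_b = sqlite3.connect(path)               # a second, independent connection
before = conn_b.execute("SELECT COUNT(*) FROM t").fetchone()[0]

conn_a.commit()                              # now the insert becomes visible
after = conn_b.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(before, after)
```

<p>If the script uses SQLAlchemy 2.x (an assumption on my part), the equivalent step would be calling <code>conn2.commit()</code> after the execute, or running it inside <code>with engine_azure2.begin() as conn2:</code> so the transaction commits on exit.</p>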
|
<python><sql-server><pandas><sqlalchemy>
|
2023-06-05 00:02:50
| 0
| 687
|
BlakeB9
|
76,402,711
| 11,938,023
|
In Pandas or NumPy, how do you repeatedly reduce a row with XOR until all reductions collapse into a final bottom row?
|
<p>In Pandas or NumPy, how do you repeatedly reduce a row with XOR until all reductions collapse into a final bottom row?</p>
<p>I want to reduce my B column in a way that keeps XORing down each item until the final row: [1, 8, 6, 12, 1, 2], so I can save it in column C.</p>
<p>I tried using an apply loop, but this can get very expensive for large datasets. Does anyone have a shortcut or a better method than using a loop to create row after row, given that the logic is not the easiest to reduce to a single row?</p>
<p>Here is the data; the second, numbered version shows how I XOR each adjacent pair to obtain a new list and keep reducing down to a final result. This is quite slow, and I was looking for a better solution.</p>
<pre><code> A B C
0 12 2 0
1 10 6 0
2 2 8 0
3 9 11 0
4 5 12 0
5 0 5 0
6 4 4 0
for example column B looks like this with a .T transform:
0 1 2 3 4 5 6
A 12 10 2 9 5 0 4
B 2 6 8 11 12 5 4
so basically is I do these operations:
2 ^ 6. which is 4
6 ^ 8, which is 14
8 ^ 11 which is 3
11 ^ 12 which is 7
5 ^ 4 which is 1. which I store in a separate array because I only want the final result. So now I have
so is have [ 4, 14, 3, 7, 1 ] and a second array with [1]
4^14 = 10, 14^3=13 3^7=4, 7^1 =1
no I have to keep reducing
[10, 3, 4, 14, 8]
10^3 = 9 3^4 = 7, 4^14=8. and I manually move the last term to the new array which is now [1, 8]
I do this so forth so my array has all the final results and end up with
the final result of each iterative reduction: [1 , 8 , 6 , 12 ,1 ,2] as shown below:
A B
[[12 2]
[10 6] 4
[ 2 8] 14 10
[ 9 11] 3 3 9
[ 5 12] 7 4 7 14
[ 0 5] 9 14 10 13 3
[ 4 4] 1 8 6 12 1 2
so that the final out come is the last row in column C
A B C
0 12 2 1
1 10 6 8
2 2 8 6
3 9 11 12
4 5 12 1
5 0 5 2
6 4 4 0
</code></pre>
<p>To repeat: I tried using an apply function and a loop, but this gets very expensive for large datasets (I work with very large data sets; the above is just an example). Is there a shortcut or a better method than using a loop to create row after row?</p>
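<p>To make the slow procedure at least explicit, here is a vectorised sketch of what I am doing: each round XORs adjacent elements, and the last element of every round is collected. My hand-worked triangle above may contain arithmetic slips, so the example here uses a tiny array I worked out fresh:</p>

```python
import numpy as np

def xor_reduce(col: np.ndarray) -> list:
    """Repeatedly XOR adjacent elements, collecting the last element
    of each round until the array collapses to a single value."""
    a = col.copy()
    out = []
    while len(a) > 1:
        a = a[:-1] ^ a[1:]      # one pairwise-XOR round, vectorised
        out.append(int(a[-1]))  # keep the final element of this round
    return out

# [1, 2, 3] -> round 1: [1^2, 2^3] = [3, 1] (keep 1)
#           -> round 2: [3^1]     = [2]    (keep 2)
print(xor_reduce(np.array([1, 2, 3])))  # [1, 2]
```

<p>This still loops once per round (n-1 rounds), but each round is a single vectorised XOR instead of an element-wise apply.</p>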
|
<python><numpy>
|
2023-06-04 22:52:29
| 1
| 7,224
|
oppressionslayer
|
76,402,704
| 44,330
|
Why does Pandas not group weeks starting with the specified day?
|
<p>I want to group data by week (Sunday - Saturday) and sum each of the days in that week. For some reason, Pandas doesn't group the way I expect. Here is some sample data starting with Sunday Feb 12 2023, and ending with Saturday Feb 25 2023. I expect to pass in <code>pd.Grouper(freq='W')</code> or <code>pd.Grouper(freq='W-SUN')</code> but it doesn't use conventional weeks unless I pass in <code>freq='W-SAT'</code>.</p>
<p>What's going on?</p>
<pre><code>$ python
Python 3.8.12 | packaged by conda-forge | (default, Oct 12 2021, 21:50:56)
[Clang 11.1.0 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>>
>>> S = pd.Series([1,2,3,70,2,3,4,5,6,10,20,4,9,1],
... pd.DatetimeIndex(['2023-02-%d' % d for d in range(12,26)]))
>>> S
2023-02-12 1
2023-02-13 2
2023-02-14 3
2023-02-15 70
2023-02-16 2
2023-02-17 3
2023-02-18 4
2023-02-19 5
2023-02-20 6
2023-02-21 10
2023-02-22 20
2023-02-23 4
2023-02-24 9
2023-02-25 1
dtype: int64
>>> S.groupby(pd.Grouper(freq='W-SUN',origin='start_day')).sum()
2023-02-12 1
2023-02-19 89
2023-02-26 50
Freq: W-SUN, dtype: int64
>>> S.groupby(pd.Grouper(freq='W-SAT',origin='start_day')).sum()
2023-02-18 85
2023-02-25 55
Freq: W-SAT, dtype: int64
</code></pre>
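<p>To double-check what the frequency string means in isolation, a plain <code>date_range</code> is revealing: <code>'W-SUN'</code> labels weeks by the Sunday they <em>end</em> on, which is why Sunday 2023-02-12 lands in a bin of its own above.</p>

```python
import pandas as pd

# 'W-SUN' generates week-END anchors: every date produced is a Sunday,
# and each bin covers the week ending on (and including) that Sunday.
r = pd.date_range("2023-02-12", "2023-02-25", freq="W-SUN")
print(list(r.strftime("%Y-%m-%d")))  # the Sundays in the range
print(r.day_name().unique())
```

<p>So for weeks that <em>start</em> on Sunday, the bins must end on Saturday, which is exactly the <code>'W-SAT'</code> behaviour shown in the transcript.</p>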
|
<python><pandas><group-by>
|
2023-06-04 22:49:25
| 1
| 190,447
|
Jason S
|
76,402,530
| 13,916,049
|
TypeError: concat() takes 1 positional argument but 2 were given
|
<p>I want to concatenate the pandas dataframes in a column-wise manner and reset the index.</p>
<pre><code>import pandas as pd
import numpy as np
com_mut = kirc_mut.loc[common_samples]
com_mut = com_mut.sort_index()
com_mut = com_mut.T
com_mut = com_mut.dropna()
com_mut = com_mut.groupby(com_mut.index).first()
com_mut = com_mut.T
l=[com_mut[x].apply(pd.Series).stack() for x in com_mut.columns]
common_mut=pd.concat(l,1).reset_index(level=1,drop=True)
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [24], in <cell line: 9>()
7 com_mut = com_mut.T
8 l=[com_mut[x].apply(pd.Series).stack() for x in com_mut.columns]
----> 9 common_mut=pd.concat(l,1).reset_index(level=1,drop=True)
10 common_mut.columns=com_mut.columns
12 common_mut
TypeError: concat() takes 1 positional argument but 2 were given
</code></pre>
<p>Data:</p>
<p><code>common_samples[1:10]</code></p>
<pre><code>Index(['TCGA-A3-3367-01', 'TCGA-A3-3387-01', 'TCGA-B0-4698-01',
'TCGA-B0-4710-01', 'TCGA-B0-4810-01', 'TCGA-B0-4811-01',
'TCGA-B0-4815-01', 'TCGA-B0-4818-01', 'TCGA-B0-4821-01'],
dtype='object')
</code></pre>
<p><code>kirc_mut.iloc[0:9,0:9]</code></p>
<pre><code>pd.DataFrame({'A1BG': {'TCGA-A3-3313-01': 0,
'TCGA-A3-3313-01A.': 0,
'TCGA-A3-3317-01': 1,
'TCGA-A3-3319-01': 0,
'TCGA-A3-3319-01A.': 0,
'TCGA-A3-3320-01': 0,
'TCGA-A3-3320-01A.': 0,
'TCGA-A3-3331-01': 0,
'TCGA-A3-3346-01': 0},
'A1CF': {'TCGA-A3-3313-01': 0,
'TCGA-A3-3313-01A.': 0,
'TCGA-A3-3317-01': 0,
'TCGA-A3-3319-01': 0,
'TCGA-A3-3319-01A.': 0,
'TCGA-A3-3320-01': 0,
'TCGA-A3-3320-01A.': 0,
'TCGA-A3-3331-01': 0,
'TCGA-A3-3346-01': 0},
'A2M': {'TCGA-A3-3313-01': 0,
'TCGA-A3-3313-01A.': 1,
'TCGA-A3-3317-01': 0,
'TCGA-A3-3319-01': 0,
'TCGA-A3-3319-01A.': 1,
'TCGA-A3-3320-01': 0,
'TCGA-A3-3320-01A.': 0,
'TCGA-A3-3331-01': 0,
'TCGA-A3-3346-01': 0},
'A2ML1': {'TCGA-A3-3313-01': 0,
'TCGA-A3-3313-01A.': 0,
'TCGA-A3-3317-01': 0,
'TCGA-A3-3319-01': 0,
'TCGA-A3-3319-01A.': 0,
'TCGA-A3-3320-01': 0,
'TCGA-A3-3320-01A.': 0,
'TCGA-A3-3331-01': 0,
'TCGA-A3-3346-01': 0},
'A4GNT': {'TCGA-A3-3313-01': 0,
'TCGA-A3-3313-01A.': 0,
'TCGA-A3-3317-01': 0,
'TCGA-A3-3319-01': 0,
'TCGA-A3-3319-01A.': 0,
'TCGA-A3-3320-01': 1,
'TCGA-A3-3320-01A.': 0,
'TCGA-A3-3331-01': 0,
'TCGA-A3-3346-01': 0},
'AAAS': {'TCGA-A3-3313-01': 0,
'TCGA-A3-3313-01A.': 1,
'TCGA-A3-3317-01': 0,
'TCGA-A3-3319-01': 0,
'TCGA-A3-3319-01A.': 0,
'TCGA-A3-3320-01': 0,
'TCGA-A3-3320-01A.': 0,
'TCGA-A3-3331-01': 0,
'TCGA-A3-3346-01': 0},
'AADAC': {'TCGA-A3-3313-01': 0,
'TCGA-A3-3313-01A.': 0,
'TCGA-A3-3317-01': 0,
'TCGA-A3-3319-01': 0,
'TCGA-A3-3319-01A.': 0,
'TCGA-A3-3320-01': 0,
'TCGA-A3-3320-01A.': 0,
'TCGA-A3-3331-01': 0,
'TCGA-A3-3346-01': 0},
'AADACL3': {'TCGA-A3-3313-01': 0,
'TCGA-A3-3313-01A.': 0,
'TCGA-A3-3317-01': 0,
'TCGA-A3-3319-01': 0,
'TCGA-A3-3319-01A.': 0,
'TCGA-A3-3320-01': 1,
'TCGA-A3-3320-01A.': 0,
'TCGA-A3-3331-01': 0,
'TCGA-A3-3346-01': 0},
'AADACL4': {'TCGA-A3-3313-01': 0,
'TCGA-A3-3313-01A.': 0,
'TCGA-A3-3317-01': 0,
'TCGA-A3-3319-01': 0,
'TCGA-A3-3319-01A.': 0,
'TCGA-A3-3320-01': 1,
'TCGA-A3-3320-01A.': 0,
'TCGA-A3-3331-01': 0,
'TCGA-A3-3346-01': 0}})
</code></pre>
|
<python><pandas>
|
2023-06-04 21:46:26
| 2
| 1,545
|
Anon
|
76,402,313
| 14,620,854
|
best practice to update a field in list of dictionary of list and dictionary
|
<p>The input is a list of dictionaries, and I need to update a specific field with new values. There are repeated keys and values in the input, so I cannot distinguish the field to be picked by its name alone. Here is an example:</p>
<pre><code>test = [
    {
        "key": "prog",
        "value": {
            "instance[]": [
                {
                    "key": "name",
                    "value": {"string": "NAME"},
                    "verif": 22222222
                },
                {
                    "key": "user",
                    "value": {"string": "AAAAA"},
                    "verif": 22222222
                }
            ]
        },
        "verif": 22222222
    },
    {
        "key": "sytem_platform",
        "value": {"bool": "false"},
        "verif": 22222222
    },
    {
        "key": "system_beh",
        "value": {"bool": "true"},
        "verif": 22222222
    },
    {
        "key": "check_beh",
        "value": {"bool": "true"},
        "verif": 22222222
    },
    {
        "key": "order",
        "value": {
            "instance[]": [
                {
                    "key": "order_det",
                    "value": {
                        "instance[]": [
                            {
                                "key": "requires",
                                "value": {"string[]": []},
                                "verif": 22222222
                            },
                            {
                                "key": "status",
                                "value": {"string[]": []},
                                "verif": 22222222
                            },
                            {
                                "key": "system_status",
                                "value": {"string[]": ["sys1.out", "sys1.checking"]},
                                "verif": 22222222
                            },
                            {
                                "key": "weather_status",
                                "value": {
                                    "instance[]": [
                                        {
                                            "key": "humdiity",
                                            "value": {"double": 5.0},
                                            "verif": 22222222
                                        },
                                        {
                                            "key": "temp",
                                            "value": {"double": 70.0},
                                            "verif": 22222222
                                        }
                                    ]
                                },
                                "verif": 22222222
                            },
                            {
                                "key": "environment",
                                "value": {"string[]": []},
                                "verif": 22222222
                            }
                        ]
                    },
                    "verif": 22222222
                }
            ]
        },
        "verif": 22222222
    }
]
</code></pre>
<p>I am trying to update the following section of it, e.g. adding "aaaa" and "bbb":</p>
<pre><code>"instance[]": [
    {
        "key": "requires",
        "value": {
            "string[]": ["aaaa", "bbb"]
        },
</code></pre>
<p>I was wondering if there is any built-in library in Python that I can use for this goal?</p>
<p>I can filter it out and store it in a list like below, but I am not sure how I can update it.</p>
<pre><code>newl = [d for d in test if d["key"] == "order" ]
print(newl)
#which prints
[{'key': 'order', 'value': {'instance[]': [{'key': 'order_det', 'value': {'instance[]': [{'key': 'requires', 'value': {'string[]': []}, 'verif': 22222222}, {'key': 'status', 'value': {'string[]': []}, 'verif': 22222222}, {'key': 'system_status', 'value': {'string[]': ['sys1.out', 'sys1.checking']}, 'verif': 22222222}, {'key': 'weather_status', 'value': {'instance[]': [{'key': 'humdiity', 'value': {'double': 5.0}, 'verif': 22222222}, {'key': 'temp', 'value': {'double': 70.0}, 'verif': 22222222}]}, 'verif': 22222222}, {'key': 'environment', 'value': {'string[]': []}, 'verif': 22222222}]}, 'verif': 22222222}]}, 'verif': 22222222}]
</code></pre>
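<p>There is no stdlib helper dedicated to this nesting, but a small recursive walk gets close. This is only a sketch; the key-path convention (a list of <code>"key"</code> values to match from the outside in) is my own invention, and spurious matches elsewhere in the tree are not guarded against:</p>

```python
def update_by_keys(node, key_path, new_value):
    """Walk nested dicts/lists; when the chain of "key" fields matches
    key_path, replace the matched entry's "value" payload."""
    if isinstance(node, list):
        for item in node:
            update_by_keys(item, key_path, new_value)
    elif isinstance(node, dict):
        if node.get("key") == key_path[0]:
            if len(key_path) == 1:
                node["value"] = new_value       # found the target entry
            else:
                update_by_keys(node.get("value"), key_path[1:], new_value)
        else:
            # no "key" match here: keep searching inside the containers
            for child in node.values():
                if isinstance(child, (dict, list)):
                    update_by_keys(child, key_path, new_value)

# Tiny stand-in for the structure above
data = [{"key": "order", "value": {"instance[]": [
    {"key": "order_det", "value": {"instance[]": [
        {"key": "requires", "value": {"string[]": []}, "verif": 1}]}}]}}]

update_by_keys(data, ["order", "order_det", "requires"], {"string[]": ["aaaa", "bbb"]})
```

<p>Because the update is done in place, nothing needs to be stitched back together afterwards, unlike the filter-to-a-new-list approach.</p>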
|
<python><json><dictionary>
|
2023-06-04 20:34:15
| 1
| 4,021
|
Ashkanxy
|
76,402,289
| 6,884,119
|
Fuzzy Wuzzy: Obtain accurate scores
|
<p>I have a case where users post multiple messages daily to our work channel. Each message describes an issue that can relate to one of several applications that we support (Okta, Slack, Cloudflare, etc.). We have multiple solutions documented that can help these users, and I want our bot to send them the correct documentation automatically based on their messages.</p>
<p>I am using the string matching approach to achieve this with the <a href="https://pypi.org/project/fuzzywuzzy/" rel="nofollow noreferrer">FuzzyWuzzy</a> Python library.</p>
<p>I run the message posted by a user against the list of applications that we support, and whenever FuzzyWuzzy gives me a score of <code>> 90</code> I assume that this is the application being requested in the message. I am using the code below:</p>
<p><code>requests.py</code></p>
<pre><code>from enum import Enum

from fuzzywuzzy import fuzz

class RequestType(Enum):
    OKTA_ACCESS = "okta access"
    OKTA_UNLOCK = "okta unlock"
    CLOUDFLARE = "cannot connect to cloudflare"

class RequestsMatcher:
    def __init__(self, message: str) -> None:
        scores = [
            (
                type,
                fuzz.partial_token_set_ratio(message, type.value)
            )
            for type in RequestType
        ]

        mvs = [score for score in scores if score[1] > 75]
        hits = [score for score in scores if score[1] > 90]

        print(scores)

        self.scores = scores
        self.highest = mvs
        self.hit = next(iter(hits), None)
</code></pre>
<p>The issue with this code is that, for a sample message like the one below, both <code>cloudflare</code> and <code>okta</code> will give a score of <code>> 90</code>, when only Cloudflare should have returned a score of <code>> 90</code>:</p>
<pre><code> '''
Hello team, it looks like I am having an issue with Cloudflare. I have been added to the following Okta group.
Can someone look into this for me, please?
'''
</code></pre>
<p>Is there a better score calculation, or a better approach to determine which application is being requested in a support message?</p>
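<p>The failure mode can be reproduced without FuzzyWuzzy: any scorer that takes the <em>best</em> similarity over token pairs (which is roughly what <code>partial_token_set_ratio</code> does) saturates at 100 as soon as one token matches, so a message naming two products maxes out both phrases. A stdlib illustration with <code>difflib</code>:</p>

```python
from difflib import SequenceMatcher

def best_token_score(message: str, phrase: str) -> float:
    """Mimics the failure mode: take the best similarity over all
    (message word, phrase word) pairs -- one shared word saturates it."""
    mw = message.lower().replace(".", "").replace(",", "").split()
    pw = phrase.lower().split()
    return max(SequenceMatcher(None, a, b).ratio() for a in mw for b in pw)

msg = ("Hello team, it looks like I am having an issue with Cloudflare. "
       "I have been added to the following Okta group.")

print(best_token_score(msg, "okta access"))                   # 1.0 ("okta" matches)
print(best_token_score(msg, "cannot connect to cloudflare"))  # 1.0 ("cloudflare" matches)
```

<p>This suggests (untested assumption on my part) that whole-phrase scorers such as <code>fuzz.token_sort_ratio</code> over the full sentence, or an embedding-based classifier, would separate the two better than any best-single-token metric.</p>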
|
<python><python-3.x><nlp><string-matching><fuzzy-search>
|
2023-06-04 20:28:52
| 0
| 2,243
|
Mervin Hemaraju
|
76,402,242
| 5,881,882
|
Pytorch: how will next() behave for a list of DataLoaders of different length
|
<p>My data has several conditions A, B, C. I would like to do the following.</p>
<ul>
<li>Draw a sample for each condition</li>
<li>Draw a random sample from the full data set</li>
<li>Some training magic</li>
</ul>
<p>Thus, I would have in one batch something like</p>
<pre><code>[condition_A, condition_B, condition_C, random_sample]
</code></pre>
<p>I have created a dictionary of the form</p>
<pre><code>loader_dict = {
    cond_A: DataLoader(...Subset Magic...),
    cond_B: DataLoader(...Subset Magic...),
    cond_C: DataLoader(...Subset Magic...)
}

train_loader = DataLoader(...full dataset...)
</code></pre>
<p>Now during each epoch I would like to</p>
<ol>
<li>Get a batch from each of the 4 loaders</li>
<li>Process them in some net shenanigans</li>
</ol>
<p>Currently, I am a bit stuck on the 1st point.</p>
<p>My approach so far is</p>
<pre><code># get a list of form [loader_A, loader_B, loader_C]
train_loaders = list(zip(*loader_dict.values()))

for batch_idx, batch in enumerate(tqdm(train_loader)):
    condit_sample = [next(loader) for loader in train_loaders]
    # do something with torch.cat([batch, condit_sample])
</code></pre>
<p>Now I am not sure - will the <code>next()</code> call actually always just pick the first batch of the conditions loaders (<strong>not desired</strong>) or will it actually iterate through the samples of the conditions?</p>
<p>Also, my data has something like <code>50% condition_A, 35% condition_B, 15% condition_C</code></p>
<p>Thus, I wonder, whether my code would run e.g. through all 100 batches of the full dataset and repeat condition_A twice, condition_B nearly 3 times and condition_C 6 times? Or will the code just run through all samples of condition C and break down?</p>
<p>Currently, the multiple cycling through the conditional samples would suffice.</p>
<p>For later purposes, I would like to consider the following:</p>
<ul>
<li>just pick a really random sample (in each epoch something different) from the full dataset</li>
<li>cycle through all the conditional loader samples</li>
<li>terminate the epoch whenever the smallest condition sample is "cycled through"</li>
</ul>
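A minimal sketch of the cycling behaviour in question, using plain lists as stand-ins for the DataLoaders (note that a PyTorch DataLoader is an iterable, not an iterator, so `next()` needs an explicit `iter()`; `itertools.cycle` additionally restarts a short loader instead of raising `StopIteration`):

```python
from itertools import cycle

# Plain lists stand in for the conditional DataLoaders and the full loader.
loader_a = ['A0', 'A1']                  # 2 batches of condition A
loader_b = ['B0']                        # 1 batch of condition B
full_loader = ['F0', 'F1', 'F2', 'F3']   # 4 batches of the full data set

# cycle() wraps each short loader so it repeats until the full loader ends.
cond_iters = [cycle(loader_a), cycle(loader_b)]

epoch = []
for batch in full_loader:
    condit_sample = [next(it) for it in cond_iters]
    epoch.append((batch, *condit_sample))

print(epoch)
# [('F0', 'A0', 'B0'), ('F1', 'A1', 'B0'), ('F2', 'A0', 'B0'), ('F3', 'A1', 'B0')]
```

With real DataLoaders you would wrap each one in `cycle(...)` (or re-`iter()` it on `StopIteration`) so the shortest condition loader repeats rather than terminating the epoch.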
|
<python><pytorch><iterator><pytorch-dataloader>
|
2023-06-04 20:16:30
| 1
| 388
|
Alex
|
76,402,197
| 1,438,082
|
pygame event does not work when embedded within tkinter window
|
<p>The following code works perfectly on Windows but does not work on the Raspberry Pi. I'm looking for information as to why this is; I have scoured the web for a solution but cannot find one.</p>
<pre class="lang-py prettyprint-override"><code>from distutils.command.config import config
import logging
from logging.handlers import RotatingFileHandler
import os
from logging import root
import tkinter
import tkinter as tk
import tkinter.messagebox
import customtkinter
from threading import Thread
from tkinter import Label
from random import *
from itertools import count, cycle
from tkinter import *
from tkinter import ttk, Scrollbar
import platform
import pygame
class App(customtkinter.CTk):
    def __init__(self):
        super().__init__()
        tabview = customtkinter.CTkTabview(self, height=500, width=500)
        tabview.place(x=-5, y=-50, anchor=tkinter.NW)
        tabview.add("tab_main")
        tabview.set("tab_main")
        try:
            pygame_width = 400
            pygame_height = 400
            embed_pygame = tk.Frame(tabview.tab("tab_main"), width=pygame_width, height=pygame_height)
            embed_pygame.place(x=10, y=10)
            self.update()
            os.environ['SDL_WINDOWID'] = str(embed_pygame.winfo_id())
            system = platform.system()
            if system == "Windows":
                os.environ['SDL_VIDEODRIVER'] = 'windib'
            elif system == "Linux":
                os.environ['SDL_VIDEODRIVER'] = 'x11'
            display_surface = pygame.display.set_mode((pygame_width, pygame_height))
            display_surface.fill((0, 50, 0))
            pygame.init()
            background_surface = pygame.Surface((pygame_width, pygame_height))
            clock = pygame.time.Clock()
            def handle_tkinter_events():
                self.update_idletasks()
                self.update()
            while True:
                handle_tkinter_events()
                for event in pygame.event.get():
                    if event.type == pygame.QUIT:
                        exit()
                    elif event.type == pygame.MOUSEBUTTONDOWN:
                        print('Clicked on image')
                display_surface.blit(background_surface, (0, 0))
                pygame.display.update()
                clock.tick(60)
        except Exception as e:
            logging.critical("An exception occurred: {}".format(e))
            print("An exception occurred: {}".format(e))

if __name__ == "__main__":
    app = App()
    app.mainloop()
</code></pre>
|
<python><tkinter><pygame><customtkinter>
|
2023-06-04 20:03:30
| 1
| 2,778
|
user1438082
|
76,401,901
| 823,859
|
Tracing source of error in difflib due to very different string comparison
|
<p>I am processing a large amount of text data (11m rows), and get the error below. Is there a way I can trace the row of text that's causing this error?</p>
<p>My code:</p>
<pre><code>from difflib import ndiff # find differences between strings
import pandas as pd
from tqdm import tqdm # add a timer to pandas apply()
tqdm.pandas() # start timer
# read in all keystrokes
dat = pd.read_csv("all_ks_dat_good.csv", delimiter="|",
encoding="ISO-8859-1")
# use the ndiff function to find additions to strings, i.e. c[0]=='+'
def diff(x):
    s1 = str(x['last_text'])
    s2 = str(x['scrubbed_text'])
    l = [c[-1] for c in ndiff(s1, s2) if c[0] == '+']
    return ''.join(l)
# add a column for the additional keystrokes,
# using tqdm's progress_apply() instead of apply()
dat['add_ks'] = dat.progress_apply(diff, axis=1)
dat.to_csv('all_ks_word_dat.csv', sep="|", encoding="utf-8")
</code></pre>
<p>The abridged error:</p>
<pre><code> File "/home/goodkindan/.conda/envs/ks0/lib/python3.11/difflib.py", line 997, in _fancy_helper
yield from g
File "/home/goodkindan/.conda/envs/ks0/lib/python3.11/difflib.py", line 985, in _fancy_replace
yield from self._fancy_helper(a, best_i+1, ahi, b, best_j+1, bhi)
File "/home/goodkindan/.conda/envs/ks0/lib/python3.11/difflib.py", line 997, in _fancy_helper
yield from g
File "/home/goodkindan/.conda/envs/ks0/lib/python3.11/difflib.py", line 915, in _fancy_replace
cruncher = SequenceMatcher(self.charjunk)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/goodkindan/.conda/envs/ks0/lib/python3.11/difflib.py", line 182, in __init__
self.set_seqs(a, b)
File "/home/goodkindan/.conda/envs/ks0/lib/python3.11/difflib.py", line 194, in set_seqs
self.set_seq2(b)
File "/home/goodkindan/.conda/envs/ks0/lib/python3.11/difflib.py", line 248, in set_seq2
self.__chain_b()
File "/home/goodkindan/.conda/envs/ks0/lib/python3.11/difflib.py", line 288, in __chain_b
for elt in b2j.keys():
</code></pre>
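A hedged sketch of one way to locate the offending row: wrap the per-row function in a try/except that records `row.name` (the DataFrame index of the row being processed) instead of letting the whole `apply` die. The `diff` below is a hypothetical stand-in for the real ndiff-based function; it simply fails on one row so the tracing pattern can be demonstrated:

```python
import pandas as pd

# Hypothetical stand-in for the real ndiff-based diff(); it raises on one
# "bad" row purely to demonstrate the tracing pattern.
def diff(x):
    if 'bad' in str(x['last_text']):
        raise MemoryError('too many distinct elements')
    return str(x['scrubbed_text'])

dat = pd.DataFrame({
    'last_text': ['ok1', 'bad row', 'ok2'],
    'scrubbed_text': ['a', 'b', 'c'],
})

failures = []

def safe_diff(row):
    try:
        return diff(row)
    except Exception as e:
        # row.name is the index label of the offending row
        failures.append((row.name, repr(e), str(row['last_text'])[:80]))
        return ''

dat['add_ks'] = dat.apply(safe_diff, axis=1)
print(failures)
# [(1, "MemoryError('too many distinct elements')", 'bad row')]
```

With the real 11m-row data, the `failures` list (or a log file written inside the except branch) then tells you exactly which rows to inspect.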
|
<python><pandas><difflib>
|
2023-06-04 18:34:19
| 1
| 7,979
|
Adam_G
|
76,401,694
| 11,898,208
|
S3 Presigned URL for large files not working - python boto3
|
<p>I am using Cloudflare's R2 storage (S3-compatible), and the pre-signed URL for downloading files from a private bucket works well for files smaller than 1 GB. When the file size is more than 1 GB, it returns Access Denied.</p>
<p><strong>Code to generate pre-signed URL is here:</strong></p>
<pre><code>temp_link = s3.generate_presigned_url('get_object',Params={'Bucket': bucketName,'Key': File_key, 'ResponseContentDisposition': 'attachment'},ExpiresIn=3600*8)
</code></pre>
|
<python><amazon-s3><boto3><cloudflare>
|
2023-06-04 17:49:45
| 2
| 343
|
itsmehemant7
|
76,401,612
| 10,200,497
|
Groupby sequence of values in one column
|
<p>This is my dataframe:</p>
<pre><code>df = pd.DataFrame(
{
'a': [1, 10, 20, 30, 40, 90, 100, 200, 11],
'b': ['x', 'y', 'h', 'z', 'x', 'z', 'x', 'a', 'z']
}
)
</code></pre>
<p>And this is the way that I want to group it:</p>
<pre><code>0 1 x
1 10 y
2 20 h
3 30 z
4 40 x
5 90 z
6 100 x
7 200 a
8 11 z
</code></pre>
<p>I want to start grouping when there is x in column <code>b</code> and end the group when there is z in <code>b</code>. Obviously, I want to include everything that comes between x and z, as in the first group, for example.</p>
<p>I tried the answers of this <a href="https://stackoverflow.com/questions/73104011/groupby-streak-of-numbers-in-one-column-of-pandas-dataframe">question</a> but still couldn't solve the problem.</p>
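One possible approach (a sketch, assuming every group opens with an 'x'): start a new group id at each 'x' with a cumulative sum, then within each group keep rows up to and including its first 'z':

```python
import pandas as pd

df = pd.DataFrame({
    'a': [1, 10, 20, 30, 40, 90, 100, 200, 11],
    'b': ['x', 'y', 'h', 'z', 'x', 'z', 'x', 'a', 'z'],
})

# A new group starts at every 'x'; rows before the first 'x' get id 0.
group_id = df['b'].eq('x').cumsum()

# Within each group, keep rows up to and including its first 'z', so any
# stragglers between a 'z' and the next 'x' are dropped.
keep = df.groupby(group_id)['b'].transform(
    lambda s: s.eq('z').cumsum().shift(fill_value=0).eq(0)
)
mask = keep & group_id.gt(0)

out = {gid: g['b'].tolist() for gid, g in df[mask].groupby(group_id[mask])}
print(out)   # {1: ['x', 'y', 'h', 'z'], 2: ['x', 'z'], 3: ['x', 'a', 'z']}
```

The resulting grouper (`group_id[mask]`) can then feed any ordinary `groupby` aggregation.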
|
<python><pandas>
|
2023-06-04 17:27:26
| 2
| 2,679
|
AmirX
|
76,401,586
| 14,459,861
|
How to fix python regex if statement?
|
<p>I'm trying to find certain phrases in a data frame. These are the combinations of words I would like to find:</p>
<pre><code> apples
bananas
oranges
apples and bananas
apples and oranges
bananas and oranges
apples, bananas, and oranges
</code></pre>
<p>However, my code is not working for the cases where there are only 2 matching words. For instance, if a row contains <code>'apple, banana, and orange'</code>, my code will only output <code>'apple and banana'</code>.</p>
<p>This is my code:</p>
<pre><code>for i in copy.loc[fruits]:
print(i)
#if match apple
if re.match(r'((?=.*apple)|(?=.*apples))',i):
#if says no apple
if re.match(r'(\bno\b)',i) :
print('no apples, only banana and oranges')
print('')
#if has apple and orange
elif re.match(r'(?=.*orange)|(?=.*\boranges\b)',i):
print('apples and oranges')
print('')
#if has apple and banana
elif re.match(r'(?=.*banana)|(?=.*bananas)',i):
print('apples and bananas')
print('')
#has apple, banana, and orange
elif re.match(r'(?=.*banana)|(?=.*bananas)(?=.*orange)|(?=.*\boranges\b)',i):
print('apples, bananas, and oranges')
print('')
#has only apple
else:
print('apples')
print('')
#only oranges
elif re.match(r'(?=.*orange)|(?=.*\boranges\b)',i):
print('oranges')
print('')
#only banana
else:
print('bananas')
print('')
</code></pre>
<p>My code does not work when there are only 2 matching words. How can I fix this?</p>
<p>Thank you for taking the time to read and help out. I really appreciate it!</p>
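Part of the issue may be that `re.match` only anchors at the start of the string (while `re.search` scans all of it), and that a chain of `elif` branches stops at the first hit, so later combinations can never be reached. A hedged sketch that instead collects every matching fruit independently (the `describe` helper is hypothetical, just to illustrate the pattern):

```python
import re

text = 'I have apples, bananas, and oranges'

# re.match() only matches at the START of the string; re.search() scans it all.
print(bool(re.match(r'banana', text)))   # False
print(bool(re.search(r'banana', text)))  # True

def describe(row_text):
    # Collect each fruit independently instead of chaining elif branches,
    # so any combination (1, 2, or 3 fruits) falls out naturally.
    found = [fruit for fruit in ('apple', 'banana', 'orange')
             if re.search(rf'\b{fruit}s?\b', row_text)]
    return ', '.join(found) if found else 'none'

print(describe(text))                    # apple, banana, orange
print(describe('only apples here'))      # apple
```

A "no apples" check can then be layered on top as a separate `re.search(r'\bno\b', ...)` test rather than another branch of the same chain.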
|
<python><pandas><regex><if-statement>
|
2023-06-04 17:22:01
| 2
| 325
|
shorttriptomars
|
76,401,582
| 6,561,375
|
Why should window capture of Chrome return a black screen
|
<p>Why should taking a screenshot give a black window, particularly if the window is visible? That's a short way to pose this question.</p>
<p>I glued together some other Python recipes from SO to take a screenshot of just the visible window of a particular process. For instance, if I wanted to capture a DOS prompt I would pass the string command, and if I wanted to capture SO in a Chrome browser I would pass a string like stack. While this worked for most applications I tried, for Google Chrome I just had a black screen returned.
So I checked to see if it had a visible child window... it did, but capturing that didn't work either.
I don't understand at all why this failed. Can anyone help out? Since I'm using the underlying Windows API, I am lost as to why it should fail.</p>
<pre><code>import win32gui
import win32ui
import win32con
import time
import sys
from PIL import Image
from PIL import ImageChops
hw = []
bm = []
abc = 'abcdefghijklmnopqrstuvwxyz'
pos = 0
target = sys.argv[1]
print('target:',target)
baby = 0
def screenshot(hwnd):
    print(hex(hwnd), win32gui.GetWindowText(hwnd))
    l, t, r, b = win32gui.GetWindowRect(hwnd)
    h = b - t
    w = r - l
    hDC = win32gui.GetWindowDC(hwnd)
    myDC = win32ui.CreateDCFromHandle(hDC)
    newDC = myDC.CreateCompatibleDC()
    amap = win32ui.CreateBitmap()
    amap.CreateCompatibleBitmap(myDC, w, h)
    newDC.SelectObject(amap)
    win32gui.SetForegroundWindow(hwnd)
    time.sleep(0.05)
    newDC.BitBlt((0, 0), (w, h), myDC, (0, 0), win32con.SRCCOPY)
    amap.Paint(newDC)
    amap.SaveBitmapFile(newDC, abc[pos] + '.bmp')
    return 1

def winEnumHandler(hwnd, ctx):
    if win32gui.IsWindowVisible(hwnd):
        if len(win32gui.GetWindowText(hwnd)) > 2:
            print(hex(hwnd), win32gui.GetWindowText(hwnd))
            hw.append(hwnd)

def winEnumHandlerStrict(hwnd, ctx):
    if win32gui.IsWindowVisible(hwnd):
        if target.lower() in win32gui.GetWindowText(hwnd).lower():
            print(hex(hwnd), win32gui.GetWindowText(hwnd))
            hw.append(hwnd)
            l, t, r, b = win32gui.GetWindowRect(hwnd)
            print(str(l), str(t), str(r), str(b))

def enumBebe(hwnd, ctx):
    if win32gui.IsWindowVisible(hwnd):
        print('visible child', hwnd)
        baby = hwnd
        print(hex(hwnd), win32gui.GetWindowText(hwnd))
        l, t, r, b = win32gui.GetWindowRect(hwnd)
        print(str(l), str(t), str(r), str(b))
    else:
        print('invisible child', hwnd)

win32gui.EnumWindows(winEnumHandlerStrict, None)
print('\n---\n', hw)
win32gui.EnumChildWindows(hw[1], enumBebe, None)
print(time.time())
for x in range(0, 10):
    #pos = pos + screenshot(hw[1]) # command line is always first process to contain name of string we pass in
    pos = pos + screenshot(baby)
print(time.time())
image_one = Image.open(abc[0] + '.bmp')
image_two = Image.open(abc[6] + '.bmp')
diff = ImageChops.difference(image_one, image_two)
print(len(set(ImageChops.difference(image_one, image_two).getdata())))
</code></pre>
|
<python><windows><google-chrome><pywin32>
|
2023-06-04 17:21:09
| 0
| 791
|
SlightlyKosumi
|
76,401,543
| 489,088
|
Speeding up loop for reorganizing pandas DataFrame into numpy array using slicing throws exception - what am I missing?
|
<p>I have a pandas DataFrame like so:</p>
<pre><code>raw_data = DataFrame({
'date_idx': [0, 1, 2, 0, 1, 2],
'element_idx': [0, 0, 0, 1, 1, 1],
'a': [10, 20, 30, 40, 50, 60],
'b': [11, 21, 31, 41, 51, 61],
'c': [12, 22, 32, 42, 52, 62],
})
</code></pre>
<p>I call the columns other than <code>date_idx</code> and <code>element_idx</code> "inputs". I want to reorganize it into a 3d numpy array by <code>date_idx</code> -> <code>input_idx</code> -> <code>element_idx</code>, so that the result is like so:</p>
<pre><code>[[[10. 40.]
[11. 41.]
[12. 42.]]
[[20. 50.]
[21. 51.]
[22. 52.]]
[[30. 60.]
[31. 61.]
[32. 62.]]]
</code></pre>
<p>I did it with two for loops, and it works well:</p>
<pre><code>date_idx = [0, 1, 2, 0, 1, 2]
element_idx = [0, 0, 0, 1, 1, 1]
raw_data = DataFrame({
'date_idx': date_idx,
'element_idx': element_idx,
'a': [10.0, 20.0, 30.0, 40.0, 50.0, 60.0],
'b': [11.0, 21.0, 31.0, 41.0, 51.0, 61.0],
'c': [12.0, 22.0, 32.0, 42.0, 52.0, 62.0],
})
inputs = ['a', 'b', 'c']
unique_dates = set(date_idx)
unique_elements = set(element_idx)
data = np.zeros(shape=(len(unique_dates), len(inputs), len(unique_elements)), dtype=np.float64)
for i in range(len(raw_data)):
    row = raw_data.iloc[i]
    date_idx = int(row['date_idx'])
    element_idx = int(row['element_idx'])
    for input_idx in range(len(inputs)):
        data[date_idx][input_idx][element_idx] = float(row[inputs[input_idx]])
print(data)
</code></pre>
<p>However, this is very slow. I have millions of entries for the <code>date_idx</code> array, and dozens for both <code>inputs</code> and <code>element_idx</code>. It takes 7 hours on my machine for this to complete with my real data set.</p>
<p>I have a feeling this could be done with slicing, no loops, but my attempts always fail - I'm missing something.</p>
<p>For example, I tried to eliminate the inner loop with:</p>
<pre><code>for i in range(len(raw_data)):
row = raw_data.iloc[i]
date_idx = int(row['date_idx'])
element_idx = int(row['element_idx'])
data[date_idx][:][element_idx] = list(dict(row[inputs]).values())
</code></pre>
<p>And it fails with:</p>
<pre><code>Traceback (most recent call last):
File "/home/stark/Work/mmr6/test2.py", line 84, in <module>
data[date_idx][:][element_idx] = list(dict(row[inputs]).values())
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
ValueError: could not broadcast input array from shape (3,) into shape (2,)
</code></pre>
<p>My question is, can slicing and / or fast technique be used to reorganize this DataFrame in that fashion on the plain numpy array, or do I really need the loops here?</p>
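A possible loop-free version using NumPy fancy indexing: the per-row date and element indices, broadcast against a range of column indices, address all target cells in one vectorized assignment. A sketch on the toy data:

```python
import numpy as np
import pandas as pd

raw_data = pd.DataFrame({
    'date_idx': [0, 1, 2, 0, 1, 2],
    'element_idx': [0, 0, 0, 1, 1, 1],
    'a': [10.0, 20.0, 30.0, 40.0, 50.0, 60.0],
    'b': [11.0, 21.0, 31.0, 41.0, 51.0, 61.0],
    'c': [12.0, 22.0, 32.0, 42.0, 52.0, 62.0],
})
inputs = ['a', 'b', 'c']

n_dates = raw_data['date_idx'].nunique()
n_elements = raw_data['element_idx'].nunique()
data = np.zeros((n_dates, len(inputs), n_elements))

# Each DataFrame row supplies len(inputs) cells; broadcasting the (rows, 1)
# date/element index arrays against the (1, inputs) column-index range writes
# them all at once, with no Python-level loop.
d = raw_data['date_idx'].to_numpy()[:, None]      # shape (rows, 1)
e = raw_data['element_idx'].to_numpy()[:, None]   # shape (rows, 1)
i = np.arange(len(inputs))[None, :]               # shape (1, inputs)
data[d, i, e] = raw_data[inputs].to_numpy()       # values: shape (rows, inputs)

print(data)
```

On millions of rows this replaces the 7-hour Python loop with a handful of array operations; the cost is essentially one pass over the data.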
|
<python><arrays><pandas><dataframe><numpy>
|
2023-06-04 17:12:09
| 1
| 6,306
|
Edy Bourne
|
76,401,474
| 18,972,785
|
How to implement SIR model for node ranking in graph?
|
<p>I am working on node ranking in complex networks. I have heard about the SIR (susceptible-infected-recovered) model on graphs, which measures the infection power of each node as its spreader and influence rank (a node that infects many nodes has a higher spreader rank). I saw several SIR source codes, and even a Python library called ndlib for the SIR model, but these implementations do not rank the nodes based on their infection power; they just simulate the SIR process.</p>
<p>How can I sort nodes of a graph based on the power of node in infecting other nodes?</p>
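One common approach (a sketch, not the ndlib API): run many SIR simulations seeded at each node in turn, and rank nodes by the average outbreak size. The `sir_spread` helper below is hypothetical, pure Python, using a discrete-time SIR where every infected node recovers after exactly one step:

```python
import random

def sir_spread(adj, seed, beta=0.5, trials=200):
    """Average outbreak size (ever-infected nodes) when `seed` starts infected.

    Hypothetical helper: each infected node transmits to each susceptible
    neighbour with probability `beta`, then recovers after one step.
    """
    rng = random.Random(42)  # fixed seed for a reproducible ranking
    total = 0
    for _ in range(trials):
        infected, recovered = {seed}, set()
        while infected:
            newly = {v for u in infected for v in adj[u]
                     if v not in infected and v not in recovered
                     and rng.random() < beta}
            recovered |= infected
            infected = newly - recovered
        total += len(recovered)
    return total / trials

# Toy star graph: the hub (node 0) should out-rank the leaves.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
ranking = sorted(adj, key=lambda n: sir_spread(adj, n), reverse=True)
print(ranking[0])   # 0 -- the hub
```

For a real network you would build `adj` from your graph (e.g. from a networkx adjacency view) and increase `trials` until the ranking stabilizes.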
|
<python><graph-theory>
|
2023-06-04 16:51:19
| 0
| 505
|
Orca
|
76,401,453
| 525,865
|
gathering data from clutch.io : some issues with BS4 while working on colab
|
<p><strong>update:</strong> what about Selenium support in Colab? I have checked this; see below!</p>
<p><strong>update 2:</strong> thanks to badduker and his reply with the Colab workaround and results, I have tried to add some more code in order to parse some of the results:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
import pandas as pd
options = Options()
options.add_argument("--headless")
options.add_argument("--no-sandbox")
driver = webdriver.Chrome(options=options)
driver.get("https://clutch.co/it-services/msp")
page_source = driver.page_source
driver.quit()
soup = BeautifulSoup(page_source, "html.parser")
# Extract the data using some BeautifulSoup selectors
# For example, let's extract the names and locations of the companies
company_names = [name.text for name in soup.select(".company-name")]
company_locations = [location.text for location in soup.select(".locality")]
# Store the data in a Pandas DataFrame
data = {
"Company Name": company_names,
"Location": company_locations
}
df = pd.DataFrame(data)
# Save the DataFrame to a CSV file
df.to_csv("clutch_data.csv", index=False)
</code></pre>
<p>but this leads to no results.</p>
<p>I will try to dig deeper into that, but probably in a new thread. Thank you, dear badduker.</p>
<p>End of the second update, written on June 22nd in Malaga.</p>
<p>Good day, dear experts. At the moment I am trying to figure out a simple way to obtain data from clutch.io.</p>
<p>Note: I work with Google Colab, and I think some approaches are not supported in my Colab environment, partly due to Cloudflare-related issues.</p>
<p>but see this one -</p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = 'https://clutch.co/it-services/msp'
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')
links = []
for l in soup.find_all('li', class_='website-link website-link-a'):
    results = l.a.get('href')
    links.append(results)
print(links)
</code></pre>
<p>This also does not work. Do you have any idea how to solve the issue?</p>
<p>It gives back an empty result.</p>
<p>Update: hello dear user510170, many thanks for the answer and the Selenium solution. I tried it out in Google Colab and got the following results:</p>
<pre><code>--------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
<ipython-input-2-4f37092106f4> in <cell line: 4>()
2 from selenium import webdriver
3
----> 4 driver = webdriver.Chrome()
5
6 url = 'https://clutch.co/it-services/msp'
5 frames
/usr/local/lib/python3.10/dist-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
243 alert_text = value["alert"].get("text")
244 raise exception_class(message, screen, stacktrace, alert_text) # type: ignore[call-arg] # mypy is not smart enough here
--> 245 raise exception_class(message, screen, stacktrace)
WebDriverException: Message: unknown error: cannot find Chrome binary
Stacktrace:
#0 0x56199267a4e3 <unknown>
#1 0x5619923a9c76 <unknown>
#2 0x5619923d0757 <unknown>
#3 0x5619923cf029 <unknown>
#4 0x56199240dccc <unknown>
#5 0x56199240d47f <unknown>
#6 0x561992404de3 <unknown>
#7 0x5619923da2dd <unknown>
#8 0x5619923db34e <unknown>
#9 0x56199263a3e4 <unknown>
#10 0x56199263e3d7 <unknown>
#11 0x561992648b20 <unknown>
#12 0x56199263f023 <unknown>
#13 0x56199260d1aa <unknown>
#14 0x5619926636b8 <unknown>
#15 0x561992663847 <unknown>
#16 0x561992673243 <unknown>
#17 0x7efc5583e609 start_thread
</code></pre>
<p>To me it seems to have to do with line 4:</p>
<pre><code> ----> 4 driver = webdriver.Chrome()
</code></pre>
<p>Is it this line that needs a minor correction?</p>
<p>Update: thanks to tarun, I learned of this workaround:</p>
<p><a href="https://medium.com/cubemail88/automatically-download-chromedriver-for-selenium-aaf2e3fd9d81" rel="nofollow noreferrer">https://medium.com/cubemail88/automatically-download-chromedriver-for-selenium-aaf2e3fd9d81</a></p>
<p>I did it: in other words, I applied it in Google Colab and tried to run the following:</p>
<pre><code>from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
#if __name__ == "__main__":
browser = webdriver.Chrome(ChromeDriverManager().install())
browser.get("https://www.reddit.com/")
browser.quit()
</code></pre>
<p>well - finally it should be able to run with this code in colab:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = 'https://clutch.co/it-services/msp'
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')
links = []
for l in soup.find_all('li', class_='website-link website-link-a'):
    results = l.a.get('href')
    links.append(results)
print(links)
</code></pre>
<p>Update: see below for the check in Colab, and the question: is Colab generally Selenium-capable and Selenium-ready?</p>
<p><a href="https://i.sstatic.net/HVpUZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HVpUZ.png" alt="enter image description here" /></a></p>
<p>I look forward to hearing from you.</p>
<p>thanks to @user510170 who has pointed me to another approach :<a href="https://stackoverflow.com/questions/51046454/">How can we use Selenium Webdriver in colab.research.google.com?</a></p>
<p>Recently Google collab was upgraded and since Ubuntu 20.04+ no longer distributes chromium-browser outside of a snap package, you can install a compatible version from the Debian buster repository:</p>
<p>Then you can run selenium like this:</p>
<pre><code>from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.headless = True
wd = webdriver.Chrome('chromedriver',options=chrome_options)
wd.get("https://www.webite-url.com")
</code></pre>
<p>cf this thread <a href="https://stackoverflow.com/questions/51046454/">How can we use Selenium Webdriver in colab.research.google.com?</a></p>
<p>I need to try this out... on Colab.</p>
|
<python><web-scraping><beautifulsoup><python-requests>
|
2023-06-04 16:47:47
| 2
| 1,223
|
zero
|
76,401,428
| 4,115,031
|
How can I auto-format a docstring in PyCharm?
|
<p>I frequently have a situation where I'll write a paragraph in
a docstring for a function, then come back later and want to add to some part of the middle of the paragraph, and that will make me have to re-adjust the rest of the lines to have them be the proper length (not too long, not too short). This feels like something that should have an automation available for it.</p>
<p>I guess the trickiest part would be that the plugin would need to understand when to take words off of one line and add them to the beginning of the next line.</p>
<p>Is there some plugin or built-in action that will do this for me?</p>
|
<python><pycharm>
|
2023-06-04 16:40:51
| 1
| 12,570
|
Nathan Wailes
|
76,401,348
| 1,537,093
|
scipy curve_fit does not find the best fit
|
<p>I'd like to find out whether there is a peak in my time series or not. To do so, I'm trying to fit the data points to a gauss curve. It works well for thousands of my samples:</p>
<p><a href="https://i.sstatic.net/dBjR2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dBjR2.png" alt="good fit" /></a></p>
<p>but a few are not fitted properly despite there being an obvious peak:
<a href="https://i.sstatic.net/RizgN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RizgN.png" alt="wrong fit" /></a></p>
<p>(see the very low peak with the highest point around 0.03)</p>
<p>Here's the code:</p>
<pre><code>def gauss(x, a, x0, sigma):
return a * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))
param, covariance = curve_fit(gauss, x, y)
</code></pre>
<p>I've noticed that the magnitude of the y values plays a role in the fitting, so I've rescaled the data into the <0, 100> interval. This helped but did not solve all the cases. Is there anything else I can do to improve the fitting? A different initialization? A smaller optimization step?</p>
<p>Here are some facts about the data:</p>
<ul>
<li>Every sample has 3-20 data points.</li>
<li>The peak (if there is any) must have its highest point inside the span.</li>
<li>x axis is from 0 to 20</li>
<li>y axis is from 0 to 100</li>
</ul>
<p>I've browsed through other similar questions on Stack Overflow but did not find a solution to my problem.</p>
<p>If anybody knows a better solution to determine whether there is a peak in the time series or not, I'd appreciate hearing it in the comments. Anyway, I'd like to know why some curves are not fitted properly.</p>
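As a hedged illustration of the "different initialization" idea: `curve_fit` accepts a `p0` argument (its default starting point is all ones), and seeding it from the data (peak height, location of the maximum, a width on the order of the x-range) often rescues low-amplitude peaks. The data below are synthetic, standing in for one of the awkward samples:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, x0, sigma):
    return a * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))

# Synthetic low-amplitude peak on the stated domain (x in 0..20).
rng = np.random.default_rng(0)
x = np.linspace(0, 20, 15)
y = gauss(x, 3.0, 12.0, 1.5) + rng.normal(0, 0.05, x.size)

# Data-driven starting point instead of the default p0 = [1, 1, 1]:
# peak height, location of the maximum, and roughly a quarter of the x-range.
p0 = [y.max(), x[np.argmax(y)], (x.max() - x.min()) / 4]
param, covariance = curve_fit(gauss, x, y, p0=p0, maxfev=10000)
print(param)
```

Because the Gaussian's amplitude and width interact strongly, a default-initialized fit can settle in a flat region of the loss; a data-driven `p0` starts the optimizer close enough to the real peak to avoid that.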
|
<python><scipy><curve-fitting>
|
2023-06-04 16:19:28
| 3
| 1,081
|
dpelisek
|
76,401,196
| 6,171,575
|
Create one row DataFrame from dict, where one of value is a list
|
<p>A little problem that leads me to two (or even three) questions, and I have some difficulty googling the answer.</p>
<p>Here the very simple example :</p>
<pre><code>>>> import pandas as pd
>>> my_list = [1,2,3]
>>> year = 2023
>>> item = 'obj'
>>> res_dict = {"year" : year, "values" : my_list, "name" : item }
>>> print(res_dict)
{'year': 2023, 'values': [1, 2, 3], 'name': 'obj'}
>>> df = pd.DataFrame(res_dict)
>>> print(df)
year values name
0 2023 1 obj
1 2023 2 obj
2 2023 3 obj
</code></pre>
<p>My problem is that I want a slightly different DataFrame.</p>
<p>My first idea was to create a DataFrame where in values we store a list.
So something like this :</p>
<pre><code> year values name
0 2023 [1, 2, 3] obj
</code></pre>
<p>But, and here is the first question: it seems that while it's possible to put a list into a cell of a DataFrame, it's not really a good idea.
If so...</p>
<p>The second question: how could I instead create a DataFrame with a column for each element of my list?
To get something like this:</p>
<pre><code> year values1 values2 values3 name
0 2023 1 2 3 obj
</code></pre>
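A sketch of both variants on the example data (hedged, but both are standard pandas): wrapping the dict in a list makes pandas treat it as one row, which keeps the list in a single cell; flattening the list into numbered keys gives one column per element:

```python
import pandas as pd

my_list = [1, 2, 3]
res_dict = {"year": 2023, "values": my_list, "name": "obj"}

# Question 1: a list of dicts = one row per dict, so the whole list lands
# in a single cell (possible, if rarely ideal).
df_one_row = pd.DataFrame([res_dict])

# Question 2: flatten the list into numbered keys for one column each.
wide = {"year": res_dict["year"]}
wide.update({f"values{i + 1}": v for i, v in enumerate(res_dict["values"])})
wide["name"] = res_dict["name"]
df_wide = pd.DataFrame([wide])

print(df_one_row)
print(df_wide)
```

The difference from `pd.DataFrame(res_dict)` is the outer list: without it, pandas interprets the list-valued entry as column data and broadcasts the scalars, producing the three-row frame shown above.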
|
<python><pandas><dataframe>
|
2023-06-04 15:42:59
| 2
| 577
|
Paul Zakharov
|
76,401,091
| 13,793,478
|
cant pass data between Django and Chartjs
|
<p>I wanted to pass an array to Chart.js for a chart. The data is getting passed, but for some strange reason it gets changed: the array contains days of the month [1,2,3, and so on], but the chart displays "[ 0 1 3 5 7 9 0 so on ]".</p>
<p>The data is correct when I print it to the console.</p>
<pre><code> return render(request, 'notes/home.html',{
'day':daysOfMonth,
})
</code></pre>
<pre><code> <div class="mainChart">
<canvas id="myChart"></canvas>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
const ctx = document.getElementById('myChart');
const days = '{{day}}';
new Chart(ctx, {
type: 'bar',
data: {
labels: days,
datasets: [{
// label: '# of Votes',
data: [1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10,],
borderWidth: 10
}]
},
options: {
scales: {
y: {
beginAtZero: true
}
}
}
});
</script>
</div>
</code></pre>
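The likely cause is that `const days = '{{day}}';` renders the Python list's repr inside string quotes, so Chart.js receives a string and iterates it character by character. A small Python demonstration of the effect, plus the serialization step a fix would typically use (the template-side half, e.g. `JSON.parse` or Django's `json_script` filter, is an assumption to adapt to your setup):

```python
import json

days_of_month = [1, 2, 3, 4, 5]

# What '{{day}}' hands to the template: the list's repr, wrapped in quotes --
# a string, which Chart.js then iterates per character as its labels.
as_template_string = str(days_of_month)
print(list(as_template_string)[:5])   # ['[', '1', ',', ' ', '2']

# A fix typically serializes in the view and parses in the template.
payload = json.dumps(days_of_month)
assert json.loads(payload) == days_of_month
```

In the view that would mean passing `json.dumps(daysOfMonth)` (or using `{{ day|json_script:"day-data" }}` in the template) so the JavaScript side ends up with a real array rather than a string.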
|
<python><django>
|
2023-06-04 15:20:41
| 1
| 514
|
Mt Khalifa
|
76,400,975
| 14,210,773
|
Polars memory usage as compared to {data.table}
|
<p>Fairly new to python-<code>polars</code>.</p>
<p>How does it compare to R's <code>{data.table}</code> package in terms of memory usage?</p>
<p>How does it handle shallow copying?</p>
<p>Is in-place/by reference updating possible/the default?</p>
<p>Are there any recent benchmarks on memory efficiency of the big 4 in-mem data wrangling libs (polars vs data.table vs pandas vs dplyr)?</p>
|
<python><r><data.table><python-polars>
|
2023-06-04 14:51:32
| 1
| 420
|
persephone
|
76,400,717
| 20,920,790
|
How to improve values annotation on sns.lineplot?
|
<p>I've got this data:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">mentor_cnt</th>
<th style="text-align: right;">mentee_cnt</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">7</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">18</td>
<td style="text-align: right;">19</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: right;">24</td>
<td style="text-align: right;">27</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: right;">37</td>
<td style="text-align: right;">35</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td style="text-align: right;">53</td>
<td style="text-align: right;">56</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td style="text-align: right;">63</td>
<td style="text-align: right;">70</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td style="text-align: right;">86</td>
<td style="text-align: right;">89</td>
</tr>
<tr>
<td style="text-align: right;">9</td>
<td style="text-align: right;">102</td>
<td style="text-align: right;">114</td>
</tr>
<tr>
<td style="text-align: right;">10</td>
<td style="text-align: right;">135</td>
<td style="text-align: right;">149</td>
</tr>
<tr>
<td style="text-align: right;">11</td>
<td style="text-align: right;">154</td>
<td style="text-align: right;">169</td>
</tr>
<tr>
<td style="text-align: right;">12</td>
<td style="text-align: right;">174</td>
<td style="text-align: right;">202</td>
</tr>
<tr>
<td style="text-align: right;">13</td>
<td style="text-align: right;">232</td>
<td style="text-align: right;">287</td>
</tr>
<tr>
<td style="text-align: right;">14</td>
<td style="text-align: right;">298</td>
<td style="text-align: right;">386</td>
</tr>
<tr>
<td style="text-align: right;">15</td>
<td style="text-align: right;">343</td>
<td style="text-align: right;">475</td>
</tr>
<tr>
<td style="text-align: right;">16</td>
<td style="text-align: right;">384</td>
<td style="text-align: right;">552</td>
</tr>
<tr>
<td style="text-align: right;">17</td>
<td style="text-align: right;">446</td>
<td style="text-align: right;">684</td>
</tr>
<tr>
<td style="text-align: right;">18</td>
<td style="text-align: right;">509</td>
<td style="text-align: right;">883</td>
</tr>
<tr>
<td style="text-align: right;">19</td>
<td style="text-align: right;">469</td>
<td style="text-align: right;">757</td>
</tr>
</tbody>
</table>
</div>
<p>This is my code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
fig, ax = plt.subplots(figsize=(10, 6))
sns.lineplot(x=df['month'], y=df['mentor_cnt'], color = 'b', label='Mentors', markers=True, marker='o')
sns.lineplot(x=df['month'], y=df['mentee_cnt'], color = 'r', label='Mentee', markers=True, marker='o')
plt.ylim(min(df['mentee_cnt'])-60, max(df['mentee_cnt'])+50)
columns =['mentor_cnt', 'mentee_cnt']
colors = ['b', 'w']
facecolors = ['none', 'r']
x_position = [0, -10]
y_position = [-18, 5]
for col, fc, c, x_p, y_p in zip(columns, facecolors, colors, x_position, y_position):
    for x, y in zip(df['month'], df[col]):
        label = '{:.0f}'.format(y)
        plt.annotate(
            label,
            (x, y),
            textcoords='offset points',
            xytext=(x_p, y_p),
            ha='center',
            color=c
        ).set_bbox(dict(facecolor=fc, alpha=0.5, boxstyle='round', edgecolor='none'))
ax.legend()
plt.xlabel('Month')
plt.ylabel('Number of persons')
plt.title('Mentee, mentor')
plt.show()
</code></pre>
<p>Output:
<a href="https://i.sstatic.net/0VTlq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0VTlq.png" alt="plt.show" /></a></p>
<p>The line for mentors looks fine to me, but I can't figure out how to make the values for mentees look better.
Is there any way to set the best positions of the values automatically?
With set_bbox I tried to make the labels readable, but I think they look terrible.</p>
<p>I improved my graph this way; maybe it'll help someone (thanks to Trenton McKinney):</p>
<pre><code>fig, ax = plt.subplots(figsize=(10, 6))
sns.lineplot(x=df['month'], y=df['mentor_cnt'], color='b', label='Mentors', markers=True, marker='o')
sns.lineplot(x=df['month'], y=df['mentee_cnt'], color='r', label='Mentee', markers=True, marker='o')
plt.ylim(min(df['mentee_cnt'])-60, max(df['mentee_cnt'])+50)
columns = ['mentor_cnt', 'mentee_cnt']
colors = ['b', 'r']
facecolors = ['none', 'r']
x_position = [0, -15]
y_position = [-18, 3]
for col, fc, c, x_p, y_p in zip(columns, facecolors, colors, x_position, y_position):
    for x, y in zip(df['month'], df[col]):
        # describe_nearest - list of quantiles for the data
        lst = [0, 0.25, 0.5, 0.75, 1]
        describe_nearest = [df[col].quantile(el, interpolation='nearest') for el in lst]
        describe_nearest.append(df[col].values[-1])
        # add annotation if value is in the quantiles list
        if y in describe_nearest:
            label = '{:.0f}'.format(y)
            plt.annotate(
                label,
                (x, y),
                textcoords='offset points',
                xytext=(x_p, y_p),
                ha='center',
                color=c
            )
ax.legend()
plt.xlabel('Month')
plt.ylabel('Number of persons')
plt.title('Mentee, mentor')
plt.show()
</code></pre>
<p>Output:
<a href="https://i.sstatic.net/WdXX6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WdXX6.png" alt="new output" /></a></p>
|
<python><matplotlib><seaborn><plot-annotations>
|
2023-06-04 13:48:36
| 0
| 402
|
John Doe
|
76,400,690
| 1,691,599
|
How to use tabulate.SEPARATING_LINE when passing data as a list of dictionaries?
|
<p>I'm trying to insert a SEPARATING_LINE in a tabulate in Python.</p>
<p>The example with lists works perfectly:</p>
<pre><code>import tabulate

print(tabulate.tabulate([["A", 200], ["B", 100], tabulate.SEPARATING_LINE, ["Total", 4]], headers=["Col1", "Col2"]))
</code></pre>
<p>yields:</p>
<pre><code>Col1    Col2
------  ------
A       200
B       100
------  ------
Total   4
</code></pre>
<p>However while this works:</p>
<pre><code>print(
    tabulate.tabulate(
        [
            {"Col1": "A", "Col2": 200},
            {"Col1": "B", "Col2": 100},
            # tabulate.SEPARATING_LINE,
            {"Col1": "TOTAL", "Col2": 200},
        ],
        headers="keys",
    )
)
</code></pre>
<p>yielding:</p>
<pre><code>Col1    Col2
------  ------
A       200
B       100
TOTAL   200
</code></pre>
<p>I cannot insert the tabulate.SEPARATING_LINE as is:</p>
<pre><code>Traceback (most recent call last):
File "/home/raul/python-playground/tabulates1.py", line 8, in <module>
tabulate.tabulate(
File "/home/raul/.venv/lib/python3.11/site-packages/tabulate/__init__.py", line 2048, in tabulate
list_of_lists, headers = _normalize_tabular_data(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raul/.venv/lib/python3.11/site-packages/tabulate/__init__.py", line 1409, in _normalize_tabular_data
for k in row.keys():
^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
</code></pre>
<p>Is <code>tabulate.SEPARATING_LINE</code> actually supported with lists of dicts? Thanks.</p>
<p>I expected to get in the dict-based tabulate the same result as in the list-based one: a separating line (a line with dashes).</p>
|
<python><python-tabulate>
|
2023-06-04 13:40:59
| 1
| 3,700
|
Raúl Salinas-Monteagudo
|
76,400,682
| 6,521,314
|
Convert json directly into dictionary of pandas dataframes
|
<p>I have json files which look like a dictionary of a list of similar dictionaries:</p>
<pre><code>{"People":[{"FirstName":"Max","Surname":"Smith"},{"FirstName":"Jane","Surname":"Smart"}],
"Animals":[{"Breed":"Cat","Name":"WhiteSocks"},{"Breed":"Dog","Name":"Zeus"}]}
</code></pre>
<p>I'm using the following code to convert this into a dictionary of pandas dataframes:</p>
<pre><code>import pandas as pd
import json

# Read the json file
jsonFile = 'exampleJson.json'
with open(jsonFile) as j:
    data = json.load(j)

# Convert it to a dictionary of dataframes
dfDict = {}
for dfName, dfContents in data.items():
    dfDict[dfName] = pd.DataFrame(dfContents)
    display(dfDict[dfName])
</code></pre>
<p>The above code gives me exactly what I want, which is a dictionary of dataframes. However it seems rather inefficient. Is there a way to read the json <strong>directly</strong> into a dictionary of dataframes, rather than reading it into a json object first and then copying that into a dictionary of dataframes? The files I'm working with will be huge.</p>
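<p>For what it's worth, the explicit loop can at least be collapsed into a dict comprehension; as far as I can tell, parsing the JSON once is unavoidable. A sketch on inline data:</p>

```python
import json
import pandas as pd

raw = (
    '{"People":[{"FirstName":"Max","Surname":"Smith"},{"FirstName":"Jane","Surname":"Smart"}],'
    '"Animals":[{"Breed":"Cat","Name":"WhiteSocks"},{"Breed":"Dog","Name":"Zeus"}]}'
)
# One dataframe per top-level key, built in a single pass over the parsed dict.
dfDict = {name: pd.DataFrame(records) for name, records in json.loads(raw).items()}
print(dfDict["People"])
```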
|
<python><json><pandas><dictionary>
|
2023-06-04 13:38:15
| 1
| 308
|
Michael
|
76,400,615
| 2,647,447
|
how to avoid over lapping legends and pie chart
|
<p>I am trying to draw a pie chart using python <code>matplotlib</code>. How to position the legend and pie chart so they are not overlapping each other. My code is as below:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np

y = np.array([1, 117, 11, 5])
mylabels = ["Complete", "Pass", "Fail", "Block"]
mycolors = ["b", "g", "r", "c"]
myexplode = [0.6, 0, 0.4, 0]

plt.pie(y, labels=mylabels, explode=myexplode, shadow=True, colors=mycolors,
        autopct='%1.1f%%')
plt.legend(title="Test result types")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/0SChi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0SChi.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><legend><pie-chart>
|
2023-06-04 13:19:45
| 1
| 449
|
PChao
|
76,400,400
| 9,703,451
|
Copy arbitrary image formats (raster and vector) to clipboard with PyQt5
|
<p>I am having troubles copying arbitrary data types with PyQt5 and then pasting the content into another software.</p>
<p>Specifically I want to copy either raster-graphics (png, jpeg etc..) OR vector-graphics (svg, eps, pdf) from a PyQt5 application and then paste it in some other application (document or image editing software).</p>
<p>The code I use to copy raster images works perfectly fine and allows me to copy/paste images into arbitrary software.</p>
<ul>
<li>however, it copies a rasterized version of the image even if the format is a vector-graphics type</li>
</ul>
<p>It looks somewhat like this:</p>
<pre class="lang-py prettyprint-override"><code>import io
from PyQt5.QtWidgets import QApplication
from PyQt5.QtGui import QImage
...
with io.BytesIO() as buffer:
    self.savefig(buffer, format="png")
    cb = QApplication.clipboard()
    cb.setImage(QImage.fromData(buffer.getvalue()))
</code></pre>
<p>Now, to be able to copy vector graphics, I extended the above script to handle arbitrary mimetypes:</p>
<pre class="lang-py prettyprint-override"><code>import io
from PyQt5.QtCore import QMimeData
from PyQt5.QtWidgets import QApplication
...
with io.BytesIO() as buffer:
    self.savefig(buffer, format="svg")
    data = QMimeData()
    data.setData("image/svg+xml", buffer.getvalue())
    cb = QApplication.clipboard()
    cb.clear(mode=cb.Clipboard)
    cb.setMimeData(data, mode=cb.Clipboard)
</code></pre>
<p>Now this still works fine (both raster and vector) if I paste the copied data into software like Inkscape, but for some unknown reason I can no longer paste the data (no matter whether it is vector or raster) into other programs like LibreOffice Writer, MarkTex, Paint, etc.</p>
<p>Is there something I am missing in my code so that the copied data is recognized properly by all apps?</p>
<h3>Update</h3>
<hr />
<p>Here is a minimal working example to show the behavior:</p>
<p>My question comes down to:<br />
Why do the <code>copy png/svg with setData</code> buttons not work as expected?
Somehow images cannot be pasted into other apps (except for Inkscape).</p>
<pre class="lang-py prettyprint-override"><code>import io

from PyQt5 import QtWidgets
from PyQt5.QtCore import pyqtSlot, QMimeData
from PyQt5.QtWidgets import QApplication
from PyQt5.QtGui import QImage
import matplotlib.pyplot as plt


@pyqtSlot()
def button_pressed_png_set_image():
    f, ax = plt.subplots()
    ax.set_title("png copied with setImage")
    ax.plot([1, 2, 5, 4, 6, 3, 8], "b")
    with io.BytesIO() as buffer:
        f.savefig(buffer, format="png")
        cb = QApplication.clipboard()
        cb.setImage(QImage.fromData(buffer.getvalue()))
    print("copied png to clipboard!")
    plt.close(f)


@pyqtSlot()
def button_pressed_svg_set_image():
    f, ax = plt.subplots()
    ax.set_title("svg copied with setImage")
    ax.plot([1, 2, 5, 4, 6, 3, 8], "r")
    with io.BytesIO() as buffer:
        f.savefig(buffer, format="svg")
        cb = QApplication.clipboard()
        cb.setImage(QImage.fromData(buffer.getvalue()))
    print("copied svg to clipboard!")
    plt.close(f)


@pyqtSlot()
def button_pressed_svg_set_data():
    with io.BytesIO() as buffer:
        f, ax = plt.subplots()
        ax.set_title("svg copied with setData")
        ax.plot([1, 2, 5, 4, 6, 3, 8], "b--")
        f.savefig(buffer, format="svg")
        data = QMimeData()
        data.setData("image/svg+xml", buffer.getvalue())
        cb = QApplication.clipboard()
        cb.clear(mode=cb.Clipboard)
        cb.setMimeData(data, mode=cb.Clipboard)
    print("copied svg to clipboard!")
    plt.close(f)


@pyqtSlot()
def button_pressed_png_set_data():
    with io.BytesIO() as buffer:
        f, ax = plt.subplots()
        ax.set_title("png copied with setData")
        ax.plot([1, 2, 5, 4, 6, 3, 8], "r--")
        f.savefig(buffer, format="png")
        data = QMimeData()
        data.setData("image/png", buffer.getvalue())
        cb = QApplication.clipboard()
        cb.clear(mode=cb.Clipboard)
        cb.setMimeData(data, mode=cb.Clipboard)
    print("copied png to clipboard!")
    plt.close(f)


if __name__ == "__main__":
    app = QtWidgets.QApplication.instance()
    if app is None:
        app = QtWidgets.QApplication([])
    window = QtWidgets.QMainWindow()

    button = QtWidgets.QPushButton("copy png with setImage")
    button.pressed.connect(button_pressed_png_set_image)
    button2 = QtWidgets.QPushButton("copy svg with setImage")
    button2.pressed.connect(button_pressed_svg_set_image)
    button3 = QtWidgets.QPushButton("copy png with setData")
    button3.pressed.connect(button_pressed_png_set_data)
    button4 = QtWidgets.QPushButton("copy svg with setData")
    button4.pressed.connect(button_pressed_svg_set_data)

    layout = QtWidgets.QVBoxLayout()
    layout.addWidget(button)
    layout.addWidget(button2)
    layout.addWidget(button3)
    layout.addWidget(button4)

    widget = QtWidgets.QWidget()
    widget.setLayout(layout)
    window.setCentralWidget(widget)
    window.show()
    app.exec_()  # start the Qt event loop so the window stays open
</code></pre>
|
<python><pyqt><pyqt5>
|
2023-06-04 12:22:50
| 0
| 3,179
|
raphael
|
76,400,322
| 6,357,916
|
Not able to change IST to UTC
|
<p>The current IST (Asia/Kolkata) time is 5:23 PM and UTC is 11:53 AM on the same day.
I want to convert IST to UTC. I tried the following:</p>
<pre><code>from dateutil import tz
from datetime import datetime
from_zone = tz.gettz('IST') # also tried tz.gettz('Asia/Kolkata')
to_zone = tz.gettz('UTC')
t = '2023-06-04 05:23'
t = datetime.strptime(t, '%Y-%m-%d %H:%M')
t = t.replace(tzinfo=from_zone)
t = t.astimezone(to_zone)
print(t.strftime('%Y-%m-%d %H:%M'))
</code></pre>
<p>The above still prints <code>'2023-06-04 05:23'</code>; that is, no time zone conversion happened. Why is this so?</p>
<p>Here is the screenshot of run:
<a href="https://i.sstatic.net/YSdEv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YSdEv.png" alt="enter image description here" /></a></p>
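<p>For comparison, the following sketch converts correctly on my reading of the problem: <code>tz.gettz('IST')</code> can return None (silently leaving the datetime naive), and 5:23 PM has to be written as 17:23 in 24-hour format:</p>

```python
from datetime import datetime
from dateutil import tz

# 'IST' is ambiguous (India, Israel and Ireland all abbreviate to IST), and
# tz.gettz returns None for names it cannot resolve, so the full IANA name
# is the safe choice; the assert guards against a silently-naive datetime.
from_zone = tz.gettz('Asia/Kolkata')
assert from_zone is not None
to_zone = tz.tzutc()

t = datetime.strptime('2023-06-04 17:23', '%Y-%m-%d %H:%M')  # 5:23 PM
t = t.replace(tzinfo=from_zone).astimezone(to_zone)
print(t.strftime('%Y-%m-%d %H:%M'))  # → 2023-06-04 11:53
```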
|
<python><django>
|
2023-06-04 12:02:24
| 0
| 3,029
|
MsA
|
76,400,151
| 5,201,707
|
Python Web Scraping: Unable to Log into https://www.vet-ebooks.com/ for Downloading Books
|
<p>I have been trying to iterate through a website's book listing using Python. The website is <a href="https://www.vet-ebooks.com/" rel="nofollow noreferrer">https://www.vet-ebooks.com/</a>. The issue is that I need to log in before downloading a book, but logging in with Python is not working. The initial code I wrote is:</p>
<pre><code>import requests

# Start a session
session = requests.Session()

# Define the login data
login_payload = {
    'log': 'test',
    'pwd': 'test',
    'rememberme': 'forever',
    'ihcaction': 'login',
    'ihc_login_nonce': '52af5a580a'
}

# Send a POST request to the login page
login_req = session.post('https://www.vet-ebooks.com/user-login/', data=login_payload)

# Check if login was successful
if login_req.status_code == 200:
    print("Login successful")
else:
    print("Login failed")
</code></pre>
<p>However, although it reports that the login was successful, it did not actually work, so I switched my approach to Selenium.</p>
<pre><code># Selenium 4
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

chrome_options = webdriver.ChromeOptions()
chrome_options.set_capability('browserless:token', 'TOKENHERE')
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--headless")
driver = webdriver.Remote(
    command_executor='https://chrome.browserless.io/webdriver',
    options=chrome_options
)
driver.get("https://www.vet-ebooks.com/user-login/")

# replace with your username and password
username = "testusername"
password = "testpasshere"
wait = WebDriverWait(driver, 10)

# again, get the login page and set the cookies
driver.get('https://www.vet-ebooks.com/user-login/')

# find username field and enter username
username_field = wait.until(EC.presence_of_element_located((By.NAME, "log")))
username_field.clear()
username_field.send_keys(username)

# find password field and enter password
password_field = wait.until(EC.presence_of_element_located((By.NAME, "pwd")))
password_field.clear()
password_field.send_keys(password)

# find login button and click it
login_button = wait.until(EC.presence_of_element_located((By.NAME, "Submit")))
login_button.click()

# Print the redirected URL
redirected_url = driver.current_url
print("Redirected URL:", redirected_url)

driver.quit()
</code></pre>
<p>But this code does not work either; <code>redirected_url</code> stays at /user-login/. Can anyone suggest how this issue can be resolved?</p>
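<p>One thing I suspect in the requests attempt is the hard-coded <code>ihc_login_nonce</code>: WordPress-style nonces are regenerated per page load, so a stale value would make the POST fail even with a 200 response. A regex sketch for pulling the current value out of the login page's HTML (the sample markup is my assumption; the real page may differ):</p>

```python
import re

# In practice session.get('https://www.vet-ebooks.com/user-login/').text
# would supply the HTML; this sample stands in for the hidden input field.
sample_html = '<input type="hidden" name="ihc_login_nonce" value="52af5a580a">'
match = re.search(r'name="ihc_login_nonce"\s+value="([^"]+)"', sample_html)
nonce = match.group(1) if match else None
print(nonce)  # → 52af5a580a
```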
|
<python><selenium-webdriver><web-scraping><request>
|
2023-06-04 11:27:33
| 2
| 429
|
shzyincu
|
76,400,129
| 3,008,221
|
Reversing a linked list in python: What is wrong in my code?
|
<p>I am trying to reverse a linked list in python. Below is my code which doesn't work (gives an error of 'NoneType' object has no attribute 'next' at the while statement):</p>
<pre><code>class Solution:
    def reverseList(self, head: Optional[ListNode]) -> Optional[ListNode]:
        currentNode = head
        previousNode = None
        while currentNode.next is not None:
            temp = currentNode.next
            currentNode.next = previousNode
            previousNode = currentNode
            currentNode = temp
        currentNode.next = previousNode
        return currentNode
</code></pre>
<p>My idea is that when the loop reaches the last node in the list (the one whose <code>next</code> is None), it exits the while loop; then, outside the loop, I point that last node to the previous one and return it as the head of the reversed list.</p>
<p>What is wrong in the above code and how can I fix it?</p>
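<p>For reference, the usual iterative pattern loops while the node itself (not its <code>next</code>) is non-None; that removes both the crash on an empty list and the pointer fix-up after the loop. A sketch with a minimal <code>ListNode</code> stand-in:</p>

```python
from typing import Optional

class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def reverse_list(head: Optional[ListNode]) -> Optional[ListNode]:
    previous_node = None
    current_node = head
    # Looping on the node itself means an empty list (head is None) simply
    # skips the loop, and no extra rewiring is needed afterwards.
    while current_node is not None:
        temp = current_node.next
        current_node.next = previous_node
        previous_node = current_node
        current_node = temp
    return previous_node

# 1 -> 2 -> 3 reversed yields 3 -> 2 -> 1
node = reverse_list(ListNode(1, ListNode(2, ListNode(3))))
values = []
while node:
    values.append(node.val)
    node = node.next
print(values)  # → [3, 2, 1]
```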
|
<python><python-3.x><linked-list><singly-linked-list>
|
2023-06-04 11:20:04
| 1
| 433
|
Aly
|
76,400,054
| 3,247,006
|
"DATETIME_INPUT_FORMATS" doesn't work in Django Admin while "DATE_INPUT_FORMATS" and "TIME_INPUT_FORMATS" do
|
<p>I use <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#datetimefield" rel="nofollow noreferrer">DateTimeField()</a>, <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#datefield" rel="nofollow noreferrer">DateField()</a> and <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#timefield" rel="nofollow noreferrer">TimeField()</a> in <code>MyModel</code> class as shown below. *I use <strong>Django 4.2.1</strong>:</p>
<pre class="lang-py prettyprint-override"><code># "models.py"
from django.db import models

class MyModel(models.Model):
    datetime = models.DateTimeField() # Here
    date = models.DateField()         # Here
    time = models.TimeField()         # Here
</code></pre>
<p>Then, I set <a href="https://docs.djangoproject.com/en/4.2/ref/settings/#date-input-formats" rel="nofollow noreferrer">DATE_INPUT_FORMATS</a> and <a href="https://docs.djangoproject.com/en/4.2/ref/settings/#time-input-formats" rel="nofollow noreferrer">TIME_INPUT_FORMATS</a> and set <a href="https://docs.djangoproject.com/en/4.2/ref/settings/#use-l10n" rel="nofollow noreferrer">USE_L10N</a> <code>False</code> to make <code>DATE_INPUT_FORMATS</code> and <code>TIME_INPUT_FORMATS</code> work in <code>settings.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "settings.py"
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = False # Here
USE_TZ = True
DATE_INPUT_FORMATS = ["%m/%d/%Y"] # '10/25/2023' # Here
TIME_INPUT_FORMATS = ["%H:%M"] # '14:30' # Here
</code></pre>
<p>Then, <code>DATE_INPUT_FORMATS</code> and <code>TIME_INPUT_FORMATS</code> work in Django Admin as shown below:</p>
<p><a href="https://i.sstatic.net/xP3tQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xP3tQ.png" alt="enter image description here" /></a></p>
<p>Next, I set <a href="https://docs.djangoproject.com/en/4.2/ref/settings/#std-setting-DATETIME_INPUT_FORMATS" rel="nofollow noreferrer">DATETIME_INPUT_FORMATS</a> and set <code>USE_L10N</code> <code>False</code> to make <code>DATETIME_INPUT_FORMATS</code> work in <code>settings.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "settings.py"
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = False # Here
USE_TZ = True
DATETIME_INPUT_FORMATS = ["%m/%d/%Y %H:%M"] # '10/25/2023 14:30'
</code></pre>
<p>But, <code>DATETIME_INPUT_FORMATS</code> does not work in Django Admin as shown below:</p>
<p><a href="https://i.sstatic.net/pbZKo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pbZKo.png" alt="enter image description here" /></a></p>
<p>In addition, from a <code>MyModel</code> object, I get and print <code>datetime</code>, <code>date</code> and <code>time</code> and pass them to <code>index.html</code> in <code>test()</code> in <code>views.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "views.py"
from django.shortcuts import render
from .models import MyModel

def test(request):
    obj = MyModel.objects.all()[0]
    print(obj.datetime)
    print(obj.date)
    print(obj.time)
    return render(
        request,
        'index.html',
        {"datetime": obj.datetime, "date": obj.date, "time": obj.time}
    )
</code></pre>
<p>But none of <code>DATE_INPUT_FORMATS</code>, <code>TIME_INPUT_FORMATS</code> and <code>DATETIME_INPUT_FORMATS</code> take effect in the console output, as shown below:</p>
<pre class="lang-none prettyprint-override"><code>2023-10-25 14:30:15+00:00
2023-10-25
14:30:15
</code></pre>
<p>Next, I show <code>datetime</code>, <code>date</code> and <code>time</code> in <code>index.html</code> as shown below:</p>
<pre><code># "index.html"
{{ datetime }}<br/>
{{ date }}<br/>
{{ time }}
</code></pre>
<p>But none of <code>DATE_INPUT_FORMATS</code>, <code>TIME_INPUT_FORMATS</code> and <code>DATETIME_INPUT_FORMATS</code> take effect in the browser output, as shown below:</p>
<pre class="lang-none prettyprint-override"><code>Oct. 25, 2023, 2:30 p.m.
Oct. 25, 2023
2:30 p.m.
</code></pre>
<p>My questions:</p>
<ol>
<li>How can I make <code>DATETIME_INPUT_FORMATS</code> work in Django Admin?</li>
<li>Why doesn't <code>DATETIME_INPUT_FORMATS</code> work in Django Admin?</li>
<li>If <code>DATETIME_INPUT_FORMATS</code> doesn't work in Django Admin, where does <code>DATETIME_INPUT_FORMATS</code> work?</li>
</ol>
|
<python><django><django-models><django-admin><datetime-format>
|
2023-06-04 11:01:26
| 0
| 42,516
|
Super Kai - Kazuya Ito
|
76,399,664
| 6,357,916
|
Checking if varchar column value containing IP address is in subnet
|
<p>In SQL (tried in pgAdmin) I am able to do</p>
<pre><code>SELECT * FROM ...
WHERE inet(s.ip_address) << inet '10.0.0.2/24'
</code></pre>
<p>I tried to convert it to django ORM code which contained following:</p>
<pre><code>... .filter(ip_address__net_contained=subnet_mask)
</code></pre>
<p>But I am getting</p>
<pre><code>Unsupported lookup 'net_contained' for CharField
</code></pre>
<p>I understand that this is because <code>s.ip_address</code> is of type varchar. In SQL, I am explicitly converting it to IP using <code>inet()</code>. How do I do the same in the Django ORM? Does <a href="https://github.com/jimfunk/django-postgresql-netfields/issues/24" rel="nofollow noreferrer">this</a> thread say it is unsupported?</p>
|
<python><sql><django><postgresql><django-rest-framework>
|
2023-06-04 09:12:37
| 1
| 3,029
|
MsA
|
76,399,565
| 5,980,143
|
sqlalchemy - alembic run update query without specifying model to avoid later migrations clash
|
<p>I am adding a field to my table using Alembic.<br>
I am adding the field <code>last_name</code> and filling it with data using the <code>do_some_processing</code> function, which loads the data for the field from another source.<br></p>
<p>This is the table model, I added the field <code>last_name</code> to the model</p>
<pre><code>class MyTable(db.Model):
    __tablename__ = "my_table"

    index = db.Column(db.Integer, primary_key=True, nullable=False)
    age = db.Column(db.Integer(), default=0)
    first_name = db.Column(db.String(100), nullable=False)
    last_name = db.Column(db.String(100), nullable=False)
</code></pre>
<p>Here is my migration which works well</p>
<pre><code># migration_add_last_name_field
op.add_column('my_table', sa.Column('last_name', sa.String(length=100), nullable=True))

values = session.query(MyTable).filter(MyTable.age == 5).all()
for value in values:
    first_name = value.first_name
    value.last_name = do_some_processing(first_name)
session.commit()
</code></pre>
<p>The issue is that using <code>session.query(MyTable)</code> causes problems in future migrations.</p>
<p>For example, suppose I later add a migration that adds a field <code>foo</code> to the table and add that field to <code>class MyTable</code>.
On an environment that is not yet up to date, running <code>migration_add_last_name_field</code> then fails:</p>
<pre><code>sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError)
(1054, "Unknown column 'my_table.foo' in 'field list'")
[SQL: SELECT my_table.`index` AS my_table_index, my_table.first_name AS my_table_first_name,
my_table.last_name AS my_table_last_name, my_table.foo AS my_table_foo
FROM my_table
WHERE my_table.age = %s]
[parameters: (0,)]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
</code></pre>
<p>since the migration that adds <code>foo</code> runs only after, but <code>session.query(MyTable)</code> takes all the fields in <code>MyTable</code> model including <code>foo</code>. <br></p>
<p>I am trying to do the update without selecting all fields to avoid selecting fields that were not created yet, like this:</p>
<pre><code>op.add_column('my_table', sa.Column('last_name', sa.String(length=100), nullable=True))

values = session.query(MyTable.last_name, MyTable.first_name).filter(MyTable.age == 0).all()
for value in values:
    first_name = value.first_name
    value.last_name = do_some_processing(first_name)
session.commit()
</code></pre>
<p>But this results in an error: <code>can't set attribute</code>.</p>
<p>I also tried different variations of <code>select *</code>, with no success.<br>
What is the correct solution?</p>
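<p>One pattern I've seen recommended (a sketch, untested against this schema; column types are assumptions) is a migration-local lightweight table built with <code>sa.table</code>/<code>sa.column</code>, so the migration never imports the evolving <code>MyTable</code> model and only ever references the columns it declares:</p>

```python
import sqlalchemy as sa

# Migration-local "lite" table: just the columns this migration touches,
# frozen at the state the schema has when this migration runs.
my_table = sa.table(
    "my_table",
    sa.column("index", sa.Integer),
    sa.column("age", sa.Integer),
    sa.column("first_name", sa.String),
    sa.column("last_name", sa.String),
)

# Inside upgrade(), op.get_bind() would supply the connection for these
# statements (sa.select(col, col) is 1.4+ style; 1.3 takes a list instead).
select_stmt = sa.select(my_table.c.index, my_table.c.first_name).where(my_table.c.age == 5)
update_stmt = (
    sa.update(my_table)
    .where(my_table.c.index == sa.bindparam("row_index"))
    .values(last_name=sa.bindparam("new_last_name"))
)
print(select_stmt)
```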
|
<python><sqlalchemy><alembic>
|
2023-06-04 08:41:24
| 2
| 4,369
|
dina
|
76,399,513
| 12,863,331
|
Getting the error message 'Getting requirements to build wheel did not run successfully.' when trying to install rpy2
|
<p>Running the command <code>pip install rpy2</code> results in the following output and error message:</p>
<pre><code>Collecting rpy2
Using cached rpy2-3.5.12.tar.gz (217 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [28 lines of output]
Traceback (most recent call last):
File "c:\users\97254\working_project\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "c:\users\97254\working_project\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "c:\users\97254\working_project\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\97254\AppData\Local\Temp\pip-build-env-b21o8iu5\overlay\Lib\site-packages\setuptools\build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "C:\Users\97254\AppData\Local\Temp\pip-build-env-b21o8iu5\overlay\Lib\site-packages\setuptools\build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "C:\Users\97254\AppData\Local\Temp\pip-build-env-b21o8iu5\overlay\Lib\site-packages\setuptools\build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 124, in <module>
File "<string>", line 110, in get_r_c_extension_status
File "./rpy2/situation.py", line 295, in get_r_flags
_get_r_cmd_config(r_home, flags,
File "./rpy2/situation.py", line 255, in _get_r_cmd_config
output = subprocess.check_output(
File "C:\Users\97254\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 420, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "C:\Users\97254\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 501, in run
with Popen(*popenargs, **kwargs) as process:
File "C:\Users\97254\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 947, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\97254\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 1416, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
</code></pre>
<p>I uninstalled and reinstalled R and its associated directories following the directions in another Stack Overflow post, and got the same error. I also tried installing a previous version of rpy2, with the same result.</p>
<p>If anyone has an idea of how to resolve this error, I'd be glad to hear it.<br />
Thanks.</p>
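<p>Reading the traceback, the <code>FileNotFoundError</code> is raised while rpy2's setup shells out to <code>R CMD config</code>, so my first check would be whether the R executable is on PATH for the shell that runs pip. A quick probe:</p>

```python
import shutil

# rpy2's build step runs R via subprocess; WinError 2 ("The system cannot
# find the file specified") is the classic symptom of R.exe not being on
# PATH in the environment pip runs in.
r_path = shutil.which("R")
print(r_path)  # None here means pip's subprocess cannot find R either
```

<p>On Windows, adding R's <code>bin</code> directory to PATH (or setting <code>R_HOME</code>) and restarting the shell is, as far as I know, the usual fix.</p>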
|
<python><r><rpy2>
|
2023-06-04 08:21:44
| 5
| 304
|
random
|
76,399,483
| 13,916,049
|
IndexError: boolean index did not match indexed array along dimension 0 despite identical shape
|
<p>I want to retrieve the features selected by forward sequential selection. Both the <code>feature_names.shape</code> and <code>sfs_forward.get_support().shape</code> are <code>(1185,)</code> but my code raised <code>IndexError: boolean index did not match indexed array along dimension 0; dimension is 6 but corresponding boolean dimension is 1185</code>.</p>
<pre><code>from time import time

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SequentialFeatureSelector

X = df.iloc[:, :-1]
y = df.T.loc["subtype", :]
scaler = StandardScaler().fit(X)
X_scaled = scaler.transform(X)
X_scaled = X_scaled[~np.isnan(X_scaled).all(axis=1)]
feature_names = np.array(X.columns)

# ridge and y_transformed are defined earlier in my notebook
tic_fwd = time()
sfs_forward = SequentialFeatureSelector(
    ridge, n_features_to_select=2, direction="forward"
).fit(X_scaled, y_transformed)
toc_fwd = time()
print(
    "Features selected by forward sequential selection: "
    f"{feature_names[sfs_forward.get_support()]}"
)
print(f"Done in {toc_fwd - tic_fwd:.3f}s")
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Input In [27], in <cell line: 7>()
2 sfs_forward = SequentialFeatureSelector(
3 ridge, n_features_to_select=2, direction="forward"
4 ).fit(X_scaled, y_transformed)
5 toc_fwd = time()
7 print(
8 "Features selected by forward sequential selection: "
----> 9 f"{feature_names[sfs_forward.get_support()]}"
10 )
11 print(f"Done in {toc_fwd - tic_fwd:.3f}s")
IndexError: boolean index did not match indexed array along dimension 0; dimension is 6 but corresponding boolean dimension is 1185
</code></pre>
<p>feature_names.shape</p>
<pre><code>(1185,)
</code></pre>
<p>sfs_forward.get_support().shape</p>
<pre><code>(1185,)
</code></pre>
<p>Data:</p>
<p><code>df.iloc[0:20,0:20]</code></p>
<pre><code>pd.DataFrame({'AAAS': {'TCGA.2K.A9WE.01A': 3.33370417925201,
'TCGA.2Z.A9J1.01A': 3.34925376425734,
'TCGA.2Z.A9J3.01A': 3.30349695880794,
'TCGA.2Z.A9J5.01A': 3.38096309720271,
'TCGA.2Z.A9J6.01A': 3.36824732623076,
'TCGA.2Z.A9J7.01A': 3.38620967759704,
'TCGA.2Z.A9J8.01A': 3.31614461640547,
'TCGA.2Z.A9JD.01A': 3.24948718071353,
'TCGA.2Z.A9JI.01A': 3.31237602614007,
'TCGA.2Z.A9JJ.01A': 3.25339435578418,
'TCGA.2Z.A9JO.01A': 3.2546210341103,
'TCGA.2Z.A9JQ.01A': 3.30663948227448,
'TCGA.4A.A93W.01A': 3.32680623499096,
'TCGA.4A.A93X.01A': 3.36733999973516,
'TCGA.4A.A93Y.01A': 3.2308076918096,
'TCGA.5P.A9JU.01A': 3.34943784290226,
'TCGA.5P.A9JY.01A': 3.37136358330499,
'TCGA.5P.A9KE.01A': 3.25515446854463,
'TCGA.A4.7288.01A': 3.24471275993211,
'TCGA.A4.7583.01A': 3.28988441955171},
'AARSD1': {'TCGA.2K.A9WE.01A': 3.26191613710211,
'TCGA.2Z.A9J1.01A': 3.32510595062856,
'TCGA.2Z.A9J3.01A': 3.33666480834738,
'TCGA.2Z.A9J5.01A': 3.38548662709543,
'TCGA.2Z.A9J6.01A': 3.31393112898833,
'TCGA.2Z.A9J7.01A': 3.3844029363488,
'TCGA.2Z.A9J8.01A': 3.27589892329404,
'TCGA.2Z.A9JD.01A': 3.3066208734883,
'TCGA.2Z.A9JI.01A': 3.32080678472723,
'TCGA.2Z.A9JJ.01A': 3.20905567146907,
'TCGA.2Z.A9JO.01A': 3.31482810573583,
'TCGA.2Z.A9JQ.01A': 3.52527394202135,
'TCGA.4A.A93W.01A': 3.35791864748117,
'TCGA.4A.A93X.01A': 3.37992172225646,
'TCGA.4A.A93Y.01A': 3.18332212236444,
'TCGA.5P.A9JU.01A': 3.25702110554208,
'TCGA.5P.A9JY.01A': 3.40073840803568,
'TCGA.5P.A9KE.01A': 3.37357419609698,
'TCGA.A4.7288.01A': 3.20843689886653,
'TCGA.A4.7583.01A': 3.17575870968208},
'AASS': {'TCGA.2K.A9WE.01A': 2.98072012253114,
'TCGA.2Z.A9J1.01A': 3.05313491041203,
'TCGA.2Z.A9J3.01A': 3.08982082695446,
'TCGA.2Z.A9J5.01A': 3.00519898142942,
'TCGA.2Z.A9J6.01A': 3.28265423939019,
'TCGA.2Z.A9J7.01A': 3.29249451483352,
'TCGA.2Z.A9J8.01A': 3.18509958679916,
'TCGA.2Z.A9JD.01A': 3.2315468449309,
'TCGA.2Z.A9JI.01A': 3.24916735666473,
'TCGA.2Z.A9JJ.01A': 3.1258891511562,
'TCGA.2Z.A9JO.01A': 3.04486585947894,
'TCGA.2Z.A9JQ.01A': 3.23762408943738,
'TCGA.4A.A93W.01A': 3.21564519349256,
'TCGA.4A.A93X.01A': 3.14471380201247,
'TCGA.4A.A93Y.01A': 2.86957141026716,
'TCGA.5P.A9JU.01A': 3.31900309673507,
'TCGA.5P.A9JY.01A': 3.47228157297442,
'TCGA.5P.A9KE.01A': 3.25224311787039,
'TCGA.A4.7288.01A': 3.37181034609328,
'TCGA.A4.7583.01A': 2.97624235592413},
'ABHD11': {'TCGA.2K.A9WE.01A': 3.45442444300391,
'TCGA.2Z.A9J1.01A': 3.63034209087263,
'TCGA.2Z.A9J3.01A': 3.70172491168742,
'TCGA.2Z.A9J5.01A': 3.57422385519763,
'TCGA.2Z.A9J6.01A': 3.58687437256901,
'TCGA.2Z.A9J7.01A': 3.71221697435946,
'TCGA.2Z.A9J8.01A': 3.37728221732592,
'TCGA.2Z.A9JD.01A': 3.46371103636162,
'TCGA.2Z.A9JI.01A': 3.28426031477005,
'TCGA.2Z.A9JJ.01A': 3.30783066520687,
'TCGA.2Z.A9JO.01A': 3.29006167425864,
'TCGA.2Z.A9JQ.01A': 3.73402375932904,
'TCGA.4A.A93W.01A': 3.59371860683494,
'TCGA.4A.A93X.01A': 3.45704018264931,
'TCGA.4A.A93Y.01A': 3.08886551011693,
'TCGA.5P.A9JU.01A': 3.41635876686342,
'TCGA.5P.A9JY.01A': 3.59970761273578,
'TCGA.5P.A9KE.01A': 3.53254635077263,
'TCGA.A4.7288.01A': 3.41278712830029,
'TCGA.A4.7583.01A': 3.42935144956267},
'ABHD14A': {'TCGA.2K.A9WE.01A': 3.23147506581216,
'TCGA.2Z.A9J1.01A': 3.24014574406411,
'TCGA.2Z.A9J3.01A': 3.27383206099273,
'TCGA.2Z.A9J5.01A': 3.43484025303272,
'TCGA.2Z.A9J6.01A': 3.33930301693651,
'TCGA.2Z.A9J7.01A': 3.27533712036305,
'TCGA.2Z.A9J8.01A': 3.21653166674538,
'TCGA.2Z.A9JD.01A': 3.21643407871312,
'TCGA.2Z.A9JI.01A': 3.00036464916413,
'TCGA.2Z.A9JJ.01A': 3.11196153347818,
'TCGA.2Z.A9JO.01A': 3.16294707051804,
'TCGA.2Z.A9JQ.01A': 3.39978790688454,
'TCGA.4A.A93W.01A': 3.31204401612052,
'TCGA.4A.A93X.01A': 3.28175308875134,
'TCGA.4A.A93Y.01A': 3.12689338571482,
'TCGA.5P.A9JU.01A': 3.2136834012932,
'TCGA.5P.A9JY.01A': 3.41998236517117,
'TCGA.5P.A9KE.01A': 3.28578845771729,
'TCGA.A4.7288.01A': 3.11798185586533,
'TCGA.A4.7583.01A': 3.12518643871903},
'ACADVL': {'TCGA.2K.A9WE.01A': 3.78280804969722,
'TCGA.2Z.A9J1.01A': 3.83518906941955,
'TCGA.2Z.A9J3.01A': 3.76545652291492,
'TCGA.2Z.A9J5.01A': 3.85661653659097,
'TCGA.2Z.A9J6.01A': 3.83510247236515,
'TCGA.2Z.A9J7.01A': 3.84184572618095,
'TCGA.2Z.A9J8.01A': 3.69669745512717,
'TCGA.2Z.A9JD.01A': 3.85849814535531,
'TCGA.2Z.A9JI.01A': 3.53745376464629,
'TCGA.2Z.A9JJ.01A': 3.70779377411552,
'TCGA.2Z.A9JO.01A': 3.73824585940869,
'TCGA.2Z.A9JQ.01A': 3.82111655432968,
'TCGA.4A.A93W.01A': 3.74776019274385,
'TCGA.4A.A93X.01A': 3.73967865238708,
'TCGA.4A.A93Y.01A': 3.6096519713675,
'TCGA.5P.A9JU.01A': 3.78368869116589,
'TCGA.5P.A9JY.01A': 3.85963550251591,
'TCGA.5P.A9KE.01A': 3.84480066470514,
'TCGA.A4.7288.01A': 3.77059403979435,
'TCGA.A4.7583.01A': 3.72736487982992},
'ACAP3': {'TCGA.2K.A9WE.01A': 3.32954742137758,
'TCGA.2Z.A9J1.01A': 3.41541368621735,
'TCGA.2Z.A9J3.01A': 3.36566885760338,
'TCGA.2Z.A9J5.01A': 3.33905458589913,
'TCGA.2Z.A9J6.01A': 3.38268360006463,
'TCGA.2Z.A9J7.01A': 3.34139213854099,
'TCGA.2Z.A9J8.01A': 3.17685206178369,
'TCGA.2Z.A9JD.01A': 3.32621206175931,
'TCGA.2Z.A9JI.01A': 3.24132329884539,
'TCGA.2Z.A9JJ.01A': 3.30200263604801,
'TCGA.2Z.A9JO.01A': 3.20113494357782,
'TCGA.2Z.A9JQ.01A': 3.35158056517227,
'TCGA.4A.A93W.01A': 3.33261563288546,
'TCGA.4A.A93X.01A': 3.1966941080367,
'TCGA.4A.A93Y.01A': 3.21085184816322,
'TCGA.5P.A9JU.01A': 3.26776798309195,
'TCGA.5P.A9JY.01A': 3.42466062536747,
'TCGA.5P.A9KE.01A': 3.3650830203683,
'TCGA.A4.7288.01A': 3.29758758210549,
'TCGA.A4.7583.01A': 3.26616973598172},
'ACBD4': {'TCGA.2K.A9WE.01A': 3.21956455097052,
'TCGA.2Z.A9J1.01A': 3.32785669741002,
'TCGA.2Z.A9J3.01A': 3.2418493958081,
'TCGA.2Z.A9J5.01A': 3.30161312786773,
'TCGA.2Z.A9J6.01A': 3.32693496452319,
'TCGA.2Z.A9J7.01A': 3.28751140605353,
'TCGA.2Z.A9J8.01A': 3.20938204942816,
'TCGA.2Z.A9JD.01A': 3.36919924778803,
'TCGA.2Z.A9JI.01A': 3.40181000812383,
'TCGA.2Z.A9JJ.01A': 3.13353744600034,
'TCGA.2Z.A9JO.01A': 3.1145673700832,
'TCGA.2Z.A9JQ.01A': 3.34807071937758,
'TCGA.4A.A93W.01A': 3.32255564941828,
'TCGA.4A.A93X.01A': 3.30076059153253,
'TCGA.4A.A93Y.01A': 3.03931699339692,
'TCGA.5P.A9JU.01A': 3.2826557489973,
'TCGA.5P.A9JY.01A': 3.37714647778826,
'TCGA.5P.A9KE.01A': 3.28917834179844,
'TCGA.A4.7288.01A': 3.28256155124117,
'TCGA.A4.7583.01A': 3.21775779894534},
'ACCS': {'TCGA.2K.A9WE.01A': 3.08010926641183,
'TCGA.2Z.A9J1.01A': 3.09519418745071,
'TCGA.2Z.A9J3.01A': 2.95156697395897,
'TCGA.2Z.A9J5.01A': 3.14871798236364,
'TCGA.2Z.A9J6.01A': 3.03283334713975,
'TCGA.2Z.A9J7.01A': 3.15683664831187,
'TCGA.2Z.A9J8.01A': 3.18228546185869,
'TCGA.2Z.A9JD.01A': 3.04138390728031,
'TCGA.2Z.A9JI.01A': 3.13518600021314,
'TCGA.2Z.A9JJ.01A': 3.22051322331053,
'TCGA.2Z.A9JO.01A': 3.08699890758702,
'TCGA.2Z.A9JQ.01A': 3.06401024012122,
'TCGA.4A.A93W.01A': 3.10037474872627,
'TCGA.4A.A93X.01A': 2.95016687098414,
'TCGA.4A.A93Y.01A': 2.95666547736711,
'TCGA.5P.A9JU.01A': 3.13587768187161,
'TCGA.5P.A9JY.01A': 3.25054412692135,
'TCGA.5P.A9KE.01A': 3.11025888222529,
'TCGA.A4.7288.01A': 3.11711368678687,
'TCGA.A4.7583.01A': 3.2006222452079},
'ACD': {'TCGA.2K.A9WE.01A': 3.19213021131488,
'TCGA.2Z.A9J1.01A': 3.20320857814628,
'TCGA.2Z.A9J3.01A': 3.27909112568773,
'TCGA.2Z.A9J5.01A': 3.26095012781228,
'TCGA.2Z.A9J6.01A': 3.28871557525255,
'TCGA.2Z.A9J7.01A': 3.29586744440765,
'TCGA.2Z.A9J8.01A': 3.08874481336118,
'TCGA.2Z.A9JD.01A': 2.97338991241771,
'TCGA.2Z.A9JI.01A': 3.1370575129655,
'TCGA.2Z.A9JJ.01A': 3.20822079478193,
'TCGA.2Z.A9JO.01A': 3.06189501848873,
'TCGA.2Z.A9JQ.01A': 3.3430789326483,
'TCGA.4A.A93W.01A': 3.23304663525791,
'TCGA.4A.A93X.01A': 3.18568664760788,
'TCGA.4A.A93Y.01A': 3.13147026186541,
'TCGA.5P.A9JU.01A': 3.14939442580704,
'TCGA.5P.A9JY.01A': 3.32606932827815,
'TCGA.5P.A9KE.01A': 3.22704497470063,
'TCGA.A4.7288.01A': 3.18050836413849,
'TCGA.A4.7583.01A': 3.24419966423062},
'ACIN1': {'TCGA.2K.A9WE.01A': 3.52466908578752,
'TCGA.2Z.A9J1.01A': 3.49823196362438,
'TCGA.2Z.A9J3.01A': 3.5384640251653,
'TCGA.2Z.A9J5.01A': 3.52901663287839,
'TCGA.2Z.A9J6.01A': 3.54970618687924,
'TCGA.2Z.A9J7.01A': 3.51153640599344,
'TCGA.2Z.A9J8.01A': 3.51818510120471,
'TCGA.2Z.A9JD.01A': 3.45115379307268,
'TCGA.2Z.A9JI.01A': 3.4771783753439,
'TCGA.2Z.A9JJ.01A': 3.52002142731939,
'TCGA.2Z.A9JO.01A': 3.47275579005186,
'TCGA.2Z.A9JQ.01A': 3.47428336826483,
'TCGA.4A.A93W.01A': 3.56033538819405,
'TCGA.4A.A93X.01A': 3.50976760204467,
'TCGA.4A.A93Y.01A': 3.49973868495901,
'TCGA.5P.A9JU.01A': 3.53737918710136,
'TCGA.5P.A9JY.01A': 3.60706135661057,
'TCGA.5P.A9KE.01A': 3.4657077451802,
'TCGA.A4.7288.01A': 3.45293350052245,
'TCGA.A4.7583.01A': 3.40895204511546},
'ACOT8': {'TCGA.2K.A9WE.01A': 3.27349420687854,
'TCGA.2Z.A9J1.01A': 3.27165000646019,
'TCGA.2Z.A9J3.01A': 3.12574135882714,
'TCGA.2Z.A9J5.01A': 3.24545320135566,
'TCGA.2Z.A9J6.01A': 3.13323469818501,
'TCGA.2Z.A9J7.01A': 3.2594849047236,
'TCGA.2Z.A9J8.01A': 3.18167802281569,
'TCGA.2Z.A9JD.01A': 3.10346157568027,
'TCGA.2Z.A9JI.01A': 3.09334519884808,
'TCGA.2Z.A9JJ.01A': 3.04072639178277,
'TCGA.2Z.A9JO.01A': 3.16322186930956,
'TCGA.2Z.A9JQ.01A': 3.28412005110336,
'TCGA.4A.A93W.01A': 3.2501750422424,
'TCGA.4A.A93X.01A': 3.36050511144836,
'TCGA.4A.A93Y.01A': 3.1401191871727,
'TCGA.5P.A9JU.01A': 3.17911015544583,
'TCGA.5P.A9JY.01A': 3.28520144124094,
'TCGA.5P.A9KE.01A': 3.14282311422587,
'TCGA.A4.7288.01A': 3.01050928041973,
'TCGA.A4.7583.01A': 3.12532635195293},
'ACSL3': {'TCGA.2K.A9WE.01A': 3.31134536026302,
'TCGA.2Z.A9J1.01A': 3.35712910880047,
'TCGA.2Z.A9J3.01A': 3.30632285397452,
'TCGA.2Z.A9J5.01A': 3.27441467093657,
'TCGA.2Z.A9J6.01A': 3.25938746147772,
'TCGA.2Z.A9J7.01A': 3.20057626573749,
'TCGA.2Z.A9J8.01A': 3.34746436020073,
'TCGA.2Z.A9JD.01A': 3.28640782993935,
'TCGA.2Z.A9JI.01A': 3.31470578824798,
'TCGA.2Z.A9JJ.01A': 3.36066736921985,
'TCGA.2Z.A9JO.01A': 3.34372487268323,
'TCGA.2Z.A9JQ.01A': 3.22159484589212,
'TCGA.4A.A93W.01A': 3.2573328000576,
'TCGA.4A.A93X.01A': 3.17519813374988,
'TCGA.4A.A93Y.01A': 3.35098384485933,
'TCGA.5P.A9JU.01A': 3.18530672589519,
'TCGA.5P.A9JY.01A': 3.15541603600036,
'TCGA.5P.A9KE.01A': 3.30879900337964,
'TCGA.A4.7288.01A': 3.28347444017644,
'TCGA.A4.7583.01A': 3.4451627404573},
'ACTR2': {'TCGA.2K.A9WE.01A': 3.61231510694424,
'TCGA.2Z.A9J1.01A': 3.52383957062255,
'TCGA.2Z.A9J3.01A': 3.53563667297091,
'TCGA.2Z.A9J5.01A': 3.52479258838548,
'TCGA.2Z.A9J6.01A': 3.47510625125214,
'TCGA.2Z.A9J7.01A': 3.5161992464569,
'TCGA.2Z.A9J8.01A': 3.61256790636486,
'TCGA.2Z.A9JD.01A': 3.55425150651787,
'TCGA.2Z.A9JI.01A': 3.62664331413751,
'TCGA.2Z.A9JJ.01A': 3.64993947223951,
'TCGA.2Z.A9JO.01A': 3.54973675800032,
'TCGA.2Z.A9JQ.01A': 3.38380496795406,
'TCGA.4A.A93W.01A': 3.53646676734563,
'TCGA.4A.A93X.01A': 3.55261699745398,
'TCGA.4A.A93Y.01A': 3.65133654532084,
'TCGA.5P.A9JU.01A': 3.54739323593656,
'TCGA.5P.A9JY.01A': 3.43657127029103,
'TCGA.5P.A9KE.01A': 3.5608511808228,
'TCGA.A4.7288.01A': 3.59785191986549,
'TCGA.A4.7583.01A': 3.61081652486307},
'ACTR3': {'TCGA.2K.A9WE.01A': 3.57350203736343,
'TCGA.2Z.A9J1.01A': 3.50311866574467,
'TCGA.2Z.A9J3.01A': 3.55921764196203,
'TCGA.2Z.A9J5.01A': 3.52085374326433,
'TCGA.2Z.A9J6.01A': 3.49540458365642,
'TCGA.2Z.A9J7.01A': 3.46274937035809,
'TCGA.2Z.A9J8.01A': 3.61687663274809,
'TCGA.2Z.A9JD.01A': 3.53488318379022,
'TCGA.2Z.A9JI.01A': 3.58139427546552,
'TCGA.2Z.A9JJ.01A': 3.6252928636073,
'TCGA.2Z.A9JO.01A': 3.54126420446724,
'TCGA.2Z.A9JQ.01A': 3.42425304944404,
'TCGA.4A.A93W.01A': 3.55142422147974,
'TCGA.4A.A93X.01A': 3.53947155870953,
'TCGA.4A.A93Y.01A': 3.62867013082177,
'TCGA.5P.A9JU.01A': 3.52046634871136,
'TCGA.5P.A9JY.01A': 3.46109723266569,
'TCGA.5P.A9KE.01A': 3.52279905533904,
'TCGA.A4.7288.01A': 3.54143423875656,
'TCGA.A4.7583.01A': 3.55995707578027},
'ACTR5': {'TCGA.2K.A9WE.01A': 3.06208977903951,
'TCGA.2Z.A9J1.01A': 2.98674440955131,
'TCGA.2Z.A9J3.01A': 2.86631208756685,
'TCGA.2Z.A9J5.01A': 2.8694202685852,
'TCGA.2Z.A9J6.01A': 2.88420379057496,
'TCGA.2Z.A9J7.01A': 3.0767417839652,
'TCGA.2Z.A9J8.01A': 2.94882975283664,
'TCGA.2Z.A9JD.01A': 2.78229187971361,
'TCGA.2Z.A9JI.01A': 3.02573502430246,
'TCGA.2Z.A9JJ.01A': 2.77839929937136,
'TCGA.2Z.A9JO.01A': 2.92446170648467,
'TCGA.2Z.A9JQ.01A': 3.04359241380142,
'TCGA.4A.A93W.01A': 3.09981160346012,
'TCGA.4A.A93X.01A': 3.04610935341119,
'TCGA.4A.A93Y.01A': 2.93746503709784,
'TCGA.5P.A9JU.01A': 2.9687386310118,
'TCGA.5P.A9JY.01A': 3.14682227496638,
'TCGA.5P.A9KE.01A': 2.92895997480749,
'TCGA.A4.7288.01A': 2.96727191950339,
'TCGA.A4.7583.01A': 2.88705049070341},
'ACYP1': {'TCGA.2K.A9WE.01A': 2.68316568819909,
'TCGA.2Z.A9J1.01A': 2.98727853617499,
'TCGA.2Z.A9J3.01A': 2.8747308602562,
'TCGA.2Z.A9J5.01A': 2.90471648931281,
'TCGA.2Z.A9J6.01A': 2.86024251917032,
'TCGA.2Z.A9J7.01A': 2.90719465672888,
'TCGA.2Z.A9J8.01A': 2.58190565590721,
'TCGA.2Z.A9JD.01A': 2.83706565172417,
'TCGA.2Z.A9JI.01A': 2.71150852376803,
'TCGA.2Z.A9JJ.01A': 2.71288741077102,
'TCGA.2Z.A9JO.01A': 2.91917976563717,
'TCGA.2Z.A9JQ.01A': 2.92401107924786,
'TCGA.4A.A93W.01A': 2.94132783915943,
'TCGA.4A.A93X.01A': 2.74314584851536,
'TCGA.4A.A93Y.01A': 2.57130863051519,
'TCGA.5P.A9JU.01A': 2.8605082258947,
'TCGA.5P.A9JY.01A': 2.94478769990945,
'TCGA.5P.A9KE.01A': 2.86417482049571,
'TCGA.A4.7288.01A': 2.61921928074634,
'TCGA.A4.7583.01A': 2.71042008838105},
'ADAM10': {'TCGA.2K.A9WE.01A': 3.42903685557118,
'TCGA.2Z.A9J1.01A': 3.46205062573361,
'TCGA.2Z.A9J3.01A': 3.51938053547695,
'TCGA.2Z.A9J5.01A': 3.14673944797621,
'TCGA.2Z.A9J6.01A': 3.36492099305172,
'TCGA.2Z.A9J7.01A': 3.39490201042367,
'TCGA.2Z.A9J8.01A': 3.46715479737621,
'TCGA.2Z.A9JD.01A': 3.44039615808161,
'TCGA.2Z.A9JI.01A': 3.36116354963951,
'TCGA.2Z.A9JJ.01A': 3.5867772356033,
'TCGA.2Z.A9JO.01A': 3.35339021556805,
'TCGA.2Z.A9JQ.01A': 2.97520964839415,
'TCGA.4A.A93W.01A': 3.38747266012348,
'TCGA.4A.A93X.01A': 3.2851543302658,
'TCGA.4A.A93Y.01A': 3.4227242167674,
'TCGA.5P.A9JU.01A': 3.44397415741342,
'TCGA.5P.A9JY.01A': 3.23530165942945,
'TCGA.5P.A9KE.01A': 3.36142161259566,
'TCGA.A4.7288.01A': 3.49056254536074,
'TCGA.A4.7583.01A': 3.47832924902138},
'ADAM9': {'TCGA.2K.A9WE.01A': 3.53049168632165,
'TCGA.2Z.A9J1.01A': 3.51399144540852,
'TCGA.2Z.A9J3.01A': 3.32862177250441,
'TCGA.2Z.A9J5.01A': 3.45002783224857,
'TCGA.2Z.A9J6.01A': 3.39858172551853,
'TCGA.2Z.A9J7.01A': 3.40951698282201,
'TCGA.2Z.A9J8.01A': 3.53261871918645,
'TCGA.2Z.A9JD.01A': 3.56427113271883,
'TCGA.2Z.A9JI.01A': 3.5666376894204,
'TCGA.2Z.A9JJ.01A': 3.67928221333131,
'TCGA.2Z.A9JO.01A': 3.52846377629195,
'TCGA.2Z.A9JQ.01A': 3.32431774552709,
'TCGA.4A.A93W.01A': 3.38242032101611,
'TCGA.4A.A93X.01A': 3.55510206417121,
'TCGA.4A.A93Y.01A': 3.53322064449544,
'TCGA.5P.A9JU.01A': 3.68697187856072,
'TCGA.5P.A9JY.01A': 3.40680584211452,
'TCGA.5P.A9KE.01A': 3.46389259376506,
'TCGA.A4.7288.01A': 3.57916689781317,
'TCGA.A4.7583.01A': 3.59495681256333},
'ADAMTS13': {'TCGA.2K.A9WE.01A': 2.99717971627462,
'TCGA.2Z.A9J1.01A': 2.95643583141984,
'TCGA.2Z.A9J3.01A': 3.14476055183309,
'TCGA.2Z.A9J5.01A': 2.86800610878318,
'TCGA.2Z.A9J6.01A': 3.08493301972618,
'TCGA.2Z.A9J7.01A': 3.24445948158587,
'TCGA.2Z.A9J8.01A': 2.97799849398308,
'TCGA.2Z.A9JD.01A': 2.99163733709338,
'TCGA.2Z.A9JI.01A': 2.90391601226256,
'TCGA.2Z.A9JJ.01A': 2.53461856974451,
'TCGA.2Z.A9JO.01A': 2.81575762806495,
'TCGA.2Z.A9JQ.01A': 3.11018668033958,
'TCGA.4A.A93W.01A': 3.05917375698737,
'TCGA.4A.A93X.01A': 2.62388941886632,
'TCGA.4A.A93Y.01A': 2.40365175187858,
'TCGA.5P.A9JU.01A': 2.89098304121939,
'TCGA.5P.A9JY.01A': 3.29226405760134,
'TCGA.5P.A9KE.01A': 2.91802360558372,
'TCGA.A4.7288.01A': 2.67994876595419,
'TCGA.A4.7583.01A': 2.96151763370802}})
</code></pre>
|
<python><machine-learning><scikit-learn><sequentialfeatureselector>
|
2023-06-04 08:11:19
| 1
| 1,545
|
Anon
|
76,399,361
| 4,561,887
|
pynput library not working as expected in Python to press Windows + D key
|
<p>I'm trying to do what this question asked (this question has no valid answers with functional code using <code>pynput</code>): <a href="https://stackoverflow.com/q/63489008/4561887">Press Windows+D with <code>pynput</code></a>. But, my attempts are not working as expected.</p>
<p>On Linux Ubuntu, pressing <kbd>Windows</kbd> + <kbd>d</kbd> will minimize all windows, thereby showing the desktop. Doing it again will bring all the windows back as they were.</p>
<p>Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>import time
from pynput.keyboard import Key, Controller
keyboard = Controller()
SUPER_KEY = Key.cmd
keyboard.press(SUPER_KEY)
# time.sleep(1)
keyboard.press('d')
keyboard.release('d')
keyboard.release(SUPER_KEY)
</code></pre>
<p>When I run it, I expect the <kbd>Windows</kbd> + <kbd>d</kbd> shortcut to be pressed, hiding all windows. Instead, only the <kbd>Windows</kbd> key is pressed, which brings up the program launcher search tool, and then a single <code>d</code> is left printed in my terminal, like this:</p>
<pre class="lang-bash prettyprint-override"><code>$ ./pynput_press_Windows+D_to_show_the_desktop.py
$ d
</code></pre>
<p>How do I get this to work?</p>
<p>The reference documentation says (<a href="https://pynput.readthedocs.io/en/latest/keyboard.html" rel="nofollow noreferrer">https://pynput.readthedocs.io/en/latest/keyboard.html</a>) that <code>Key.cmd</code> is the "Super" or "Windows" key. I've also tried with <code>Key.cmd_l</code> and <code>Key.cmd_r</code>.</p>
<blockquote>
<p><code>cmd</code> = 0</p>
<p>A generic command button. On PC platforms, this corresponds to the Super key or Windows key, and on Mac it corresponds to the Command key. This may be a modifier.</p>
<p><code>cmd_l</code> = 0</p>
<p>The left command button. On PC platforms, this corresponds to the Super key or Windows key, and on Mac it corresponds to the Command key. This may be a modifier.</p>
<p><code>cmd_r</code> = 0</p>
<p>The right command button. On PC platforms, this corresponds to the Super key or Windows key, and on Mac it corresponds to the Command key. This may be a modifier.</p>
</blockquote>
<hr />
<p>Update 4 June 2023: keyboard monitor test program, to ensure <code>Key.cmd</code> + <code>d</code> is correct for my keyboard (it is): modified from <a href="https://pynput.readthedocs.io/en/latest/keyboard.html#monitoring-the-keyboard" rel="nofollow noreferrer">https://pynput.readthedocs.io/en/latest/keyboard.html#monitoring-the-keyboard</a>:</p>
<pre class="lang-py prettyprint-override"><code>from pynput import keyboard
print("Keyboard monitor demo program. Press Esc to exit.")
def on_press(key):
try:
print('alphanumeric key {0} pressed'.format(
key.char))
except AttributeError:
print('special key {0} pressed'.format(
key))
def on_release(key):
print('{0} released'.format(
key))
if key == keyboard.Key.esc:
# Stop listener
print("Exiting the program.")
return False
# Collect events until released
with keyboard.Listener(
on_press=on_press,
on_release=on_release) as listener:
listener.join()
</code></pre>
<p>Sample output when I press Super + D:</p>
<pre class="lang-bash prettyprint-override"><code>$ ./pynput_monitor_keyboard.py
Keyboard monitor demo program. Press Esc to exit.
Key.enter released
special key Key.cmd pressed
alphanumeric key d pressed
'd' released
Key.cmd released
</code></pre>
|
<python><linux><keyboard-shortcuts><keyboard-events><pynput>
|
2023-06-04 07:29:21
| 2
| 55,879
|
Gabriel Staples
|
76,399,344
| 4,399,016
|
How to extract table values and load them into a pandas DataFrame?
|
<p>I have this code. I am trying to extract data from <a href="https://www.tsa.gov/travel/passenger-volumes" rel="nofollow noreferrer">this website</a> into pandas.</p>
<pre><code>from pyquery import PyQuery as pq
import requests
import pandas as pd
url = "https://www.tsa.gov/travel/passenger-volumes"
content = requests.get(url).content
doc = pq(content)
Passengers = doc(".views-align-center").text()
</code></pre>
<p>Method 1:</p>
<pre><code>df = pd.DataFrame([x.split(' ') for x in Passengers.split(' ')])
print(df)
</code></pre>
<p>Method 2:</p>
<pre><code>Passengers = Passengers.replace(' ',';')
Passengers
</code></pre>
<p>For Method 1, is it possible to use pandas DataFrame <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer">unstack</a> to get a proper table structure?</p>
<p>Or is it better to use Method 2? How do I split the string periodically and load it into pandas?</p>
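<p>To illustrate what I mean for Method 1: since the table has two columns per row, one idea I'm considering is pairing up alternating tokens instead of unstacking. Here is a sketch with made-up sample values (the real text would come from the scraped <code>.views-align-center</code> cells) — is this a reasonable approach?</p>
<pre><code>import pandas as pd

# Hypothetical flattened text: the scraped cells joined into one
# space-separated string, alternating date and passenger count.
# The values below are made up for illustration.
flat = "6/3/2023 2,427,954 6/2/2023 2,731,426 6/1/2023 2,369,244"

tokens = flat.split(" ")
# Pair the tokens: even indices are dates, odd indices are counts.
df = pd.DataFrame(list(zip(tokens[0::2], tokens[1::2])),
                  columns=["Date", "Passengers"])
# Strip the thousands separators and convert the counts to integers.
df["Passengers"] = df["Passengers"].str.replace(",", "").astype(int)
print(df)
</code></pre>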
|
<python><pandas><python-requests><pyquery>
|
2023-06-04 07:24:56
| 1
| 680
|
prashanth manohar
|
76,399,298
| 14,777,704
|
How to explicitly set the horizontal scale of HeatMap in Plotly?
|
<p>I have a dataset of runs scored in a cricket match of 20 overs. When I plot it as a heatmap (using Plotly and Dash), in a few cases the horizontal scale of the heatmap goes beyond 20 overs, even though data is present for only 20 overs.</p>
<p>My requirement is that I don't want the scale to go beyond 20.</p>
<p>In the example image shared here<a href="https://i.sstatic.net/2lUSr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2lUSr.png" alt="enter image description here" /></a>, the dataset for a specific case has data from overs 14 to 20, but the heatmap axis goes up to 22. (This happens for a few cases only; the horizontal scale changes automatically.)</p>
<p>Also, it would be very helpful if I could show the divisions for every cell.</p>
<p>The code I have written so far is as follows -</p>
<pre><code>import plotly.express as px
kptable1 = kdf.pivot_table(index='bowling_team', columns='over', values='total_runs', aggfunc='sum', fill_value=0) #kdf is the pandas dataframe
line_fig = px.imshow(kptable1,text_auto=True)
return line_fig
</code></pre>
<p><strong>Edit-</strong>
The sample data is provided below -</p>
<pre><code> over batsman total_runs bowling_team
2097 14 p1 0 someteamx
2098 14 p1 1 someteamx
2100 14 p1 4 someteamx
2101 14 p1 0 someteamx
2108 15 p1 0 someteamx
2110 16 p1 0 someteamx
2111 16 p1 0 someteamx
145862 20 p1 0 someteamy
</code></pre>
<p>(There is no over 22 but in heatmap scale is showing 22)</p>
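<p>One workaround I'm experimenting with (assuming the cause is that the numeric <code>over</code> labels make <code>px.imshow</code> build a continuous x-axis) is to cast the pivot columns to strings so the axis becomes categorical. A pandas-only sketch with made-up values:</p>
<pre><code>import pandas as pd

# Hypothetical mini version of the pivot table, with integer 'over'
# labels as columns (the run totals are made up).
kptable1 = pd.DataFrame(
    {14: [5], 15: [0], 16: [0], 20: [0]},
    index=pd.Index(["someteamx"], name="bowling_team"),
)
kptable1.columns.name = "over"

# With integer column labels, Plotly treats the x-axis as continuous and
# can pad it past the data (e.g. out to 22). String labels make the axis
# categorical, so only the overs actually present in the data appear.
kptable1.columns = kptable1.columns.astype(str)
print(list(kptable1.columns))
</code></pre>
<p>After the cast, <code>px.imshow(kptable1, text_auto=True)</code> should only show the existing overs — but I'm unsure whether this is the intended way to fix it.</p>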
|
<python><plotly-dash><plotly>
|
2023-06-04 07:09:43
| 0
| 375
|
MVKXXX
|
76,399,272
| 14,325,145
|
PyQt6-Charts: Donut Chart (aka circle graph) crashes on animation
|
<p>This is my minimal reproducible example of a donut chart (aka circle graph).</p>
<p>I am trying to animate it, but my code crashes on line 24 with this error:</p>
<pre><code>chart.setAnimationOptions(chart.SeriesAnimations)
AttributeError: 'QChart' object has no attribute 'SeriesAnimations'</code></pre>
<pre><code>from PyQt6.QtWidgets import QApplication, QMainWindow, QLabel, QVBoxLayout, QWidget
from PyQt6.QtCharts import QChart, QChartView, QPieSeries
from PyQt6.QtGui import QPainter
from PyQt6.QtCore import Qt
class MainWindow(QMainWindow):
def __init__(self, *args, **kwargs):
super(MainWindow, self).__init__(*args, **kwargs)
self.setWindowTitle("Temperature Chart Example")
self.setMinimumSize(800, 600)
series = QPieSeries()
# Current temperature
series.append("Current Temperature", 30)
# Remaining temperature
series.append("Remaining Temperature", 70)
# Set the hole size
series.setHoleSize(0.6)
chart = QChart()
chart.addSeries(series)
chart.setTitle("Current Temperature")
chart.setAnimationOptions(chart.SeriesAnimations)
chart.legend().setVisible(True)
chart_view = QChartView(chart)
chart_view.setRenderHint(QPainter.RenderHint.Antialiasing)
# Create a layout to hold both the chart and the label
layout = QVBoxLayout()
layout.addWidget(chart_view)
layout.addWidget(self.label)
# Create a central widget to hold the layout
central_widget = QWidget()
central_widget.setLayout(layout)
# Set the widget as the central widget of the window
self.setCentralWidget(central_widget)
app = QApplication([])
window = MainWindow()
window.show()
app.exec()
</code></pre>
|
<python><pyqt6>
|
2023-06-04 07:00:38
| 1
| 373
|
zeroalpha
|