Q:
TypeError: object of type 'NoneType' has no len() when using KerasClassifier
I want to build a logistic regression model using Keras and train with X epochs. I want to obtain the accuracy and loss scores from the model.
My code raised TypeError: object of type 'NoneType' has no len(). However, X_train[cv_train] and y_train[cv_train] are not NoneType.
Code:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=1)

def build_logistic_regression_model():
    model = Sequential()
    model.add(Dense(units=1, kernel_initializer='glorot_uniform', activation='sigmoid', kernel_regularizer=l2(0.)))
    # Performance visualization callback
    performance_viz_cbk = PerformanceVisualizationCallback(model=model, validation_data=X_val, dat_dir='c:\performance_charts')
    model.compile(optimizer='sgd',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

lrscores = []
train_lrscores = []
for cv_train, cv_val in kfold.split(X_train, y_train):
    lr_model_logit = KerasClassifier(build_fn=build_logistic_regression_model, batch_size=10)
    hist = lr_model_logit.fit(X_train[cv_train], y_train[cv_train], epochs=200).history_
    losses = hist["mean_absolute_error"]
    train_lrscores.append(hist * 100)
    lr_score = hist.score(X_val, y_val)
    lrscores.append(lr_score * 100)
Traceback:
/opt/conda/lib/python3.7/site-packages/scikeras/wrappers.py:302: UserWarning: ``build_fn`` will be renamed to ``model`` in a future release, at which point use of ``build_fn`` will raise an Error instead.
"``build_fn`` will be renamed to ``model`` in a future release,"
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/tmp/ipykernel_18384/2762271288.py in <module>
3 for cv_train, cv_val in kfold.split(X_train, y_train):
4 lr_model_logit = KerasClassifier(build_fn=build_logistic_regression_model, batch_size = 10)
----> 5 hist = lr_model_logit.fit(X_train[cv_train], y_train[cv_train], epochs=200).history_
6 losses = hist["mean_absolute_error"]
7 train_lrscores.append(hist * 100)
/opt/conda/lib/python3.7/site-packages/scikeras/wrappers.py in fit(self, X, y, sample_weight, **kwargs)
1492 sample_weight = 1 if sample_weight is None else sample_weight
1493 sample_weight *= compute_sample_weight(class_weight=self.class_weight, y=y)
-> 1494 super().fit(X=X, y=y, sample_weight=sample_weight, **kwargs)
1495 return self
1496
/opt/conda/lib/python3.7/site-packages/scikeras/wrappers.py in fit(self, X, y, sample_weight, **kwargs)
765 sample_weight=sample_weight,
766 warm_start=self.warm_start,
--> 767 **kwargs,
768 )
769
/opt/conda/lib/python3.7/site-packages/scikeras/wrappers.py in _fit(self, X, y, sample_weight, warm_start, epochs, initial_epoch, **kwargs)
927 X = self.feature_encoder_.transform(X)
928
--> 929 self._check_model_compatibility(y)
930
931 self._fit_keras_model(
/opt/conda/lib/python3.7/site-packages/scikeras/wrappers.py in _check_model_compatibility(self, y)
549 # we recognize the attribute but do not force it to be
550 # generated
--> 551 if self.n_outputs_expected_ != len(self.model_.outputs):
552 raise ValueError(
553 "Detected a Keras model input of size"
TypeError: object of type 'NoneType' has no len()
X_train[cv_train]
array([[ 3.49907650e-01, 1.01934833e+00, 9.22962131e-01, ...,
4.65851423e-01, 5.85124577e-01, -2.30825406e-01],
[-1.66145691e-01, -1.70198795e-01, 7.40812556e-01, ...,
-1.25252966e-01, 6.11333541e-04, -1.85578709e+00],
[-3.34532309e-01, 1.47744989e+00, -7.94889360e-01, ...,
1.10431254e+00, 5.00866647e-01, 5.75451553e-01],
...,
[-1.21341832e+00, 8.56729999e-01, 1.87070578e-01, ...,
-8.38769062e-01, -7.08780127e-02, -6.54645722e-01],
[ 3.45711192e-01, 8.01029131e-01, 9.37260745e-01, ...,
6.35312010e-01, -1.77277404e-01, -1.05178867e+00],
[ 1.65016194e+00, 1.34960903e+00, 1.17654404e+00, ...,
3.79284887e-01, 4.38081218e-01, -3.55481467e-01]])
y_train
array([1, 3, 2, 2, 3, 2, 3, 3, 1, 2, 1, 1, 3, 2, 1, 1, 2, 3, 2, 1, 1, 1,
1, 0, 1, 2, 3, 1, 1, 0, 0, 1, 1, 3, 1, 1, 2, 0, 1, 1, 2, 1, 0, 3,
3, 0, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 2, 3, 3, 3, 2, 3, 1, 1, 3, 2,
3, 1, 1, 2, 1, 2, 1, 1, 0, 2, 2, 3, 3, 2, 1, 1, 3, 1, 3, 1, 1, 3,
1, 2, 0, 1, 2, 0, 2, 2, 2, 3, 1, 1, 2, 1, 0, 2, 2, 1, 1, 0, 2, 3,
3, 3, 3, 1, 1, 1, 1, 2, 3, 2, 1, 1, 1, 2, 2, 0, 3, 2, 1, 2, 3, 3,
2, 0, 3, 0, 1, 1, 1, 1, 2, 3, 3, 3, 2, 0, 3, 2, 3, 1, 3, 1, 2, 1,
2, 3, 2, 2, 3, 3, 1, 0, 3, 1, 3, 2, 2, 2, 2, 3, 3, 1, 3, 2, 3, 1,
3, 1, 2, 2, 1, 2, 3, 3, 1, 1, 2, 0, 2, 1, 2, 1, 3, 3, 3, 1, 3, 1,
1, 2, 3, 1, 1, 1, 2, 1, 2, 2, 1, 1, 2, 0, 2, 0, 3, 1, 2, 3, 1, 1,
3, 1, 3, 0, 3, 1, 3, 1, 1, 1, 1, 0, 3, 3, 2, 2, 3, 3, 1, 3, 1, 2,
1, 2, 2, 3, 2, 1, 2, 3, 3, 3, 3, 1, 2, 3, 1, 2, 1, 1, 1, 2, 1, 2,
3, 2, 1, 2, 1, 2, 1, 2, 3, 3, 1, 2, 0, 1, 2, 2, 2, 1, 1, 3, 3, 1,
3, 3, 2, 1, 3, 1, 3, 1, 1, 1, 3, 1, 3, 1, 2, 1, 0, 1, 2, 1, 2, 2,
1, 1, 2, 1, 2, 2, 2, 1, 3, 1, 2, 3, 2, 2, 3, 1, 2, 0, 0, 3, 2, 2,
2, 3, 2, 1, 1, 1, 1, 2, 2, 2, 1, 3, 1, 2, 1, 3, 2, 2, 1, 1, 1, 2,
3, 3, 2, 3, 2, 3, 1, 2, 2, 1, 2, 1, 1, 3, 3, 3, 2, 1, 1, 3, 2, 3,
3, 2, 1, 1, 1, 2, 3, 0, 1, 2, 1, 1, 2, 0, 2, 1, 0, 2, 0, 3, 2, 3,
2, 1, 1, 2, 3, 0, 0, 2, 2, 2, 1, 1, 1, 3, 1, 0, 1, 2, 2])
A:
Take a good look at your code and the error before posting a question. If that does not help, read the documentation thoroughly.
Keras fit() documentation -> What does .fit() return?
I believe you have made a typo: you expect the KerasClassifier object to have an attribute .history_. However, looking at your error, this attribute is clearly None.
Be aware that a question caused by a typo will not help future readers, so in such cases it's better to look at existing questions.
Existing question
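For reference, a minimal sketch of what the answer points at (plain Keras, not the scikit-learn wrapper): Model.fit() returns a History object, and the per-epoch metrics live in its .history dict.

history = model.fit(X, y, epochs=200)   # fit() returns a History object
losses = history.history["loss"]        # per-epoch metric lists live in .history
accs = history.history["accuracy"]      # present because metrics=['accuracy'] was compiled in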
Tags: keras, logistic_regression, machine_learning, python, scikit_learn
Q:
I get bad request when I send a POST request to create a new user
I'm trying to test registering new users. I send a POST request and get "details": "USER WITH THIS EMAIL ALREADY EXITS!", even though when I check the database the new user does get created. I delete it and try again, and still get the same outcome. I'm having a hard time finding what's wrong with my code.
views.py:
class MyTokenObtainPairSerializer(TokenObtainPairSerializer):
    def validate(self, attrs):
        data = super().validate(attrs)
        serializer = UserSerializerWithToken(self.user).data
        for k, v in serializer.items():
            data[k] = v
        return data

class MyTokenObtainPairView(TokenObtainPairView):
    serializer_class = MyTokenObtainPairSerializer

@api_view(['GET'])
@permission_classes([IsAuthenticated])
def getUserProfile(request):
    user = request.user
    serializer = UserSerializer(user, many=False)
    return Response(serializer.data)

@api_view(['GET'])
@permission_classes([IsAdminUser])
def getUsers(request):
    user = User.objects.all()
    serializer = UserSerializer(user, many=True)
    return Response(serializer.data)

# Register new users
@api_view(['POST'])
def registerUser(request):
    data = request.data
    print(data)
    try:
        user = User.objects.create(
            first_name=data['name'],
            username=data['email'],
            email=data['email'],
            password=make_password(data['password']),
        )
        serializer = UserSerializerWithToken(user, many=False)
        return Response(serializer.data)
    except:
        message = {'details': 'USER WITH THIS EMAIL ALREADY EXITS!'}
        return Response(message, status=status.HTTP_400_BAD_REQUEST)
serializer.py:
class UserSerializer(serializers.ModelSerializer):
    name = serializers.SerializerMethodField(read_only=True)
    _id = serializers.SerializerMethodField(read_only=True)
    isAdmin = serializers.SerializerMethodField(read_only=True)

    class Meta:
        model = User
        fields = ['id', '_id', 'username', 'email', 'name', 'isAdmin']

    def get_name(self, obj):
        name = obj.first_name
        if name == "":
            name = obj.email
        return name

    def get__id(self, obj):
        return obj.id

    def get_isAdmin(self, obj):
        return obj.is_staff

class UserSerializerWithToken(serializers.ModelSerializer):
    token = serializers.SerializerMethodField(read_only=True)

    class Meta:
        model = User
        fields = ['id', '_id', 'username', 'email', 'name', 'isAdmin', 'token']

    def get_token(self, obj):
        token = RefreshToken.for_user(obj)
        return str(token.access_token)
urls.py:
from django.urls import path
from app import views
from rest_framework_simplejwt.views import (
    TokenObtainPairView,
)

urlpatterns = [
    path('', views.getRoutes, name="get-routes"),
    path('users/register/', views.registerUser, name="register"),
    path('users/login/', TokenObtainPairView.as_view(), name='token_obtain_pair'),
    path('products/', views.getProducts, name="get-products"),
    path('products/<str:pk>', views.getProduct, name="get-product"),
    path('user/profile/', views.getUserProfile, name="get-user-profile"),
    path('users/', views.getUsers, name="get-users"),
]
models.py:
from django.db import models
from django.contrib.auth.models import User

class Product(models.Model):
    user = models.ForeignKey(User, on_delete=models.SET_NULL, null=True)
    name = models.CharField(max_length=200, null=True, blank=True)
    image = models.ImageField(null=True, blank=True)
    brand = models.CharField(max_length=200, null=True, blank=True)
    category = models.CharField(max_length=200, null=True, blank=True)
    description = models.TextField(null=True, blank=True)
    rating = models.DecimalField(max_digits=7, decimal_places=2, null=True, blank=True)
    numReviews = models.IntegerField(null=True, blank=True, default=0)
    price = models.DecimalField(max_digits=7, decimal_places=2, null=True, blank=True)
    countInStock = models.IntegerField(null=True, blank=True, default=0)
    created = models.DateTimeField(auto_now_add=True)
    _id = models.AutoField(primary_key=True, editable=False)

    def __str__(self):
        return self.name
A:
If the user gets created, the problem is in the next lines. Remove the "try" and you'll probably see an exception in serializer = UserSerializerWithToken(user, many=False) or in return Response(serializer.data). You are currently catching all exceptions, and sending your message ("USER WITH THIS EMAIL ALREADY EXITS!"), even though that may not be the problem. You should avoid using a simple "except" and instead catch the specific exception you need to catch.
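A minimal sketch of the narrower exception handling suggested above (assuming the duplicate-username case is what you want to report; IntegrityError is what Django raises on a unique-constraint violation):

from django.db import IntegrityError

try:
    user = User.objects.create(
        first_name=data['name'],
        username=data['email'],
        email=data['email'],
        password=make_password(data['password']),
    )
except IntegrityError:
    # only the duplicate-username case lands here; anything else
    # (e.g. a serializer bug) now surfaces as a real traceback
    return Response({'details': 'USER WITH THIS EMAIL ALREADY EXISTS!'},
                    status=status.HTTP_400_BAD_REQUEST)
serializer = UserSerializerWithToken(user, many=False)
return Response(serializer.data)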
Tags: django_rest_framework, postman, python, serialization
Q:
How can I fit circles into a shape using python?
So for a project I have to make a website that fills a shape with circles that won't intersect at any point. The user is going to upload a shape and also choose the radius of the circles, and the code is going to place as many circles (with the chosen radius) as it can into the shape.
For example, if the user uploads a 16cm x 16cm square and chooses 4cm as the radius of the circles, the system is going to place as many circles with a radius of 4cm as possible into the square, and the circles won't intersect at any point.
I tried many things using Python and failed every time. The shape can be anything, it can be completely random, and no matter what the shape is, the site has to find out where to place the circles with the selected radius, place the circles, and show the final shape. I don't know if there is a way to do this without Python, but I am open to every suggestion and solution.
A:
You could try the package circle-packing. It looks like you can get the behaviour you want by setting the arguments rho_max and rho_min of the class ShapeFill to the radius provided by the user. I've not used it, so I cannot attest to its correctness or usability. Please let us know if it works for you.
Note: the license is GPLv2, so keep the implications in mind. And don't forget to attribute.
A:
I believe filling it with the actual maximum possible number of circles would be far from easy; if you just want to fill it and don't care about the best solution, then it's fairly easy.
Start at the top-left corner and try to place a circle; if it collides with another circle or with the shape's boundary, shift it to the right by an arbitrarily small amount and try again. Once you reach the end on the right side, move down and back to the left, and start the process again.
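A rough sketch of that greedy scan (not an optimal packer; it assumes the shape is supplied as a predicate inside(x, y) -> bool, and all names here are illustrative, not from the question):

def pack_circles(inside, width, height, r, step=0.5):
    placed = []
    y = r
    while y <= height - r:
        x = r
        while x <= width - r:
            # crude boundary test: sample a few points on the circle's rim
            rim = [(x + r, y), (x - r, y), (x, y + r), (x, y - r)]
            fits_shape = all(inside(px, py) for px, py in rim)
            # non-intersection: centers at least one diameter apart
            no_overlap = all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * r) ** 2
                             for cx, cy in placed)
            if fits_shape and no_overlap:
                placed.append((x, y))
            x += step
        y += step
    return placed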
Tags: python
Q:
Search pattern to include square brackets
I am trying to search for exact words in a file. I read the file by lines and loop through the lines to find the exact words. As the in keyword is not suitable for finding exact words, I am using a regex pattern.
def findWord(w):
    return re.compile(r'\b({0})\b'.format(w), flags=re.IGNORECASE).search
The problem with this function is that it doesn't recognize square brackets [xyz].
For example
findWord('data_var_cod[0]')('Cod_Byte1 = DATA_VAR_COD[0]')
returns None whereas
findWord('data_var_cod')('Cod_Byte1 = DATA_VAR_COD')
returns <_sre.SRE_Match object at 0x0000000015622288>
Can anybody please help me to tweak the regex pattern?
A:
That's because the regex engine interprets the square brackets as a character class, since they are regex metacharacters. To get rid of this problem you need to escape the regex characters; you can use the re.escape function:
def findWord(w):
    return re.compile(r'\b({0})\b'.format(re.escape(w)), flags=re.IGNORECASE).search

Also, as a more Pythonic way to get all matches, you can use re.findall(), which returns a list of matches, or re.finditer(), which returns an iterator of match objects.
But this is still not complete, because a word boundary (\b) only matches between a word character and a non-word character, so a keyword that starts or ends with a non-word character will never match:
>>> ss = 'hello string [processing] in python.'
>>> re.compile(r'\b({0})\b'.format(re.escape('[processing]')), flags=re.IGNORECASE).search(ss)
>>>
>>> re.compile(r'({})'.format(re.escape('[processing]')), flags=re.IGNORECASE).search(ss).group(0)
'[processing]'

So I suggest removing the word boundaries if your words contain non-word characters.
As a more general alternative, you can use the following regex, which uses lookaround to match words that are surrounded by spaces or sit at the start or end of the string:
r'(?: |^)({})(?=[. ]|$)'
A:
That's because [ and ] have special meaning. You should escape the string you're looking for:
re.escape(regex)

will escape the regex for you. Change your code to:
return re.compile(r'\b({0})\b'.format(re.escape(w)), flags=re.IGNORECASE).search
                                      ↑↑↑↑↑↑↑↑↑

You can see what re.escape does for your string, for example:
>>> w = '[xyz]'
>>> print re.escape(w)
\[xyz\]
A:
You need a "smart" way of building the regex:
def findWord(w):
    if re.match(r'\w', w) and re.search(r'\w$', w):
        return re.compile(r'\b{0}\b'.format(w), flags=re.IGNORECASE).search
    if not re.match(r'\w', w) and not re.search(r'\w$', w):
        return re.compile(r'{0}'.format(w), flags=re.IGNORECASE).search
    if not re.match(r'\w', w) and re.search(r'\w$', w):
        return re.compile(r'{0}\b'.format(w), flags=re.IGNORECASE).search
    if re.match(r'\w', w) and not re.search(r'\w$', w):
        return re.compile(r'\b{0}'.format(w), flags=re.IGNORECASE).search
The problem is that some of your keywords will have word characters at the start only, others - at the end only, most will have word characters on both ends, and some will have non-word characters. To effectively check the word boundary, you need to know if a word character is present at the start/end of the keyword.
Thus, with re.match(r'\w', x) we can check if the keyword starts with a word character, and if yes, add the \b to the pattern, and with re.search(r'\w$', x) we can check if the keyword ends with a word character.
In case you have multiple keywords to check a string against you can check this post of mine.
A:
You can use a \ before [ or ].
For instance, to find 'abc[12]' in 'xyzabc[12]def', one can use
match_pattern = 'abc\[12\]'
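Putting the fixes together, a quick sketch of the escaped pattern and its remaining word-boundary caveat:

import re

def findWord(w):
    # escape regex metacharacters in the keyword before embedding it
    return re.compile(r'\b({0})\b'.format(re.escape(w)), flags=re.IGNORECASE).search

print(findWord('data_var_cod')('Cod_Byte1 = DATA_VAR_COD'))        # a match object
# \b needs a word character on one side, so a keyword ending in ']'
# still fails even when escaped; drop the boundaries for such keywords.
print(findWord('data_var_cod[0]')('Cod_Byte1 = DATA_VAR_COD[0]'))  # None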
Tags: python, regex, string_search
Q:
How to get a random value from a dictionary that is in a list
So I have a list (shown below) and I need to randomly access one of the dictionaries in the list and print it out:
e.g. Instagram, 346, Social media platform, United States
I've tried to Google and search for it, but whatever I tried didn't work. I know how to print out the whole list, but I don't know how to print a single dictionary randomly.
data = [
    {
        'name': 'Instagram',
        'follower_count': 346,
        'description': 'Social media platform',
        'country': 'United States'
    },
    {
        'name': 'Cristiano Ronaldo',
        'follower_count': 215,
        'description': 'Footballer',
        'country': 'Portugal'
    },
    {
        'name': 'Ariana Grande',
        'follower_count': 183,
        'description': 'Musician and actress',
        'country': 'United States'
    }
]
A:
You can use random.choice:
import random
random.choice(data)
A:
import random

print(random.choice(data))
# Output 1:
{'name': 'Cristiano Ronaldo', 'follower_count': 215, 'description': 'Footballer', 'country': 'Portugal'}

print(random.choice(data))
# Output 2:
{'name': 'Ariana Grande', 'follower_count': 183, 'description': 'Musician and actress', 'country': 'United States'}

# UPDATE 2 (to select more than one key from the same random dictionary)
random_index = random.randrange(len(data))  # randrange(len(data) - 1) would never pick the last entry
print(data[random_index]["name"])
print(data[random_index]["follower_count"])
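To get the exact output format the question asks for (name, count, description, country), a small follow-up sketch:

entry = random.choice(data)
print(f"{entry['name']}, {entry['follower_count']}, {entry['description']}, {entry['country']}")
# e.g. Instagram, 346, Social media platform, United States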
Tags: dictionary, list, python, random
Q:
How to fix 'NoneType' object is not subscriptable error
Please tell me why when I start the program I get the error 'NoneType' object is not subscriptable
def binary_search(array: list, element: int, start: int, end: int, counter: int) -> (int, int):
    counter += 1
    mid = (start + end) // 2
    if element == array[mid]:
        return mid, counter
    if element < array[mid]:
        return binary_search(array, element, start, mid-1, counter), counter
    else:
        return binary_search(array, element, mid+1, end, counter), counter

array = parse("input.txt")
n = size(array)
binaryComparisonsSum = 0
for i in range(1, n + 1):
    array = array.sort()
    result, binaryComparisons = binary_search(array, array[i], 0, len(array), 0)
    binaryComparisonsSum += binaryComparisons
I was expecting to get the element I'm looking for and the number of operations needed to find it, but I see an error instead.
A:
Doing:
array = array.sort()

will make array None, and you then try to subscript array by doing array[i]. Instead you only need:
array.sort()

This is because the .sort() method sorts the list in place and does not return a sorted list.
Also, it would be good practice to call array.sort() before the for loop, as it is unnecessary to sort more than once.
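A minimal sketch of the corrected loop (note range(n) rather than range(1, n + 1), since array[n] would also raise an IndexError):

array.sort()              # sorts in place; returns None, so don't reassign it
for i in range(n):
    result, binaryComparisons = binary_search(array, array[i], 0, len(array) - 1, 0)  # end as an inclusive index
    binaryComparisonsSum += binaryComparisons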
Tags: python
Q:
AttributeError: 'Client' object has no attribute 'author' (Discord Bot)
The following is my code for an Amazon web scraper, but I am getting the 'Client' object has no attribute 'author' error. It specifically says: File
"/Users/kailash/Documents/devkai/Amazon Scrapper/bot.py", line 13, in on_message
AttributeError: 'Client' object has no attribute 'author'
On line 13, there is just a blank line though.
import discord
from amazon import search

TOKEN = 'hidden'
CHANNEL_ID = 'hidden'
client = discord.Client()

@client.event
async def on_message(message):
    if message.author == client.user:  # line 11
        return
    # line 13
    if message.channel.id != CHANNEL_ID:
        return
    if message.content.split(' ')[0] == '!amazon':
        try:
            query = message.content.replace('!amazon ', '')
            item = search(query)
            embed = discord.Embed(
                title=item['title'],
                url='https://www.amazon.co.uk/' + item['url']
            )
            embed.set_thumbnail(
                url=item['img']
            )
            embed.add_field(
                name='Price',
                value=item['price']
            )
            embed.add_field(
                name='Rating',
                value=item['rating']
            )
            embed.add_field(
                name='Number of Ratings',
                value=item['number_of_ratings']
            )
            await message.channel.send(embed=embed)
        except:
            response = 'An error occurred with your request.'
            await message.channel.send(response)

client.run(TOKEN)
I tried changing the version of my discord.py to 1.0.1, but that did not help; still AttributeError: 'Client' object has no attribute 'author'. Please help.
What my terminal says:
/usr/local/bin/python "/Users/kailash/Documents/devkai/Amazon Scrapper/bot.py"
2022-11-16 21:42:22 ERROR discord.client Ignoring exception in on_message
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/discord/client.py", line 409, in _run_event
is not resumed until the WebSocket connection is terminated.
File "/Users/kailash/Documents/devkai/Amazon Scrapper/bot.py", line 13, in on_message
AttributeError: 'Client' object has no attribute 'author'
A:
I think your attribute error is being caused by your client definition. You define client as discord.Client(); however, this is usually done by defining client like this:
from discord.ext import commands

client = commands.Bot(command_prefix='your prefix')  # you can add other stuff here too, but this is just the basics

Hope this helps
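For what it's worth, on discord.py 2.x the constructor also expects intents (a hedged sketch, not from the original answer):

import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True  # required to read message.content in 2.x

client = commands.Bot(command_prefix='!', intents=intents)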
Tags: bots, discord, discord.py, python, python_3.x
Q:
How can you create an os.environ object with a modified environment, e.g. after loading many different modules with "module load"?
I have a python script that calls an application using subprocess. I am calling this application many times, currently I am doing something along the lines of
out, err = subprocess.Popen(f"module load {' '.join(my_module_list)} && ./my_binary", stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True).communicate()
to run my program. Ideally I would like to first generate a modified os.environ object that already contains all the paths to the modules I am loading, and then pass it to subprocess.Popen under the env argument. However, since the printenv command doesn't output a python dictionary format, I'm not sure how to access all the modifications that modules load makes to the environment variables. Is there a good, clean way to create the required modified os.environ object?
A:
I'd be tempted to call Python in the subprocess and dump os.environ from it:
python -c 'import os; print(os.environ)'

Once you know what you're after, you can pass a dict directly to subprocess's env arg to set custom environment variables, something like:
custom_env = os.environ.copy()
custom_env["foo"] = "bar"

subprocess.Popen(
    ...,
    env=custom_env,
)
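Building on that idea, a hedged sketch that captures the post-"module load" environment as JSON and reuses it for later calls (my_module_list is from the question; everything else is illustrative):

import json
import subprocess

dump_cmd = (f"module load {' '.join(my_module_list)} && "
            "python -c 'import os, json; print(json.dumps(dict(os.environ)))'")
out = subprocess.check_output(dump_cmd, shell=True, text=True)
module_env = json.loads(out)

# subsequent runs no longer need the shell wrapper
subprocess.Popen(["./my_binary"], env=module_env)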
Tags: environment_variables, module, python
Q:
ValueError : Call arguments received: • inputs=tf.Tensor(shape=(None, 1), dtype=float32) • training=None
I get the described error with the Input layer and I can't seem to pinpoint the problem.
I'm working on a text classification dataset and wanted to use the universal sentence encoder model for embeddings but it doesn't seem to work here. When I created my own embeddings using the embedding layer and the text vectorization layer it worked flawlessly.
use = hub.KerasLayer('https://tfhub.dev/google/universal-sentence-encoder/4', trainable=False, dtype=tf.string, input_shape=[])

class CnnModel(keras.Model):
    def __init__(self, channels):
        super(CnnModel, self).__init__()
        self.conversion = keras.Sequential([
            Input(shape=(1,)),
            use
        ])
        self.computation = keras.Sequential([
            Conv1D(filters=channels, kernel_size=2, strides=1, padding='valid'),
            MaxPool1D(pool_size=2, strides=1, padding='valid'),
            Conv1D(filters=channels, kernel_size=2, strides=1, padding='same'),
        ])
        self.dense = keras.Sequential([
            GlobalMaxPooling1D(),
            Dense(units=1, activation='sigmoid')
        ])

    def call(self, input_tensor):
        print(input_tensor.shape)
        x = self.conversion(input_tensor)
        x = self.computation(x)
        x = self.dense(x)
        return x

model = CnnModel(16)
I can't even instantiate this class and get this error:
ValueError Traceback (most recent call last)
c:\Users\gupta\OneDrive\Desktop\GIT\Repo\rough.ipynb Cell 6 in <cell line: 25>()
23 x = self.dense(x)
24 return x
---> 25 model = CnnModel(16)
c:\Users\gupta\OneDrive\Desktop\GIT\Repo\rough.ipynb Cell 6 in CnnModel.__init__(self, channels)
4 def __init__(self,channels):
5 super(CnnModel,self).__init__()
----> 6 self.conversion = keras.Sequential([
7 Input(shape=(1,)),
8 use
9 ])
10 self.computation = keras.Sequential([
11 Conv1D(filters=channels,kernel_size=2,strides=1,padding='valid'),
12 MaxPool1D(pool_size=2,strides=1,padding='valid'),
13 Conv1D(filters=channels,kernel_size=2,strides=1,padding='same'),
14 ])
15 self.dense = keras.Sequential([
16 GlobalMaxPooling1D(),
17 Dense(units=1,activation='sigmoid')
18 ])
File c:\Users\gupta\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\training\tracking\base.py:629, in no_automatic_dependency_tracking.<locals>._method_wrapper(self, *args, **kwargs)
...
Call arguments received:
• inputs=tf.Tensor(shape=(None, 1), dtype=float32)
• training=None
I also tried building this model with the Sequential API and managed to localise the same error to this (it also gives the exact same error):
ann = keras.Sequential([
    Input(shape=(1,)),
    use
])
A:
I tried to build a model for text classification and it worked for me. Providing an empty shape and specifying the data type as string in the Input layer did the trick, since we are dealing with text data:
keras.Input(shape=[], dtype=tf.string)

Example code snippet:
use = hub.KerasLayer('https://tfhub.dev/google/universal-sentence-encoder/4', trainable=False, dtype=tf.string, input_shape=[])

# build model: Sequential
ann = keras.Sequential([
    keras.Input(shape=[], dtype=tf.string),
    use,
    keras.layers.Dense(1, activation="sigmoid")
])

# compile model (assumes: from tensorflow.keras.optimizers import Adam)
ann.compile(Adam(2e-5), loss='binary_crossentropy', metrics=['accuracy'])
ann.summary()

# fit model
ann.fit(train_dataset, epochs=1,
        validation_data=test_dataset,
        validation_steps=3)
Tags: keras, python, tensorflow, text_classification
Q:
Add parameter description when converting a Dataclass to BaseModel
I need to add a description to a FastAPI query parameter, which I pass to the endpoint through a dataclass, in order to display it in the OpenAPI auto-generated documentation.
How can I do it?
I tried using metadata on the fields, but it has no effect (no description for x).
To my understanding, the dataclass object is used to create a pydantic BaseModel object, which is then used by FastAPI.
Here's my unsuccessful code:
from dataclasses import dataclass, field
from fastapi import FastAPI, Depends

app = FastAPI()

@dataclass
class MyDataclass:
    x: str = field(default=None, metadata={'description': 'descr of x'})

@app.get("/")
async def root(f: MyDataclass = Depends()):
    return {"message": "Hello World"}

@app.get("/hello/{name}")
async def say_hello(name: str):
    return {"message": f"Hello {name}"}
A:
Instead of field from dataclasses, use Query from FastAPI:
from dataclasses import dataclass
from fastapi import FastAPI, Depends, Query

app = FastAPI()

@dataclass
class MyDataclass:
    x: str = Query(default=None, description='descr of x')
Tags: fastapi, pydantic, python, python_3.x, python_dataclasses
Q:
Why does my URL appear as a POST request when it's GET (Django)
I have some templates in a Django project. I'm trying to save them in the URL with a POST request even though I specify the method in the HTML document.
Here's my views.py
from django.shortcuts import render
from django.http import HttpResponse, HttpResponseRedirect
from .forms import WcaForm, IdForm
from . import wcaScraper

# Create your views here.
def id(response):
    form = IdForm(response.GET)
    return render(response, "main/id.html", {"form": form})

def idresults(response):
    print(response.method)
    if response.method == "GET":
        print(wcaScraper.getDataByName(response.GET.get('name')))
    return render(response, "main/nameresults.html", {"ids": wcaScraper.getDataByName(response.GET.get('name'))})

def search(response):
    form = WcaForm(response.GET)
    return render(response, "main/search.html", {"form": form})

def results(response):
    wcaData = wcaScraper.getDataById(response.GET.get('id'))
    variablePassed = {
        "id": response.GET.get('id'),
        "single3": wcaData[0].single,
        "avg3": wcaData[0].avg,
        "single2": wcaData[1].single,
        "avg2": wcaData[1].avg,
        "single4": wcaData[2].single,
        "avg4": wcaData[2].avg,
        "single5": wcaData[3].single,
        "avg5": wcaData[3].avg,
        "single6": wcaData[4].single,
        "avg6": wcaData[4].avg,
        "single7": wcaData[5].single,
        "avg7": wcaData[5].avg,
        "blind3single": wcaData[6].single,
        "blind3avg": wcaData[6].avg,
        "fmsingle": wcaData[7].single,
        "fmavg": wcaData[7].avg,
        "ohsingle": wcaData[8].single,
        "ohavg": wcaData[8].avg,
        "clocksingle": wcaData[9].single,
        "clockavg": wcaData[9].avg,
        "megasingle": wcaData[10].single,
        "megaavg": wcaData[10].avg,
        "pyrasingle": wcaData[11].single,
        "pyraavg": wcaData[11].avg,
        "skewbsingle": wcaData[12].single,
        "skewbavg": wcaData[12].avg,
        "squaresingle": wcaData[13].single,
        "squareavg": wcaData[13].avg,
        "blind4single": wcaData[14].single,
        "blind4avg": wcaData[14].avg,
        "blind5single": wcaData[15].single,
        "blind5avg": wcaData[15].avg,
        "multisingle": wcaData[16].single,
        "multiavg": wcaData[16].avg,
    }
    return render(response, "main/results.html", variablePassed)
And my html template
<html>
<h1>Search by name</h1>
<form method="get" action="/idresults">
    {% csrf_token %} {{form}}
    <button type="submit">Search</button>
</form>
<p>or</p>
<a href="id/">Search by WCA Id</a>
</html>
I tried printing the method and I got GET.
But the URL looks like this:
http://localhost:8000/idresults/?csrfmiddlewaretoken=v1jXO1Tei1eU0l8FbgF49qeJU5zKJlTQUUkggmW0oYgrG5WcLOvJhBb08PBY3klg&name=zemdegs
A:
Your URL does not appear as a POST, but as a GET: a GET form serializes all of its fields into the query string, and the CSRF token is only needed for POST requests. If your problem is the token showing up in the URL, just remove the {% csrf_token %} from your template.
|
Why my url appears as a post request when its get django
|
I have some templates in a django project. I'm trying to save them in the the url with a post request even though I specify it in the html document.
Here's my views.py
`
from django.shortcuts import render
from django.http import HttpResponse, HttpResponseRedirect
from .forms import WcaForm, IdForm
from . import wcaScraper
# Create your views here.
def id(response):
form = IdForm(response.GET)
return render(response, "main/id.html", {"form": form})
def idresults(response):
print(response.method)
if response.method == "GET":
print(wcaScraper.getDataByName(response.GET.get('name')))
return render(response, "main/nameresults.html", {"ids": wcaScraper.getDataByName(response.GET.get('name'))})
def search(response):
form = WcaForm(response.GET)
return render(response, "main/search.html", {"form": form})
def results(response):
wcaData = wcaScraper.getDataById(response.GET.get('id'))
variablePassed = {
"id": response.GET.get('id'),
"single3": wcaData[0].single,
"avg3": wcaData[0].avg,
"single2": wcaData[1].single,
"avg2": wcaData[1].avg,
"single4": wcaData[2].single,
"avg4": wcaData[2].avg,
"single5": wcaData[3].single,
"avg5": wcaData[3].avg,
"single6": wcaData[4].single,
"avg6": wcaData[4].avg,
"single7": wcaData[5].single,
"avg7": wcaData[5].avg,
"blind3single": wcaData[6].single,
"blind3avg": wcaData[6].avg,
"fmsingle": wcaData[7].single,
"fmavg": wcaData[7].avg,
"ohsingle": wcaData[8].single,
"ohavg": wcaData[8].avg,
"clocksingle": wcaData[9].single,
"clockavg": wcaData[9].avg,
"megasingle": wcaData[10].single,
"megaavg": wcaData[10].avg,
"pyrasingle": wcaData[11].single,
"pyraavg": wcaData[11].avg,
"skewbsingle": wcaData[12].single,
"skewbavg": wcaData[12].avg,
"squaresingle": wcaData[13].single,
"squareavg": wcaData[13].avg,
"blind4single": wcaData[14].single,
"blind4avg": wcaData[14].avg,
"blind5single": wcaData[15].single,
"blind5avg": wcaData[15].avg,
"multisingle": wcaData[16].single,
"multiavg": wcaData[16].avg,
}
return render(response, "main/results.html", variablePassed)
And my html template
<html>
<h1>Search by name</h1>
<form method="get" action="/idresults">
{% csrf_token %} {{form}}
<button type="submit">Search</button>
</form>
<p>or</p>
<a href="id/"> Search by WCA Id</a>
</html>
I tried printing the method and I got `GET`.
But the url looks like this
http://localhost:8000/idresults/?csrfmiddlewaretoken=v1jXO1Tei1eU0l8FbgF49qeJU5zKJlTQUUkggmW0oYgrG5WcLOvJhBb08PBY3klg&name=zemdegs
|
[
"Your url does not appear as a POST, but as a GET. If your problem is the token, just remove the {%csrf_token%} from your template.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"get",
"html",
"python"
] |
stackoverflow_0074451407_django_get_html_python.txt
|
Q:
How to install dependencies of a custom python package
I have built a Python package according to the documentation: https://packaging.python.org/en/latest/tutorials/packaging-projects/
Everything works, but when I call pip install my_package.whl, the dependencies are not installed.
The dependencies are listed in the pyproject.toml file as follows:
requires = ["hatchling", "package1", "package2"]
Question 1. During the build, I can see the following log:
* Installing packages in isolated environment... (hatchling, pydicom~=2.3.1)
What does it mean the dependencies installed and for what purpose?
Question 2. How do I achieve the behavior where, after typing 'pip install my_package.whl', the required dependencies are installed beforehand? This must be possible, because all of the available Python packages work this way.
A:
requires is for build-time dependencies.
You want to use dependencies for runtime ones.
i.e.
dependencies = ["package1", "package2"]
A:
I've managed to solve it in the meantime.
Q1: These are packages required during the build, not for using the package.
Q2: Use setuptools and the setup.py file instead of pyproject.toml
The content of the file is in my case:
from setuptools import setup
setup(
name='my_package',
version='1.0.0',
description='Description',
author='Karol',
packages=['my_package'],
install_requires=[
'numpy~=1.23.4',
'pillow~=9.3.0',
'pydicom~=2.3.1'
],
zip_safe=False
)
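As a quick sanity check (a sketch, assuming Python 3.8+ and that the wheel has been installed), you can ask the installed distribution for the runtime dependencies it declared; "my_package" is the name used above:
from importlib.metadata import requires

# Lists the runtime requirements recorded for the installed distribution
print(requires("my_package"))
# e.g. ['numpy~=1.23.4', 'pillow~=9.3.0', 'pydicom~=2.3.1']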
|
How to install dependencies of a custom python package
|
I have built a Python package according to the documentation: https://packaging.python.org/en/latest/tutorials/packaging-projects/
Everything works, but when I call pip install my_package.whl, the dependencies are not installed.
The dependencies are listed in the pyproject.toml file as follows:
requires = ["hatchling", "package1", "package2"]
Question 1. During the build, I can see the following log:
* Installing packages in isolated environment... (hatchling, pydicom~=2.3.1)
What does it mean the dependencies installed and for what purpose?
Question 2. How do I achieve the behavior where, after typing 'pip install my_package.whl', the required dependencies are installed beforehand? This must be possible, because all of the available Python packages work this way.
|
[
"require is for build time dependencies.\nYou want to use dependencies for runtime ones.\ni.e.\ndependencies = [\"package1\", \"package2\"]\n\n",
"I've managed to solve it in the meantime.\nQ1: These are packages required during the build, not for using the package.\nQ2: Use setuptool and the setup.py file instead of pyproject.toml\nThe content of the file is in my case:\nfrom setuptools import setup\n\nsetup(\n name='my_package',\n version='1.0.0',\n description='Description',\n author='Karol',\n packages=['my_package'],\n install_requires=[\n 'numpy~=1.23.4',\n 'pillow~=9.3.0',\n 'pydicom~=2.3.1'\n ],\n zip_safe=False\n)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"packaging",
"pip",
"python"
] |
stackoverflow_0074475746_packaging_pip_python.txt
|
Q:
__init__() missing 1 required keyword-only argument: 'intents' discord
I was trying to make a discord bot and I used this code:
import discord
from discord.ext import commands
bot=commands.Bot(command_prefix='/')
@bot.event
async def on_ready():
print("Black_knight is up again")\`
and this error pops up:
line 6, in \<module\>
bot=commands.Bot(command_prefix='/')
TypeError: __init__() missing 1 required keyword-only argument: 'intents'
Also, I tried putting
intents = discord.Intents.default()
intents.message_content = True
before bot.commands, but still get the same error.
A:
This code should work, but note that it ENABLES ALL INTENTS:
intents = discord.Intents().all()
client = commands.Bot(command_prefix=',', intents=intents)
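For reference, a narrower sketch that enables only what this use case needs (discord.py 2.x); the token string is a placeholder:
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True  # lets the bot read message text

bot = commands.Bot(command_prefix='/', intents=intents)

@bot.event
async def on_ready():
    print("Black_knight is up again")

bot.run("YOUR_TOKEN")  # placeholder token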
|
__init__() missing 1 required keyword-only argument: 'intents' discord
|
I was trying to make a discord bot and I used this code:
import discord
from discord.ext import commands
bot=commands.Bot(command_prefix='/')
@bot.event
async def on_ready():
print("Black_knight is up again")\`
and this error pops up:
line 6, in \<module\>
bot=commands.Bot(command_prefix='/')
TypeError: __init__() missing 1 required keyword-only argument: 'intents'
Also, I tried putting
intents = discord.Intents.default()
intents.message_content = True
before bot.commands, but still get the same error.
|
[
"This code should work but IT IMPORTS ALL INTENTS\nintents = discord.Intents().all()\nclient = commands.Bot(command_prefix=',', intents=intents)\n\n"
] |
[
0
] |
[] |
[] |
[
"discord",
"discord.py",
"python",
"python_3.8"
] |
stackoverflow_0074477418_discord_discord.py_python_python_3.8.txt
|
Q:
Sync Date and Time in Windows OS from python
To avoid a time delay error with a (Binance) API I once in a while need to sync the Windows OS clock via the Date and Time settings. I want to avoid doing this manually every day and was wondering if I can do this programmatically from Python. I wasn't successful in finding out how to do this.
To run the exe file I am trying a subprocess call, but it does not get executed. Is this the right command, and how do I make sure to run it as an admin?
sync_path = 'C:\\Windows\\System32\\'
sync_exe_name = 'w32tm.exe'
exe_to_run = sync_path + sync_exe_name
x = subprocess.run(exe_to_run, capture_output=True)
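Since the question is still open, a sketch of one common approach: w32tm does nothing without a switch such as /resync, and resyncing requires administrator rights. The UAC elevation via ShellExecuteW below is an assumption about your setup, not a tested recipe:
import ctypes
import subprocess

if ctypes.windll.shell32.IsUserAnAdmin():
    # Already elevated: ask the Windows Time service to resync now
    result = subprocess.run(["w32tm", "/resync"], capture_output=True, text=True)
    print(result.stdout or result.stderr)
else:
    # Relaunch w32tm elevated; the "runas" verb triggers the UAC prompt
    ctypes.windll.shell32.ShellExecuteW(None, "runas", "w32tm.exe", "/resync", None, 1)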
|
Sync Date and Time in Windows OS from python
|
To avoid a time delay error with a (Binance) API I once in a while need to sync the Windows OS clock via the Date and Time settings. I want to avoid doing this manually every day and was wondering if I can do this programmatically from Python. I wasn't successful in finding out how to do this.
To run the exe file I am trying a subprocess call, but it does not get executed. Is this the right command, and how do I make sure to run it as an admin?
sync_path = 'C:\\Windows\\System32\\'
sync_exe_name = 'w32tm.exe'
exe_to_run = sync_path + sync_exe_name
x = subprocess.run(exe_to_run, capture_output=True)
|
[] |
[] |
[
"os.path.realpath(\"C:\\Windows\\System32\\w32tm.exe\" )\n\nRun this code\n"
] |
[
-2
] |
[
"operating_system",
"python",
"windows"
] |
stackoverflow_0069869411_operating_system_python_windows.txt
|
Q:
SQLAlchemy - Get most recent child from every parent
Here's my situation. I have two tables
Parent
id | other
1  | ...
2  | ...
3  | ...
4  | ...
Children
id | parent_id | time_created
1  | 1         | 2022-11-17 13:18:49
2  | 1         | 2022-11-17 13:47:05
3  | 2         | 2022-11-18 12:00:22
4  | 2         | 2022-11-18 16:06:17
What I would like to do, using SQLAlchemy in Python, is to retrieve the most recent Children for every parent. The result of the query would return Children with IDs 2 and 4 since they are the most recent.
A:
As for constructing the query in SQL, the cleanest way to achieve this is to use PostgreSQL's DISTINCT ON feature:
SELECT DISTINCT ON (parent_id) *
FROM Children
ORDER BY parent_id, time_created DESC;
Based on this answer, this could be mapped to the following SQLAlchemy code:
latest_children = (
    Children.query
    .distinct(Children.parent_id)
    # .filter_by(**filter_by_query)  # optional extra filters from the linked answer
    # .filter(*queries)
    .order_by(Children.parent_id, Children.time_created.desc())
    .all()
)
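If you are not on PostgreSQL, a backend-agnostic sketch (assuming the same Children model and a Flask-SQLAlchemy session) joins against a per-parent max(time_created) subquery:
from sqlalchemy import func

# One row per parent: the timestamp of its newest child
latest = (
    db.session.query(Children.parent_id,
                     func.max(Children.time_created).label('latest'))
    .group_by(Children.parent_id)
    .subquery()
)
# Join back to fetch the full child rows matching those timestamps
latest_children = (
    Children.query
    .join(latest, (Children.parent_id == latest.c.parent_id)
                  & (Children.time_created == latest.c.latest))
    .all()
)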
|
SQLAlchemy - Get most recent child from every parent
|
Here's my situation. I have two tables
Parent
id | other
1  | ...
2  | ...
3  | ...
4  | ...
Children
id | parent_id | time_created
1  | 1         | 2022-11-17 13:18:49
2  | 1         | 2022-11-17 13:47:05
3  | 2         | 2022-11-18 12:00:22
4  | 2         | 2022-11-18 16:06:17
What I would like to do, using SQLAlchemy in Python, is to retrieve the most recent Children for every parent. The result of the query would return Children with IDs 2 and 4 since they are the most recent.
|
[
"As for constructing the query in SQL, the cleanest way to achieve this is to use Postgre's DISTINCT ON feature:\nSELECT DISTINCT ON (parent_id) *\nFROM Children\nORDER BY parent_id, time_created DESC;\n\nBased on this answer, this could be mapped to the following SQLAlchemy code:\nlatest_children = Children.query.\\\n distinct(Children.parent_id).\\\n filter_by(**filter_by_query).\\\n filter(*queries).\\\n order_by(Children.query, Children.time_created.desc()).\\\n all()\n\n"
] |
[
0
] |
[] |
[] |
[
"flask_sqlalchemy",
"postgresql",
"python",
"sql",
"sqlalchemy"
] |
stackoverflow_0074477152_flask_sqlalchemy_postgresql_python_sql_sqlalchemy.txt
|
Q:
How to summarise dataframe by way of majority votes of a column
This is a really tricky statistic that I want to produce. My dataframe contains information about the true classes and the prediction results of a machine learning model, for trips and the corresponding trips' segments. The problem is best explained with an example, so I give the following example df:
df = pd.DataFrame(
{'trip': [25, 25, 25, 25, 25, 25, 25, 25, 25, 54, 54, 54, 54,73,73,73,75,75],
'segment': [0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2,0,0,1,1,3],
'class': [3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1,2,2,2,1,1],
'prediction': [0, 0, 3, 3, 3, 4, 4, 2, 2, 0, 0, 1, 1,4,2,4,0,2]
}
)
df
trip segment class prediction
0 25 0 3 0
1 25 0 3 0
2 25 0 3 3
3 25 0 3 3
4 25 0 3 3
5 25 1 3 4
6 25 1 3 4
7 25 1 3 2
8 25 1 3 2
9 54 2 1 0
10 54 2 1 0
11 54 2 1 1
12 54 2 1 1
13 73 0 2 4
14 73 0 2 2
15 73 1 2 4
16 75 1 1 0
17 75 3 1 2
From the given df, I would like to produce statistics of model's predictions at trip and segment levels, using prediction's majority votes, considering the actual class a trip or segment belongs to.
Segment's statistics
So considering the above df, I would like to produce the below segment's statistics (explanation given below):
class total-segments correctly-predicted accuracy-rate
0 - - -
1 3 1 0.33
2 2 1 0.5
3 2 1 0.5
4 - - -
no segment of class 0, so the dash.
there are 3 distinct segments of class type 1(segment 2 of trip 54 and segments 1 & 3 of trip 75). Of all the 3, only one (segment 2 of trip 54 has majority votes of its prediction correct, so 1 correctly-predicted and 0.33 (i.e. 1/3) accuracy-rate.
there're 2 segments belonging to class type 2 ( segments 0& 1 of trip 73). Segment 0 has majority votes correct, so 1 correctly-predicted and 0.5 (i.e. 1/2) accuracy-rate.
there're 2 segments of class 3 (segments 0 & 1 of trip 25). Segment 0 has majority votes correct, so 1 correctly-predicted and 0.5 (i.e. 1/2) accuracy-rate.
no segment of class type 4.
Trip-level statistics
Similarly, considering the class type of distinct trips in df and their prediction, I want to produce the following trip-level statistics (also explained below):
class total-trips correctly-predicted accuracy-rate
0 - - -
1 2 1 0.5
2 1 0 0.0
3 1 1 1.0
4 - - -
no trip belongs to class 0.
2 trips of class type 1(trip 54 & 75). 1 trip was predicted correct (majority votes of trip 54), so 1 correctly-predicted trip, and 0.5 accuracy-rate.
1 trip of class 2 (trip 73). Its majority votes prediction is incorrect, so 0 correctly-predicted trip, and 0.0 accuracy-rate.
1 trip of class 3 (trip 25). Its majority votes prediction is correct (3), so 1 correctly-predicted trip, and 1.0 accuracy-rate.
no trip of class 4.
Please forgive the long grammar, but this is a problem that one can understand only when well-explained.
A:
You can do it this way. You can comment out all but the first line and then uncomment the chained calls one by one to see what each step does.
res_seg = (
df['class'].eq(df['prediction'])
.groupby([df['class'],df['segment']]).mean()
.ge(0.5)
.groupby(level='class').agg(['size','sum'])
.rename(columns={'size':'total_segments','sum':'correctly_predicted'})\
.assign(accuracy_rate = lambda x: x['correctly_predicted']/x['total_segments'])
.reindex(range(5), fill_value='-')
.reset_index()
)
print(res_seg)
# class total_segments correctly_predicted accuracy_rate
# 0 0 - - -
# 1 1 3 1 0.333333
# 2 2 2 1 0.5
# 3 3 2 1 0.5
# 4 4 - - -
And similarly for the trips: you would have to change df['segment'] to df['trip'] in the groupby, and adjust the column names in the rename as well as in the assign, as sketched below.
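A sketch of that trip-level version follows. One caveat: .ge(0.5) counts a trip as correct when at least half of its rows are predicted correctly, which is not exactly a plurality vote (trip 25, for instance, has the plurality on the true class but fewer than half of its rows correct), so adjust the threshold to your definition of "majority":
res_trip = (
    df['class'].eq(df['prediction'])
    .groupby([df['class'], df['trip']]).mean()
    .ge(0.5)
    .groupby(level='class').agg(['size', 'sum'])
    .rename(columns={'size': 'total_trips', 'sum': 'correctly_predicted'})
    .assign(accuracy_rate=lambda x: x['correctly_predicted'] / x['total_trips'])
    .reindex(range(5), fill_value='-')
    .reset_index()
)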
|
How to summarise dataframe by way of majority votes of a column
|
This is a really tricky statistic that I want to produce. My dataframe contains information about the true classes and the prediction results of a machine learning model, for trips and the corresponding trips' segments. The problem is best explained with an example, so I give the following example df:
df = pd.DataFrame(
{'trip': [25, 25, 25, 25, 25, 25, 25, 25, 25, 54, 54, 54, 54,73,73,73,75,75],
'segment': [0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2,0,0,1,1,3],
'class': [3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1,2,2,2,1,1],
'prediction': [0, 0, 3, 3, 3, 4, 4, 2, 2, 0, 0, 1, 1,4,2,4,0,2]
}
)
df
trip segment class prediction
0 25 0 3 0
1 25 0 3 0
2 25 0 3 3
3 25 0 3 3
4 25 0 3 3
5 25 1 3 4
6 25 1 3 4
7 25 1 3 2
8 25 1 3 2
9 54 2 1 0
10 54 2 1 0
11 54 2 1 1
12 54 2 1 1
13 73 0 2 4
14 73 0 2 2
15 73 1 2 4
16 75 1 1 0
17 75 3 1 2
From the given df, I would like to produce statistics of model's predictions at trip and segment levels, using prediction's majority votes, considering the actual class a trip or segment belongs to.
Segment's statistics
So considering the above df, I would like to produce the below segment's statistics (explanation given below):
class total-segments correctly-predicted accuracy-rate
0 - - -
1 3 1 0.33
2 2 1 0.5
3 2 1 0.5
4 - - -
no segment of class 0, so the dash.
there are 3 distinct segments of class type 1(segment 2 of trip 54 and segments 1 & 3 of trip 75). Of all the 3, only one (segment 2 of trip 54 has majority votes of its prediction correct, so 1 correctly-predicted and 0.33 (i.e. 1/3) accuracy-rate.
there're 2 segments belonging to class type 2 ( segments 0& 1 of trip 73). Segment 0 has majority votes correct, so 1 correctly-predicted and 0.5 (i.e. 1/2) accuracy-rate.
there're 2 segments of class 3 (segments 0 & 1 of trip 25). Segment 0 has majority votes correct, so 1 correctly-predicted and 0.5 (i.e. 1/2) accuracy-rate.
no segment of class type 4.
Trip-level statistics
Similarly, considering the class type of distinct trips in df and their prediction, I want to produce the following trip-level statistics (also explained below):
class total-trips correctly-predicted accuracy-rate
0 - - -
1 2 1 0.5
2 1 0 0.0
3 1 1 1.0
4 - - -
no trip belongs to class 0.
2 trips of class type 1(trip 54 & 75). 1 trip was predicted correct (majority votes of trip 54), so 1 correctly-predicted trip, and 0.5 accuracy-rate.
1 trip of class 2 (trip 73). Its majority votes prediction is incorrect, so 0 correctly-predicted trip, and 0.0 accuracy-rate.
1 trip of class 3 (trip 25). Its majority votes prediction is correct (3), so 1 correctly-predicted trip, and 1.0 accuracy-rate.
no trip of class 4.
Please forgive the long grammar, but this is a problem that one can understand only when well-explained.
|
[
"You can do it this way. you can comment all but the first line and then uncomment one by one to see what is happening with the command line.\nres_seg = (\n df['class'].eq(df['prediction'])\n .groupby([df['class'],df['segment']]).mean()\n .ge(0.5)\n .groupby(level='class').agg(['size','sum'])\n .rename(columns={'size':'total_segments','sum':'correctly_predicted'})\\\n .assign(accuracy_rate = lambda x: x['correctly_predicted']/x['total_segments'])\n .reindex(range(5), fill_value='-')\n .reset_index()\n)\nprint(res_seg)\n# class total_segments correctly_predicted accuracy_rate\n# 0 0 - - -\n# 1 1 3 1 0.333333\n# 2 2 2 1 0.5\n# 3 3 2 1 0.5\n# 4 4 - - -\n\nand similar for the trips, you would have to change the df['segment'] to df['trip'] in the groupby and maybe the name of the columns in the rename as well as the assign\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074476074_dataframe_pandas_python.txt
|
Q:
make a list with 2 values from 2 columns depending on another column
I would like to create a list based on the value of a column, the value here is "Auvergne-Rhône-Alpes". And in this list put the 2 values latitude and longitude for this region.
My data frame :
I want to make a list like this :
listeNom_Région = [[46.153426, 4.926114],[46.009188,5.428017]...[45.749499,5.594320]]
A:
liste_norm = list(zip(df['latitude'], df['longitude']))
This will create tuples instead of lists inside your list. However, tuples behave very similarly to lists. If you really want lists, you can iterate over the result and convert them like this:
liste_norm = [list(elem) for elem in liste_norm]
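To also cover the "depending on another column" part of the question, filter the frame on the region first; the column name 'region' below is an assumption about your data frame, so adjust it to the real name:
# 'region' is a hypothetical column name -- adapt it to your frame
mask = df['region'] == 'Auvergne-Rhône-Alpes'
listeNom_Region = df.loc[mask, ['latitude', 'longitude']].values.tolist()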
|
make a list with 2 values from 2 columns depending on another column
|
I would like to create a list based on the value of a column, the value here is "Auvergne-Rhône-Alpes". And in this list put the 2 values latitude and longitude for this region.
My data frame :
I want to make a list like this :
listeNom_Région = [[46.153426, 4.926114],[46.009188,5.428017]...[45.749499,5.594320]]
|
[
"liste_norm = list(zip(df['latitude'], df['longitude']))\n\nThis will create tuples instead of lists inside your list. However, tuples function very similar to list. If you really want lists, you can iterate over the result and change them like this:\nliste_norm = [list(elem) for elem in liste_norm]\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"list",
"python"
] |
stackoverflow_0074477309_dataframe_list_python.txt
|
Q:
Converting two complex dictionary list to a dictionary
suppose I have the two dictionaries below:
all=[]
lis1={
'code':'matata',
'commandes':[
{
'date':'12-10-22',
'content':[
{
'article':'Article1',
'designation':'Designe1',
'quantity':5
}
]
}
]
}
lis2={
'code':'fropm',
'commandes':[
{
'date':'04-08-21',
'content':[
{
'article':'Article2',
'designation':'Designe2',
'quantity':3
}
]
}
]
}
Now I add at list level my two dictionaries
all.append(lis1)
all.append(lis2)
to replace the [..] in {..} for a single list we can do all[0]
But after adding the two dictionaries and then doing all[0], we only have the first one, whose square brackets [..] are replaced by {..}
I would like to have this rendering { {...}, {...} }
Is this possible??
A:
You need to refine what you are trying to accomplish. lis1 is a dict, not a list. lis1['commandes'] is a list containing a single dict, but presumably in the general case it might have more. Each of those has a key "date" and another key "content", which is again a list of dicts ....
An arbitrary example would be to add the commandes from lis2 to those in lis1:
lis1['commandes'].extend( lis2['commandes'] )
which is using the list .extend() method to join two lists. It should yield
{
'code':'matata',
'commandes':[
{
'date':'12-10-22',
'content':[
{
'article':'Article1',
'designation':'Designe1',
'quantity':5
}
]
},
{
'date':'04-08-21',
'content':[
{
'article':'Article2',
'designation':'Designe2',
'quantity':3
}
]
}
]
}
"Drilling down" is just a matter of supplying array indices and dict keys as appropriate. for example,
lis1['commandes'][0]['content'][0]['quantity']
will be 5.
Added in response to comment:
Building such a structure is done step by step. Remember that in Python, assignment is name-binding. So names referring to lists and dicts are a lot like pointers in other languages. You mutate the objects referred to in memory (if they are mutable, which lists and dicts are).
So something like:
import datetime

lis = {}
lis['code'] = 'example'
lis['commandes'] = []
for foo in something:
    lis['commandes'].append(build_command(foo))

...

def build_command(foo):
    command = {}
    command['date'] = datetime.datetime.now().strftime('%d-%m-%y')
    command['content'] = []
    for item in foo:  # iterating over something ...
        content = {}
        content['article'] = ...      # fill in from your data
        content['designation'] = ...
        content['quantity'] = ...
        command['content'].append(content)
    return command
|
Converting two complex dictionary list to a dictionary
|
suppose I have the two dictionaries below:
all=[]
lis1={
'code':'matata',
'commandes':[
{
'date':'12-10-22',
'content':[
{
'article':'Article1',
'designation':'Designe1',
'quantity':5
}
]
}
]
}
lis2={
'code':'fropm',
'commandes':[
{
'date':'04-08-21',
'content':[
{
'article':'Article2',
'designation':'Designe2',
'quantity':3
}
]
}
]
}
Now I add at list level my two dictionaries
all.append(lis1)
all.append(lis2)
to replace the [..] in {..} for a single list we can do all[0]
But after adding the two dictionaries and then doing all[0], we only have the first one, whose square brackets [..] are replaced by {..}
I would like to have this rendering { {...}, {...} }
Is this possible??
|
[
"You need to refine what you are trying to accomplish. lis1 is a dict, not a list. lis1['commandes'] is a list containing a single dict, but presumably in the general case it might have more. Each of those has a key \"date\" and another key \"content\", which is again a list of dicts ....\nAn arbitrary example would be to add the commandes from lis2 to those in lis1:\nlis1['commandes'].extend( lis2['commandes'] )\n\nwhich is using the list .extend() method to join two lists. It should yield\n{\n'code':'matata',\n'commandes':[\n {\n 'date':'12-10-22',\n 'content':[\n {\n 'article':'Article1',\n 'designation':'Designe1',\n 'quantity':5\n }\n ]\n },\n {\n 'date':'04-08-21',\n 'content':[\n {\n 'article':'Article2',\n 'designation':'Designe2',\n 'quantity':3\n }\n ]\n }\n ]\n}\n\n\"Drilling down\" is just a matter of supplying array indices and dict keys as appropriate. for example,\nlis1['commandes'][0]['content'][0]['quantity']\n\nwill be 5.\nAdded in response to comment:\nBuilding such a structire is step-by-step. Remember that in Python, assignment is name-binding. So names referring to lists and dicts are a lot like pointers in other languages. You mutate the objects referred to in memory (if they are mutable, which lists and dicts are).\nSo something like:\nlis = {}\nlis['code'] = 'example'\nlis['commandes'] = []\nfor foo in something:\n lis['commandes'] .append( build_command( foo))\n\n...\ndef build_command(foo):\n command = {}\n date = datetime.date.today()\n command['date'] = datetime.datetime.now().strftime('%d-%m-%y')\n command['content'] = []\n for # iterating over something ...\n content = {}\n content['article'] = \n content['designation'] =\n content['quantity'] =\n\n command['content'].append( content)\n return command\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0074476197_django_python.txt
|
Q:
Trouble visualize GIS data with Geopandas.plot()
I want to visualize GIS data about accidents in Iran in Google Colab. I have latitude, longitude, and death_count information, but when I try to read it as a GeoPandas data frame the plot function does not work correctly. Could you please advise me on this issue? I have 3720 rows and 3 columns, and the result of the visualization is attached as a link. Thanks in advance for your help.
import pandas as pd
import matplotlib.pyplot as plt
import geopandas as gpd
df = pd.read_excel("/content/accidents98.xlsx")
gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df.latitude, df.longitude))
gdf[['death_count']] = gdf[['death_count']].fillna(value=0)
fig, ax = plt.subplots(1, figsize=(20, 20))
ax.axis('off')
ax.set_title('accidents in Iran',
fontdict={'fontsize': '15', 'fontweight' : '3'})
fig = gdf.plot(column='death_count', cmap='RdYlGn', linewidth=0.5, ax=ax, edgecolor='0.2',legend=True)
the input:
the output:
A:
You have points_from_xy(df.latitude, df.longitude). points_from_xy expects (x, y), not (y, x). You need to switch the lat/lon order to lon, lat:
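In code, the corrected call would be:
gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df.longitude, df.latitude))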
|
Trouble visualize GIS data with Geopandas.plot()
|
I want to visualize GIS data about accidents in Iran in Google Colab. I have latitude, longitude, and death_count information, but when I try to read it as a GeoPandas data frame the plot function does not work correctly. Could you please advise me on this issue? I have 3720 rows and 3 columns, and the result of the visualization is attached as a link. Thanks in advance for your help.
import pandas as pd
import matplotlib.pyplot as plt
import geopandas as gpd
df = pd.read_excel("/content/accidents98.xlsx")
gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df.latitude, df.longitude))
gdf[['death_count']] = gdf[['death_count']].fillna(value=0)
fig, ax = plt.subplots(1, figsize=(20, 20))
ax.axis('off')
ax.set_title('accidents in Iran',
fontdict={'fontsize': '15', 'fontweight' : '3'})
fig = gdf.plot(column='death_count', cmap='RdYlGn', linewidth=0.5, ax=ax, edgecolor='0.2',legend=True)
the input:
the output:
|
[
"You have points_from_xy(df.latitude , df.longitude). points_from_xy expects (x, y) not (y, x). You need to switch the lat/lon order to lon, lat\n"
] |
[
0
] |
[] |
[] |
[
"geopandas",
"python"
] |
stackoverflow_0074477057_geopandas_python.txt
|
Q:
open a password protected .pem and .crt file using python
I created a private and public key pair using the command:
.....
openssl genrsa -aes256 -passout pass:password -out key.pem 4096 &&
openssl rsa -in key.pem -passin pass:password -pubout -out pukey.pub
and then created the cert file using this command:
openssl req -new -key key.pem -passin pass:password -x509 -out keycert.pem -days 365000 -subj '/CN=localhost'
So I have protected key.pem with a password, and I want to open it in my Python program. How can I specify the password to open the key.pem and keycert.pem files?
with open('../key.pem', 'rb') as f:
private_key = f.read()
with open('../keycert.pem', 'rb') as f:
certificate_chain = f.read()
When I run this I get the error:
E1117 13:57:03.515461744 70812 ssl_transport_security.cc:854]
Invalid private key.
which shows it could not open the key.pem file as it is protected by a password
A:
Use this:
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.serialization import load_pem_private_key

with open('key.pem', 'rb') as f:
    private_key = load_pem_private_key(f.read(), password="1".encode(),
                                       backend=default_backend())
    pem = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption()
    )
This solved the problem: first the private key is loaded, then it is converted to bytes.
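The certificate, unlike the key, is not encrypted (certificates are public), so it can still be read directly as in the question:
with open('keycert.pem', 'rb') as f:
    certificate_chain = f.read()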
|
open a password protected .pem and .crt file using python
|
I created a private and public key pair using the command:
.....
openssl genrsa -aes256 -passout pass:password -out key.pem 4096 &&
openssl rsa -in key.pem -passin pass:password -pubout -out pukey.pub
and then created the cert file using this command:
openssl req -new -key key.pem -passin pass:password -x509 -out keycert.pem -days 365000 -subj '/CN=localhost'
So I have protected key.pem with a password, and I want to open it in my Python program. How can I specify the password to open the key.pem and keycert.pem files?
with open('../key.pem', 'rb') as f:
private_key = f.read()
with open('../keycert.pem', 'rb') as f:
certificate_chain = f.read()
When I run this I get the error:
E1117 13:57:03.515461744 70812 ssl_transport_security.cc:854]
Invalid private key.
which shows it could not open the key.pem file as it is protected by a password
|
[
"use this line :\nwith open('key.pem', 'rb') as f:\n private_key=load_pem_private_key(f.read(), password=\"1\".encode(),\n backend=default_backend())\n pem =private_key.private_bytes(\n encoding=serialization.Encoding.PEM,\n format=serialization.PrivateFormat.TraditionalOpenSSL,\n encryption_algorithm=serialization.NoEncryption()\n )\n\nsolved the problem , first the private key is loaded second it is converted to the bytes.\n"
] |
[
2
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074472138_python_python_3.x.txt
|
Q:
What are the 4 values passed in shape for ndarray in numPy?
What is the meaning of:
shape=(1, 224, 224, 3)
I mean what are all the values specifying given here for shape?
A:
Python NumPy numpy.shape() function finds the shape of an array. By shape, we mean that it helps in finding the dimensions of an array. It returns the shape in the form of a tuple because we cannot alter a tuple just like we cannot alter the dimensions of an array.
Example Codes: numpy.shape() to Pass a Simple Array
We will pass a simple one-dimensional array now.
import numpy as np
a = np.array([89, 34, 56, 87, 90, 23, 45, 12, 65, 78, 9, 34, 12, 11, 2, 65, 78, 82, 28, 78])
dimensions = np.shape(a)
print(dimensions)
Output:
(20,)
The output shows that the array is one-dimensional and contains 20 elements.
Example Codes: numpy.shape() to Pass a Multi-Dimensional Array
import numpy as np
a = np.array([[11, 12, 5], [15, 6,10], [10, 8, 12], [12,15,8], [34, 78, 90]])
dimensions = np.shape(a)
print(dimensions)
Output:
(5, 3)
Note that the output tuple now contains two integer elements. It shows that the array contains five rows and three columns.
A:
In machine learning, images are loaded in batch/bulk for faster loading, so the first value means "loaded in batch".
# you can read like this:
(batch, image_width, image_height, RGB_dims) = (1,224,224,3)
A:
When the shape is of length 4, it means that you have a "4D-tensor". A 4D-tensor is a group of 3D-tensors. For instance, if A is a 4D-tensor, A[0] is a 3D-tensor that is the first element of this group. Here the first number 1 means that your group is composed of only one 3D-tensor. Then, you can guess that a 3D-tensor is a group of 2D-tensors (also called matrices). Here your 3D-tensor is composed of 224 2D-tensors (the second number). Then each 2D-tensor is composed of 224 1D-tensors (vectors) of length 3.
In your particular case you can also (more simply) view your data as a group composed of one RGB image of size 224*224. Each pixel has 3 values (red, green, blue intensity).
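A minimal demonstration of that reading:
import numpy as np

batch = np.zeros(shape=(1, 224, 224, 3))
print(batch.shape)     # (1, 224, 224, 3) -> a batch holding 1 image
print(batch[0].shape)  # (224, 224, 3)    -> one 224x224 RGB image
print(batch[0][0][0])  # [0. 0. 0.]       -> one pixel's R, G, B values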
|
What are the 4 values passed in shape for ndarray in numPy?
|
What is the meaning of:
shape=(1, 224, 224, 3)
I mean what are all the values specifying given here for shape?
|
[
"Python NumPy numpy.shape() function finds the shape of an array. By shape, we mean that it helps in finding the dimensions of an array. It returns the shape in the form of a tuple because we cannot alter a tuple just like we cannot alter the dimensions of an array.\nExample Codes: numpy.shape() to Pass a Simple Array\nWe will pass a simple one-dimensional array now.\nimport numpy as np \n \na = np.array([89, 34, 56, 87, 90, 23, 45, 12, 65, 78, 9, 34, 12, 11, 2, 65, 78, 82, 28, 78]) \ndimensions = np.shape(a) \nprint(dimensions) \n\nOutput:\n(20,)\nThe output shows that the array is one-dimensional and contains 20 elements.\nExample Codes: numpy.shape() to Pass a Multi-Dimensional Array\nimport numpy as np \n \na = np.array([[11, 12, 5], [15, 6,10], [10, 8, 12], [12,15,8], [34, 78, 90]]) \ndimensions = np.shape(a) \nprint(dimensions) \n\nOutput:\n(5, 3)\n\nNote that the output tuple now contains two integer elements. It shows that the array contains five rows and three columns.\n",
"In machine learning, images are loaded in batch/bulk for faster loading, so the first value means \"loaded in batch\".\n# you can read like this:\n(batch, image_width, image_height, RGB_dims) = (1,224,224,3)\n\n",
"When the shape is of length 4, it means that that you have a \"4D-tensor\". A 4D-tensor is a group of 3D-tensor. For instance if A is a 4D-tensor, A[0] is a 3D-tensor that is a the first element of this group. Here the first number 1 means that you group is only composed of one 3D-tensor. Then, you guess that a 3D-tensor is a group of 2D-tensor (also called matrices). Here your 3D-tensor is composed of 224 2D-tensors (the second number). Then each 2D-tensor is composed of 224 1D-tensors (vectors) of lenght 3.\nIn your particular case you can also (more simply) view your data as a group composed of one RGB image of size 224*224. Each pixel has 3 values (red, green, blue intensity).\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"numpy_ndarray",
"python"
] |
stackoverflow_0074477420_numpy_ndarray_python.txt
|
Q:
Unflatten a pandas dataframe
I have a pandas dataframe
df_flat = pd.DataFrame({'dim1': ['a', 'a', 'b', 'b'], 'dim2': ['x', 'y', 'x', 'y'], 'val': [2, 4, 6, 8]})
I want to transform this dataframe, "unflatten" it for want of a better word, and turn it into a np ND array such that it looks like:
df_unflatten = pd.DataFrame({'dim1': ['a', 'b'], 'x': [2, 6], 'y': [4, 8]}).set_index('dim1').to_numpy()
Edit: The 2D example was the wrong example to use here. The 3D example better highlights what i wish to do.
pd.DataFrame({'dim1': ['a', 'a', 'b', 'b', 'a', 'a', 'b', 'b'], 'dim2': ['x', 'y', 'x', 'y', 'x', 'y', 'x', 'y'], 'dim3': ['i', 'i', 'i', 'i', 'j', 'j', 'j', 'j'], 'val': [2, 4, 6, 8, 1, 3, 5, 7]})
which i hope to convert to a ND np array:
np.array([[[2, 1], [4, 3]],[[6, 5],[8, 7]]])
Note the extra square brackets, which gives this N levels of indexation (3 here). e.g. np.array([[[2, 1], [4, 3]],[[6, 5],[8, 7]]])[0][0][0]
I want this method to be flexible, such that if I add another dimension my 'unflattened' dataframe would become a numpy ndarray.
Are there any built-in pandas functions that can help me achieve this? I am aware of functions that do the opposite, e.g. .flatten(), .unstack(), etc., but I could not find any which achieve what I desire.
A:
I think the term you're looking for is "unmelt" since to "melt" a DataFrame is to bring it into the form you called df_flat. In order to achieve said unmelting, you can do as follows:
df = df_flat.set_index(['dim1', 'dim2'])['val'].unstack().reset_index()
# Output:
dim2 dim1 x y
0 a 2 4
1 b 6 8
For the flexible part, you can add more dimensions in the list as parameter for set_index.
A:
Given your data:
df_flat = pd.DataFrame({
'dim1': ['a', 'a', 'b', 'b'],
'dim2': ['x', 'y', 'x', 'y'],
'val': [2, 4, 6, 8]})
df_unflatten = pd.DataFrame({
'dim1': ['a', 'b'],
'x': [2, 6],
'y': [4, 8]}).set_index('dim1')
Just unstack after setting indicies. Unstack without parameters uses the last multi-index for unstacking.
>>> new_df = df_flat.set_index(['dim1', 'dim2']).unstack()
>>> np.allclose(new_df, df_unflatten)
True
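For the N-dimensional case from the edit, one route (a sketch, assuming xarray is installed and every level combination is present; df_3d stands for the 3D example frame) goes through to_xarray, which turns a MultiIndexed Series into a labelled N-D array:
s = df_3d.set_index(['dim1', 'dim2', 'dim3'])['val']
arr = s.to_xarray().values  # ndarray of shape (2, 2, 2), matching the expected output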
|
Unflatten a pandas dataframe
|
I have a pandas dataframe
df_flat = pd.DataFrame({'dim1': ['a', 'a', 'b', 'b'], 'dim2': ['x', 'y', 'x', 'y'], 'val': [2, 4, 6, 8]})
I want to transform this dataframe, "unflatten" it for want of a better word, and turn it into a np ND array such that it looks like:
df_unflatten = pd.DataFrame({'dim1': ['a', 'b'], 'x': [2, 6], 'y': [4, 8]}).set_index('dim1').to_numpy()
Edit: The 2D example was the wrong example to use here. The 3D example better highlights what i wish to do.
pd.DataFrame({'dim1': ['a', 'a', 'b', 'b', 'a', 'a', 'b', 'b'], 'dim2': ['x', 'y', 'x', 'y', 'x', 'y', 'x', 'y'], 'dim3': ['i', 'i', 'i', 'i', 'j', 'j', 'j', 'j'], 'val': [2, 4, 6, 8, 1, 3, 5, 7]})
which i hope to convert to a ND np array:
np.array([[[2, 1], [4, 3]],[[6, 5],[8, 7]]])
Note the extra square brackets, which gives this N levels of indexation (3 here). e.g. np.array([[[2, 1], [4, 3]],[[6, 5],[8, 7]]])[0][0][0]
I want this method to be flexible, such that if I add another dimension my 'unflattened' dataframe would become a numpy ndarray.
Are there any built-in pandas functions that can help me achieve this? I am aware of functions that do the opposite, e.g. .flatten(), .unstack(), etc., but I could not find any which achieve what I desire.
|
[
"I think the term you're looking for is \"unmelt\" since to \"melt\" a DataFrame is to bring it into the form you called df_flat. In order to achiece said unmelting, you can to as follows:\ndf = df_flat.set_index(['dim1', 'dim2'])['val'].unstack().reset_index()\n\n# Output:\ndim2 dim1 x y\n0 a 2 4\n1 b 6 8\n\nFor the flexible part, you can add more dimensions in the list as parameter for set_index.\n",
"Given your data:\ndf_flat = pd.DataFrame({\n 'dim1': ['a', 'a', 'b', 'b'],\n 'dim2': ['x', 'y', 'x', 'y'],\n 'val': [2, 4, 6, 8]})\ndf_unflatten = pd.DataFrame({\n 'dim1': ['a', 'b'], \n 'x': [2, 6], \n 'y': [4, 8]}).set_index('dim1')\n\nJust unstack after setting indicies. Unstack without parameters uses the last multi-index for unstacking.\n>>> new_df = df_flat.set_index(['dim1', 'dim2']).unstack()\n>>> np.allclose(new_df, df_unflatten)\nTrue\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074477214_pandas_python.txt
|
Q:
I made a Discord Bot, but It can't reply me on server
I'm new to Python and Discord programming, and I made a bot, but my bot does not send me messages on my server, only in private chats. I followed the freeCodeCamp tutorial to make it.
How can I fix this?
import os
import discord
import requests
import json
import random
from replit import db
from keep_alive import keep_alive
my_secret = os.getenv('TOKEN')
client = discord.Client()
sad_words = ["sad", "depressed", "unhappy", "angry", "miserable"]
starter_encouragements = [
"Cheer up!", "Hang in there.", "You are a great person / bot!"
]
cool_words = ["happy", "kind", "cheer", "great", "beautiful"]
if "responding" not in db.keys():
db["responding"] = True
def get_quote():
response = requests.get("https://zenquotes.io/api/random")
json_data = json.loads(response.text)
quote = json_data[0]["q"] + " -" + json_data[0]["a"]
return (quote)
def update_encouragements(encouraging_message):
if "encouragements" in db.keys():
encouragements = db["encouragements"]
encouragements.append(encouraging_message)
db["encouragements"] = encouragements
else:
db["encouragements"] = [encouraging_message]
def delete_encouragment(index):
encouragements = db["encouragements"]
if len(encouragements) > index:
del encouragements[index]
db["encouragements"] = encouragements
@client.event
async def on_ready():
print("We have logged in as {0.user}".format(client))
@client.event
async def on_message(message):
if message.author == client.user:
return
msg = message.content
if msg.startswith("$inspire"):
quote = get_quote()
await message.channel.send(quote)
if db["responding"]:
options = starter_encouragements
if "encouragements" in db.keys():
options = options + db["encouragements"]
if any(word in msg for word in sad_words):
await message.channel.send(random.choice(options))
if any(word in msg for word in cool_words):
await message.channel.send(random.choice(options))
if msg.startswith("$new"):
encouraging_message = msg.split("$new ", 1)[1]
update_encouragements(encouraging_message)
await message.channel.send("New encouraging message added.")
if msg.startswith("$del"):
encouragements = []
if "encouragements" in db.keys():
index = int(msg.split("$del", 1)[1])
delete_encouragment(index)
encouragements = db["encouragements"]
await message.channel.send(encouragements)
if msg.startswith("$list"):
encouragements = []
if "encouragements" in db.keys():
encouragements = db["encouragements"]
await message.channel.send(encouragements)
if msg.startswith("$responding"):
value = msg.split("$responding ", 1)[1]
if value.lower() == "true":
db["responding"] = True
await message.channel.send("Responding is on.")
else:
db["responding"] = False
await message.channel.send("Responding is off.")
keep_alive()
client.run(my_secret)
I want to know if I forgot something to make my bot reply me on server and private chat.
A:
I followed the same tutorial when I started making bots, and honestly it is kind of a misleading tutorial. Having your commands in the on_message event can be pretty inconsistent in my experience. I would reccommend that instead, you create defined commands. This can be done like this:
from discord.ext import commands
client = commands.Bot(command_prefix='$') # there are other things you can define here like if you don't want the default help command for example, and you can make the prefix whatever you want I just put it as $ since thats what you had in your code
@client.event
async def on_message(message):
if message.author == client.user:
return # stops commands if your bot is the one messaging. I didn't do this at first and just ended up with a lot of infinite loops
await client.process_commands(message) # processes commands, especially important with a return statement inside the event
@client.command # similar to client.event, this is how if you want to, you would set up a command
async def inspire(ctx): # ctx is context, which will be good to have in most commands
quote = get_quote()
await ctx.send(quote) # I used ctx.send here because ctx grabs the channel that the command was called in. So instead of grabbing the channel manually through message.channel, we can just use the context (ctx) parameter
This is just a small start on commands, and you would still need your other functions (where you pull the quote or get the words from) defined. However, I've found that creating commands like this is easier for the user to use, easier to debug, and easier to write. Here's a link to the discord.py docs, which I would definitely recommend reading a good bit of: https://discordpy.readthedocs.io/en/stable/api.html It's been super helpful to me in developing my learning of both the Discord API and Python. It's a super good resource and I hope it can help you too. Hope this answer helps.
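One more note, since the symptom is "replies in DMs but not in servers": with discord.py 2.x a bot only receives message text in servers if the message content intent is enabled (DMs are exempt from this), so this sketch of the client setup may be the missing piece:
intents = discord.Intents.default()
intents.message_content = True  # without this, message.content is empty in servers

client = discord.Client(intents=intents)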
|
I made a Discord Bot, but It can't reply me on server
|
I'm new to Python and Discord programming, and I made a bot, but my bot does not send me messages on my server, only in private chats. I followed the freeCodeCamp tutorial to make it.
How can I fix this?
import os
import discord
import requests
import json
import random
from replit import db
from keep_alive import keep_alive
my_secret = os.getenv('TOKEN')
client = discord.Client()
sad_words = ["sad", "depressed", "unhappy", "angry", "miserable"]
starter_encouragements = [
"Cheer up!", "Hang in there.", "You are a great person / bot!"
]
cool_words = ["happy", "kind", "cheer", "great", "beautiful"]
if "responding" not in db.keys():
db["responding"] = True
def get_quote():
response = requests.get("https://zenquotes.io/api/random")
json_data = json.loads(response.text)
quote = json_data[0]["q"] + " -" + json_data[0]["a"]
return (quote)
def update_encouragements(encouraging_message):
if "encouragements" in db.keys():
encouragements = db["encouragements"]
encouragements.append(encouraging_message)
db["encouragements"] = encouragements
else:
db["encouragements"] = [encouraging_message]
def delete_encouragment(index):
encouragements = db["encouragements"]
if len(encouragements) > index:
del encouragements[index]
db["encouragements"] = encouragements
@client.event
async def on_ready():
print("We have logged in as {0.user}".format(client))
@client.event
async def on_message(message):
if message.author == client.user:
return
msg = message.content
if msg.startswith("$inspire"):
quote = get_quote()
await message.channel.send(quote)
if db["responding"]:
options = starter_encouragements
if "encouragements" in db.keys():
options = options + db["encouragements"]
if any(word in msg for word in sad_words):
await message.channel.send(random.choice(options))
if any(word in msg for word in cool_words):
await message.channel.send(random.choice(options))
if msg.startswith("$new"):
encouraging_message = msg.split("$new ", 1)[1]
update_encouragements(encouraging_message)
await message.channel.send("New encouraging message added.")
if msg.startswith("$del"):
encouragements = []
if "encouragements" in db.keys():
index = int(msg.split("$del", 1)[1])
delete_encouragment(index)
encouragements = db["encouragements"]
await message.channel.send(encouragements)
if msg.startswith("$list"):
encouragements = []
if "encouragements" in db.keys():
encouragements = db["encouragements"]
await message.channel.send(encouragements)
if msg.startswith("$responding"):
value = msg.split("$responding ", 1)[1]
if value.lower() == "true":
db["responding"] = True
await message.channel.send("Responding is on.")
else:
db["responding"] = False
await message.channel.send("Responding is off.")
keep_alive()
client.run(my_secret)
I want to know if I forgot something to make my bot reply me on server and private chat.
|
[
"I followed the same tutorial when I started making bots, and honestly it is kind of a misleading tutorial. Having your commands in the on_message event can be pretty inconsistent in my experience. I would reccommend that instead, you create defined commands. This can be done like this:\nfrom discord.ext import commands\n\nclient = commands.Bot(command_prefix='$') # there are other things you can define here like if you don't want the default help command for example, and you can make the prefix whatever you want I just put it as $ since thats what you had in your code\n\n@client.event\nasync def on_message(message):\n if message.author == client.user:\n return # stops commands if your bot is the one messaging. I didn't do this at first and just ended up with a lot of infinite loops\n await client.process_commands(message) # processes commands, especially important with a return statement inside the event\n\n@client.command # similar to client.event, this is how if you want to, you would set up a command\nasync def inspire(ctx): # ctx is context, which will be good to have in most commands\n quote = get_quote()\n await ctx.send(quote) # I used ctx.send here because ctx grabs the channel that the command was called in. So instead of grabbing the channel manually through message.channel, we can just use the context (ctx) parameter\n\nThis is just a small start on commands and you would still your other functions from where you pull the quote or get the words from defined. However i've found that using and creating commands like this is easier for the user to use, easier to debug, and easier to write. Here's a link to the discord.py docs which I would definitely reccommend reading a good bit of. https://discordpy.readthedocs.io/en/stable/api.html It's been super helpful to me in developing my learning of both the discord api, and python. It's a super good resource and I hope it can help you too. Hope this answer helps\n"
] |
[
0
] |
[] |
[] |
[
"discord",
"python",
"server"
] |
stackoverflow_0074470705_discord_python_server.txt
|
Q:
AttributeError: 'str' object has no attribute 'request' - googletrans
I am trying to use this google translate python library googletrans 3.0.0, which I installed from pypi.
I used this code to start with:
from googletrans import Translator
proxies = {'http': 'http://myproxy.com:8080', 'https': 'http://myproxy.com:8080'}
translator = Translator(proxies=proxies)
translator.translate("colour")
When I call the translator in the last line above, I got this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/alpha/miniconda3/lib/python3.9/site-packages/googletrans/client.py", line 182, in translate
data = self._translate(text, dest, src, kwargs)
File "/home/alpha/miniconda3/lib/python3.9/site-packages/googletrans/client.py", line 78, in _translate
token = self.token_acquirer.do(text)
File "/home/alpha/miniconda3/lib/python3.9/site-packages/googletrans/gtoken.py", line 194, in do
self._update()
File "/home/alpha/miniconda3/lib/python3.9/site-packages/googletrans/gtoken.py", line 54, in _update
r = self.client.get(self.host)
File "/home/alpha/miniconda3/lib/python3.9/site-packages/httpx/_client.py", line 755, in get
return self.request(
File "/home/alpha/miniconda3/lib/python3.9/site-packages/httpx/_client.py", line 600, in request
return self.send(
File "/home/alpha/miniconda3/lib/python3.9/site-packages/httpx/_client.py", line 620, in send
response = self.send_handling_redirects(
File "/home/alpha/miniconda3/lib/python3.9/site-packages/httpx/_client.py", line 647, in send_handling_redirects
response = self.send_handling_auth(
File "/home/alpha/miniconda3/lib/python3.9/site-packages/httpx/_client.py", line 684, in send_handling_auth
response = self.send_single_request(request, timeout)
File "/home/alpha/miniconda3/lib/python3.9/site-packages/httpx/_client.py", line 714, in send_single_request
) = transport.request(
AttributeError: 'str' object has no attribute 'request'
Is it the way I am inputting the proxies information to the Translator that makes it unhappy?
A:
This seems to be very confusing according to the official docs, but this github issue has a solution.
For some reason the docs specify both strings and HTTPTransports but this has been clarified in the issue above.
Basically:
from httpcore import SyncHTTPProxy
from googletrans import Translator
http_proxy = SyncHTTPProxy((b'http', b'myproxy.com', 8080, b''))
proxies = {'http': http_proxy, 'https': http_proxy }
translator = Translator(proxies=proxies)
translator.translate("colour")
A:
You can also set some environment variables.
For Windows:
cmd:
set http_proxy=...
set https_proxy=...
powershell:
$env:http_proxy = ...; $env:https_proxy = ...
For Linux:
export http_proxy=... https_proxy=...
|
AttributeError: 'str' object has no attribute 'request' - googletrans
|
I am trying to use this google translate python library googletrans 3.0.0, which I installed from pypi.
I used this code to start with:
from googletrans import Translator
proxies = {'http': 'http://myproxy.com:8080', 'https': 'http://myproxy.com:8080'}
translator = Translator(proxies=proxies)
translator.translate("colour")
When I call the translator in the last line above, I got this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/alpha/miniconda3/lib/python3.9/site-packages/googletrans/client.py", line 182, in translate
data = self._translate(text, dest, src, kwargs)
File "/home/alpha/miniconda3/lib/python3.9/site-packages/googletrans/client.py", line 78, in _translate
token = self.token_acquirer.do(text)
File "/home/alpha/miniconda3/lib/python3.9/site-packages/googletrans/gtoken.py", line 194, in do
self._update()
File "/home/alpha/miniconda3/lib/python3.9/site-packages/googletrans/gtoken.py", line 54, in _update
r = self.client.get(self.host)
File "/home/alpha/miniconda3/lib/python3.9/site-packages/httpx/_client.py", line 755, in get
return self.request(
File "/home/alpha/miniconda3/lib/python3.9/site-packages/httpx/_client.py", line 600, in request
return self.send(
File "/home/alpha/miniconda3/lib/python3.9/site-packages/httpx/_client.py", line 620, in send
response = self.send_handling_redirects(
File "/home/alpha/miniconda3/lib/python3.9/site-packages/httpx/_client.py", line 647, in send_handling_redirects
response = self.send_handling_auth(
File "/home/alpha/miniconda3/lib/python3.9/site-packages/httpx/_client.py", line 684, in send_handling_auth
response = self.send_single_request(request, timeout)
File "/home/alpha/miniconda3/lib/python3.9/site-packages/httpx/_client.py", line 714, in send_single_request
) = transport.request(
AttributeError: 'str' object has no attribute 'request'
Is it the way I am inputting the proxies information to the Translator that makes it unhappy?
|
[
"This seems to be very confusing according to the official docs, but this github issue has a solution.\nFor some reason the docs specify both strings and HTTPTransports but this has been clarified in the issue above.\nBasically:\nfrom httpcore import SyncHTTPProxy\nfrom googletrans import Translator\n\nhttp_proxy = SyncHTTPProxy((b'http', b'myproxy.com', 8080, b''))\nproxies = {'http': http_proxy, 'https': http_proxy }\n\ntranslator = Translator(proxies=proxies)\ntranslator.translate(\"colour\")\n\n",
"You can also set some environment variables.\nFor Windows:\ncmd:\n set http_proxy=...\n set https_proxy=...\n\npowershell:\n $env:http_proxy = ...; $env:https_proxy = ...\n\nFor Linux:\n export http_proxy=... https_proxy=...\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"google_translate",
"python"
] |
stackoverflow_0071033206_google_translate_python.txt
|
Q:
Create a CSV using pandas
I have a method which is returning a list of data based on some conditions.
example:
source_image = cv2.imread("images/source_test.tif")
target_image= cv2.imread("images/target_test.tif")
total_matching_points =998
if total_matching_points > 500:
generateTargetCSV(source_image, target_image, total_matching_points)
Now I need to create a CSV inside the method, where it will store all the values passed as parameters:
def generateTargetCSV(source, target, total_matching_points):
#Need help here to create a csv where it will store all the values coming from above arguments///
df = pd.DataFrame(source, target, total_matching_points)
df.to_csv('some_value.csv', index=False)
A:
Assuming you are passing lists into the method, to create a pandas df you should do
df = pd.DataFrame({'source':source, 'target':target, 'total_matching_points': total_matching_points})
You can then save with
df.to_csv(location+filename, index=(Boolean))
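Putting that into the method from the question: note that the arguments there are image arrays and a scalar, so each value is wrapped in a one-element list to make a one-row frame. Storing just the image shapes instead of raw pixels is an assumption to keep the CSV readable:
import pandas as pd

def generateTargetCSV(source, target, total_matching_points):
    df = pd.DataFrame({
        'source': [str(source.shape)],  # image arrays: store the shape, not pixels
        'target': [str(target.shape)],
        'total_matching_points': [total_matching_points],
    })
    df.to_csv('some_value.csv', index=False)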
|
Create a CSV using pandas
|
I have a method which is returning a list of data based on some conditions.
example:
source_image = cv2.imread("images/source_test.tif")
target_image= cv2.imread("images/target_test.tif")
total_matching_points =998
if total_matching_points > 500:
generateTargetCSV(source_image, target_image, total_matching_points)
Now I need to create a CSV inside the method, where it will store all the values passed as parameters:
def generateTargetCSV(source, target, total_matching_points):
#Need help here to create a csv where it will store all the values coming from above arguments///
df = pd.DataFrame(source, target, total_matching_points)
df.to_csv('some_value.csv', index=False)
|
[
"Assuming you are passing lists into the method, to create a pandas df you should do\n df = pd.DataFrame({'source':source, 'target':target, 'total_matching_points': total_matching_points})\n\nYou can then save with\ndf.to_csv(location+filename, index=(Boolean))\n\n"
] |
[
0
] |
[] |
[] |
[
"csv",
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074475072_csv_dataframe_pandas_python.txt
|
Q:
Can't store a pdf file in a MySql table
I need to store a pdf file in MySql. Whether I use escape_string or not, I always get the same error
b_blob = open(dir + fname_only, "rb")
myblob = b_blob.read() ####<- b'%PDF-1.4\n%\xaa\xab\xac\xad\n4 0 obj\n<<\n/Producer (Apache FOP Version 0.94)\
try:
conn = mysql.connector.connect( usual stuff )
cursor =conn.cursor(buffered=True, dictionary=True)
newblob = conn._cmysql.escape_string(myblob)
query = """INSERT INTO `mytable` (`storing`) VALUES('%s')""" %(newblob)
cursor.execute(query)
except Exception as exc:
Functions.error_handler(exc);
return
b_blob.close()
...MySQL server version for the right syntax to use near '\n%\xaa\xab\xac\xad\n4 0 obj\n<<\n/Producer (Apache FOP Version 0.94)\n/Creation' at line 1
A:
So it looks like your problem is arising from the quotes around your string. I would consider putting double quotes around the newblob variable. It should look like this:
query = """INSERT INTO `mytable` (`storing`) VALUES("%s")""" %(newblob)
|
Can't store a pdf file in a MySql table
|
I need to store a pdf file in MySql. Whether I use escape_string or not, I always get the same error
b_blob = open(dir + fname_only, "rb")
myblob = b_blob.read() ####<- b'%PDF-1.4\n%\xaa\xab\xac\xad\n4 0 obj\n<<\n/Producer (Apache FOP Version 0.94)\
try:
conn = mysql.connector.connect( usual stuff )
cursor =conn.cursor(buffered=True, dictionary=True)
newblob = conn._cmysql.escape_string(myblob)
query = """INSERT INTO `mytable` (`storing`) VALUES('%s')""" %(newblob)
cursor.execute(query)
except Exception as exc:
Functions.error_handler(exc);
return
b_blob.close()
...MySQL server version for the right syntax to use near '\n%\xaa\xab\xac\xad\n4 0 obj\n<<\n/Producer (Apache FOP Version 0.94)\n/Creation' at line 1
|
[
"So it looks like your problem is arriving from the quotes at the start of your string. I would consider putting double quotes around the newblob variable. Should look like this.\nquery = \"\"\"INSERT INTO `mytable` (`storing`) VALUES(\"%s\")\"\"\" %(newblob)\n\n"
] |
[
0
] |
[] |
[] |
[
"mysql",
"mysql_connector_python",
"python"
] |
stackoverflow_0074476809_mysql_mysql_connector_python_python.txt
|
Q:
How do I combine repeating columns, appending the values from merged columns
I have the output of a dataframe below, converted into a dictionary.
{0: ['RevitCategory', 'Door'],
1: ['DesignModelID', 'ModelA_Rev1'],
2: ['DesignObjectID', 'ModelA_Rev1_Object1'],
3: ['TypeName', 'ARC_DOR_INTERNAL'],
4: ['Function', 'Internal'],
5: ['Uniclass2015_Ss', 'Ss_25_30_20_25 : Doorset systems'],
6: ['IfcExportAs', 'IfcDoorType.DOOR'],
7: ['NRM1Classification', '2.8.1.1 Internal doors'],
8: ['RevitCategory', 'Wall'],
9: ['DesignModelID', 'ModelA_Rev1'],
10: ['DesignObjectID', 'ModelA_Rev1_Object2'],
11: ['TypeName', 'ARC_WALL_PARTITION_TYPE1'],
12: ['Function', 'Internal'],
13: ['Uniclass2015_Ss', 'Ss_25_10_30_35 : Gypsum board partition systems'],
14: ['IfcExportAs', 'IfcWallType.PARTITIONING'],
15: ['NRM1Classification', '2.7.1.1 Internal walls'],
16: ['Area', 5],
17: ['RevitCategory', 'Wall'],
18: ['DesignModelID', 'ModelA_Rev1'],
19: ['DesignObjectID', 'ModelA_Rev1_Object3'],
20: ['TypeName', 'ARC_WALL_PARTITION_TYPE1'],
21: ['Function', 'Internal'],
22: ['Uniclass2015_Ss', 'Ss_25_10_30_35 : Gypsum board partition systems'],
23: ['IfcExportAs', 'IfcWallType.PARTITIONING'],
24: ['NRM1Classification', '2.7.1.1 Internal walls'],
25: ['Area', 5],
26: ['RevitCategory', 'Wall'],
27: ['DesignModelID', 'ModelA_Rev1'],
28: ['DesignObjectID', 'ModelA_Rev1_Object4'],
29: ['TypeName', 'ARC_WALL_PARTITION_TYPE1'],
30: ['Function', 'Internal'],
31: ['Uniclass2015_Ss', 'Ss_25_10_30_35 : Gypsum board partition systems'],
32: ['IfcExportAs', 'IfcWallType.PARTITIONING'],
33: ['NRM1Classification', '2.7.1.1 Internal walls'],
34: ['Area', 5]}
How would I combine the columns into an output like the one below, stacking the results of the columns as they are merged?
RevitCategory | DesignModelID | DesignObjectID | etc.
-----------------------------------------------------------------
Door | ModelA_Rev1 | ModelA_Rev1_Object1 |
Wall | ModelA_Rev1 | ModelA_Rev1_Object1 |
Wall | ModelA_Rev1 | ModelA_Rev1_Object1 |
-----------------------------------------------------------------
A:
Unless I misunderstood your problem, and assuming 'RevitCategory' marks the beginning of a new row, it could work like this. I don't know if there is a more idiomatic pandas solution.
import pandas as pd

df = pd.DataFrame()
j = 0
for key in data:  # data is the dictionary from the question ('dict' would shadow the builtin)
    if data[key][0] == 'RevitCategory':
        # start a new row at each 'RevitCategory' entry
        row = {data[key][0]: data[key][1]}
        i = 1
        while key + i in data and data[key + i][0] != 'RevitCategory':
            row[data[key + i][0]] = data[key + i][1]
            i += 1
        if not df.empty:
            df = pd.concat([df, pd.DataFrame(row, index=[j])])
        else:
            df = pd.DataFrame(row, index=[0])
        j += 1
A:
You could create records and build the dataframe with DataFrame.from_records:
obj = {0: ['RevitCategory', 'Door'],
1: ['DesignModelID', 'ModelA_Rev1'],
2: ['DesignObjectID', 'ModelA_Rev1_Object1'],
3: ['TypeName', 'ARC_DOR_INTERNAL'],
4: ['Function', 'Internal'],
5: ['Uniclass2015_Ss', 'Ss_25_30_20_25 : Doorset systems'],
6: ['IfcExportAs', 'IfcDoorType.DOOR'],
7: ['NRM1Classification', '2.8.1.1 Internal doors'],
8: ['RevitCategory', 'Wall'],
9: ['DesignModelID', 'ModelA_Rev1'],
10: ['DesignObjectID', 'ModelA_Rev1_Object2'],
11: ['TypeName', 'ARC_WALL_PARTITION_TYPE1'],
12: ['Function', 'Internal'],
13: ['Uniclass2015_Ss', 'Ss_25_10_30_35 : Gypsum board partition systems'],
14: ['IfcExportAs', 'IfcWallType.PARTITIONING'],
15: ['NRM1Classification', '2.7.1.1 Internal walls'],
16: ['Area', 5],
17: ['RevitCategory', 'Wall'],
18: ['DesignModelID', 'ModelA_Rev1'],
19: ['DesignObjectID', 'ModelA_Rev1_Object3'],
20: ['TypeName', 'ARC_WALL_PARTITION_TYPE1'],
21: ['Function', 'Internal'],
22: ['Uniclass2015_Ss', 'Ss_25_10_30_35 : Gypsum board partition systems'],
23: ['IfcExportAs', 'IfcWallType.PARTITIONING'],
24: ['NRM1Classification', '2.7.1.1 Internal walls'],
25: ['Area', 5],
26: ['RevitCategory', 'Wall'],
27: ['DesignModelID', 'ModelA_Rev1'],
28: ['DesignObjectID', 'ModelA_Rev1_Object4'],
29: ['TypeName', 'ARC_WALL_PARTITION_TYPE1'],
30: ['Function', 'Internal'],
31: ['Uniclass2015_Ss', 'Ss_25_10_30_35 : Gypsum board partition systems'],
32: ['IfcExportAs', 'IfcWallType.PARTITIONING'],
33: ['NRM1Classification', '2.7.1.1 Internal walls'],
34: ['Area', 5]}
# create records
records = []
c = None
for k, v in obj.values():
    # check if we have reached a new record
    if k == "RevitCategory":
        # we do not want to append at the first occurrence of RevitCategory
        if c is not None:
            # append current element as record
            records.append(c)
        # reset current record
        c = {}
    # set the value for the key
    c[k] = v

# append the final record, which is never followed by another 'RevitCategory'
if c is not None:
    records.append(c)

# create dataframe
df = pd.DataFrame.from_records(records)
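For comparison, a more compact pandas sketch of the same idea (assuming, as above, that every record starts with a 'RevitCategory' entry): number the records with a cumulative sum, then pivot.
import pandas as pd

pairs = pd.DataFrame(list(obj.values()), columns=['key', 'value'])
# each 'RevitCategory' entry bumps the record counter by one
pairs['record'] = (pairs['key'] == 'RevitCategory').cumsum()
df = pairs.pivot(index='record', columns='key', values='value')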
|
How do I combine repeating columns, appending the values from merged columns
|
I have the output of a dataframe below, converted into a dictionary.
{0: ['RevitCategory', 'Door'],
1: ['DesignModelID', 'ModelA_Rev1'],
2: ['DesignObjectID', 'ModelA_Rev1_Object1'],
3: ['TypeName', 'ARC_DOR_INTERNAL'],
4: ['Function', 'Internal'],
5: ['Uniclass2015_Ss', 'Ss_25_30_20_25 : Doorset systems'],
6: ['IfcExportAs', 'IfcDoorType.DOOR'],
7: ['NRM1Classification', '2.8.1.1 Internal doors'],
8: ['RevitCategory', 'Wall'],
9: ['DesignModelID', 'ModelA_Rev1'],
10: ['DesignObjectID', 'ModelA_Rev1_Object2'],
11: ['TypeName', 'ARC_WALL_PARTITION_TYPE1'],
12: ['Function', 'Internal'],
13: ['Uniclass2015_Ss', 'Ss_25_10_30_35 : Gypsum board partition systems'],
14: ['IfcExportAs', 'IfcWallType.PARTITIONING'],
15: ['NRM1Classification', '2.7.1.1 Internal walls'],
16: ['Area', 5],
17: ['RevitCategory', 'Wall'],
18: ['DesignModelID', 'ModelA_Rev1'],
19: ['DesignObjectID', 'ModelA_Rev1_Object3'],
20: ['TypeName', 'ARC_WALL_PARTITION_TYPE1'],
21: ['Function', 'Internal'],
22: ['Uniclass2015_Ss', 'Ss_25_10_30_35 : Gypsum board partition systems'],
23: ['IfcExportAs', 'IfcWallType.PARTITIONING'],
24: ['NRM1Classification', '2.7.1.1 Internal walls'],
25: ['Area', 5],
26: ['RevitCategory', 'Wall'],
27: ['DesignModelID', 'ModelA_Rev1'],
28: ['DesignObjectID', 'ModelA_Rev1_Object4'],
29: ['TypeName', 'ARC_WALL_PARTITION_TYPE1'],
30: ['Function', 'Internal'],
31: ['Uniclass2015_Ss', 'Ss_25_10_30_35 : Gypsum board partition systems'],
32: ['IfcExportAs', 'IfcWallType.PARTITIONING'],
33: ['NRM1Classification', '2.7.1.1 Internal walls'],
34: ['Area', 5]}
How would I combine the columns into an output like the one below, stacking the results of the columns as they are merged?
RevitCategory | DesignModelID | DesignObjectID | etc.
-----------------------------------------------------------------
Door | ModelA_Rev1 | ModelA_Rev1_Object1 |
Wall | ModelA_Rev1 | ModelA_Rev1_Object1 |
Wall | ModelA_Rev1 | ModelA_Rev1_Object1 |
-----------------------------------------------------------------
|
[
"Unless I misunderstood your problem and depending on if 'RevitCategory' marks the begining of the new row, it could work like this. I don't know if there is a solution more idiomatic to pandas.\ndf = pd.DataFrame()\nj = 0\nfor key in dict:\n if dict[key][0] == 'RevitCategory':\n row = {dict[key][0]: dict[key][1]}\n i = 1\n while key + i in dict and dict[key + i][0] != 'RevitCategory': \n row[dict[key + i][0]] = dict[key + i][1]\n i += 1 \n if not df.empty: \n df = pd.concat([df, pd.DataFrame(row, index=[j])])\n else:\n df = pd.DataFrame(row, index=[0]) \n j += 1\n\n",
"you could create records and create the dataframe with DataFrame.from_records\nobj = {0: ['RevitCategory', 'Door'],\n 1: ['DesignModelID', 'ModelA_Rev1'],\n 2: ['DesignObjectID', 'ModelA_Rev1_Object1'],\n 3: ['TypeName', 'ARC_DOR_INTERNAL'],\n 4: ['Function', 'Internal'],\n 5: ['Uniclass2015_Ss', 'Ss_25_30_20_25 : Doorset systems'],\n 6: ['IfcExportAs', 'IfcDoorType.DOOR'],\n 7: ['NRM1Classification', '2.8.1.1 Internal doors'],\n 8: ['RevitCategory', 'Wall'],\n 9: ['DesignModelID', 'ModelA_Rev1'],\n 10: ['DesignObjectID', 'ModelA_Rev1_Object2'],\n 11: ['TypeName', 'ARC_WALL_PARTITION_TYPE1'],\n 12: ['Function', 'Internal'],\n 13: ['Uniclass2015_Ss', 'Ss_25_10_30_35 : Gypsum board partition systems'],\n 14: ['IfcExportAs', 'IfcWallType.PARTITIONING'],\n 15: ['NRM1Classification', '2.7.1.1 Internal walls'],\n 16: ['Area', 5],\n 17: ['RevitCategory', 'Wall'],\n 18: ['DesignModelID', 'ModelA_Rev1'],\n 19: ['DesignObjectID', 'ModelA_Rev1_Object3'],\n 20: ['TypeName', 'ARC_WALL_PARTITION_TYPE1'],\n 21: ['Function', 'Internal'],\n 22: ['Uniclass2015_Ss', 'Ss_25_10_30_35 : Gypsum board partition systems'],\n 23: ['IfcExportAs', 'IfcWallType.PARTITIONING'],\n 24: ['NRM1Classification', '2.7.1.1 Internal walls'],\n 25: ['Area', 5],\n 26: ['RevitCategory', 'Wall'],\n 27: ['DesignModelID', 'ModelA_Rev1'],\n 28: ['DesignObjectID', 'ModelA_Rev1_Object4'],\n 29: ['TypeName', 'ARC_WALL_PARTITION_TYPE1'],\n 30: ['Function', 'Internal'],\n 31: ['Uniclass2015_Ss', 'Ss_25_10_30_35 : Gypsum board partition systems'],\n 32: ['IfcExportAs', 'IfcWallType.PARTITIONING'],\n 33: ['NRM1Classification', '2.7.1.1 Internal walls'],\n 34: ['Area', 5]}\n\n# create records\nrecords = []\nc = None\nfor k, v in obj.values():\n # check if we have a new record to add\n if k == \"RevitCategory\":\n # we do not want to add the at the first occurence of RevitCategory\n if c is not None:\n # append current element as record\n records.append(c)\n # reset current record\n c = {}\n # set the value for the key\n c[k] = v\n\n# create dataframe\ndf = pd.DataFrame.from_records(records)\n\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074477154_pandas_python.txt
|
Q:
np.r_ function with two values
I have found the following code:
x=0.3*np.random.randn(100,2)
x_train=np.r_[x+2,x-2]
In the first case x is an array of 100 rows and two columns, in a list-of-lists format, from what I see. In this case when I use size it returns 200. However, the x_train part uses np.r_. From what I know this instruction serves to concatenate arrays, so when I run size again it returns 400. However, I cannot work out what x+2 and x-2 do in this case. For example, why does the first case add 2 and the other subtract 2?
I have read the documentation and still not get any clue.
A:
The linked scikit example is showing how to find two separate classes in 2 dimensions. The code you are asking about generates random x & y coordinate data for those two separate classes.
The purpose of np.random.randn is to generate 100 normally-distributed random x and y coordinate pairs (i.e. x is a 100x2 matrix). Side note: the 0.3 multiplier is probably used to decrease the standard deviation for tighter clusters.
By adding 2 to x (i.e. adding the value 2 to each element in x), they create a set of x and y coordinates that are closely scattered around (2, 2), and by subtracting 2 from x, they create a set of x and y coordinates that are scattered around (-2, -2).
np.r_, in this case, is the same as using np.concatenate((x + 2, x - 2), 0), which creates a 200x2 array with 100 observations of x & y points scattered around (2, 2) and 100 scattered around (-2, -2) in one matrix.
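A small runnable sketch of all of the above (note that .size counts elements, not rows, which explains the 200 and 400 from the question):
import numpy as np

x = 0.3 * np.random.randn(100, 2)  # 100 (x, y) pairs clustered tightly around the origin
x_train = np.r_[x + 2, x - 2]      # 100 rows shifted to (2, 2), then 100 shifted to (-2, -2)

print(x.shape, x.size)              # (100, 2) 200
print(x_train.shape, x_train.size)  # (200, 2) 400
print(np.array_equal(x_train, np.concatenate((x + 2, x - 2), axis=0)))  # True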
|
np.r_ function with two values
|
I have found the following code:
x=0.3*np.random.randn(100,2)
x_train=np.r_[x+2,x-2]
In the first case x is an array of 100 rows and two columns, in a list-of-lists format, from what I see. In this case when I use size it returns 200. However, the x_train part uses np.r_. From what I know this instruction serves to concatenate arrays, so when I run size again it returns 400. However, I cannot work out what x+2 and x-2 do in this case. For example, why does the first case add 2 and the other subtract 2?
I have read the documentation and still not get any clue.
|
[
"The linked scikit is showing how to find two separate classes in 2 dimensions. The code you are asking about generates random x&y coordinate data for those two separate classes\nThe purpose of np.random.randn is to generate 100 normally-distributed random x and y coordinate pairs (ie x is a 100x2 matrix). Side note, the .3 multiplier is probably used to decreased standard deviation for tighter clusters.\nBy adding 2 to x (ie add the value 2 to each element in x), they create a set of x and y coordinates that are closely scattered around (2,2) and by subtracting 2 from x, they create a set of x and y coordinates that are scattered around (-2,-2).\nnp.r_ ,in this case, is the same as using np.concatenate((x-2,x+2),0) which creates a 200x2 array with 100 observations of x&y points scattered around (2,2) and 100 scattered around (-2,-2) in one matrix\n"
] |
[
2
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0074477098_numpy_python.txt
|
Q:
from urllib3.util.ssl_ import ( ImportError: cannot import name ssl
My resources:
Python 2.7, Ubuntu 18.04, PyCharm, Oracle VirtualBox
I have an automation solution built in Python.
The solution can be run from both the command line and PyCharm, of course.
There are 2 options to run the automation solution:
python main.py args a,b,c... (runs 1 suite of tests)
python jenkinsRun.py args a,b,c... (runs main.py with different args each time - let's say 5 times, for instance)
Once jenkinsRun.py is running it will execute each main.py like this:
os.system('python main.py %s %s %s %s %s %s'%(STD,config.VpcStackName, '-dryrun', 'false', '-tenant' ,config.PROD_STAGE_Tenant))
Note that this is how I implemented it 3 years ago... there could be better ways, like using __import__, but I needed a way to pass arguments, etc.
Anyway, when run:
python main.py arg a,b,c..
All good.
When run:
jenkinsRun.py
which should run main each time with different args, I get an exception:
"/home/ohad/.local/lib/python2.7/site-packages/botocore/httpsession.py", line 7, in <module>
from urllib3.util.ssl_ import (
ImportError: cannot import name ssl
This happened only when I ran the code on my new environment (see resources above).
Last week I had an old virtual box with Ubuntu 15.04 (old) on which everything worked well (I haven't touched the code ever since).
I have installed libraries, drivers, etc. from scratch on the new virtual box.
Any ideas?
A:
Could be some issue with the installation. I re-installed on macOS and it worked:
sudo pip install awscli --ignore-installed six
A:
Just to make sure: are you certain that you are invoking Python 2.x ?
Ubuntu 18.04 has Python 3.x as default, so make sure that you are not accidentally starting the script using another python version.
A:
I had a similar error after creating a new environment (which also uses Boto3). It turned out to be a DLL error (ImportError: DLL load failed), which was caught by SSL module resulting in the error from the question: ImportError: cannot import name ssl.
Solution for me was to add an additional folder to the path: path_to_anaconda/Anaconda3/Library/bin. In that way, DLL load succeeds and the given ImportError is resolved.
A:
I was working in PyCharm when I hit this wall.
Solved it by redirecting the path to my Anaconda environment, which I keep better provisioned and up to date.
A:
Updating awscli to the latest version resolved it on my Mac, via the command lines below.
curl "https://awscli.amazonaws.com/AWSCLIV2-2.0.30.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /
Reference:
https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html#cliv2-mac-install-cmd
A:
After uninstalling, installing, even creating environments... this worked for me!
https://stackoverflow.com/a/60405693
A:
In my case this issue apparently came from having colliding versions of boto3, botocore and awscli. This fixed the issue in my case:
pip install boto3 botocore awscli aiobotocore --ignore-installed
A:
I was getting the same error on Win 10 and VS Code was pointing at the Conda interpreter. The issue was solved by installing Python 3.11 outside of Conda and pointing at the new interpreter. Don't forget to add the new Python to PATH and install boto3 afterwards.
|
from urllib3.util.ssl_ import ( ImportError: cannot import name ssl
|
My resources:
Python 2.7, Ubuntu 18.04, PyCharm, Oracle VirtualBox
I have an automation solution built in Python.
The solution can be run from both the command line and PyCharm, of course.
There are 2 options to run the automation solution:
python main.py args a,b,c... (runs 1 suite of tests)
python jenkinsRun.py args a,b,c... (runs main.py with different args each time - let's say 5 times, for instance)
Once jenkinsRun.py is running it will execute each main.py like this:
os.system('python main.py %s %s %s %s %s %s'%(STD,config.VpcStackName, '-dryrun', 'false', '-tenant' ,config.PROD_STAGE_Tenant))
Note that this is how I implemented it 3 years ago... there could be better ways, like using __import__, but I needed a way to pass arguments, etc.
Anyway, when run:
python main.py arg a,b,c..
All good.
When run:
jenkinsRun.py
which should run main each time with different args, I get an exception:
"/home/ohad/.local/lib/python2.7/site-packages/botocore/httpsession.py", line 7, in <module>
from urllib3.util.ssl_ import (
ImportError: cannot import name ssl
This happened only when I ran the code on my new environment (see resources above).
Last week I had an old virtual box with Ubuntu 15.04 (old) on which everything worked well (I haven't touched the code ever since).
I have installed libraries, drivers, etc. from scratch on the new virtual box.
Any ideas?
|
[
"Could be some issue with installation. I did re-installed on MAC and it worked\nsudo pip install awscli --ignore-installed six\n\n",
"Just to make sure: are you certain that you are invoking Python 2.x ?\nUbuntu 18.04 has Python 3.x as default, so make sure that you are not accidentally starting the script using another python version.\n",
"I had a similar error after creating a new environment (which also uses Boto3). It turned out to be a DLL error (ImportError: DLL load failed), which was caught by SSL module resulting in the error from the question: ImportError: cannot import name ssl. \nSolution for me was to add an additional folder to the path: path_to_anaconda/Anaconda3/Library/bin. In that way, DLL load succeeds and the given ImportError is resolved.\n",
"I was working in PyCharm when I hit this wall.\nSolved it by redirecting the path to my Anaconda environment, which I keep better provisioned and up to date.\n\n\n",
"Update the latest version of awscli resolved on my Mac by the below command line.\n\ncurl \"https://awscli.amazonaws.com/AWSCLIV2-2.0.30.pkg\" -o\n\n\n\"AWSCLIV2.pkg\" sudo installer -pkg AWSCLIV2.pkg -target /\n\nReference:\nhttps://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html#cliv2-mac-install-cmd\n",
"After uninstalling, installing, even creating environments... this worked for me!\nhttps://stackoverflow.com/a/60405693\n",
"In my case this issue came apparently from having colliding versions of boto3, botocore and awscli. This fixed the issue in my case:\npip install boto3 botocore awscli aiobotocore --ignore-installed\n\n",
"I was getting the same error on Win 10 and VS Code was pointing at the Conda interpreter. The issue was solved by installing Python 3.11 outside of Conda and pointing at the new interpreter. Don't forget to add the new Python to PATH and install boto3 afterwards.\n"
] |
[
16,
3,
3,
0,
0,
0,
0,
0
] |
[
"I am not sure why it worked. But, I had this issue in AWS Glue, and I was able to get around this problem by using Glue 3.0 instead of Glue 2.0.\n",
"Please update the latest urllib package:\nrun :\npip3 uninstall urllib3\npip3 install urllib3\n\n"
] |
[
-1,
-1
] |
[
"python",
"python_2.7"
] |
stackoverflow_0054217137_python_python_2.7.txt
|
Q:
How to Convert list to string and keep the 'quotes'
I have the following list:
StringTest = ['A','B','C','D']
The output expected is:
"'A','B','C','D'"
but it seems that the quotes are permanently deleted.
Below is the code I tried:
StringTest = ['A','B','C','D']
StringTest = ','.join(StringTest)
print(StringTest)
which returns:
"A,B,C,D"
How can I do this?
A:
You could do it like this:
StringTest = ['A','B','C','D']
print('"'+','.join(f"'{s}'" for s in StringTest)+'"')
Output:
"'A','B','C','D'"
A:
Have you tried repr?
print(','.join(map(repr, StringTest)))
# 'A','B','C','D'
print(repr(','.join(map(repr, StringTest))))
# "'A','B','C','D'"
A:
Use str.join to add the commas between each character, and use a generator expression to add the single quotes to each character:
string_test = ['A', 'B', 'C', 'D']
string_test = ",".join(f"'{c}'" for c in string_test)
print(string_test)
Output:
'A','B','C','D'
See also: f-strings
A:
You can do something weird like this:
StringTest = "'"+"','".join(StringTest)+"'"
A:
This is expected operation for string functions. If you want the "'" character included in your string, your input string needs to include it like "'A'". There are many ways to do this using string manipulation and iterating thru your input list, e.g.
','.join([f"'{each}'" for each in StringTest])
As noted below in comments, if you want to embed this string within another set of quotes since the __str__ will strip them using print(), you can:
>>> '"{}"'.format(','.join([f"'{each}'" for each in StringTest]))
'"\'A\',\'B\',\'C\',\'D\'"'
>>> print(_)
"'A','B','C','D'"
A:
It seems weird but it is an easy solution anyway:
str_edit = '"'+ str(StringTest).replace('[', '').replace(']', '') + '"'
print(str_edit.replace(" ", ''))
output:
"'A','B','C','D'"
|
How to Convert list to string and keep the 'quotes'
|
I have the following list:
StringTest = ['A','B','C','D']
The output expected is:
"'A','B','C','D'"
but it seems that the quotes are permanently deleted.
Below is the code I tried:
StringTest = ['A','B','C','D']
StringTest = ','.join(StringTest)
print(StringTest)
which returns:
"A,B,C,D"
How can I do this?
|
[
"You could do it like this:\nStringTest = ['A','B','C','D']\n\nprint('\"'+','.join(f\"'{s}'\" for s in StringTest)+'\"')\n\nOutput:\n\"'A','B','C','D'\"\n\n",
"Have you tried repr?\nprint(','.join(map(repr, StringTest)))\n# 'A','B','C','D'\nprint(repr(','.join(map(repr, StringTest)))\n# \"'A','B','C','D'\"\n\n",
"Use str.join to add the commas between each character, and use a generator expression to add the single quotes to each character:\nstring_test = ['A', 'B', 'C', 'D']\nstring_test = \",\".join(f\"'{c}'\" for c in string_test)\nprint(string_test)\n\nOutput:\n'A','B','C','D'\nSee also: f-strings\n",
"You can do something weird like this:\nStringTest = \"'\"+\"','\".join(StringTest)+\"'\"\n\n",
"This is expected operation for string functions. If you want the \"'\" character included in your string, your input string needs to include it like \"'A'\". There are many ways to do this using string manipulation and iterating thru your input list, e.g.\n','.join([f\"'{each}'\" for each in StringTest])\n\nAs noted below in comments, if you want to embed this string within another set of quotes since the __str__ will strip them using print(), you can:\n>>> '\"{}\"'.format(','.join([f\"'{each}'\" for each in StringTest]))\n'\"\\'A\\',\\'B\\',\\'C\\',\\'D\\'\"'\n>>> print(_)\n\"'A','B','C','D'\"\n\n",
"It seems weird but it is an easy solution anyway:\nstr_edit = '\"'+ str(StringTest).replace('[', '').replace(']', '') + '\"'\nprint(str_edit.replace(\" \", ''))\n\noutput:\n\"'A','B','C','D'\" \n\n"
] |
[
2,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074477391_python.txt
|
Q:
Need to plot a number of graphs in a grid from a for loop creation
I have a straightforward for loop that loops through datasets in a set and plots the resultant scatterplot for each dataset using the code below:
for i in dataframes:
x = i['cycleNumber']
y = i['QCharge_mA_h']
plt.figure()
sns.scatterplot(x=x, y=y).set(title=i.name)
This plots the graphs out as expected, one on top of the other. Is there a simple way to get them all to plot onto a grid for better readability?
As an example, let's say we have the following datasets and code:
data1 = {'X':[12, 10, 20, 17], 'Y':[9, 8, 5, 3]}
data2 = {'X':[2, 13, 7, 21], 'Y':[17, 18, 4, 6]}
data3 = {'X':[9, 19, 20, 3], 'Y':[6, 12, 4, 1]}
data4 = {'X':[10, 13, 15, 1], 'Y':[6, 12, 5,16]}
data5 = {'X':[12, 10, 5, 3], 'Y':[18, 7, 21, 7]}
data6 = {'X':[5, 10, 8, 17], 'Y':[9, 12, 5, 18]}
df1=pd.DataFrame(data1)
df2=pd.DataFrame(data2)
df3=pd.DataFrame(data3)
df4=pd.DataFrame(data4)
df5=pd.DataFrame(data5)
df6=pd.DataFrame(data6)
lst = [df1, df2, df3, df4, df5, df6]
for i in lst:
plt.figure()
sns.scatterplot(x=i['X'], y=i['Y'])
This returns each scatterplot printed on top of another, i.e. stacked. I can't upload a shot of what that output looks like as it runs across multiple pages (a tidy output that I can capture and display is exactly what I'm trying to achieve).
I want it to be in a grid, lets say a 2x3 grid given it has 6 plots. How do I achieve this?
A:
Few ways you could do this.
The Original
import matplotlib # 3.6.0
from matplotlib import pyplot as plt
import numpy as np # 1.23.3
import pandas as pd # 1.5.1
import seaborn as sns # 0.12.1
# make fake data
df = pd.DataFrame({
"cycleNumber": np.random.random(size=(100,)),
"QCharge_mA_h": np.random.random(size=(100,)),
})
# single plot
fig, ax = plt.subplots()
sns.scatterplot(df, x="cycleNumber", y="QCharge_mA_h", ax=ax)
plt.show()
With matplotlib
# make 5 random data frames
dataframes = []
for i in range(5):
np.random.seed(i)
random_df = pd.DataFrame({
"cycleNumber": np.random.random(size=(100,)),
"QCharge_mA_h": np.random.random(size=(100,)),
})
dataframes.append(random_df)
# make len(dataframes) rows using matplotlib
fig, axs = plt.subplots(nrows=len(dataframes))
for df, ax in zip(dataframes, axs):
sns.scatterplot(df, x="cycleNumber", y="QCharge_mA_h", ax=ax)
plt.show()
With seaborn
# make 5 random data frames
dataframes = []
for i in range(5):
np.random.seed(i)
random_df = pd.DataFrame({
"cycleNumber": np.random.random(size=(100,)),
"QCharge_mA_h": np.random.random(size=(100,)),
})
dataframes.append(random_df)
# concatenate the dataframes for seaborn's FacetGrid
dfs = pd.concat(dataframes, keys=range(len(dataframes)), names=["keys"])
# move keys to columns
dfs = dfs.reset_index(level="keys")
# make grid and map scatterplot to each row
grid = sns.FacetGrid(data=dfs, row="keys")
grid.map(sns.scatterplot, "cycleNumber", "QCharge_mA_h")
plt.show()
With col_wrap=3
# make 5 random data frames
dataframes = []
for i in range(5):
np.random.seed(i)
random_df = pd.DataFrame({
"cycleNumber": np.random.random(size=(100,)),
"QCharge_mA_h": np.random.random(size=(100,)),
})
dataframes.append(random_df)
# concatenate the dataframes for seaborn's FacetGrid
dfs = pd.concat(dataframes, keys=range(len(dataframes)), names=["keys"])
# move keys to columns
dfs = dfs.reset_index(level="keys")
# make grid and map scatterplot to each column, wrapping after 3
grid = sns.FacetGrid(data=dfs, col="keys", col_wrap=3)
grid.map(sns.scatterplot, "cycleNumber", "QCharge_mA_h")
plt.show()
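Tying this back to the question's six dataframes, a minimal 2x3 grid sketch with plain matplotlib (assuming lst from the question is already defined):
import matplotlib.pyplot as plt
import seaborn as sns

# 2 rows x 3 columns, one axis per dataframe in lst
fig, axs = plt.subplots(nrows=2, ncols=3, figsize=(12, 6))
for df, ax in zip(lst, axs.flat):
    sns.scatterplot(data=df, x='X', y='Y', ax=ax)
fig.tight_layout()
plt.show()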
|
Need to plot a number of graphs in a grid from a for loop creation
|
I have a straightforward for loop that loops through datasets in a set and plots the resultant scatterplot for each dataset using the code below:
for i in dataframes:
x = i['cycleNumber']
y = i['QCharge_mA_h']
plt.figure()
sns.scatterplot(x=x, y=y).set(title=i.name)
This plots the graphs out as expected, one on top of the other. Is there a simple way to get them all to plot onto a grid for better readability?
As an example, let's say we have the following datasets and code:
data1 = {'X':[12, 10, 20, 17], 'Y':[9, 8, 5, 3]}
data2 = {'X':[2, 13, 7, 21], 'Y':[17, 18, 4, 6]}
data3 = {'X':[9, 19, 20, 3], 'Y':[6, 12, 4, 1]}
data4 = {'X':[10, 13, 15, 1], 'Y':[6, 12, 5,16]}
data5 = {'X':[12, 10, 5, 3], 'Y':[18, 7, 21, 7]}
data6 = {'X':[5, 10, 8, 17], 'Y':[9, 12, 5, 18]}
df1=pd.DataFrame(data1)
df2=pd.DataFrame(data2)
df3=pd.DataFrame(data3)
df4=pd.DataFrame(data4)
df5=pd.DataFrame(data5)
df6=pd.DataFrame(data6)
lst = [df1, df2, df3, df4, df5, df6]
for i in lst:
plt.figure()
sns.scatterplot(x=i['X'], y=i['Y'])
This returns each scatterplot printed on top of another, i.e. stacked. I can't upload a shot of what that output looks like as it runs across multiple pages (a tidy output that I can capture and display is exactly what I'm trying to achieve).
I want it to be in a grid, lets say a 2x3 grid given it has 6 plots. How do I achieve this?
|
[
"Few ways you could do this.\nThe Original\nimport matplotlib # 3.6.0\nfrom matplotlib import pyplot as plt\nimport numpy as np # 1.23.3\nimport pandas as pd # 1.5.1\nimport seaborn as sns # 0.12.1\n\n\n# make fake data\ndf = pd.DataFrame({\n \"cycleNumber\": np.random.random(size=(100,)),\n \"QCharge_mA_h\": np.random.random(size=(100,)),\n})\n\n# single plot\nfig, ax = plt.subplots()\nsns.scatterplot(df, x=\"cycleNumber\", y=\"QCharge_mA_h\", ax=ax)\nplt.show()\n\n\nWith matplotlib\n# make 5 random data frames\ndataframes = []\nfor i in range(5):\n np.random.seed(i)\n random_df = pd.DataFrame({\n \"cycleNumber\": np.random.random(size=(100,)),\n \"QCharge_mA_h\": np.random.random(size=(100,)),\n })\n dataframes.append(random_df)\n\n# make len(dataframes) rows using matplotlib\nfig, axs = plt.subplots(nrows=len(dataframes))\nfor df, ax in zip(dataframes, axs):\n sns.scatterplot(df, x=\"cycleNumber\", y=\"QCharge_mA_h\", ax=ax)\n\nplt.show()\n\n\nWith seaborn\n# make 5 random data frames\ndataframes = []\nfor i in range(5):\n np.random.seed(i)\n random_df = pd.DataFrame({\n \"cycleNumber\": np.random.random(size=(100,)),\n \"QCharge_mA_h\": np.random.random(size=(100,)),\n })\n dataframes.append(random_df)\n\n# make len(dataframes) rows using matplotlib\n\n# concat dataframes\ndfs = pd.concat(dataframes, keys=range(len(dataframes)), names=[\"keys\"])\n\n# move keys to columns\ndfs = dfs.reset_index(level=\"keys\")\n\n# make grid and map scatterplot to each row\ngrid = sns.FacetGrid(data=dfs, row=\"keys\")\ngrid.map(sns.scatterplot, \"cycleNumber\", \"QCharge_mA_h\")\nplt.show()\n\n\nWith col_wrap=3\n# make 5 random data frames\ndataframes = []\nfor i in range(5):\n np.random.seed(i)\n random_df = pd.DataFrame({\n \"cycleNumber\": np.random.random(size=(100,)),\n \"QCharge_mA_h\": np.random.random(size=(100,)),\n })\n dataframes.append(random_df)\n\n# make len(dataframes) rows using matplotlib\n\n# concat dataframes\ndfs = pd.concat(dataframes, keys=range(len(dataframes)), names=[\"keys\"])\n\n# move keys to columns\ndfs = dfs.reset_index(level=\"keys\")\n\n# make grid and map scatterplot to each column, wrapping after 3\ngrid = sns.FacetGrid(data=dfs, col=\"keys\", col_wrap=3)\ngrid.map(sns.scatterplot, \"cycleNumber\", \"QCharge_mA_h\")\nplt.show()\n\n\n"
] |
[
1
] |
[] |
[] |
[
"matplotlib",
"pandas",
"python",
"seaborn"
] |
stackoverflow_0074477227_matplotlib_pandas_python_seaborn.txt
|
Q:
Fail to import Alpha_vantage.timesseries
EDIT: When I wrote this post I was a beginner on Stackoverflow and in programming generally. I don't remember how I solved this inquiry unfortunately. How can I close this post?
I am having trouble working with this specific module. At first, I had a problem importing alpha_vantage but I could install it with the following line: python3 -m pip install alpha_vantage.py (if I tried to install it like this: pip install alphavantage, that did not work).
So now it is working; however, I need to work with alpha_vantage.timeseries and it doesn't work. If I import the "timeseries" separately, it works but is not linked to the alpha_vantage?! So it doesn't work.
Do you know how I can make it work?
A:
In your example you import TimesSeries from alpha_vantage.timeseries.
Please note that you have an extra s in TimesSeries.
It should be TimeSeries and not TimesSeries.
Here is an example from their website
from alpha_vantage.timeseries import TimeSeries
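A short usage sketch with the corrected import (the API key is a placeholder you would replace with your own):
from alpha_vantage.timeseries import TimeSeries

ts = TimeSeries(key='YOUR_API_KEY')       # placeholder key
data, meta = ts.get_daily(symbol='MSFT')  # fetch daily prices for a sample ticker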
|
Fail to import Alpha_vantage.timesseries
|
EDIT: When I wrote this post I was a beginner on Stackoverflow and in programming generally. I don't remember how I solved this inquiry unfortunately. How can I close this post?
I am having trouble working with this specific module. At first, I had a problem importing alpha_vantage but I could install it with the following line: python3 -m pip install alpha_vantage.py (if I tried to install it like this: pip install alphavantage, that did not work).
So now it is working; however, I need to work with alpha_vantage.timeseries and it doesn't work. If I import the "timeseries" separately, it works but is not linked to the alpha_vantage?! So it doesn't work.
Do you know how I can make it work?
|
[
"In your example you import TimesSeries from alpha_vantage.timeseries.\nPlease note that you have an extra s in TimeSeries.\n\nIt should be TimeSeries and not TimesSeries\n\nHere is an example from their website\nfrom alpha_vantage.timeseries import TimeSeries\n\n"
] |
[
0
] |
[
"I just went to another direction and used other library.\nEDIT: When I wrote this post I was a beginner on Stackoverflow and in programming generally. I don't remember how I solved this inquiry unfortunately. How can I close this post?\n"
] |
[
-1
] |
[
"alpha_vantage",
"import_module",
"python"
] |
stackoverflow_0070448183_alpha_vantage_import_module_python.txt
|
Q:
Package install issue "error: legacy-install-failure" MacOS
I am getting the below error when trying to install wordcloud. I am using macOS 13.0.1 and Python 3.8.10.
Jesse-Burton@MacBook-Pro-4 ~ % pip3 install wordcloud
Collecting wordcloud
Using cached wordcloud-1.8.2.2.tar.gz (220 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: numpy>=1.6.1 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from wordcloud) (1.23.4)
Requirement already satisfied: pillow in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from wordcloud) (9.2.0)
Requirement already satisfied: matplotlib in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from wordcloud) (3.6.0)
Requirement already satisfied: python-dateutil>=2.7 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (2.8.2)
Requirement already satisfied: cycler>=0.10 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (0.11.0)
Requirement already satisfied: packaging>=20.0 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (21.3)
Requirement already satisfied: fonttools>=4.22.0 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (4.38.0)
Requirement already satisfied: contourpy>=1.0.1 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (1.0.5)
Requirement already satisfied: pyparsing>=2.2.1 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (3.0.9)
Requirement already satisfied: kiwisolver>=1.0.1 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (1.4.4)
Requirement already satisfied: six>=1.5 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from python-dateutil>=2.7->matplotlib->wordcloud) (1.16.0)
Installing collected packages: wordcloud
DEPRECATION: wordcloud is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for wordcloud ... error
error: subprocess-exited-with-error
× Running setup.py install for wordcloud did not run successfully.
│ exit code: 1
╰─> [26 lines of output]
running install
/Users/Jesse-Burton/.pyenv/versions/3.8.10/lib/python3.8/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build/lib.macosx-12.6-arm64-cpython-38
creating build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/wordcloud_cli.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/_version.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/__init__.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/tokenization.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/wordcloud.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/color_from_image.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/main.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/stopwords -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/DroidSansMono.ttf -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
UPDATING build/lib.macosx-12.6-arm64-cpython-38/wordcloud/_version.py
set build/lib.macosx-12.6-arm64-cpython-38/wordcloud/_version.py to '1.8.2.2'
running build_ext
building 'wordcloud.query_integral_image' extension
creating build/temp.macosx-12.6-arm64-cpython-38
creating build/temp.macosx-12.6-arm64-cpython-38/wordcloud
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Users/Jesse-Burton/.pyenv/versions/3.8.10/include/python3.8 -c wordcloud/query_integral_image.c -o build/temp.macosx-12.6-arm64-cpython-38/wordcloud/query_integral_image.o
xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> wordcloud
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
I have Matplotlib Pillow and Pandas all installed.
Any assistance would be greatly appreciated!
A:
So it turns out that I was getting this similar error on several packages, gensim being a core one.
I saw further up in the error message in the gensim install failure that it failed building the wheel and further down in that error message as well in this error message was this:
xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun
I was able to remedy this by installing the Xcode developer tools, by running xcode-select --install.
After doing that everything worked perfectly!
|
Package install issue "error: legacy-install-failure" MacOS
|
I am getting the below error when trying to install wordcloud. I am using macOS 13.0.1 and Python 3.8.10.
Jesse-Burton@MacBook-Pro-4 ~ % pip3 install wordcloud
Collecting wordcloud
Using cached wordcloud-1.8.2.2.tar.gz (220 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: numpy>=1.6.1 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from wordcloud) (1.23.4)
Requirement already satisfied: pillow in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from wordcloud) (9.2.0)
Requirement already satisfied: matplotlib in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from wordcloud) (3.6.0)
Requirement already satisfied: python-dateutil>=2.7 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (2.8.2)
Requirement already satisfied: cycler>=0.10 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (0.11.0)
Requirement already satisfied: packaging>=20.0 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (21.3)
Requirement already satisfied: fonttools>=4.22.0 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (4.38.0)
Requirement already satisfied: contourpy>=1.0.1 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (1.0.5)
Requirement already satisfied: pyparsing>=2.2.1 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (3.0.9)
Requirement already satisfied: kiwisolver>=1.0.1 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from matplotlib->wordcloud) (1.4.4)
Requirement already satisfied: six>=1.5 in ./.pyenv/versions/3.8.10/lib/python3.8/site-packages (from python-dateutil>=2.7->matplotlib->wordcloud) (1.16.0)
Installing collected packages: wordcloud
DEPRECATION: wordcloud is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for wordcloud ... error
error: subprocess-exited-with-error
× Running setup.py install for wordcloud did not run successfully.
│ exit code: 1
╰─> [26 lines of output]
running install
/Users/Jesse-Burton/.pyenv/versions/3.8.10/lib/python3.8/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build/lib.macosx-12.6-arm64-cpython-38
creating build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/wordcloud_cli.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/_version.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/__init__.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/tokenization.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/wordcloud.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/color_from_image.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/main.py -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/stopwords -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
copying wordcloud/DroidSansMono.ttf -> build/lib.macosx-12.6-arm64-cpython-38/wordcloud
UPDATING build/lib.macosx-12.6-arm64-cpython-38/wordcloud/_version.py
set build/lib.macosx-12.6-arm64-cpython-38/wordcloud/_version.py to '1.8.2.2'
running build_ext
building 'wordcloud.query_integral_image' extension
creating build/temp.macosx-12.6-arm64-cpython-38
creating build/temp.macosx-12.6-arm64-cpython-38/wordcloud
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Users/Jesse-Burton/.pyenv/versions/3.8.10/include/python3.8 -c wordcloud/query_integral_image.c -o build/temp.macosx-12.6-arm64-cpython-38/wordcloud/query_integral_image.o
xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> wordcloud
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
I have Matplotlib Pillow and Pandas all installed.
Any assistance would be greatly appreciated!
|
[
"So it turns out that I was getting this similar error on several packages, gensim being a core one.\nI saw further up in the error message in the gensim install failure that it failed building the wheel and further down in that error message as well in this error message was this:\n\nxcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun\n\nI was able to remedy this by installing xCode developer tools. by running xcode-select --install \nAfter doing that everything worked perfectly!\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_3.x",
"word_cloud"
] |
stackoverflow_0074461573_python_python_3.x_word_cloud.txt
|
Q:
Django postgres psycopg2: ImproperlyConfigure even though module installed
I am using Django for the first time but have used PostgreSQL previously. I am trying to follow the official Django tutorial to set it up with a database. I have followed everything but I get an error when using the command "python manage.py migrate" that psycopg2 is not found even though I have it installed.
Traceback (most recent call last):
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/backends/postgresql/base.py", line 24, in <module>
import psycopg2 as Database
ModuleNotFoundError: No module named 'psycopg2'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/alexanderverheecke/Documents/GitHub/mysite/manage.py", line 22, in <module>
main()
File "/Users/alexanderverheecke/Documents/GitHub/mysite/manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/core/management/__init__.py", line 420, in execute
django.setup()
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/apps/registry.py", line 116, in populate
app_config.import_models()
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/apps/config.py", line 269, in import_models
self.models_module = import_module(models_module_name)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/contrib/auth/models.py", line 3, in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/contrib/auth/base_user.py", line 49, in <module>
class AbstractBaseUser(models.Model):
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/models/base.py", line 141, in __new__
new_class.add_to_class("_meta", Options(meta, app_label))
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/models/base.py", line 369, in add_to_class
value.contribute_to_class(cls, name)
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/models/options.py", line 231, in contribute_to_class
self.db_table, connection.ops.max_name_length()
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/utils/connection.py", line 15, in __getattr__
return getattr(self._connections[self._alias], item)
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/utils/connection.py", line 62, in __getitem__
conn = self.create_connection(alias)
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/utils.py", line 193, in create_connection
backend = load_backend(db["ENGINE"])
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/utils.py", line 113, in load_backend
return import_module("%s.base" % backend_name)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/backends/postgresql/base.py", line 28, in <module>
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named 'psycopg2'
I am using the settings as mentioned in the Django tutorial:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'DatabaseName',
'USER': 'UserName',
'PASSWORD': 'Userpassword',
'HOST': '127.0.0.1',
'PORT': '5432',
}
}
I have confirmed that I have the necessary packages installed and in the same location:
pip freeze:
Django==4.1.3
psycopg2==2.9.5
psycopg2-binary==2.9.5
pip show Django/psycopg2/psycopg2-binary:
Django : Location: /opt/homebrew/lib/python3.10/site-packages
Psycopg2: Location: /opt/homebrew/lib/python3.10/site-packages
psycopg2-binary: Location: /opt/homebrew/lib/python3.10/site-packages
A:
It seems like you use the system Python for running your migrations. The error traceback contains the following Python binary path: "/Users/alexanderverheecke/Library/Python/3.9/...", however in the pip show command your Python path is "/opt/homebrew/lib/python3.10/".
In other words, the packages were installed into Homebrew's Python 3.10, but manage.py is being run with the Python 3.9 under ~/Library - two different interpreters.
Anyway, try some of these and run the migrate command again:
activate/deactivate the virtual environment if you use one
run pip install again with the interpreter you actually use
Make sure the command which python (for Linux/macOS) or gcm python (for Windows) returns the same Python path as pip show.
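A quick sketch to check from inside Python which interpreter is actually running manage.py (run it with the same python command you use for migrations):
import sys

print(sys.version)     # interpreter version, e.g. 3.9 vs 3.10
print(sys.executable)  # full path of the interpreter binary - compare with pip show's location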
|
Django postgres psycopg2: ImproperlyConfigure even though module installed
|
I am using Django for the first time but have used PostgreSQL previously. I am trying to follow the official Django tutorial to set it up with a database. I have followed everything but I get an error when using the command "python manage.py migrate" that psycopg2 is not found even though I have it installed.
Traceback (most recent call last):
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/backends/postgresql/base.py", line 24, in <module>
import psycopg2 as Database
ModuleNotFoundError: No module named 'psycopg2'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/alexanderverheecke/Documents/GitHub/mysite/manage.py", line 22, in <module>
main()
File "/Users/alexanderverheecke/Documents/GitHub/mysite/manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/core/management/__init__.py", line 420, in execute
django.setup()
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/apps/registry.py", line 116, in populate
app_config.import_models()
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/apps/config.py", line 269, in import_models
self.models_module = import_module(models_module_name)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/contrib/auth/models.py", line 3, in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/contrib/auth/base_user.py", line 49, in <module>
class AbstractBaseUser(models.Model):
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/models/base.py", line 141, in __new__
new_class.add_to_class("_meta", Options(meta, app_label))
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/models/base.py", line 369, in add_to_class
value.contribute_to_class(cls, name)
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/models/options.py", line 231, in contribute_to_class
self.db_table, connection.ops.max_name_length()
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/utils/connection.py", line 15, in __getattr__
return getattr(self._connections[self._alias], item)
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/utils/connection.py", line 62, in __getitem__
conn = self.create_connection(alias)
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/utils.py", line 193, in create_connection
backend = load_backend(db["ENGINE"])
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/utils.py", line 113, in load_backend
return import_module("%s.base" % backend_name)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/Users/alexanderverheecke/Library/Python/3.9/lib/python/site-packages/django/db/backends/postgresql/base.py", line 28, in <module>
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named 'psycopg2'
I am using the settings as mentioned in the Django tutorial:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'DatabaseName',
'USER': 'UserName',
'PASSWORD': 'Userpassword',
'HOST': '127.0.0.1',
'PORT': '5432',
}
}
I have confirmed that I have the necessary packages installed and in the same location:
pip freeze:
Django==4.1.3
psycopg2==2.9.5
psycopg2-binary==2.9.5
pip show Django/psycopg2/psycopg2-binary:
Django : Location: /opt/homebrew/lib/python3.10/site-packages
Psycopg2: Location: /opt/homebrew/lib/python3.10/site-packages
psycopg2-binary: Location: /opt/homebrew/lib/python3.10/site-packages
|
[
"It's seems like you use system python for running your migrations. Error traceback contains following path of python binary: \"/Users/alexanderverheecke/Library/Python/3.9/...\", however in pip show command your python path is \"/opt/homebrew/lib/python3.10/\".\nActually I don't understand how it's even possible, because first one looks like Windows path and the second one like Linux...\nAnyway. Try something of this and run migrate command again:\n\nactivate/deactivate virtual environment if you use it\nchange your OS to the correct one\nrun pip install again\n\nMake shure command which python(for Linux), gcm python (for Windows) returns the same python path as pip show.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"postgresql",
"psycopg2",
"python"
] |
stackoverflow_0074477149_django_postgresql_psycopg2_python.txt
|
Q:
Error when appending data to existing data frame to retrain a model
I am adding more data to my X_train data as well as to my y_train data in order to retrain my model with more data. I do this using pd.concat(). However, when I train my model using the concatenated dataset I get the following error:
/usr/local/lib/python3.7/dist-packages/sklearn/utils/validation.py:1692:
FutureWarning: Feature names only support names that are all strings. Got feature
names with dtypes: ['int', 'str']. An error will be raised in 1.2.
FutureWarning,
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-166-a11464987b97> in <module>
----> 1 model1_pool_preds = model1(LinearSVC(class_weight='balanced',
random_state=42), OneVsRestClassifier, X_train_init_new, y_train_init_new,
X_test_init, y_test_init, X_pool)
6 frames
/usr/local/lib/python3.7/dist-packages/pandas/core/generic.py in __array__(self,
dtype)
1991
1992 def __array__(self, dtype: NpDtype | None = None) -> np.ndarray:
-> 1993 return np.asarray(self._values, dtype=dtype)
1994
1995 def __array_wrap__(
ValueError: could not convert string to float:
I suppose this is happening because the data I added to the existing dataframe contains some strings instead of float numbers. How can I convert the entire dataset into float? my code is below:
y_train_init_new = pd.concat([y_train_init, X_pool_labeled.iloc[:, -7:]])
X_train_init_new = pd.concat([X_train_init, X_pool_labeled.iloc[:, 0:27446]])

def model1(model, classifier, X, y, X_test, y_test, X_pool):
    m = model
    clf = classifier(m)
    clf.fit(X, y)
    clf_predictions = clf.predict(X_test)
    C_report = classification_report(y_test, clf_predictions, zero_division=0)
    print(C_report)
    clf_roc_auc = roc_auc_score(y_test, clf_predictions, multi_class='ovr')
    print('AUC: ', clf_roc_auc)
    clf_predictions_pool = clf.predict(X_pool)
    return clf_predictions_pool

model1_pool_preds = model1(LinearSVC(class_weight='balanced', random_state=42),
                           OneVsRestClassifier, X_train_init, y_train_init,
                           X_test_init, y_test_init, X_pool)
How can I convert all the data of the concatenated dataset into float data?
A:
Given a data frame that is entirely strings but can be turned without errors into numbers, you can just call df.astype(float) on the whole lot.
>>> df = pd.DataFrame([str(i) for i in range(0, 1000)], columns=['x'])
>>> df
x
0 0
1 1
2 2
3 3
4 4
.. ...
995 995
996 996
997 997
998 998
999 999
[1000 rows x 1 columns]
>>> df.astype(float)
x
0 0.0
1 1.0
2 2.0
3 3.0
4 4.0
.. ...
995 995.0
996 996.0
997 997.0
998 998.0
999 999.0
[1000 rows x 1 columns]
This is more difficult if you have mixed non-numeric columns. Given that such columns can't be used anyway, just drop them and call astype(float) on the remainder.
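If only some columns are non-numeric, a hedged sketch using pd.to_numeric with errors='coerce' (the frame here is made up for illustration) converts what it can and then drops the columns that contain no numbers at all:
import pandas as pd

df = pd.DataFrame({'a': ['1', '2'], 'b': ['x', 'y']})      # hypothetical mixed frame
numeric = df.apply(pd.to_numeric, errors='coerce')         # unparseable strings become NaN
numeric = numeric.dropna(axis=1, how='all').astype(float)  # drop columns that are all NaN
print(numeric)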
Q:
assignments to list elements for a list created using the * operator not working as expected in Python
>>> m=[[-1]*2]*2
>>> n=[[-1,-1],[-1,-1]]
>>> m==n
True
>>> for i in range(2):
... m[i][i]=10
...
>>> m
[[10, 10], [10, 10]]
>>> for i in range(2):
... n[i][i]=10
...
>>> n
[[10, -1], [-1, 10]]
In the code block above, the assignment to the elements of n takes place as expected, but the assignment to elements of m is incorrect although both m and n before the assignment are equal, and the assignment takes place in the same manner. Can someone please clarify? Is this a bug in the usage of the * operator for the creation of the original list? This is Python 3.10.0.
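This is not a bug: the outer * copies a reference to the same inner list rather than the list itself, so m holds one inner list twice and each assignment shows up in both rows. A minimal check illustrating this:
m = [[-1] * 2] * 2
print(m[0] is m[1])   # True: both rows are the same list object

n = [[-1] * 2 for _ in range(2)]
print(n[0] is n[1])   # False: the comprehension builds a distinct list per row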
Q:
Having n (2048 bit number), how can I find two numbers p and q that satisfy n = p*q, where p = r||s (r and s concatenated) and q = s||r?
I'm using the RSA encryption/decryption system, and I have the modulus n (which is a 2048 bit number) and I need to find p and q, which satisfy n = p*q and both are prime numbers. The clue that is given to me is that p is equal to q but with its bits inverted as I say in the title of this post (concretely r and s have the same bits so we could say that p and q have their halves inverted). I don't find the way to take advantage of this so I would be very grateful if someone could help me
I have tried to traverse the number n to find the number p that satisfies that p * p_halfs_inverted = n but logically n is too huge and it is not viable to do it in this way.
A:
OK here's how you can solve this problem.
Start by representing p and q in terms of two k-bit numbers r and s as follows (for your example, k = 512):
p = 2^k·r + s
q = 2^k·s + r
The value of n is the product of these two numbers:
n = pq = (2^k·r + s)(2^k·s + r) = 2^(2k)·rs + 2^k·(r^2 + s^2) + rs
The first two terms on the right are both multiples of 2^k, so the k lowest bits of n are exactly equal to the k lowest bits of rs. Furthermore, since rs is typically a 2k-bit number and r^2 + s^2 is typically a (2k+1)-bit number, the k highest bits of n are also mostly equal to the k highest bits of rs, but perhaps slightly larger due to the carry generated when adding the 2^k·(r^2 + s^2) term.
If n◁ and n▷ are numbers representing the top k bits and bottom k bits of n, then we can generate a candidate value for rs by calculating 2^k·n◁ + n▷. If this value is correct, we can subtract (2^(2k) + 1)·rs from n to obtain the value of 2^k·(r^2 + s^2). Divide this result by 2^k and add 2rs to obtain r^2 + 2rs + s^2, then calculate the square root of this value to obtain the value of r + s. (If the number isn't a perfect square, you need to subtract 1 from n◁ and try again.)
After at most two iterations of this process, you will have the exact values of rs and r + s. You should then have no difficulty solving a simultaneous equation to obtain values for r and s, from which you can find p and q.
Note: You might find the sympy.sqrt() function useful for calculating square roots of large numbers. It returns objects with an is_integer attribute that will tell you if the number you provided was a perfect square.
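A minimal Python sketch of this recovery (an illustration under the stated assumptions: n is exactly 4k bits, and math.isqrt stands in for sympy.sqrt):
from math import isqrt

def recover_pq(n, k=512):
    # Sketch of the procedure above: n = p*q with p = 2^k*r + s and q = 2^k*s + r.
    mask = (1 << k) - 1
    n_lo = n & mask          # bottom k bits of n == bottom k bits of r*s
    n_hi = n >> (3 * k)      # top k bits of n (assumes n is exactly 4k bits)
    for borrow in range(2):  # at most one carry to undo
        rs = ((n_hi - borrow) << k) + n_lo      # candidate for r*s
        rest = n - ((1 << (2 * k)) + 1) * rs    # should equal 2^k * (r^2 + s^2)
        if rest < 0 or rest % (1 << k):
            continue
        square = (rest >> k) + 2 * rs           # r^2 + 2rs + s^2 = (r + s)^2
        r_plus_s = isqrt(square)
        if r_plus_s * r_plus_s != square:
            continue
        disc = square - 4 * rs                  # (r - s)^2
        if disc < 0:
            continue
        d = isqrt(disc)
        if d * d != disc:
            continue
        r, s = (r_plus_s + d) // 2, (r_plus_s - d) // 2
        return (r << k) + s, (s << k) + r       # p, q
    return None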
Q:
Pandas map returns column with NaN values
I have two dataframes. I am trying to map state postal codes from state_abbv_dict to the state column.
county_2015.head()
Year Month State County Rate min_wage
0 2015 February Mississippi Newton County 6.1 7.91
1 2015 February Mississippi Panola County 9.4 7.91
2 2015 February Mississippi Monroe County 7.9 7.91
3 2015 February Mississippi Hinds County 6.1 7.91
4 2015 February Mississippi Kemper County 10.6 7.91
state_abbv_dict
{'Postal Code': {'Alabama': 'AL',
'Alaska': 'AK',
'Arizona': 'AZ',
'Arkansas': 'AR',
'California': 'CA',
'Colorado': 'CO',
'Connecticut': 'CT',
'Delaware': 'DE',
'District of Columbia': 'DC',
'Florida': 'FL',
'Georgia': 'GA',
'Hawaii': 'HI',
'Idaho': 'ID',
'Illinois': 'IL',
'Indiana': 'IN',
'Iowa': 'IA',
'Kansas': 'KS',
'Kentucky': 'KY',
'Louisiana': 'LA',
'Maine': 'ME',
'Maryland': 'MD',
'Massachusetts': 'MA',
'Michigan': 'MI',
'Minnesota': 'MN',
'Mississippi': 'MS',
...
'Virginia': 'VA',
'Washington': 'WA',
'West Virginia': 'WV',
'Wisconsin': 'WI',
'Wyoming': 'WY'}}
county_2015['State'] = county_2015['State'].map(state_abbv_dict)
county_2015.tail()
Year Month State County Rate min_wage
2797 2015 February NaN Somerset County 8.4 8.18
2798 2015 February NaN Oxford County 6.8 8.18
2799 2015 February NaN Knox County 6.1 8.18
2800 2015 February NaN Piscataquis County 7.0 8.18
2801 2015 February NaN Aroostook County 7.2 8.18
A:
It looks like it's because the state names are nested one level down: state_abbv_dict has a single top-level key, 'Postal Code', whose value is the actual state-to-code mapping, so map() finds no matching keys and fills the column with NaN. You just need to pass the inner dict instead:
county_2015['State'] = county_2015['State'].map(state_abbv_dict['Postal Code'])
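A minimal illustration of the difference, with a trimmed-down version of the dict:
import pandas as pd

s = pd.Series(['Mississippi', 'Maine'])
nested = {'Postal Code': {'Mississippi': 'MS', 'Maine': 'ME'}}

print(s.map(nested))                  # NaN, NaN: the only key is 'Postal Code'
print(s.map(nested['Postal Code']))   # MS, ME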
Q:
Selenium - can't get the correct XPath using Chrome inspect elements - @id="layers" vs @id="react-root" - Python
Trying to get the correct XPATH for the username box for the Twitter login.
My (simplified) code is:
from selenium import webdriver
from selenium.webdriver import Keys
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
import time
driver_service = Service(executable_path=CHROME_DRIVER_PATH)
driver = webdriver.Chrome(service=driver_service)
url = "https://twitter.com/login"
driver.get(url)
user_name_box = driver.find_element(By.XPATH, value='//*[@id="react-root"]/div/div/div/main/div/div/div/div[2]/div[2]/div/div[5]/label/div/div[2]')
user_name_box.click()
Then, for whatever reason, Selenium can't find the element, and when I searched for the solution, the correct XPATH is '//*[@id="layers"]/div/div/div/div/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div/div/div/div[5]/label/div/div[2]/div/input'
I suspect with my limited knowledge that this has something to do with React and the fact that the Twitter login box layers on top... but how do I access the correct XPath? Is there any way to select the correct XPath using Chrome? Or another tool?
Thanks very much. This is my first post on StackOverflow so be gentle :)
I was expecting that I could grab the correct XPath and then of course got an error. Googled the correct XPath, found it, but would like to know what's happening and how to grab the correct XPaths if elements are sitting in layers.
A:
A manual procedure using xml2xpath can be used to show all possible XPath expressions from an HTML/XML source.
Saving the page source from the browser or the Outer Html from dev console to a file and passing a starting XPath expression:
xml2xpath.sh -s '//*[@id="react-root"]' -l tmp.html
Result using Outer Html
XPath expressions found: 99 (51 unique elements, use -r to override)
//*[@id="react-root"]
//*[@id="react-root"]/div
//*[@id="react-root"]/div/div
//*[@id="react-root"]/div/div/div
... [redacted]
//*[@id="react-root"]/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/label/div
//*[@id="react-root"]/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/label/div/div
//*[@id="react-root"]/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/label/div/div/div
//*[@id="react-root"]/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/label/div/div/div/span
//*[@id="react-root"]/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/label/div/div/div/input
//*[@id="react-root"]/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/span
//*[@id="react-root"]/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/span/span
//*[@id="react-root"]/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/div/span/span/span
//*[@id="react-root"]/div/div/div/main
//*[@id="react-root"]/div/div/div/main/div
//*[@id="react-root"]/div/div/div/main/div/div
Testing a specific XPath expression to be used in Selenium
xml2xpath.sh -s '//*[@id="react-root"]//input[@autocomplete="username"]' -l tmp.html
Result (meaning expression matched elements)
XPath expressions found: 1 (1 unique elements, use -r to override)
//*[@id="react-root"]//input[@autocomplete="username"]
xml2xpath is a bash script wrapper around the xmllint XML tool.
If the -a option is passed, absolute XPaths are returned, similar to what the browser does:
/html/body/div
/html/body/div/@id
/html/body/div/@style
/html/body/div/div
/html/body/div/div/@class
/html/body/div/div/div
/html/body/div/div/div/@class
/html/body/div/div/div/style
/html/body/div/div/div/div[1]/@aria-label
/html/body/div/div/div/div[1]/@class
/html/body/div/div/div/div[1]/@id
/html/body/div/div/div/div[2]/@class
/html/body/div/div/div/div[2]/@id
/html/body/div/div/div/div[1]/svg
/html/body/div/div/div/div[1]/svg/@viewbox
/html/body/div/div/div/div[1]/svg/@aria-hidden
/html/body/div/div/div/div[1]/svg/@class
/html/body/div/div/div/div[1]/svg/g
/html/body/div/div/div/div[1]/svg/g/path
/html/body/div/div/div/div[1]/svg/g/path/@d
/html/body/div/div/div/div[1]
/html/body/div/div/div/div[2]
/html/body/div/div/div/div[2]/form
/html/body/div/div/div/div[2]/form/@action
/html/body/div/div/div/div[2]/form/@method
/html/body/div/div/div/div[2]/form/div
/html/body/div/div/div/div[2]/form/div/@class
/html/body/div/div/div/div[2]/form/div/div
/html/body/div/div/div/div[2]/form/div/div/@dir
/html/body/div/div/div/div[2]/form/div/div/@class
/html/body/div/div/div/div[2]/form/div/div/@style
/html/body/div/div/div/div[2]/form/div/div/span
/html/body/div/div/div/div[2]/form/div/div/span/@class
/html/body/div/div/div/div[2]/form/div/br
/html/body/div/div/div/div[2]/form/div/input[1]
/html/body/div/div/div/div[2]/form/div/input[2]
/html/body/div/div/div/div[2]/form/div/input[1]/@type
/html/body/div/div/div/div[2]/form/div/input[1]/@name
/html/body/div/div/div/div[2]/form/div/input[1]/@value
/html/body/div/div/div/div[2]/form/div/input[2]/@type
/html/body/div/div/div/div[2]/form/div/input[2]/@value
NOTE: if the source is an HTML fragment, /html/body is added by xmllint, in what seems to be a bug.
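Back in Selenium, the short relative expression verified above can replace the brittle absolute path (a sketch reusing the question's driver; the autocomplete attribute comes from the xml2xpath output):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait for the username input instead of relying on a long positional path.
wait = WebDriverWait(driver, 10)
user_name_box = wait.until(
    EC.element_to_be_clickable((By.XPATH, '//input[@autocomplete="username"]'))
)
user_name_box.click()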
Q:
How to make a non-overriding method stub in Python multi-inheritance?
Imagine that you have 2 mixin classes that each define abstract methods and implementations. Together they implement every method, but depending on the inheritance order, the empty stubs will overwrite the implementation from the other class. There are at least two ways to overcome this in most situations, but I don't really like either.
One could remove the abstract methods and just rely on duck typing, but then there is no clear interface definition and type hinting.
One could try to break down the classes into smaller ones to get a straight line of dependency and force a specific inheritance order, but that's not always practical.
Is there a way to, for example, mark a method virtual, which prevents it from actually being added to the class, or at least prevents it from overriding an existing method of the same name?
Is there another solution I didn't think of?
Simple example:
class MixinA:
    def high_level(self):
        self.mid_level()

    def low_level(self):
        ...

    def mid_level(self):
        raise NotImplementedError


class MixinB:
    def mid_level(self):
        self.low_level()

    def low_level(self):
        raise NotImplementedError


class ChildA(MixinA, MixinB):
    pass


class ChildB(MixinB, MixinA):
    pass


for cls in (ChildA, ChildB):
    try:
        cls().high_level()
        print("success")
    except NotImplementedError:
        print("error")
A:
OK, this is what an ABC-based implementation might look like.
You just have to mix and match the Mixins to achieve what you want. The Mixins only implement what they are actually providing.
mypy will flag errors during type-checking
abc will also throw errors about missing methods at runtime
from abc import ABC, abstractmethod

class AbstractLevel(ABC):

    @abstractmethod
    def high_level(self):
        ...

    @abstractmethod
    def low_level(self):
        ...

    @abstractmethod
    def mid_level(self):
        ...

class MixinHighMid:

    def high_level(self):
        self.mid_level()

class MixinMid:

    def mid_level(self):
        print(f"  MixinMid.mid_level")

class ConcreteLow(MixinHighMid, MixinMid, AbstractLevel):
    def low_level(self):
        print(f"  {self}.low_level")

class BadConcreteLow(MixinHighMid, AbstractLevel):

    def low_level(self):
        print(f"{self}.low_level")


for cls in (ConcreteLow, BadConcreteLow):

    good = ConcreteLow()  # happy mypy
    try:
        bad = BadConcreteLow()  # sad mypy
    except (TypeError,) as e:
        pass

    try:
        print(cls.__name__)
        inst = cls()  # sad mypy because it knows BadConcreteLow will blow up
        print(f"  calling high_level")
        inst.high_level()
    except (TypeError,) as e:
        print(f"error on {cls.__name__}: {e}")
runtime output:
ConcreteLow
calling high_level
MixinMid.mid_level
BadConcreteLow
error on BadConcreteLow: Can't instantiate abstract class BadConcreteLow with abstract method mid_level
and mypy output:
test_418_mixindecl.py:41: error: Cannot instantiate abstract class "BadConcreteLow" with abstract attribute "mid_level"
test_418_mixindecl.py:49: error: Cannot instantiate abstract class "AbstractLevel" with abstract attributes "high_level", "low_level" and "mid_level"
Found 2 errors in 1 file (checked 1 source file)
To amuse myself, I also added the following Mixin to mangle method signatures:
class BadMixinMid:

    def mid_level(self, v: int):
        print(f"  MixinMid.mid_level")

class BadConcreteMid(MixinHighMid, BadMixinMid, AbstractLevel):

    def low_level(self):
        print(f"{self}.low_level")
mypy caught that on type-checking while the runtime only blew up on the method call itself.
test_418_mixindecl.py:41: error: Definition of "mid_level" in base class "BadMixinMid" is incompatible with definition in base class "AbstractLevel"
A:
I found a solution that satisfies my requirements, that is:
it describes an interface which the inheriting class needs to implement
does not override existing functions with abstract functions
is typing compatible
Basically, I just defined the abstract functions in a separate protocol and marked the self var as that protocol.
from typing import Protocol


class InterfaceForA(Protocol):
    def mid_level(self): ...


class MixinA:
    def high_level(self: InterfaceForA):
        self.mid_level()

    def low_level(self):
        print("success")


class InterfaceForB(Protocol):
    def low_level(self): ...


class MixinB:
    def mid_level(self: InterfaceForB):
        self.low_level()


class ChildA(MixinA, MixinB):
    pass


class ChildB(MixinB, MixinA):
    pass


for cls in ChildA, ChildB:
    try:
        cls().high_level()
    except:
        print("error")
With this code, I get success printed twice. In addition, the type hints let me know if the method implementations in the mixin are incompatible with the requirements defined in the protocols.
Q:
Get Binary Representation of PIL Image Without Saving
I am writing an application that uses images intensively. It is composed of two parts. The client part is written in Python. It does some preprocessing on images and sends them over TCP to a Node.js server.
After preprocessing, the Image object looks like this:
window = img.crop((x,y,width+x,height+y))
window = window.resize((48,48),Image.ANTIALIAS)
To send that over socket, I have to have it in binary format. What I am doing now is:
window.save("window.jpg")
infile = open("window.jpg","rb")
encodedWindow = base64.b64encode(infile.read())
#Then send encodedWindow
This is a huge overhead, though, since I am saving the image to the hard disk first, then loading it again to obtain the binary format. This is causing my application to be extremely slow.
I read the documentation of PIL Image, but found nothing useful there.
A:
According to the documentation, (at effbot.org):
"You can use a file object instead of a filename. In this case, you must always specify the format. The file object must implement the seek, tell, and write methods, and be opened in binary mode."
This means you can pass a StringIO object. Write to it and get the size without ever hitting the disk.
Like this:
s = StringIO.StringIO()
window.save(s, "JPEG")  # the format name must be "JPEG", not "jpg"
encodedWindow = base64.b64encode(s.getvalue())
A:
use BytesIO
from io import BytesIO
from PIL import Image
photo=Image.open('photo.jpg')
s=BytesIO()
photo.save(s,'jpeg')
data = s.getvalue()
with open('photo2.jpg', mode='wb') as f:
    f.write(data)
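Putting the two together for the question's use case (a sketch; window is the PIL image from the question, and on Python 3 io.BytesIO replaces StringIO):
import base64
from io import BytesIO

buf = BytesIO()
window.save(buf, format='JPEG')   # encode in memory, no disk I/O
encodedWindow = base64.b64encode(buf.getvalue())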
Q:
ElementNotVisibleException: Message: element not interactable in Robot Framework
Example code:
<div class="modal-footer">
<button type="button" class="btn btn-primary btn-block" data-modal="AlertSubmitApproval" id="btn_close_modal">ตกลง</button>
</div>
I try to click the button id="btn_close_modal", but it seems the button is not visible, and the robot responds with ElementNotVisibleException: Message: element not interactable, in spite of the fact that I am able to click it manually.
My robot code:
Request approve
Selenium2Library.Click Element &{Landing}[reqApprove]
Sleep 2s
Selenium2Library.Click Element &{Landing}[cofReq]
Sleep 2s
Selenium2Library.Wait Until Page Contains Element id=btn_close_modal timeout=20s
Sleep 3s
Selenium2Library.Click Element id=btn_close_modal
How can I click the button id=btn_close_modal? Could anyone please help?
A:
The desired element is within a Modal Dialog Box, so you need to induce a WebDriverWait for the element to be visible/enabled. You can use either or both (combined) of the following solutions:
Wait Until Element Is Visible:
Request approve
Selenium2Library.Click Element &{Landing}[reqApprove]
Sleep 2s
Selenium2Library.Click Element &{Landing}[cofReq]
Sleep 2s
Selenium2Library.Wait Until Element Is Visible xpath=//button[@class="btn btn-primary btn-block" and @id="btn_close_modal"] timeout=20s
Sleep 3s
Selenium2Library.Click Element xpath=//button[@class="btn btn-primary btn-block" and @id="btn_close_modal"]
Wait Until Element Is Enabled:
Request approve
Selenium2Library.Click Element &{Landing}[reqApprove]
Sleep 2s
Selenium2Library.Click Element &{Landing}[cofReq]
Sleep 2s
Selenium2Library.Wait Until Element Is Enabled xpath=//button[@class="btn btn-primary btn-block" and @id="btn_close_modal"] timeout=20s
Sleep 3s
Selenium2Library.Click Element xpath=//button[@class="btn btn-primary btn-block" and @id="btn_close_modal"]
You can find a detailed discussion about Wait Until Element Is Visible and Wait Until Element Is Enabled in Robotframework: Selenium2Lib: Wait Until (…) Keywords
Reference: Selenium2Library
A:
Try using this:
Click Element //button[@id='btn_close_modal']
or try using the JavaScript executor:
Wait Until Page Does Not Contain NOBODY SELECTED
Execute JavaScript $("#btn_close_modal").click();
A:
Try this if you get an element not interactable error in the Robot Framework Selenium setup for a button, checkbox, etc.:
Execute JavaScript $("#btn_close_modal").click();
A:
Try to use Press keys instead of Click Element.
Press Keys locator_here ENTER
or Press Keys locator_here SPACE
Q:
sklearn cross_val_score() returns NaN values
I'm trying to predict the next customer purchase for my job. I followed a guide, but when I tried to use the cross_val_score() function, it returned NaN values. [Google Colab notebook screenshot]
Variables:
X_train is a dataframe
X_test is a dataframe
y_train is a list
y_test is a list
Code:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=50)
X_train = X_train.reset_index(drop=True)
X_train
X_test = X_test.reset_index(drop=True)
y_train = y_train.astype('float')
y_test = y_test.astype('float')
models = []
models.append(("LR",LogisticRegression()))
models.append(("NB",GaussianNB()))
models.append(("RF",RandomForestClassifier()))
models.append(("SVC",SVC()))
models.append(("Dtree",DecisionTreeClassifier()))
models.append(("XGB",xgb.XGBClassifier()))
models.append(("KNN",KNeighborsClassifier()))´
for name,model in models:
kfold = KFold(n_splits=2, random_state=22)
cv_result = cross_val_score(model,X_train,y_train, cv = kfold,scoring = "accuracy")
print(name, cv_result)
>>
LR [nan nan]
NB [nan nan]
RF [nan nan]
SVC [nan nan]
Dtree [nan nan]
XGB [nan nan]
KNN [nan nan]
help me please!
A:
My case is a bit different. I was using cross_validate instead of cross_val_score with a list of performance metrics. Doing a 5 fold CV, I kept getting NaNs for all performance metrics for a RandomForestRegressor:
scorers = ['neg_mean_absolute_error', 'neg_root_mean_squared_error', 'r2', 'accuracy']
results = cross_validate(forest, X, y, cv=5, scoring=scorers, return_estimator=True)
results
Turns out, I stupidly included the 'accuracy' metric which is only used in classification. Instead of throwing an error, it looks like sklearn just returns NaNs for such cases
A:
I fixed the issue on my side. I was using a custom metric (Area Under Curve Precision-Recall (AUCPR))
def pr_auc_score(y, y_pred, **kwargs):
    classes = list(range(y_pred.shape[1]))
    if len(classes) == 2:
        precision, recall, _ = precision_recall_curve(y, y_pred[:, 1],
                                                      **kwargs)
    else:
        Y = label_binarize(y, classes=classes)
        precision, recall, _ = precision_recall_curve(Y.ravel(), y_pred.ravel(),
                                                      **kwargs)
    return auc(recall, precision)
The problem is, for a binary problem, y_pred contains only the predicted probability of the label 1, so y_pred's shape is (n_sample,).
When I try to call y_pred.shape[1], it raises an error.
The solution: inside cross_validate, use the parameter error_score="raise". This will allow you to detect the error.
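The same parameter exists on cross_val_score, so the loop in the question can surface the underlying exception instead of silently returning NaN (a sketch using the question's model, kfold and training data):
cv_result = cross_val_score(model, X_train, y_train, cv=kfold,
                            scoring="accuracy", error_score="raise")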
A:
Well, thanks everyone for your answers. The answer from Anna helped me a lot! But I didn't use X_train.values; instead I assigned a unique ID to the customers, then dropped the Customers column, and it works!
Now the models has this output :)
LR [0.73958333 0.74736842]
NB [0.60416667 0.71578947]
RF [0.80208333 0.82105263]
SVC [0.79166667 0.77894737]
Dtree [0.82291667 0.83157895]
XGB [0.85416667 0.85263158]
KNN [0.79166667 0.75789474]
A:
For my case, I had a time delta data type inside my numpy array that resulted in the error
A:
I came face to face with that problem. I solved it this way: I converted X_train and y_train to DataFrames.
cross_val_score(model,X_train,y_train, cv = kfold,scoring = "accuracy")
A:
I know this is answered already, but for others who still cannot figure out the problem, this is for you...
Check whether your y data type is int or not. It will return NaN if the data type of your y values is object.
How to check
y.dtype
How to change the data type
y = y.astype(int)
A:
Try doing encoding of categorical columns before passing to cross_val_score. It worked for me.
Q:
New dataframe in Pandas based on specific values(a lot of them) from existing df
Good evening! I'm using pandas on Jupyter Notebook. I have a huge dataframe representing the full history of posts of 26 channels in a messenger. It has a column "dialog_id" which represents in which dialog the message was sent (so there can be only 26 unique values in the column, but there are more than 700k rows, and the df is sorted by time, not id, so it is kinda chaotic). I have to split this dataframe into 2 different ones (one will contain the full history of 13 channels, and the other will contain the history of the remaining 13 channels). I know the ids by which I have to split, and they are random as well. For example, one is -1001232032465 and the other is -1001153765346.
The question is, how do I do it most elegantly and adequate?
I know I can do it somehow with df.loc[], but I don't want to put like 13 rows of df.loc[]. I've tried to use logical operators for this, like:
df1.loc[(df["dialog_id"] == '-1001708255880') & (df["dialog_id"] == '-1001645788710' )], but it doesn't work. I suppose I'm using them wrong. I expect a solution with any method creating a new df, with the use of logical operators. In verbal expression, I think it should sound like "put the row in a new df if the dialog_id is x, or dialog_id is y, or dialog_id is z, etc". Please help me!
A:
The easiest way seems to be just setting up a query.
df = pd.DataFrame(dict(col_id=[1,2,3,4,], other=[5,6,7,8,]))
channel_groupA = [1,2]
channel_groupB = [3,4]
df_groupA = df.query(f'col_id == {channel_groupA}')
df_groupB = df.query(f'col_id == {channel_groupB}')
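An equivalent without query uses Series.isin to build a boolean mask. Note that the & attempt from the question fails because a single dialog_id can never equal two different values at once; membership is what's needed. A sketch with two placeholder ids from the question:
group_a_ids = ['-1001708255880', '-1001645788710']   # extend to all 13 ids
df1 = df[df['dialog_id'].isin(group_a_ids)]
df2 = df[~df['dialog_id'].isin(group_a_ids)]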
Q:
Trick_winner Function
I have created a class method called trick_winner(self) within the class Cards which takes the value within self.trick1, for example self.trick1 = ('AH' 'JH' 'KH' '2H'), and returns the cards in order from greatest to least, where 'A' is the highest value followed by '7', 'J', 'K', 'Q', '6', '5', '4', '3', '2'. But when I use the built-in sorted function, it returns the values not as pairs; it treats each character as its own separate value.
I have tried to use the built-in sort function, but it does not come out the way I want it to. I am expecting that if I type in a = Cards('AH' '4H' 'KH' '2H') and run the class method, it will return the pairs in order from greatest to least: 'AH' 'KH' '4H' '2H'.
I have created the function
class Cards:
    def __init__(self, trick):
        self.trick1 = trick

    def trick_winner(self):
        R = {'2': 0, '3': 0, '4': 0, '5': 0, '6': 0,
             'J': 4, 'Q': 3, 'K': 5, '7': 10, 'A': 11}
        self.trick1 = self.trick1.upper()
        a = sorted(self.trick1)
        print(a)
and running the function:
c = Cards('7H' ' JH' ' KH' ' 2H')
c.trick_winner()
the outcome was:
[' ', ' ', ' ', '2', '7', 'H', 'H', 'H', 'H', 'J', 'K']
A:
You should create a class for a single card and implement the order. Look here:
R = {"2": 0, "3": 0, "4": 0, "5": 0, "6": 0, "J": 4, "Q": 3, "K": 5, "7": 10, "A": 11}

class Card:
    def __init__(self, color, value):
        self.color = color
        self.value = value

    def __lt__(self, other):
        return R[self.value] < R[other.value]

    # Only for printing purposes, you shouldn't use that in production
    def __repr__(self):
        return str(self.value)

c1 = Card("H", "A")
c2 = Card("H", "K")
c3 = Card("H", "7")
c4 = Card("H", "4")
print(sorted([c1, c2, c3, c4], reverse=True))
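For the four cards above, this prints [A, 7, K, 4]; note that 7 outranks K under the R mapping taken from the question.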
A:
For a bit more structured answer, I created Card and Cards classes, which keeps the design clean.
#Define R at the beginning of the script so it is globally accessible
R = {'2': 0, '3': 0, '4': 0, '5': 0, '6': 0,
     'J': 4, 'Q': 3, 'K': 5, '7': 10, 'A': 11}

class Card:
    def __init__(self, card):
        self.card = card

    # __gt__ is consulted when cards are compared; the comparison is
    # inverted on purpose so that a plain sorted() call works greatest-first
    def __gt__(self, other):
        if isinstance(other, Card):
            return R[other.card] > R[self.card]

    # these two are for printing results
    def __str__(self):
        return self.card

    def __repr__(self):
        return str(self.card)


class Cards:
    # you can add many cards into the Cards class
    def __init__(self, *cards):
        self.cards = cards

    def trick_winner(self):
        a = sorted(self.cards)
        print(a)
Here we create several Card objects and pass them into the Cards class. Now you can easily manipulate or use them through Cards:
c = Cards(Card('A'),Card('4'),Card('K'),Card('2'))
c.trick_winner()
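Note that this relies on the inverted __gt__: because Card defines no __lt__, sorted() falls back to the reflected __gt__, so the ascending sort actually comes out in descending rank, e.g. [A, K, 4, 2] for the call above.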
|
Trick_winner Function
|
I have created a class method called trick_winner(self) within the class Cards which takes the value within self.trick1, for example self.trick1 = ('AH' 'JH' 'KH' '2H'), and returns the cards in order from greatest to least, where 'A' is the highest value followed by '7', 'J', 'K', 'Q', '6', '5', '4', '3', '2'. But when I use the built-in sorted function, it returns the values not as pairs; it treats each character as its own separate value.
I have tried to use the built-in sort function, but it does not come out the way I want it to. I am expecting that if I type in a = Cards('AH' '4H' 'KH' '2H') and run the class method, it will return the pairs in order from greatest to least: 'AH' 'KH' '4H' '2H'.
I have created the function
class Cards:
    def __init__(self, trick):
        self.trick1 = trick

    def trick_winner(self):
        R = {'2': 0, '3': 0, '4': 0, '5': 0, '6': 0,
             'J': 4, 'Q': 3, 'K': 5, '7': 10, 'A': 11}
        self.trick1 = self.trick1.upper()
        a = sorted(self.trick1)
        print(a)
and running the function:
c = Cards('7H' ' JH' ' KH' ' 2H')
c.trick_winner()
the outcome was:
[' ', ' ', ' ', '2', '7', 'H', 'H', 'H', 'H', 'J', 'K']
|
[
"You should create a class for a single card and implement the order. Look here:\nR = {\"2\": 0, \"3\": 0, \"4\": 0, \"5\": 0, \"6\": 0, \"J\": 4, \"Q\": 3, \"K\": 5, \"7\": 10, \"A\": 11}\n\nclass Card:\n def __init__(self, color, value):\n self.color = color\n self.value = value\n\n def __lt__(self, other):\n return R[self.value] < R[other.value]\n\n # Only for printing purposes, you shouldn't use that in production\n def __repr__(self):\n return str(self.value)\n\nc1 = Card(\"H\", \"A\")\nc2 = Card(\"H\", \"K\")\nc3 = Card(\"H\", \"7\")\nc4 = Card(\"H\", \"4\")\nprint(sorted([c1, c2, c3, c4], reverse=True))\n\n",
"For a bit more structured answer, I created card and cards class which makes sense.\n#Define R at the beginning of the script so it would be globally accesible\nR = {'2': 0, '3': 0, '4': 0, '5': 0, '6': 0,\n'J': 4, 'Q': 3, 'K': 5, '7': 10, 'A': 11}\n\nclass Card:\n def __init__(self,card):\n self.card = card\n\n #this function used when checking greater than symbol\n #so that sorted method will work properly\n def __gt__(self,other):\n if(isinstance(other,Card)):\n return R[other.card] > R[self.card]\n #these two are for printing results.\n def __str__(self):\n return self.card\n\n def __repr__(self):\n return str(self.card)\n\n\nclass Cards:\n #you can add many card into cards class\n def __init__(self, *cards):\n self.cards = cards\n\n def trick_winner(self):\n \n a = sorted(self.cards)\n print(a)\n\nHere create many card and send them into cards class. Now you can easily manipulate or use them with Cards class\nc = Cards(Card('A'),Card('4'),Card('K'),Card('2'))\nc.trick_winner()\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074477641_python.txt
|
Q:
I am trying to run this code that asks the user to enter a sentence, then displays the number of vowels and consonants in the sentence
I am getting syntax errors when trying to run it, or sometimes it runs but does not execute the way I intend it to.
I have been playing around with the formatting but still no solution.
def checkVowelsConsonants(s):
    vowels = 0
    consonants = 0
    for ch in s:
        # convert character into its ASCII equivalent
        ascii_value = ord(ch)
        # if ASCII is between 65 to 90 or 97 to 122 then it's a letter,
        # otherwise a special character
        if (ascii_value >= 65 and ascii_value <= 90) or (ascii_value >= 97 and ascii_value <= 122):
            # check for lower case
            if ch == 'a' or ch == 'e' or ch == 'i' or ch == 'o' or ch == 'u':
                vowels = vowels + 1
            # check for upper case
            elif ch == 'A' or ch == 'E' or ch == 'I' or ch == 'O' or ch == 'U':
                vowels = vowels + 1
            else:
                consonants = consonants + 1
    # print the result
    print("The number of vowels is " + str(vowels) + " and consonants is " + str(consonants))

while True:
    # print the menu
    print("1. Print the number of vowels and consonants")
    print("2. Exit the program")
    # take choice as input from user
    choice = int(input("Enter choice: "))
    # take sentence input from user
    if choice == 1:
        sentence = input("Enter a sentence: ")
        sentence_list = []
        for ch in sentence:
            sentence_list.append(ch)
        checkVowelsConsonants(sentence_list)
    # exit the program
    if choice == 2:
        break
    # choice other than 1 or 2
    else:
        print("Invalid choice!")
A:
I've cleaned up your code, checking ASCII values with chained comparisons such as 65 <= ascii_value <= 90.
Since you want to check lowercase and uppercase vowels, I merged that into one if condition checking ch in "aeiouAEIOU", which makes every other valid character a consonant, lowercase or uppercase.
It is also not necessary to convert your user input sentence to a list prior to your checkVowelsConsonants call, as a string is also iterable.
That is why I removed that for loop appending each individual character.
But if that was intended by you, just leave as is.
By following your code I indented it so that your while loop only terminates when choice is 2.
def checkVowelsConsonants(s):
    vowels = 0
    consonants = 0
    for ch in s:
        # Convert character into its ASCII equivalent
        ascii_value = ord(ch)
        # If ASCII is between 65 to 90 or 97 to 122 then it's a character,
        # otherwise a special character
        if (65 <= ascii_value <= 90) or (97 <= ascii_value <= 122):
            # Check for lower case and upper case vowels
            if ch in "aeiouAEIOU":
                vowels = vowels + 1
            else:
                consonants = consonants + 1
    # Print the result
    print("The number of vowels is " + str(vowels) + " and consonants is " + str(consonants))

while True:
    # Print the menu
    print("1. Print the number of vowels and consonants")
    print("2. Exit the program")
    # Take choice as input from user
    choice = int(input("Enter choice: "))
    # Take sentence input from user
    if choice == 1:
        sentence = input("Enter a sentence: ")
        checkVowelsConsonants(sentence)
    # Exit the program
    elif choice == 2:
        break
    # choice other than 1 or 2
    else:
        print("Invalid choice!")
A:
To add:
The original program doesn't end even after choice 1 has been handled. So here I have rewritten the code and added a break once the sentence has been processed. I have also used string-formatted text to keep things a bit cleaner. Here below:
def checkVowelsConsonants(s):
    vowels = 0
    consonants = 0
    for ch in s:
        ascii_value = ord(ch)
        if (65 <= ascii_value <= 90) or (97 <= ascii_value <= 122):
            # check for lower case
            if ch == 'a' or ch == 'e' or ch == 'i' or ch == 'o' or ch == 'u':
                vowels = vowels + 1
            # check for upper case
            elif ch == 'A' or ch == 'E' or ch == 'I' or ch == 'O' or ch == 'U':
                vowels += 1
            else:
                consonants += 1
    # print the result
    print("The number of vowels is {} and consonants is {}".format(vowels, consonants))

while True:
    # print the menu
    print("Choices:")
    print("1: Print the number of vowels and consonants")
    print("2: Exit the program\n")
    # take choice as input from user
    choice = int(input("Enter choice: \n"))
    # take sentence input from user
    if choice == 1:
        sentence = input("Enter a sentence: \n")
        checkVowelsConsonants(sentence)
        break
    # exit the program
    elif choice == 2:
        print('Program exited.')
        break
    # choice other than 1 or 2
    else:
        print("Invalid choice!")
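As an aside, a more compact variant of the same count that skips the raw ASCII arithmetic (a sketch, not part of either answer; note that isalpha() also accepts non-ASCII letters, which the ASCII range check above would reject):
def checkVowelsConsonants(s):
    # count every vowel, then every remaining alphabetic character
    vowels = sum(1 for ch in s if ch in "aeiouAEIOU")
    consonants = sum(1 for ch in s if ch.isalpha() and ch not in "aeiouAEIOU")
    print("The number of vowels is {} and consonants is {}".format(vowels, consonants))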
|
I am trying to run this code that asks the user to enter a sentence, then displays the number of vowels and consonants in the sentence
|
I am getting syntax errors when trying to run it, or sometimes it runs but does not execute the way I intend it to.
I have been playing around with the formatting but still no solution.
def checkVowelsConsonants(s):
    vowels = 0
    consonants = 0
    for ch in s:
        # convert character into its ASCII equivalent
        ascii_value = ord(ch)
        # if ASCII is between 65 to 90 or 97 to 122 then it's a letter,
        # otherwise a special character
        if (ascii_value >= 65 and ascii_value <= 90) or (ascii_value >= 97 and ascii_value <= 122):
            # check for lower case
            if ch == 'a' or ch == 'e' or ch == 'i' or ch == 'o' or ch == 'u':
                vowels = vowels + 1
            # check for upper case
            elif ch == 'A' or ch == 'E' or ch == 'I' or ch == 'O' or ch == 'U':
                vowels = vowels + 1
            else:
                consonants = consonants + 1
    # print the result
    print("The number of vowels is " + str(vowels) + " and consonants is " + str(consonants))

while True:
    # print the menu
    print("1. Print the number of vowels and consonants")
    print("2. Exit the program")
    # take choice as input from user
    choice = int(input("Enter choice: "))
    # take sentence input from user
    if choice == 1:
        sentence = input("Enter a sentence: ")
        sentence_list = []
        for ch in sentence:
            sentence_list.append(ch)
        checkVowelsConsonants(sentence_list)
    # exit the program
    if choice == 2:
        break
    # choice other than 1 or 2
    else:
        print("Invalid choice!")
|
[
"I've cleaned up your code, checking ascii values with 65 <= ascii_value <= 90.\nAs you want to check lowercase and upper case vowels I made this one if condition by checking if ch in \"aeiouAEIOU\" making all other valid characters lowercase or uppercase consonants.\nIt is also not necessary to convert your user input sentence to a list prior to your checkVowelsConsonants call, as a string is also iterable.\nThat is why I removed that for loop appending each individual character.\nBut if that was intended by you, just leave as is.\nBy following your code I indented it so that your while loop only terminates when choice is 2.\ndef checkVowelsConsonants(s):\n vowels = 0\n consonants = 0\n for ch in s:\n # Convert character into its ASCII equivalent\n ascii_value = ord(ch)\n # If ASCII is between 65 to 90 or 97 to 122 then it's a character\n # otherwise a special character\n if (65 <= ascii_value <=90) or(97 <= ascii_value <= 122):\n # Check for lower case and upper case vowels\n if ch in \"aeiouAEIOU\":\n vowels = vowels + 1\n else:\n consonants = consonants + 1\n # Print the result\n print(\"The number of vowels is \" + str(vowels) + \" and consonants is \" + str(consonants))\n\nwhile True:\n # Print the menu\n print(\"1. Print the number of vowels and consonats\")\n print(\"2. Exit the program\")\n # Take choice as input from user\n choice = int(input(\"Enter choice: \"))\n # Take sentence input from user\n if choice == 1:\n sentence = input(\"Enter a sentence: \")\n checkVowelsConsonants(sentence)\n # Exit the program\n elif choice == 2:\n break\n # choice other than 1 or 2\n else:\n print(\"Invalid choice!\")\n\n",
"To add:\nThe program doesn't end even with selection of choice 1. So, here I have rewritten the code and added a break after the choice input. I have also used string formatted text to keep things a bit cleaner. Here below:\ndef checkVowelsConsonants(s):\n vowels = 0\n consonants = 0\n for ch in s:\n ascii_value = ord(ch)\n if ((ascii_value >= 65 & ascii_value <= 90) | (ascii_value >= 97 & ascii_value <= 122)):\n#check for lower case\n if ch == 'a' or ch == 'e' or ch == 'i' or ch == 'o' or ch == 'u':\n vowels = vowels+1\n#check for upper case\n elif ch == 'A' or ch == 'E' or ch == 'I' or ch == 'O' or ch == 'U':\n vowels +=1\n else:\n consonants +=1\n#print the result\n print(\"The number of vowels is {} and consonants is {}\".format(vowels,consonants))\n\n \nwhile True:\n#print the menu\n print(\"Choices:\") \n print(\"1: Print the number of vowels and consonants\")\n print(\"2: Exit the program\\n\")\n#take choioce as input from user\n choice = int(input(\"Enter choice: \\n\"))\n#take sentence input from user\n if choice == 1:\n sentence = input(\"Enter a sentence: \\n\")\n checkVowelsConsonants(sentence)\n break\n#exit the program\n elif choice == 2:\n print('Program exited.')\n break\n#choice other that 1 and 2\n else:\n print(\"Invalid choice!\")\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"for_loop",
"python",
"syntax_error",
"while_loop"
] |
stackoverflow_0074475753_for_loop_python_syntax_error_while_loop.txt
|
Q:
How do I parse a List JSON File in CSV into a dataframe
[{"Apertura":35,"Apertura_Homogeneo":35,"Cantidad_Operaciones":1,"Cierre":35,"Cierre_Homogeneo":35,"Denominacion":"INSUMOS AGROQUIMICOS S.A.","Fecha":"02\/02\/2018","Maximo":35,"Maximo_Homogeneo":35,"Minimo":35,"Minimo_Homogeneo":35,"Monto_Operado_Pesos":175,"Promedio":35,"Promedio_Homogeneo":35,"Simbolo":"INAG","Variacion":-5.15,"Variacion_Homogeneo":0,"Vencimiento":"48hs","Volumen_Nominal":5},
{"Apertura":34.95,"Apertura_Homogeneo":34.95,"Cantidad_Operaciones":2,"Cierre":34.95,"Cierre_Homogeneo":34.95,"Denominacion":"INSUMOS AGROQUIMICOS S.A.","Fecha":"05\/02\/2018","Maximo":34.95,"Maximo_Homogeneo":34.95,"Minimo":34.95,"Minimo_Homogeneo":34.95,"Monto_Operado_Pesos":5243,"Promedio":-79228162514264337593543950335,"Promedio_Homogeneo":-79228162514264337593543950335,"Simbolo":"INAG","Variacion":-0.14,"Variacion_Homogeneo":-0.14,"Vencimiento":"48hs","Volumen_Nominal":150},
{"Apertura":32.10,"Apertura_Homogeneo":32.10,"Cantidad_Operaciones":2,"Cierre":32.10,"Cierre_Homogeneo":32.10,"Denominacion":"INSUMOS AGROQUIMICOS S.A.","Fecha":"07\/02\/2018","Maximo":32.10,"Maximo_Homogeneo":32.10,"Minimo":32.10,"Minimo_Homogeneo":32.10,"Monto_Operado_Pesos":98756,"Promedio":32.10,"Promedio_Homogeneo":32.10,"Simbolo":"INAG","Variacion":-8.16,"Variacion_Homogeneo":-8.88,"Vencimiento":"48hs","Volumen_Nominal":3076}]
Hi,
in the same example as above, if I get a CSV file with that data, Arpertura.csv, how can I import and parse it into a pandas dataframe? The real file is a few gigabytes large. I want to get
the sum of Volumen_Nominal for all Aperturas (3076+150+5) and some other slicing and dicing.
Thanks.
Chibi
I tried importing the CSV with
df = pd.read_csv(r'filename')
df_json = df.to_JSON()
pd.read_json(_, orient='split')
but it would not work. I think the list structure in front has to be removed.
The result I now get is
Header = [{"Apertura":35,"Apertura_Homogeneo":35,"Cantidad_Operaciones":1,"Cierre":35,"Cierre_Homogeneo":35,"Denominacion":"INSUMOS AGROQUIMICOS S.A.","Fecha":"02\/02\/2018","Maximo":35,"Maximo_Homogeneo":35,"Minimo":35,"Minimo_Homogeneo":35,"Monto_Operado_Pesos":175,"Promedio":35,"Promedio_Homogeneo":35,"Simbolo":"INAG","Variacion":-5.15,"Variacion_Homogeneo":0,"Vencimiento":"48hs","Volumen_Nominal":5}
Body starts with nan and follows with the rest:
{"Apertura":34.95,"Apertura_Homogeneo":34.95,"Cantidad_Operaciones":2,"Cierre":34.95,"Cierre_Homogeneo":34.95,"Denominacion":"INSUMOS AGROQUIMICOS S.A.","Fecha":"05\/02\/2018","Maximo":34.95,"Maximo_Homogeneo":34.95,"Minimo":34.95,"Minimo_Homogeneo":34.95,"Monto_Operado_Pesos":5243,"Promedio":-79228162514264337593543950335,"Promedio_Homogeneo":-79228162514264337593543950335,"Simbolo":"INAG","Variacion":-0.14,"Variacion_Homogeneo":-0.14,"Vencimiento":"48hs","Volumen_Nominal":150}
A:
You do not need to convert the dataframe to json and back.
If you want the sum of a column you can use:
df = pd.read_csv(r'filename')
df["Volumen_Nominal"].sum()
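If the file is really a JSON array rather than a CSV, as the sample data suggests, a sketch that parses it directly (the file name "Apertura.json" is an assumption for illustration):
import pandas as pd

# read_json turns a top-level JSON list of records into a DataFrame
df = pd.read_json("Apertura.json")
print(df["Volumen_Nominal"].sum())  # 5 + 150 + 3076 = 3231 for the sample rows
For a multi-gigabyte file, converting it once to JSON Lines and reading it with pd.read_json(..., lines=True, chunksize=...) keeps memory bounded.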
|
How do I parse a List JSON File in CSV into a dataframe
|
[{"Apertura":35,"Apertura_Homogeneo":35,"Cantidad_Operaciones":1,"Cierre":35,"Cierre_Homogeneo":35,"Denominacion":"INSUMOS AGROQUIMICOS S.A.","Fecha":"02\/02\/2018","Maximo":35,"Maximo_Homogeneo":35,"Minimo":35,"Minimo_Homogeneo":35,"Monto_Operado_Pesos":175,"Promedio":35,"Promedio_Homogeneo":35,"Simbolo":"INAG","Variacion":-5.15,"Variacion_Homogeneo":0,"Vencimiento":"48hs","Volumen_Nominal":5},
{"Apertura":34.95,"Apertura_Homogeneo":34.95,"Cantidad_Operaciones":2,"Cierre":34.95,"Cierre_Homogeneo":34.95,"Denominacion":"INSUMOS AGROQUIMICOS S.A.","Fecha":"05\/02\/2018","Maximo":34.95,"Maximo_Homogeneo":34.95,"Minimo":34.95,"Minimo_Homogeneo":34.95,"Monto_Operado_Pesos":5243,"Promedio":-79228162514264337593543950335,"Promedio_Homogeneo":-79228162514264337593543950335,"Simbolo":"INAG","Variacion":-0.14,"Variacion_Homogeneo":-0.14,"Vencimiento":"48hs","Volumen_Nominal":150},
{"Apertura":32.10,"Apertura_Homogeneo":32.10,"Cantidad_Operaciones":2,"Cierre":32.10,"Cierre_Homogeneo":32.10,"Denominacion":"INSUMOS AGROQUIMICOS S.A.","Fecha":"07\/02\/2018","Maximo":32.10,"Maximo_Homogeneo":32.10,"Minimo":32.10,"Minimo_Homogeneo":32.10,"Monto_Operado_Pesos":98756,"Promedio":32.10,"Promedio_Homogeneo":32.10,"Simbolo":"INAG","Variacion":-8.16,"Variacion_Homogeneo":-8.88,"Vencimiento":"48hs","Volumen_Nominal":3076}]
Hi,
in the same example as above, if I get a CSV file with that data, Arpertura.csv, how can I import and parse it into a pandas dataframe? The real file is a few gigabytes large. I want to get
the sum of Volumen_Nominal for all Aperturas (3076+150+5) and some other slicing and dicing.
Thanks.
Chibi
I tried importing the CSV with
df = pd.read_csv(r'filename')
df_json = df.to_JSON()
pd.read_json(_, orient='split')
but it would not work. I think the list structure in front has to be removed.
The result I now get is
Header = [{"Apertura":35,"Apertura_Homogeneo":35,"Cantidad_Operaciones":1,"Cierre":35,"Cierre_Homogeneo":35,"Denominacion":"INSUMOS AGROQUIMICOS S.A.","Fecha":"02\/02\/2018","Maximo":35,"Maximo_Homogeneo":35,"Minimo":35,"Minimo_Homogeneo":35,"Monto_Operado_Pesos":175,"Promedio":35,"Promedio_Homogeneo":35,"Simbolo":"INAG","Variacion":-5.15,"Variacion_Homogeneo":0,"Vencimiento":"48hs","Volumen_Nominal":5}
Body starts with nan and follows with the rest:
{"Apertura":34.95,"Apertura_Homogeneo":34.95,"Cantidad_Operaciones":2,"Cierre":34.95,"Cierre_Homogeneo":34.95,"Denominacion":"INSUMOS AGROQUIMICOS S.A.","Fecha":"05\/02\/2018","Maximo":34.95,"Maximo_Homogeneo":34.95,"Minimo":34.95,"Minimo_Homogeneo":34.95,"Monto_Operado_Pesos":5243,"Promedio":-79228162514264337593543950335,"Promedio_Homogeneo":-79228162514264337593543950335,"Simbolo":"INAG","Variacion":-0.14,"Variacion_Homogeneo":-0.14,"Vencimiento":"48hs","Volumen_Nominal":150}
|
[
"You do not need to convert the dataframe to json and back.\nIf you want the sum of a column you can use:\ndf = pd.read_csv(r'filename')\ndf[\"Volumen_Nominal\"].sum()\n\n"
] |
[
0
] |
[] |
[] |
[
"json",
"pandas",
"python"
] |
stackoverflow_0074477954_json_pandas_python.txt
|
Q:
replacing in dataframe based on list/array
If I have an array/list like so,
(['Alabama', 'Arizona', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah'])
and I have a dataframe column where those values from the array are present and repeating, how do I replace them based on each value's index in the array? E.g. my column has Alabama, which has index 0, so for every Alabama in the column I would like to replace it with 0.
A:
Let us say you have a dataframe like this:
df = pd.DataFrame({"States": ['Alabama', 'Arizona', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah',
'Alabama', 'Arizona', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah']})
print(df)
Then you could apply the index like this:
# Here is the list of states for which you want the index
indexList = (['Alabama', 'Arizona', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah'])
# Replace the state names with the respective index
df["States"] = df["States"].apply(indexList.index)
# Print the dataframe
print(df)
OUTPUT:
States
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 0
8 1
9 2
10 3
11 4
12 5
13 6
As you can see,
Alabama was replaced with the number 0.
Arizona was replaced with the number 1.
etc etc
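Since list.index rescans the list for every row, a dictionary lookup via map scales better on a large frame (a sketch reusing the same indexList):
# Build a state -> position mapping once, then vectorize the replacement
state_to_index = {state: i for i, state in enumerate(indexList)}
df["States"] = df["States"].map(state_to_index)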
|
replacing in dataframe based on list/array
|
If I have an array/list like so,
(['Alabama', 'Arizona', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah'])
and I have a dataframe column where those values from the array are present and repeating, how do I replace them based on each value's index in the array? E.g. my column has Alabama, which has index 0, so for every Alabama in the column I would like to replace it with 0.
|
[
"Let us say you have a dataframe like this:\ndf = pd.DataFrame({\"States\": ['Alabama', 'Arizona', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah',\n 'Alabama', 'Arizona', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah']})\n\nprint(df)\n\n\nThen you could apply the index like this:\n# Here is the list of states for which you want the index\nindexList = (['Alabama', 'Arizona', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah'])\n\n# Replace the state names with the respective index\ndf[\"States\"] = df[\"States\"].apply(indexList.index)\n\n# Print the dataframe\nprint(df)\n\nOUTPUT:\n States\n0 0\n1 1\n2 2\n3 3\n4 4\n5 5\n6 6\n7 0\n8 1\n9 2\n10 3\n11 4\n12 5\n13 6\n\nAs you can see, \n\nAlabama was replaced with the number 0.\nArizona was replaced with the number 1.\netc etc\n\n"
] |
[
0
] |
[] |
[] |
[
"for_loop",
"python",
"replace"
] |
stackoverflow_0074477848_for_loop_python_replace.txt
|
Q:
InvocationException: GraphViz's executables not found
I'm unable to visualize or write the Decision tree. How can I go about it?
Python version 3.5, Anaconda 3, I have even set the environment variables
from sklearn import tree
model = tree.DecisionTreeClassifier(criterion='gini')
model=tree.DecisionTreeClassifier()
model.fit(trainData,trainLabel)
model.score(trainData,trainLabel)
predicted= model.predict(testData)
from sklearn.externals.six import StringIO
import pydot
import pydotplus
dot_data = StringIO()
tree.export_graphviz(model, out_file=dot_data)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
print(graph)
graph.write_pdf("C:\\Users\\anagha\\Desktop\\SynehackData\\DATA\\DATA\\graph.pdf")
error :
InvocationException: GraphViz's executables not found
A:
I understand the thread is a little old but today I got the same error when trying to visualize a Bayesian Network in a Jupyter notebook with the PyAgrum library.
I'm on Windows 10 using the Anaconda package management. In my case I needed to install the package python-graphviz using the following command:
conda install python-graphviz
After the installation, I just needed to restart the jupyter kernel and run the code again.
A:
I got this error and tried a million things.
I saw that I should add it to the 'Path' environment variable if you're on Windows.
I did this, restarted Python, but it didn't work.
I did it for graphviz and for pydotplus.
Then I saw someone used a slightly different path from what I had used.
something like
Drive:\Users\User.Name\AppData\Local\Continuum\anaconda3\envs\MyVirtualEnv\Library\bin\graphviz
So I added that to the path, and restarted all things anaconda.
This was probably the 98th thing I tried. It worked!
I had been using a path like
Drive:\Users\User.Name\AppData\Local\Continuum\anaconda3\envs\MyVirtualEnv\lib\site-packages\graphviz, which didn't work, but I put in both, and a similar one for pydotplus.
A:
The other solutions didn't help for me.
I already had the following installed:
graphviz==2.50.0
pydotplus==2.0.2
pydot==1.4.1
But running whereis dot and whereis graphviz, it was clear that I was still missing the graphviz library on my operating system: for dot, the whereis command returned a path on my system, for graphviz, no path was printed by the whereis command.
What helped for me (on Ubuntu) is running sudo apt-get install graphviz, since the PyPi page of the python package graphviz (https://pypi.org/project/graphviz/) mentions the following:
To render the generated DOT source code, you also need to install Graphviz (download page, archived versions, installation procedure for Windows).
The download page linked to above mentioned that sudo apt-get install graphviz was the way to go on Ubuntu. If you have another operating system, check the Download page above for a way to install graphviz on your specific OS.
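If editing the system PATH as described above is inconvenient, pointing the current process at the Graphviz binaries sometimes works as well (a sketch; the install directory below is an assumption and depends on where Graphviz lives on your machine):
import os

# Assumed install location; adjust to wherever Graphviz's bin directory is
os.environ["PATH"] += os.pathsep + r"C:\Program Files\Graphviz\bin"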
|
InvocationException: GraphViz's executables not found
|
I'm unable to visualize or write the Decision tree. How can I go about it?
Python version 3.5, Anaconda 3, I have even set the environment variables
from sklearn import tree
model = tree.DecisionTreeClassifier(criterion='gini')
model=tree.DecisionTreeClassifier()
model.fit(trainData,trainLabel)
model.score(trainData,trainLabel)
predicted= model.predict(testData)
from sklearn.externals.six import StringIO
import pydot
import pydotplus
dot_data = StringIO()
tree.export_graphviz(model, out_file=dot_data)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
print(graph)
graph.write_pdf("C:\\Users\\anagha\\Desktop\\SynehackData\\DATA\\DATA\\graph.pdf")
error :
InvocationException: GraphViz's executables not found
|
[
"I understand the thread is a little old but today I got the same error when trying to visualize a Bayesian Network in a Jupyter notebook with the PyAgrum library.\nI'm on Windows 10 using the Anaconda package management. In my case I needed to install the package python-graphviz using the following command:\nconda install python-graphviz\nAfter the installation, I just needed to restart the jupyter kernel and run the code again.\n",
"I got this error and tried a million things.\nI saw that I should add to the environment variable, 'path' if you're in Windows.\nI did this, restarted, Python, but it didn't work. \nI did it for graphviz and for pydotplus.\nThen I saw someone used a slightly different path from what I had used.\nsomething like \nDrive:\\Users\\User.Name\\AppData\\Local\\Continuum\\anaconda3\\envs\\MyVirtualEnv\\Library\\bin\\graphviz\nSo I added that to the path, and restarted all things anaconda.\nThis was probably the 98th thing I tried. It worked!\nI had been using a path like \nDrive:\\Users\\User.Name\\AppData\\Local\\Continuum\\anaconda3\\envs\\MyVirtualEnv\\lib\\site-packages\\graphviz, which didn't work, but I put in both, and a similar one for pydotplus.\n",
"The other solutions didn't help for me.\nI already had the following installed:\ngraphviz==2.50.0\npydotplus==2.0.2\npydot==1.4.1\n\nBut running whereis dot and whereis graphviz, it was clear that I was still missing the graphviz library on my operating system: for dot, the whereis command returned a path on my system, for graphviz, no path was printed by the whereis command.\nWhat helped for me (on Ubuntu) is running sudo apt-get install graphviz, since the PyPi page of the python package graphviz (https://pypi.org/project/graphviz/) mentions the following:\n\nTo render the generated DOT source code, you also need to install Graphviz (download page, archived versions, installation procedure for Windows).\n\nThe download page linked to above mentioned that sudo apt-get install graphviz was the way to go on Ubuntu. If you have another operating system, check the Download page above for a way to install graphviz on your specific OS.\n"
] |
[
1,
0,
0
] |
[
"You can take help of this code !!\nimport pydotplus\nfrom sklearn.datasets import load_iris\nfrom sklearn import tree\nimport collections\n\n# Data Collection\nX = [ [180, 15,0], \n [177, 42,0],\n [136, 35,1],\n [174, 65,0],\n [141, 28,1]]\n\nY = ['man', 'woman', 'woman', 'man', 'woman'] \n\ndata_feature_names = [ 'height', 'hair length', 'voice pitch' ]\n\n# Training\nclf = tree.DecisionTreeClassifier()\nclf = clf.fit(X,Y)\n# Visualize data\ndot_data = tree.export_graphviz(clf,\n feature_names=data_feature_names,\n out_file=None,\n filled=True,\n rounded=True)\ngraph = pydotplus.graph_from_dot_data(dot_data)\n\ncolors = ('turquoise', 'orange')\nedges = collections.defaultdict(list)\n\nfor edge in graph.get_edge_list():\n edges[edge.get_source()].append(int(edge.get_destination()))\n\nfor edge in edges:\n edges[edge].sort() \n for i in range(2):\n dest = graph.get_node(str(edges[edge][i]))[0]\n dest.set_fillcolor(colors[i])\n\ngraph.write_png('tree.png')\n\n"
] |
[
-4
] |
[
"decision_tree",
"pygraphviz",
"python",
"python_3.x"
] |
stackoverflow_0043535863_decision_tree_pygraphviz_python_python_3.x.txt
|
Q:
How to count characters from nested lists of strings inside a dictionary (Python)?
I'm trying to count the frequency of a character from nested lists of strings inside a dictionary.
Returning, for each key, the most frequent value.
I was thinking something along the lines of:
res = {0: ['a', 'a', 'b'], 1: ['e'], 2: ['i', 'x', 'i', 'c']}
for k, v in res.items():
# count the most frequent
print(res)
Expecting:
res = {0: 'a', 1: 'e', 2: 'i'}
A:
output = {k: most_frequent(v) for k, v in res.items()}
most_frequent could be any of the following:
https://www.geeksforgeeks.org/python-find-most-frequent-element-in-a-list/
Hope it helps
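For a concrete version of that dict comprehension, collections.Counter from the standard library does the counting (a minimal sketch using the res dict from the question):
from collections import Counter

res = {0: ['a', 'a', 'b'], 1: ['e'], 2: ['i', 'x', 'i', 'c']}
# most_common(1) returns [(value, count)]; take the value of the top entry
res = {k: Counter(v).most_common(1)[0][0] for k, v in res.items()}
print(res)  # {0: 'a', 1: 'e', 2: 'i'}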
|
How to count characters from nested lists of strings inside a dictionary (Python)?
|
I'm trying to count the frequency of a character from nested lists of strings inside a dictionary.
Returning, for each key, the most frequent value.
I was thinking something along the lines of:
res = {0: ['a', 'a', 'b'], 1: ['e'], 2: ['i', 'x', 'i', 'c']}
for k, v in res.items():
# count the most frequent
print(res)
Expecting:
res = {0: 'a', 1: 'e', 2: 'i'}
|
[
"output = {k: most_frequenct(v) for k, v in data.items()}\n\nmost_frequenct could be any of the following\nhttps://www.geeksforgeeks.org/python-find-most-frequent-element-in-a-list/\nHope it helps\n"
] |
[
0
] |
[] |
[] |
[
"dictionary",
"list",
"python",
"string"
] |
stackoverflow_0074477988_dictionary_list_python_string.txt
|
Q:
How to do this NOT IN operation without triggering an overflow in the number of parameter markers in pyodbc/sqlalchemy?
This is a simplification of the case:
I have two databases, a MySQL one and an MS Access one. I am trying to delete from the MS Access table every row that is no longer present in the MySQL table.
I am using sqlalchemy to connect to both DB. To connect with MSAccess (I know, this database should not be used anymore, this is actually part of a migration process), I am using sqlalchemy-access, that internally works with pyodbc.
The code that does this operation is:
#every row in the mysql table contains a field that references its correspondent row in msaccess
mysql_ids = mysql_session.query(mysql_table.id_msaccess).all()
list_of_ids = [elem[0] for elem in mysql_ids]
delete_query = delete(access_table).where((access_table).id.not_in(list_of_ids))
results = access_session.execute(delete_query)
However, I get this error message:
(pyodbc.ProgrammingError) ('The SQL contains -9972 parameter markers, but 55564 parameters were supplied)
DELETE FROM [access_table] WHERE ([access_table].[id] NOT IN (?, ?, ... <here there are all the 55564 parameter markers>) parameters: (241, 242, 243,...)
I have found this issue in pyodbc's github page:
Github Issue in Pyodbc
They essentially say that there is a marker counter that overflows in the internal implementation. They are talking about SQL Server but I guess the same thing happens here.
I could do this query in blocks of 32768 rows, or otherwise check every element from the MySQL table to see if it is in the MS Access table (I think this would be quite slow), but I wonder if there is a better approach. Do you have any suggestions on how I could approach this?
Thanks in advance for any suggestions
A:
I could do this query in blocks of 32768 rows
That won't work for a NOT IN query. Say you had a list of rows to keep:
[1, 2, 3, 4, 5, 6]
If you tried to do that in batches of 3 then the first DELETE would be
DELETE FROM access_table WHERE id NOT IN (1, 2, 3)
which would delete the rows with id values of 4, 5, and 6. Then the next DELETE would be
DELETE FROM access_table WHERE id NOT IN (4, 5, 6)
which would delete the rows with id values of 1, 2, and 3.
However, you could build a list of rows to delete like this:
with mysql_engine.begin() as conn:
    mysql_existing = (
        conn.scalars(sa.select(mysql_table.c.id_msaccess)).all()
    )
    print(mysql_existing)  # [2, 3]

with access_engine.begin() as conn:
    access_existing = (
        conn.scalars(sa.select(access_table.c.id)).all()
    )
    print(access_existing)  # [1, 2, 3, 4, 5, 6]

access_to_delete = list(set(access_existing).difference(mysql_existing))
print(access_to_delete)  # [1, 4, 5, 6]
and you could process that list in batches by using IN instead of NOT IN.
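A sketch of those batched deletes, reusing access_to_delete from above (the chunk size of 1000 is an arbitrary assumption, chosen to stay well under the driver's parameter-marker limit):
CHUNK = 1000
with access_engine.begin() as conn:
    for i in range(0, len(access_to_delete), CHUNK):
        batch = access_to_delete[i:i + CHUNK]
        # each statement carries at most CHUNK parameter markers
        conn.execute(sa.delete(access_table).where(access_table.c.id.in_(batch)))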
|
How to do this NOT IN operation without triggering an overflow in the number of parameter markers in pyodbc/sqlalchemy?
|
This is a simplification of the case:
I have two databases, a MySQL one and an MS Access one. I am trying to delete from the MS Access table every row that is no longer present in the MySQL table.
I am using sqlalchemy to connect to both DB. To connect with MSAccess (I know, this database should not be used anymore, this is actually part of a migration process), I am using sqlalchemy-access, that internally works with pyodbc.
The code that does this operation is:
#every row in the mysql table contains a field that references its correspondent row in msaccess
mysql_ids = mysql_session.query(mysql_table.id_msaccess).all()
list_of_ids = [elem[0] for elem in mysql_ids]
delete_query = delete(access_table).where((access_table).id.not_in(list_of_ids))
results = access_session.execute(delete_query)
However, I get this error message:
(pyodbc.ProgrammingError) ('The SQL contains -9972 parameter markers, but 55564 parameters were supplied)
DELETE FROM [access_table] WHERE ([access_table].[id] NOT IN (?, ?, ... <here there are all the 55564 parameter markers>) parameters: (241, 242, 243,...)
I have found this issue in pyodbc's github page:
Github Issue in Pyodbc
They essentially say that there is a marker counter that overflows in the internal implementation. They are talking about SQL Server but I guess the same thing happens here.
I could do this query in blocks of 32768 rows, or otherwise check every element from the MySQL table to see if it is in the MS Access table (I think this would be quite slow), but I wonder if there is a better approach. Do you have any suggestions on how I could approach this?
Thanks in advance for any suggestions
|
[
"\nI could do this query in blocks of 32768 rows\n\nThat won't work for a NOT IN query. Say you had a list of rows to keep:\n[1, 2, 3, 4, 5, 6]\n\nIf you tried to do that in batches of 3 then the first DELETE would be\nDELETE FROM access_table WHERE id NOT IN (1, 2, 3)\n\nwhich would delete the rows with id values of 4, 5, and 6. Then the next DELETE would be\nDELETE FROM access_table WHERE id NOT IN (4, 5, 6)\n\nwhich would delete the rows with id values of 1, 2, and 3.\nHowever, you could build a list of rows to delete like this:\nwith mysql_engine.begin() as conn:\n mysql_existing = (\n conn.scalars(sa.select(mysql_table.c.id_msaccess)).all()\n )\n print(mysql_existing) # [2, 3]\n\nwith access_engine.begin() as conn:\n access_existing = (\n conn.scalars(sa.select(access_table.c.id)).all()\n )\n print(access_existing) # [1, 2, 3, 4, 5, 6]\n\naccess_to_delete = list(set(access_existing).difference(mysql_existing))\nprint(access_to_delete) # [1, 4, 5, 6]\n\nand you could process that list in batches by using IN instead of NOT IN.\n"
] |
[
2
] |
[] |
[] |
[
"ms_access",
"mysql",
"pyodbc",
"python",
"sqlalchemy"
] |
stackoverflow_0074474679_ms_access_mysql_pyodbc_python_sqlalchemy.txt
|
Q:
Create a new dataframe by removing the outliers from the column
I am working through a tutorial on removing outliers, but it confused me that this loop is not working properly:
target = df['ConvertedComp']
mean = target.mean()
sd = target.std()

for x in target:
    z_score = (x - mean) / sd
    if np.abs(z_score) > 3:
        selected_df = df[df.ConvertedComp != x]
Also, are there any other methods to efficiently create a new dataframe without outliers? Thank you! I hope I can learn something new.
A:
You can try the following code to select rows where z_score calculated from ConvertedComp column is less than or equal to 3.
mask = df['ConvertedComp'].sub(df['ConvertedComp'].mean()).div(df['ConvertedComp'].std()).abs().le(3)
df = df[mask]
A:
Here is what worked for me.
(NOTE: A variation of this answer by the course assistants is also available on the course's forum.)
Calculate the quartiles, the IQR, and the lower bound.
d_q1, d_q3 = df['ConvertedComp'].quantile([0.25, 0.75])
IQR = d_q3 - d_q1
lower = d_q1 - (1.5 * IQR)
print("Lower: ", lower)
Calculate the upper bound.
upper = d_q3+(1.5*IQR)
print("Upper: ", upper)
Build a boolean mask flagging the lower & upper outliers.
df2 = (df['ConvertedComp'] < lower) | (df['ConvertedComp'] > upper)
Change the outliers to "na" to remove their numerical data.
(That will leave them out of visualizations and value_counts().)
import numpy as np
df[df2] = np.nan
|
Create a new dataframe by removing the outliers from the column
|
I am working through a tutorial on removing outliers, but it confused me that this loop is not working properly:
target = df['ConvertedComp']
mean = target.mean()
sd = target.std()

for x in target:
    z_score = (x - mean) / sd
    if np.abs(z_score) > 3:
        selected_df = df[df.ConvertedComp != x]
Also, are there any other methods to efficiently create a new dataframe without outliers? Thank you! I hope I can learn something new.
|
[
"You can try the following code to select rows where z_score calculated from ConvertedComp column is less than or equal to 3.\nmask = df['ConvertedComp'].sub(df['ConvertedComp'].mean()).div(df['ConvertedComp'].std()).abs().le(3)\n\ndf = df[mask]\n\n",
"Here is what worked for me.\n(NOTE: A variation of this answer by the course assistants is also available on the course's forum.)\nCalculate the Lower quartile.\nlower = d_q1-(1.5*IQR)\n\nprint(\"Lower: \", lower)\n\nCalculate the Upper quartile.\nupper = d_q3+(1.5*IQR)\n\nprint(\"Upper: \", upper)\n\nRemove the lower & upper outliers by creating a new df.\ndf2 = [(df['ConvertedComp'] < lower) | (df['ConvertedComp'] > upper)]\n\nChange the outliers to \"na\" to remove their numerical data.\n(That will leave them out of visualizations and value_counts().)\nimport numpy as np\ndf[df2] = np.nan\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"dataframe",
"outliers",
"pandas",
"python"
] |
stackoverflow_0067513640_dataframe_outliers_pandas_python.txt
|
Q:
Python, adding a for loop to limit no. of tries - error in code
In the following code, I am trying to use a flag to break out of the loop when true (password is correct) but limit the number of incorrect tries to 3.
def secretagent():
    flag = False
    while flag == False:
        for i in range(3):
            password = input("Enter password:")
            if password == "secret007":
                print("Access Granted!")
                flag = True
                break
            else:
                print("Impostor...access denied!")
    print("Welcome Secret agent...let's get started...")
    #print("You have tried 3 times and failed. Goodbye forever!")

secretagent()
Here, for ease, is the trinket: https://trinket.io/python/8869529c45
Can anyone suggest a suitable solution (the most Pythonic way) and, for learning purposes, explain the best way to approach this? E.g. should the for loop or the while loop be on the outside, and why?
Currently it still allows an unlimited number of tries, so my for loop has obviously been placed wrong.
A:
Use an int counter instead of a bool:
import sys

number_of_tries = 0
while True:
    if number_of_tries == 3:
        sys.exit()  # exit the program
    password = input("Enter password:")
    if password == "secret007":
        print("Access Granted!")
        break
    else:
        print("Impostor...access denied!")
        number_of_tries += 1

print("Welcome Secret agent...let's get started...")
Or even simpler with a for loop:
for _ in range(3):
    password = input("Enter password:")
    if password == "secret007":
        print("Access Granted!")
        break
    else:
        print("Impostor...access denied!")
else:  # no break was called, i.e. all three attempts failed
    sys.exit()
A:
Personally, instead of using a flag, I would only use a for loop, iterate over it 3 times, and check the value of the answer. I would then return True or False, or continue the loop, depending on the answer. Then I would print the response based on the return value of the function. Something like this:
def secretagent():
    for i in range(3):
        password = input("Enter password:")
        if password == "secret007":
            print("Access Granted!")
            return True
        elif i < 2:
            print("Wrong password, try again")
        else:
            print("Impostor...access denied!")
            return False

flag = secretagent()

if flag == True:
    print("Welcome Secret agent...let's get started...")
else:
    print("You have tried 3 times and failed. Goodbye forever!")
Your while loop is not necessary since you already know you only want to provide 3 tries to your agent.
Hope this helps.
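As an aside, for real password prompts the standard library's getpass hides the input as it is typed:
import getpass

password = getpass.getpass("Enter password:")  # input is not echoed to the terminal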
|
Python, adding a for loop to limit no. of tries - error in code
|
In the following code, I am trying to use a flag to break out of the loop when true (password is correct) but limit the number of incorrect tries to 3.
def secretagent():
    flag = False
    while flag == False:
        for i in range(3):
            password = input("Enter password:")
            if password == "secret007":
                print("Access Granted!")
                flag = True
                break
            else:
                print("Impostor...access denied!")
    print("Welcome Secret agent...let's get started...")
    #print("You have tried 3 times and failed. Goodbye forever!")

secretagent()
Here, for ease, is the trinket: https://trinket.io/python/8869529c45
Can anyone suggest a suitable solution (the most Pythonic way) and, for learning purposes, explain the best way to approach this? E.g. should the for loop or the while loop be on the outside, and why?
Currently it still allows an unlimited number of tries, so my for loop has obviously been placed wrong.
|
[
"Use int instead of bool\nimport sys\n\nnumber_of_tries = 0\nwhile True:\n if number_of_tries == 3:\n sys.exit() # exit the program\n password=input(\"Enter password:\")\n if password==\"secret007\":\n print(\"Access Granted!\")\n break\n else:\n print(\"Impostor...access denied!\")\n number_of_tries += 1\n \nprint(\"Welcome Secret agent...let's get started...\")\n\nOr even simpler with for loop:\nfor _ in range(3):\n password=input(\"Enter password:\")\n if password==\"secret007\":\n print(\"Access Granted!\")\n break\n else:\n print(\"Impostor...access denied!\")\nelse: # it means no break was called\n sys.exit()\n\n",
"Personnaly, instead of using a flag, I would only use a for loop and iterate over it 3 times and checking the value of the answer. I would then return True or False, or continue the loop depending on the answer. Then I would print the response based on the return value of the function. Something like that\ndef secretagent():\n for i in range(3):\n password=input(\"Enter password:\")\n if password==\"secret007\":\n print(\"Access Granted!\")\n return True\n elif i < 2:\n print(\"Wrong password, try again\")\n else:\n print(\"Impostor...access denied!\")\n return False\n \n \nflag = secretagent()\n\nif flag==True:\n print(\"Welcome Secret agent...let's get started...\")\nelse:\n print(\"You have tried 3 times and failed. Goodbye forever!\")\n\nYour while loop is not necessary since you already know you only want to provide 3 tries to your agent.\nHope this helps.\n"
] |
[
2,
2
] |
[] |
[] |
[
"loops",
"python"
] |
stackoverflow_0074477916_loops_python.txt
|
Q:
Divide DataFrame Column on (,) into two new columns
I have a pandas DataFrame called data_combined with the following structure:
index corr_year corr_5d
0 (DAL, AAL) 0.873762 0.778594
1 (WEC, ED) 0.851578 0.850549
2 (CMS, LNT) 0.850028 0.776143
3 (SWKS, QRVO) 0.850799 0.830603
4 (ALK, DAL) 0.874162 0.744590
Now I am trying to divide the column named index into two columns on the (,).
The desired output should look like this:
index1 index2 corr_year corr_5d
0 DAL AAL 0.873762 0.778594
1 WEC ED 0.851578 0.850549
2 CMS LNT 0.850028 0.776143
3 SWKS QRVO 0.850799 0.830603
4 ALK DAL 0.874162 0.744590
I have tried using pd.explode() with the following code
data_results_test = data_results_combined.explode('index')
data_results_test
Which leads to the following output:
index corr_year corr_5d
0 DAL 0.873762 0.778594
0 AAL 0.873762 0.778594
1 WEC 0.851578 0.850549
1 ED 0.851578 0.850549
How can I achieve the split with newly added columns instead of rows? pd.explode does not seem to have any option to choose whether to add new rows or columns.
A:
How about a simple apply? (Assuming 'index' column is a tuple)
data_results_combined['index1'] = data_results_combined['index'].apply(lambda x: x[0])
data_results_combined['index2'] = data_results_combined['index'].apply(lambda x: x[1])
A:
df[['index1','index2']] = df['index'].str.split(',',expand=True)
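Note that str.split only applies if the column holds strings like "(DAL, AAL)"; since the sample frame stores tuples, unpacking them into a new frame is another option (a sketch):
# expand the tuples into two columns, keeping the original row index aligned
df[['index1', 'index2']] = pd.DataFrame(df['index'].tolist(), index=df.index)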
|
Divide DataFrame Column on (,) into two new columns
|
I have a pandas DataFrame called data_combined with the following structure:
index corr_year corr_5d
0 (DAL, AAL) 0.873762 0.778594
1 (WEC, ED) 0.851578 0.850549
2 (CMS, LNT) 0.850028 0.776143
3 (SWKS, QRVO) 0.850799 0.830603
4 (ALK, DAL) 0.874162 0.744590
Now I am trying to divide the column named index into two columns on the (,).
The desired output should look like this:
index1 index2 corr_year corr_5d
0 DAL AAL 0.873762 0.778594
1 WEC ED 0.851578 0.850549
2 CMS LNT 0.850028 0.776143
3 SWKS QRVO 0.850799 0.830603
4 ALK DAL 0.874162 0.744590
I have tried using pd.explode() with the following code
data_results_test = data_results_combined.explode('index')
data_results_test
Which leads to the following output:
index corr_year corr_5d
0 DAL 0.873762 0.778594
0 AAL 0.873762 0.778594
1 WEC 0.851578 0.850549
1 ED 0.851578 0.850549
How can I achieve the split with newly added columns instead of rows? pd.explode does not seem to have any option to choose whether to add new rows or columns.
|
[
"How about a simple apply? (Assuming 'index' column is a tuple)\ndata_results_combined['index1'] = data_results_combined['index'].apply(lambda x: x[0])\ndata_results_combined['index2'] = data_results_combined['index'].apply(lambda x: x[1])\n\n",
"df[['index1','index2']] = df['index'].str.split(',',expand=True)\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"explode",
"pandas",
"python"
] |
stackoverflow_0074478109_explode_pandas_python.txt
|
Q:
How to properly mask a numpy 2D array?
Say I have a two dimensional array of coordinates that looks something like
x = array([[1,2],[2,3],[3,4]])
Previously in my work so far, I generated a mask that ends up looking something like
mask = [False,False,True]
When I try to use this mask on the 2D coordinate vector, I get an error
newX = np.ma.compressed(np.ma.masked_array(x,mask))
>>>numpy.ma.core.MaskError: Mask and data not compatible: data size
is 6, mask size is 3.`
which makes sense, I suppose. So I tried to simply use the following mask instead:
mask2 = np.column_stack((mask,mask))
newX = np.ma.compressed(np.ma.masked_array(x,mask2))
And what I get is close:
>>>array([1,2,2,3])
to what I would expect (and want):
>>>array([[1,2],[2,3]])
There must be an easier way to do this?
A:
Is this what you are looking for?
import numpy as np
x[~np.array(mask)]
# array([[1, 2],
# [2, 3]])
Or from numpy masked array:
newX = np.ma.array(x, mask = np.column_stack((mask, mask)))
newX
# masked_array(data =
# [[1 2]
# [2 3]
# [-- --]],
# mask =
# [[False False]
# [False False]
# [ True True]],
# fill_value = 999999)
A:
With np.where you can do all sorts of things:
x_maskd = np.where(mask, x, 0)
A:
Your x is 3x2:
In [379]: x
Out[379]:
array([[1, 2],
[2, 3],
[3, 4]])
Make a 3 element boolean mask:
In [380]: rowmask=np.array([False,False,True])
That can be used to select the rows where it is True, or where it is False. In both cases the result is 2d:
In [381]: x[rowmask,:]
Out[381]: array([[3, 4]])
In [382]: x[~rowmask,:]
Out[382]:
array([[1, 2],
[2, 3]])
This is without using the MaskedArray subclass. To make such array, we need a mask that matches x in shape. There isn't provision for masking just one dimension.
In [393]: xmask=np.stack((rowmask,rowmask),-1) # column stack
In [394]: xmask
Out[394]:
array([[False, False],
[False, False],
[ True, True]], dtype=bool)
In [395]: np.ma.MaskedArray(x,xmask)
Out[395]:
masked_array(data =
[[1 2]
[2 3]
[-- --]],
mask =
[[False False]
[False False]
[ True True]],
fill_value = 999999)
Applying compressed to that produces a raveled array: array([1, 2, 2, 3])
Since masking is element by element, it could mask one element in row 1, 2 in row 2 etc. So in general compressing, removing the masked elements, will not yield a 2d array. The flattened form is the only general choice.
np.ma makes most sense when there's a scattering of masked values. It isn't of much value if you want to select, or deselect, whole rows or columns.
===============
Here are more typical masked arrays:
In [403]: np.ma.masked_inside(x,2,3)
Out[403]:
masked_array(data =
[[1 --]
[-- --]
[-- 4]],
mask =
[[False True]
[ True True]
[ True False]],
fill_value = 999999)
In [404]: np.ma.masked_equal(x,2)
Out[404]:
masked_array(data =
[[1 --]
[-- 3]
[3 4]],
mask =
[[False True]
[ True False]
[False False]],
fill_value = 2)
In [406]: np.ma.masked_outside(x,2,3)
Out[406]:
masked_array(data =
[[-- 2]
[2 3]
[3 --]],
mask =
[[ True False]
[False False]
[False True]],
fill_value = 999999)
A:
Since none of these solutions worked for me, I thought I would write down the one that did; maybe it will be useful for somebody else. I use Python 3.x and I worked with two 3D arrays. One, which I call data_3D, contains float values of recordings in a brain scan, and the other, template_3D, contains integers which represent regions of the brain. I wanted to choose those values from data_3D corresponding to an integer region_code as per template_3D:
my_mask = np.in1d(template_3D, region_code).reshape(template_3D.shape)
data_3D_masked = data_3D[my_mask]
which gives me a 1D array of only relevant recordings.
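Note: np.isin is the shape-preserving successor to np.in1d, so the reshape can be dropped: my_mask = np.isin(template_3D, region_code).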
A:
If you have
A = [[ 8. 0. 165. 22. 164. 47. 184. 185.]
[ 0. 6. -74. -27. 63. 49. -46. -48.]
[165. -74. 0. 0. 0. 0. 0. 0.]
[ 22. -27. 0. 0. 0. 0. 0. 0.]
[164. 63. 0. 0. 0. 0. 0. 0.]
[ 47. 49. 0. 0. 0. 0. 0. 0.]
[184. -46. 0. 0. 0. 0. 0. 0.]
[185. -48. 0. 0. 0. 0. 0. 0.]]
and your mask is
mask = np.array([True, True, True, False, True, False, True, False])
then your masked A becomes
A[mask, :][:, mask] = [[ 8. 0. 165. 164. 184.]
[ 0. 6. -74. 63. -46.]
[165. -74. 0. 0. 0.]
[164. 63. 0. 0. 0.]
[184. -46. 0. 0. 0.]]
A:
In your last example, the problem is not the mask. It is your use of compressed. From the docstring of compressed:
Return all the non-masked data as a 1-D array.
So compressed flattens the nonmasked values into a 1-d array. (It has to, because there is no guarantee that the compressed data will have an n-dimensional structure.)
Take a look at the masked array before you compress it:
In [8]: np.ma.masked_array(x, mask2)
Out[8]:
masked_array(data =
[[1 2]
[2 3]
[-- --]],
mask =
[[False False]
[False False]
[ True True]],
fill_value = 999999)
A:
masked_X = np.where(mask, X, 0) is the fastest and simplest way to mask data:
X = np.array([[2,-1,4],
[3,-3,1],
[9,-7,2]])
mask = np.identity(3)
time measure :
%timeit np.where(mask,X,0)
969 ns ± 14.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit np.ma.array(X, mask=mask)
6.47 µs ± 85.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
I'll let you conclude!
|
How to properly mask a numpy 2D array?
|
Say I have a two dimensional array of coordinates that looks something like
x = array([[1,2],[2,3],[3,4]])
Previously in my work so far, I generated a mask that ends up looking something like
mask = [False,False,True]
When I try to use this mask on the 2D coordinate vector, I get an error
newX = np.ma.compressed(np.ma.masked_array(x,mask))
>>>numpy.ma.core.MaskError: Mask and data not compatible: data size
is 6, mask size is 3.`
which makes sense, I suppose. So I tried to simply use the following mask instead:
mask2 = np.column_stack((mask,mask))
newX = np.ma.compressed(np.ma.masked_array(x,mask2))
And what I get is close:
>>>array([1,2,2,3])
to what I would expect (and want):
>>>array([[1,2],[2,3]])
There must be an easier way to do this?
|
[
"Is this what you are looking for?\nimport numpy as np\nx[~np.array(mask)]\n# array([[1, 2],\n# [2, 3]])\n\nOr from numpy masked array:\nnewX = np.ma.array(x, mask = np.column_stack((mask, mask)))\nnewX\n\n# masked_array(data =\n# [[1 2]\n# [2 3]\n# [-- --]],\n# mask =\n# [[False False]\n# [False False]\n# [ True True]],\n# fill_value = 999999)\n\n",
"With np.where you can do all sorts of things:\nx_maskd = np.where(mask, x, 0)\n\n",
"Your x is 3x2:\nIn [379]: x\nOut[379]: \narray([[1, 2],\n [2, 3],\n [3, 4]])\n\nMake a 3 element boolean mask:\nIn [380]: rowmask=np.array([False,False,True])\n\nThat can be used to select the rows where it is True, or where it is False. In both cases the result is 2d:\nIn [381]: x[rowmask,:]\nOut[381]: array([[3, 4]])\n\nIn [382]: x[~rowmask,:]\nOut[382]: \narray([[1, 2],\n [2, 3]])\n\nThis is without using the MaskedArray subclass. To make such array, we need a mask that matches x in shape. There isn't provision for masking just one dimension.\nIn [393]: xmask=np.stack((rowmask,rowmask),-1) # column stack\n\nIn [394]: xmask\nOut[394]: \narray([[False, False],\n [False, False],\n [ True, True]], dtype=bool)\n\nIn [395]: np.ma.MaskedArray(x,xmask)\nOut[395]: \nmasked_array(data =\n [[1 2]\n [2 3]\n [-- --]],\n mask =\n [[False False]\n [False False]\n [ True True]],\n fill_value = 999999)\n\nApplying compressed to that produces a raveled array: array([1, 2, 2, 3])\nSince masking is element by element, it could mask one element in row 1, 2 in row 2 etc. So in general compressing, removing the masked elements, will not yield a 2d array. The flattened form is the only general choice.\nnp.ma makes most sense when there's a scattering of masked values. It isn't of much value if you want want to select, or deselect, whole rows or columns.\n===============\nHere are more typical masked arrays:\nIn [403]: np.ma.masked_inside(x,2,3)\nOut[403]: \nmasked_array(data =\n [[1 --]\n [-- --]\n [-- 4]],\n mask =\n [[False True]\n [ True True]\n [ True False]],\n fill_value = 999999)\n\nIn [404]: np.ma.masked_equal(x,2)\nOut[404]: \nmasked_array(data =\n [[1 --]\n [-- 3]\n [3 4]],\n mask =\n [[False True]\n [ True False]\n [False False]],\n fill_value = 2)\n\nIn [406]: np.ma.masked_outside(x,2,3)\nOut[406]: \nmasked_array(data =\n [[-- 2]\n [2 3]\n [3 --]],\n mask =\n [[ True False]\n [False False]\n [False True]],\n fill_value = 999999)\n\n",
"Since none of these solutions worked for me, I thought to write down what solution did, maybe it will useful for somebody else. I use python 3.x and I worked on two 3D arrays. One, which I call data_3D contains float values of recordings in a brain scan, and the other, template_3D contains integers which represent regions of the brain. I wanted to choose those values from data_3D corresponding to an integer region_code as per template_3D:\nmy_mask = np.in1d(template_3D, region_code).reshape(template_3D.shape)\ndata_3D_masked = data_3D[my_mask]\n\nwhich gives me a 1D array of only relevant recordings. \n",
"If you have\nA = [[ 8. 0. 165. 22. 164. 47. 184. 185.]\n [ 0. 6. -74. -27. 63. 49. -46. -48.]\n [165. -74. 0. 0. 0. 0. 0. 0.]\n [ 22. -27. 0. 0. 0. 0. 0. 0.]\n [164. 63. 0. 0. 0. 0. 0. 0.]\n [ 47. 49. 0. 0. 0. 0. 0. 0.]\n [184. -46. 0. 0. 0. 0. 0. 0.]\n [185. -48. 0. 0. 0. 0. 0. 0.]]\n\nand your mask is\nmask = np.array([True, True, True, False, True, False, True, False])\n\nthen your masked A becomes\nA[mask, :][:, mask] = [[ 8. 0. 165. 164. 184.]\n [ 0. 6. -74. 63. -46.]\n [165. -74. 0. 0. 0.]\n [164. 63. 0. 0. 0.]\n [184. -46. 0. 0. 0.]]\n\n",
"In your last example, the problem is not the mask. It is your use of compressed. From the docstring of compressed:\nReturn all the non-masked data as a 1-D array.\n\nSo compressed flattens the nonmasked values into a 1-d array. (It has to, because there is no guarantee that the compressed data will have an n-dimensional structure.)\nTake a look at the masked array before you compress it:\nIn [8]: np.ma.masked_array(x, mask2)\n\nOut[8]: \nmasked_array(data =\n [[1 2]\n [2 3]\n [-- --]],\n mask =\n [[False False]\n [False False]\n [ True True]],\n fill_value = 999999)\n\n",
"masked_X = np.where(mask, X, 0) is the fastest & the simplest way to mask a data :\nX = np.array([[2,-1,4],\n [3,-3,1],\n [9,-7,2]])\n\nmask = np.identity(3)\n\ntime measure :\n%timeit np.where(mask,X,0)\n\n\n969 ns ± 14.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)\n\n%timeit np.ma.array(X, mask=mask)\n\n\n6.47 µs ± 85.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)\n\nI let you conclude !\n"
] |
[
25,
9,
8,
2,
2,
1,
0
] |
[] |
[] |
[
"mask",
"masked_array",
"matrix",
"numpy",
"python"
] |
stackoverflow_0038193958_mask_masked_array_matrix_numpy_python.txt
|
Q:
Snakemake MissingOutputException when writing list to file
I'm having a MissingOutputException with what I think is a very basic rule. I'm trying to print a list given through the config file into a file using some Python commands, but Snakemake keeps throwing a MissingOutputException error:
# --- Importing Configuration Files --- #
configfile: "config.yaml"
# -------------------------------------------------
scaffolds = config["Scaffolds"]
localrules: all, MakeScaffoldList
# -------------------------------------------------
rule all:
input:
LIST = "scaffolds.list"
# -------------------------------------------------
rule MakeScaffoldList:
output:
LIST = "scaffolds.list"
params:
SCAFFOLDS = scaffolds
run:
"""
with open(output.LIST, 'w') as f:
for line in params.SCAFFOLDS:
f.write(f"{line}\n")
"""
Error:
[Thu Nov 17 14:08:33 2022]
localrule MakeScaffoldList:
output: scaffolds.list
jobid: 1
resources: mem_mb=27200, disk_mb=1000, tmpdir=/scratch, account=snic2022-22-156, partition=core, time=12:00:00, threads=4
MissingOutputException in line 37 of test.smk:
Job completed successfully, but some output files are missing. Missing files after 5 seconds:
scaffolds.list
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Exiting because a job execution failed. Look above for error message
What am I doing wrong? Is it the Python code wrong?
A:
If you want to include Python code directly in your Snakefile, you have to lose the quotation marks around your Python code in the run directive. As written, the triple-quoted block is just a bare string expression that does nothing when the rule runs, so the output file is never created:
scaffolds = ["dummy", "entries"]
localrules: all, MakeScaffoldList
# -------------------------------------------------
rule all:
input:
LIST = "scaffolds.list"
# -------------------------------------------------
rule MakeScaffoldList:
output:
LIST = "scaffolds.list"
params:
SCAFFOLDS = scaffolds
run:
with open(output.LIST, 'w') as f:
for line in params.SCAFFOLDS:
f.write(f"{line}\n")
works.
|
Snakemake MissingOutputException when writing list to file
|
I'm having a MissingOutputException with what I think is a very basic rule. I'm trying to print a list given through the config file into a file using some Python commands, but Snakemake keeps throwing a MissingOutputException error:
# --- Importing Configuration Files --- #
configfile: "config.yaml"
# -------------------------------------------------
scaffolds = config["Scaffolds"]
localrules: all, MakeScaffoldList
# -------------------------------------------------
rule all:
input:
LIST = "scaffolds.list"
# -------------------------------------------------
rule MakeScaffoldList:
output:
LIST = "scaffolds.list"
params:
SCAFFOLDS = scaffolds
run:
"""
with open(output.LIST, 'w') as f:
for line in params.SCAFFOLDS:
f.write(f"{line}\n")
"""
Error:
[Thu Nov 17 14:08:33 2022]
localrule MakeScaffoldList:
output: scaffolds.list
jobid: 1
resources: mem_mb=27200, disk_mb=1000, tmpdir=/scratch, account=snic2022-22-156, partition=core, time=12:00:00, threads=4
MissingOutputException in line 37 of test.smk:
Job completed successfully, but some output files are missing. Missing files after 5 seconds:
scaffolds.list
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Exiting because a job execution failed. Look above for error message
What am I doing wrong? Is it the Python code wrong?
|
[
"If you want to include Python code directly into your Snakefile you have to loose the quotation marks around your Python code in the run directive:\nscaffolds = [\"dummy\", \"entries\"]\n\nlocalrules: all, MakeScaffoldList\n\n# -------------------------------------------------\n\nrule all:\n input:\n LIST = \"scaffolds.list\"\n\n# -------------------------------------------------\n\nrule MakeScaffoldList:\n output:\n LIST = \"scaffolds.list\"\n params:\n SCAFFOLDS = scaffolds\n run:\n with open(output.LIST, 'w') as f:\n for line in params.SCAFFOLDS:\n f.write(f\"{line}\\n\")\n\nworks.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"snakemake"
] |
stackoverflow_0074475638_python_snakemake.txt
|
Q:
filter out observation of a column which start with values of a list
I have the following dataframe:
import pandas as pd
df = pd.DataFrame({'code': ['52511', '52512', '12525', '13333']})
and the following list:
list = ['525', '13333']
I want to consider only the observations of df that start with the elements of the list.
Desired output:
import pandas as pd
df = pd.DataFrame({'code': ['52511', '52512', '13333']})
A:
The str.startswith method accepts a tuple of prefixes, so you can convert the list to a tuple (the variable is renamed to listt below so it does not shadow the built-in list):
listt = ['525', '13333']
df=df[df['code'].str.startswith(tuple(listt))]
df
'''
code
0 52511
1 52512
3 13333
'''
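Put together, a self-contained version of this approach could look like the sketch below (using the question's data; the variable name prefixes is my own choice):
import pandas as pd

df = pd.DataFrame({'code': ['52511', '52512', '12525', '13333']})
prefixes = ['525', '13333']  # renamed so it does not shadow the builtin `list`

# str.startswith accepts a tuple of prefixes, so convert the list first
filtered = df[df['code'].str.startswith(tuple(prefixes))]
print(filtered)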
|
filter out observation of a column which start with values of a list
|
I have the following dataframe:
import pandas as pd
df = pd.DataFrame({'code': ['52511', '52512', '12525', '13333']})
and the following list:
list = ['525', '13333']
I want to consider only the observations of df that start with the elements of the list.
Desired output:
import pandas as pd
df = pd.DataFrame({'code': ['52511', '52512', '13333']})
|
[
"The startswith function supports tuple type. You can convert list to tuple.\nlistt = ['525', '13333']\ndf=df[df['code'].str.startswith(tuple(listt))]\ndf\n'''\n code\n0 52511\n1 52512\n3 13333\n\n'''\n\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python",
"string"
] |
stackoverflow_0074478159_pandas_python_string.txt
|
Q:
How to generate a time-ordered uid in Python?
Is this possible? I've heard Cassandra has something similar: https://datastax.github.io/python-driver/api/cassandra/util.html
I have been using an ISO timestamp concatenated with a uuid4, but that ended up way too large (58 characters) and probably overkill.
Keeping a sequential number doesn't work in my context (DynamoDB NoSQL)
Worth noting that for my application it doesn't matter if items created in batch/the same second are in a random order, as long as the uids don't collide.
I have no specific restriction on maximum length, ideally I would like to see the different collision chance for different lengths, but it needs to be smaller than 58 (my original attempt)
This is to use with DynamoDB(NoSQL Database) as Sort-key
A:
Why uuid.uuid1 is not sequential
uuid.uuid1(node=None, clock_seq=None) is effectively:
60 bits of timestamp (representing number of 100-ns intervals after 1582-10-15 00:00:00)
14 bits of "clock sequence"
48 bits of "Node info" (generated from network card's mac-address or from hostname or from RNG).
If you don't provide any arguments, then the system function is called to generate the uuid. In that case:
It's unclear if "clock sequence" is sequential or random.
It's unclear if it's safe to be used in multiple processes (can clock_seq be repeated in different processes or not?). In Python 3.7 this info is now available.
If you provide clock_seq or node, then the pure Python implementation is used. In this case, even with a "fixed value" for clock_seq:
timestamp part is guaranteed to be sequential for all the calls in the current process, even in threaded execution.
clock_seq part is randomly generated. But that is not critical anymore because the timestamp is sequential and unique.
It's NOT safe for multiple processes (processes that call uuid1 with the same clock_seq, node might return conflicting values if called during the "same 100-ns time interval")
Solution that reuses uuid.uuid1
It's easy to see that you can make uuid1 sequential by providing clock_seq or node arguments (to use the Python implementation).
import time
from uuid import uuid1, getnode
from random import getrandbits  # needed for _my_clock_seq below
_my_clock_seq = getrandbits(14)
_my_node = getnode()
def sequential_uuid(node=None):
return uuid1(node=node, clock_seq=_my_clock_seq)
# .hex attribute of this value is 32-characters long string
def alt_sequential_uuid(clock_seq=None):
return uuid1(node=_my_node, clock_seq=clock_seq)
if __name__ == '__main__':
from itertools import count
old_n = uuid1() # "Native"
old_s = sequential_uuid() # Sequential
native_conflict_index = None
t_0 = time.time()
for x in count():
new_n = uuid1()
new_s = sequential_uuid()
if old_n > new_n and not native_conflict_index:
native_conflict_index = x
if old_s >= new_s:
print("OOops: non-sequential results for `sequential_uuid()`")
break
if (x >= 10*0x3fff and time.time() - t_0 > 30) or (native_conflict_index and x > 2*native_conflict_index):
print('No issues for `sequential_uuid()`')
break
old_n = new_n
old_s = new_s
print(f'Conflicts for `uuid.uuid1()`: {bool(native_conflict_index)}')
Multiple processes issues
BUT if you are running some parallel processes on the same machine, then:
node which defaults to uuid.get_node() will be the same for all the processes;
clock_seq has small chance to be the same for some processes (chance of 1/16384)
That might lead to conflicts! That is a general concern for using
uuid.uuid1 in parallel processes on the same machine, unless you have access to SafeUUID from Python 3.7.
If you make sure to also set node to unique value for each parallel process that runs this code, then conflicts should not happen.
Even if you are using SafeUUID, and set unique node, it's still possible to have non-sequential (but unique) ids if they are generated in different processes.
If some lock-related overhead is acceptable, then you can store clock_seq in some external atomic storage (for example in a "locked" file) and increment it with each call: this allows having the same value for node on all parallel processes and also makes the id-s sequential. For cases when all parallel processes are subprocesses created using multiprocessing, clock_seq can be "shared" using multiprocessing.Value (see the sketch after the list below).
As a result you always have to remember:
If you are running multiple processes on the same machine, then you must:
Ensure uniqueness of node. The problem for this solution: you can't be sure to have sequential ids from different processes generated during the same 100-ns interval. But this is a very "light" operation executed once on process startup and achieved by: "adding" something to the default node, e.g. int(time.time()*1e9) - 0x118494406d1cc000, or by adding some counter from a machine-level atomic db.
Ensure "machine-level atomic clock_seq" and the same node for all processes on one machine. That way you'll have some overhead for "locking" clock_seq, but id-s are guaranteed to be sequential even if generated in different processes during the same 100-ns interval (unless you are calling uuid from several threads in the same process).
For processes on different machines:
either you have to use some "global counter service";
or it's not possible to have sequential ids generated on different machines during the same 100-ns interval.
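As a rough illustration of the multiprocessing.Value idea mentioned above, here is a minimal sketch (my own, not from the original answer; the helper name and the 14-bit wrap-around are assumptions):
import multiprocessing as mp
from uuid import uuid1, getnode

# Shared, locked counter used as clock_seq; create it before forking workers.
_shared_clock_seq = mp.Value('i', 0)
_node = getnode()

def shared_seq_uuid():
    # Locking makes clock_seq unique across processes, so uuid1 calls
    # falling into the same 100-ns interval cannot collide.
    with _shared_clock_seq.get_lock():
        _shared_clock_seq.value = (_shared_clock_seq.value + 1) & 0x3fff  # 14-bit field
        seq = _shared_clock_seq.value
    return uuid1(node=_node, clock_seq=seq)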
Reducing size of the id
The general approach to generating UUIDs is quite simple, so it's easy to implement something similar from scratch, and for example use fewer bits for the node_info part:
import time
from random import getrandbits
_my_clock_seq = getrandbits(14)
_last_timestamp_part = 0
_used_clock_seq = 0
timestamp_multiplier = 1e7 # I'd recommend to use this value
# Next values are enough up to year 2116:
if timestamp_multiplier == 1e9:
time_bits = 62 # Up to year 2116, also reduces chances for non-sequential id-s generated in different processes
elif timestamp_multiplier == 1e8:
time_bits = 60 # up to year 2335
elif timestamp_multiplier == 1e7:
time_bits = 56 # Up to year 2198.
else:
raise ValueError('Please calculate and set time_bits')
time_mask = 2**time_bits - 1
seq_bits = 16
seq_mask = 2**seq_bits - 1
node_bits = 12
node_mask = 2**node_bits - 1
max_hex_len = len(hex(2**(node_bits+seq_bits+time_bits) - 1)) - 2 # 21
_default_node_number = getrandbits(node_bits) # or `uuid.getnode() & node_mask`
def sequential_uuid(node_number=None):
"""Return 21-characters long hex string that is sequential and unique for each call in current process.
Results from different processes may "overlap" but are guaranteed to
be unique if `node_number` is different in each process.
"""
global _my_clock_seq
global _last_timestamp_part
global _used_clock_seq
if node_number is None:
node_number = _default_node_number
if not 0 <= node_number <= node_mask:
raise ValueError("Node number out of range")
timestamp_part = int(time.time() * timestamp_multiplier) & time_mask
_my_clock_seq = (_my_clock_seq + 1) & seq_mask
if _last_timestamp_part >= timestamp_part:
timestamp_part = _last_timestamp_part
if _used_clock_seq == _my_clock_seq:
timestamp_part = (timestamp_part + 1) & time_mask
else:
_used_clock_seq = _my_clock_seq
_last_timestamp_part = timestamp_part
return hex(
(timestamp_part << (node_bits+seq_bits))
|
(_my_clock_seq << (node_bits))
|
node_number
)[2:]
Notes:
Maybe it's better to simply store the integer value (not a hex-string) in the database
If you are storing it as text/char, then it's better to convert the integer to a base64-string instead of converting it to a hex-string. That way it will be shorter (21 chars hex-string → 16 chars b64-encoded string):
from base64 import b64encode
total_bits = time_bits+seq_bits+node_bits
total_bytes = total_bits // 8 + 1 * bool(total_bits % 8)
def int_to_b64(int_value):
return b64encode(int_value.to_bytes(total_bytes, 'big'))
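For example, one might use it like this (my own usage sketch, assuming the sequential_uuid and int_to_b64 definitions above):
key_int = int(sequential_uuid(), 16)  # hex string back to an integer
key_b64 = int_to_b64(key_int)         # 11 bytes -> 16 base64 characters (incl. padding)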
Collision chances
Single process: collisions not possible
Multiple processes with manually set unique clock_seq or unique node in each process: collisions not possible
Multiple processes with randomly set node (48-bits, "fixed" in time):
Chance to have the node collision in several processes:
in 2 processes out of 10000: ~0.000018%
in 2 processes out of 100000: 0.0018%
Chance to have single collision of the id per second in 2 processes with the "colliding" node:
for "timestamp" interval of 100-ns (default for uuid.uuid1 , and in my code when timestamp_multiplier == 1e7): proportional to 3.72e-19 * avg_call_frequency²
for "timestamp" interval of 10-ns (timestamp_multiplier == 1e8): proportional to 3.72e-21 * avg_call_frequency²
A:
In the article you've linked too, the cassandra.util.uuid_from_time(time_arg, node=None, clock_seq=None)[source] seems to be exactly what you're looking for.
def uuid_from_time(time_arg, node=None, clock_seq=None):
"""
Converts a datetime or timestamp to a type 1 :class:`uuid.UUID`.
:param time_arg:
The time to use for the timestamp portion of the UUID.
This can either be a :class:`datetime` object or a timestamp
in seconds (as returned from :meth:`time.time()`).
:type datetime: :class:`datetime` or timestamp
:param node:
None integer for the UUID (up to 48 bits). If not specified, this
field is randomized.
:type node: long
:param clock_seq:
Clock sequence field for the UUID (up to 14 bits). If not specified,
a random sequence is generated.
:type clock_seq: int
:rtype: :class:`uuid.UUID`
"""
if hasattr(time_arg, 'utctimetuple'):
seconds = int(calendar.timegm(time_arg.utctimetuple()))
microseconds = (seconds * 1e6) + time_arg.time().microsecond
else:
microseconds = int(time_arg * 1e6)
# 0x01b21dd213814000 is the number of 100-ns intervals between the
# UUID epoch 1582-10-15 00:00:00 and the Unix epoch 1970-01-01 00:00:00.
intervals = int(microseconds * 10) + 0x01b21dd213814000
time_low = intervals & 0xffffffff
time_mid = (intervals >> 32) & 0xffff
time_hi_version = (intervals >> 48) & 0x0fff
if clock_seq is None:
clock_seq = random.getrandbits(14)
else:
if clock_seq > 0x3fff:
raise ValueError('clock_seq is out of range (need a 14-bit value)')
clock_seq_low = clock_seq & 0xff
clock_seq_hi_variant = 0x80 | ((clock_seq >> 8) & 0x3f)
if node is None:
node = random.getrandbits(48)
return uuid.UUID(fields=(time_low, time_mid, time_hi_version,
clock_seq_hi_variant, clock_seq_low, node), version=1)
There's nothing Cassandra specific to a Type 1 UUID...
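For instance (my own usage sketch, assuming the function above together with its calendar, random, and uuid imports):
import time

u1 = uuid_from_time(time.time())
time.sleep(0.01)
u2 = uuid_from_time(time.time())
# the 60-bit time fields are ordered even though node/clock_seq are random
assert u1.time < u2.time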
A:
You should be able to encode a timestamp precise to the second for a time range of 135 years in 32 bits. That will only take 8 characters to represent in hex. Added to the hex representation of the uuid (32 hex characters) that will amount to only 40 hex characters.
Encoding the time stamp that way requires that you pick a base year (e.g. 2000) and compute the number of days up to the current date (time stamp). Multiply this number of days by 86400, then add the seconds since midnight. This will give you values that are less than 2^32 until you reach year 2135.
Note that you have to keep leading zeroes in the hex encoded form of the timestamp prefix in order for alphanumeric sorting to preserve the chronology.
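A minimal sketch of that 40-character scheme (the base year 2000 and the helper name are my own choices, not from the answer):
import datetime as dt
import uuid

BASE = dt.datetime(2000, 1, 1)

def timestamped_key(now=None):
    """8 hex chars of seconds-since-2000 + 32 hex chars of uuid4 = 40 chars."""
    now = now or dt.datetime.utcnow()
    seconds = int((now - BASE).total_seconds())  # < 2**32 until about year 2135
    # %08x keeps the leading zeroes so alphanumeric sorting preserves chronology
    return f"{seconds:08x}{uuid.uuid4().hex}"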
With a few bits more in the time stamp, you could increase the time range and/or the precision. With 8 more bits (two hex characters), you could go up to 270 years with a precision to the hundredth of a second.
Note that you don't have to model the fraction of seconds in a base 10 range. You will get optimal bit usage by breaking it down in 128ths instead of 100ths for the same number of characters. With the doubling of the year range, this still fits within 8 bits (2 hex characters)
The collision probability, within the time precision (i.e. per second or per 100th or 128th of a second) is driven by the range of the uuid so it will be 1 in 2^128 for the chosen precision. Increasing the precision of the time stamp has the most impact on reducing the collision chances. It is also the factor that has the lowest impact on total size of the key.
More efficient character encoding: 27 to 29 character keys
You could significantly reduce the size of the key by encoding it in base 64 instead of 16, which would give you 27 to 29 characters (depending on your choice of precision).
Note that, for the timestamp part, you need to use an encoding function that takes an integer as input and that preserves the collating sequence of digit characters.
For example:
def encode64(number, size):
chars = "+-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
result = list()
for _ in range(size):
result.append(chars[number%64])
number //= 64
return "".join(reversed(result))
a = encode64(1234567890,6) # '-7ZU9G'
b = encode64(9876543210,6) # '7Ag-Pe'
print(a < b) # True
u = encode64(int(uuid.uuid4()),22) # '1QA2LtMg30ztnugxaokVMk'
key = a+u # '-7ZU9G1QA2LtMg30ztnugxaokVMk' (28 characters)
You can save some more characters by combining the time stamp and uuid into a single number before encoding instead of concatenating the two encoded values.
The encode64() function needs one character every 6 bits.
So, for 135 years with precision to the second: (32+128)/6 = 26.7 --> 27 characters
instead of (32/6 = 5.3 --> 6) + (128/6 = 21.3 --> 22) ==> 28 characters
uid = uuid.uuid4()
timeStamp = daysSince2000 * 86400 + int(secondsSinceMidnight)
key = encode64( timeStamp<<128 | int(uid) ,27)
with a 270 year span and 128th of a second precision: (40+128)/6 = 28 characters
uid = uuid.uuid4()
timeStamp = daysSince2000 * 86400 + int(secondsSinceMidnight)
precision = 128
timeStamp = timeStamp * precision + int(fractionOfSecond * precision)
key = encode64( timeStamp<<128 | int(uid) ,28)
With 29 characters you can raise precision to 1024th of a second and year range to 2160 years.
UUID masking: 17 to 19 characters keys
To be even more efficient, you could strip out the first 64 bits of the uuid (which is already a time stamp) and combine it with your own time stamp. This would give you keys with a length of 17 to 19 characters with practically no loss of collision avoidance (depending on your choice of precision).
mask = (1<<64)-1
key = encode64( timeStamp<<64 | (int(uid) & mask) ,19)
Integer/Numeric keys ?
As a final note, if your database supports very large integers or numeric fields (140 bits or more) as keys, you don't have to convert the combined number to a string. Just use it directly as the key. The numerical sequence of timeStamp<<128 | int(uid) will respect the chronology.
A:
The uuid6 module (pip install uuid6) solves the problem. It aims at implementing the corresponding draft for a new uuid variant standard, see here.
Example code:
import time
import uuid6

for i in range(0, 30):
    u = uuid6.uuid7()
    print(u)
    time.sleep(0.1)
The package suggests using uuid6.uuid7():
Implementations SHOULD utilize UUID version 7 over UUID version 1 and
6 if possible.
UUID version 7 features a time-ordered value field derived from the
widely implemented and well known Unix Epoch timestamp source, the
number of milliseconds since midnight 1 Jan 1970 UTC, leap
seconds excluded. As well as improved entropy characteristics over
versions 1 or 6.
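A quick sanity check of the time-ordering property (my own example; the sleep keeps the millisecond timestamps distinct):
import time
import uuid6

ids = []
for _ in range(5):
    ids.append(uuid6.uuid7())
    time.sleep(0.01)  # ensure distinct millisecond timestamps

assert ids == sorted(ids)  # creation order matches sort order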
|
How to generate a time-ordered uid in Python?
|
Is this possible? I've heard Cassandra has something similar: https://datastax.github.io/python-driver/api/cassandra/util.html
I have been using an ISO timestamp concatenated with a uuid4, but that ended up way too large (58 characters) and probably overkill.
Keeping a sequential number doesn't work in my context (DynamoDB NoSQL)
Worth noting that for my application it doesn't matter if items created in batch/the same second are in a random order, as long as the uids don't collide.
I have no specific restriction on maximum length, ideally I would like to see the different collision chance for different lengths, but it needs to be smaller than 58 (my original attempt)
This is to use with DynamoDB(NoSQL Database) as Sort-key
|
[
"Why uuid.uuid1 is not sequential\nuuid.uuid1(node=None, clock_seq=None) is effectively:\n\n60 bits of timestamp (representing number of 100-ns intervals after 1582-10-15 00:00:00)\n14 bits of \"clock sequence\"\n48 bits of \"Node info\" (generated from network card's mac-address or from hostname or from RNG).\n\nIf you don't provide any arguments, then System function is called to generate uuid. In that case:\n\nIt's unclear if \"clock sequence\" is sequential or random.\nIt's unclear if it's safe to be used in multiple processes (can clock_seq be repeated in different processes or not?). In Python 3.7 this info is now available.\n\nIf you provide clock_seq or node, then \"pure python implementation is used\". IN this case even with \"fixed value\" for clock_seq:\n\ntimestamp part is guaranteed to be sequential for all the calls in current process even in threaded execution.\nclock_seq part is randomly generated. But that is not critical annymore because timestamp is sequential and unique.\nIt's NOT safe for multiple processes (processes that call uuid1 with the same clock_seq, node might return conflicting values if called during the \"same 100-ns time interval\")\n\nSolution that reuses uuid.uuid1\nIt's easy to see, that you can make uuid1 sequential by providing clock_seq or node arguments (to use python implementation).\nimport time\n\nfrom uuid import uuid1, getnode\n\n_my_clock_seq = getrandbits(14)\n_my_node = getnode()\n\n\ndef sequential_uuid(node=None):\n return uuid1(node=node, clock_seq=_my_clock_seq)\n # .hex attribute of this value is 32-characters long string\n\n\ndef alt_sequential_uuid(clock_seq=None):\n return uuid1(node=_my_node, clock_seq=clock_seq)\n\n\nif __name__ == '__main__':\n from itertools import count\n old_n = uuid1() # \"Native\"\n old_s = sequential_uuid() # Sequential\n\n native_conflict_index = None\n\n t_0 = time.time()\n\n for x in count():\n new_n = uuid1()\n new_s = sequential_uuid()\n\n if old_n > new_n and not native_conflict_index:\n native_conflict_index = x\n\n if old_s >= new_s:\n print(\"OOops: non-sequential results for `sequential_uuid()`\")\n break\n\n if (x >= 10*0x3fff and time.time() - t_0 > 30) or (native_conflict_index and x > 2*native_conflict_index):\n print('No issues for `sequential_uuid()`')\n break\n\n old_n = new_n\n old_s = new_s\n\n print(f'Conflicts for `uuid.uuid1()`: {bool(native_conflict_index)}')\n\n\nMultiple processes issues\nBUT if you are running some parallel processes on the same machine, then:\n\nnode which defaults to uuid.get_node() will be the same for all the processes;\nclock_seq has small chance to be the same for some processes (chance of 1/16384)\n\nThat might lead to conflicts! That is general concern for using\n uuid.uuid1 in parallel processes on the same machine unless you have access to SafeUUID from Python3.7.\nIf you make sure to also set node to unique value for each parallel process that runs this code, then conflicts should not happen.\nEven if you are using SafeUUID, and set unique node, it's still possible to have non-sequential (but unique) ids if they are generated in different processes.\nIf some lock-related overhead is acceptable, then you can store clock_seq in some external atomic storage (for example in \"locked\" file) and increment it with each call: this allows to have same value for node on all parallel processes and also will make id-s sequential. 
For cases when all parallel processes are subprocesses created using multiprocessing: clock_seq can be \"shared\" using multiprocessing.Value\nAs a result you always have to remember:\n\nIf you are running multiple processes on the same machine, then you must:\n\nEnsure uniqueness of node. The problem for this solution: you can't be sure to have sequential ids from different processes generated during the same 100-ns interval. But this is very \"light\" operation executed once on process startup and achieved by: \"adding\" something to default node, e.g. int(time.time()*1e9) - 0x118494406d1cc000, or by adding some counter from machine-level atomic db.\nEnsure \"machine-level atomic clock_seq\" and the same node for all processes on one machine. That way you'll have some overhead for \"locking\" clock_seq, but id-s are guaranteed to be sequential even if generated in different processes during the same 100-ns interval (unless you are calling uuid from several threads in the same process).\n\nFor processes on different machines:\n\neither you have to use some \"global counter service\";\nor it's not possible to have sequential ids generated on different machines during the same 100-ns interval.\n\n\nReducing size of the id\nGeneral approach to generate UUIDs is quite simple, so it's easy to implement something similar from scratch, and for example use less bits for node_info part:\nimport time\nfrom random import getrandbits\n\n_my_clock_seq = getrandbits(14)\n_last_timestamp_part = 0\n_used_clock_seq = 0\n\n\ntimestamp_multiplier = 1e7 # I'd recommend to use this value\n\n# Next values are enough up to year 2116:\nif timestamp_multiplier == 1e9:\n time_bits = 62 # Up to year 2116, also reduces chances for non-sequential id-s generated in different processes\nelif timestamp_multiplier == 1e8:\n time_bits = 60 # up to year 2335\nelif timestamp_multiplier == 1e7:\n time_bits = 56 # Up to year 2198.\nelse:\n raise ValueError('Please calculate and set time_bits')\n\ntime_mask = 2**time_bits - 1\n\nseq_bits = 16\nseq_mask = 2**seq_bits - 1\n\nnode_bits = 12\nnode_mask = 2**node_bits - 1\n\nmax_hex_len = len(hex(2**(node_bits+seq_bits+time_bits) - 1)) - 2 # 21\n\n_default_node_number = getrandbits(node_bits) # or `uuid.getnode() & node_mask`\n\n\ndef sequential_uuid(node_number=None):\n \"\"\"Return 21-characters long hex string that is sequential and unique for each call in current process.\n\n Results from different processes may \"overlap\" but are guaranteed to\n be unique if `node_number` is different in each process.\n\n \"\"\"\n global _my_clock_seq\n global _last_timestamp_part\n global _used_clock_seq\n if node_number is None:\n node_number = _default_node_number\n if not 0 <= node_number <= node_mask:\n raise ValueError(\"Node number out of range\")\n\n timestamp_part = int(time.time() * timestamp_multiplier) & time_mask\n _my_clock_seq = (_my_clock_seq + 1) & seq_mask\n\n if _last_timestamp_part >= timestamp_part:\n timestamp_part = _last_timestamp_part\n if _used_clock_seq == _my_clock_seq:\n timestamp_part = (timestamp_part + 1) & time_mask\n else:\n _used_clock_seq = _my_clock_seq\n\n _last_timestamp_part = timestamp_part\n\n return hex(\n (timestamp_part << (node_bits+seq_bits))\n |\n (_my_clock_seq << (node_bits))\n |\n node_number\n )[2:]\n\n\nNotes:\n\nMaybe it's better to simply store integer value (not hex-string) in the database\nIf you are storing it as text/char, then its better to convert integer to base64-string instead of converting it to hex-string. 
That way it will be shorter (21 chars hex-string → 16 chars b64-encoded string):\n\nfrom base64 import b64encode\n\ntotal_bits = time_bits+seq_bits+node_bits\ntotal_bytes = total_bits // 8 + 1 * bool(total_bits % 8)\n\ndef int_to_b64(int_value):\n return b64encode(int_value.to_bytes(total_bytes, 'big'))\n\n\nCollision chances\n\nSingle process: collisions not possible\nMultiple processes with manually set unique clock_seq or unique node in each process: collisions not possible\nMultiple processes with randomly set node (48-bits, \"fixed\" in time):\n\nChance to have the node collision in several processes:\n\nin 2 processes out of 10000: ~0.000018%\nin 2 processes out of 100000: 0.0018%\n\nChance to have single collision of the id per second in 2 processes with the \"colliding\" node:\n\nfor \"timestamp\" interval of 100-ns (default for uuid.uuid1 , and in my code when timestamp_multiplier == 1e7): proportional to 3.72e-19 * avg_call_frequency²\nfor \"timestamp\" interval of 10-ns (timestamp_multiplier == 1e8): proportional to 3.72e-21 * avg_call_frequency²\n\n\n\n",
"In the article you've linked too, the cassandra.util.uuid_from_time(time_arg, node=None, clock_seq=None)[source] seems to be exactly what you're looking for.\ndef uuid_from_time(time_arg, node=None, clock_seq=None):\n \"\"\"\n Converts a datetime or timestamp to a type 1 :class:`uuid.UUID`.\n\n :param time_arg:\n The time to use for the timestamp portion of the UUID.\n This can either be a :class:`datetime` object or a timestamp\n in seconds (as returned from :meth:`time.time()`).\n :type datetime: :class:`datetime` or timestamp\n\n :param node:\n None integer for the UUID (up to 48 bits). If not specified, this\n field is randomized.\n :type node: long\n\n :param clock_seq:\n Clock sequence field for the UUID (up to 14 bits). If not specified,\n a random sequence is generated.\n :type clock_seq: int\n\n :rtype: :class:`uuid.UUID`\n\n \"\"\"\n if hasattr(time_arg, 'utctimetuple'):\n seconds = int(calendar.timegm(time_arg.utctimetuple()))\n microseconds = (seconds * 1e6) + time_arg.time().microsecond\n else:\n microseconds = int(time_arg * 1e6)\n\n # 0x01b21dd213814000 is the number of 100-ns intervals between the\n # UUID epoch 1582-10-15 00:00:00 and the Unix epoch 1970-01-01 00:00:00.\n intervals = int(microseconds * 10) + 0x01b21dd213814000\n\n time_low = intervals & 0xffffffff\n time_mid = (intervals >> 32) & 0xffff\n time_hi_version = (intervals >> 48) & 0x0fff\n\n if clock_seq is None:\n clock_seq = random.getrandbits(14)\n else:\n if clock_seq > 0x3fff:\n raise ValueError('clock_seq is out of range (need a 14-bit value)')\n\n clock_seq_low = clock_seq & 0xff\n clock_seq_hi_variant = 0x80 | ((clock_seq >> 8) & 0x3f)\n\n if node is None:\n node = random.getrandbits(48)\n\n return uuid.UUID(fields=(time_low, time_mid, time_hi_version,\n clock_seq_hi_variant, clock_seq_low, node), version=1)\n\nThere's nothing Cassandra specific to a Type 1 UUID...\n",
"You should be able to encode a timestamp precise to the second for a time range of 135 years in 32 bits. That will only take 8 characters to represent in hex. Added to the hex representation of the uuid (32 hex characters) that will amount to only 40 hex characters.\nEncoding the time stamp that way requires that you pick a base year (e.g. 2000) and compute the number of days up to the current date (time stamp). Multiply this number of days by 86400, then add the seconds since midnight. This will give you values that are less than 2^32 until you reach year 2135. \nNote that you have to keep leading zeroes in the hex encoded form of the timestamp prefix in order for alphanumeric sorting to preserve the chronology.\nWith a few bits more in the time stamp, you could increase the time range and/or the precision. With 8 more bits (two hex characters), you could go up to 270 years with a precision to the hundredth of a second.\nNote that you don't have to model the fraction of seconds in a base 10 range. You will get optimal bit usage by breaking it down in 128ths instead of 100ths for the same number of characters. With the doubling of the year range, this still fits within 8 bits (2 hex characters)\nThe collision probability, within the time precision (i.e. per second or per 100th or 128th of a second) is driven by the range of the uuid so it will be 1 in 2^128 for the chosen precision. Increasing the precision of the time stamp has the most impact on reducing the collision chances. It is also the factor that has the lowest impact on total size of the key.\nMore efficient character encoding: 27 to 29 character keys\nYou could significantly reduce the size of the key by encoding it in base 64 instead of 16 which would give you 27 to 29 characters (depending on you choice of precision) \nNote that, for the timestamp part, you need to use an encoding function that takes an integer as input and that preserves the collating sequence of digit characters.\nFor example:\ndef encode64(number, size):\n chars = \"+-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz\"\n result = list()\n for _ in range(size):\n result.append(chars[number%64])\n number //= 64\n return \"\".join(reversed(result))\n\na = encode64(1234567890,6) # '-7ZU9G'\nb = encode64(9876543210,6) # '7Ag-Pe'\nprint(a < b) # True\n\nu = encode64(int(uuid.uuid4()),22) # '1QA2LtMg30ztnugxaokVMk'\n\nkey = a+u # '-7ZU9G1QA2LtMg30ztnugxaokVMk' (28 characters)\n\nYou can save some more characters by combining the time stamp and uuid into a single number before encoding instead of concatenating the two encoded values.\nThe encode64() function needs one character every 6 bits.\nSo, for 135 years with precision to the second: (32+128)/6 = 26.7 --> 27 characters\ninstead of (32/6 = 5.3 --> 6) + (128/6 = 21.3 --> 22) ==> 28 characters\nuid = uuid.uuid4()\ntimeStamp = daysSince2000 * 86400 + int(secondsSinceMidnight)\nkey = encode64( timeStamp<<128 | int(uid) ,27)\n\nwith a 270 year span and 128th of a second precision: (40+128)/6 = 28 characters\nuid = uuid.uuid4()\ntimeStamp = daysSince2000 * 86400 + int(secondsSinceMidnight)\nprecision = 128\ntimeStamp = timeStamp * precision + int(factionOfSecond * precision)\nkey = encode64( timeStamp<<128 | int(uid) ,28)\n\nWith 29 characters you can raise precision to 1024th of a second and year range to 2160 years.\nUUID masking: 17 to 19 characters keys\nTo be even more efficient, you could strip out the first 64 bits of the uuid (which is already a time stamp) and combine it with your own time 
stamp. This would give you keys with a length of 17 to 19 characters with practically no loss of collision avoidance (depending on your choice of precision).\nmask = (1<<64)-1\nkey = encode64( timeStamp<<64 | (int(uid) & mask) ,19)\n\nInteger/Numeric keys ?\nAs a final note, if your database supports very large integers or numeric fields (140 bits or more) as keys, you don't have to convert the combined number to a string. Just use it directly as the key. The numerical sequence of timeStamp<<128 | int(uid) will respect the chronology.\n",
"The uuid6 module (pip install uuid6) solves the problem. It aims at implementing the corresponding draft for a new uuid variant standard, see here.\nExample code:\nimport uuid6\n\nfor i in range(0, 30):\n u = uuid6.uuid7()\n print(u)\n time.sleep(0.1)\n\nThe package suggests to use uuid6.uuid7():\n\nImplementations SHOULD utilize UUID version 7 over UUID version 1 and\n6 if possible.\nUUID version 7 features a time-ordered value field derived from the\nwidely implemented and well known Unix Epoch timestamp source, the\nnumber of milliseconds seconds since midnight 1 Jan 1970 UTC, leap\nseconds excluded. As well as improved entropy characteristics over\nversions 1 or 6.\n\n"
] |
[
11,
1,
1,
0
] |
[] |
[] |
[
"amazon_dynamodb",
"python",
"python_3.x",
"uuid"
] |
stackoverflow_0056119272_amazon_dynamodb_python_python_3.x_uuid.txt
|
Q:
Timestamp overlapping matplotlib
I am trying to create a graph using matplotlib with number of requests (y-axis) vs timestamp (x-axis in HH:MM format).
This graph will show the pattern for all the requests received between 6:00 AM and 6:00 PM. Below is the sample data. The actual data has more than 500 entries.
time_stamp = ['06:02', '06:03', '06:12', '06:16', '06:17', '06:27', '06:28', '06:30', '06:31', '06:34', '06:35', '06:36', '06:37', '06:38', '06:39', '06:40', '06:41', '06:42', '06:43']
requests = [74, 20, 2, 1, 11, 9, 34, 3, 5, 4, 28, 77, 75, 73, 122, 99, 170, 79, 44, 79, 100, 58, 104, 84, 77, 98, 27]
Below is the script which I am using to generate the graph. The problem I am currently facing is that all the timestamps overlap on the x-axis.
Script:
import matplotlib.pyplot as plt
TITLE = 'Time (Per Minute) Vs Num of Requests Graph'
X_AXIS_NAME = 'TimeStamps (per minute)'
Y_AXIS_NAME = 'No. of Requests'
time_stamp = ['06:02', '06:03', '06:12', '06:16', '06:17', '06:27', '06:28',
'06:30', '06:31', '06:34', '06:35', '06:36', '06:37', '06:38', '06:39',
'06:40', '06:41', '06:42', '06:43', '06:44', '06:45', '06:46', '06:47',
'06:48', '06:49', '06:50', '06:51', '06:52', '06:53', '06:54', '06:55',
'06:56', '06:57', '06:58', '06:59', '07:00', '07:01']
requests = [74, 20, 2, 1, 11, 9, 34, 3, 5, 4, 28, 77, 75, 73]
fig, ax = plt.subplots()
plt.plot(time_stamp, requests)
fig.autofmt_xdate()
plt.xlabel(X_AXIS_NAME)
plt.ylabel(Y_AXIS_NAME)
plt.title(TITLE)
plt.show()
fig.savefig('graph.png', dpi=fig.dpi)
Generated Graph:
And this is the graph which I actually want to generate. This graph has been generated using Excel.
Expected Graph:
The timestamps do not overlap.
EDIT 1:
dates = []
for ts in time_stamp:
dates.append( datetime.strptime(ts, '%H:%M'))
mp_dates = matplotlib.dates.date2num(dates)
matplotlib.pyplot.plot_date(mp_dates, requests)
EDIT 2:
dates = []
for ts in time_stamp:
local_d = datetime.strptime(ts, '%H:%M')
dates.append( local_d)
fig, ax = plt.subplots()
plt.setp( ax.xaxis.get_majorticklabels(), rotation=90)
plt.plot(dates, requests)
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M'))
#fig.autofmt_xdate()
plt.xlabel(X_AXIS_NAME)
plt.ylabel(Y_AXIS_NAME)
plt.title(TITLE)
# function to show the plot
plt.show()
fig.savefig('graph.png', dpi=fig.dpi)
The only missing piece is to reduce the interval between two ticks. Currently it is 2 hours.
Any help or pointer in this regards is highly appreciated.
A:
To fully rotate the labels like in your Excel plot, try this:
plt.setp( ax.xaxis.get_majorticklabels(), rotation=90)
A:
After doing more research finally I am able to plot it.
from datetime import datetime

import matplotlib.pyplot as plt
import matplotlib.dates as mdates

dates = []
for ts in time_stamp:
local_d = datetime.strptime(ts, '%H:%M')
dates.append( local_d)
fig, ax = plt.subplots()
plt.setp( ax.xaxis.get_majorticklabels(), rotation=90)
plt.plot(dates, requests)
ax.xaxis.set_major_locator(mdates.MinuteLocator(interval=20))
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M'))
plt.xlabel(X_AXIS_NAME)
plt.ylabel(Y_AXIS_NAME)
plt.title(TITLE)
plt.show()
fig.savefig('graph.png', dpi=fig.dpi)
Thanks to the community!
A:
You can actually use matplotlib's autofmt_xdate() method to solve the problem you're facing.
Just add following line before plt.show()
plt.gcf().autofmt_xdate()
The defaults work well, so most probably you can just call it without any parameters, but for the sake of completeness, you can use parameters specified below.
Quoting matplotlib documentation (v.3.1.1):
autofmt_xdate(self, bottom=0.2, rotation=30, ha='right', which=None)
Date ticklabels often overlap, so it is useful to rotate them and right align them. Also, a common use case is a number of subplots with shared xaxes where the x-axis is date data. The ticklabels are often long, and it helps to rotate them on the bottom subplot and turn them off on other subplots, as well as turn off xlabels.
Parameters:
bottom : scalar
The bottom of the subplots for subplots_adjust().
rotation : angle in degrees
The rotation of the xtick labels.
ha : string
The horizontal alignment of the xticklabels.
which : {None, 'major', 'minor', 'both'}
Selects which ticklabels to rotate. Default is None which works the same as major
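For instance, the rotation and alignment can be tuned to mimic the vertical labels in the Excel chart (the parameter values here are just an illustration):
plt.gcf().autofmt_xdate(rotation=90, ha='center')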
A:
The problem is not the amount of data but the density of tick labels. autofmt_xdate even fails with a few labelled ticks if the figure is narrow. So the solution is to reduce the number of labelled ticks. No rotation is needed if only full hours are labelled, without printing minutes. Note that MinuteLocator(interval=60) would fail -- silently placing ticks with an offset of a fractional hour.
import datetime as dt
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from numpy import arange # for fake x data
y = [3, 30, 3000, 2900, 3100, 1000, 3000, 2000, 200, 20, 2] # roughly
x = arange(len(y))*dt.timedelta(seconds=4800) + dt.datetime.strptime('05:50', '%H:%M')
fig, ax = plt.subplots(figsize=(10,4))
ax.set_title('Request Load (<server> <service> <date>)')
ax.set_xlabel('time of day in hours (timezone)')
ax.set_ylabel('requests per minute')
ax.plot(x, y)
ax.xaxis.set_minor_locator(mdates.MinuteLocator(interval=15))
ax.xaxis.set_major_locator(mdates.HourLocator(interval=1))
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H'))
ax.set_ylim(0)
fig.tight_layout()
fig.show()
|
Timestamp overlapping matplotlib
|
I am trying to create a graph using matplotlib with number of requests (y-axis) vs timestamp (x-axis in HH:MM format).
This graph will show the pattern for all the requests received between 6:00 AM and 6:00 PM. Below is the sample data. The actual data has more than 500 entries.
time_stamp = ['06:02', '06:03', '06:12', '06:16', '06:17', '06:27', '06:28', '06:30', '06:31', '06:34', '06:35', '06:36', '06:37', '06:38', '06:39', '06:40', '06:41', '06:42', '06:43']
requests = [74, 20, 2, 1, 11, 9, 34, 3, 5, 4, 28, 77, 75, 73, 122, 99, 170, 79, 44, 79, 100, 58, 104, 84, 77, 98, 27]
Below is the script which I am using to generate the graph. The problem I am currently facing is that all the timestamps overlap on the x-axis.
Script:
import matplotlib.pyplot as plt
TITLE = 'Time (Per Minute) Vs Num of Requests Graph'
X_AXIS_NAME = 'TimeStamps (per minute)'
Y_AXIS_NAME = 'No. of Requests'
time_stamp = ['06:02', '06:03', '06:12', '06:16', '06:17', '06:27', '06:28',
'06:30', '06:31', '06:34', '06:35', '06:36', '06:37', '06:38', '06:39',
'06:40', '06:41', '06:42', '06:43', '06:44', '06:45', '06:46', '06:47',
'06:48', '06:49', '06:50', '06:51', '06:52', '06:53', '06:54', '06:55',
'06:56', '06:57', '06:58', '06:59', '07:00', '07:01']
requests = [74, 20, 2, 1, 11, 9, 34, 3, 5, 4, 28, 77, 75, 73]
fig, ax = plt.subplots()
plt.plot(time_stamp, requests)
fig.autofmt_xdate()
plt.xlabel(X_AXIS_NAME)
plt.ylabel(Y_AXIS_NAME)
plt.title(TITLE)
plt.show()
fig.savefig('graph.png', dpi=fig.dpi)
Generated Graph:
And this is the graph which I actually want to generate. This graph has been generated using Excel.
Expected Graph:
The timestamps do not overlap.
EDIT 1:
dates = []
for ts in time_stamp:
dates.append( datetime.strptime(ts, '%H:%M'))
mp_dates = matplotlib.dates.date2num(dates)
matplotlib.pyplot.plot_date(mp_dates, requests)
EDIT 2:
dates = []
for ts in time_stamp:
local_d = datetime.strptime(ts, '%H:%M')
dates.append( local_d)
fig, ax = plt.subplots()
plt.setp( ax.xaxis.get_majorticklabels(), rotation=90)
plt.plot(dates, requests)
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M'))
#fig.autofmt_xdate()
plt.xlabel(X_AXIS_NAME)
plt.ylabel(Y_AXIS_NAME)
plt.title(TITLE)
# function to show the plot
plt.show()
fig.savefig('graph.png', dpi=fig.dpi)
The only missing piece is to reduce the interval between two ticks. Currently it is 2 hours.
Any help or pointer in this regards is highly appreciated.
|
[
"For just fully rotating the labels like in your excel plot. Try this.\nplt.setp( ax.xaxis.get_majorticklabels(), rotation=90)\n\n",
"After doing more research finally I am able to plot it.\ndates = []\nfor ts in time_stamp:\n local_d = datetime.strptime(ts, '%H:%M')\n dates.append( local_d)\n\nfig, ax = plt.subplots()\nplt.setp( ax.xaxis.get_majorticklabels(), rotation=90)\nplt.plot(dates, requests)\nax.xaxis.set_major_locator(mdates.MinuteLocator(interval=20))\nax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M'))\nplt.xlabel(X_AXIS_NAME)\nplt.ylabel(Y_AXIS_NAME)\nplt.title(TITLE)\nplt.show()\nfig.savefig('graph.png', dpi=fig.dpi)\n\n\nThanks to the community!\n",
"You can actually use matplotlib's autofmt_xdate() method to solve the problem you're facing.\nJust add following line before plt.show()\nplt.gcf().autofmt_xdate()\n\nThe defaults work well, so most probably you can just call it without any parameters, but for the sake of completeness, you can use parameters specified below.\nQuoting matplotlib documentation (v.3.1.1):\n\nautofmt_xdate(self, bottom=0.2, rotation=30, ha='right', which=None)\nDate ticklabels often overlap, so it is useful to rotate them and right align them. Also, a common use case is a number of subplots with shared xaxes where the x-axis is date data. The ticklabels are often long, and it helps to rotate them on the bottom subplot and turn them off on other subplots, as well as turn off xlabels.\nParameters:\n\nbottom : scalar\nThe bottom of the subplots for subplots_adjust().\n\nrotation : angle in degrees\nThe rotation of the xtick labels.\n\nha : string\nThe horizontal alignment of the xticklabels.\n\nwhich : {None, 'major', 'minor', 'both'}\nSelects which ticklabels to rotate. Default is None which works the same as major\n\n\n\n",
"The problem is not the many data but the density of tick labels. autofmt_xdate even fails with a few labelled ticks if the figure is narrow. So the solution is to reduce the number of labelled ticks. No rotation is needed if only full hours are labelled without printing minutes. Note that MinuteLocator(interval=60) would fail -- silently placing ticks with an offset of a fractional hour.\nimport datetime as dt\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nfrom numpy import arange # for fake x data\n\ny = [3, 30, 3000, 2900, 3100, 1000, 3000, 2000, 200, 20, 2] # roughly\nx = arange(len(y))*dt.timedelta(seconds=4800) + dt.datetime.strptime('05:50', '%H:%M')\n\nfig, ax = plt.subplots(figsize=(10,4))\nax.set_title('Request Load (<server> <service> <date>)')\nax.set_xlabel('time of day in hours (timezone)')\nax.set_ylabel('requests per minute')\nax.plot(x, y)\nax.xaxis.set_minor_locator(mdates.MinuteLocator(interval=15))\nax.xaxis.set_major_locator(mdates.HourLocator(interval=1))\nax.xaxis.set_major_formatter(mdates.DateFormatter('%H'))\nax.set_ylim(0)\nfig.tight_layout()\nfig.show()\n\n\n"
] |
[
0,
0,
0,
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0049947615_matplotlib_python.txt
|
Q:
Angle of reflection relative to coordinate system
I have a two 2D points u = (ux, uy) and v = (vx, vy) that define a line segment.
Additionally I have an angle θ that is defined relative to the coordinate system (angle to the x-axis), indicating the direction of a moving particle.
Is there a simple way to find the angle of reflection resulting (again, relative to the coordinate system) from that particle bouncing off the line segment?
So far I have found the angle of the line segment θuv = numpy.arctan2(vy-uy, vx-ux), taken the difference Δθ = θuv - θ and set the resulting angle to θ_reflected = θ - 2*Δθ.
Sometimes, this seems to work, but other times it's completely off.
A:
Segment has length
leng = hypot(vy-uy, vx-ux)
and unit direction vector (perhaps in numpy there is ready function like normalized)
dx = (vx-ux) / leng
dy = (vy-uy) / leng
Unit normal to segment
nx = - dy
ny = dx
particle direction vector is
px = cos(θ)
py = sin(θ)
Reflected vector
dott = dot(p, n) = px * nx + py * ny
rx = px - 2 * dott * nx
ry = py - 2 * dott * ny
If you need angle
θ_reflected = atan2(ry, rx)
but sometimes particle direction vector components are more useful
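Collected into a small numpy function (my own packaging of the steps above, not part of the original answer):
import numpy as np

def reflect_angle(u, v, theta):
    """Angle (radians) of direction `theta` after reflecting off segment u->v."""
    dx, dy = v[0] - u[0], v[1] - u[1]
    leng = np.hypot(dx, dy)
    nx, ny = -dy / leng, dx / leng           # unit normal to the segment
    px, py = np.cos(theta), np.sin(theta)    # particle direction vector
    dott = px * nx + py * ny
    rx, ry = px - 2 * dott * nx, py - 2 * dott * ny
    return np.arctan2(ry, rx)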
|
Angle of reflection relative to coordinate system
|
I have a two 2D points u = (ux, uy) and v = (vx, vy) that define a line segment.
Additionally I have an angle θ that is defined relative to the coordinate system (angle to the x-axis), indicating the direction of a moving particle.
Is there a simple way to find the angle of reflection resulting (again, relative to the coordinate system) from that particle bouncing off the line segment?
So far I have found the angle of the line segment θuv = numpy.arctan2(vy-uy, vx-ux), taken the difference Δθ = θuv - θ and set the resulting angle to θ_reflected = θ - 2*Δθ.
Sometimes, this seems to work, but other times it's completely off.
|
[
"Segment has length\nleng = hypot(vy-uy, vx-ux)\n\nand unit direction vector (perhaps in numpy there is ready function like normalized)\ndx = (vx-ux) / leng\ndy = (vy-uy) / leng\n\nUnit normal to segment\nnx = - dy\nny = dx\n\nparticle direction vector is\npx = cos(θ)\npy = sin(θ)\n\nReflected vector\ndott = dot(p, n) = px * nx + py * ny\nrx = px - 2 * dott * nx\nry = py - 2 * dott * ny\n\nIf you need angle\nθ_reflected = atan2(ry, rx)\n\nbut sometimes particle direction vector components are more useful\n"
] |
[
1
] |
[] |
[] |
[
"geometry",
"python",
"reflection",
"simulation",
"vector"
] |
stackoverflow_0074477952_geometry_python_reflection_simulation_vector.txt
|
Q:
How to send data between 2 EC2 instances
I have two AWS EC2 instances, one running a.py and the other running b.py. These two programs use data produced by the other to complete tasks: a.py waits for b.py to create some data that it uses to create some data that a.py will use to create data that b.py will .... basically, they will keep passing data back and forth until a condition is met.
I haven't been able to find a place that concretely defines how to do this. Optimally, I want to do this with the smallest time lag.
A:
As you are using AWS already, the native solution for things like that is an SQS queue. To achieve this, you need to create two SQS queues:
SQS-Queue-App-A
SQS-Queue-App-B
Then make a.py, something along these lines:
import boto3
import logging

logger = logging.getLogger(__name__)

# Create SQS client
sqs = boto3.client('sqs')

queue_a_url = 'SQS_QUEUE_URL'
queue_b_url = 'SQS_QUEUE_URL'

while True:
    # Long-poll queue A for up to 10 messages at a time
    response = sqs.receive_message(
        QueueUrl=queue_a_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=10,
    )
    for msg in response.get('Messages', []):
        logger.info("Received: %s: %s", msg['MessageId'], msg['Body'])
        # Do whatever you need to do with the message

        sqs.send_message(
            QueueUrl=queue_b_url,
            MessageBody='something to process by script B',
        )
        # Delete the processed message so it is not delivered again
        sqs.delete_message(QueueUrl=queue_a_url, ReceiptHandle=msg['ReceiptHandle'])
You can create them as FIFO queues to be sure that messages stay in sequence.
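If you go the FIFO route, a minimal creation sketch could look like this (the queue names and attributes are assumptions; note that sending to a FIFO queue also requires a MessageGroupId):
import boto3

sqs = boto3.client('sqs')

# FIFO queue names must end in ".fifo"
for name in ('SQS-Queue-App-A.fifo', 'SQS-Queue-App-B.fifo'):
    sqs.create_queue(
        QueueName=name,
        Attributes={
            'FifoQueue': 'true',
            'ContentBasedDeduplication': 'true',  # deduplicate by message body
        },
    )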
|
How to send data between 2 EC2 instances
|
I have two AWS EC2 instances, one running a.py and the other running b.py. These two programs use data produced by the other to complete tasks: a.py waits for b.py to create some data that it uses to create some data that a.py will use to create data that b.py will .... basically, they will keep passing data back and forth until a condition is met.
I haven't been able to find a place that concretely defines how to do this. Optimally, I want to do this with the smallest time lag.
|
[
"As you are using AWS already, the native solution for things like that is SQS queue. To achieve that task, you need to create two SQS queue:\n\nSQS-Queue-App-A\nSQS-Queue-App-B\n\nThen make a.py, something along these lines:\nimport boto3\n\n# Create SQS client\nsqs = boto3.client('sqs')\n\nqueue_a_url = 'SQS_QUEUE_URL'\nqueue_b_url = 'SQS_QUEUE_URL'\n\nwhile (1):\n messages = sqs.receive_messages(\n MaxNumberOfMessages=10,\n WaitTimeSeconds=10,\n QueueUrl=queue_a_url,\n )\n for msg in messages:\n logger.info(\"Received: %s: %s\", msg.message_id, msg.body)\n #Do whatever you need to do with the message \n \n response = sqs.send_message(\n QueueUrl=queue_b_url,\n\n MessageBody=(\n 'something to process by script B'\n )\n )\n\nYou can create them as FIFO queues to be sure that messages are in sequences.\n"
] |
[
2
] |
[] |
[] |
[
"amazon_ec2",
"amazon_web_services",
"python"
] |
stackoverflow_0074477177_amazon_ec2_amazon_web_services_python.txt
|
Q:
How to check if a specific number is present in the lines of a file?
I have a file named in.txt.
in.txt
0000fb435 00000326fab123bc2a 20
00003b4c6 0020346afeff655423 26
0000cb341 be3652a156fffcabd5 26
.
.
I need to check if the number 20 is present in the file and, if present, I need the output to look like this.
Expected output:
out.txt
0020fb435 00000326fab123bc2a 20 twenty_number
00003b4c6 0020346afeff655423 26 none
0000cb341 be3652a120fffcabd5 26 none
.
.
this is my current attempt:
with open("in.txt", "r") as fin:
with open("out.txt", "w") as fout:
for line in fin:
line = line.strip()
if '20' in line:
fout.write(line + f" twenty_number \n")
this is current output:
out.txt
0020fb435 00000326fab123bc2a 20 twenty_number
00003b4c6 0020346afeff655423 26 twenty_number
0000cb341 be3652a120fffcabd5 26 twenty_number
.
.
This is because it is checking for "20" anywhere in the line, but I only need to check the last column.
A:
You just need to use endswith as the if condition.
with open("in.txt", "r") as fin:
with open("out.txt", "w") as fout:
for line in fin:
line = line.strip()
if line.endswith('20'):
fout.write(line + f" twenty_number \n")
else:
fout.write(line + f" none \n")
output in out.txt
0000fb435 00000326fab123bc2a 20 twenty_number
00003b4c6 0020346afeff655423 26 none
0000cb341 be3652a156fffcabd5 26 none
A:
with open("in.txt", "r") as fin:
with open("out.txt", "w") as fout:
for line in fin:
            last_col = line.split()[-1]
            fout.write(f"{line.strip()} {'twenty_number' if last_col == '20' else 'none'}\n")
output:
0020fb435 00000326fab123bc2a 20 twenty_number
00003b4c6 0020346afeff655423 26 none
0000cb341 be3652a120fffcabd5 26 none
|
How to check if a specific number is present in the lines of a file?
|
I have a file named in.txt.
in.txt
0000fb435 00000326fab123bc2a 20
00003b4c6 0020346afeff655423 26
0000cb341 be3652a156fffcabd5 26
.
.
I need to check if the number 20 is present in the file and, if present, I need the output to look like this.
Expected output:
out.txt
0020fb435 00000326fab123bc2a 20 twenty_number
00003b4c6 0020346afeff655423 26 none
0000cb341 be3652a120fffcabd5 26 none
.
.
this is my current attempt:
with open("in.txt", "r") as fin:
with open("out.txt", "w") as fout:
for line in fin:
line = line.strip()
if '20' in line:
fout.write(line + f" twenty_number \n")
this is current output:
out.txt
0020fb435 00000326fab123bc2a 20 twenty_number
00003b4c6 0020346afeff655423 26 twenty_number
0000cb341 be3652a120fffcabd5 26 twenty_number
.
.
This is because it is checking for "20" anywhere in the line, but I only need to check the last column.
|
[
"You just need to use endswith as the if condition.\nwith open(\"in.txt\", \"r\") as fin:\n with open(\"out.txt\", \"w\") as fout:\n for line in fin:\n line = line.strip()\n if line.endswith('20'):\n fout.write(line + f\" twenty_number \\n\")\n else:\n fout.write(line + f\" none \\n\")\n\noutput in out.txt\n0000fb435 00000326fab123bc2a 20 twenty_number \n00003b4c6 0020346afeff655423 26 none \n0000cb341 be3652a156fffcabd5 26 none \n\n",
"with open(\"in.txt\", \"r\") as fin:\n with open(\"out.txt\", \"w\") as fout:\n for line in fin:\n last_col = line.split()[-1]\n fout.write(f\"{line.strip()} {'twenty_number' if '20' in last_col else 'none'}\" )\n\noutput:\n0020fb435 00000326fab123bc2a 20 twenty_number\n00003b4c6 0020346afeff655423 26 none\n0000cb341 be3652a120fffcabd5 26 none\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074478191_python_python_3.x.txt
|
Q:
How do I use a custom generator function to feed Keras model.fit samples one by one?
The Problem
Feeding data into Keras LSTM model with my custom generator function (see code below) gives me the following error.
WARNING:tensorflow:Model was constructed with shape (None, 3177, 2) for input
KerasTensor(type_spec=TensorSpec(shape=(None, 3177, 2), dtype=tf.float32, name='masking_9_input'),
name='masking_9_input', description="created by layer 'masking_9_input'"), but it was called on an
input with incompatible shape (None, None).
Generator function
def padded_generator(trajectories=trajectories, max_length=3177):
X = []
Y = []
for trajectory in trajectories.values:
curr_X = np.hstack([trajectory[0][0]])
curr_Y = np.hstack([trajectory[0][2]])
temp = (np.hstack([trajectory[0][1:]]))
for i, point in enumerate(temp):
if i >= temp.shape[0] - 1: # Should break at second to last sample.
break
curr_X = np.vstack((curr_X, point)) # Stack next point on existing X
padded_X = np.squeeze(tf.keras.utils.pad_sequences([curr_X],
maxlen= 3177,
padding='post',
dtype=float,
value=-10))
curr_Y = temp[i+1] # Point added to X in next iter. is current target.
yield (padded_X, curr_Y)
data_gen = padded_generator()
A full trajectory here is simply an array of points in the form of
[[-0.1843775 0.6867699 ]
[-1.0841161 -3.0429556 ]
[ 1.3582058 -0.6040352 ]
[ 1.8754534 -1.7010269 ]
...
[-2.4015598 0.3573116 ]
[-1.3986164 -0.95052546]
[-0.705326 -1.3387672 ]
[-1.455082 -0.57572746]
[-3.1130497 -2.7871382 ]]
From these, the generator yields a padded partial trajectory X and corresponding label Y each time it is called. These have shape:
Shape of X: (3177, 2)
Shape of Y: (2,)
Model
Now, the model I've written for this is quite simple, but it does have the corresponding input shape and the right masking layer syntax, AFAIK.
model = Sequential()
model.add(tf.keras.layers.Masking(mask_value=-10, input_shape=(3177, 2)))
model.add(LSTM(100, activation='relu', return_sequences=True))
model.add(LSTM(50, activation='relu', return_sequences=True))
model.add(LSTM(25, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(2))
model.compile(optimizer='adam', loss='mse')
Then, I run:
model.fit(data_gen, verbose=1)
And I get the error at the top of the post.
I've googled, but I don't know how to fix this. I am not an expert at all, so I'd appreciate explanations with examples or in simple terms. Thanks in advance for the help.
A:
For any future people with the same question:
I'm not really sure what the problem was. My guess is that my generator was feeding the whole tuple (input, label) to the network as an input, while I only wanted it to feed the input and not the label; hence the error about input_shape. However, the Keras docs state that one should call model.fit(generator_function) with a Python generator that yields (inputs, targets), so while my guess might sound plausible, I'm not sure whether this is actually the case or the error messaging is just very unclear.
I fixed it by making a tf.Dataset out of my vanilla generator, simply by running:
data = tf.data.Dataset.from_generator(
padded_generator,
output_types= (tf.float32, tf.float32),
output_shapes= ((3177,2),(1,2))
)
data= data.batch(10)
The last line ensures batching to create inputs of shape (None, 3177, 2), to feed proper ndims=3 input to the LSTM layers (which gave an error after I fixed the error stated in the question).
Then,
model.fit(data)
runs properly.
Note: if you have a generator that takes arguments, use a lambda instead:
data = tf.data.Dataset.from_generator(
lambda: padded_generator(**Function arguments here**),
output_types= (tf.float32, tf.float32),
output_shapes= ((3177,2),(1,2))
)
data= data.batch(10)
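As a side note, in newer TensorFlow releases output_types/output_shapes are deprecated in favor of output_signature; a minimal sketch of the same dataset, assuming the label shape (2,) stated in the question:
data = tf.data.Dataset.from_generator(
    padded_generator,
    output_signature=(
        tf.TensorSpec(shape=(3177, 2), dtype=tf.float32),
        tf.TensorSpec(shape=(2,), dtype=tf.float32),
    ),
)
data = data.batch(10)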
|
How do I use a custom generator function to feed Keras model.fit samples one by one?
|
The Problem
Feeding data into Keras LSTM model with my custom generator function (see code below) gives me the following error.
WARNING:tensorflow:Model was constructed with shape (None, 3177, 2) for input
KerasTensor(type_spec=TensorSpec(shape=(None, 3177, 2), dtype=tf.float32, name='masking_9_input'),
name='masking_9_input', description="created by layer 'masking_9_input'"), but it was called on an
input with incompatible shape (None, None).
Generator function
def padded_generator(trajectories=trajectories, max_length=3177):
X = []
Y = []
for trajectory in trajectories.values:
curr_X = np.hstack([trajectory[0][0]])
curr_Y = np.hstack([trajectory[0][2]])
temp = (np.hstack([trajectory[0][1:]]))
for i, point in enumerate(temp):
if i >= temp.shape[0] - 1: # Should break at second to last sample.
break
curr_X = np.vstack((curr_X, point)) # Stack next point on existing X
padded_X = np.squeeze(tf.keras.utils.pad_sequences([curr_X],
maxlen= 3177,
padding='post',
dtype=float,
value=-10))
curr_Y = temp[i+1] # Point added to X in next iter. is current target.
yield (padded_X, curr_Y)
data_gen = padded_generator()
A full trajectory here is simply an array of points in the form of
[[-0.1843775 0.6867699 ]
[-1.0841161 -3.0429556 ]
[ 1.3582058 -0.6040352 ]
[ 1.8754534 -1.7010269 ]
...
[-2.4015598 0.3573116 ]
[-1.3986164 -0.95052546]
[-0.705326 -1.3387672 ]
[-1.455082 -0.57572746]
[-3.1130497 -2.7871382 ]]
From these, the generator yields a padded partial trajectory X and corresponding label Y each time it is called. These have shape:
Shape of X: (3177, 2)
Shape of Y: (2,)
Model
Now, the model I've written for this is quite simple, but it does have the corresponding input shape and the right masking layer syntax, AFAIK.
model = Sequential()
model.add(tf.keras.layers.Masking(mask_value=-10, input_shape=(3177, 2)))
model.add(LSTM(100, activation='relu', return_sequences=True))
model.add(LSTM(50, activation='relu', return_sequences=True))
model.add(LSTM(25, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(2))
model.compile(optimizer='adam', loss='mse')
Then, I run:
model.fit(data_gen, verbose=1)
And I get the error at the top of the post.
I've googled, but I don't know how to fix this. I am not an expert at all, so I'd appreciate explanations with examples or in simple terms. Thanks in advance for the help.
|
[
"For any future people with the same question:\nI'm not really sure what the problem was. My guess is that my generator was feeding the whole tuple (input, label) to the network as an input, while I only desired it to feed the input and not the label. Hence, the error with input_shape. However, the Keras docs state that one should call model.fit(generator_function) with a Python generator function that gives back (inputs, targets), so while it might sound more plausible, I'm not sure if this is the case, or the error messaging is just very unclear.\nI fixed it by making a tf.Dataset out of my vanilla generator, simply by running:\n data = tf.data.Dataset.from_generator(\n padded_generator,\n output_types= (tf.float32, tf.float32),\n output_shapes= ((3177,2),(1,2))\n )\n data= data.batch(10)\n\nThe last line ensures batching to create inputs of shape (None, 3177, 2), to feed proper ndims=3 input to the LSTM layers (which gave an error after I fixed the error stated in the question).\nThen,\n model.fit(data)\n\nruns properly.\nNote: if you have a generator that has function arguments, instead use lambda:\n data = tf.data.Dataset.from_generator(\n lambda: padded_generator(**Function arguments here**),\n output_types= (tf.float32, tf.float32),\n output_shapes= ((3177,2),(1,2))\n )\n data= data.batch(10)\n\n"
] |
[
0
] |
[] |
[] |
[
"generator",
"keras",
"lstm",
"machine_learning",
"python"
] |
stackoverflow_0074404355_generator_keras_lstm_machine_learning_python.txt
|
Q:
How to split df column into df row?
I have a df that looks like this
id shortTextContent shortTextCode longPlainTextContent longTextCode semiTextContent semiTextCode
1 shortContent1 shortCode1 long1 longCode1 semiContent1 semiCode1
2 shortContent2 shortCode2 long2 longCode2 semiContent2 semiCode2
How should I split it into the following content? (where the column names become row content, like below)
id content content code
1 shortTextContent shortContent1 shortCode1
1 longPlainTextContent long1 longCode1
1 semiTextContent semiContent1 semiCode1
...
A:
df = pd.DataFrame(dict(id=[1,2,3,4],other=['a','b','c','d']))
df_melted = pd.melt(df)
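Applied to the frame from the question, a sketch that keeps id as an identifier column (the var_name/value_name choices are just illustrative):
import pandas as pd

df = pd.DataFrame({'id': [1, 2],
                   'shortTextContent': ['shortContent1', 'shortContent2'],
                   'longPlainTextContent': ['long1', 'long2']})

# the old column names become rows in 'content'; the cell values go to 'value'
df_melted = pd.melt(df, id_vars='id', var_name='content', value_name='value')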
|
How to split df column into df row?
|
I have a df that looks like this
id shortTextContent shortTextCode longPlainTextContent longTextCode semiTextContent semiTextCode
1 shortContent1 shortCode1 long1 longCode1 semiContent1 semiCode1
2 shortContent2 shortCode2 long2 longCode2 semiContent2 semiCode2
How should I split it into the following content? (where the column names become row content, like below)
id content content code
1 shortTextContent shortContent1 shortCode1
1 longPlainTextContent long1 longCode1
1 semiTextContent semiContent1 semiCode1
...
|
[
"df = pd.DataFrame(dict(id=[1,2,3,4],other=['a','b','c','d']))\ndf_melted = pd.melt(df)\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074476475_dataframe_pandas_python.txt
|
Q:
writing Airflow 2 dag
I have been on Airflow 1.10.14 for a long time, and now I'm trying to upgrade to Airflow 2.4.3 (latest?). I have built this dag in the new format in hopes of assimilating the language and understanding how the new format works. Below is my dag:
from airflow.decorators import dag, task
from airflow.models import Variable
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.microsoft.mssql.operators.mssql import MsSqlOperator
from airflow.operators.bash import BashOperator
from datetime import datetime
import glob
path = '~/airflow/staging/gcs/offrs2/'
clear_Staging_Folders = """
rm -rf {}OFFRS2/LEADS*.*
""".format(Variable.get("temp_directory"))
@dag(
schedule_interval='@daily',
start_date=datetime(2022, 11, 1),
catchup=False,
tags=['offrs2', 'LEADS']
)
def taskflow():
CLEAR_STAGING = BashOperator(
task_id='Clear_Folders',
bash_command=clear_Staging_Folders,
dag=dag,
)
BQ_Output = BigQueryInsertJobOperator(
task_id='BQ_Output',
configuration={
"query": {
"query": '~/airflow/sql/Leads/Leads_Export.sql',
"useLegacySql": False
}
}
)
Prep_MSSQL = MsSqlOperator(
task_id='Prep_DB3_Table',
mssql_conn_id = 'db.offrs.com',
sql='truncate table offrs_staging..LEADS;'
)
@task
def Load_Staging_Table():
for files in glob.glob(path + 'LEADS*.csv'):
print(files)
CLEAR_STAGING >> BQ_Output >> Load_Staging_Table()
dag = taskflow()
When I send this up, I'm getting the below error:
Broken DAG: [/home/airflow/airflow/dags/BQ_OFFRS2_Leads.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 376, in apply_defaults
task_group = TaskGroupContext.get_current_task_group(dag)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/task_group.py", line 490, in get_current_task_group
return dag.task_group
AttributeError: 'function' object has no attribute 'task_group'
As I look at my code, I don't have a specified task_group.
Where am I going wrong here?
Thank you!
A:
You forgot to remove the undefined dag variable in CLEAR_STAGING. When you are using the @dag decorator, remove dag=dag.
CLEAR_STAGING = BashOperator(
task_id='Clear_Folders',
bash_command=clear_Staging_Folders,
# dag=dag <== Remove this
)
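The reason is that inside a function decorated with @dag, the active DAG is taken from the context automatically, so there is nothing to pass; a minimal sketch of the corrected task:
@dag(schedule_interval='@daily', start_date=datetime(2022, 11, 1), catchup=False)
def taskflow():
    # the operator registers itself with the DAG created by the decorator
    CLEAR_STAGING = BashOperator(
        task_id='Clear_Folders',
        bash_command=clear_Staging_Folders,
    )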
|
writing Airflow 2 dag
|
I have been on Airflow 1.10.14 for a long time, and now I'm trying to upgrade to Airflow 2.4.3 (latest?). I have built this dag in the new format in hopes of assimilating the language and understanding how the new format works. Below is my dag:
from airflow.decorators import dag, task
from airflow.models import Variable
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.microsoft.mssql.operators.mssql import MsSqlOperator
from airflow.operators.bash import BashOperator
from datetime import datetime
import glob
path = '~/airflow/staging/gcs/offrs2/'
clear_Staging_Folders = """
rm -rf {}OFFRS2/LEADS*.*
""".format(Variable.get("temp_directory"))
@dag(
schedule_interval='@daily',
start_date=datetime(2022, 11, 1),
catchup=False,
tags=['offrs2', 'LEADS']
)
def taskflow():
CLEAR_STAGING = BashOperator(
task_id='Clear_Folders',
bash_command=clear_Staging_Folders,
dag=dag,
)
BQ_Output = BigQueryInsertJobOperator(
task_id='BQ_Output',
configuration={
"query": {
"query": '~/airflow/sql/Leads/Leads_Export.sql',
"useLegacySql": False
}
}
)
Prep_MSSQL = MsSqlOperator(
task_id='Prep_DB3_Table',
mssql_conn_id = 'db.offrs.com',
sql='truncate table offrs_staging..LEADS;'
)
@task
def Load_Staging_Table():
for files in glob.glob(path + 'LEADS*.csv'):
print(files)
CLEAR_STAGING >> BQ_Output >> Load_Staging_Table()
dag = taskflow()
When I send this up, I'm getting the below error:
Broken DAG: [/home/airflow/airflow/dags/BQ_OFFRS2_Leads.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 376, in apply_defaults
task_group = TaskGroupContext.get_current_task_group(dag)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/task_group.py", line 490, in get_current_task_group
return dag.task_group
AttributeError: 'function' object has no attribute 'task_group'
As I look at my code, I don't have a specified task_group.
Where am I going wrong here?
Thank you!
|
[
"You forgot to remove an undefined dag variable in CLEAR_STAGING. When you are using decorator, remove dag=dag.\nCLEAR_STAGING = BashOperator(\n task_id='Clear_Folders',\n bash_command=clear_Staging_Folders,\n # dag=dag <== Remove this\n)\n\n"
] |
[
1
] |
[] |
[] |
[
"airflow",
"airflow_2.x",
"python"
] |
stackoverflow_0074470445_airflow_airflow_2.x_python.txt
|
Q:
Pygame.event.get() stopped working. It says the video system hasn't been initialized
I was dealing with another section of my code not working (which can be found in the code's comments) when I noticed that my filename wasn't what I wanted it to be. I closed VS Code, changed the filename, and it started giving this error. I'm not sure what's up, and all I could find on the internet was 'initialize pygame' or 'put pygame.quit() outside the loop', both of which I've done. I even tried updating pygame.
import pygame
import random
# starts up all the upper levels stuff like the window and framerate
pygame.init()
win = pygame.display.set_mode((1028, 548))
quitt = False
clock = pygame.time.Clock()
class character(): # makes a character that moves. all other functionality will be done elsewhere
def __init__(self):
self.rect = pygame.rect.Rect(100, 205, 40, 40)
def move(self): #movement
keys = pygame.key.get_pressed()
if keys[pygame.K_UP]:
self.rect = self.rect.move(0, -4)
if keys[pygame.K_DOWN]:
self.rect = self.rect.move(0, 4)
if keys[pygame.K_LEFT]:
self.rect = self.rect.move(-4, 0)
if keys[pygame.K_RIGHT]:
self.rect = self.rect.move(4, 0)
class block(): #blocks that can be pushed
def __init__(self, startpos):
self.rect = pygame.rect.Rect(startpos[0], startpos[1], 40, 40)
def move(self):
global player, blocklist
if player.rect.colliderect(self.rect): # oh boy, here's the problem code
if player.rect.right > self.rect.left: # if approaching from the left. this works.
if self.rect.bottom - player.rect.top <= 5:
self.rect.move_ip(0, -4)
elif self.rect.top - player.rect.bottom >= -5:
self.rect.move_ip(0, 4)
elif player.rect.right - self.rect.left <= 5:
self.rect.move_ip(4, 0)
elif player.rect.left < self.rect.right: # if approaching from the right. This mostly works.
if self.rect.bottom - player.rect.top <= 5:
self.rect.move_ip(0, -4)
elif self.rect.top - player.rect.bottom >= -5:
self.rect.move_ip(0, 4)
elif player.rect.left - self.rect.right >= -5: # here's the problem line. idk what's up.
self.rect.move_ip(-4, 0)
else: #here to catch when the two have equal x coords. this works.
if self.rect.bottom - player.rect.top <= 5:
self.rect.move_ip(0, -4)
elif self.rect.top - player.rect.bottom >= -5:
self.rect.move_ip(0, 4)
self.shunt() # here to stop two objects from being within one another. Also allows you to push more than one block.
def shunt(self): #mostly copied from the wall. this works.
global player, blocklist
if self.rect.colliderect(player.rect):
if player.rect.right - self.rect.left <= 5:
player.rect.move_ip(self.rect.left-player.rect.right, 0)
elif player.rect.left - self.rect.right >= -5:
player.rect.move_ip(self.rect.right-player.rect.left, 0)
elif player.rect.top - self.rect.bottom >= -5:
player.rect.move_ip(0, self.rect.bottom - player.rect.top)
elif player.rect.bottom - self.rect.top <= 5:
player.rect.move_ip(0, self.rect.top-player.rect.bottom)
for block in blocklist:
if self.rect.colliderect(block.rect) and blocklist.index(block) != blocklist.index(self):
if block.rect.right - self.rect.left <= 5:
block.rect.move_ip(self.rect.left-block.rect.right, 0)
elif block.rect.left - self.rect.right >= -5:
block.rect.move_ip(self.rect.right-block.rect.left, 0)
elif block.rect.top - self.rect.bottom >= -5:
block.rect.move_ip(0, self.rect.bottom - block.rect.top)
elif block.rect.bottom - self.rect.top <= 5:
block.rect.move_ip(0, self.rect.top-block.rect.bottom)
if block.rect.colliderect(player.rect):
block.shunt()
class button(): # gets pressed if a block goes on it.
def __init__(self, startpos):
self.rect = pygame.rect.Rect(startpos[0], startpos[1], 40, 40)
self.pressed = False
def checkpressed(self): #checks if it's been pressed.
global blocklist
for block in blocklist:
if self.rect.colliderect(block.rect):
self.pressed = True
class wall(): # walls. They do wall things.
def __init__(self, startpos, size):
self.rect = pygame.rect.Rect(startpos[0], startpos[1], size[0], size[1])
def shunt(self): # top of the shunt chain.
global player, blocklist
if self.rect.colliderect(player.rect): # shunt the player
if player.rect.right - self.rect.left <= 5:
player.rect.move_ip(self.rect.left-player.rect.right, 0)
elif player.rect.left - self.rect.right >= -5:
player.rect.move_ip(self.rect.right-player.rect.left, 0)
elif player.rect.top - self.rect.bottom >= -5:
player.rect.move_ip(0, self.rect.bottom - player.rect.top)
elif player.rect.bottom - self.rect.top <= 5:
player.rect.move_ip(0, self.rect.top-player.rect.bottom)
for block in blocklist: # shunt the blocks
if self.rect.colliderect(block.rect):
if block.rect.right - self.rect.left <= 5:
block.rect.move_ip(self.rect.left-block.rect.right, 0)
elif block.rect.left - self.rect.right >= -5:
block.rect.move_ip(self.rect.right-block.rect.left, 0)
elif block.rect.top - self.rect.bottom >= -5:
block.rect.move_ip(0, self.rect.bottom - block.rect.top)
elif block.rect.bottom - self.rect.top <= 5:
block.rect.move_ip(0, self.rect.top-block.rect.bottom)
if block.rect.colliderect(player.rect):
block.shunt()
for blocky in blocklist:
if block.rect.colliderect(blocky):
block.shunt()
player = character() # makes the player
bg = pygame.rect.Rect(0, 0, 1028, 548) # don't want it to look like when you go out of bounds in a source game.
blocklist = [block((200, 205)), block((300, 205))] # list of blocks
buttonlist = [button((700, 100))] # list of buttons (or at least button)
wallist = [wall((0, 0), (1028, 40)), wall((0, 0), (40, 548)), wall((988, 0), (40, 548)), wall((0, 508), (1028, 40)), wall((494, 0), (40, 300))] # list of walls
while not quitt: #main loop
clock.tick(60) #60 fps
for event in pygame.event.get(): # this is breaking
if event.type == pygame.QUIT:
quitt = True
pygame.draw.rect(win, (0, 0, 0), bg)# makes sure we aren't seeing the previous frames
for button in buttonlist:#update buttons
button.checkpressed()
if button.pressed:
pygame.draw.rect(win, (0, 255, 0), button.rect) #draw buttons
else:
pygame.draw.rect(win, (255, 0, 255), button.rect)
player.move() # update the player
for block in blocklist: # update the blocks
block.move()
for wall in wallist: # walls shunt
wall.shunt()
pygame.draw.rect(win, (0, 0, 255), block.rect) # draw blocks
for wall in wallist:
pygame.draw.rect(win, (255, 255, 255), wall.rect) # draw walls
pygame.draw.rect(win, (255, 0, 0), player.rect) # draw player
pygame.display.update() # update the screen
pygame.quit() #bye bye
A:
It is a matter of indentation: pygame.quit() must be called after the application loop, not inside it:
while not quitt: #main loop
clock.tick(60) #60 fps
for event in pygame.event.get(): # this is breaking
if event.type == pygame.QUIT:
quitt = True
pygame.draw.rect(win, (0, 0, 0), bg)
# [...]
# INDENTATION
#<-|
pygame.quit() #bye bye
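For reference, a minimal sketch of the overall pattern, with pygame.quit() dedented to module level:
while not quitt:
    clock.tick(60)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            quitt = True
    # ... update and draw ...
    pygame.display.update()

pygame.quit()  # runs once, after the loop has exited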
|
Pygame.event.get() stopped working. It says the video system hasn't been initialized
|
I was dealing with another section of my code not working (which can be found in the code's comments) when I noticed that my filename wasn't what I wanted it to be. I closed VS Code, changed the filename, and it started giving this error. I'm not sure what's up, and all I could find on the internet was 'initialize pygame' or 'put pygame.quit() outside the loop', both of which I've done. I even tried updating pygame.
import pygame
import random
# starts up all the upper levels stuff like the window and framerate
pygame.init()
win = pygame.display.set_mode((1028, 548))
quitt = False
clock = pygame.time.Clock()
class character(): # makes a character that moves. all other functionality will be done elsewhere
def __init__(self):
self.rect = pygame.rect.Rect(100, 205, 40, 40)
def move(self): #movement
keys = pygame.key.get_pressed()
if keys[pygame.K_UP]:
self.rect = self.rect.move(0, -4)
if keys[pygame.K_DOWN]:
self.rect = self.rect.move(0, 4)
if keys[pygame.K_LEFT]:
self.rect = self.rect.move(-4, 0)
if keys[pygame.K_RIGHT]:
self.rect = self.rect.move(4, 0)
class block(): #blocks that can be pushed
def __init__(self, startpos):
self.rect = pygame.rect.Rect(startpos[0], startpos[1], 40, 40)
def move(self):
global player, blocklist
if player.rect.colliderect(self.rect): # oh boy, here's the problem code
if player.rect.right > self.rect.left: # if approaching from the left. this works.
if self.rect.bottom - player.rect.top <= 5:
self.rect.move_ip(0, -4)
elif self.rect.top - player.rect.bottom >= -5:
self.rect.move_ip(0, 4)
elif player.rect.right - self.rect.left <= 5:
self.rect.move_ip(4, 0)
elif player.rect.left < self.rect.right: # if approaching from the right. This mostly works.
if self.rect.bottom - player.rect.top <= 5:
self.rect.move_ip(0, -4)
elif self.rect.top - player.rect.bottom >= -5:
self.rect.move_ip(0, 4)
elif player.rect.left - self.rect.right >= -5: # here's the problem line. idk what's up.
self.rect.move_ip(-4, 0)
else: #here to catch when the two have equal x coords. this works.
if self.rect.bottom - player.rect.top <= 5:
self.rect.move_ip(0, -4)
elif self.rect.top - player.rect.bottom >= -5:
self.rect.move_ip(0, 4)
self.shunt() # here to stop two objects from being within one another. Also allows you to push more than one block.
def shunt(self): #mostly copied from the wall. this works.
global player, blocklist
if self.rect.colliderect(player.rect):
if player.rect.right - self.rect.left <= 5:
player.rect.move_ip(self.rect.left-player.rect.right, 0)
elif player.rect.left - self.rect.right >= -5:
player.rect.move_ip(self.rect.right-player.rect.left, 0)
elif player.rect.top - self.rect.bottom >= -5:
player.rect.move_ip(0, self.rect.bottom - player.rect.top)
elif player.rect.bottom - self.rect.top <= 5:
player.rect.move_ip(0, self.rect.top-player.rect.bottom)
for block in blocklist:
if self.rect.colliderect(block.rect) and blocklist.index(block) != blocklist.index(self):
if block.rect.right - self.rect.left <= 5:
block.rect.move_ip(self.rect.left-block.rect.right, 0)
elif block.rect.left - self.rect.right >= -5:
block.rect.move_ip(self.rect.right-block.rect.left, 0)
elif block.rect.top - self.rect.bottom >= -5:
block.rect.move_ip(0, self.rect.bottom - block.rect.top)
elif block.rect.bottom - self.rect.top <= 5:
block.rect.move_ip(0, self.rect.top-block.rect.bottom)
if block.rect.colliderect(player.rect):
block.shunt()
class button(): # gets pressed if a block goes on it.
def __init__(self, startpos):
self.rect = pygame.rect.Rect(startpos[0], startpos[1], 40, 40)
self.pressed = False
def checkpressed(self): #checks if it's been pressed.
global blocklist
for block in blocklist:
if self.rect.colliderect(block.rect):
self.pressed = True
class wall(): # walls. They do wall things.
def __init__(self, startpos, size):
self.rect = pygame.rect.Rect(startpos[0], startpos[1], size[0], size[1])
def shunt(self): # top of the shunt chain.
global player, blocklist
if self.rect.colliderect(player.rect): # shunt the player
if player.rect.right - self.rect.left <= 5:
player.rect.move_ip(self.rect.left-player.rect.right, 0)
elif player.rect.left - self.rect.right >= -5:
player.rect.move_ip(self.rect.right-player.rect.left, 0)
elif player.rect.top - self.rect.bottom >= -5:
player.rect.move_ip(0, self.rect.bottom - player.rect.top)
elif player.rect.bottom - self.rect.top <= 5:
player.rect.move_ip(0, self.rect.top-player.rect.bottom)
for block in blocklist: # shunt the blocks
if self.rect.colliderect(block.rect):
if block.rect.right - self.rect.left <= 5:
block.rect.move_ip(self.rect.left-block.rect.right, 0)
elif block.rect.left - self.rect.right >= -5:
block.rect.move_ip(self.rect.right-block.rect.left, 0)
elif block.rect.top - self.rect.bottom >= -5:
block.rect.move_ip(0, self.rect.bottom - block.rect.top)
elif block.rect.bottom - self.rect.top <= 5:
block.rect.move_ip(0, self.rect.top-block.rect.bottom)
if block.rect.colliderect(player.rect):
block.shunt()
for blocky in blocklist:
if block.rect.colliderect(blocky):
block.shunt()
player = character() # makes the player
bg = pygame.rect.Rect(0, 0, 1028, 548) # don't want it to look like when you go out of bounds in a source game.
blocklist = [block((200, 205)), block((300, 205))] # list of blocks
buttonlist = [button((700, 100))] # list of buttons (or at least button)
wallist = [wall((0, 0), (1028, 40)), wall((0, 0), (40, 548)), wall((988, 0), (40, 548)), wall((0, 508), (1028, 40)), wall((494, 0), (40, 300))] # list of walls
while not quitt: #main loop
clock.tick(60) #60 fps
for event in pygame.event.get(): # this is breaking
if event.type == pygame.QUIT:
quitt = True
pygame.draw.rect(win, (0, 0, 0), bg)# makes sure we aren't seeing the previous frames
for button in buttonlist:#update buttons
button.checkpressed()
if button.pressed:
pygame.draw.rect(win, (0, 255, 0), button.rect) #draw buttons
else:
pygame.draw.rect(win, (255, 0, 255), button.rect)
player.move() # update the player
for block in blocklist: # update the blocks
block.move()
for wall in wallist: # walls shunt
wall.shunt()
pygame.draw.rect(win, (0, 0, 255), block.rect) # draw blocks
for wall in wallist:
pygame.draw.rect(win, (255, 255, 255), wall.rect) # draw walls
pygame.draw.rect(win, (255, 0, 0), player.rect) # draw player
pygame.display.update() # update the screen
pygame.quit() #bye bye
|
[
"It is a matter of indentation, pygame.quit() must be called after the application loop, but not in the application loop:\nwhile not quitt: #main loop\n clock.tick(60) #60 fps\n for event in pygame.event.get(): # this is breaking\n if event.type == pygame.QUIT:\n quitt = True\n pygame.draw.rect(win, (0, 0, 0), bg)\n # [...]\n\n # INDANTATION\n#<-|\npygame.quit() #bye bye\n\n"
] |
[
0
] |
[] |
[] |
[
"pygame",
"python",
"window"
] |
stackoverflow_0074478317_pygame_python_window.txt
|
Q:
How to have nested generators continue their logic while parent generators needs to stop?
Lets say I have the following code
def top():
counter = 0
for ch in child_1():
print(ch)
counter += 1
if counter > 2:
break
def child_1():
for ch in child_2():
yield ch
print("child_1 logic has finished")
def child_2():
for ch in "123456789":
yield ch
print("child_2 logic has finished")
if __name__ == '__main__':
top()
Is there a way to have the top() method exit in the middle of the iteration, like I do with the counter condition, but still let the children finish their logic (i.e., reach the code after the yield statement)?
I tried using a while loop and other Python tricks, but it was all unsuccessful. I don't think there's a way without modifying the nested generators to not be generators, but I'm trying my shot here :D
A:
If you want the children to "finish" what they're doing (i.e. perform the rest of the iteration), keep a reference to the iterator, and exhaust it after you break:
def top():
counter = 0
iter_1 = child_1()
for ch in iter_1:
print(ch)
counter += 1
if counter > 2:
break
for _ in iter_1:
pass
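As an aside, if you only need to exhaust the iterator (so the children reach the code after their yield) without using the remaining values, a common fast idiom is collections.deque with maxlen=0:
from collections import deque

deque(iter_1, maxlen=0)  # consumes the rest of iter_1 without storing any items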
A:
You don't have to stop the top loop. You just need to stop doing the logic in the top loop. You can do this by moving your condition to the top, and using continue instead of break.
def top():
counter = 0
for ch in child_1():
if counter > 2: #continue here instead of break after
continue
counter += 1
def child_1():
for ch in child_2():
yield ch
print("child_1 logic has finished")
def child_2():
for ch in "123456789":
yield ch
print("child_2 logic has finished")
|
How to have nested generators continue their logic while parent generators needs to stop?
|
Lets say I have the following code
def top():
counter = 0
for ch in child_1():
print(ch)
counter += 1
if counter > 2:
break
def child_1():
for ch in child_2():
yield ch
print("child_1 logic has finished")
def child_2():
for ch in "123456789":
yield ch
print("child_2 logic has finished")
if __name__ == '__main__':
top()
Is there a way to have the top() method exit in the middle of the iteration, like I do with the counter condition, but still let the children finish their logic (i.e., reach the code after the yield statement)?
I tried using a while loop and other Python tricks, but it was all unsuccessful. I don't think there's a way without modifying the nested generators to not be generators, but I'm trying my shot here :D
|
[
"If you want the children to \"finish\" what they're doing (i.e. perform the rest of the iteration), keep a reference to the iterator, and exhaust it after you break:\ndef top():\n counter = 0\n\n iter_1 = child_1()\n for ch in iter_1:\n print(ch)\n counter += 1\n\n if counter > 2:\n break\n\n for _ in iter_1:\n pass\n\n",
"You don't have to stop the top loop. You just need to stop doing the logic in the top loop. You can do this by moving your condition to the top, and using continue instead of break.\ndef top():\n counter = 0\n\n for ch in child_1():\n if counter > 2: #continue here instead of break after\n continue\n\n counter += 1\n\ndef child_1():\n for ch in child_2():\n yield ch\n\n print(\"child_1 logic has finished\")\n\n\ndef child_2():\n for ch in \"123456789\":\n yield ch\n\n print(\"child_2 logic has finished\")\n\n"
] |
[
3,
1
] |
[] |
[] |
[
"generator",
"python"
] |
stackoverflow_0074477990_generator_python.txt
|
Q:
Extract value associated with column name on non-zero rows
I have two dfs (500x100 & 1300x2) and want to create a new column in the first one listing the categories that occur on each row. To achieve this I need to fetch the category associated with the column name from the second df. There might be several categories on the same row.
df = pd.DataFrame({'apple': [0, 0, 1, 0],
'strawberries': [0, 1, 1, 0],
'cucumber': [1, 1, 0, 0],
'hawthorn': [0, 1, 0, 1]
})
df2 = pd.DataFrame({'storage': ['apple', 'strawberries', 'cucumber', 'hawthorn'],
'category': ['fruits', 'berries', 'vegetables', 'berries']
})
I've found two potential solutions, which both aim to fetch the value from the dict when the row value is != 0:
df2_dict = dict(zip(df2['storage'], df2['category']))
df['categories'] = pd.Series(df.columns[np.where(df!=0)[1]]).map(df2_dict)
|
df['categories'] = df.apply(lambda s: ', '.join(s.index[s.eq(1)]), axis = 1).map(df2_dict)
These work to some extent, but for some reason they only give me results on about 1/10 of the rows. The desired output would be:
df = pd.DataFrame({'apple': [0, 0, 1, 0],
'strawberries': [0, 1, 1, 0],
'cucumber': [1, 1, 0, 0],
'hawthorn': [0, 1, 0, 1],
'categories': ['vegetables', 'berries, vegetables, berries',
'fruits, berries', 'berries' ]})
As of now the column names are keys in the dict. FYI, the columns are dummies, so they only contain 0|1.
Appreciate any smart solutions to this.
xoxo
A:
There might be easier ways of doing this, but this works I think :)
df = pd.DataFrame({'apple': [0, 0, 1, 0],
'strawberries': [0, 1, 1, 0],
'cucumber': [1, 1, 0, 0],
'hawthorn': [0, 1, 0, 1]})
df2 = pd.DataFrame({'storage': ['apple', 'strawberries', 'cucumber', 'hawthorn'],
'category': ['fruits', 'berries', 'vegetables', 'berries']})
def category(row):
    result = []
    for column in list(df.columns):
        if row[column] == 1:
            result.append(df2.loc[df2['storage'] == column]["category"])
    return [item for sublist in result for item in sublist]

df['category'] = df.apply(lambda row: category(row), axis=1)
Result :
apple strawberries cucumber hawthorn category
0 0 0 1 0 [vegetables]
1 0 1 1 1 [berries, vegetables, berries]
2 1 1 0 0 [fruits, berries]
3 0 0 0 1 [berries]
btw edited your example, there were some mistakes in it
Edit : corrected i think !
A:
df.apply(lambda x:x.idxmax(), axis=1).map(dict(df2.values))
output:
0 vegetables
1 berries
2 fruits
3 greens
dtype: object
Assign the result to the categories column.
If there are df values greater than 1, change df's value to 0 or 1
df.gt(0).apply(lambda x:x.idxmax(), axis=1).map(dict(df2.values))
same result
If this is not the result you want, show the desired output for the example.
After the question was edited:
df.apply(lambda x: ','.join(x[x>0].index.map(dict(df2.values))), axis=1)
result:
0 vegetables
1 berries,vegetables,greens
2 fruits,berries
3 greens
dtype: object
Assign the result to the categories column.
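A vectorized alternative (a sketch, assuming the dummy columns only contain 0/1): map each column name to its category and use DataFrame.dot, which string-concatenates the mapped names for the non-zero entries of each row:
df2_dict = dict(zip(df2['storage'], df2['category']))
mapped = df.columns.map(df2_dict)
df['categories'] = df.dot(mapped + ', ').str.rstrip(', ')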
|
Extract value associated with column name on non-zero rows
|
I have two dfs (500x100 & 1300x2) and want to create a new column in the first one listing the categories that occur on each row. To achieve this I need to fetch the category associated with the column name from the second df. There might be several categories on the same row.
df = pd.DataFrame({'apple': [0, 0, 1, 0],
'strawberries': [0, 1, 1, 0],
'cucumber': [1, 1, 0, 0],
'hawthorn': [0, 1, 0, 1]
})
df2 = pd.DataFrame({'storage': ['apple', 'strawberries', 'cucumber', 'hawthorn'],
'category': ['fruits', 'berries', 'vegetables', 'berries']
})
I've found two potential solutions, which both aim to fetch the value from the dict when the row value is != 0:
df2_dict = dict(zip(df2['storage'], df2['category']))
df['categories'] = pd.Series(df.columns[np.where(df!=0)[1]]).map(df2_dict)
|
df['categories'] = df.apply(lambda s: ', '.join(s.index[s.eq(1)]), axis = 1).map(df2_dict)
These work to some extent, but for some reason they only give me results on about 1/10 of the rows. The desired output would be:
df = pd.DataFrame({'apple': [0, 0, 1, 0],
'strawberries': [0, 1, 1, 0],
'cucumber': [1, 1, 0, 0],
'hawthorn': [0, 1, 0, 1],
'categories': ['vegetables', 'berries, vegetables, berries',
'fruits, berries', 'berries' ]})
As of now the column names are keys in the dict. FYI, the columns are dummies, so they only contain 0|1.
Appreciate any smart solutions to this.
xoxo
|
[
"there might be easier ways of doing this but this works i think :)\ndf = pd.DataFrame({'apple': [0, 0, 1, 0], \n'strawberries': [0, 1, 1, 0], \n'cucumber': [1, 1, 0, 0], \n'hawthorn': [0, 1, 0, 1]})\n\ndf2 = pd.DataFrame({'storage': ['apple', 'strawberries', 'cucumber', 'hawthorn'],\n'category': ['fruits', 'berries', 'vegetables', 'berries']})\n\ndef cateogory (row):\n result = []\n for column in list(df.columns) :\n if row[column] == 1 :\n result.append (df2.loc[df2['storage'] == column][\"category\"])\n return [item for sublist in result for item in sublist]\n\ndf['category'] = df.apply(lambda row : cateogory(row) , axis=1 )\n\nResult :\n apple strawberries cucumber hawthorn category\n0 0 0 1 0 [vegetables]\n1 0 1 1 1 [berries, vegetables, berries]\n2 1 1 0 0 [fruits, berries]\n3 0 0 0 1 [berries]\n\n\nbtw edited your example, there were some mistakes in it\nEdit : corrected i think !\n",
"df.apply(lambda x:x.idxmax(), axis=1).map(dict(df2.values))\n\noutput:\n0 vegetables\n1 berries\n2 fruits\n3 greens\ndtype: object\n\nmake result to category column\n\nIf there are df values greater than 1, change df's value to 0 or 1\ndf.gt(0).apply(lambda x:x.idxmax(), axis=1).map(dict(df2.values))\n\nsame result\n\nIf this is not result you want, draw desired output of the example.\n\nafter edit question\ndf.apply(lambda x: ','.join(x[x>0].index.map(dict(df2.values))), axis=1)\n\nresult:\n0 vegetables\n1 berries,vegetables,greens\n2 fruits,berries\n3 greens\ndtype: object\n\nmake result to category column\n"
] |
[
0,
0
] |
[] |
[] |
[
"categories",
"dictionary",
"pandas",
"python"
] |
stackoverflow_0074477817_categories_dictionary_pandas_python.txt
|
Q:
Python not running shell command
I am trying to download a YouTube video using yt-dlp. The Python file uses yt-dlp to download a YouTube video by passing the URL of the video manually into the Python script via the subprocess.Popen function.
import subprocess
from moviepy.editor import *
import os
import moviepy.editor as mp
# Download files through url and saves it in yt-vidoes dir
command = "yt-dlp "
URL = 'https://www.youtube.com/watch?v=C_rsdqKA6ok'
parameters = ' --output yt-videos/%(title)s'
def download_video():
downloading = subprocess.Popen(command + URL)
downloading.wait()
print(downloading.returncode)
download_video()
It is working fine on Windows but on Ubuntu I get this error:
Traceback (most recent call last):
File "/home/purelogics/Arslan/shorts_bot/moveis/movies.py", line 17, in <module>
download_video()
File "/home/purelogics/Arslan/shorts_bot/moveis/movies.py", line 13, in download_video
downloading = subprocess.Popen(command + URL)
File "/home/linuxbrew/.linuxbrew/Cellar/python@3.10/3.10.8/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/home/linuxbrew/.linuxbrew/Cellar/python@3.10/3.10.8/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'yt-dlp https://www.youtube.com/watch?v=C_rsdqKA6ok'
A:
From the docs:
An example of passing some arguments to an external program as a sequence is:
Popen(["/usr/bin/git", "commit", "-m", "Fixes a bug."])
On POSIX, if args is a string, the string is interpreted as the name or path of the program to execute. However, this can only be done if not passing arguments to the program.
So, you want to pass a list to Popen where the first element of the list is the executable and the remaining elements are its arguments. As you have it now, it is trying to find a single file to execute called yt-dlp https://www.youtube.com/watch?v=C_rsdqKA6ok
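A sketch of the call from the question rewritten that way, folding in the otherwise unused parameters variable:
command = ["yt-dlp", URL, "--output", "yt-videos/%(title)s"]
downloading = subprocess.Popen(command)
downloading.wait()
print(downloading.returncode)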
|
Python not running shell command
|
I am trying to download a YouTube video using yt-dlp. The Python file uses yt-dlp to download a YouTube video by passing the URL of the video manually into the Python script via the subprocess.Popen function.
import subprocess
from moviepy.editor import *
import os
import moviepy.editor as mp
# Download files through url and saves it in yt-vidoes dir
command = "yt-dlp "
URL = 'https://www.youtube.com/watch?v=C_rsdqKA6ok'
parameters = ' --output yt-videos/%(title)s'
def download_video():
downloading = subprocess.Popen(command + URL)
downloading.wait()
print(downloading.returncode)
download_video()
It is working fine on Windows but on Ubuntu I get this error:
Traceback (most recent call last):
File "/home/purelogics/Arslan/shorts_bot/moveis/movies.py", line 17, in <module>
download_video()
File "/home/purelogics/Arslan/shorts_bot/moveis/movies.py", line 13, in download_video
downloading = subprocess.Popen(command + URL)
File "/home/linuxbrew/.linuxbrew/Cellar/python@3.10/3.10.8/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/home/linuxbrew/.linuxbrew/Cellar/python@3.10/3.10.8/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'yt-dlp https://www.youtube.com/watch?v=C_rsdqKA6ok'
|
[
"From the docs:\n\nAn example of passing some arguments to an external program as a sequence is:\nPopen([\"/usr/bin/git\", \"commit\", \"-m\", \"Fixes a bug.\"])\nOn POSIX, if args is a string, the string is interpreted as the name or path of the program to execute. However, this can only be done if not passing arguments to the program.\n\nSo, you want to pass a list to Popen where the first element of the list is the executable and the second is the parameter. As you have it now, it is trying to find a file to execute called yt-dlp https://www.youtube.com/watch?v=C_rsdqKA6ok\n"
] |
[
1
] |
[] |
[] |
[
"moviepy",
"python",
"subprocess"
] |
stackoverflow_0074473623_moviepy_python_subprocess.txt
|
Q:
How to install Tensorflow properly on Windows using Python?
I'm trying to use TensorFlow with my PC's GPU (Nvidia RTX 3070 Ti) in a Python conda environment. I'm solving a small image-classification problem from Kaggle. I've solved it in Google Colab, but now I'm interested in solving it on my local machine. However, TF doesn't work properly locally and I have no idea why. I've read tons of solutions, but none has helped yet.
I'm following this guide and always install proper versions of TF and CUDA: https://www.tensorflow.org/install/source_windows
cuda-toolkit 10.1, cudnn 7.6, tf-gpu 2.3, python 3.8
Also I've installed latest NVidia drivers for videocard.
What I've tried:
I've installed the proper versions of the CUDA toolkit and cuDNN from the Nvidia site, installed them properly, and added everything that was needed to PATH. I've checked it - MS Visual Studio finds both CUDA and cuDNN and can work with them. I've installed the proper version of Tensorflow-GPU into my environment using conda.
Result: TF can't find my GPU and uses only the CPU.
I've removed all CUDA and cuDNN drivers. I've installed the CUDA-toolkit, cuDNN and Tensorflow-GPU python packages into my conda environment.
Result: TF recognizes my GPU and uses it! But during DNN training this error happens: Failed to launch ptxas Relying on driver to perform ptx compilation. Modify $PATH to customize ptxas location. And training goes very badly - accuracy is very low and isn't improving.
When I use the exact same code and data on Google Colab, everything goes smoothly - I get ~90% accuracy by the 5th epoch.
I've tried tf 2.1 and the matching cuda and cudnn, but it's still the same result!
I've tried installing cudatoolkit-dev, but it didn't help solve the ptxas problem.
I'm about to give up and use PyTorch instead of Tensorflow.
A:
So here is what worked for me:
Create 3.9 python environment
Install cuda and tensorflow packages from "Esri":
conda install -c esri cudatoolkit
conda install -c esri cudnn
conda install -c esri tensorflow-gpu
Then install tensorflow-hub:
conda install -c conda-forge tensorflow-hub
It will downgrade installations from previous steps, but it works. Maybe installing tensorflow-hub first could help to avoid it, but I didn't test it.
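To check whether the resulting environment actually sees the GPU, a quick sanity check with the standard TensorFlow API:
import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))  # should list the RTX 3070 Ti if the setup works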
|
How to install Tensorflow properly on Windows using Python?
|
I'm trying to use TensorFlow with my PC's GPU (Nvidia RTX 3070 Ti) in a Python conda environment. I'm solving a small image-classification problem from Kaggle. I've solved it in Google Colab, but now I'm interested in solving it on my local machine. However, TF doesn't work properly locally and I have no idea why. I've read tons of solutions, but none has helped yet.
I'm following this guide and always install proper versions of TF and CUDA: https://www.tensorflow.org/install/source_windows
cuda-toolkit 10.1, cudnn 7.6, tf-gpu 2.3, python 3.8
Also I've installed latest NVidia drivers for videocard.
What I've tried:
I've installed the proper versions of the CUDA toolkit and cuDNN from the Nvidia site, installed them properly, and added everything that was needed to PATH. I've checked it - MS Visual Studio finds both CUDA and cuDNN and can work with them. I've installed the proper version of Tensorflow-GPU into my environment using conda.
Result: TF can't find my GPU and uses only the CPU.
I've removed all CUDA and cuDNN drivers. I've installed the CUDA-toolkit, cuDNN and Tensorflow-GPU python packages into my conda environment.
Result: TF recognizes my GPU and uses it! But during DNN training this error happens: Failed to launch ptxas Relying on driver to perform ptx compilation. Modify $PATH to customize ptxas location. And training goes very badly - accuracy is very low and isn't improving.
When I use the exact same code and data on Google Colab, everything goes smoothly - I get ~90% accuracy by the 5th epoch.
I've tried tf 2.1 and the matching cuda and cudnn, but it's still the same result!
I've tried installing cudatoolkit-dev, but it didn't help solve the ptxas problem.
I'm about to give up and use PyTorch instead of Tensorflow.
|
[
"So here is what worked for me:\n\nCreate 3.9 python environment\nInstall cuda and tensorflow packages from \"Esri\":\n\n\nconda install -c esri cudatoolkit\nconda install -c esri cudnn\nconda install -c esri tensorflow-gpu\n\n\n\nThen install tensorflow-hub:\n\n\nconda install -c conda-forge tensorflow-hub\n\n\nIt will downgrade installations from previous steps, but it works. Maybe installing tensorflow-hub first could help to avoid it, but I didn't test it.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"tensorflow"
] |
stackoverflow_0074163000_python_tensorflow.txt
|
Q:
How to save average values of column in a csv?
a=np.array(h5py.File('/Users/D/FIELD-3D.h5', 'r')['Zone']['TOp']['data'])
a=(a.flatten(order='C'))
a.shape(3,1000)
How could I get the average value of each column in a, written to a CSV file?
A:
you can use np.average with the axis attribute:
np.average(a, axis=0)
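To write those per-column means out as a single CSV row, a sketch using numpy only (the filename is just an example):
import numpy as np

col_means = np.average(a, axis=0)                       # one mean per column, shape (1000,) for a (3, 1000) array
np.savetxt('averages.csv', [col_means], delimiter=',')  # wrapping in a list writes one comma-separated row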
|
How to save average values of column in a csv?
|
a=np.array(h5py.File('/Users/D/FIELD-3D.h5', 'r')['Zone']['TOp']['data'])
a=(a.flatten(order='C'))
a.shape(3,1000)
How could I get the average value of each column in a, written to a CSV file?
|
[
"you can use np.average with the axis attribute:\nnp.average(a, axis=0)\n\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074478289_python.txt
|
Q:
Error (java.lang.NoSuchMethodError) when sending a Spark data frame to Azure Eventhubs from a Databricks notebook
I need to send a pyspark Dataframe to an Eventhub from my Databricks notebook. The problem happens at this part of the code:
ehWriteConf = {
'eventhubs.connectionString' : EVENT_HUB_CONNECTION_STRING
}
def send_to_eventhub(df:DataFrame):
ds = df.select(struct(*[c for c in df.columns]).alias("body"))\
.select("body")\
.write.format("eventhubs")\
.options(**ehWriteConf)\
.save()
And I am calling this method after some processing on the dataframe:
# write feature_df into our EventHub
send_to_eventhub(feature_df)
Some similar questions suggest that this is a library version problem, so I have already tried several answers I found, such as installing the compatible version of the following library:
com.microsoft.azure:azure-eventhubs-spark_2.12:2.3.22
But this is the error message I get:
java.lang.NoSuchMethodError: org.apache.spark.sql.AnalysisException.<init>(Ljava/lang/String;Lscala/Option;Lscala/Option;Lscala/Option;Lscala/Option;)V
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<command-37526120346879> in <module>
5 # write feature_df into our EventHub
6
----> 7 send_to_eventhub(feature_df)
8
9 # implement reading data from EventHub through a loop in print statement
<command-2498519353602292> in send_to_eventhub(df)
34 # .format("org.apache.spark.sql.eventhubs.EventHubsSourceProvider")\
35 # .format("org.apache.spark.sql.eventhubs.EventHubsSourceProvider")
---> 36 ds = df.select(struct(*[c for c in df.columns]).alias("body"))\
37 .select("body")\
38 .write.format("eventhubs")\
/databricks/spark/python/pyspark/sql/readwriter.py in save(self, path, format, mode, partitionBy, **options)
736 self.format(format)
737 if path is None:
--> 738 self._jwrite.save()
739 else:
740 self._jwrite.save(path)
/databricks/spark/python/lib/py4j-0.10.9.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
1302
1303 answer = self.gateway_client.send_command(command)
-> 1304 return_value = get_return_value(
1305 answer, self.gateway_client, self.target_id, self.name)
1306
/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
115 def deco(*a, **kw):
116 try:
--> 117 return f(*a, **kw)
118 except py4j.protocol.Py4JJavaError as e:
119 converted = convert_exception(e.java_exception)
/databricks/spark/python/lib/py4j-0.10.9.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling o1187.save.
: java.lang.NoSuchMethodError: org.apache.spark.sql.AnalysisException.<init>(Ljava/lang/String;Lscala/Option;Lscala/Option;Lscala/Option;Lscala/Option;)V
at org.apache.spark.sql.eventhubs.EventHubsWriter$.validateQuery(EventHubsWriter.scala:58)
at org.apache.spark.sql.eventhubs.EventHubsWriter$.write(EventHubsWriter.scala:70)
at org.apache.spark.sql.eventhubs.EventHubsSourceProvider.createRelation(EventHubsSourceProvider.scala:124)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:47)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:80)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:78)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:89)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:160)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$8(SQLExecution.scala:239)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:386)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:186)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:968)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:141)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:336)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:160)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:156)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:575)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:167)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:575)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:268)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:264)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:551)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:156)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:324)
at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:156)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:141)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:132)
at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:186)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:959)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:427)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:396)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:258)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:295)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:748)
One of the problems is that it is not clear which method is not found.
The cluster details where I'm running the notebook are:
A:
The dataframe to write needs to have the following schema:
Column | Type
----------------------------------------------
body (required) | string or binary
partitionId (*optional) | string
partitionKey (*optional) | string
This worked for me.
from pyspark.sql import functions as F  # import needed for to_json/struct

df.withColumn('body', F.to_json(
        F.struct(*df.columns),
        options={"ignoreNullFields": False}))\
    .select('body')\
    .write\
    .format("eventhubs")\
    .options(**ehconf)\
    .save()
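If rows need to be routed deterministically, the optional partitionKey column from the schema above can be added as well; a sketch with a hypothetical device_id key column:
df.withColumn('body', F.to_json(F.struct(*df.columns))) \
  .withColumn('partitionKey', F.col('device_id').cast('string')) \
  .select('body', 'partitionKey') \
  .write.format('eventhubs') \
  .options(**ehconf) \
  .save()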
|
Error (java.lang.NoSuchMethodError) when sending a Spark data frame to Azure Eventhubs from a Databricks notebook
|
I need to send a pyspark Dataframe to an Eventhub from my Databricks notebook. The problem happens at this part of the code:
ehWriteConf = {
'eventhubs.connectionString' : EVENT_HUB_CONNECTION_STRING
}
def send_to_eventhub(df:DataFrame):
ds = df.select(struct(*[c for c in df.columns]).alias("body"))\
.select("body")\
.write.format("eventhubs")\
.options(**ehWriteConf)\
.save()
And I am calling this method after some processing on the dataframe:
# write feature_df into our EventHub
send_to_eventhub(feature_df)
Some similar questions suggest that this is a library version problem, so I have already tried several answers I found, such as installing the compatible version of the following library:
com.microsoft.azure:azure-eventhubs-spark_2.12:2.3.22
But this is the error message I get:
java.lang.NoSuchMethodError: org.apache.spark.sql.AnalysisException.<init>(Ljava/lang/String;Lscala/Option;Lscala/Option;Lscala/Option;Lscala/Option;)V
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<command-37526120346879> in <module>
5 # write feature_df into our EventHub
6
----> 7 send_to_eventhub(feature_df)
8
9 # implement reading data from EventHub through a loop in print statement
<command-2498519353602292> in send_to_eventhub(df)
34 # .format("org.apache.spark.sql.eventhubs.EventHubsSourceProvider")\
35 # .format("org.apache.spark.sql.eventhubs.EventHubsSourceProvider")
---> 36 ds = df.select(struct(*[c for c in df.columns]).alias("body"))\
37 .select("body")\
38 .write.format("eventhubs")\
/databricks/spark/python/pyspark/sql/readwriter.py in save(self, path, format, mode, partitionBy, **options)
736 self.format(format)
737 if path is None:
--> 738 self._jwrite.save()
739 else:
740 self._jwrite.save(path)
/databricks/spark/python/lib/py4j-0.10.9.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
1302
1303 answer = self.gateway_client.send_command(command)
-> 1304 return_value = get_return_value(
1305 answer, self.gateway_client, self.target_id, self.name)
1306
/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
115 def deco(*a, **kw):
116 try:
--> 117 return f(*a, **kw)
118 except py4j.protocol.Py4JJavaError as e:
119 converted = convert_exception(e.java_exception)
/databricks/spark/python/lib/py4j-0.10.9.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling o1187.save.
: java.lang.NoSuchMethodError: org.apache.spark.sql.AnalysisException.<init>(Ljava/lang/String;Lscala/Option;Lscala/Option;Lscala/Option;Lscala/Option;)V
at org.apache.spark.sql.eventhubs.EventHubsWriter$.validateQuery(EventHubsWriter.scala:58)
at org.apache.spark.sql.eventhubs.EventHubsWriter$.write(EventHubsWriter.scala:70)
at org.apache.spark.sql.eventhubs.EventHubsSourceProvider.createRelation(EventHubsSourceProvider.scala:124)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:47)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:80)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:78)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:89)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:160)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$8(SQLExecution.scala:239)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:386)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:186)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:968)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:141)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:336)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:160)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:156)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:575)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:167)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:575)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:268)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:264)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:551)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:156)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:324)
at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:156)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:141)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:132)
at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:186)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:959)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:427)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:396)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:258)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:295)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:748)
One of the problems is that is not so clear what method is not found.
The cluster details where I'm running the notebook are:
|
[
"The dataframe to write needs to have the following schema:\nColumn | Type\n----------------------------------------------\nbody (required) | string or binary \npartitionId (*optional) | string \npartitionKey (*optional) | string\n\nThis worked for me.\ndf.withColumn('body', F.to_json(\n F.struct(*df.columns),\n options={\"ignoreNullFields\": False}))\\\n .select('body')\\\n .write\\\n .format(\"eventhubs\")\\\n .options(**ehconf)\\\n .save()\n\n"
] |
[
0
] |
[] |
[] |
[
"azure",
"azure_databricks",
"azure_eventhub",
"pyspark",
"python"
] |
stackoverflow_0073962665_azure_azure_databricks_azure_eventhub_pyspark_python.txt
|
Q:
OpenVINO cannot convert MLP Mixer TensorFlow model
I use this GitHub repository to train an MLP Mixer model with TensorFlow 2.5.0.
And I try to generate .bin and .xml files with the command
mo --data_type FP16 --saved_model_dir C:\Users\john0\Desktop\mlp --input_shape (1,150,150,3)
The following is the error I faced.
[ WARNING ] Failed to parse a tensor with Unicode characters. Note that Inference Engine does not support string literals, so the string constant should be eliminated from the graph.
[ WARNING ] Failed to parse a tensor with Unicode characters. Note that Inference Engine does not support string literals, so the string constant should be eliminated from the graph.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'openvino.tools.mo.front.user_data_repack.UserDataRepack'>): Original placeholders: 'serving_default_input_1, saver_filename'. Freezing was requested for ''. --input_shape was provided without --input. Can not deduce which node shape to override
I am using version openvino_2022.1.0.643.
And you can download my model here.
A:
The error is due to the model having multiple inputs. It can be resolved with this MO command: mo --data_type FP16 --saved_model_dir model\directory\mlp\ --input_shape (1..,150,150,3). However, I'm getting different errors now:
[ ERROR ] List of operations that cannot be converted to Inference Engine IR:
[ ERROR ] FusedBatchNormV3 (16)
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block/layer_normalization/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block/layer_normalization_1/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_1/layer_normalization_2/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_1/layer_normalization_3/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_2/layer_normalization_4/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_2/layer_normalization_5/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_3/layer_normalization_6/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_3/layer_normalization_7/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_4/layer_normalization_8/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_4/layer_normalization_9/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_5/layer_normalization_10/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_5/layer_normalization_11/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_6/layer_normalization_12/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_6/layer_normalization_13/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_7/layer_normalization_14/FusedBatchNormV3
[ ERROR ] StatefulPartitionedCall/mlp_mixer/mlp_block_7/layer_normalization_15/FusedBatchNormV3
[ ERROR ] Part of the nodes was not converted to IR. Stopped.
As you can see, the FusedBatchNormV3 layer in your model is not among the supported operations for TensorFlow 2; you can refer to Supported Framework Layers for the list of supported TensorFlow 2 operations.
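As a side note, the original error ("--input_shape was provided without --input") suggests another hedged workaround: freeze only the serving placeholder it names, e.g.:
mo --data_type FP16 --saved_model_dir C:\Users\john0\Desktop\mlp --input serving_default_input_1 --input_shape (1,150,150,3)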
Q:
How to modify django's request.user in a Middleware?
What I'm trying to do is detect the type of the logged-in user and then set a .profile attribute on request.user, so I can use it by calling request.user.profile in my views.
To do this, I've written a middleware as follows:
class SetProfileMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        user, token = JWTAuthentication().authenticate(request)
        profile_type = token.payload.get("profile_type", None)
        request.user.profile = User.get_profile(profile_type, request.user)
        request.user.profile_type = profile_type

        # Works here
        print("-" * 20)
        print(type(request.user))  # <class 'django.utils.functional.SimpleLazyObject'>
        print('Process Request ->', request.user.profile)

        response = self.get_response(request)

        # Does not work here
        print("-" * 20)
        print(type(request.user))  # <class 'users.models.User'>
        print('Process Response ->', request.user.profile)
        return response

    def process_view(self, request, view_func, view_args, view_kwargs):
        # Works here
        print("-" * 20)
        print(type(request.user))  # <class 'django.utils.functional.SimpleLazyObject'>
        print('Process View ->', request.user.profile)
Now I can access request.user.profile in process_view, however it does not exist in my views and causes an AttributeError stating that 'User' object has no attribute 'profile'.
It seems my request.user is being overwritten somewhere before the request hits the view.
Note that I'm using Django REST Framework; here is my view:
class ProfileAPIView(generics.RetrieveUpdateAPIView):
    serializer_class = ProfileSerializer

    def get_object(self):
        obj = self.request.user.profile  # Raises the `AttributeError`
        self.check_object_permissions(self.request, obj)
        return obj
Here is my settings.py:
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django.contrib.messages.middleware.MessageMiddleware",
    "django.middleware.clickjacking.XFrameOptionsMiddleware",
]

LOCAL_MIDDLEWARE = [
    "users.middleware.SetProfileMiddleware",
]

MIDDLEWARE = MIDDLEWARE + LOCAL_MIDDLEWARE

REST_FRAMEWORK = {
    "DEFAULT_PERMISSION_CLASSES": ("rest_framework.permissions.IsAuthenticated",),
    "DEFAULT_RENDERER_CLASSES": (
        "rest_framework.renderers.JSONRenderer",
        "rest_framework.renderers.BrowsableAPIRenderer",
    ),
    "DEFAULT_AUTHENTICATION_CLASSES": [
        "rest_framework_simplejwt.authentication.JWTAuthentication",
    ],
}

SIMPLE_JWT = {
    "SLIDING_TOKEN_REFRESH_LIFETIME": timedelta(minutes=45),
    "AUTH_TOKEN_CLASSES": ("rest_framework_simplejwt.tokens.SlidingToken",),
}

DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
AUTH_USER_MODEL = "users.User"
LOGIN_REDIRECT_URL = "admin/"
A:
The problem is that you cannot add new properties to the User class.
Instead, try adding the attribute directly to the request, like this:
request.user_profile = User.get_profile(profile_type, request.user)
def set_profile(view_function):

    def decorated_function(request, *args, **kwargs):
        user, token = JWTAuthentication().authenticate(request)
        profile_type = token.payload.get("profile_type", None)

        request.user_profile = User.get_profile(profile_type, request.user)
        request.user_profile_type = profile_type

        return view_function(request, *args, **kwargs)

    return decorated_function  # No invocation here
Then, in your function-based view:
@api_view(["GET", "PUT"])
@set_profile
def my_view(request):
    request.user_profile  # Will not throw attribute error
    ...
The only difference between a function-based view and a class-based view is that in the class-based case the decorated method receives self instead of request.
def set_profile(view_function):

    def decorated_function(self, *args, **kwargs):
        user, token = JWTAuthentication().authenticate(self.request)
        profile_type = token.payload.get("profile_type", None)

        self.request.user_profile = User.get_profile(profile_type, self.request.user)
        self.request.user_profile_type = profile_type

        return view_function(self, *args, **kwargs)

    return decorated_function  # No invocation here
Your class should look like this:
class ProfileAPIView(generics.RetrieveUpdateAPIView):
    serializer_class = ProfileSerializer

    @set_profile
    def get_object(self):
        obj = self.request.user_profile
        self.check_object_permissions(self.request, obj)
        return obj
A:
After spending hours figuring out what was going on, it turned out that SimpleJWT's JWTAuthentication.authenticate() method gets called just before the request hits the view, overwriting the request.user attribute.
So instead of trying to add the profile to request.user using a middleware, I ended up customizing the JWTAuthentication.authenticate() method:
class CustomAuth(JWTAuthentication):
    def authenticate(self, request):
        user, token = super().authenticate(request)

        profile_type = token.payload.get("profile_type", None)
        user.profile = User.get_profile(profile_type, user)  # fixed: the original had an unbalanced parenthesis here
        user.profile_type = profile_type

        return user, token
settings.py:
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": [
        "users.authentication.CustomAuth"
    ],
}
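With CustomAuth in place, the original view from the question works unchanged, since the attributes now live on the real user object by the time DRF dispatches the request (a sketch repeating that view):
class ProfileAPIView(generics.RetrieveUpdateAPIView):
    serializer_class = ProfileSerializer

    def get_object(self):
        # Set by CustomAuth.authenticate() before the view runs.
        obj = self.request.user.profile
        self.check_object_permissions(self.request, obj)
        return obj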
Q:
Can't import from keras module 'tensorflow._api.v1.compat.v2.compat' has no attribute 'v1'
I'm using Jupyter/Python (Anaconda) and I was able to load the libraries below without problems.
I tried to print the tf version:
tf.print(tf.__version__)
<tf.Operation 'PrintV2' type=PrintV2>
and when I ran tf.__version__ it said that I'm running TF '1.14.0' and Keras version '2.2.4-tf'.
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
However, when I tried to load these two libraries, I got the error "AttributeError: module 'tensorflow._api.v1.compat.v2.compat' has no attribute 'v1'":
from keras.layers import Dense, Dropout
from keras.models import Sequential
A:
Please install the latest TensorFlow version using the code below.
!pip install --upgrade tensorflow
import tensorflow as tf
tf.__version__
Then try importing the above-mentioned libraries from tensorflow.keras:
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Sequential
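If upgrading is not an option, the same tensorflow.keras imports should also resolve against the tf.keras API bundled with TF 1.14; it is mixing the standalone keras package with that TF build that tends to trigger the compat error. A minimal sketch (layer sizes are arbitrary):
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Sequential

# Tiny smoke test that the imports resolve and a model builds.
model = Sequential([Dense(16, activation='relu', input_shape=(4,)),
                    Dropout(0.2),
                    Dense(1, activation='sigmoid')])
model.summary()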
Q:
How can I execute multiple commands in cmd in different lines of code in Python?
I am trying to run multiple commands in the command prompt using Python, but I want to organize them into separate lines of code, so it's easier for me to read and edit later.
I started by using:
import os
os.system('cmd /c "command_1 & command_2 & command_3 & ... & command_n"')
But as I developed my program, I began to need more and more commands and it became annoying very quickly. So I tried a lot of different formats and even tried using the subprocess module, but to no avail, since I can't seem to separate the commands.
Here are some examples that I tried, but FAILED:
- Separating them into different function calls:
import os
os.system('cmd /k "command_1"')
os.system('cmd /k "command_2"')
...
os.system('cmd /c "command_n"')
This only executes the first command.
- Separating them into different lines:
import os
os.system(
'''
cmd /k "command_1 &
command_2 &
...
command_n"
'''
)
# I tried different variations of this, but none of them worked
These also execute only the first command, even when I try to put the "&" on the line below, or with different formatting. I tried passing each command as a separate argument, but this function only takes one.
A:
You can build up the single string over multiple lines:
os.system('cmd /c "'
          + 'command_1 & '
          + 'command_2 & '
          + 'command_3 & '
          ...
          + 'command_n"'
)
It's the same string, but formatted differently. Whereas the ''' multi-line string includes the line-breaks in the string, this one doesn't.
Or you could write a multi-line string and remove the line breaks:
os.system(
'''
cmd /k "command_1 &
command_2 &
...
command_n"
'''.replace('\n', '')
)
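Another variant (my own sketch, not part of the answer above) keeps each command as a list item and joins them, which avoids stray & separators when editing:
import os

commands = [
    'command_1',
    'command_2',
    'command_n',
]
# Join with the cmd '&' separator and run as a single cmd /c invocation.
os.system('cmd /c "{}"'.format(' & '.join(commands)))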
Q:
Why does Coverage.py ignore files with no coverage?
I first run
nosetests --with-coverage
So I should have a .coverage file with all the default settings.
Within folder_1, I have file_1.py, file_2.py, and file_3.py
When I cd into folder_1 and run
coverage report
It outputs:
It doesn't generate anything for file_3.py! But then when I run:
coverage report file_3.py
it says:
Does it skip files with no coverage in the report? How can I change it so the report shows me the results of every *.py file?
A:
You need to specify a source directory for coverage.py to find files that have never been executed at all. You can use --source=folder_1 on the command line, or [run] source=folder_1 in your .coveragerc file.
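For reference, the .coveragerc variant would look like this minimal sketch:
# .coveragerc
[run]
source = folder_1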
A:
I ran into this same scenario yesterday and lost some time trying to make Coverage consider the file corresponding to this file_3.py. Ned Batchelder's answer is completely correct and helped me, but when handling multiple folder_1 folders at the same level of the hierarchy I'd have to set all of them as source, and that is not ideal.
The key is this part of the official doc:
If the source option is specified, only code in those locations will be measured. Specifying the source option also enables coverage.py to report on unexecuted files, since it can search the source tree for files that haven’t been measured at all. Only importable files (ones at the root of the tree, or in directories with a __init__.py file) will be considered.
So unexecuted files will only be analysed if you point at them. For this scenario that means two options:
Set your folder directly as the source directory by running the tests with the flag --source=folder_1 (which is covered in Ned's answer).
If this is a subfolder of a bigger project, you can also set the main project folder as the source folder, but then you need to turn the directories you want analysed into packages by creating an __init__.py file in them.
For instance if you have:
src/
    folder_1/
        __init__.py
        file_1.py
        file_2.py
        file_3.py
You can just run with the flag --source=src and folder_1 files will be discoverable as well.
Hope that helps someone in the future.
Q:
QtWidgets: how to make a line dependent on the mouse
I want to achieve the following in QtWidgets. I have a line that moves with the mouse, but I want it to move only when clicking (and holding the click) on the actual line; when there is no left click on the mouse, nothing should happen. So far I have only managed to make it move automatically with the mouse. I am new to QtWidgets and I am having trouble finding a solution.
Thank you for any tips.
Here is the code snippet:
import numpy as np
from PySide2 import QtWidgets
from matplotlib.backends.backend_qt5agg import FigureCanvas
from matplotlib.figure import Figure
import matplotlib.pyplot as plt

class SnaptoCursor(object):
    def __init__(self, ax, x):
        self.ax = ax
        self.ly = ax.axvline(color='k')
        self.x = x
        self.txt = ax.text(0.7, 0.9, '', transform=ax.transAxes)

    def mouse_move(self, event):
        if event.inaxes:
            indx = np.searchsorted(self.x, [event.xdata])[0]
            x = self.x[indx]
            self.ly.set_xdata(x)
            self.txt.set_text('x=%1.2f' % x)
            self.ax.figure.canvas.draw()
        else:
            pass

class App(QtWidgets.QMainWindow):
    def __init__(self, parent=None):
        super(App, self).__init__(parent)

        self._main = QtWidgets.QWidget()
        self.setCentralWidget(self._main)
        self.figure = Figure(figsize=(10, 6.9))
        self.canvas = FigureCanvas(self.figure)
        self.canvas_ax = self.canvas.figure.subplots()

        x = np.arange(0, 40)
        self.cursor = SnaptoCursor(self.canvas_ax, x)
        self.cid = self.canvas.mpl_connect('motion_notify_event', self.cursor.mouse_move)
        self.canvas_ax.plot(x, np.random.rand(40))

        # Layout
        layout = QtWidgets.QVBoxLayout(self._main)
        layout.addWidget(self.canvas)
        self.showMaximized()


if __name__ == '__main__':
    app = QtWidgets.QApplication([])
    ex = App()
    ex.show()
    app.exec_()
A:
I used the button_press_event and button_release_event events documented here to get the button state into SnaptoCursor:
import numpy as np
from PySide2 import QtWidgets
from matplotlib.backends.backend_qt5agg import FigureCanvas
from matplotlib.figure import Figure
import matplotlib.pyplot as plt

class SnaptoCursor(object):
    def __init__(self, ax, x):
        self.ax = ax
        self.ly = ax.axvline(color='k')
        self.x = x
        self.txt = ax.text(0.7, 0.9, '', transform=ax.transAxes)
        self.mouse_down = False

    def mouse_move(self, event):
        if event.inaxes and self.mouse_down:
            indx = np.searchsorted(self.x, [event.xdata])[0]
            x = self.x[indx]
            self.ly.set_xdata(x)
            self.txt.set_text('x=%1.2f' % x)
            self.ax.figure.canvas.draw()
        else:
            pass

    def mouse_press(self, event):
        # is left click
        if event.button == 1:
            self.mouse_down = True

    def mouse_release(self, event):
        # is left click
        if event.button == 1:
            self.mouse_down = False

class App(QtWidgets.QMainWindow):
    def __init__(self, parent=None):
        super(App, self).__init__(parent)

        self._main = QtWidgets.QWidget()
        self.setCentralWidget(self._main)
        self.figure = Figure(figsize=(10, 6.9))
        self.canvas = FigureCanvas(self.figure)
        self.canvas_ax = self.canvas.figure.subplots()

        x = np.arange(0, 40)
        self.cursor = SnaptoCursor(self.canvas_ax, x)
        self.cid = self.canvas.mpl_connect('motion_notify_event', self.cursor.mouse_move)
        self.cid = self.canvas.mpl_connect('button_press_event', self.cursor.mouse_press)
        self.cid = self.canvas.mpl_connect('button_release_event', self.cursor.mouse_release)
        self.canvas_ax.plot(x, np.random.rand(40))

        # Layout
        layout = QtWidgets.QVBoxLayout(self._main)
        layout.addWidget(self.canvas)
        self.showMaximized()


if __name__ == '__main__':
    app = QtWidgets.QApplication([])
    ex = App()
    ex.show()
    app.exec_()
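Since the question asks for the line to move only when the click lands on the line itself, a hedged tweak to SnaptoCursor.mouse_press adds a proximity check (the tolerance value is an assumption to tune for your x range; np is numpy as imported above):
    def mouse_press(self, event):
        # Only grab the line when a left click lands close to it.
        if event.button == 1 and event.inaxes:
            line_x = np.atleast_1d(self.ly.get_xdata())[0]
            tolerance = 0.5  # data units; assumption, tune as needed
            if abs(event.xdata - line_x) < tolerance:
                self.mouse_down = True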
Q:
Python packaging with setup.py does ignore manifest specifications
I'm currently trying to pack a module that uses precompiled *.pyd files from a SWIG routine.
The process for the user is supposed to be:
install the base library (C/C++); its directories are linked in environment variables and also contain the *.pyd files.
get the Python package; open the directory from a Python environment (be it conda or otherwise) and run "pip install ."
enjoy the module....
What I did:
I generated the manifest file from setup.py to include the *.pyd files, based on the environment path from the installation.
I checked and upon installation of the module, the files are listed in the sources text file of the egg info.
But there are no *.pyd files in the module.
This is the way I assumed was correct; I also tried many other approaches (the data option in the setup specification, etc.), but nothing worked so far.
What did I do wrong?
On research, it seems like C-related files are deleted after installation, but I thought the manifest-defined files are safe.
Edit: setup.py added.
from pathlib import Path
from setuptools import setup, find_packages, Extension
import os
import shutil

name = 'mypackage'

# define requirements
REQUIREMENTS = {
    # Installation script (this file) dependencies
    'setup': [
        'setuptools_scm',
    ],
    # Installation dependencies
    # Use with pip install . to install from source
    'install': [
        'Cpython',
        'setuptools < 64',
        'numpy >= 1.23',
        'matplotlib',
        'DateTime',
        'psutil',
        'xarray',
        'PyYAML',
        'scipy',
        'PySimpleGui'
    ],
}

# check for installed C library
lib_dir = ""
if 'BASEPACKAGE_IN_C' in os.environ:
    lib_dir = os.getenv('BASEPACKAGE_IN_C')
    print('BASEPACKAGE_IN_C found at {}!'.format(lib_dir))
else:
    # raise added: a bare Exception(...) expression would silently do nothing
    raise Exception('BASEPACKAGE_IN_C does not seem to exist on this machine! Make sure that the environment variable BASEPACKAGE_IN_C is set.')

# define function to make manifest file
def __createManifest__(subdirs):
    """inventory all files in path and create a manifest file"""
    current = os.path.dirname(__file__)
    relative_paths = [os.path.relpath(path, current) for path in subdirs]
    with open(os.path.join(current, "MANIFEST.in"), "w") as manifest:
        manifest.writelines("recursive-include {} *".format(" ".join(relative_paths)))

# check for interface layer directory
add_il = Path(lib_dir).parents[0].joinpath("sdk", "my_package_pyd_dir")
il_dest = os.path.join(os.path.dirname(__file__), "pyil" + os.sep)
if not os.path.exists(il_dest):
    os.makedirs(il_dest)
if os.path.exists(add_il):
    print('Python SDK interface layer found at {}!'.format(add_il))
    for root, dirs, files in os.walk(add_il):
        for file in files:
            # copy files locally
            shutil.copy(os.path.join(root, file), il_dest)
else:
    raise Exception('Python SDK interface layer does not seem to exist on this machine! Make sure that the BASEPACKAGE_IN_C SDK is '
                    'properly installed.')

# make manifest file
__createManifest__([il_dest])

# standard setup call
setup(
    name=name,
    python_requires='>= 3.9',
    version='0.1',
    packages=find_packages(),
    url='',
    license='',
    author='Ben',
    author_email='',
    description='BASEPACKAGE_IN_C Python SDK. Linked to the BASEPACKAGE_IN_C installation at {}.'.format(lib_dir),
    setup_requires=REQUIREMENTS['setup'],
    install_requires=REQUIREMENTS['install'],
    include_package_data=True,
)
The setup.py generates the MANIFEST.in with the following line: recursive-include pyil *
I also tried include pyil *, or specifying the extension as in recursive-include pyil *.pyd, or combinations thereof.
The Sources.txt file looks like that:
MANIFEST.in
setup.py
mypackage/moduleClass1.py
mypackage/moduleClass2.py
mypackage/moduleClass3.py
mypackage/__init__.py
mypackage.egg-info/PKG-INFO
mypackage.egg-info/SOURCES.txt
mypackage.egg-info/dependency_links.txt
mypackage.egg-info/requires.txt
mypackage.egg-info/top_level.txt
pyil/_pyil.pyd
So it is working up to that point. I tried with different file types in pyil, and they all worked except the .pyd file.
A:
I made it too complicated. The directory management that copied the *.pyd files into a separate directory inside the source tree (src) did not work.
Putting them directly into src/mypackage worked like a charm. The code for setup.py is:
from pathlib import Path
from setuptools import setup, find_packages, Extension
import os
import shutil

name = 'mypackage'

REQUIREMENTS = {
    # Installation script (this file) dependencies
    'setup': [
        'setuptools_scm',
    ],
    # Installation dependencies
    # Use with pip install . to install from source
    'install': [
        'Cpython',
        'setuptools < 64',
        'numpy >= 1.23',
        'matplotlib',
        'DateTime',
        'psutil',
        'xarray',
        'PyYAML',
        'scipy',
        'PySimpleGui'
    ],
}

lib_dir = ""
if 'BASEPACKAGE_IN_C' in os.environ:
    lib_dir = os.getenv('BASEPACKAGE_IN_C')
    print('BASEPACKAGE_IN_C SDK found at {}!'.format(lib_dir))
else:
    raise Exception('BASEPACKAGE_IN_C SDK does not seem to exist on this machine! Make sure that the environment variable BASEPACKAGE_IN_C is set.')


def __createManifest__(subdirs):
    """inventory all files in path and create a manifest file"""
    current = os.path.dirname(__file__)
    relative_paths = [os.path.relpath(path, current) for path in subdirs]
    with open(os.path.join(current, "MANIFEST.in"), "w") as manifest:
        manifest.writelines("recursive-include {} *.pyd".format(" ".join(relative_paths)))


add_il = os.path.join(os.path.dirname(__file__), "mypackage")

__createManifest__([add_il])

setup(
    name=name,
    python_requires='>= 3.9',
    version='0.1',
    packages=find_packages(),
    url='',
    license='',
    author='Ben',
    author_email='',
    description='BASEPACKAGE_IN_C Python SDK. Linked to the BASEPACKAGE_IN_C installation at {}.'.format(lib_dir),
    setup_requires=REQUIREMENTS['setup'],
    install_requires=REQUIREMENTS['install'],
    include_package_data=True,
)
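An alternative to writing MANIFEST.in at build time is to declare the binaries through package_data, which setuptools also honours for non-Python files inside a package (a sketch under the same layout assumption, i.e. the *.pyd files sit directly in mypackage/):
from setuptools import setup, find_packages

setup(
    name='mypackage',
    packages=find_packages(),
    # Ship any compiled extension stubs that live inside the package.
    package_data={'mypackage': ['*.pyd']},
)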
Q:
python: can't open file '//ML_project.py': [Errno 2] No such file or directory in Docker
Here is the content in my Dockerfile. I am trying to containerise a python script (ML_project.py).
FROM continuumio/miniconda3:latest
COPY ML_Project.py .
RUN pip install fxcmpy
CMD ["python", "ML_project.py"]
My Dockerfile and ML_project.py lie within the same folder (fxcm_project):
C:\Users\Jack\PycharmProjects\fxcm_project
How do I set my current working directory and docker run the file?
A:
When you docker build, you create an image which embeds everything specified in the Dockerfile.
If during execution a local resource cannot be found, it is most likely that the resource is not within the container, or that you passed a wrong location.
In your case, you might be looking for the WORKDIR Dockerfile instruction: WORKDIR .
NOTE: During the debug phase, feel free to edit your Dockerfile in order to get more (precise) pieces of information. For instance, you could change the last line to print out the current working directory and list all the files it contains. The associated commands are respectively pwd and ls -la.
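Note also the likely case mismatch in the original Dockerfile: COPY ML_Project.py vs CMD ... ML_project.py. A minimal sketch combining that fix with WORKDIR (the /app path is an arbitrary choice):
FROM continuumio/miniconda3:latest
WORKDIR /app
# File name must match the CMD below exactly (paths in the image are case-sensitive).
COPY ML_project.py .
RUN pip install fxcmpy
CMD ["python", "ML_project.py"]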
Q:
How do I create a function that takes a numeric argument and prints “The argument is [argument]”
This is the full question I am working on:
Create a function called Q6 that takes a numeric argument and prints “The argument is
[argument]” (for example: With argument 5, the function would print “The argument is 5”)
I got this question right but only because the grading system expected "5" to be the numerical argument. For future reference and proper understanding, I want to know what I would need to change in order for any numerical argument to be used. For example, if someone wanted the numerical argument to be "200", how would I change my function to allow this number to also be used? This is what I did below:
def Q6(x):
    print('The argument is 5')
A:
Instead of using literal values as you did, you can use the parameter passed to the function and return it in an f-string:
>>> def Q6(x):
...     return f'The argument is {x}'
>>>
>>> Q6(200)
'The argument is 200'
>>> Q6(5)
'The argument is 5'
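Since the exercise wording asks the function to print rather than return, the same idea with print would be:
def Q6(x):
    print(f'The argument is {x}')

Q6(200)  # prints: The argument is 200
Q6(5)    # prints: The argument is 5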
Q:
Loop failure when connecting to can network
For a little project I made a GUI where the user selects a folder in which to save a log file of the messages on the CAN bus. When the selected directory is valid, the logger immediately starts to connect to the bus and logs all the messages.
To keep the GUI from freezing I tried to integrate the window.after function. Only now I encounter a problem with connecting to the CAN bus via the python-can module. When the script can't connect to the CAN network, a message box with a warning should pop up.
When I select a directory to save the files in and the logger then tries to connect, the warning box immediately pops up, and when I click the OK button it logs one message to the file. After that the warning box pops up again, and when I click OK it logs one CAN bus message again. And so on.
I suspect I haven't arranged my code properly, but I can't find the mistake, so I'm coming to you guys for help.
The piece of code which generates the trouble:
def log_function():
    # Try to connect to the CAN network; if not, show a warning.
    try:
        while True:
            global bus
            bus = can.interface.Bus(interface='pcan', channel='PCAN_USBBUS1', bitrate=250000)
            print("connected")
    except:
        messagebox.showerror("Warning", "NO CONNECTION ESTABLISHED, PLEASE CONNECT TO CAN NETWORK")

    # Logger function
    try:
        message = bus.recv()
        logger.debug(db.decode_message(message.arbitration_id, message.data))
        print(db.decode_message(message.arbitration_id, message.data))
    except KeyError:
        pass
    window.after(100, log_function)

# When the Stop button is pressed the bus will shut down and the script/GUI will exit.
def stop():
    bus.shutdown()
    sys.exit()
I also tried to make a separate function of the first Try statement, but that also didn't work.
A:
It seems that you are reconnecting to the bus over and over again.
I don't understand the while loop you are using there, because I would expect you only need to connect once.
You then probably want to read the messages and write them to your file.
Your example is missing code, so I'm not sure when and how you trigger the stop function, but I guess something like this should help:
def connect_bus():
try:
global bus
bus = can.interface.Bus(interface='pcan', channel='PCAN_USBBUS1', bitrate=250000)
print("connected")
except:
messagebox.showerror("Warning", "NO CONNECTION ESTABLISHED, PLEASE CONNECT TO CAN NETWORK")
def log_function():
#Logger function
try:
message = bus.recv()
logger.debug(db.decode_message(message.arbitration_id, message.data))
print(db.decode_message(message.arbitration_id, message.data))
except KeyError:
pass
window.after(100, log_function)
# When Stop button is pressed the bus will shutdwon and the script/gui will exit.
def stop():
bus.shutdown()
sys.exit()
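How the two pieces might be wired together at startup (a sketch, assuming window, db and logger exist as in the question):
connect_bus()                    # connect once, before any logging starts
window.after(100, log_function)  # then poll the bus every 100 ms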
|
Loop failure when connecting to can network
|
For a little project I made a GUI where the user selects a folder to save a log file of the CAN bus messages on the bus. When the selected directory is valid, the logger immediately starts to connect to the bus and log all the messages.
To keep the GUI from freezing I tried to integrate the window.after function. Only now I encounter a problem with connecting to the CAN bus via the python-can module. When the script can't connect to the CAN network, a message box with a warning pops up.
When I select a directory to save the files in and the logger then tries to connect, the warning box immediately pops up, and when I click the OK button it logs one message to the file. After that the warning box pops up again, and when I click OK it logs one more CAN bus message. And so on.
I suspect I haven't arranged my code properly, but I can't find the mistake, so I'm coming to you for help.
The piece of code which generates the trouble:
def log_function():
#Try to connect to the CAN Network if not show Warning.
try:
while True:
global bus
bus = can.interface.Bus(interface='pcan', channel='PCAN_USBBUS1', bitrate=250000)
print("connected")
except:
messagebox.showerror("Warning", "NO CONNECTION ESTABLISHED, PLEASE CONNECT TO CAN NETWORK")
#Logger function
try:
message = bus.recv()
logger.debug(db.decode_message(message.arbitration_id, message.data))
print(db.decode_message(message.arbitration_id, message.data))
except KeyError:
pass
window.after(100, log_function)
# When Stop button is pressed the bus will shutdwon and the script/gui will exit.
def stop():
bus.shutdown()
sys.exit()
I also tried to make a separate function of the first Try statement, but that also didn't work.
|
[
"It seem that you are reconnecting to the bus over and over again.\nI don't understand the while loop you are using in there because I would expect you only need to connect once.\nYou then probably want to download the information and write it to your file.\nYour example has missing code, so I'm not sure when and how you trigger the stop function. But I guess something like this should help:\ndef connect_bus():\n try:\n global bus\n bus = can.interface.Bus(interface='pcan', channel='PCAN_USBBUS1', bitrate=250000)\n print(\"connected\")\n except:\n messagebox.showerror(\"Warning\", \"NO CONNECTION ESTABLISHED, PLEASE CONNECT TO CAN NETWORK\")\n\ndef log_function():\n #Logger function\n try:\n message = bus.recv()\n logger.debug(db.decode_message(message.arbitration_id, message.data))\n print(db.decode_message(message.arbitration_id, message.data))\n except KeyError:\n pass\n window.after(100, log_function)\n\n# When Stop button is pressed the bus will shutdwon and the script/gui will exit.\ndef stop():\n bus.shutdown()\n sys.exit()\n\n"
] |
[
0
] |
[] |
[] |
[
"can_bus",
"logging",
"python",
"tkinter"
] |
stackoverflow_0074473289_can_bus_logging_python_tkinter.txt
|
Q:
Python Flask render response body from String instead of template
I know that you can render a view from a template file in Flask.
rendered = render_template('pdf/template.html', toPerson=message.to_user, fromPerson=message.from_user, message=message.user_message)
I'm wondering how you would render from a string instead of providing the 'pdf/template.html' section.
I've tried the below but with no luck.
loader = DictLoader({
'template': Template(template_string),
})
env = Environment(loader=loader)
response = env.get_template('template').render(toPerson="The to person", fromPerson="The from person", message="Lorem Ipsum")
I'm getting an error message
TypeError: Can't compile non template nodes
Thank you in advance
A:
If you want to use a string as a template instead of a loaded file, you can use the from_string function of the existing Jinja environment.
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
templ_str = '''<h1>Hello {{ name }}</h1>'''
templ = app.jinja_env.from_string(templ_str)
return templ.render(name='John Doe')
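For completeness, Flask also ships a shortcut for exactly this, render_template_string, which renders a string through the app's Jinja environment (a minimal sketch reusing the app object from above; it must run inside a request or application context):
from flask import render_template_string

@app.route('/hello')
def hello():
    return render_template_string('<h1>Hello {{ name }}</h1>', name='John Doe')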
|
Python Flask render response body from String instead of template
|
I know that you can render a view from a template file in Flask.
rendered = render_template('pdf/template.html', toPerson=message.to_user, fromPerson=message.from_user, message=message.user_message)
I'm wondering how you would render from a string instead of providing the 'pdf/template.html' section.
I've tried the below but with no luck.
loader = DictLoader({
'template': Template(template_string),
})
env = Environment(loader=loader)
response = env.get_template('template').render(toPerson="The to person", fromPerson="The from person", message="Lorem Ipsum")
I'm getting an error message
TypeError: Can't compile non template nodes
Thank you in advance
|
[
"If you want to use a string as a template instead of a loaded file, you can use the from_string function of the existing Jinja environment.\nfrom flask import Flask\n\napp = Flask(__name__)\n\n@app.route('/')\ndef index():\n templ_str = '''<h1>Hello {{ name }}</h1>'''\n templ = app.jinja_env.from_string(templ_str)\n return templ.render(name='John Doe')\n\n"
] |
[
0
] |
[] |
[] |
[
"flask",
"jinja2",
"python"
] |
stackoverflow_0074473696_flask_jinja2_python.txt
|
Q:
How to use variables from an environment file Python?
I have a project that I'm working on in which I need to store sensitive information into an environment file as variables that can later be called in my code. I'm having issues with it working and so I've dumbed it down to the simplest test I can think of.
I have created a test.py file and a var.env file within the same directory. They are the only files in this directory.
Here is my test.py that simply tries to print the value
#test.py
import os
from dotenv import load_dotenv
print(os.getenv('PROJECT'))
Here is environment file saved as var.env
#.env test file
PROJECT='newproject1234'
When I run test.py it prints "None". I know I've got to be missing something simple here. Any help is appreciated.
A:
You need to call load_dotenv first.
#test.py
import os
from dotenv import load_dotenv
load_dotenv('var.env')
print(os.getenv('PROJECT'))
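Worth noting: called with no argument, load_dotenv looks for a file literally named .env, which is why the var.env path must be passed explicitly here. As a quick sanity check (a sketch; load_dotenv returns a boolean indicating whether the file was found):
from dotenv import load_dotenv

found = load_dotenv('var.env')
print(found)  # False would explain the silent "None" result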
|
How to use variables from an environment file Python?
|
I have a project that I'm working on in which I need to store sensitive information into an environment file as variables that can later be called in my code. I'm having issues with it working and so I've dumbed it down to the simplest test I can think of.
I have created a test.py file and a var.env file within the same directory. They are the only files in this directory.
Here is my test.py that simply tries to print the value
#test.py
import os
from dotenv import load_dotenv
print(os.getenv('PROJECT'))
Here is environment file saved as var.env
#.env test file
PROJECT='newproject1234'
When I run test.py I get a response of "none". I know I've gotta be missing something simple here. Any help is appreciated.
|
[
"You need to call load_dotenv first.\n#test.py\nimport os\nfrom dotenv import load_dotenv\n\nload_dotenv('var.env')\n\nprint(os.getenv('PROJECT'))\n\n"
] |
[
2
] |
[] |
[] |
[
"environment_variables",
"python",
"python_dotenv"
] |
stackoverflow_0074478643_environment_variables_python_python_dotenv.txt
|
Q:
NamedTuple is shared across variables
from typing import NamedTuple, List, Set, Tuple, Dict
class EmbeddingInfoStruct(NamedTuple):
emb_names : list[str] =[]
idx_in_data: list[int] =[]
emb_dim: list[int] =[]
info1 =EmbeddingInfoStruct()
info1.emb_names.append("name1")
info2=EmbeddingInfoStruct()
print("info1 address = ", id(info1), ", info2 address = " ,id(info2))
print (info1)
print (info2)
output of print :
info1 address = 2547212397920 , info2 address = 2547211152576
EmbeddingInfoStruct(emb_names=['name1'], idx_in_data=[], emb_dim=[])
EmbeddingInfoStruct(emb_names=['name1'], idx_in_data=[], emb_dim=[])
Surprisingly, info1 and info2 both share the same value. I'd expect info2.emb_names to be empty. Why does NamedTuple behave like it's a "static class"?
A:
I think you confused NamedTuple from the typing module, which describes the type of a named tuple for type-hinting purposes, with the named tuple you can get from namedtuple() in the collections package (see the collections documentation).
Here, you are actually mutating a class-level member of your EmbeddingInfoStruct, hence the "static class" behavior.
Using this, your class declaration would rather look like
from collections import namedtuple
EmbeddingInfoStruct = namedtuple("EmbeddingInfoStruct",["emb_names", "idx_in_data", "emb_dim"],defaults=[list(),list(),list()])
info1 = EmbeddingInfoStruct()
You will, however, probably run into the pitfall of mutable default arguments, as explained there
A:
As said by others, the problem is the mutable default. You could use a dataclass with a field providing a default factory. See
https://docs.python.org/3/library/dataclasses.html#dataclasses.field
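A minimal sketch of that dataclass approach, where default_factory builds a fresh list for every instance:
from dataclasses import dataclass, field

@dataclass
class EmbeddingInfoStruct:
    emb_names: list[str] = field(default_factory=list)
    idx_in_data: list[int] = field(default_factory=list)
    emb_dim: list[int] = field(default_factory=list)

info1 = EmbeddingInfoStruct()
info1.emb_names.append("name1")
info2 = EmbeddingInfoStruct()
print(info2.emb_names)  # [] -- each instance got its own list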
|
NamedTuple is shared across variables
|
from typing import NamedTuple, List, Set, Tuple, Dict
class EmbeddingInfoStruct(NamedTuple):
emb_names : list[str] =[]
idx_in_data: list[int] =[]
emb_dim: list[int] =[]
info1 =EmbeddingInfoStruct()
info1.emb_names.append("name1")
info2=EmbeddingInfoStruct()
print("info1 address = ", id(info1), ", info2 address = " ,id(info2))
print (info1)
print (info2)
output of print :
info1 address = 2547212397920 , info2 address = 2547211152576
EmbeddingInfoStruct(emb_names=['name1'], idx_in_data=[], emb_dim=[])
EmbeddingInfoStruct(emb_names=['name1'], idx_in_data=[], emb_dim=[])
Surprisingly, info1 and info2 both share the same value. I'd expect info2.emb_names to be empty. Why does NamedTuple behave like it's a "static class"?
|
[
"I think you mistook NamedTuple from the typing module, describing the type of a named tuple for type hinting purpose, and the named tuple you can get from namedtuple() from the collection package (see the collection documentation).\nHere, you are actually changing class member of your EmbeddingInfoStruct, thus the \"static class\" behavior.\n\nUsing this, your class declaration would rather look like\nfrom collections import namedtuple\nEmbeddingInfoStruct = namedtuple(\"EmbeddingInfoStruct\",[\"emb_names\", \"idx_in_data\", \"emb_dim\"],defaults=[list(),list(),list()])\n\ninfo1 = EmbeddingInfoStruct()\n\nYou will, however, probably fall into the pitfall of \"mutable\" as default arguments, as explained there\n",
"As said by others, the problem is the mutable default. You could use a dataclass with a field providing a default factory. See\nhttps://docs.python.org/3/library/dataclasses.html#dataclasses.field\n"
] |
[
1,
1
] |
[] |
[] |
[
"namedtuple",
"python"
] |
stackoverflow_0074478576_namedtuple_python.txt
|
Q:
Trying to edit a row of a csv file in python, but for some reason it also adds blank rows when run?
I'm trying to make it so you can edit a single client's details in a csv file, and while the code I wrote runs, for some reason it adds a gap between each client as well as changing the client. I'd really appreciate it if someone could tell me why this is happening.
Here's an excerpt of my csv:
first_name,last_name,title,pronouns,dob,occupation,account_balance,overdraft_limit
Genovera,Willgoss,Mrs,Female,25/05/2022,Graphic Designer,2315.16,46.48
Garner,Coupman,Ms,Male,14/04/2022,General Manager,2200.76,2.28
Jens,Eldrid,Honorable,Male,13/11/2021,Research Associate,967.64,79.15
The code i'm running:
if choice == "4":
editClient = int(input("Please enter the index number of the client you wish to edit:"))
print("Please enter the details for each of the following:")
for i in range(len(existing_clients[0])):
newDetails = input("Enter new data for " + str(existing_clients[0][i]) + ":")
existing_clients[editClient][i] = newDetails
changes = input("Are you sure you'd like to make these changes? Enter Yes or No")
if changes == ("Yes"):
with open("mock_data.csv", "w+") as file:
reader = csv.writer(file)
for i in range(len(existing_clients)):
reader.writerow(existing_clients[i])
And what my csv looks like after i've changed a client:
(I just changed all of Jens details to 1)
first_name,last_name,title,pronouns,dob,occupation,account_balance,overdraft_limit
Genovera,Willgoss,Mrs,Female,25/05/2022,Graphic Designer,2315.16,46.48
Garner,Coupman,Ms,Male,14/04/2022,General Manager,2200.76,2.28
1,1,1,1,1,1,1,1
I haven't tried anything because I've got no idea what's making this happen; I've only been programming for a month and am very lost.
A:
I believe it is adding an extra carriage return when writing. Try changing this line:
with open("mock_data.csv", "w+") as file:
to
with open("mock_data.csv", newline= "", "w+") as file:
|
Trying to edit a row of a csv file in python, but for some reason it also adds blank rows when run?
|
I'm trying to make it so you can edit a single client's details in a csv file, and while the code I wrote runs, for some reason it adds a gap between each client as well as changing the client. I'd really appreciate it if someone could tell me why this is happening.
Here's an excerpt of my csv:
first_name,last_name,title,pronouns,dob,occupation,account_balance,overdraft_limit
Genovera,Willgoss,Mrs,Female,25/05/2022,Graphic Designer,2315.16,46.48
Garner,Coupman,Ms,Male,14/04/2022,General Manager,2200.76,2.28
Jens,Eldrid,Honorable,Male,13/11/2021,Research Associate,967.64,79.15
The code i'm running:
if choice == "4":
editClient = int(input("Please enter the index number of the client you wish to edit:"))
print("Please enter the details for each of the following:")
for i in range(len(existing_clients[0])):
newDetails = input("Enter new data for " + str(existing_clients[0][i]) + ":")
existing_clients[editClient][i] = newDetails
changes = input("Are you sure you'd like to make these changes? Enter Yes or No")
if changes == ("Yes"):
with open("mock_data.csv", "w+") as file:
reader = csv.writer(file)
for i in range(len(existing_clients)):
reader.writerow(existing_clients[i])
And what my csv looks like after i've changed a client:
(I just changed all of Jens details to 1)
first_name,last_name,title,pronouns,dob,occupation,account_balance,overdraft_limit
Genovera,Willgoss,Mrs,Female,25/05/2022,Graphic Designer,2315.16,46.48
Garner,Coupman,Ms,Male,14/04/2022,General Manager,2200.76,2.28
1,1,1,1,1,1,1,1
I haven't tried anything because I've got no idea what's making this happen; I've only been programming for a month and am very lost.
|
[
"I believe it is adding an extra carriage return when writing. Try changing this line:\nwith open(\"mock_data.csv\", \"w+\") as file:\nto\nwith open(\"mock_data.csv\", newline= \"\", \"w+\") as file:\n"
] |
[
2
] |
[] |
[] |
[
"csv",
"list",
"python"
] |
stackoverflow_0074478689_csv_list_python.txt
|
Q:
Reading contents of a gzip file from AWS S3 in Python
I am trying to read some logs from a Hadoop process that I run in AWS. The logs are stored in an S3 folder and have the following path.
bucketname = name
key = y/z/stderr.gz
Here Y is the cluster id and z is a folder name. Both of these act as folders(objects) in AWS. So the full path is like x/y/z/stderr.gz.
Now I want to unzip this .gz file and read the contents of the file. I don't want to download this file to my system; I want to save the contents in a Python variable.
This is what I have tried till now.
bucket_name = "name"
key = "y/z/stderr.gz"
obj = s3.Object(bucket_name,key)
n = obj.get()['Body'].read()
This is giving me a format which is not readable. I also tried
n = obj.get()['Body'].read().decode('utf-8')
which gives an error utf8' codec can't decode byte 0x8b in position 1: invalid start byte.
I have also tried
gzip = StringIO(obj)
gzipfile = gzip.GzipFile(fileobj=gzip)
content = gzipfile.read()
This returns an error IOError: Not a gzipped file
Not sure how to decode this .gz file.
Edit - Found a solution. Needed to pass n in it and use BytesIO
gzip = BytesIO(n)
A:
This is old, but you no longer need the BytesIO object in the middle of it (at least on my boto3==1.9.223 and python3.7)
import boto3
import gzip
s3 = boto3.resource("s3")
obj = s3.Object("YOUR_BUCKET_NAME", "path/to/your_key.gz")
with gzip.GzipFile(fileobj=obj.get()["Body"]) as gzipfile:
content = gzipfile.read()
print(content)
A:
@Amit, I was trying to do the same thing to test decoding a file, and got your code to run with some modifications. I just had to remove the function def, the return, and rename the gzip variable, since that name is in use.
import json
import boto3
from io import BytesIO
import gzip
try:
s3 = boto3.resource('s3')
key='YOUR_FILE_NAME.gz'
obj = s3.Object('YOUR_BUCKET_NAME',key)
n = obj.get()['Body'].read()
gzipfile = BytesIO(n)
gzipfile = gzip.GzipFile(fileobj=gzipfile)
content = gzipfile.read()
print(content)
except Exception as e:
print(e)
raise e
A:
You can use AWS S3 SELECT Object Content to read gzip contents
S3 Select is an Amazon S3 capability designed to pull out only the data you need from an object, which can dramatically improve the performance and reduce the cost of applications that need to access data in S3.
Amazon S3 Select works on objects stored in Apache Parquet format, JSON Arrays, and BZIP2 compression for CSV and JSON objects.
Ref: https://docs.aws.amazon.com/AmazonS3/latest/dev/selecting-content-from-objects.html
from io import StringIO
import boto3
import pandas as pd
bucket = 'my-bucket'
prefix = 'my-prefix'
client = boto3.client('s3')
for object in client.list_objects_v2(Bucket=bucket, Prefix=prefix)['Contents']:
if object['Size'] <= 0:
continue
print(object['Key'])
r = client.select_object_content(
Bucket=bucket,
Key=object['Key'],
ExpressionType='SQL',
Expression="select * from s3object",
InputSerialization = {'CompressionType': 'GZIP', 'JSON': {'Type': 'DOCUMENT'}},
OutputSerialization = {'CSV': {'QuoteFields': 'ASNEEDED', 'RecordDelimiter': '\n', 'FieldDelimiter': ',', 'QuoteCharacter': '"', 'QuoteEscapeCharacter': '"'}},
)
for event in r['Payload']:
if 'Records' in event:
records = event['Records']['Payload'].decode('utf-8')
payloads = (''.join(r for r in records))
try:
select_df = pd.read_csv(StringIO(payloads), error_bad_lines=False)
for row in select_df.iterrows():
print(row)
except Exception as e:
print(e)
A:
Read Bz2 extension file from aws s3 in python
import json
import boto3
from io import BytesIO
import bz2
try:
s3 = boto3.resource('s3')
key='key_name.bz2'
obj = s3.Object('bucket_name',key)
nn = obj.get()['Body'].read()
gzipfile = BytesIO(nn)
content = bz2.decompress(gzipfile.read())
content = content.decode().split('\n')  # decompress returns bytes, so decode before splitting
print(len(content))
except Exception as e:
print(e)
A:
Just like what we do with variables, data can be kept as bytes in an in-memory buffer when we use the io module’s Byte IO operations.
Here is a sample program to demonstrate this:
import io
stream_str = io.BytesIO(b"JournalDev Python: \x00\x01")
print(stream_str.getvalue())
The getvalue() method returns the buffer's entire contents (bytes, in the case of a BytesIO).
So, the @Jean-FrançoisFabre answer is correct, and you should use
gzip = BytesIO(n)
For more information read the following doc:
https://docs.python.org/3/library/io.html
A:
Currently the file can be read as
import pandas as pd
role = 'role name'
bucket = 'bucket name'
data_key = 'data key'
data_location = 's3://{}/{}'.format(bucket, data_key)
data = pd.read_csv(data_location,compression='gzip', header=0, sep=',', quotechar='"')
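A note on this approach: pandas hands s3:// URLs to the s3fs package under the hood, so it must be installed (pip install s3fs). A sketch using the question's bucket and key:
import pandas as pd

df = pd.read_csv("s3://name/y/z/stderr.gz", compression="gzip")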
A:
I also got stuck reading the contents of gzipped csv files from s3 and got the same errors, but finally found a way to read a gzip.GzipFile and iterate through its rows with csv.reader:
for obj in bucket.objects.filter(Prefix=folder_prefix):
if obj.key.endswith(".gz"):
with gzip.GzipFile(fileobj=obj.get()["Body"]) as gzipped_csv_file:
csv_reader = csv.reader(StringIO(gzipped_csv_file.read().decode()))
for line in csv_reader:
process_line(line)
|
Reading contents of a gzip file from AWS S3 in Python
|
I am trying to read some logs from a Hadoop process that I run in AWS. The logs are stored in an S3 folder and have the following path.
bucketname = name
key = y/z/stderr.gz
Here Y is the cluster id and z is a folder name. Both of these act as folders(objects) in AWS. So the full path is like x/y/z/stderr.gz.
Now I want to unzip this .gz file and read the contents of the file. I don't want to download this file to my system; I want to save the contents in a Python variable.
This is what I have tried till now.
bucket_name = "name"
key = "y/z/stderr.gz"
obj = s3.Object(bucket_name,key)
n = obj.get()['Body'].read()
This is giving me a format which is not readable. I also tried
n = obj.get()['Body'].read().decode('utf-8')
which gives an error utf8' codec can't decode byte 0x8b in position 1: invalid start byte.
I have also tried
gzip = StringIO(obj)
gzipfile = gzip.GzipFile(fileobj=gzip)
content = gzipfile.read()
This returns an error IOError: Not a gzipped file
Not sure how to decode this .gz file.
Edit - Found a solution. Needed to pass n in it and use BytesIO
gzip = BytesIO(n)
|
[
"This is old, but you no longer need the BytesIO object in the middle of it (at least on my boto3==1.9.223 and python3.7) \nimport boto3\nimport gzip\n\ns3 = boto3.resource(\"s3\")\nobj = s3.Object(\"YOUR_BUCKET_NAME\", \"path/to/your_key.gz\")\nwith gzip.GzipFile(fileobj=obj.get()[\"Body\"]) as gzipfile:\n content = gzipfile.read()\nprint(content)\n\n",
"@Amit, I was trying to do the same thing to test decoding a file, and got your code to run with some modifications. I just had to remove the function def, the return, and rename the gzip variable, since that name is in use. \nimport json\nimport boto3\nfrom io import BytesIO\nimport gzip\n\ntry:\n s3 = boto3.resource('s3')\n key='YOUR_FILE_NAME.gz'\n obj = s3.Object('YOUR_BUCKET_NAME',key)\n n = obj.get()['Body'].read()\n gzipfile = BytesIO(n)\n gzipfile = gzip.GzipFile(fileobj=gzipfile)\n content = gzipfile.read()\n print(content)\nexcept Exception as e:\n print(e)\n raise e\n\n",
"You can use AWS S3 SELECT Object Content to read gzip contents\nS3 Select is an Amazon S3 capability designed to pull out only the data you need from an object, which can dramatically improve the performance and reduce the cost of applications that need to access data in S3.\nAmazon S3 Select works on objects stored in Apache Parquet format, JSON Arrays, and BZIP2 compression for CSV and JSON objects.\nRef: https://docs.aws.amazon.com/AmazonS3/latest/dev/selecting-content-from-objects.html\nfrom io import StringIO\nimport boto3\nimport pandas as pd\n\nbucket = 'my-bucket'\nprefix = 'my-prefix'\n\nclient = boto3.client('s3')\n\nfor object in client.list_objects_v2(Bucket=bucket, Prefix=prefix)['Contents']:\n if object['Size'] <= 0:\n continue\n\n print(object['Key'])\n r = client.select_object_content(\n Bucket=bucket,\n Key=object['Key'],\n ExpressionType='SQL',\n Expression=\"select * from s3object\",\n InputSerialization = {'CompressionType': 'GZIP', 'JSON': {'Type': 'DOCUMENT'}},\n OutputSerialization = {'CSV': {'QuoteFields': 'ASNEEDED', 'RecordDelimiter': '\\n', 'FieldDelimiter': ',', 'QuoteCharacter': '\"', 'QuoteEscapeCharacter': '\"'}},\n )\n\n for event in r['Payload']:\n if 'Records' in event:\n records = event['Records']['Payload'].decode('utf-8')\n payloads = (''.join(r for r in records))\n try:\n select_df = pd.read_csv(StringIO(payloads), error_bad_lines=False)\n for row in select_df.iterrows():\n print(row)\n except Exception as e:\n print(e)\n\n",
"Read Bz2 extension file from aws s3 in python\nimport json\nimport boto3\nfrom io import BytesIO\nimport bz2\ntry:\n s3 = boto3.resource('s3')\n key='key_name.bz2'\n obj = s3.Object('bucket_name',key)\n nn = obj.get()['Body'].read()\n gzipfile = BytesIO(nn)\n content = bz2.decompress(gzipfile.read())\n content = content.split('\\n')\n print len(content)\n\nexcept Exception as e:\n print(e)\n\n",
"Just like what we do with variables, data can be kept as bytes in an in-memory buffer when we use the io module’s Byte IO operations.\nHere is a sample program to demonstrate this:\nmport io\n\nstream_str = io.BytesIO(b\"JournalDev Python: \\x00\\x01\")\nprint(stream_str.getvalue())\n\nThe getvalue() function takes the value from the Buffer as a String.\nSo, the @Jean-FrançoisFabre answer is correct, and you should use\ngzip = BytesIO(n)\n\nFor more information read the following doc:\nhttps://docs.python.org/3/library/io.html\n",
"Currently the file can be read as\nimport pandas as pd\nrole = 'role name'\nbucket = 'bucket name'\ndata_key = 'data key'\ndata_location = 's3://{}/{}'.format(bucket, data_key)\ndata = pd.read_csv(data_location,compression='gzip', header=0, sep=',', quotechar='\"') \n\n",
"I also stuck with reading contents of gzipped csv files from s3, got the same errors, but finally found a way to read a gzip.GZipFile and iterate through it's rows with csv.reader:\nfor obj in bucket.objects.filter(Prefix=folder_prefix):\n if obj.key.endswith(\".gz\"):\n with gzip.GzipFile(fileobj=obj.get()[\"Body\"]) as gzipped_csv_file:\n csv_reader = csv.reader(StringIO(gzipped_csv_file.read().decode()))\n for line in csv_reader:\n process_line(line)\n\n"
] |
[
39,
20,
10,
1,
0,
0,
0
] |
[] |
[] |
[
"amazon_s3",
"amazon_web_services",
"boto3",
"python"
] |
stackoverflow_0041161006_amazon_s3_amazon_web_services_boto3_python.txt
|
Q:
Class object attributes to list in a one liner
I have a list of class objects, e.g.:
child1 = Child(Name = 'Max', height = 5.1, weight = 100)
child2 = Child(Name = 'Mimi', height = 4.1, weight = 80)
my_object_list = [child1, child2]
Is there a way to create a new list dynamically with one similar attribute of each object as a one-liner? I know how to do it in a for loop; that's why I am explicitly asking for a one-liner.
desired result: my_new_list = ['Max', 'Mimi']
Many Thanks in advance
A:
This kind of thing is made easy by the comprehension syntax in Python:
my_new_list = [item.name for item in old_list]
Now, if one does not know at coding-time which attribute should be retrieved, the getattr built-in can be used to retrieve an attribute by name passed as a string:
attr = 'name'
my_new_list = [getattr(item, attr) for item in old_list]
Or also, operator.attrgetter:
from operator import attrgetter
op = attrgetter("name")
new_list = [op(item) for item in old_list]
# and this looks pretty when used with "map" as well:
name_iterator = map(op, old_list)
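attrgetter also accepts several attribute names at once and then yields tuples, which is handy when more than one attribute is wanted (a sketch reusing the old_list from above):
from operator import attrgetter

pair = attrgetter("name", "height")
pairs = [pair(item) for item in old_list]  # e.g. [('Max', 5.1), ('Mimi', 4.1)]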
|
Class object attributes to list in a one liner
|
I have a list of class objects, e.g.:
child1 = Child(Name = 'Max', height = 5.1, weight = 100)
child2 = Child(Name = 'Mimi', height = 4.1, weight = 80)
my_object_list = [child1, child2]
Is there a way to create a new list dynamically with one similar attribute of each object as a one-liner? I know how to do it in a for loop; that's why I am explicitly asking for a one-liner.
desired result: my_new_list = ['Max', 'Mimi']
Many Thanks in advance
|
[
"this kind of things is made easy by the comprehension syntax in Python;\nmy_new_list = [item.name for item in old_list]\nNow, if one does not know at coding-time which attribute should be retrieved, the getattr built-in can be used to retrieve an attribute by name passed as a string:\nattr = 'name'\nmy_new_list = [geattr(item, attr) for item in old_list]`\n\n\nOr also, operator.attrgetter:\nfrom operator import attrgetter\n\nop = attrgetter(\"name\")\n\nnew_list = [op(item) for item in old_list]\n# and this looks pretty when used with \"map\" as well:\n\nname_iterator = map(op, old_list)\n\n\n"
] |
[
0
] |
[] |
[] |
[
"dynamic",
"list",
"python"
] |
stackoverflow_0074478676_dynamic_list_python.txt
|
Q:
How to split strings with multiple delimiters while keeping the delimiters | python
For example, I have a string section 213(d)-456(c)
How can I split it to get a list of strings:
['section', '213', '(', 'd', ')', '-', '456', '(', 'c', ')'].
Thank you!
A:
You can do so using Regex.
import re
text = "section 213(d)-456(c)"
output = re.split("(\W)", text)
Output: ['section', ' ', '213', '(', 'd', ')', '', '-', '456', '(', 'c', ')', '']
Here \W matches any non-word character!
A:
You can come close with
re.split(r'([-\s()])', 'section 213(d)-456(c)')
When the delimiter contains a capture group, the result includes the captured text.
However, this will also include the space delimiters in the result:
['section', ' ', '213', '(', 'd', ')', '', '-', '456', '(', 'c', ')', '']
You can easily remove these afterward.
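To get exactly the list from the question, filter out the whitespace and empty strings that the capture group leaves behind:
import re

parts = re.split(r'([-\s()])', 'section 213(d)-456(c)')
tokens = [p for p in parts if p.strip()]
# ['section', '213', '(', 'd', ')', '-', '456', '(', 'c', ')']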
|
How to split strings with multiple delimiters while keeping the delimiters | python
|
For example, I have a string section 213(d)-456(c)
How can I split it to get a list of strings:
['section', '213', '(', 'd', ')', '-', '456', '(', 'c', ')'].
Thank you!
|
[
"You can do so using Regex.\nimport re\ntext = \"section 213(d)-456(c)\"\noutput = re.split(\"(\\W)\", text)\n\nOutput: ['section', ' ', '213', '(', 'd', ')', '', '-', '456', '(', 'c', ')', '']\nHere \\W is for non-word character!\n",
"You can come close with\nre.split(r'([-\\s()])', 'section 213(d)-456(c)')\n\nWhen the delimiter contains a capture group, the result includes the captured text.\nHowever, this will also include the space delimiters in the result:\n['section', ' ', '213', '(', 'd', ')', '', '-', '456', '(', 'c', ')', '']\n\nYou can easily remove these afterward.\n"
] |
[
2,
0
] |
[] |
[] |
[
"python",
"split",
"string"
] |
stackoverflow_0074478758_python_split_string.txt
|
Q:
I want a validation condition for two fields in Pydantic
The task is to make a validator for two dependent fields.
If MCC is not empty, then you need to check that OUTSIDE is passed in the type field. And vice versa. If MCC is empty, then INSIDE should be passed in the type field.
I wrote this code, but it doesn't work. Can someone tell me the best way to do this?
import json
from pydantic.dataclasses import dataclass, ValidationError
from pydantic import root_validator, validator
from typing import Union, List, Literal
@dataclass
class DataInclude:
type: Literal['INSIDE', 'OUTSIDE']
accountID: Union[None, int]
date: int
tranDate: int
operationType: Literal['CREDIT', 'DEBIT', 'OPEN', 'DV']
paymentDetailType: Literal[
'BETWEEN_THEIR', 'INSIDE_BANK', 'EXTERNAL_INDIVIDUAL', 'EXTERNAL_ENTITY', 'OTHER_BANK',
'HOUSING_AND_COMMUNAL_SERVICE', 'MOBILE', 'INTERNET', 'TRANSPORT', 'TAX_AND_STATE_SERVICE',
'NOT_FINANCE', 'CONTACT_ADDRESSLESS', 'DIRECT', 'SFP', 'OUTSIDE_CASH', 'INSIDE_OTHER',
'OUTSIDE_OTHER', 'C2B_PAYMENT', 'INSIDE_DEPOSIT']
amount: Union[int, float, None]
documentAmount: Union[int, float, None]
comment: str
documentID: int | None
accountNumber: str
currencyCodeNumeric: int
merchantName: str | None
merchantNameRus: str | None
groupName: str
md5hash: str
svgImage: str | None
fastPayment: str | None
terminalCode: str | None
deviceCode: str | None
country: str | None
city: str | None
operationId: str | None
isCancellation: bool | None # BOOL!
cardTranNumber: str | None
opCode: int | None
MCC: int | None
@validator('type', 'MCC')
def check_passwords_match(cls, values):
type_operation, mcc = values['type'], values['MCC']
if mcc is not None:
if type_operation != "OUTSIDE":
raise ValueError('MCC NOT EQUAL TYPE OPERATION')
return values
@dataclass
class MessageResponse:
statusCode: int
errorMessage: Union[None, str]
data: List[DataInclude]
@staticmethod
def validation_body(data):
try:
data_new = json.loads(data)
MessageResponse(**data_new)
return True
except ValidationError as e:
raise e
I have tried various options. I have read the documentation, but could not find the answer to my question.
I use pydantic for automation api testing
A:
I think you are looking for this; the validator on MCC will have to handle both of your cases.
@validator("MCC")
def check_passwords_match(cls, v, values):
if "type" not in values:
raise ValueError("TYPE VALIDATION FAILED")
if (v is not None and values["type"] != "OUTSIDE") or (
v is None and values["type"] != "INSIDE"
):
raise ValueError("MCC NOT EQUAL TYPE OPERATION")
return v
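Since the question already imports root_validator, the same cross-field rule can also be written as a single root validator that sees both fields at once (a sketch for pydantic v1):
@root_validator
def check_mcc_matches_type(cls, values):
    mcc, type_ = values.get("MCC"), values.get("type")
    if (mcc is not None) != (type_ == "OUTSIDE"):
        raise ValueError("MCC NOT EQUAL TYPE OPERATION")
    return values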
|
I want a validation condition for two fields in Pydantic
|
The task is to make a validator for two dependent fields.
If MCC is not empty, then you need to check that OUTSIDE is passed in the type field. And vice versa. If MCC is empty, then INSIDE should be passed in the type field.
I wrote this code, but it doesn't work. Can someone tell me the best way to do this
import json
from pydantic.dataclasses import dataclass, ValidationError
from pydantic import root_validator, validator
from typing import Union, List, Literal
@dataclass
class DataInclude:
type: Literal['INSIDE', 'OUTSIDE']
accountID: Union[None, int]
date: int
tranDate: int
operationType: Literal['CREDIT', 'DEBIT', 'OPEN', 'DV']
paymentDetailType: Literal[
'BETWEEN_THEIR', 'INSIDE_BANK', 'EXTERNAL_INDIVIDUAL', 'EXTERNAL_ENTITY', 'OTHER_BANK',
'HOUSING_AND_COMMUNAL_SERVICE', 'MOBILE', 'INTERNET', 'TRANSPORT', 'TAX_AND_STATE_SERVICE',
'NOT_FINANCE', 'CONTACT_ADDRESSLESS', 'DIRECT', 'SFP', 'OUTSIDE_CASH', 'INSIDE_OTHER',
'OUTSIDE_OTHER', 'C2B_PAYMENT', 'INSIDE_DEPOSIT']
amount: Union[int, float, None]
documentAmount: Union[int, float, None]
comment: str
documentID: int | None
accountNumber: str
currencyCodeNumeric: int
merchantName: str | None
merchantNameRus: str | None
groupName: str
md5hash: str
svgImage: str | None
fastPayment: str | None
terminalCode: str | None
deviceCode: str | None
country: str | None
city: str | None
operationId: str | None
isCancellation: bool | None # BOOL!
cardTranNumber: str | None
opCode: int | None
MCC: int | None
@validator('type', 'MCC')
def check_passwords_match(cls, values):
type_operation, mcc = values['type'], values['MCC']
if mcc is not None:
if type_operation != "OUTSIDE":
raise ValueError('MCC NOT EQUAL TYPE OPERATION')
return values
@dataclass
class MessageResponse:
statusCode: int
errorMessage: Union[None, str]
data: List[DataInclude]
@staticmethod
def validation_body(data):
try:
data_new = json.loads(data)
MessageResponse(**data_new)
return True
except ValidationError as e:
raise e
I have tried various options. I have read the documentation, but could not find the answer to my question.
I use pydantic for automation api testing
|
[
"I think you are looking for this, the validator on MCC will have to deal with both your cases.\n @validator(\"MCC\")\n def check_passwords_match(cls, v, values):\n if \"type\" not in values:\n raise ValueError(\"TYPE VALIDATION FAILED\")\n if (v is not None and values[\"type\"] != \"OUTSIDE\") or (\n v is None and values[\"type\"] != \"INSIDE\"\n ):\n raise ValueError(\"MCC NOT EQUAL TYPE OPERATION\")\n return v\n\n"
] |
[
0
] |
[] |
[] |
[
"pydantic",
"python",
"validation"
] |
stackoverflow_0074475176_pydantic_python_validation.txt
|
Q:
Selecting columns based on characters in column names
I have a pandas dataframe with column names like ['INV01_M1_I', 'INV01_M1_V', 'INV01_M2_I', 'INV01_M2_V', 'INV02_M1_I', 'INV02_M1_V', 'INV02_M2_I', 'INV02_M2_V', ...] and so on. I want to sum the columns which share the same 'INVxx' prefix and the same last character, i.e. I or V. That is, sum INV01_M1_I+INV01_M2_I in one column and INV02_M1_I+INV02_M2_I in another (if I can name them there, that would be nice). I have 100+ columns where the number runs from 01 upwards for INV. I have gone through different answers where regex, filter(like=), and other solutions are provided, but I need to match the first 5 characters and the last character and then also sum those columns.
import numpy as np
import pandas as pd
data = [[20, 10, 13, 16, 18, 20, 9, 6], [7, 15, 11, 16, 27, 7, 19, 10]]
df = pd.DataFrame(data, columns=['INV01_M1_I', 'INV01_M1_V','INV01_M2_I','INV01_M2_V',
'INV02_M1_I','INV02_M1_V','INV02_M2_I','INV02_M2_V'])
print(df)
A:
Here is one way:
for cols in df.columns.str.split('_'):
if not cols[0] +'_'+ cols[2] in df.columns:
df[cols[0] +'_'+ cols[2]] = df[[col for col in df.columns if col.startswith(cols[0]) and col.endswith(cols[2])]].sum(axis=1)
output :
>>
INV01_M1_I INV01_M1_V INV01_M2_I ... INV01_V INV02_I INV02_V
0 20 10 13 ... 26 27 26
1 7 15 11 ... 31 46 17
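An equivalent without the explicit loop, assuming every column follows the INVxx_My_Z pattern: strip the _My part and group the columns by what remains (groupby with axis=1 is deprecated in recent pandas, where df.T.groupby(groups).sum().T does the same):
groups = df.columns.str.replace(r'_M\d+', '', regex=True)  # 'INV01_M1_I' -> 'INV01_I'
sums = df.groupby(groups, axis=1).sum()
print(sums)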
|
Selecting columns based on characters in column names
|
I have a pandas dataframe with column names like ['INV01_M1_I', 'INV01_M1_V', 'INV01_M2_I', 'INV01_M2_V', 'INV02_M1_I', 'INV02_M1_V', 'INV02_M2_I', 'INV02_M2_V', ...] and so on. I want to sum the columns which share the same 'INVxx' prefix and the same last character, i.e. I or V. That is, sum INV01_M1_I+INV01_M2_I in one column and INV02_M1_I+INV02_M2_I in another (if I can name them there, that would be nice). I have 100+ columns where the number runs from 01 upwards for INV. I have gone through different answers where regex, filter(like=), and other solutions are provided, but I need to match the first 5 characters and the last character and then also sum those columns.
import numpy as np
import pandas as pd
data = [[20, 10, 13, 16, 18, 20, 9, 6], [7, 15, 11, 16, 27, 7, 19, 10]]
df = pd.DataFrame(data, columns=['INV01_M1_I', 'INV01_M1_V','INV01_M2_I','INV01_M2_V',
'INV02_M1_I','INV02_M1_V','INV02_M2_I','INV02_M2_V'])
print(df)
|
[
"here is one way :\nfor cols in df.columns.str.split('_'): \n if not cols[0] +'_'+ cols[2] in df.columns:\n df[cols[0] +'_'+ cols[2]] = df[[col for col in df.columns if col.startswith(cols[0]) and col.endswith(cols[2])]].sum(axis=1)\n\noutput :\n>>\n INV01_M1_I INV01_M1_V INV01_M2_I ... INV01_V INV02_I INV02_V\n0 20 10 13 ... 26 27 26\n1 7 15 11 ... 31 46 17\n\n"
] |
[
2
] |
[] |
[] |
[
"character",
"data_science",
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074478602_character_data_science_dataframe_pandas_python.txt
|
Q:
How to use redirect() properly with parameters?
Reverse for 'post_detail' not found. 'post_detail' is not a valid view function or pattern name.
return redirect('post_detail', slug=post.slug)
This is my comment view:
def post_detail(request, year, month, day, slug):
post = get_object_or_404(Post, slug=slug, status='published', publish__year=year, publish__month=month, publish__day=day)
tags = Tag.objects.all()
tagsList = []
for tag in post.tags.get_queryset():
tagsList.append(tag.name)
profile = Profile.objects.get(user=request.user)
comments = post.comments.filter(active=True)
new_comment = None
if request.method == 'POST':
comment_form = CommentForm(data=request.POST)
if comment_form.is_valid():
new_comment = comment_form.save(commit=False)
new_comment.profile = profile
new_comment.post = post
new_comment.save()
return redirect('post_detail', slug=post.slug)
else:
comment_form = CommentForm()
post_tags_ids = post.tags.values_list('id', flat=True)
similar_posts = Post.published.filter(tags__in=post_tags_ids).exclude(id=post.id)
similar_posts = similar_posts.annotate(same_tags=Count('tags')).order_by('-same_tags', '-publish')[:3]
return render(request, 'blog/post/detail.html', {'post': post, 'comments': comments, 'new_comment': new_comment, 'comment_form': comment_form, 'similar_posts': similar_posts, 'tagsList': tagsList, 'tags': tags})
my urls.py
from django.urls import path, register_converter
from django.urls.converters import SlugConverter
from . import views
app_name = 'blog'
class PersianSlugConvertor(SlugConverter):
regex = '[-a-zA-Z0-9_ضصثقفغعهخحجچگکمنتالبیسشظطزژرذدپوءآ]+'
register_converter(PersianSlugConvertor, 'persian_slug')
urlpatterns = [
path('', views.post_list, name='post_list'),
path('search', views.post_search, name='post_search'),
path('tag/<persian_slug:tag_slug>', views.post_list, name='post_list_by_tag'),
path('<int:year>/<int:month>/<int:day>/<persian_slug:slug>', views.post_detail, name='post_detail'),
]
Template:
<div id="post_comments">
<h4>نظرات</h4>
<div class="comment">
{% for comment in comments %}
<div class="row">
<figure class="col-sm-2 col-md-2"> <img width="90" height="90" class="img-circle" src="{{ comment.profile.photo.url }}" alt="عکس کاربر"> </figure>
<div class="col-sm-10 col-md-10">
<div class="comment_name">{{ comment.profile }}<a class="reply" href="#"> </a> </div>
<div class="comment_date"><i class="fa-time"></i>{{ comment.created }}</div>
<div class="the_comment">
<p>{{ comment.body|linebreaks }}</p>
</div>
</div>
</div>
{% empty %}
<h5>هیچ نظری وجود ندارد</h5>
{% endfor %}
</div>
</div>
{% if request.user.is_authenticated %}
<div class="new_comment">
{% if new_comment %}
<h4>نظر شما با موفقیت ثبت شد و در حال بررسی است.</h4>
{% else %}
<h4>نظر خود را اضافه کنید</h4></br>
<form method="post">
<div class="row" dir="rtl">
<div class="col-sm-12 col-md-8">
{{ comment_form.body|attr:"class:form-control"|attr:"type:text" }}
</div>
</div>
{% csrf_token %}
<div class="row"><br/>
<div class="col-sm-12 col-md-8"> <input type="submit" value="ارسال نظر" class="btn send btn-primary" href="#"></input> </div>
</div>
</form>
{% endif %}
</div>
{% else %}
<p>برای انتشار دیدگاه خود <a href="{% url 'account:login' %}">وارد پروفایل کاربری</a> خود شوید یا در سایت <a href="{% url 'account:register' %}">ثبت نام</a> کنید.</p>
{% endif %}
</div>
Is there anything I missed in the redirect function?
A:
The post_detail view requires four URL params and you are only passing one, so pass all of them:
return redirect('blog:post_detail',year=year, month=month, day=day, slug=post.slug)
For redirecting in the same page simply use:
return HttpResponseRedirect(request.path_info)
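Note the 'blog:' prefix: because urls.py sets app_name = 'blog', the bare name can't be reversed, which is what raised the original error. The namespaced name works with reverse() as well:
from django.urls import reverse

url = reverse("blog:post_detail",
              kwargs={"year": year, "month": month, "day": day, "slug": post.slug})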
|
How to use redirect() properly with parameters?
|
Reverse for 'post_detail' not found. 'post_detail' is not a valid view function or pattern name.
return redirect('post_detail', slug=post.slug)
This is my comment view:
def post_detail(request, year, month, day, slug):
post = get_object_or_404(Post, slug=slug, status='published', publish__year=year, publish__month=month, publish__day=day)
tags = Tag.objects.all()
tagsList = []
for tag in post.tags.get_queryset():
tagsList.append(tag.name)
profile = Profile.objects.get(user=request.user)
comments = post.comments.filter(active=True)
new_comment = None
if request.method == 'POST':
comment_form = CommentForm(data=request.POST)
if comment_form.is_valid():
new_comment = comment_form.save(commit=False)
new_comment.profile = profile
new_comment.post = post
new_comment.save()
return redirect('post_detail', slug=post.slug)
else:
comment_form = CommentForm()
post_tags_ids = post.tags.values_list('id', flat=True)
similar_posts = Post.published.filter(tags__in=post_tags_ids).exclude(id=post.id)
similar_posts = similar_posts.annotate(same_tags=Count('tags')).order_by('-same_tags', '-publish')[:3]
return render(request, 'blog/post/detail.html', {'post': post, 'comments': comments, 'new_comment': new_comment, 'comment_form': comment_form, 'similar_posts': similar_posts, 'tagsList': tagsList, 'tags': tags})
my urls.py
from django.urls import path, register_converter
from django.urls.converters import SlugConverter
from . import views
app_name = 'blog'
class PersianSlugConvertor(SlugConverter):
regex = '[-a-zA-Z0-9_ضصثقفغعهخحجچگکمنتالبیسشظطزژرذدپوءآ]+'
register_converter(PersianSlugConvertor, 'persian_slug')
urlpatterns = [
path('', views.post_list, name='post_list'),
path('search', views.post_search, name='post_search'),
path('tag/<persian_slug:tag_slug>', views.post_list, name='post_list_by_tag'),
path('<int:year>/<int:month>/<int:day>/<persian_slug:slug>', views.post_detail, name='post_detail'),
]
Template:
<div id="post_comments">
<h4>نظرات</h4>
<div class="comment">
{% for comment in comments %}
<div class="row">
<figure class="col-sm-2 col-md-2"> <img width="90" height="90" class="img-circle" src="{{ comment.profile.photo.url }}" alt="عکس کاربر"> </figure>
<div class="col-sm-10 col-md-10">
<div class="comment_name">{{ comment.profile }}<a class="reply" href="#"> </a> </div>
<div class="comment_date"><i class="fa-time"></i>{{ comment.created }}</div>
<div class="the_comment">
<p>{{ comment.body|linebreaks }}</p>
</div>
</div>
</div>
{% empty %}
<h5>هیچ نظری وجود ندارد</h5>
{% endfor %}
</div>
</div>
{% if request.user.is_authenticated %}
<div class="new_comment">
{% if new_comment %}
<h4>نظر شما با موفقیت ثبت شد و در حال بررسی است.</h4>
{% else %}
<h4>نظر خود را اضافه کنید</h4></br>
<form method="post">
<div class="row" dir="rtl">
<div class="col-sm-12 col-md-8">
{{ comment_form.body|attr:"class:form-control"|attr:"type:text" }}
</div>
</div>
{% csrf_token %}
<div class="row"><br/>
<div class="col-sm-12 col-md-8"> <input type="submit" value="ارسال نظر" class="btn send btn-primary" href="#"></input> </div>
</div>
</form>
{% endif %}
</div>
{% else %}
<p>برای انتشار دیدگاه خود <a href="{% url 'account:login' %}">وارد پروفایل کاربری</a> خود شوید یا در سایت <a href="{% url 'account:register' %}">ثبت نام</a> کنید.</p>
{% endif %}
</div>
Is there anything i missed in redirect function?
|
[
"The post_detail view requires the four url params and you are only passing one param, so kindly pass all the params as:\nreturn redirect('blog:post_detail',year=year, month=month, day=day, slug=post.slug)\n\nFor redirecting in the same page simply use:\nreturn HttpResponseRedirect(request.path_info)\n\n"
] |
[
2
] |
[] |
[] |
[
"django",
"django_forms",
"django_urls",
"django_views",
"python"
] |
stackoverflow_0074478825_django_django_forms_django_urls_django_views_python.txt
|
Q:
Trying to filter in dask.read_parquet tries to compare NoneType and str
I have a project where I pass the following load_args to read_parquet:
filters = {'filters': [('itemId', '=', '9403cfde-7fe5-4c9c-916c-41ff0b595c5c')]}
According to the documentation, a List[Tuple] like this should be accepted and I should get all partitions which match the predicate (or equivalently, filter out those that do not).
However, it gives me the following error:
│ │
│ /home/user/project/venv/lib/python3.10/site-packages/dask/dataframe/io/parquet/ |
| core.py:1275 in apply_conjunction │
| |
| 1264 | for part, stats in zip(parts, statistics): |
| 1265 | | | | if "filter" in stats and stats["filter"]: |
| 1266 | | | | | continue # Filtered by engine |
| 1267 | | | | try: |
| 1268 | | | | | c = toolz.groupby("name", stats["columns"])[column][0] |
| 1269 | | | | | min = c["min"] |
| 1270 | | | | | max = c["max"] |
| 1271 | | | | except KeyError: |
│ 1272 │ │ │ │ │ out_parts.append(part) │
│ 1273 │ │ │ │ │ out_statistics.append(stats) │
│ 1274 │ │ │ │ else: │
│ ❱ 1275 │ │ │ │ │ if ( │
│ 1276 │ │ │ │ │ │ operator in ("==", "=") │
│ 1277 │ │ │ │ │ │ and min <= value <= max │
│ 1278 │ │ │ │ │ │ or operator == "!=" │
╰──────────────────────────────────────────────────────────────────────────────────╯
TypeError: '<=' not supported between instances of 'NoneType' and 'str'
It seems that read_parquet tries to compare min and max statistics for the str column I wish to filter on. Even so, str values should be comparable (though comparing them makes little sense here, seeing how the itemId is a random UUID).
Still, I expected this to work. What am I doing wrong?
A:
Judging from the traceback, min and max are assigned from the row-group statistics (c["min"] and c["max"]) right before the failing comparison, and the TypeError says one of them is None: the parquet metadata apparently has no min/max recorded for the itemId column, so dask ends up evaluating None <= '9403cfde-…'. Rewriting the files with column statistics enabled should let the predicate pushdown work; otherwise, drop the filters argument and apply an ordinary mask after loading.
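A sketch of that fallback (the path is hypothetical): read without the pushdown filter and mask afterwards, which never touches the row-group statistics:
import dask.dataframe as dd

ddf = dd.read_parquet("path/to/dataset")  # no filters= here
ddf = ddf[ddf["itemId"] == "9403cfde-7fe5-4c9c-916c-41ff0b595c5c"]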
|
Trying to filter in dask.read_parquet tries to compare NoneType and str
|
I have a project where I pass the following load_args to read_parquet:
filters = {'filters': [('itemId', '=', '9403cfde-7fe5-4c9c-916c-41ff0b595c5c')]}
According to the documentation, a List[Tuple] like this should be accepted and I should get all partitions which match the predicate (or equivalently, filter out those that do not).
However, it gives me the following error:
│ │
│ /home/user/project/venv/lib/python3.10/site-packages/dask/dataframe/io/parquet/ |
| core.py:1275 in apply_conjunction │
| |
| 1264 | for part, stats in zip(parts, statistics): |
| 1265 | | | | if "filter" in stats and stats["filter"]: |
| 1266 | | | | | continue # Filtered by engine |
| 1267 | | | | try: |
| 1268 | | | | | c = toolz.groupby("name", stats["columns"])[column][0] |
| 1269 | | | | | min = c["min"] |
| 1270 | | | | | max = c["max"] |
| 1271 | | | | except KeyError: |
│ 1272 │ │ │ │ │ out_parts.append(part) │
│ 1273 │ │ │ │ │ out_statistics.append(stats) │
│ 1274 │ │ │ │ else: │
│ ❱ 1275 │ │ │ │ │ if ( │
│ 1276 │ │ │ │ │ │ operator in ("==", "=") │
│ 1277 │ │ │ │ │ │ and min <= value <= max │
│ 1278 │ │ │ │ │ │ or operator == "!=" │
╰──────────────────────────────────────────────────────────────────────────────────╯
TypeError: '<=' not supported between instances of 'NoneType' and 'str'
It seems that read_parquet tries to compute min and max values for my str value that I wish to filter on, but I'm not sure that makes sense in this case. Even so, str values should be comparable (though it might not make a huge amount of sense in this case, seeing how the itemId is a random UUID).
Still, I expected this to work. What am I doing wrong?
|
[
"The problem probably arises when min and max haven't been redefined before, so they still refer to the built-in functions that compute the minimum and maximum of two numbers, which obviously can't be compared with a string. Try using different name for these variables (as a rule of thumb, avoid too generic variable names which could be already defined in the standard library).\n"
] |
[
0
] |
[] |
[] |
[
"dask",
"parquet",
"python"
] |
stackoverflow_0074478839_dask_parquet_python.txt
|
Q:
Can't add standard metrics for multioutput model
I have a classification + detection model of cats and dogs based on MobileNet v2. It trains well, but now I want to add metrics for it and I can't do that. Here is the main part of the code:
def localization_loss(y_true, yhat):
delta_coord = tf.reduce_sum(tf.square(y_true[:,:2] - yhat[:,:2]))
h_true = y_true[:,3] - y_true[:,1]
w_true = y_true[:,2] - y_true[:,0]
h_pred = yhat[:,3] - yhat[:,1]
w_pred = yhat[:,2] - yhat[:,0]
delta_size = tf.reduce_sum(tf.square(w_true - w_pred) + tf.square(h_true-h_pred))
return delta_coord + delta_size
classloss = tf.keras.losses.BinaryCrossentropy()
regressloss = localization_loss
opt = tf.keras.optimizers.Adam(learning_rate=0.0001, decay=0.001)
model.compile(
optimizer = opt,
loss=[classloss, regressloss],
# metrics=["accuracy", "meaniou"],
)
hist = model.fit(train, epochs=10, validation_data=valid)
It works fine, but if I uncomment the metrics line, I get this error:
ValueError: as_list() is not defined on an unknown TensorShape.
If I use objects instead of strings (metrics=[Accuracy(), MeanIoU(2)]), it gives this error:
TypeError: '>' not supported between instances of 'NoneType' and 'int'
What am I doing wrong and how can I fix this?
UPD: If I use accuracy for both outputs (metrics=[[Accuracy()], [Accuracy()]]), it trains without any error, so I conclude there is something wrong with MeanIoU in my code.
Btw, here is a prediction for a batch of 8 samples (two outputs: class + coordinates as 4 numbers):
(array([[0.7866989 ],
[0.973974 ],
[0.9148978 ],
[0.28471756],
[0.9899457 ],
[0.99033797],
[0.7237025 ],
[0.81942046]], dtype=float32),
array([[0.2515184 , 0.25495493, 0.3642715 , 0.09299589],
[0.87964845, 0.3134839 , 0.54833114, 0.36701256],
[0.0304133 , 0.45813853, 0.19692126, 0.244534 ],
[0.22500503, 0.70299083, 0.00123629, 0.41123846],
[0.37099576, 0.6092719 , 0.13407992, 0.40188596],
[0.32103425, 0.6240243 , 0.02281341, 0.03058532],
[0.28678325, 0.19885723, 0.50342166, 0.57963324],
[0.41590106, 0.21439987, 0.94105315, 0.3379435 ]], dtype=float32))
I thought maybe the format for MeanIoU is wrong, but arrays of 4 numbers seem valid for MeanIoU, don't they?
A:
As I answered here, the correct metrics are BinaryAccuracy and a custom MeanIoU (the default MeanIoU is not applicable to bbox regression, as I understand it). A working code snippet is in the first link.
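For reference, Keras accepts per-output metrics as a list of lists in output order (or a dict keyed by output layer names), which matches what the UPD in the question discovered. A sketch reusing the compile call from the question:
model.compile(
    optimizer=opt,
    loss=[classloss, regressloss],
    metrics=[[tf.keras.metrics.BinaryAccuracy()], []],  # one metric list per output
)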
|
Can't add standard metrics for multioutput model
|
I have classification + detection model of cats and dogs based on MobileNet v2. It trains well, but now I want to add metrics for it and I can't do that. Here is the main part of code:
def localization_loss(y_true, yhat):
delta_coord = tf.reduce_sum(tf.square(y_true[:,:2] - yhat[:,:2]))
h_true = y_true[:,3] - y_true[:,1]
w_true = y_true[:,2] - y_true[:,0]
h_pred = yhat[:,3] - yhat[:,1]
w_pred = yhat[:,2] - yhat[:,0]
delta_size = tf.reduce_sum(tf.square(w_true - w_pred) + tf.square(h_true-h_pred))
return delta_coord + delta_size
classloss = tf.keras.losses.BinaryCrossentropy()
regressloss = localization_loss
opt = tf.keras.optimizers.Adam(learning_rate=0.0001, decay=0.001)
model.compile(
optimizer = opt,
loss=[classloss, regressloss],
# metrics=["accuracy", "meaniou"],
)
hist = model.fit(train, epochs=10, validation_data=valid)
It works fine, but if I uncomment metrics line, I get this error:
ValueError: as_list() is not defined on an unknown TensorShape.
If I use objects instead of strings (metrics=[Accuracy(), MeanIoU(2)]), it gives this error:
TypeError: '>' not supported between instances of 'NoneType' and 'int'
What am I doing wrong and how can I fix this?
UPD: If I use accuracy for both outputs (metrics=[[Accuracy()], [Accuracy()]]), I train without any error, so I conclude there is something wrong with MeanIoU in my code.
Btw, there is prediction for batch(8) samples (two outputs: class + coordinates as 4 numbers):
(array([[0.7866989 ],
[0.973974 ],
[0.9148978 ],
[0.28471756],
[0.9899457 ],
[0.99033797],
[0.7237025 ],
[0.81942046]], dtype=float32),
array([[0.2515184 , 0.25495493, 0.3642715 , 0.09299589],
[0.87964845, 0.3134839 , 0.54833114, 0.36701256],
[0.0304133 , 0.45813853, 0.19692126, 0.244534 ],
[0.22500503, 0.70299083, 0.00123629, 0.41123846],
[0.37099576, 0.6092719 , 0.13407992, 0.40188596],
[0.32103425, 0.6240243 , 0.02281341, 0.03058532],
[0.28678325, 0.19885723, 0.50342166, 0.57963324],
[0.41590106, 0.21439987, 0.94105315, 0.3379435 ]], dtype=float32))
I thought may be format for MeanIoU is wrong, but arrays of 4 numbers seems valid for MeanIoU, doesn't it?
|
[
"As I answered here, correct metrics are: BinaryAccuracy and custom MeanIoU (default MeanIoU is not applicable to bboxes regression as I understood). Working code snippet is in the first link.\n"
] |
[
0
] |
[] |
[] |
[
"keras",
"python",
"tensorflow",
"tensorflow2.0"
] |
stackoverflow_0074460685_keras_python_tensorflow_tensorflow2.0.txt
|
Q:
Print String in Python
I tried running this basic python script to print something, and it doesn't seem to be executing properly.
name = "Tyler";
print{name};
I am getting this error:
File "C:\Users\tyler\main.py", line 2
print{name};
^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?
I tried changing line 2 to:
print(...)
but it prints out
Ellipsis, not my name.
A:
You don't need semicolons in Python.
The issue was the use of curly braces. Pass your variable to print() like this:
name = "Tyler"
print(name)
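If you later want the variable embedded in a longer message, an f-string keeps it to one line:
name = "Tyler"
print(f"My name is {name}")  # prints: My name is Tyler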
|
Print String in Python
|
I tried running this basic python script to print something, and it doesn't seem to be executing properly.
name = "Tyler";
print{name};
I am getting this error:
File "C:\Users\tyler\main.py", line 2
print{name};
^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?
I tried changing line 2 to:
print(...)
but it prints out
Ellipsis, not my name.
|
[
"You don't need semicolons in Python\nThe issue was the use of curly braces. Call your variable name with print() like this:\nname = \"Tyler\"\nprint(name)\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"string"
] |
stackoverflow_0074478880_python_string.txt
|
Q:
Python Keras: Pass y/target to custom activation function
I would like to pass my Keras model's y (target/response/etc.) to a custom activation function.
My custom activation function which limits the fit range to be within lower and upper is:
def activation_range(x, lower=-1, upper=1):
    """
    Custom activation layer to restrict layer output range
    """
    x02 = backend.tanh(x) + 1  # x in range(0,2)
    scale = (upper-lower)/2
    return x02 * scale + lower
and I have to pass it to Keras at model initialisation as follows
model.add(keras.layers.Dense(1, activation=lambda x: activation_range(x, lower=lower, upper=upper)))
where upper and lower are calculated before the model.fit function is called.
However, is there a way to set lower and upper based on the values of y, e.g. lower = y.min() and upper = y.max(), after the model has been initialised, so that Keras calculates upper and lower from y while fitting (as model.fit is run), instead of me having to pass them to Keras beforehand?
A:
You can. Just use the functional API in combination with subclassed layers instead of the basic Sequential, which only supports single-input, single-output models. Note that this requires you to pass (x, y) as the x argument to model.fit and also as the input during inference.
import tensorflow as tf
import numpy as np
class CustomActivation(tf.keras.layers.Layer):
    def call(self, inp):
        x, y = inp
        upper = tf.math.reduce_max(y)
        lower = tf.math.reduce_min(y)
        return (tf.math.tanh(x) + 1) * (upper - lower) / 2 + lower
x_in = tf.keras.layers.Input(shape=(10,))
y_in = tf.keras.layers.Input(shape=(1,))
x_before_act = tf.keras.layers.Dense(units=1, activation=None)(x_in)
x_after_act = CustomActivation()([x_before_act, y_in])
model = tf.keras.models.Model(inputs=[x_in, y_in], outputs=x_after_act)
You can verify this with the sample below and see that the model's output is always between -10 and 10.
x = np.random.normal(size=(32,10))
y = np.random.randint(low=-10, high=10, size=(32,))
model([x,y])
You can compile and train the model as well.
model.compile(optimizer=tf.keras.optimizers.SGD(), loss=tf.keras.losses.mse)
model.fit(x=(x,y), y=y, epochs=1)
|
Python Keras: Pass y/target to custom activation function
|
I would like to pass my Python Keras model y (target/response/etc) to a custom activation.
My custom activation function which limits the fit range to be within lower and upper is:
def activation_range(x, lower=-1, upper=1):
    """
    Custom activation layer to restrict layer output range
    """
    x02 = backend.tanh(x) + 1  # x in range(0,2)
    scale = (upper-lower)/2
    return x02 * scale + lower
and I have to pass it to Keras at model initialisation as follows
model.add(keras.layers.Dense(1, activation=lambda x: activation_range(x, lower=lower, upper=upper)))
where upper and lower are calculated before the model.fit function is called.
However, is there a way to set lower and upper based on the values of y, e.g. lower = y.min() and upper = y.max(), after the model has been initialised, so that Keras calculates upper and lower from y while fitting (as model.fit is run), instead of me having to pass them to Keras beforehand?
|
[
"You can. Just use functional API in combination with subclassed layers instead of the basic Sequential which only supports single-input single-output models. Note that this requires you to pass (x,y) as the x argument to model.fit and also as the input during inference.\nimport tensorflow as tf\nimport numpy as np\n\nclass CustomActivation(tf.keras.layers.Layer):\n def call(self, inp):\n x, y = inp\n upper = tf.math.reduce_max(y)\n lower = tf.math.reduce_min(y)\n return (tf.math.tanh(x) + 1) * (upper - lower) / 2 + lower\n\nx_in = tf.keras.layers.Input(shape=(10,))\ny_in = tf.keras.layers.Input(shape=(1,))\nx_before_act = tf.keras.layers.Dense(units=1, activation=None)(x_in)\nx_after_act = CustomActivation()([x_before_act, y_in])\nmodel = tf.keras.models.Model(inputs=[x_in, y_in], outputs=x_after_act)\n\nYou can verify this with the sample below and see that the model's output is always between -10 and 10.\nx = np.random.normal(size=(32,10))\ny = np.random.randint(low=-10, high=10, size=(32,))\nmodel([x,y])\n\nYou can also compile and train the model as well.\nmodel.compile(optimizer=tf.keras.optimizers.SGD(), loss=tf.keras.losses.mse)\nmodel.fit(x=(x,y), y=y, epochs=1)\n\n"
] |
[
1
] |
[] |
[] |
[
"keras",
"python",
"tensorflow"
] |
stackoverflow_0074467258_keras_python_tensorflow.txt
|
Q:
Python NameError: name is not defined (variable names already defined but I get error)
I am trying to run the following code. I get the error NameError: name 'XXXXX' is not defined.
if __name__ == '__main__':
    land_dir = "C:/Users/mb/Documents/Land"
    MOD_dir = "C:/Users/mb/Documents/MOD"

    def search_land_name(path):
        """to get the land list file name"""
        output_list = []
        pt = os.listdir(path)
        for item in pt:
            if str.find(item, 'B3.TIF') != -1:  # satisfied conditions
                output_list.append(item[:-6])
        return np.unique(output_list)

    for item in land_file_list:
        print(item)
        LD_QA_name = item + "QA.TIF"
        LD_B1_name = item + "B1.TIF"
        LD_B2_name = item + "B2.TIF"
        LD_B3_name = item + "B3.TIF"
        LD_B4_name = item + "B4.TIF"
        LD_B5_name = item + "B5.TIF"
        LD_B6_name = item + "B6.TIF"
        LD_B7_name = item + "B7.TIF"
print(LD_B3_name)
NameError Traceback (most recent call last)
Cell In [8], line 1
----> 1 print(LD_B3_name)
NameError: name 'LD_B3_name' is not defined
Any suggestions, please?
A:
LD_B3_name is locally defined inside your function search_landsat_name.
That means that the variable only exists inside your function.
If you want to access the variable outside of search_landsat_name you can simply return the variable:
def search_landsat_name(path):
# some code
return LD_B3_name
LD_B3_name = search_landsat_name(path)
print(LD_B3_name)
But keep in mind that LD_B3_name = search_landsat_name(path) creates an independent variable. If you change the value it doesn't affect LD_B3_name inside your function.
Check out global vs local variables to help you understand this more.
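A minimal illustration of that scoping rule (my sketch, not from the original answer):
def f():
    x = 1  # x is local to f and disappears when f returns

f()
print(x)  # NameError: name 'x' is not defined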
|
Python NameError: name is not defined (variable names already defined but I get error)
|
I am trying to run the following code. I get the error NameError: name 'XXXXX' is not defined.
if __name__ == '__main__':
    land_dir = "C:/Users/mb/Documents/Land"
    MOD_dir = "C:/Users/mb/Documents/MOD"

    def search_land_name(path):
        """to get the land list file name"""
        output_list = []
        pt = os.listdir(path)
        for item in pt:
            if str.find(item, 'B3.TIF') != -1:  # satisfied conditions
                output_list.append(item[:-6])
        return np.unique(output_list)

    for item in land_file_list:
        print(item)
        LD_QA_name = item + "QA.TIF"
        LD_B1_name = item + "B1.TIF"
        LD_B2_name = item + "B2.TIF"
        LD_B3_name = item + "B3.TIF"
        LD_B4_name = item + "B4.TIF"
        LD_B5_name = item + "B5.TIF"
        LD_B6_name = item + "B6.TIF"
        LD_B7_name = item + "B7.TIF"
print(LD_B3_name)
NameError Traceback (most recent call last)
Cell In [8], line 1
----> 1 print(LD_B3_name)
NameError: name 'LD_B3_name' is not defined
Any suggestions, please?
|
[
"LD_B3_name is locally defined inside your function search_landsat_name.\nThat means that the variable only exists inside your function.\nIf you want to access the variable outside of search_landsat_name you can simply return the variable:\ndef search_landsat_name(path):\n # some code\n return LD_B3_name\n\nLD_B3_name = search_landsat_name(path)\nprint(LD_B3_name)\n\nBut keep in mind that LD_B3_name = search_landsat_name(path) creates an independent variable. If you change the value it doesn't affect LD_B3_name inside your function.\nCheck out global vs local variables to help you understand this more.\n"
] |
[
0
] |
[] |
[] |
[
"arrays",
"list",
"python",
"python_3.x"
] |
stackoverflow_0074478937_arrays_list_python_python_3.x.txt
|
Q:
Match key word in list of strings to variables
I am reading all files from a directory and storing the file paths of those in that directory in a list using
files = [os.path.abspath(x) for x in os.listdir(r"my directory")]
Each file is a unique template, so the resulting list is something like
[C:\Users\....\Template_Coversheet.xlsx
C:\Users\....\Template_Blanks.xlsx,
C:\Users\....\Template_Stocks.xlsx,
C:\Users\....\Template_May.xlsx]
*Note: the files aren't necessarily always in the same order
I want to read each of these files and assign each one to a variable that corresponds to the type of template.
I can do this with a for loop and a long series of if statements
for f in files:
    if "Blanks" in f:
        blank=f
    if "Stocks" in f:
        stock=f
    if "May" in f:
        may=f
    if "Coversheet" in f:
        coversheet=f
But is there an easier or more pythonic way to achieve this?
A:
First, if this depends only on the file name, you can use that instead of the whole path. You can use regular expressions if the patterns are complex. But in your case, if it's just Template_TEMPLATENAME.xlsx, you can create a dictionary that maps TEMPLATENAME to the full name. The code would be something like this:
import os
mapping = dict()
for file in os.listdir():
    mapping[file.replace(".xlsx", "").split("_")[-1]] = file
For such a list:
Template_A.xlsx
Template_B.xlsx
Template_Foo.xlsx
You will have:
{
"A": "Template_A.xlsx",
"B": "Template_B.xlsx",
"Foo": "Template_Foo.xlsx"
}
A:
Replace all your individual variables with a dict, and then you can do something like:
TEMPLATE_TYPES = "Blanks", "Stocks", "May", "Coversheet"
files_by_template = {t: [] for t in TEMPLATE_TYPES}
for f in files:
    for t in TEMPLATE_TYPES:
        if t in f:
            files_by_template[t].append(f)
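A short usage sketch (my addition, assuming the file list from the question): the individual variables from the question become dictionary lookups, for example:
coversheet = files_by_template["Coversheet"][0]  # first (and only) path matching that template
stock = files_by_template["Stocks"][0]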
|
Match key word in list of strings to variables
|
I am reading all files from a directory and storing the file paths of those in that directory in a list using
files = [os.path.abspath(x) for x in os.listdir(r"my directory")]
Each file is a unique template, so the resulting list is something like
[C:\Users\....\Template_Coversheet.xlsx
C:\Users\....\Template_Blanks.xlsx,
C:\Users\....\Template_Stocks.xlsx,
C:\Users\....\Template_May.xlsx]
*Note: the files aren't necessarily always in the same order
I want to read each of these files and assign each one to a variable that corresponds to the type of template.
I can do this with a for loop and a long series of if statements
for f in files:
    if "Blanks" in f:
        blank=f
    if "Stocks" in f:
        stock=f
    if "May" in f:
        may=f
    if "Coversheet" in f:
        coversheet=f
But is there an easier or more pythonic way to achieve this?
|
[
"First, if this depends only on the file name, you can use that instead of the whole path. You can use regular expressions if the patterns are complex. But in your case and if it's just Template_TEMPLATENAME.xlsx you can create a dictionary and map TEMPLATENAME to the full name. The code would be something like this:\nimport os\n\nmapping = dict()\n\nfor file in os.listdir():\n mapping[file.replace(\".xlsx\", \"\").split(\"-\")[-1]] = file\n\nFor such a list:\nTemplate_A.xlsx\nTemplate_B.xlsx\nTemplate_Foo.xlsx\n\nYou will have:\n{\n \"A\": \"Template_A.xlsx\",\n \"B\": \"Template_B.xlsx\",\n \"Foo\": \"Template_Foo.xlsx\"\n}\n\n",
"Replace all your individual variables with a dict, and then you can do something like:\nTEMPLATE_TYPES = \"Blanks\", \"Stocks\", \"May\", \"Coversheet\"\n\nfiles_by_template = {t: [] for t in TEMPLATE_TYPES}\nfor f in files:\n for t in TEMPLATE_TYPES:\n if t in f:\n files_by_template[t].append(f)\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074478971_python.txt
|
Q:
Python - Before adding text to a file check it doesn't already exist - How?
I need to add device names and device IP addresses to the bottom of a text file each time a new device goes live so I can connect via name instead of IP.
My problem is how to check the device I'm adding doesn't already exist, if it does exist then the logic should be to ignore, otherwise it should be added to the bottom of the specified file.
I have managed to add the required text to the file but on running the code for a second time the text is added again rather than ignoring.
Any text specified in lines that already exists in the file called Device_Names should not be added.
I've seen a lot of examples that look for specific keywords in the existing text file and return True/False and/or print to screen, but this isn't sustainable long term. Can someone point me in the right direction on how to go about it? I've used if/else statements but am not getting very far.
I currently have:
lines = [
'\n\device.1 A 10.10.10.10'
'\n\n'
'device.2 A 11.11.11.11'
'\n\n'
'device.3 A 12.12.12.12']
with open ("Device_Names", "a+") as f:
for line in lines:
f.write(line)
f.close()
A:
It's really simple - you need to analyse the data inside the file and check it.
I suggest you write the data in CSV format, e.g. a row of 'device_name, device_ip\n' - this will make the data easier to parse.
You can also sort your device list by name or IP to optimize searches, or use pandas etc.
example file content:
device_0, 10.20.30.40
device_1, 11.23.33.9
e.g. code:
new_ip = "10.20.30.40"
new_name = "device_2"
with open(file_name, "r") as f_r:
    for line in f_r:
        name, ip = line.split(",")
        if ip.strip() == new_ip:  # or name == new_name
            break
    else:
        # for-else: runs only if no existing line matched
        with open(file_name, "a") as f_a:
            # f'' looks better but is slower than concatenation
            f_a.write(new_name + "," + new_ip + "\n")
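The same existence check can also be written with the standard csv module (a sketch under the same 'name, ip' file layout, reusing file_name, new_name and new_ip from above):
import csv

with open(file_name, newline="") as f_r:
    known_ips = {row[1].strip() for row in csv.reader(f_r) if row}  # set of known IPs

if new_ip not in known_ips:
    with open(file_name, "a", newline="") as f_a:
        csv.writer(f_a).writerow([new_name, new_ip])  # append only if the IP is new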
A:
You can use a dictionary to keep track of added items, like below.
dic = {}
with open("Device_Names", "a+") as f:
    for line in lines:
        if line not in dic:
            dic[line] = 1
            f.write(line)
A:
It would be easier if your text file was a CSV instead of using a trillion spaces like you've shown.
import re
from pathlib import Path

cwd = Path().resolve()  # Current working directory
file_path = cwd.joinpath("devices.txt")

# Read the existing file
with open(file_path, 'r') as f:
    # make a list, each item represents one line
    text = f.readlines()
    # text = ['device.1    A    10.10.10.10\n',
    #         'device.2    A    11.11.11.11\n',
    #         'device.3    A    12.12.12.12']

# Now we want to break each line into name:address pairs, but all of
# those extra spaces pose a small challenge.

# This does not work:
# text[0].split(" ")
# >>> ['device.1', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', 'A', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '10.10.10.10']

devices = {}
# I don't know regex, so I lifted this expression from https://stackoverflow.com/a/56823175/6030926
for line in text:
    parts = re.split(r'\s{2,}', line)  # ['device.1', 'A', '10.10.10.10']
    name = parts[0]
    addr = parts[2].strip()  # strip() gets rid of trailing \n
    devices[name] = addr  # Add this device to our dictionary

print(f"{devices=}")

# Now you can search for devices
for dev in ["device.3", "device.4"]:
    if dev in devices.keys():
        print(f"{dev} already exists")
    else:
        # Open the file for appending and write your line.
        print(f"{dev} doesn't exist")
Output:
devices={'device.1': '10.10.10.10', 'device.2': '11.11.11.11', 'device.3': '12.12.12.12'}
device.3 already exists
device.4 doesn't exist
|
Python - Before adding text to a file check it doesn't already exist - How?
|
I need to add device names and device IP addresses to the bottom of a text file each time a new device goes live so I can connect via name instead of IP.
My problem is how to check the device I'm adding doesn't already exist, if it does exist then the logic should be to ignore, otherwise it should be added to the bottom of the specified file.
I have managed to add the required text to the file but on running the code for a second time the text is added again rather than ignoring.
Any text specified in lines that already exists in the file called Device_Names should not be added.
I've seen a lot of examples that look for specific keywords in the existing text file and return True/False and/or print to screen, but this isn't sustainable long term. Can someone point me in the right direction on how to go about it? I've used if/else statements but am not getting very far.
I currently have:
lines = [
'\n\device.1 A 10.10.10.10'
'\n\n'
'device.2 A 11.11.11.11'
'\n\n'
'device.3 A 12.12.12.12']
with open ("Device_Names", "a+") as f:
for line in lines:
f.write(line)
f.close()
|
[
"It's really simple - You need to analize the data inside file and check it.\nI suggest you to write data in csv format e.g. row: 'device_name, device_ip\\n' - this will facilitate data analysis.\nYou can also sort your device list by name or ip and optimize searches or use pandas etc.\nexample file content:\ndevice_0, 10.20.30.40\ndevice_1, 11.23.33.9\n\ne.g. code:\nnew_ip = \"10.20.30.40\"\nnew_name = \"device_2\"\n\nwith open(file_name, \"r\") as f_r:\n for line in f_r:\n name, ip = line.split(\",\")\n if ip == new_ip: # or name == new_name\n break\n else:\n with open(file_name, \"a\") as f_a:\n # f'' looks better but is slower then adding\n f_a.write(new_name+\",\"+new_ip+\"\\n\")\n\n",
"You can use dictionary to keep track of added items like below.\ndic={}\nwith open (\"Device_Names\", \"a+\") as f: \n for line in lines:\n if line not in dic:\n dic[line] =1\n f.write(line)\n\n",
"It would be easier if your text file was a CSV instead of using a trillion spaces like you've shown.\nfrom pathlib import Path\n\ncwd = Path().resolve() # Current working directory\nfile_path = cwd.joinpath(\"devices.txt\")\n\n# Read the existing file\nwith open(file_path, 'r') as f:\n # make a list, each item represents one line\n text = f.readlines()\n # text = ['device.1 A 10.10.10.10\\n',\n # 'device.2 A 11.11.11.11\\n',\n # 'device.3 A 12.12.12.12']\n\n# Now we want to break each line into name:address pairs, but all of \n# those extra spaces pose a small challenge. \n\n# This does not work:\n# text[0].split(\" \") \n# >>> ['device.1', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', 'A', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '10.10.10.10']\n\ndevices = {}\n# I don't know regex, so I lifted this expression from https://stackoverflow.com/a/56823175/6030926 \nfor line in text:\n parts = re.split(r'\\s{2,}', line) # ['device.1', 'A', '10.10.10.10']\n name = parts[0]\n addr = parts[2].strip() # strip() gets rid of trailing \\n\n devices[name] = addr # Add this device to our dictionary\n\nprint(f\"{devices=}\") \n\n# Now you can search for devices\nfor dev in [\"device.3\", \"device.4\"]:\n\n if dev in devices.keys():\n print(f\"{dev} already exists\")\n else:\n # Open the file for appending and write your line.\n print(f\"{dev} doesn't exist\")\n\nOutput:\ndevices={'device.1': '10.10.10.10', 'device.2': '11.11.11.11', 'device.3': '12.12.12.12'}\ndevice.3 already exists \ndevice.4 doesn't exist \n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074478537_python.txt
|
Q:
Random.choice to return and fill null values equally
I'm trying to fill all the null values with random choices made from a list using:
new_df = new_df.fillna(new_df.loc[new_df['rest_type'] == 'Cafe' ,'dish_liked'].fillna(random.choice(top5C)))
Here is the list, for example :
top5C = ['Pasta', 'Waffles', 'Mocktails', 'Coffee', 'BrownieChocolate', 'Burgers']
The problem is, it's picking just one random value from the list and filling the entire column with that value, which is not what I'm trying to do. How can I randomly and evenly distribute the values from the list across all the null values? Thanks
Edit :
Here's what my new_df looks like:
Edit :
(Reason : Tried what the people have suggested)
I've tried a lambda function to fill the null values as follows:
new_df.loc[new_df['rest_type'] == 'Quick Bites' ,'dish_liked'].map(lambda x: random.choice(top5) if pd.isnull(x) else x)
But this just returns the output of:
new_df.loc[new_df['rest_type'] == 'Quick Bites' ,'dish_liked'] and does not fill any null values.
A:
Edited: I had entirely neglected the fact that the column containing NaN is of type string. Answer updated to use pd.isnull instead of np.isnan
How about this, where I use the pandas map method together with some numpy functions and your random.choice to infill only where we have NaN:
import numpy as np
import pandas as pd
import random
top5C = ['Pasta', 'Waffles', 'Mocktails', 'Coffee', 'BrownieChocolate', 'Burgers']
df=pd.DataFrame(data={"a":[1,2,3,4,5,6],"b":[11,12,13,14,15,16],"c":["sausage",np.NaN,"pie",np.NaN,"fried egg",np.NaN]})
This gives the following starting dataframe:
>>> df
a b c
0 1 11 sausage
1 2 12 NaN
2 3 13 pie
3 4 14 NaN
4 5 15 fried egg
5 6 16 NaN
Then we infill by:
df["c"]= df["c"].map(lambda x: random.choice(top5C) if pd.isnull(x) else x)
giving, for instance:
>>> df
a b c
0 1 11 sausage
1 2 12 Coffee
2 3 13 pie
3 4 14 Mocktails
4 5 15 fried egg
5 6 16 BrownieChocolate
PS: The edit I have made here is to use the pd.isnull() method instead of np.isnan(), because np.isnan() doesn't act on strings, but the pandas method does. Sorry about that!
Now, this infills with random choices from the list, but it does not guarantee that each item in your list is chosen equally often (which I infer is what you mean by "evenly distribute"); it simply picks random items from the list.
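If a strictly even spread is needed rather than independent random picks, one option (my sketch, reusing df and top5C from above) is to fill the NaN positions from a shuffled, cycled copy of the list, so no item repeats before every item has been used once:
import itertools
import random

mask = df["c"].isnull()
choices = top5C.copy()
random.shuffle(choices)
# take exactly as many values as there are NaNs, cycling through the shuffled list
fill = list(itertools.islice(itertools.cycle(choices), int(mask.sum())))
df.loc[mask, "c"] = fill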
|
Random.choice to return and fill null values equally
|
I'm trying to fill all the null values with random choices made from a list using:
new_df = new_df.fillna(new_df.loc[new_df['rest_type'] == 'Cafe' ,'dish_liked'].fillna(random.choice(top5C)))
Here is the list, for example :
top5C = ['Pasta', 'Waffles', 'Mocktails', 'Coffee', 'BrownieChocolate', 'Burgers']
The problem is, it's picking just one random value from the list and filling the entire column with that value, which is not what I'm trying to do. How can I randomly and evenly distribute the values from the list across all the null values? Thanks
Edit :
Here's what my new_df looks like:
Edit :
(Reason : Tried what the people have suggested)
I've tried a lambda function to fill the null values as follows:
new_df.loc[new_df['rest_type'] == 'Quick Bites' ,'dish_liked'].map(lambda x: random.choice(top5) if pd.isnull(x) else x)
But this just returns the output of:
new_df.loc[new_df['rest_type'] == 'Quick Bites' ,'dish_liked'] and does not fill any null values.
|
[
"Edited: I had entirely neglected the fact that the column containing NaN is of type string. Answer updated to use pd.isnull instead of np.isnan\nHow about this, where I use the pandas map method together with some numpy functions and your random.choice to infill only where we have NaN:\nimport numpy as np\nimport pandas as pd\nimport random\n\ntop5C = ['Pasta', 'Waffles', 'Mocktails', 'Coffee', 'BrownieChocolate', 'Burgers']\n\ndf=pd.DataFrame(data={\"a\":[1,2,3,4,5,6],\"b\":[11,12,13,14,15,16],\"c\":[\"sausage\",np.NaN,\"pie\",np.NaN,\"fried egg\",np.NaN]})\n\nThis gives the following starting dataframe:\n>>> df\n a b c\n0 1 11 sausage\n1 2 12 NaN\n2 3 13 pie\n3 4 14 NaN\n4 5 15 fried egg\n5 6 16 NaN\n\nThen we infill by:\ndf[\"c\"]= df[\"c\"].map(lambda x: random.choice(top5C) if pd.isnull(x) else x)\n\ngiving, for instance:\n>>> df\n a b c\n0 1 11 sausage\n1 2 12 Coffee\n2 3 13 pie\n3 4 14 Mocktails\n4 5 15 fried egg\n5 6 16 BrownieChocolate\n\nPS: The edit I have made here is to use the pd.isnull() method instead of np.isnan(), because np.isnan() doesn't act on strings, but the pandas method does. Sorry about that!\nNow, this infills with random choices from the list, but it does not guarantee that each item in your list will only be chosen once, which I am inferring from your saying \"evenly distribute\", but I am picking random items from the list.\n"
] |
[
0
] |
[] |
[] |
[
"numpy",
"pandas",
"python"
] |
stackoverflow_0074474573_numpy_pandas_python.txt
|