| content | title | question | answers | answers_scores | non_answers | non_answers_scores | tags | name |
|---|---|---|---|---|---|---|---|---|
| stringlengths 85-101k | stringlengths 0-150 | stringlengths 15-48k | list | list | list | list | list | stringlengths 35-137 |
Q:
How to use two variable types in a pydantic.BaseModel with typing.Union?
I need my model to accept either a bytes-type variable or a string-type variable and to raise an exception if any other type is passed.
from typing import Union
from pydantic import BaseModel
class MyModel(BaseModel):
    a: Union[bytes, str]
m1 = MyModel(a='123')
m2 = MyModel(a=b'123')
print(type(m1.a))
print(type(m2.a))
In my case the model interprets both bytes and string as bytes.
Output:
<class 'bytes'>
<class 'bytes'>
Desired output:
<class 'str'>
<class 'bytes'>
The desired output above can be achieved if I re-assign member a:
m1 = MyModel(a='123')
m1.a = '123'
Is it possible to get it in one go?
A:
The problem you are facing is that the str type does some automatic conversions (here in the docs):
strings are accepted as-is, int float and Decimal are coerced using str(v), bytes and bytearray are converted using v.decode(), enums inheriting from str are converted using v.value, and all other types cause an error
bytes are accepted as-is, bytearray is converted using bytes(v), str are converted using v.encode(), and int, float, and Decimal are coerced using str(v).encode()
You can use pydantic's strict types to avoid automatic conversion between compatible types (e.g. str and bytes):
from typing import Union
from pydantic import BaseModel, StrictStr, StrictBytes
class MyModel(BaseModel):
    a: Union[StrictStr, StrictBytes]
m1 = MyModel(a='123')
m2 = MyModel(a=b'123')
print(type(m1.a))
print(type(m2.a))
Output will be as expected:
<class 'str'>
<class 'bytes'>
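As an illustrative extra check (hypothetical snippet), the strict version also raises on any other type, which is what the question asked for:
from pydantic import ValidationError

try:
    MyModel(a=123)  # neither str nor bytes
except ValidationError as exc:
    print(exc)  # both StrictStr and StrictBytes reject the int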
|
How to use two variable types in a pydantic.BaseModel with typing.Union?
|
I need my model to accept either a bytes-type variable or a string-type variable and to raise an exception if any other type is passed.
from typing import Union
from pydantic import BaseModel
class MyModel(BaseModel):
    a: Union[bytes, str]
m1 = MyModel(a='123')
m2 = MyModel(a=b'123')
print(type(m1.a))
print(type(m2.a))
In my case the model interprets both bytes and string as bytes.
Output:
<class 'bytes'>
<class 'bytes'>
Desired output:
<class 'str'>
<class 'bytes'>
The desired output above can be achieved if I re-assign member a:
m1 = MyModel(a='123')
m1.a = '123'
Is it possible to get it in one go?
|
[
"The problem you are facing is that the str type does some automatic conversions (here in the docs):\n\nstrings are accepted as-is, int float and Decimal are coerced using str(v), bytes and bytearray are converted using v.decode(), enums inheriting from str are converted using v.value, and all other types cause an error\n\n\nbytes are accepted as-is, bytearray is converted using bytes(v), str are converted using v.encode(), and int, float, and Decimal are coerced using str(v).encode()\n\nYou can use StrictTypes to avoid automatic conversion between compatible types (e.g.: str and bytes):\nfrom typing import Union\n\nfrom pydantic import BaseModel, StrictStr, StrictBytes\n\n\nclass MyModel(BaseModel):\n a: Union[StrictStr, StrictBytes]\n\n\nm1 = MyModel(a='123')\nm2 = MyModel(a=b'123')\n\nprint(type(m1.a))\nprint(type(m2.a))\n\nOutput will be as expected:\n<class 'str'>\n<class 'bytes'>\n\n"
] |
[
2
] |
[] |
[] |
[
"pydantic",
"python",
"python_typing"
] |
stackoverflow_0074460495_pydantic_python_python_typing.txt
|
Q:
django migrations - workflow with multiple dev branches
I'm curious how other django developers manage multiple code branches (in git for instance) with migrations.
My problem is as follows:
- we have multiple feature branches in git, some of them with django migrations (some of them altering fields, or removing them altogether)
- when I switch branches (with git checkout some_other_branch) the database does not always reflect the new code, so I run into "random" errors, where a db table column does not exist anymore, etc...
Right now, I simply drop the db and recreate it, but it means I have to recreate a bunch of dummy data to restart work. I can use fixtures, but it requires keeping track of what data goes where, it's a bit of a hassle.
Is there a good/clean way of dealing with this use-case? I'm thinking a post-checkout git hook script could run the necessary migrations, but I don't even know if migration rollbacks are at all possible.
A:
Migration rollbacks are possible and usually handled automatically by Django.
Considering the following model:
class MyModel(models.Model):
    pass
If you run python manage.py makemigrations myapp, it will generate the initial migration script.
You can then run python manage.py migrate myapp 0001 to apply this initial migration.
If after that you add a field to your model:
class MyModel(models.Model):
    my_field = models.CharField()
Then regenerate a new migration, and apply it, you can still go back to the initial state. Just run
python manage.py migrate myapp 0001 and the ORM will go backward, removing the new field.
It's more tricky when you deal with data migrations, because you have to write the forward and backward code.
Considering an empty migration created via python manage.py makemigrations myapp --empty,
you'll end up with something like:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import models, migrations


def forward(apps, schema_editor):
    # load some data
    MyModel = apps.get_model('myapp', 'MyModel')
    while condition:
        instance = MyModel()
        instance.save()


def backward(apps, schema_editor):
    # delete previously loaded data
    MyModel = apps.get_model('myapp', 'MyModel')
    while condition:
        instance = MyModel.objects.get(myargs)
        instance.delete()


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0003_auto_20150918_1153'),
    ]

    operations = [
        migrations.RunPython(forward, backward),
    ]
For pure data-loading migrations, you usually don't need the backward migration.
But when you alter the schema and update existing rows
(like converting all values in a column to slug), you'll generally have to write the backward step.
In our team, we try to avoid working on the same models at the same time to avoid collision.
If it is not possible, and two migrations with the same number (e.g. 0002) are created,
you can still rename one of them to change the order in which they will be applied (also remember to update
the dependencies attribute on the migration class to match your new order).
If you end up working on the same model fields at the same time in different features,
you'll still be in trouble, but it may mean these features are related and should be handled
together in a single branch.
For the git-hooks part, it's probably possible to write something. Assuming you are on branch mybranch
and want to check out another feature branch myfeature:
Just before switching, you dump the list of currently applied migrations into
a temporary file mybranch_database_state.txt
Then, you apply myfeature branch migrations, if any
Then, when checking out mybranch again, you reapply your previous database state
by reading the dump file.
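A minimal sketch of the dump step (assuming a SQLite database; django_migrations is the table where Django records applied migrations):
import sqlite3

def dump_applied_migrations(db_path, out_file):
    con = sqlite3.connect(db_path)
    rows = con.execute('SELECT app, name FROM django_migrations ORDER BY app, name')
    with open(out_file, 'w') as f:
        for app, name in rows:
            f.write(f'{app} {name}\n')
    con.close()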
However, it seems a bit hackish to me, and it would probably be really difficult to handle properly all scenarios:
rebasing, merging, cherry-picking, etc.
Handling migration conflicts when they occur seems easier to me.
A:
I don't have a good solution to this, but I feel the pain.
A post-checkout hook will be too late. If you are on branch A and you check out branch B, and B has fewer migrations than A, the rollback information is only in A and needs to be run before checkout.
I hit this problem when jumping between several commits trying to locate the origin of a bug. Our database (even in development trim) is huge, so dropping and recreating isn't practical.
I'm imagining a wrapper for git-checkout that:
Notes the newest migration for each of your INSTALLED_APPS
Looks in the requested branch and notes the newest migrations there
For each app where the migrations in #1 are farther ahead than in #2, migrate back to the highest migration in #2
Check out the new branch
For each app where migrations in #2 were ahead of #1, migrate forward
A simple matter of programming!
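A rough sketch of that wrapper (illustrative only: it assumes migration file names sort lexically by their numeric prefix, and that git and manage.py are on PATH):
import subprocess

def newest_migrations(ref):
    """Map each app to its newest migration name on the given git ref."""
    paths = subprocess.check_output(
        ['git', 'ls-tree', '-r', '--name-only', ref], text=True).splitlines()
    newest = {}
    for path in paths:
        parts = path.split('/')
        if 'migrations' in parts and path.endswith('.py') and '__init__' not in path:
            app = parts[parts.index('migrations') - 1]
            name = parts[-1][:-3]
            newest[app] = max(newest.get(app, ''), name)
    return newest

def checkout(target_branch):
    here = newest_migrations('HEAD')
    there = newest_migrations(target_branch)
    # steps 1-3: roll back any app that is ahead of the target branch
    for app, name in here.items():
        target = there.get(app)
        if target is None:
            subprocess.check_call(['python', 'manage.py', 'migrate', app, 'zero'])
        elif target < name:
            subprocess.check_call(['python', 'manage.py', 'migrate', app, target])
    # steps 4-5: switch branches, then migrate forward
    subprocess.check_call(['git', 'checkout', target_branch])
    subprocess.check_call(['python', 'manage.py', 'migrate'])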
A:
For simple changes I rely on migration rollback, as discussed by Agate.
However, if I know a feature branch is going to involve highly invasive database changes, or if it will involve a lot of data migration, I like to create a clone of the local (or remote dev) database as soon as I start the new branch. This may not always be convenient, but especially for local development using sqlite it is just a matter of copying a file (which is not under source control).
The first commit on the new branch then updates my Django settings (local/dev) to use the cloned database. This way, when I switch branches, the correct database is selected automatically. No need to worry about rolling back schema changes, missing data, etc. No complicated stuff.
After the feature branch has been fully merged, the cloned database can be removed.
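A variation on this idea derives the database file name from the current branch in the local settings module, so the per-branch commit is not even needed — a sketch (BASE_DIR comes from the standard Django settings file; names are illustrative):
import subprocess

branch = subprocess.check_output(
    ['git', 'rev-parse', '--abbrev-ref', 'HEAD'], text=True).strip()

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / f'db_{branch}.sqlite3',
    }
}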
A:
So far I have found two GitHub projects (django-south-compass and django_nomad) that try to solve the issue of migrating between dev branches, and there are a couple of answers on Stack Overflow.
Citing an article on Medium, most of the solutions boil down to one of the following concepts:
Dropping all the tables and reapplying the migrations in the target branch from scratch. When the tables are created from scratch, all the data will be lost and needs to be recreated as well. This can be handled with fixtures and data migrations but managing them, in turn, will become a nightmare, not to mention that it will take some time (...)
Have a separate database for each branch and change the settings file with the target branch’s settings every time the branch is switched using tools like sed. This can be done with a post_checkout hook. Maintaining one large database for each branch would be very storage-intensive. Also, checking out individual commit IDs might potentially produce the same errors.
Finding the differences in migrations between the source and target branch, and applying the differences. We can do so with a post_checkout script, but there is a small issue. This post explains the issue in detail. To summarize: post_checkout is run after all the files in the target branch are checked out, which includes migration files. If the target branch doesn't contain all the migrations in the source branch, then when we run python manage.py migrate app1, Django won't be able to find the missing migrations which are needed to apply reverse migrations. We have to temporarily check out the migration files in the source branch, run python manage.py migrate, and then check out the migration files in the target branch. django-south-compass does something very similar but is available only for up to Python 2.6.
Using a management command (which uses the Python git module), find all the migration operation differences between the source branch and the merge-base of the source and target branches, and notify the user of these changes. If these changes don't interfere with the reason for the branch change, the user can go ahead and change the branch. Otherwise, using another management command, un-apply all migrations up to the merge base, switch branches, and apply the migrations in the target branch. There will be a small data loss, which is manageable if the two branches haven't diverged a lot. django_nomad does some of this work.
Keep track of applied and unapplied migrations in files and use this data to populate the tables when switching branches.
|
django migrations - workflow with multiple dev branches
|
I'm curious how other django developers manage multiple code branches (in git for instance) with migrations.
My problem is as follows:
- we have multiple feature branches in git, some of them with django migrations (some of them altering fields, or removing them altogether)
- when I switch branches (with git checkout some_other_branch) the database does not always reflect the new code, so I run into "random" errors, where a db table column does not exist anymore, etc...
Right now, I simply drop the db and recreate it, but it means I have to recreate a bunch of dummy data to restart work. I can use fixtures, but it requires keeping track of what data goes where, it's a bit of a hassle.
Is there a good/clean way of dealing with this use-case? I'm thinking a post-checkout git hook script could run the necessary migrations, but I don't even know if migration rollbacks are at all possible.
|
[
"Migrations rollback are possible and usually handled automatically by django.\nConsidering the following model:\nclass MyModel(models.Model):\n pass\n \n\nIf you run python manage.py makemigrations myapp, it will generate the initial migration script.\nYou can then run python manage.py migrate myapp 0001 to apply this initial migration.\nIf after that you add a field to your model:\nclass MyModel(models.Model): \n my_field = models.CharField()\n \n\nThen regenerate a new migration, and apply it, you can still go back to the initial state. Just run\npython manage.py migrate myapp 0001 and the ORM will go backward, removing the new field.\nIt's more tricky when you deal with data migrations, because you have to write the forward and backward code.\nConsidering an empty migration created via python manage.py makemigrations myapp --empty,\nyou'll end up with something like:\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\n\ndef forward(apps, schema_editor):\n # load some data\n MyModel = apps.get_model('myapp', 'MyModel')\n \n while condition:\n instance = MyModel()\n instance.save()\n \ndef backward(apps, schema_editor):\n # delete previously loaded data\n MyModel = apps.get_model('myapp', 'MyModel')\n \n while condition:\n instance = MyModel.objects.get(myargs)\n instance.delete()\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('myapp', '0003_auto_20150918_1153'),\n ]\n\n operations = [ \n migrations.RunPython(forward, backward),\n ]\n \n\nFor pure data-loading migrations, you usually don't need the backward migration.\nBut when you alter the schema and update existing rows\n(like converting all values in a column to slug), you'll generally have to write the backward step.\nIn our team, we try to avoid working on the same models at the same time to avoid collision.\nIf it is not possible, and two migration with the same number (e.g 0002) are created,\nyou can still rename one of them to change the order in which they will be applied (also remember to update\nthe dependencies attribute on the migration class to your new order).\nIf you end up working on the same model fields at the same time in different features,\nyou'll still be in trouble, but it may mean these features are related and should be handled\ntogether in a single branch.\nFor the git-hooks part, it's probably possible to write something, Assuming your are on branch mybranch\nand want to check out another feature branch myfeature:\n\nJust before switching, you dump the list of currently applied migrations into\na temporary file mybranch_database_state.txt\nThen, you apply myfeature branch migrations, if any\nThen, when checking back mybranch, you reapply your previous database state\nby looking to the dump file.\n\nHowever, it seems a bit hackish to me, and it would probably be really difficult to handle properly all scenarios:\nrebasing, merging, cherry-picking, etc.\nHandling the migrations conflicts when they occurs seems easier to me.\n",
"I don't have a good solution to this, but I feel the pain. \nA post-checkout hook will be too late. If you are on branch A and you check out branch B, and B has fewer migrations than A, the rollback information is only in A and needs to be run before checkout.\nI hit this problem when jumping between several commits trying to locate the origin of a bug. Our database (even in development trim) is huge, so dropping and recreating isn't practical. \nI'm imagining a wrapper for git-checkout that:\n\nNotes the newest migration for each of your INSTALLED_APPS\nLooks in the requested branch and notes the newest migrations there\nFor each app where the migrations in #1 are farther ahead than in #2, migrate back to the highest migration in #2\nCheck out the new branch\nFor each app where migrations in #2 were ahead of #1, migrate forward\n\nA simple matter of programming!\n",
"For simple changes I rely on migration rollback, as discussed by Agate.\nHowever, if I know a feature branch is going to involve highly invasive database changes, or if it will involve a lot of data migration, I like to create a clone of the local (or remote dev) database as soon as I start the new branch. This may not always be convenient, but especially for local development using sqlite it is just a matter op copying a file (which is not under source control).\nThe first commit on the new branch then updates my Django settings (local/dev) to use the cloned database. This way, when I switch branches, the correct database is selected automatically. No need to worry about rolling back schema changes, missing data, etc. No complicated stuff.\nAfter the feature branch has been fully merged, the cloned database can be removed.\n",
"So far I have found two Github projects (django-south-compass and django_nomad) that try to solve the issue of migrating between dev branches and there is a couple of answers on Stack Overflow.\nCiting an article on Medium, most of the solutions boil down to one of the following concepts:\n\nDropping all the tables and reapply migrations in the target branch from scratch. When the tables are created from scratch, all the data will be lost and needs to be recreated as well. This can be handled with fixtures and data migrations but managing them, in turn, will become a nightmare, not to mention that it will take some time (...)\nHave a separate database for each branch and change the settings file with the target branch’s settings every time the branch is switched using tools like sed. This can be done with a post_checkout hook. Maintaining one large database for each branch would be very storage-intensive. Also, checking out individual commit IDs might potentially produce the same errors.\nFinding the differences in migrations between the source and target branch, and apply the differences. We can do so with post_checkout script but there is a small issue. This post explains the issue in detail. To summarize the issue, post_checkout is run after all the files in the target branch are checked out, which includes migration files. If the target branch doesn’t contain all the migrations in the source branch when we run python manage.py migrate app1 Django won’t be able to find the missing migrations which are needed to apply reverse migrations. We have to temporarily checkout migration files in the source branch, run python manage.py migrate and checkout migration files in the target branch. django-south-compass does something very similar but is available only for up to python 2.6.\nUsing a management command (which uses python git module), find all the migration operations differences between the source branch and the merge-base of the source branch and target branch and notify the user of these changes. If these changes don’t interfere with the reason for branch change, the user can go ahead and change the branch. Else, using another management command, un-apply all migration till merge base, switch branch, and apply the migrations in the target branch. There will be a small data loss and if the two branches haven’t diverged a lot, is manageable. django_nomad does some of this work.\nKeep a track of applied and unapplied migrations in files and use this data to populate the tables when switching branches.\n\n"
] |
[
25,
10,
1,
1
] |
[] |
[] |
[
"django",
"git",
"migration",
"python"
] |
stackoverflow_0032682293_django_git_migration_python.txt
|
Q:
How to write a query to show followed posts on the home page
I want to write a query to show all posts from followed users on the main page. Could you help me with this?
Here's my file models.py:
class Relation(models.Model):
    from_user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='follower')
    to_user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='following')
    created = models.DateTimeField(auto_now_add=True)


class Profile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE, related_name='profile')
    avatar = models.FileField(default='default.jpg', verbose_name='avatar')
    age = models.PositiveSmallIntegerField(default=0)
    location = models.CharField(max_length=30, blank=True)
    work_at = models.TextField(null=True, blank=True)
    bio = models.TextField(null=True, blank=True)
Thanks!
A:
Check out this code:
followed_people = Relation.objects.filter(from_user=request.user).values('to_user')
posts = Post.objects.filter(
    user__in=followed_people
) | Post.objects.filter(user=request.user)
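As a fuller sketch, the query could live in a view like the one below (the Post model with a user foreign key is assumed here, since it is not shown in the question):
from django.db.models import Q
from django.shortcuts import render

def home(request):
    followed = Relation.objects.filter(from_user=request.user).values('to_user')
    posts = Post.objects.filter(
        Q(user__in=followed) | Q(user=request.user)
    ).order_by('-id')
    return render(request, 'home.html', {'posts': posts})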
|
How to write a query to show followed posts on the home page
|
I want to write a query to show all posts from followed users on the main page. Could you help me with this?
Here's my file models.py:
class Relation(models.Model):
    from_user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='follower')
    to_user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='following')
    created = models.DateTimeField(auto_now_add=True)


class Profile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE, related_name='profile')
    avatar = models.FileField(default='default.jpg', verbose_name='avatar')
    age = models.PositiveSmallIntegerField(default=0)
    location = models.CharField(max_length=30, blank=True)
    work_at = models.TextField(null=True, blank=True)
    bio = models.TextField(null=True, blank=True)
Thanks!
|
[
"checkout this code:\nfollowed_people = Relation.objects.filter(from_user=request.user).values('to_user')\n posts = Post.objects.filter(\n user__in=followed_people\n ) | Post.objects.filter(user=request.user)\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_models",
"django_views",
"python"
] |
stackoverflow_0074460496_django_django_models_django_views_python.txt
|
Q:
Get data from multiple CSV files and print the highest and lowest day weather with humidity from any year, including the month name and day, in Python
Hi everyone. I have multiple CSV files and I am creating a weatherman app in Python. I am getting data from the CSV files and here is the code:
import os
import csv

lst_temp = []
lst_hum = []
dates = []


class Weather:
    def main(self):
        path = r'C:\Users\someone\PycharmProjects\untitled\weatherfiles\\'
        os.system('cls')
        for files in os.listdir(path):
            if files.endswith('.txt'):
                with open(path + files, 'r') as weather:
                    input_file = csv.reader(weather)
                    for row in input_file:
                        date = row[0].split('-')
                        if date[0] == '2013':
                            lst_temp.append(row[1])
                            lst_hum.append(row[7])
        lst_temp_int = [int(i) for i in lst_temp if i]
        lst_hum_int = [int(i) for i in lst_hum if i]
        sorted_lst = sorted(lst_temp_int)
        sorted_hum_lst = sorted(lst_hum_int)
        print(f"Highest: {sorted_lst[-1]}C")
        print(f"Lowest: {sorted_lst[0]}C")
        print(f"Humid: {sorted_hum_lst[-1]}%")
It gives me output in this format:
Highest: 70C
Lowest: -1C
Humid: 100%
I need the result in this format
Highest: 45C on June 23
Lowest: 01C on December 22
Humid: 95% on August 14
Can anyone help me? I would be very grateful. Thank you.
A:
You might want to use pandas to parse data files.
Assuming the column names are the same throughout your .txt files:
import pandas as pd
df = pd.read_csv(filepath, sep=',', parse_dates=['PKT'])
After that, you can retrieve the index of the max temperature using .idxmax() like so:
max_i = df['Max TemperatureC'].idxmax()
max_temp_row = df.iloc[max_i]
Or the minimum temperature using .idxmin()
min_i = df['Max TemperatureC'].idxmin()
min_temp_row = df.iloc[min_i]
I highly recommend you read the pandas documentation for more info.
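Putting it together, here is a hedged end-to-end sketch; the column names 'PKT', 'Max TemperatureC', 'Min TemperatureC' and 'Max Humidity' are assumed from the question's files and may differ in yours:
import glob
import pandas as pd

frames = [pd.read_csv(f, parse_dates=['PKT']) for f in glob.glob('weatherfiles/*.txt')]
df = pd.concat(frames, ignore_index=True)
df = df[df['PKT'].dt.year == 2013]  # keep one year, as in the question

hot = df.loc[df['Max TemperatureC'].idxmax()]
cold = df.loc[df['Min TemperatureC'].idxmin()]
humid = df.loc[df['Max Humidity'].idxmax()]

print(f"Highest: {hot['Max TemperatureC']:.0f}C on {hot['PKT']:%B %d}")
print(f"Lowest: {cold['Min TemperatureC']:.0f}C on {cold['PKT']:%B %d}")
print(f"Humid: {humid['Max Humidity']:.0f}% on {humid['PKT']:%B %d}")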
|
Get data from multiple CSV files and print the highest and lowest day weather with humidity from any year, including the month name and day, in Python
|
Hi everyone. I have multiple CSV files and I am creating a weatherman app in Python. I am getting data from the CSV files and here is the code:
import os
import csv

lst_temp = []
lst_hum = []
dates = []


class Weather:
    def main(self):
        path = r'C:\Users\someone\PycharmProjects\untitled\weatherfiles\\'
        os.system('cls')
        for files in os.listdir(path):
            if files.endswith('.txt'):
                with open(path + files, 'r') as weather:
                    input_file = csv.reader(weather)
                    for row in input_file:
                        date = row[0].split('-')
                        if date[0] == '2013':
                            lst_temp.append(row[1])
                            lst_hum.append(row[7])
        lst_temp_int = [int(i) for i in lst_temp if i]
        lst_hum_int = [int(i) for i in lst_hum if i]
        sorted_lst = sorted(lst_temp_int)
        sorted_hum_lst = sorted(lst_hum_int)
        print(f"Highest: {sorted_lst[-1]}C")
        print(f"Lowest: {sorted_lst[0]}C")
        print(f"Humid: {sorted_hum_lst[-1]}%")
It gives me output in this format:
Highest: 70C
Lowest: -1C
Humid: 100%
I need the result in this format
Highest: 45C on June 23
Lowest: 01C on December 22
Humid: 95% on August 14
Can anyone help me? I would be very grateful. Thank you.
|
[
"You might want to use pandas to parse data files.\nAssuming the column names are the same throughout your .txt files:\nimport pandas as pd\n\ndata = pd.read_csv(filepath, sep=',', parse_dates=['PKT'])\n\nAfter that, you can retrieve the index of the max temperature using .idxmax() like so:\nmax_i = df['Max TemperatureC'].idxmax()\nmax_temp_row = df.iloc[max_i]\n\nOr the minimum temperature using .idxmin()\nmin_i = df['Max TemperatureC'].idxmin()\nmin_temp_row = df.iloc[max_i]\n\nI highly recommend you read the pandas documentation for more info.\n"
] |
[
0
] |
[] |
[] |
[
"csv",
"extract",
"python",
"python_3.x"
] |
stackoverflow_0074460315_csv_extract_python_python_3.x.txt
|
Q:
SQLAlchemy add_all() inserting not working
I have a Flask API endpoint that doesn't seem to be saving all the information from the for loop. The endpoint uploads multiple images. All is working fine, i.e. the images are being uploaded; however, when it comes to inserting the names into the database, no record (file name/URL) is being inserted.
Endpoint:
def upload_images(args):
    """Upload room images """
    image_id = None
    for file in request.files.getlist('image_name'):
        if file and allowed_file(file.filename):
            image_id = str(uuid.uuid4())
            filename = image_id + '.png'
            file.save(os.path.join(current_app.config['UPLOAD_FOLDER'], filename))

            # Resize Images
            _image_resize(current_app.config['UPLOAD_FOLDER'], image_id, 600, 'lg')
            _image_resize(current_app.config['UPLOAD_FOLDER'], image_id, 150, 'sm')

            get_image = RoomImages(**args)
            get_image.image_name = url_for('uploaded_file', filename=filename, _external=True)
            db.session.add_all(get_image)
            db.session.flush()
            db.session.commit()
    return get_image
Model:
class RoomImages(db.Model):
    __tablename__ = 'ep_roomimages'

    id = sqla.Column(sqla.Integer, primary_key=True)
    image_name = sqla.Column(sqla.String(128), unique=True)
    room_id = sqla.Column(sqla.Integer, sqla.ForeignKey(Room.id), index=True)
    room_img = sqla_orm.relationship('Room', back_populates='room_images')

    def __repr__(self):
        return '<Room Images {}>'.format(self.image_name)
Error I am getting is: TypeError: 'RoomImages' object is not iterable
A:
For those looking at this for a solution, here is my final edit. The root cause was that db.session.add_all() expects an iterable of model instances, so passing a single RoomImages object raises the TypeError; for a single instance, use db.session.add(). The code also needed some re-indenting to work well.
files = request.files.getlist('image_name')
for file in files:
    if file and allowed_file(file.filename):
        image_id = str(uuid.uuid4())
        filename = image_id + '.png'
        file.save(os.path.join(current_app.config['UPLOAD_FOLDER'], filename))

        # Resize Images
        _image_resize(current_app.config['UPLOAD_FOLDER'], image_id, 600, 'lg')
        _image_resize(current_app.config['UPLOAD_FOLDER'], image_id, 150, 'sm')

        get_image = RoomImages(**args)
        get_image.image_name = url_for('uploaded_file', filename=filename, _external=True)
        db.session.add(get_image)
        db.session.commit()
return get_image
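If you do want to keep add_all(), remember that it expects an iterable of instances; here is a sketch that collects them first and inserts them in one go:
def upload_images(args):
    """Upload room images, inserting all rows with a single add_all()."""
    images = []
    for file in request.files.getlist('image_name'):
        if file and allowed_file(file.filename):
            image_id = str(uuid.uuid4())
            filename = image_id + '.png'
            file.save(os.path.join(current_app.config['UPLOAD_FOLDER'], filename))
            _image_resize(current_app.config['UPLOAD_FOLDER'], image_id, 600, 'lg')
            _image_resize(current_app.config['UPLOAD_FOLDER'], image_id, 150, 'sm')

            image = RoomImages(**args)
            image.image_name = url_for('uploaded_file', filename=filename, _external=True)
            images.append(image)

    db.session.add_all(images)  # add_all takes a list, not a single object
    db.session.commit()
    return images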
|
SQLAlchemy add_all() inserting not working
|
I have a Flask API endpoint that doesn't seem to be saving all the information from the for loop. The endpoint uploads multiple images. All is working fine, i.e. the images are being uploaded; however, when it comes to inserting the names into the database, no record (file name/URL) is being inserted.
Endpoint:
def upload_images(args):
    """Upload room images """
    image_id = None
    for file in request.files.getlist('image_name'):
        if file and allowed_file(file.filename):
            image_id = str(uuid.uuid4())
            filename = image_id + '.png'
            file.save(os.path.join(current_app.config['UPLOAD_FOLDER'], filename))

            # Resize Images
            _image_resize(current_app.config['UPLOAD_FOLDER'], image_id, 600, 'lg')
            _image_resize(current_app.config['UPLOAD_FOLDER'], image_id, 150, 'sm')

            get_image = RoomImages(**args)
            get_image.image_name = url_for('uploaded_file', filename=filename, _external=True)
            db.session.add_all(get_image)
            db.session.flush()
            db.session.commit()
    return get_image
Model:
class RoomImages(db.Model):
    __tablename__ = 'ep_roomimages'

    id = sqla.Column(sqla.Integer, primary_key=True)
    image_name = sqla.Column(sqla.String(128), unique=True)
    room_id = sqla.Column(sqla.Integer, sqla.ForeignKey(Room.id), index=True)
    room_img = sqla_orm.relationship('Room', back_populates='room_images')

    def __repr__(self):
        return '<Room Images {}>'.format(self.image_name)
Error I am getting is: TypeError: 'RoomImages' object is not iterable
|
[
"For those looking at this for a solution, here is my final edit. The code needed some indenting of the functions for it to work well.\nfiles = request.files.getlist('image_name')\n for file in files:\n if file and allowed_file(file.filename):\n image_id = str(uuid.uuid4())\n filename = image_id + '.png'\n file.save(os.path.join(current_app.config['UPLOAD_FOLDER'], filename))\n\n # Resize Images\n _image_resize(current_app.config['UPLOAD_FOLDER'], image_id, 600, 'lg')\n _image_resize(current_app.config['UPLOAD_FOLDER'], image_id, 150, 'sm')\n\n get_image = RoomImages(**args)\n get_image.image_name = url_for('uploaded_file', filename=filename, _external=True)\n db.session.add(get_image)\n db.session.commit()\n\n return get_image\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0074155101_python_sqlalchemy.txt
|
Q:
Python, ThreadPoolExecutor, pool execution doesn't terminate
I have a simple piece of code modelling a more complicated problem I have to solve. Here I have 3 functions: a worker, a task submitter (seeks tasks and puts them into the queue once it gets new ones), and a function creating a pool and adding new tasks to this pool. But the code never finishes the run after the queue gets empty and all the tasks in the list are finished. I am at a loss as to why it doesn't terminate the while loop with the condition... I have tried different ways to code the thing; nothing works.
from concurrent.futures import ThreadPoolExecutor as Tpe
import time
import random
import queue
import threading
def task_submit(q):
    for i in range(7):
        threading.currentThread().setName('task_submit')
        new_task = random.randint(10, 20)
        q.put_nowait(new_task)
        print(f' {i} new task with argument {new_task} has been added to queue')
        time.sleep(5)


def worker(t):
    threading.currentThread().setName(f'worker {t}')
    print(f'{threading.currentThread().getName()} started')
    time.sleep(t)
    print(f'{threading.currentThread().getName()} FINISHED!')


def execution():
    executor = Tpe(max_workers=4)
    q = queue.Queue(maxsize=100)
    q_thread = executor.submit(task_submit, q)
    tasks = [executor.submit(worker, q.get())]
    execution_finished = False
    while not execution_finished:  # all([task.done() for task in tasks])
        if not all([task.done() for task in tasks]):
            print(' still in progress .....................')
            tasks.append(executor.submit(worker, q.get()))
        else:
            print(' all done!')
            executor.shutdown()
            execution_finished = True


execution()
A:
It doesn't terminate because you are trying to remove an item from an empty queue. The problem is here:
while not execution_finished:
    if not all([task.done() for task in tasks]):
        print(' still in progress .....................')
        tasks.append(executor.submit(worker, q.get()))
The last line here submits a new work item to the executor. Suppose that happens to be the last item in the queue. At that moment, the executor is not finished and will not be finished for a few seconds. Your main thread goes back to the while not execution_finished line, and the if statement evaluates true because some of the tasks are still running. So you try to submit one more item but you can't, because the queue is now empty. The call to q.get blocks the main loop until the queue contains an item, which never happens. The other threads finish but the program doesn't exit because the main thread is blocked.
Perhaps you should check for an empty queue, but I'm not sure that's the right idea because I probably don't understand your requirements. In any case, that's why your script doesn't exit.
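One possible restructuring (a sketch reusing the worker and task_submit shapes above): have the submitter push a sentinel when it is done, so the main loop never blocks on an empty queue:
SENTINEL = object()

def task_submit(q):
    for i in range(7):
        q.put_nowait(random.randint(10, 20))
        time.sleep(5)
    q.put_nowait(SENTINEL)  # tell the consumer that no more tasks are coming

def execution():
    executor = Tpe(max_workers=4)
    q = queue.Queue(maxsize=100)
    executor.submit(task_submit, q)
    while True:
        t = q.get()  # blocks only until the next task or the sentinel arrives
        if t is SENTINEL:
            break
        executor.submit(worker, t)
    executor.shutdown(wait=True)  # waits for all running workers to finish
    print(' all done!')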
|
Python, ThreadPoolExecutor, pool execution doesn't terminate
|
I have a simple piece of code modelling a more complicated problem I have to solve. Here I have 3 functions: a worker, a task submitter (seeks tasks and puts them into the queue once it gets new ones), and a function creating a pool and adding new tasks to this pool. But the code never finishes the run after the queue gets empty and all the tasks in the list are finished. I am at a loss as to why it doesn't terminate the while loop with the condition... I have tried different ways to code the thing; nothing works.
from concurrent.futures import ThreadPoolExecutor as Tpe
import time
import random
import queue
import threading
def task_submit(q):
    for i in range(7):
        threading.currentThread().setName('task_submit')
        new_task = random.randint(10, 20)
        q.put_nowait(new_task)
        print(f' {i} new task with argument {new_task} has been added to queue')
        time.sleep(5)


def worker(t):
    threading.currentThread().setName(f'worker {t}')
    print(f'{threading.currentThread().getName()} started')
    time.sleep(t)
    print(f'{threading.currentThread().getName()} FINISHED!')


def execution():
    executor = Tpe(max_workers=4)
    q = queue.Queue(maxsize=100)
    q_thread = executor.submit(task_submit, q)
    tasks = [executor.submit(worker, q.get())]
    execution_finished = False
    while not execution_finished:  # all([task.done() for task in tasks])
        if not all([task.done() for task in tasks]):
            print(' still in progress .....................')
            tasks.append(executor.submit(worker, q.get()))
        else:
            print(' all done!')
            executor.shutdown()
            execution_finished = True


execution()
|
[
"It doesn't terminate because you are trying to remove an item from an empty queue. The problem is here:\nwhile not execution_finished: \n if not all([task.done() for task in tasks]):\n print(' still in progress .....................')\n tasks.append(executor.submit(worker, q.get()))\n\nThe last line here submits a new work item to the executor. Suppose that happens to be the last item in the queue. At that moment, the executor is not finished and will not be finished for a few seconds. Your main thread goes back to the while not execution_finished line, and the if statement evaluates true because some of the tasks are still running. So you try to submit one more item but you can't, because the queue is now empty. The call to q.get blocks the main loop until the queue contains an item, which never happens. The other threads finish but the program doesn't exit because the main thread is blocked.\nPerhaps you should check for an empty queue, but I'm not sure that's the right idea because I probably don't understand your requirements. In any case, that's why your script doesn't exit.\n"
] |
[
0
] |
[] |
[] |
[
"concurrency",
"multithreading",
"python",
"python_multithreading",
"threadpoolexecutor"
] |
stackoverflow_0074456840_concurrency_multithreading_python_python_multithreading_threadpoolexecutor.txt
|
Q:
Ursina engine not rendering mesh properly?
I am creating a small game using Ursina and I have code which generates a terrain mesh using Perlin noise. The mesh itself renders, but I can't put textures on it properly and shaders do not work on it; it just renders as a solid colour.
screenshot of the game - terrain is all one colour and not shaded
here's my code
from ursina import *
from ursina.prefabs.first_person_controller import FirstPersonController
from ursina.shaders import lit_with_shadows_shader
from perlin_noise import PerlinNoise
import time

game = Ursina()
window.title = "new_game"
window.borderless = False
window.fps_counter.enabled = True
window.exit_button.visible = False
window.fullscreen = False

groundTexture = load_texture("assets/placeholder.png")
crosshairTexture = load_texture("assets/crosshair.png")
crosshair = Entity(model = "cube", texture = crosshairTexture, parent = camera.ui, scale = 0.2)
crosshair.always_on_top = True
title = Text("new game", origin = (6.825, -19))
coordinates = Text("", origin = (3.35, -8))

mode = 1
size = 20
level = size / 10
seed = random.randint(1, 1000000)
noise = PerlinNoise(octaves = 3, seed = seed)

vertices = [0] * ((size + 1) * (size + 1))
i = 0
for z in range(size + 1):
    for x in range(size + 1):
        y = level * noise([x / size, z / size])
        vertices[i] = x, y, z
        i = i + 1

triangles = [0] * (size * size * 6)
vert = 0
tris = 0
for z in range(size):
    for x in range(size):
        triangles[tris + 0] = vert + 0
        triangles[tris + 1] = vert + size + 1
        triangles[tris + 2] = vert + 1
        triangles[tris + 3] = vert + 1
        triangles[tris + 4] = vert + size + 1
        triangles[tris + 5] = vert + size + 2
        vert = vert + 1
        tris = tris + 6
    vert = vert + 1
triangles.reverse()  # array is made counter-clockwise

def input(key):
    if key == "escape":
        Audio(sound_file_name = "assets/tick.wav")
        time.sleep(0.25)
        exit()

def update():
    coordinates.text = "coordinates (x, y, z):\n" + str(player.position)

# MAIN
terrainMesh = Mesh(vertices, triangles)
terrain = Entity(model = terrainMesh, collider = "mesh", texture = "grass_big", shader = lit_with_shadows_shader)
box = Entity(model = "cube", collider = "mesh", texture = "white_cube", position = (10, 5, 10), shader = lit_with_shadows_shader)
pivot = Entity()
DirectionalLight(parent=pivot, x = 10, y = 10, z = 15, shadows = True, rotation = (45, -45, 45))

if mode == 1:
    player = FirstPersonController()
if mode == 2:
    player = EditorCamera()
player.position = (10, 5, 10)

game.run()
I have tried looking into how to normalise the mesh or use shaders with it, but there is practically no helpful documentation on it whatsoever. I wrote the code that generates the mesh, but I don't know enough about shaders and normal generation to fix the issue myself. I was wondering if there's a built-in function that does this? (I have tried a few and I didn't know how to use them / they didn't work.) Any help is much appreciated, thanks.
A:
How should the texture map to the model? You have to define this by giving it uvs.
UVs are two-dimensional texture coordinates that correspond with the
vertex information for your geometry. UVs are vital because they
provide the link between a surface mesh and how an image texture gets
applied onto that surface. They are basically marker points that
control which pixels on the texture correspond to which vertex on the
3D mesh.
https://www.pluralsight.com/blog/film-games/understanding-uvs-love-them-or-hate-them-theyre-essential-to-know
To do this with ursina, give the Mesh a list of two dimensional coordinates, one Vec2 for each vertex.
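A hedged sketch for the grid generated in the question — the Vec2 coordinates below simply stretch the texture once across the whole terrain, and generate_normals() (a method on ursina's Mesh in recent versions) gives the lit shader per-vertex normals to work with:
uvs = []
for z in range(size + 1):
    for x in range(size + 1):
        uvs.append(Vec2(x / size, z / size))  # 0..1 across the grid

terrainMesh = Mesh(vertices=vertices, triangles=triangles, uvs=uvs)
terrainMesh.generate_normals()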
|
Ursina engine not rendering mesh properly?
|
I am creating a small game using Ursina and I have code which generates a terrain mesh using Perlin noise. The mesh itself renders, but I can't put textures on it properly and shaders do not work on it; it just renders as a solid colour.
screenshot of the game - terrain is all one colour and not shaded
here's my code
from ursina import *
from ursina.prefabs.first_person_controller import FirstPersonController
from ursina.shaders import lit_with_shadows_shader
from perlin_noise import PerlinNoise
import time

game = Ursina()
window.title = "new_game"
window.borderless = False
window.fps_counter.enabled = True
window.exit_button.visible = False
window.fullscreen = False

groundTexture = load_texture("assets/placeholder.png")
crosshairTexture = load_texture("assets/crosshair.png")
crosshair = Entity(model = "cube", texture = crosshairTexture, parent = camera.ui, scale = 0.2)
crosshair.always_on_top = True
title = Text("new game", origin = (6.825, -19))
coordinates = Text("", origin = (3.35, -8))

mode = 1
size = 20
level = size / 10
seed = random.randint(1, 1000000)
noise = PerlinNoise(octaves = 3, seed = seed)

vertices = [0] * ((size + 1) * (size + 1))
i = 0
for z in range(size + 1):
    for x in range(size + 1):
        y = level * noise([x / size, z / size])
        vertices[i] = x, y, z
        i = i + 1

triangles = [0] * (size * size * 6)
vert = 0
tris = 0
for z in range(size):
    for x in range(size):
        triangles[tris + 0] = vert + 0
        triangles[tris + 1] = vert + size + 1
        triangles[tris + 2] = vert + 1
        triangles[tris + 3] = vert + 1
        triangles[tris + 4] = vert + size + 1
        triangles[tris + 5] = vert + size + 2
        vert = vert + 1
        tris = tris + 6
    vert = vert + 1
triangles.reverse()  # array is made counter-clockwise

def input(key):
    if key == "escape":
        Audio(sound_file_name = "assets/tick.wav")
        time.sleep(0.25)
        exit()

def update():
    coordinates.text = "coordinates (x, y, z):\n" + str(player.position)

# MAIN
terrainMesh = Mesh(vertices, triangles)
terrain = Entity(model = terrainMesh, collider = "mesh", texture = "grass_big", shader = lit_with_shadows_shader)
box = Entity(model = "cube", collider = "mesh", texture = "white_cube", position = (10, 5, 10), shader = lit_with_shadows_shader)
pivot = Entity()
DirectionalLight(parent=pivot, x = 10, y = 10, z = 15, shadows = True, rotation = (45, -45, 45))

if mode == 1:
    player = FirstPersonController()
if mode == 2:
    player = EditorCamera()
player.position = (10, 5, 10)

game.run()
I have tried looking into how to normalise the mesh or use shaders with it, but there is practically no helpful documentation on it whatsoever. I wrote the code that generates the mesh, but I don't know enough about shaders and normal generation to fix the issue myself. I was wondering if there's a built-in function that does this? (I have tried a few and I didn't know how to use them / they didn't work.) Any help is much appreciated, thanks.
|
[
"How should the texture map to the model? You have to define this by giving it uvs.\n\nUVs are two-dimensional texture coordinates that correspond with the\nvertex information for your geometry. UVs are vital because they\nprovide the link between a surface mesh and how an image texture gets\napplied onto that surface. They are basically marker points that\ncontrol which pixels on the texture correspond to which vertex on the\n3D mesh.\nhttps://www.pluralsight.com/blog/film-games/understanding-uvs-love-them-or-hate-them-theyre-essential-to-know\n\nTo do this with ursina, give the Mesh a list of two dimensional coordinates, one Vec2 for each vertex.\n"
] |
[
0
] |
[] |
[] |
[
"game_development",
"python",
"rendering",
"ursina"
] |
stackoverflow_0074460669_game_development_python_rendering_ursina.txt
|
Q:
Get the minimum value in pandas with vectorization
I'm creating a column which is based on 2 other columns but also has an extra condition:
df['C'] = min((df['B'] - df['A']) , 0)
The new column is the subtraction of A and B, but if the value is negative it has to be 0. The above function does not work unfortunately. Can anyone help?
A:
You could use df.clip to set a lower bound for the data (i.e. any data below 0 to show as 0):
df['C'] = (df['B'] - df['A']).clip(lower=0)
Note: If you don't want any negatives, your original idea should use max rather than min. A negative would be < 0, it would keep the negative. You'd end up replacing all positive numbers with 0 rather than the negatives (e.g. min(-5, 0) would output -5)
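A quick toy check of the clipped column:
import pandas as pd

df = pd.DataFrame({'A': [1, 5], 'B': [4, 2]})
df['C'] = (df['B'] - df['A']).clip(lower=0)
print(df['C'].tolist())  # [3, 0] - the negative difference (-3) became 0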
|
Get the minimum value in pandas with vectorization
|
I'm creating a column which is based on 2 other columns but also has an extra condition:
df['C'] = min((df['B'] - df['A']) , 0)
The new column is the subtraction of A and B, but if the value is negative it has to be 0. The above function does not work unfortunately. Can anyone help?
|
[
"You could use df.clip to set a lower bound for the data (i.e. any data below 0 to show as 0):\ndf['C'] = (df['B'] - df['A']).clip(lower=0)\n\nNote: If you don't want any negatives, your original idea should use max rather than min. A negative would be < 0, it would keep the negative. You'd end up replacing all positive numbers with 0 rather than the negatives (e.g. min(-5, 0) would output -5)\n"
] |
[
3
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074460758_pandas_python.txt
|
Q:
How to create tags on an Azure disk using Python?
I want to add or create a new tag on an Azure disk using Python, but I am not able to. Can anyone please help me with the Python SDK code for this?
for disk in compute_client.disks.list():
    if disk.as_dict()["name"] == "test_disk_rohit":
        tags = target_disk.tags["DetachedTime"] = datetime.now()
        compute_client.disks.begin_create_or_update(resrc, disk.as_dict()["name"], disk)
This is what I tried to add/create a new tag for my azure disk called "test_disk_rohit".
Can anyone help me with this?
A:
Instead of using begin_create_or_update, you can use create_or_update.
Using the below code snippet, I was able to create/update the tags on the disk:
AZURE_TENANT_ID = '<Tenant ID>'
AZURE_CLIENT_ID = '<Client ID>'
AZURE_CLIENT_SECRET = '<Client Secret>'
AZURE_SUBSCRIPTION_ID = '<Sub_ID>'

credentials = ServicePrincipalCredentials(client_id=AZURE_CLIENT_ID, secret=AZURE_CLIENT_SECRET, tenant=AZURE_TENANT_ID)

resource_client = ResourceManagementClient(credentials, AZURE_SUBSCRIPTION_ID)
compute_client = ComputeManagementClient(credentials, AZURE_SUBSCRIPTION_ID)

Diskdetails = compute_client.disks.create_or_update(
    '<ResourceGroupName>',
    '<Disk Name>',
    {
        'location': 'eastasia',
        'creation_data': {
            'create_option': DiskCreateOption.copy,
            'source_resource_id': '<Source Resource ID>'
        },
        'tags': {
            'tagtest': 'testtagGanesh'
        },
    }
)

disk_resource = Diskdetails.result()

# Get disk details
disk = compute_client.disks.get('<ResourceGroupName>', '<Disk Name>')
print(disk.sku)
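If the goal is just to add a tag to an existing disk (as in the question) without copying it, here is a sketch that reuses only the calls shown above; resource names are placeholders:
from datetime import datetime

disk = compute_client.disks.get('<ResourceGroupName>', '<Disk Name>')
disk.tags = dict(disk.tags or {}, DetachedTime=str(datetime.now()))
poller = compute_client.disks.create_or_update('<ResourceGroupName>', '<Disk Name>', disk)
print(poller.result().tags)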
|
How to create tags on an Azure disk using Python?
|
I want to add or create a new tag on an Azure disk using Python, but I am not able to. Can anyone please help me with the Python SDK code for this?
for disk in compute_client.disks.list():
    if disk.as_dict()["name"] == "test_disk_rohit":
        tags = target_disk.tags["DetachedTime"] = datetime.now()
        compute_client.disks.begin_create_or_update(resrc, disk.as_dict()["name"], disk)
This is what I tried to add/create a new tag for my azure disk called "test_disk_rohit".
Can anyone help me with this?
|
[
"Instead of using begin_create_or_update you can use create_or_update.\nI have followed the below code snippet I can be able to create/update the tags in desk\nAZURE_TENANT_ID= '<Tenent ID>'\nAZURE_CLIENT_ID='<Client ID>'\nAZURE_CLIENT_SECRET='<Client Secret>'\nAZURE_SUBSCRIPTION_ID='<Sub_ID>'\n\ncredentials = ServicePrincipalCredentials(client_id=AZURE_CLIENT_ID,secret=AZURE_CLIENT_SECRET,tenant=AZURE_TENANT_ID) \n\nresource_client = ResourceManagementClient(credentials, AZURE_SUBSCRIPTION_ID)\ncompute_client = ComputeManagementClient(credentials,AZURE_SUBSCRIPTION_ID)\n\nDiskdetails = compute_client.disks.create_or_update(\n '<ResourceGroupName>',\n '<Disk Name>',\n {\n 'location': 'eastasia',\n 'creation_data': {\n 'create_option': DiskCreateOption.copy,\n 'source_resource_id': <Source Resource ID>\n\n},\n \"tags\": {\n \"tagtest\": \"testtagGanesh\"\n },\n }\n)\n\ndisk_resource = Diskdetails.result()\n#get Disk details\ndisk = compute_client.disks.get('<ResourceGroupName>','<Disk Name>')\n\nprint(disk.sku)\n\n\n"
] |
[
0
] |
[] |
[] |
[
"azure",
"python"
] |
stackoverflow_0074440190_azure_python.txt
|
Q:
Error using sigmoid activation function in the last dense layer of an LSTM
Trying to use sigmoid as the activation function for the last dense layer of an LSTM, I get this error:
ValueError: `logits` and `labels` must have the same shape, received ((None, 60, 1) vs (None,)).
The code is this
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train) #scaled_train
X_test_s = scaler.transform(X_test) #scaled_test
length = 60
n_features=89
generator = TimeseriesGenerator(X_train_s, Y_train['TARGET_ENTRY_LONG'], length=length, batch_size=1)
validation_generator = TimeseriesGenerator(X_test_s, Y_test['TARGET_ENTRY_LONG'], length=length, batch_size=1)
# define model
model = Sequential()
model.add(LSTM(90, activation='relu', input_shape=(length, n_features), return_sequences=True, dropout = 0.3))
model.add(LSTM(30,activation='relu',return_sequences=True, dropout = 0.3))
model.add(Dense(1, activation = 'sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()
# fit model
model.fit(generator,epochs=3,
validation_data=validation_generator)
#callbacks=[early_stop])
If I replace the last layer declaration with the following one
model.add(Dense(1))
I get no errors, but probably also not the expected result. Any idea?
A:
After several attempts I found the cause of the trouble. As Dr. Snoopy said in a previous remark, it was in the layer before the last one: it must not have return_sequences=True set when the last layer is a Dense layer for binary classification using sigmoid as the activation function (the earlier LSTM layers can keep it). Therefore, this layer
model.add(LSTM(30,activation='relu',return_sequences=True, dropout = 0.3))
should instead be written as follows:
model.add(LSTM(30,activation='relu', dropout = 0.3))
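For reference, the corrected model definition then looks like this (only the layers before the final LSTM keep return_sequences=True):
model = Sequential()
model.add(LSTM(90, activation='relu', input_shape=(length, n_features),
               return_sequences=True, dropout=0.3))
model.add(LSTM(30, activation='relu', dropout=0.3))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')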
|
Error using sigmoid activation function in the last dense layer of an LSTM
|
Trying to use sigmoid as the activation function for the last dense layer of an LSTM, I get this error:
ValueError: `logits` and `labels` must have the same shape, received ((None, 60, 1) vs (None,)).
The code is this
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train) #scaled_train
X_test_s = scaler.transform(X_test) #scaled_test
length = 60
n_features=89
generator = TimeseriesGenerator(X_train_s, Y_train['TARGET_ENTRY_LONG'], length=length, batch_size=1)
validation_generator = TimeseriesGenerator(X_test_s, Y_test['TARGET_ENTRY_LONG'], length=length, batch_size=1)
# define model
model = Sequential()
model.add(LSTM(90, activation='relu', input_shape=(length, n_features), return_sequences=True, dropout = 0.3))
model.add(LSTM(30,activation='relu',return_sequences=True, dropout = 0.3))
model.add(Dense(1, activation = 'sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()
# fit model
model.fit(generator,epochs=3,
validation_data=validation_generator)
#callbacks=[early_stop])
If I replace the last layer declaration with the following one
model.add(Dense(1))
I get no errors, but probably also not the expected result. Any idea?
|
[
"Found the cause of the trouble after several attempts, as Dr. Snoopy said in a previous remark, it was in the layer before the last one: it shall have no \"return_sequences=True\" set, that is for all the layers before if the last one is a dense layer for binary classification using sigmoid as activation function. Therefore, this layer\nmodel.add(LSTM(30,activation='relu',return_sequences=True, dropout = 0.3))\n\nshall be written instead as following\nmodel.add(LSTM(30,activation='relu', dropout = 0.3))\n\n"
] |
[
0
] |
[] |
[] |
[
"neural_network",
"numpy",
"pandas",
"python"
] |
stackoverflow_0074445524_neural_network_numpy_pandas_python.txt
|
Q:
Combining a REST API and WebSockets
I have a REST API server which makes calls to some other APIs. I am accessing the data I get from the server on a React JS frontend, but for certain use cases I need to fetch real-time data from the backend. Is there any way to do both together? Below is my code:
from flask import Flask, request
from flask_cors import CORS
from tuya_connector import TuyaOpenAPI, TUYA_LOGGER

app = Flask(__name__)
CORS(app)


@app.get("/api/device/<string:deviceid>")
def getdata(deviceid):
    ACCESS_ID = ""
    ACCESS_KEY = ""
    API_ENDPOINT = ""

    # Enable debug log
    # Init OpenAPI and connect
    openapi = TuyaOpenAPI(API_ENDPOINT, ACCESS_ID, ACCESS_KEY)
    openapi.connect()

    # Set up device_id
    DEVICE_ID = deviceid

    # Call APIs from Tuya
    # Get the device information
    response = openapi.get("/v1.0/devices/{}".format(DEVICE_ID))
    return response
I want to have a traditional request-response service along with real-time data fetching.
A:
Websockets endpoints are exactly what you're looking for. If that is not too late, I'd recommend switching to FastAPI which supports WebSockets "natively" (out-of-the-box) - https://fastapi.tiangolo.com/advanced/websockets
If you need to keep using Flask, there are a few packages that allow you to add WebSockets endpoints: https://flask-sock.readthedocs.io/en/latest/
With FastAPI, this is that simple:
@app.get("/")
async def get():
return {"msg": "This is a regular HTTP endpoint"}
@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
await websocket.accept()
while True:
data = await websocket.receive_text()
await websocket.send_text(f"Message text was: {data}")
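A quick way to exercise the /ws endpoint from Python (a sketch assuming the third-party websockets package and uvicorn's default port 8000):
import asyncio
import websockets

async def main():
    async with websockets.connect('ws://localhost:8000/ws') as ws:
        await ws.send('hello')
        print(await ws.recv())  # "Message text was: hello"

asyncio.run(main())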
|
Combining a REST API and WebSockets
|
I have a REST API server which makes calls to some other APIs. I am accessing the data I get from the server on a React JS frontend, but for certain use cases I need to fetch real-time data from the backend. Is there any way to do both together? Below is my code:
from flask import Flask, request
from flask_cors import CORS
from tuya_connector import TuyaOpenAPI, TUYA_LOGGER

app = Flask(__name__)
CORS(app)


@app.get("/api/device/<string:deviceid>")
def getdata(deviceid):
    ACCESS_ID = ""
    ACCESS_KEY = ""
    API_ENDPOINT = ""

    # Enable debug log
    # Init OpenAPI and connect
    openapi = TuyaOpenAPI(API_ENDPOINT, ACCESS_ID, ACCESS_KEY)
    openapi.connect()

    # Set up device_id
    DEVICE_ID = deviceid

    # Call APIs from Tuya
    # Get the device information
    response = openapi.get("/v1.0/devices/{}".format(DEVICE_ID))
    return response
I want to have a traditional request-response service along with real-time data fetching.
|
[
"Websockets endpoints are exactly what you're looking for. If that is not too late, I'd recommend switching to FastAPI which supports WebSockets \"natively\" (out-of-the-box) - https://fastapi.tiangolo.com/advanced/websockets\nIf you need to keep using Flask, there are a few packages that allow you to add WebSockets endpoints: https://flask-sock.readthedocs.io/en/latest/\nWith FastAPI, this is that simple:\n@app.get(\"/\")\nasync def get():\n return {\"msg\": \"This is a regular HTTP endpoint\"}\n\n\n@app.websocket(\"/ws\")\nasync def websocket_endpoint(websocket: WebSocket):\n await websocket.accept()\n while True:\n data = await websocket.receive_text()\n await websocket.send_text(f\"Message text was: {data}\")\n\n"
] |
[
0
] |
[] |
[] |
[
"flask",
"python",
"websocket"
] |
stackoverflow_0074450622_flask_python_websocket.txt
|
Q:
Pyomo cannot find ipopt in Linux even though it's installed
I'm using Kali Linux and I needed to install ipopt to use with pyomo in Python which I'm currently learning. I have tried several things and none of them have worked with trying to run ipopt in pyomo. First, following their official website's instructions did not work (https://coin-or.github.io/Ipopt/INSTALL.html) for pyomo even though everything seemed to install:
sudo apt-get install gcc g++ gfortran git patch wget pkg-config liblapack-dev libmetis-dev
Next, I attempted to use coinbrew following coin-or repo's suggestion:
/path/to/coinbrew fetch Ipopt --no-prompt
/path/to/coinbrew build Ipopt --prefix=/dir/to/install --test --no-prompt --verbosity=3
/path/to/coinbrew install Ipopt --no-prompt
It took a long time to build from the source and I am not sure if this did anything.
My third attempt was installing cyipopt (https://github.com/mechmotum/cyipopt) with pip and running one of the examples in their repo. That worked perfectly fine with cyipopt, but not with pyomo, which still could not find the solver when I tried to run it. On my fourth attempt, I went ahead and downloaded the ipopt linux64 binary directly from https://ampl.com/dl/open/ipopt/. I then unzipped the file, copied the executable ipopt into my /usr/bin, and added +x permission to it. I tested the executable by running ./ipopt and it appears to work properly there:
$ ipopt
No stub!
usage: ipopt [options] stub [-AMPL] [<assignment> ...]
Options:
-- {end of options}
-= {show name= possibilities}
-? {show usage}
-bf {read boundsfile f}
-e {suppress echoing of assignments}
-of {write .sol file to file f}
-s {write .sol file (without -AMPL)}
-v {just show version}
I went ahead and ran it on an example file:
from pyomo.environ import *
V = 40 # liters
kA = 0.5 # 1/min
kB = 0.1 # l/min
CAf = 2.0 # moles/liter
# create a model instance
m = ConcreteModel()
# create the decision variable
m.q = Var(domain=NonNegativeReals)
# create the objective
m.CBmax = Objective(expr=m.q*V*kA*CAf/(m.q + V*kB)/(m.q + V*kA), sense=maximize)
# solve using the nonlinear solver ipopt
SolverFactory('ipopt').solve(m)
# print solution
print('Flowrate at maximum CB = ', m.q(), 'liters per minute.')
print('Maximum CB =', m.CBmax(), 'moles per liter.')
print('Productivity = ', m.q()*m.CBmax(), 'moles per minute.')
The error is:
WARNING: Could not locate the 'ipopt' executable, which is required for solver
ipopt
Traceback (most recent call last):
File "/media/sf_SharedFiles/Code/optimization/examplescalaroptimize.py", line 20, in <module>
SolverFactory('ipopt').solve(m)
File "/home/kali/.local/lib/python3.9/site-packages/pyomo/opt/base/solvers.py", line 512, in solve
self.available(exception_flag=True)
File "/home/kali/.local/lib/python3.9/site-packages/pyomo/opt/solver/shellcmd.py", line 128, in available
raise ApplicationError(msg % self.name)
pyomo.common.errors.ApplicationError: No executable found for solver 'ipopt'
I have spent several hours looking into this and nothing has worked.
A:
Hey, I also faced this problem. I was running my script on a remote HPC with a Linux system.
However, when I execute the file from the command line, it works and solves the model very well. When I run the script from PyCharm, it doesn't work, showing that the solver cannot be located.
That's super strange.
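If the mismatch comes from the IDE not inheriting your shell PATH, one workaround (a sketch; the path is the one used in the question) is to hand Pyomo the executable explicitly:
from pyomo.environ import SolverFactory

# Point Pyomo straight at the binary instead of relying on PATH
solver = SolverFactory('ipopt', executable='/usr/bin/ipopt')
solver.solve(m)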
|
Pyomo cannot find ipopt in Linux even though it's installed
|
I'm using Kali Linux and I needed to install ipopt to use with pyomo in Python which I'm currently learning. I have tried several things and none of them have worked with trying to run ipopt in pyomo. First, following their official website's instructions did not work (https://coin-or.github.io/Ipopt/INSTALL.html) for pyomo even though everything seemed to install:
sudo apt-get install gcc g++ gfortran git patch wget pkg-config liblapack-dev libmetis-dev
Next, I attempted to use coinbrew following coin-or repo's suggestion:
/path/to/coinbrew fetch Ipopt --no-prompt
/path/to/coinbrew build Ipopt --prefix=/dir/to/install --test --no-prompt --verbosity=3
/path/to/coinbrew install Ipopt --no-prompt
It took a long time to build from the source and I am not sure if this did anything.
My third attempt was installing cyipopt (https://github.com/mechmotum/cyipopt) with pip and running one of the examples in their repo. That worked perfectly fine using cyipopt, but not pyomo, which still could not find the solver when I tried to run it. On my fourth attempt, I went ahead and downloaded the ipopt linux64 binary directly from https://ampl.com/dl/open/ipopt/. I then unzipped the file, copied the executable ipopt into my /usr/bin, and added +x permission to it. I tested the executable with ./ipopt and it appears to work properly there:
$ ipopt
No stub!
usage: ipopt [options] stub [-AMPL] [<assignment> ...]
Options:
-- {end of options}
-= {show name= possibilities}
-? {show usage}
-bf {read boundsfile f}
-e {suppress echoing of assignments}
-of {write .sol file to file f}
-s {write .sol file (without -AMPL)}
-v {just show version}
I went ahead and ran it on an example file:
from pyomo.environ import *
V = 40 # liters
kA = 0.5 # 1/min
kB = 0.1 # l/min
CAf = 2.0 # moles/liter
# create a model instance
m = ConcreteModel()
# create the decision variable
m.q = Var(domain=NonNegativeReals)
# create the objective
m.CBmax = Objective(expr=m.q*V*kA*CAf/(m.q + V*kB)/(m.q + V*kA), sense=maximize)
# solve using the nonlinear solver ipopt
SolverFactory('ipopt').solve(m)
# print solution
print('Flowrate at maximum CB = ', m.q(), 'liters per minute.')
print('Maximum CB =', m.CBmax(), 'moles per liter.')
print('Productivity = ', m.q()*m.CBmax(), 'moles per minute.')
The error is:
WARNING: Could not locate the 'ipopt' executable, which is required for solver
ipopt
Traceback (most recent call last):
File "/media/sf_SharedFiles/Code/optimization/examplescalaroptimize.py", line 20, in <module>
SolverFactory('ipopt').solve(m)
File "/home/kali/.local/lib/python3.9/site-packages/pyomo/opt/base/solvers.py", line 512, in solve
self.available(exception_flag=True)
File "/home/kali/.local/lib/python3.9/site-packages/pyomo/opt/solver/shellcmd.py", line 128, in available
raise ApplicationError(msg % self.name)
pyomo.common.errors.ApplicationError: No executable found for solver 'ipopt'
I have spent several hours looking into this and nothing has worked.
|
[
"Hey I also faced this problem. I was running my script in remote HPC with Linux system.\nHowever, when I use command to execute the file, it works and solve the model very well. When I use pycharm run the script, it doesn't work, showing that the solver can not be located.\nThats super strange\n"
] |
[
0
] |
[] |
[] |
[
"ipopt",
"optimization",
"pyomo",
"python"
] |
stackoverflow_0071454400_ipopt_optimization_pyomo_python.txt
|
Q:
Searching for keyword combinations in pandas dataframe for classification
This is a follow-up question to Searching for certain keywords in pandas dataframe for classification.
I have a list of keywords based on which I want to categorize job descriptions. Here are the input file, example keywords, and code:
job_description
Managing engineer is responsible for
This job entails assisting to
Engineer is required the execute
Pilot should be able to control
Customer specialist advices
Different cases brought by human resources department
cat_dict = {
"manager": ["manager", "president", "management", "managing"],
"assistant": ["assistant", "assisting", "customer specialist"],
"engineer": ["engineer", "engineering", "scientist", "architect"],
"HR": ["human resources"]
}
def classify(desc):
    for cat, lst in cat_dict.items():
        if any(x in desc.lower() for x in lst):
            return cat

df['classification'] = df["job_description"].apply(classify)
The code works well if there is a single word, e.g. "manager" or "assistant", but it cannot identify the cases where there are two words, e.g. "customer specialist" or "human resources".
A:
I think you are missing a comma in your cat_dict dictionary. I tried your example:
import pandas as pd
cat_dict = {
"manager": ["manager", "president", "management", "managing"],
"assistant": ["assistant", "assisting", "customer specialist"],
"engineer": ["engineer", "engineering", "scientist", "architect"],
"HR": ["human resources"]
}
def classify(desc):
    for cat, lst in cat_dict.items():
        if any(x in desc.lower() for x in lst):
            return cat

# `text` holds the raw job_description column from the question
text = """job_description
Managing engineer is responsible for
This job entails assisting to
Engineer is required the execute
Pilot should be able to control
Customer specialist advices
Different cases brought by human resources department"""

text_df = pd.Series(text.split('\n')[1:])
text_df.apply(classify)
Result:
0 manager
1 assistant
2 engineer
3 None
4 assistant
5 HR
dtype: object
which successfully classified assistant for "customer specialist" and HR for "human resources".
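For larger frames, the same first-match logic can be vectorized (a sketch of mine, not part of the answer); since it is plain substring matching, multi-word phrases such as "human resources" are still found:
import re
import numpy as np

cats = list(cat_dict)
conds = [
    text_df.str.contains("|".join(map(re.escape, cat_dict[c])), case=False)
    for c in cats
]
classified = np.select(conds, cats, default=None)  # first matching category wins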
|
Searching for keyword combinations in pandas dataframe for classification
|
This is a follow-up question to Searching for certain keywords in pandas dataframe for classification.
I have a list of keywords based on which I want to categorize job descriptions. Here are the input file, example keywords, and code:
job_description
Managing engineer is responsible for
This job entails assisting to
Engineer is required the execute
Pilot should be able to control
Customer specialist advices
Different cases brought by human resources department
cat_dict = {
"manager": ["manager", "president", "management", "managing"],
"assistant": ["assistant", "assisting", "customer specialist"],
"engineer": ["engineer", "engineering", "scientist", "architect"],
"HR": ["human resources"]
}
def classify(desc):
    for cat, lst in cat_dict.items():
        if any(x in desc.lower() for x in lst):
            return cat

df['classification'] = df["job_description"].apply(classify)
The code works well if there is a single word, e.g. "manager" or "assistant", but it cannot identify the cases where there are two words, e.g. "customer specialist" or "human resources".
|
[
"I think you are missing a comma in your cat_dict dictionary. I tried your example:\nimport pandas as pd\n\ncat_dict = {\n \"manager\": [\"manager\", \"president\", \"management\", \"managing\"],\n \"assistant\": [\"assistant\", \"assisting\", \"customer specialist\"],\n \"engineer\": [\"engineer\", \"engineering\", \"scientist\", \"architect\"],\n \"HR\": [\"human resources\"]\n}\n\ndef classify(desc):\n for cat, lst in cat_dict.items():\n if any(x in desc.lower() for x in lst):\n return cat\n\ntext_df = pd.Series(text.split('\\n')[1:])\ntext_df.apply(classify)\n\nResult:\n0 manager\n1 assistant\n2 engineer\n3 None\n4 assistant\n5 HR\ndtype: object\n\nwhich successfully classified assistant for \"customer specialist\" and HR for \"human resources\".\n"
] |
[
1
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074460677_pandas_python.txt
|
Q:
select_set() method of Listbox tkinter widget in Python enables multiple selections even when selectmode is set to BROWSE
I am working with Python 3.10.5 64-bit and am seeing strange behavior regarding the Listbox widget of the tkinter module.
Look at the following code:
import tkinter as tk
root = tk.Tk()
cities = ['New York', 'Beijing', 'Cairo', 'Mumbai', 'Mexico']
list_source = tk.StringVar(value=cities)
lst_cities = tk.Listbox(
master=root,
listvariable=list_source,
height=6,
selectmode=tk.SINGLE,
exportselection=False) # enables that the selected item will be highlighted
lst_cities.grid(row=0, column=0, sticky=tk.EW)
lst_cities.select_set(0)
lst_cities.select_set(1)
lst_cities.select_set(2)
root.mainloop()
As you can see, I have created a simple listbox and finally called the 'select_set' method several times with different indexes. Since I have set selectmode to SINGLE, I would assume that a new 'select_set' call would remove the previous selection, but this isn't the case, so I ended up with 3 selected entries. Is this the desired behavior? If so, it looks inconsistent.
I tried to clear the selection with:
lst_cities.selection_clear(tk.END)
lst_cities.select_clear(tk.END)
but this doesn't seem to have any effect. So I am also looking for a way to clear the selection,
so I can select a new entry. It seems I am missing something.
A:
According to the help on selection_set():
selection_set(self, first, last=None)
Set the selection from FIRST to LAST (included) without
changing the currently selected elements.
currently selected elements are not affected.
So you need to clear the current selection first using selection_clear() (or select_clear()):
lst_cities.selection_clear(0, "end")
Better to use a function to simplify it:
def select_set(idx):
    lst_cities.selection_clear(0, "end")
    lst_cities.selection_set(idx)

select_set(0)
select_set(1)
select_set(2)
A:
The simple way to do this: since you are selecting several indexes at different times, tk.MULTIPLE is a good choice instead of tk.SINGLE. There are several ways to do it.
import tkinter as tk

root = tk.Tk()
root.geometry("200x200")

cities = ['New York', 'Beijing', 'Cairo', 'Mumbai', 'Mexico']

def remove_item():
    selected_checkboxs = lst_cities.curselection()
    for selected_checkbox in selected_checkboxs[::-1]:
        lst_cities.delete(selected_checkbox)

lst_cities = tk.Listbox(root,
                        selectmode=tk.MULTIPLE,
                        exportselection=False,
                        height=6)
lst_cities.pack()

for item in cities:
    lst_cities.insert(tk.END, item)

tk.Button(root, text="delete", command=remove_item).pack()

root.mainloop()
Result MULTIPLE:
Result after delete:
|
select_set() method of Listbox tkinter widget in Python enables multiple selections even when selectmode is set to BROWSE
|
I am working with Python 3.10.5 64-bit and am seeing strange behavior regarding the Listbox widget of the tkinter module.
Look at the following code:
import tkinter as tk
root = tk.Tk()
cities = ['New York', 'Beijing', 'Cairo', 'Mumbai', 'Mexico']
list_source = tk.StringVar(value=cities)
lst_cities = tk.Listbox(
master=root,
listvariable=list_source,
height=6,
selectmode=tk.SINGLE,
exportselection=False) # enables that the selected item will be highlighted
lst_cities.grid(row=0, column=0, sticky=tk.EW)
lst_cities.select_set(0)
lst_cities.select_set(1)
lst_cities.select_set(2)
root.mainloop()
As you can see, I have created a simple listbox and finally called the 'select_set' method several times with different indexes. Since I have set selectmode to SINGLE, I would assume that a new 'select_set' call would remove the previous selection, but this isn't the case, so I ended up with 3 selected entries. Is this the desired behavior? If so, it looks inconsistent.
I tried to clear the selection with:
lst_cities.selection_clear(tk.END)
lst_cities.select_clear(tk.END)
but this doesn't seem to have any effect. So I am also looking for a way to clear the selection,
so I can select a new entry. It seems I am missing something.
|
[
"According to the help on selection_set():\nselection_set(self, first, last=None)\n Set the selection from FIRST to LAST (included) without\n changing the currently selected elements.\n\ncurrently selected elements are not affected.\nSo you need to clear current selections using selection_clear() (or select_clear()):\nselection_clear(0, \"end\")\n\nBetter to use a function to simplify it:\ndef select_set(idx):\n lst_cities.selection_clear(0, \"end\")\n lst_cities.selection_set(idx)\n\nselection_set(0)\nselection_set(1)\nselection_set(2)\n\n",
"The simple way to do this. As you saying that several times with different indexes by using tk.MULTIPLE is good choice instead of tk.SINGLE. There are several ways to do.\nimport tkinter as tk\n\nroot = tk.Tk()\nroot.geometry(\"200x200\")\n\ncities = ['New York', 'Beijing', 'Cairo', 'Mumbai', 'Mexico']\n\ndef remove_item():\n selected_checkboxs = lst_cities.curselection()\n\n for selected_checkbox in selected_checkboxs[::-1]:\n lst_cities.delete(selected_checkbox)\n\nlst_cities = tk.Listbox(root,\n selectmode=tk.MULTIPLE,\n exportselection=False,\n height=6)\nlst_cities.pack()\n\nfor item in cities:\n lst_cities.insert(tk.END, item)\n\ntk.Button(root, text=\"delete\", command=remove_item).pack()\n\nroot.mainloop()\n\nResult MULTIPLE:\n\nResult after delete:\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"listbox",
"python",
"tkinter"
] |
stackoverflow_0074457532_listbox_python_tkinter.txt
|
Q:
I'm receiving a NetworkX related error on Memgraph startup
When I start Memgraph I can't access query modules. Right after startup, I get an ImportError for the NetworkX module. I've checked and I can see that I have NetworkX installed. I've also tried to reinstall Memgraph, but I had no luck. The error is still there.
A:
This is most likely due to the Python version. Memgraph is using the default system Python.
Check the Python version with python --version. If you don't run Python 3, upgrade it. With python3 there shouldn't be such problems.
|
I'm receiving a NetworkX related error on Memgraph startup
|
When I start Memgraph I can't access query modules. Right after startup, I get an ImportError for the NetworkX module. I've checked and I can see that I have NetworkX installed. I've also tried to reinstall Memgraph, but I had no luck. The error is still there.
|
[
"This is most likely due to the Python version. Memgraph is using the default system Python.\nCheck the Python version with python --version. If you don't run Python 3, upgrade it. With python3 there shouldn't be such problems.\n"
] |
[
0
] |
[] |
[] |
[
"memgraphdb",
"networkx",
"python"
] |
stackoverflow_0074461034_memgraphdb_networkx_python.txt
|
Q:
AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'
So recently I had to reinstall python due to corrupt executable. This made one of our python scripts bomb with the following error:
AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'
The line of code that caused it to bomb was:
from apiclient.discovery import build
I tried pip uninstalling and pip upgrading the google-api-python-client, but I can't seem to find any information on this particular error.
For what it is worth - I am trying to pull google analytics info down via API call.
here is an output of the command prompt error
File "C:\Analytics\Puritan_GoogleAnalytics\Google_Conversions\mcfTest.py", line 1, in <module>
from apiclient.discovery import build
File "C:\ProgramData\Anaconda3\lib\site-packages\apiclient\__init__.py", line 3, in <module>
from googleapiclient import channel, discovery, errors, http, mimeparse, model
File "C:\ProgramData\Anaconda3\lib\site-packages\googleapiclient\discovery.py", line 57, in <module>
from googleapiclient import _auth, mimeparse
File "C:\ProgramData\Anaconda3\lib\site-packages\googleapiclient\_auth.py", line 34, in <module>
import oauth2client.client
File "C:\ProgramData\Anaconda3\lib\site-packages\oauth2client\client.py", line 45, in <module>
from oauth2client import crypt
File "C:\ProgramData\Anaconda3\lib\site-packages\oauth2client\crypt.py", line 45, in <module>
from oauth2client import _openssl_crypt
File "C:\ProgramData\Anaconda3\lib\site-packages\oauth2client\_openssl_crypt.py", line 16, in <module>
from OpenSSL import crypto
File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\__init__.py", line 8, in <module>
from OpenSSL import crypto, SSL
File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\crypto.py", line 1517, in <module>
class X509StoreFlags(object):
File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\crypto.py", line 1537, in X509StoreFlags
CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK
AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'
A:
Edit the crypto.py file and remove the offending line by commenting it out with a #
Then upgrade latest version of PyOpenSSL.
pip install pip --upgrade
pip install pyopenssl --upgrade
Now you can re-enable the commented line and it should work.
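If you are not sure which crypto.py your interpreter actually loads (there can be several copies), this one-liner prints its location without importing the broken module (my addition):
python -c "import importlib.util; print(importlib.util.find_spec('OpenSSL').origin)"
crypto.py sits in the same directory as the printed __init__.py.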
A:
On my Ubuntu 20.04.5 I managed to solve the error
CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK
by re-installing the following packages:
apt-get --reinstall install python-apt
apt-get --reinstall install apt-transport-https
apt-get install build-essential libssl-dev libffi-dev python-dev
I did not use pip, as I received this error message while using an Ansible playbook and wasn't able to reach the servers anymore.
Hope it helps somebody one day.
A:
For me, the earlier answers didn't help, as I hit this problem with all pip commands, even pip3 -V. I solved it by:
wget https://files.pythonhosted.org/packages/00/3f/ea5cfb789dddb327e6d2cf9377c36d9d8607af85530af0e7001165587ae7/pyOpenSSL-22.1.0-py3-none-any.whl (get url from https://pypi.org/project/pyOpenSSL/#files if you need the latest version)
python3 -m easy_install pyOpenSSL-22.1.0-py3-none-any.whl
Thanks https://askubuntu.com/a/1429674
A:
As all of the above failed for me, I used the trick here: https://askubuntu.com/a/1433089/497392
sudo apt remove python3-pip
wget https://bootstrap.pypa.io/get-pip.py
sudo python3 get-pip.py
And then after a reboot:
pip install pyopenssl --upgrade
A:
If you have pip completely broken, as @sgdesmet proposes in a comment, the only option to resolve this issue is to
"Edit the crypto.py file and remove the offending line by commenting it out with a #"
No other solution worked for me.
A:
If pip / pip3 is completely broken and none of the other options work (as described by @DarkSkull), then the line in the crypto.py file that's causing the issue has to be deleted or commented out.
Here's an automated way of doing it:
python_openssl_crypto_file="/usr/lib/python3/dist-packages/OpenSSL/crypto.py"
search_term="CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK"
cb_issuer_check_line_number="$(awk "/$search_term/ {print FNR}" $python_openssl_crypto_file)"
sed -i "${cb_issuer_check_line_number}s/.*/ # $search_term/" $python_openssl_crypto_file
A:
I've tried upgrading pip and installing another version of pyOpenSSL from a whl file, but that didn't work. The only thing that helped was removing the entire folder with the OpenSSL module, like rm -rf ...python-3.8.10/lib/python3.8/site-packages/OpenSSL, and then doing all the things you need.
|
AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'
|
So recently I had to reinstall python due to corrupt executable. This made one of our python scripts bomb with the following error:
AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'
The line of code that caused it to bomb was:
from apiclient.discovery import build
I tried pip uninstalling and pip upgrading the google-api-python-client, but I can't seem to find any information on this particular error.
For what it is worth - I am trying to pull google analytics info down via API call.
here is an output of the command prompt error
File "C:\Analytics\Puritan_GoogleAnalytics\Google_Conversions\mcfTest.py", line 1, in <module>
from apiclient.discovery import build
File "C:\ProgramData\Anaconda3\lib\site-packages\apiclient\__init__.py", line 3, in <module>
from googleapiclient import channel, discovery, errors, http, mimeparse, model
File "C:\ProgramData\Anaconda3\lib\site-packages\googleapiclient\discovery.py", line 57, in <module>
from googleapiclient import _auth, mimeparse
File "C:\ProgramData\Anaconda3\lib\site-packages\googleapiclient\_auth.py", line 34, in <module>
import oauth2client.client
File "C:\ProgramData\Anaconda3\lib\site-packages\oauth2client\client.py", line 45, in <module>
from oauth2client import crypt
File "C:\ProgramData\Anaconda3\lib\site-packages\oauth2client\crypt.py", line 45, in <module>
from oauth2client import _openssl_crypt
File "C:\ProgramData\Anaconda3\lib\site-packages\oauth2client\_openssl_crypt.py", line 16, in <module>
from OpenSSL import crypto
File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\__init__.py", line 8, in <module>
from OpenSSL import crypto, SSL
File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\crypto.py", line 1517, in <module>
class X509StoreFlags(object):
File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\crypto.py", line 1537, in X509StoreFlags
CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK
AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'
|
[
"Edit the crypto.py file and remove the offending line by commenting it out with a #\nThen upgrade latest version of PyOpenSSL.\npip install pip --upgrade\npip install pyopenssl --upgrade\n\nNow you can re-add the commented line again and it should be working\n",
"on my ubuntu \"20.04.5\" I manage solving the error:\nCB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK\\r\n\nby re-installing the following packages:\napt-get --reinstall install python-apt\napt-get --reinstall install apt-transport-https\napt-get install build-essential libssl-dev libffi-dev python-dev\n\nI do not use pip as I received this error message using ansible playbook and wasn't able to reach the servers anymore.\nHope it helps somebody on day.\n",
"For me, earlier answers can't help me as I meet this problem for all pip commands, even pip3 -V. But I solved it by:\n\nwget https://files.pythonhosted.org/packages/00/3f/ea5cfb789dddb327e6d2cf9377c36d9d8607af85530af0e7001165587ae7/pyOpenSSL-22.1.0-py3-none-any.whl (get url from https://pypi.org/project/pyOpenSSL/#files if you need the latest version)\n\npython3 -m easy_install pyOpenSSL-22.1.0-py3-none-any.whl\n\n\nThanks https://askubuntu.com/a/1429674\n",
"As all the above failed for me i used the trick here: https://askubuntu.com/a/1433089/497392\nsudo apt remove python3-pip \nwget https://bootstrap.pypa.io/get-pip.py\nsudo python3 get-pip.py\n\nAnd then after a reboot:\npip install pyopenssl --upgrade\n\n",
"If you have pip completely broken, as @sgdesmet propose in a comment, the only option to resolve this issue is\n\n\"Edit the crypto.py file and remove the offending line by commenting it out with a #\"\n\nNo other solutions work with me.\n",
"If pip / pip3 is completely broken and nothing of the other option work (as described by @DarkSkull), then the line in the crypto.py file that's causing the issue has to be deleted or commented out.\nHere's an automated way of doing it:\npython_openssl_crypto_file=\"/usr/lib/python3/dist-packages/OpenSSL/crypto.py\"\nsearch_term=\"CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK\"\ncb_issuer_check_line_number=\"$(awk \"/$search_term/ {print FNR}\" $python_openssl_crypto_file)\"\nsed -i \"${cb_issuer_check_line_number}s/.*/ # $search_term/\" $python_openssl_crypto_file\n\n",
"I've tried upgrading pip and installing another version of pyOpenSSL from whl file, but that didn't work. The only thing that helped is removing the entire folder with OpenSSL module like that rm -rf ...python-3.8.10/lib/python3.8/site-packages/OpenSSL and then doing all the thing you need.\n"
] |
[
39,
4,
2,
2,
0,
0,
0
] |
[] |
[] |
[
"google_analytics_api",
"python"
] |
stackoverflow_0073830524_google_analytics_api_python.txt
|
Q:
Delete dictionary from JSON based on condition in Value - Python
I have JSON as follows
dict = [
    {'name': 'Test01-Serial01'},
    {'name': 'Tests04-Serial04'}
]
First I want to split the name on - and take index 0, which is Test01.
Then I want to delete the dictionaries whose name doesn't follow the rule:
Rule: a 4-letter word followed by a 2-digit number
Here Tests04 doesn't follow the rule, since it contains a 5-letter word.
A:
Write a function that validates the value according to your rules. Reconstruct the original list with a list comprehension.
from string import ascii_letters, digits
def isvalid(s):
    return len(s) == 6 and all(c in ascii_letters for c in s[:4]) and all(c in digits for c in s[4:])
_list = [
{'name': 'Test01-Serial01'},
{'name': 'Tests04-Serial04'}
]
_list = [e for e in _list if isvalid(e['name'].split('-')[0])]
print(_list)
Output:
[{'name': 'Test01-Serial01'}]
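The same rule can also be written with built-in string predicates (a sketch; note that str.isalpha accepts non-ASCII letters too, unlike the ascii_letters check above):
def isvalid(s):
    return len(s) == 6 and s[:4].isalpha() and s[4:].isdigit()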
A:
You can also use regular expressions to parse the values:
import re
mylist = [
{'name': 'Test01' },
{'name': 'Tests04' }
]
regex = re.compile(r'^\w{4}\d{2}$')
mylist = [{k:v} for _ in mylist for k,v in _.items() if regex.match(v)]
# Or, maybe this is more clear?
# mylist = [item for item in mylist if regex.match(list(item.values())[0])]
print(mylist)
This returns:
[{'name': 'Test01'}]
This says to look for four "word" characters then two digits between the start and end of the value of each object. Anything that doesn't match this pattern is filtered out. Check out the definition of \w to make sure that you and the authors of re agree on what word chars are.
And, as @cobra pointed out, using dict as a variable name (particularly for a list) is not best practice.
|
Delete dictionary from JSON based on condition in Value - Python
|
I have JSON as follows
dict = [
    {'name': 'Test01-Serial01'},
    {'name': 'Tests04-Serial04'}
]
First I want to split the name on - and take index 0, which is Test01.
Then I want to delete the dictionaries whose name doesn't follow the rule:
Rule: a 4-letter word followed by a 2-digit number
Here Tests04 doesn't follow the rule, since it contains a 5-letter word.
|
[
"Write a function that validates the value according to your rules. Reconstruct the original list with a list comprehension.\nfrom string import ascii_letters, digits\n\n\ndef isvalid(s):\n return len(s) == 6 and all(c in ascii_letters for c in s[:4]) and all(c in digits for c in s[4:])\n\n\n_list = [\n {'name': 'Test01-Serial01'},\n {'name': 'Tests04-Serial04'}\n]\n_list = [e for e in _list if isvalid(e['name'].split('-')[0])]\n\nprint(_list)\n\nOutput:\n[{'name': 'Test01-Serial01'}]\n\n",
"You can also use regular expressions to parse the values:\nimport re\n\nmylist = [\n {'name': 'Test01' },\n {'name': 'Tests04' }\n]\n\nregex = re.compile(r'^\\w{4}\\d{2}$')\nmylist = [{k:v} for _ in mylist for k,v in _.items() if regex.match(v)]\n\n# Or, maybe this is more clear?\n# mylist = [item for item in mylist if regex.match(list(item.values())[0])]\n\nprint(mylist)\n\nThis returns:\n[{'name': 'Test01'}]\n\nThis says to look for four \"word\" characters then two digits between the start and end of the value of each object. Anything that doesn't match this pattern is filtered out. Check out the definition of \\w to make sure that you and the authors of re agree on what word chars are.\nAnd, as @cobra pointed out, using dict as a variable name (particularly for a list) is not best practice.\n"
] |
[
1,
1
] |
[] |
[] |
[
"dictionary",
"json",
"python"
] |
stackoverflow_0074460676_dictionary_json_python.txt
|
Q:
how do I insert a row under a specific cell value
I have the dataframe below and I want to insert a new row under "shop" with the following values; how do I do that?
values = 0.2, park, false
df1 =
number variable values
1 NaN bank True
2 3.0 shop False
3 0.5 market True
4 NaN government True
5 1.0 hotel true
A:
You can try:
import pandas as pd
df = pd.DataFrame({'number': [float('NaN'), 3.0, 0.5, float('NaN'), 1.0], 'variable':['bank','shop','market','government','hotel'], 'values':[True, False, True, True, True]})
print("----- ORIGINAL ------")
print(df)
shop_index = df.reset_index()['variable'].tolist().index('shop')
insert = pd.DataFrame({"number": 0.2, "variable": "park", "values": False}, index=[shop_index+1])
df2 = pd.concat([df.iloc[:shop_index+1], insert, df.iloc[shop_index+1:]]).reset_index(drop=True)
print("----- AFTER INSERT ------")
print(df2)
Output:
----- ORIGINAL ------
number variable values
0 NaN bank True
1 3.0 shop False
2 0.5 market True
3 NaN government True
4 1.0 hotel True
----- AFTER INSERT ------
number variable values
0 NaN bank True
1 3.0 shop False
2 0.2 park False
3 0.5 market True
4 NaN government True
5 1.0 hotel True
A:
Using indices, you can specify a row to modify using df.loc[].
To add a new row, assign to df.loc[-1], shift the index by one, and sort it (after sorting, the new row ends up first).
In your case:
df.loc[-1] = values
df.index = df.index + 1
df = df.sort_index()
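To insert under a specific row instead, a common float-index trick (a sketch of mine, assuming the default RangeIndex) avoids rebuilding the frame:
pos = df.index[df["variable"] == "shop"][0]  # position of the "shop" row
df.loc[pos + 0.5] = [0.2, "park", False]     # temporary in-between label
df = df.sort_index().reset_index(drop=True)  # splice the row into place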
|
how do I insert a row under a specific cell value
|
I have the dataframe below and I want to insert a new row under "shop" with the following values; how do I do that?
values = 0.2, park, false
df1 =
number variable values
1 NaN bank True
2 3.0 shop False
3 0.5 market True
4 NaN government True
5 1.0 hotel true
|
[
"You can try:\nimport pandas as pd\n\ndf = pd.DataFrame({'number': [float('NaN'), 3.0, 0.5, float('NaN'), 1.0], 'variable':['bank','shop','market','government','hotel'], 'values':[True, False, True, True, True]})\nprint(\"----- ORIGINAL ------\")\nprint(df)\nshop_index = df.reset_index()['variable'].tolist().index('shop') \ninsert = pd.DataFrame({\"number\": 0.2, \"variable\": \"park\", \"values\": False}, index=[shop_index+1])\n\ndf2 = pd.concat([df.iloc[:shop_index+1], insert, df.iloc[shop_index+1:]]).reset_index(drop=True)\nprint(\"----- AFTER INSERT ------\")\nprint(df2)\n\nOutput:\n----- ORIGINAL ------\n number variable values\n0 NaN bank True\n1 3.0 shop False\n2 0.5 market True\n3 NaN government True\n4 1.0 hotel True\n\n\n----- AFTER INSERT ------\n number variable values\n0 NaN bank True\n1 3.0 shop False\n2 0.2 park False\n3 0.5 market True\n4 NaN government True\n5 1.0 hotel True\n\n",
"Using indices, you can specify a row to modify using df.loc[]\nTo input\nTo append to the last row in the current dataframe, get the last index using df.loc[-1], add a new index and sort them.\nIn your case:\ndf.loc[-1] = values\ndf.index = df.index + 1 \ndf = df.sort_index()\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074459369_pandas_python.txt
|
Q:
Pytorch How do you implement Hadamard (element-wise) products within nn.Module, safely?
I need to write an nn.Module class with layers that feed into one another. I need to perform an element-wise product on some of the results of my layers, but (emphasis) I do not need a parametrized layer that does that. I need to place it somehow between several parametrized layers. How can I implement an element-wise product into my model without breaking the gradient or causing other problems?
Element-wise products are also called "Hadamard product". I have been unable to find a single example of an nn.Module doing this kind of product anywhere on the internet.
A:
Have you checked out the torch.mul function (https://pytorch.org/docs/stable/generated/torch.mul.html)? This will perform the Hadamard product for two inputs of equal size (for unequal sizes, broadcasting will be used).
As usual, when executing this function on tensors for which requires_grad=True, gradients will be calculated properly and stored in the tensors.
As you have not supplied enough information to reconstruct/fully understand the problem you want to solve, I have just come up with an example. Let's say you have a simple CNN in pytorch, but with the twist that (for whatever reason) you would like to mask out the output of the layer in certain regions. This could be implemented in the following way:
import torch
import torch.nn as nn

class Masking_CNN(nn.Module):
    def __init__(self, input_channels):
        """
        input_channels: int, number of channels in the input
        """
        super().__init__()
        # describe the network
        self.conv_layer = nn.Conv2d(input_channels, 4, kernel_size=3, padding=1)

    def forward(self, x, mask):
        """
        x: pytorch tensor of size (batch_size, input_channels, input_size, input_size)
        mask: pytorch tensor of size (1, 4, input_size, input_size) containing either 0 or 1
        """
        x = self.conv_layer(x)
        x = torch.mul(x, mask)  # elementwise (Hadamard) product; keeps the autograd graph intact
        return x
So that would be an example of how to integrate the function into your net. All you need is to add the torch.mul call to the forward pass and pass the tensors you want to multiply elementwise (Hadamard product) as arguments.
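To see that nothing breaks the gradient, here is a short usage sketch (the shapes are my assumptions):
import torch

net = Masking_CNN(input_channels=3)
x = torch.randn(2, 3, 8, 8, requires_grad=True)
mask = (torch.rand(1, 4, 8, 8) > 0.5).float()  # 0/1 mask, broadcast over the batch
out = net(x, mask)                             # elementwise product inside forward()
out.sum().backward()                           # x.grad is populated; no graph break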
If you need more specific help with your problem, please narrow down your question and supply more information.
|
Pytorch How do you implement Hadamard (element-wise) products within nn.Module, safely?
|
I need to write an nn.Module class with layers that feed into one another. I need to perform an element-wise product on some of the results of my layers, but (emphasis) I do not need a parametrized layer that does that. I need to place it somehow between several parametrized layers. How can I implement an element-wise product into my model without breaking the gradient or causing other problems?
Element-wise products are also called "Hadamard product". I have been unable to find a single example of an nn.Module doing this kind of product anywhere on the internet.
|
[
"Have you checked out the torch.mul function (https://pytorch.org/docs/stable/generated/torch.mul.html)? This will perform the hadamard product for two inputs of equal size (for unequal sizes broadcasting will be used).\nAs per usual, executing this function on tensors for whom requires_grad=True, gradients will be calculated properly and stored in the tensor.\nAs you have not supplied enough information to reconstruct/fully understand the problem you want to solve, I have just come up with an example. Let's say you have a simple CNN in pytorch, but with the twist that (for whatever reason) you would like to mask out the output of the layer in certain regions. This could be implemented in the following way:\nimport torch.nn as nn\nclass Masking_CNN(nn.Module):\n def __init__(self,input_channels):\n \"\"\"\n input_channels: int, number of channels in the input \n \"\"\"\n\n super().__init__() \n\n #describe the network\n self.conv_layer= nn.Conv2d(input_channels,4,kernel_size=3,padding=1)\n\n\n def forward(self,x,mask):\n \"\"\"\n x: pytorch tensor of size(batch_size,input_channels,input_size,input_size)\n mask: pytorch tensor of size(1,4,input_size,input_size) containing either 0 or 1 \n \"\"\"\n \n x= self.conv_layer(x)\n x= torch.mul(x,mask) \n return x \n\nSo that would be an example on how to integrate the function into your net. All you need is to add the torch.mul function into the forward pass and pass the tensors you want to multiply elementwise (hadamard product) as arguments.\nIn case you need more narrowed down help concerning your problem, please specify your question and supply more information on your problem.\n"
] |
[
0
] |
[] |
[] |
[
"machine_learning",
"python",
"pytorch"
] |
stackoverflow_0074243005_machine_learning_python_pytorch.txt
|
Q:
_tkinter.TclError: bad window path name when closing window
I've made a tk.Toplevel class to get a date from the user. After the user clicks the date, the window closes and the date should be returned to the main process.
When the tk.Toplevel is closed I get the date, but also an error:
_tkinter.TclError: bad window path name ".!kalender.!dateentry.!toplevel"
What did I do wrong?
class Kalender(tk.Toplevel):
    def __init__(self, parent, date=''):
        Toplevel.__init__(self, parent)
        # center the window on the screen
        x = (self.winfo_screenwidth() // 2) - (100 // 2)
        y = (self.winfo_screenheight() // 2) - (50 // 2)
        self.grab_set()
        self.geometry('{}x{}+{}+{}'.format(180, 90, x, y))
        self.attributes('-toolwindow', True)
        self.title('Datum auswählen')
        self.resizable(width=False, height=False)
        self.date = None
        self.sel = StringVar()
        self.cal = DateEntry(self, font="Arial 14", selectmode='day', locale='de_DE',
                             date_pattern="dd.mm.y ", textvariable=self.sel)
        self.cal.bind("<<DateEntrySelected>>", self.close_window)
        self.cal.set_date(date)
        self.cal.grid(row=0, column=0, padx=10, pady=10, sticky=W+E)
        self.focus_set()

    def close_window(self, e=None):  # e=None so the WM_DELETE_WINDOW protocol can call it too
        self.date = self.cal.get()
        self.destroy()

    def show(self):
        self.deiconify()
        self.wm_protocol("WM_DELETE_WINDOW", self.close_window)
        self.wait_window()
        return self.date
cal = Kalender(main_window, d).show()
I've got the following error:
Exception in Tkinter callback
Traceback (most recent call last):
File "B:\Python 310\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args)
File "B:\Python 310\lib\site-packages\tkcalendar\dateentry.py", line 301, in _select
self._top_cal.withdraw()
File "B:\Python 310\lib\tkinter\__init__.py", line 2269, in wm_withdraw
return self.tk.call('wm', 'withdraw', self._w)
_tkinter.TclError: bad window path name ".!kalender.!dateentry.!toplevel"
It seems that tkinter tries to access the Kalender's DateEntry after it has been destroyed.
A:
It is because when the user has selected a date in the pop-up calendar, the bound function self.close_window() is executed and the toplevel is destroyed (and so is the DateEntry widget). Then the DateEntry widget tries to close its pop-up calendar, which raises the exception.
To fix this, you can delay the execution of self.close_window() a bit so that it is executed after the pop-up calendar is closed using after():
self.cal.bind("<<DateEntrySelected>>", lambda e: self.after(10, self.close_window, None))
A:
I am not 100% sure about this, but it seems that the tkcalendar module has some trouble forcing a destroy on the parent widget through a bind on the DateEntry class. You can try using the withdraw command instead, which hides the window rather than destroying it:
def close_window(self, e):
    self.date = self.cal.get()
    self.withdraw()
|
_tkinter.TclError: bad window path name when closing window
|
I've made a tk.Toplevel class to get a date from the user. After the user clicks the date, the window closes and the date should be returned to the main process.
When the tk.Toplevel is closed I get the date, but also an error:
_tkinter.TclError: bad window path name ".!kalender.!dateentry.!toplevel"
What did I do wrong?
class Kalender(tk.Toplevel):
    def __init__(self, parent, date=''):
        Toplevel.__init__(self, parent)
        # center the window on the screen
        x = (self.winfo_screenwidth() // 2) - (100 // 2)
        y = (self.winfo_screenheight() // 2) - (50 // 2)
        self.grab_set()
        self.geometry('{}x{}+{}+{}'.format(180, 90, x, y))
        self.attributes('-toolwindow', True)
        self.title('Datum auswählen')
        self.resizable(width=False, height=False)
        self.date = None
        self.sel = StringVar()
        self.cal = DateEntry(self, font="Arial 14", selectmode='day', locale='de_DE',
                             date_pattern="dd.mm.y ", textvariable=self.sel)
        self.cal.bind("<<DateEntrySelected>>", self.close_window)
        self.cal.set_date(date)
        self.cal.grid(row=0, column=0, padx=10, pady=10, sticky=W+E)
        self.focus_set()

    def close_window(self, e=None):  # e=None so the WM_DELETE_WINDOW protocol can call it too
        self.date = self.cal.get()
        self.destroy()

    def show(self):
        self.deiconify()
        self.wm_protocol("WM_DELETE_WINDOW", self.close_window)
        self.wait_window()
        return self.date
cal = Kalender(main_window, d).show()
I've got the following error:
Exception in Tkinter callback
Traceback (most recent call last):
File "B:\Python 310\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args)
File "B:\Python 310\lib\site-packages\tkcalendar\dateentry.py", line 301, in _select
self._top_cal.withdraw()
File "B:\Python 310\lib\tkinter\__init__.py", line 2269, in wm_withdraw
return self.tk.call('wm', 'withdraw', self._w)
_tkinter.TclError: bad window path name ".!kalender.!dateentry.!toplevel"
It seems that tkinter tries to access the Kalender's DateEntry after it has been destroyed.
|
[
"It is because when the user has selected a date in the pop-up calendar, the bind function self.close_window() will be executed and the toplevel is destroyed (so is the DateEntry widget). Then DateEntry widget closes the pop-up calendar which raises the exception.\nTo fix this, you can delay the execution of self.close_window() a bit so that it is executed after the pop-up calendar is closed using after():\nself.cal.bind(\"<<DateEntrySelected>>\", lambda e: self.after(10, self.close_window, None))\n\n",
"I am not 100% sure about this, but it seems the that the tkcalendar module has some trouble forcing a destroy on the parent widget through a bind on the DateEntry class. You can try using the withdraw command instead, which hides the window rather than destroying it\ndef close_window(self, e):\n self.date = self.cal.get()\n self.withdraw()\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"python",
"tkinter"
] |
stackoverflow_0074459096_python_tkinter.txt
|
Q:
Context manager: Error handling inside __init__ method
A bit of context
I am working with a package that allows you to calculate several things about planets (such as their speed, or position), using information stored in files. The package includes methods to load, and unload files, so its basic usage would look like this:
load(["File_1", "File_2"])
try:
function()
finally:
unload(["File_1", "File_2"])
As this is a textbook example of the utility of a context manager, and the package lacks one, I am writing my own.
class file_manager:
    def __init__(self, file_list) -> None:
        self.file_list = file_list
        load(self.file_list)
        return None

    def __enter__(self) -> None:
        return None

    def __exit__(self, exc_type, exc_value, traceback) -> None:
        unload(self.file_list)
        return None
With the new context manager, the previous example can be rewritten as follows:
with file_manager(["File_1", "File_2"]):
    function()
and the __exit__ method guarantees that files will still be unloaded if function raises an error.
My problem
The load function loads files one by one, without first checking if all of them are available. As a result, if File_1 exists, but File_2 doesn't, File_1 will be loaded, and an exception will be raised while loading File_2. According to python documentation:
The with statement guarantees that if the __enter__() method returns without an error, then __exit__() will always be called.
Therefore, in the previous case, the execution of the program will end without File_2 being unloaded.
What am I looking for
I can obviously fix this by using a try...except clause inside the __init__() method:
def __init__(self, file_list) -> None:
    self.file_list = file_list
    try:
        load(self.file_list)
    except FileDoesNotExistError:
        self.__exit__(FileDoesNotExistError, False, None)
but I want to know if this is the proper way to solve this problem. For example, in Cython, classes have a __dealloc__() method, which is guaranteed to run, no matter what type of exception occurs.
A:
You can wrap your original code using contextlib.contextmanager.
from contextlib import contextmanager
@contextmanager
def file_manager(file_list):
    try:
        load(file_list)
        yield None  # after this the code inside the with block is executed
    finally:
        # this is called when the with block has finished
        # or when load raises an exception
        unload(file_list)
and use it like
with file_manager(["File_1", "File_2"]):
    function()
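If you prefer to keep the class form, the analogous fix (a sketch) is to move load into __enter__ — the with statement only guarantees __exit__ after a successful __enter__ — and unwind there on failure:
class file_manager:
    def __init__(self, file_list) -> None:
        self.file_list = file_list

    def __enter__(self) -> None:
        try:
            load(self.file_list)
        except FileDoesNotExistError:
            unload(self.file_list)  # unwind whatever did get loaded
            raise

    def __exit__(self, exc_type, exc_value, traceback) -> None:
        unload(self.file_list)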
|
Context manager: Error handling inside __init__ method
|
A bit of context
I am working with a package that allows you to calculate several things about planets (such as their speed, or position), using information stored in files. The package includes methods to load, and unload files, so its basic usage would look like this:
load(["File_1", "File_2"])
try:
function()
finally:
unload(["File_1", "File_2"])
As this is a textbook example of the utility of a context manager, and the package lacks one, I am writing my own.
class file_manager:
    def __init__(self, file_list) -> None:
        self.file_list = file_list
        load(self.file_list)
        return None

    def __enter__(self) -> None:
        return None

    def __exit__(self, exc_type, exc_value, traceback) -> None:
        unload(self.file_list)
        return None
With the new context manager, the previous example can be rewritten as follows:
with file_manager(["File_1", "File_2"]):
    function()
and the __exit__ method guarantees that files will still be unloaded if function raises an error.
My problem
The load function loads files one by one, without first checking if all of them are available. As a result, if File_1 exists, but File_2 doesn't, File_1 will be loaded, and an exception will be raised while loading File_2. According to python documentation:
The with statement guarantees that if the __enter__() method returns without an error, then __exit__() will always be called.
Therefore, in the previous case, the execution of the program will end without File_2 being unloaded.
What am I looking for
I can obviously fix this by using a try...except clause inside the __init__() method:
def __init__(self, file_list) -> None:
    self.file_list = file_list
    try:
        load(self.file_list)
    except FileDoesNotExistError:
        self.__exit__(FileDoesNotExistError, False, None)
but I want to know if this is the proper way to solve this problem. For example, in Cython, classes have a __dealloc__() method, which is guaranteed to run, no matter what type of exception occurs.
|
[
"You can wrap your original code using contextlib.contextmanager.\nfrom contextlib import contextmanager\n\n@contextmanager\ndef file_manager(file_list):\n try:\n load(file_list)\n yield None # after this the code inside the with block is executed \n finally:\n # this is called when the with block has finished\n # or when load raises an exception\n unload(file_list)\n\nand use it like\nwith file_manager([\"File_1\", \"File_2\"]):\n function()\n\n"
] |
[
4
] |
[] |
[] |
[
"contextmanager",
"python"
] |
stackoverflow_0074460663_contextmanager_python.txt
|
Q:
Socket Connection Refused [Errno 111]
I am trying to implement a simple FTP with sockets using C (server side) and Python (client side). When the server code is compiled and run, the user enters a port number. The client is then started with "localhost <port number>" as command-line arguments. For some reason I am getting [Errno 111] on the client side when I run the code. It says the issue is with my client.connect statement. I have tried multiple different port numbers and it throws this same error:
flip1 ~/FTPClient 54% python ftpclientNew.py localhost 2500
Traceback (most recent call last):
File "ftpclientNew.py", line 86, in <module>
main()
File "ftpclientNew.py", line 27, in main
if client.connect((serverName, portNumber)) == None:
File "<string>", line 1, in connect
socket.error: [Errno 111] Connection refused
Another weird thing is that this connection error was not happening when I ran this same code a few days ago. Has anyone experienced a problem like this? Any idea what might be causing this? Thanks!
Here is the client code:
import sys, posix, string
from socket import *
def main():
    if len(sys.argv) < 3:
        print "\nFormat: 'localhost' <port number>!\n"
        return 0

    buffer = ""
    bufferSize = 500
    serverName = "localhost"
    fileBuffer = [10000]

    if sys.argv[1] != serverName:
        print "Incorrect Server Name! \n"
        return 0

    portNumber = int(sys.argv[2])

    client = socket(AF_INET, SOCK_STREAM)
    if client < 0:
        print "Error Creating Socket!! \n"
        return 0

    if client.connect((serverName, portNumber)) == None:
        print "Client Socket Created...\n"
        print "Connecting to the server...\n"
        print "Connected!\n"
        ##clientName = raw_input("Enter a file name: ")
A:
Sometimes localhost doesn't resolve correctly on the host. Change this:
serverName = "127.0.0.1"
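Errno 111 means nothing accepted the connection on that address and port, so it is worth confirming that the server is actually listening; a quick probe (my addition):
import socket

probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(probe.connect_ex(("127.0.0.1", 2500)))  # 0 = something is listening, 111 = refused
probe.close()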
|
Socket Connection Refused [Errno 111]
|
I am trying to implement a simple FTP with sockets using C (server side) and Python (client side). When the server code is compiled and run, the user enters a port number. The client is then started with "localhost <port number>" as command-line arguments. For some reason I am getting [Errno 111] on the client side when I run the code. It says the issue is with my client.connect statement. I have tried multiple different port numbers and it throws this same error:
flip1 ~/FTPClient 54% python ftpclientNew.py localhost 2500
Traceback (most recent call last):
File "ftpclientNew.py", line 86, in <module>
main()
File "ftpclientNew.py", line 27, in main
if client.connect((serverName, portNumber)) == None:
File "<string>", line 1, in connect
socket.error: [Errno 111] Connection refused
Another weird thing is that this connection error was not happening when I ran this same code a few days ago. Has anyone experienced a problem like this? Any idea what might be causing this? Thanks!
Here is the client code:
import sys, posix, string
from socket import *
def main():
    if len(sys.argv) < 3:
        print "\nFormat: 'localhost' <port number>!\n"
        return 0

    buffer = ""
    bufferSize = 500
    serverName = "localhost"
    fileBuffer = [10000]

    if sys.argv[1] != serverName:
        print "Incorrect Server Name! \n"
        return 0

    portNumber = int(sys.argv[2])

    client = socket(AF_INET, SOCK_STREAM)
    if client < 0:
        print "Error Creating Socket!! \n"
        return 0

    if client.connect((serverName, portNumber)) == None:
        print "Client Socket Created...\n"
        print "Connecting to the server...\n"
        print "Connected!\n"
        ##clientName = raw_input("Enter a file name: ")
|
[
"Sometimes localhost isn't working on host\nChange this\nserverName = 127.0.0.1\n\n"
] |
[
0
] |
[
"Try to change the serverName variable to 127.0.0.1.\n"
] |
[
-1
] |
[
"python",
"sockets"
] |
stackoverflow_0035817295_python_sockets.txt
|
Q:
Fill oceans in high resolution to hide low resolution contours in basemap
When plotting low-resolution contours over a high-resolution coastline I get the following result
I would like to fill the area outside of the coastlines (caused by the low resolution of the underlying filled contour plot) with the ocean color at high resolution.
I tried to use the land-sea mask option without coloring the land
m.drawlsmask(land_color=(0, 0, 0, 0), ocean_color='#2081C3',
resolution='h', lakes=True, zorder=2, grid=1.25)
but the 1.25 resolution is not enough for this level of detail (see second image)
Unfortunately there is no builtin method that fills the ocean (and lakes) at the same resolution used for the coastlines ('h' in my case). As a workaround, is there any way to fill the area "outside" of the coastline using the original resolution?
I could use a high-resolution land-sea mask in drawlsmask, but that's a waste of resources since basemap already has that information indirectly in the polygons given by the coastlines.
General notes:
It looks like other questions on Stack Overflow suggest to use the builtin land sea mask of basemap. I can't because it is too low resolution at this zoom level.
Unfortunately I cannot use Cartopy. I already built my entire pipeline on Cartopy but it is way too slow for what I have to do.
A:
I ended up using the solution posted in Fill oceans in basemap, adapted to my needs. Note that, in order to retain the lakes, I had to do multiple passes of fillcontinents; this is what I did:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from matplotlib.patches import PathPatch
from mpl_toolkits.basemap import Basemap

# extents contains the projection extents as [lon1, lon2, lat1, lat2]
m = Basemap(projection='merc',
            llcrnrlat=extents[2],
            urcrnrlat=extents[3],
            llcrnrlon=extents[0],
            urcrnrlon=extents[1],
            lat_ts=20,
            resolution='h')

m.fillcontinents(color='#c5c5c5', lake_color='#acddfe', zorder=1)
# Fill the lakes again over the contour plot
m.fillcontinents(color=(0, 0, 0, 0), lake_color='#acddfe', zorder=3)

ax = plt.gca()

# Workaround to add high-resolution oceans
x0, x1 = ax.get_xlim()
y0, y1 = ax.get_ylim()
map_edges = np.array([[x0, y0], [x1, y0], [x1, y1], [x0, y1]])

# get all polygons used to draw the coastlines of the map
polys = [p.boundary for p in m.landpolygons]
polys = [map_edges] + polys[:]

codes = [
    [Path.MOVETO] + [Path.LINETO for pt in p[1:]]
    for p in polys
]

polys_lin = [v for p in polys for v in p]
codes_lin = [c for cs in codes for c in cs]
path = Path(polys_lin, codes_lin)
patch = PathPatch(path, facecolor='#acddfe', lw=0, zorder=2)

ax.add_patch(patch)

m.drawcountries(linewidth=0.6)
m.readshapefile(f'{SHAPEFILES_DIR}/ITA_adm_shp/ITA_adm2',
                'ITA_adm2', linewidth=0.1, color='gray', zorder=5)
which gives something like this
Not perfect (because the shapefile which defines the coastline has a different resolution), but definitely better than before.
|
Fill oceans in high resolution to hide low resolution contours in basemap
|
When plotting low-resolution contours over a high-resolution coastline I get the following result
I would like to fill the area outside of the coastlines (caused by the low resolution of the underlying filled contour plot) with the ocean color at high resolution.
I tried to use the land-sea mask option without coloring the land
m.drawlsmask(land_color=(0, 0, 0, 0), ocean_color='#2081C3',
resolution='h', lakes=True, zorder=2, grid=1.25)
but the 1.25 resolution is not enough for this level of detail (see second image)
Unfortunately there is no builtin method that fills the ocean (and lakes) at the same resolution used for the coastlines ('h' in my case). As a workaround, is there any way to fill the area "outside" of the coastline using the original resolution?
I could use a high-resolution land-sea mask in drawlsmask, but that's a waste of resources since basemap already has that information indirectly in the polygons given by the coastlines.
General notes:
It looks like other questions on Stack Overflow suggest to use the builtin land sea mask of basemap. I can't because it is too low resolution at this zoom level.
Unfortunately I cannot use Cartopy. I already built my entire pipeline on Cartopy but it is way too slow for what I have to do.
|
[
"I ended up using the solution posted in Fill oceans in basemap adapted to my needs. Note that, in order to retain the lakes, I had to do multiple passes of fillcontinents, so that's how I did\n# extents contain the projection extents as [lon1, lon2, lat1, lat2]\nm = Basemap(projection='merc',\n llcrnrlat=extents[2],\n urcrnrlat=extents[3],\n llcrnrlon=extents[0],\n urcrnrlon=extents[1],\n lat_ts=20,\n resolution='h')\n\nm.fillcontinents(color='#c5c5c5', lake_color='#acddfe', zorder=1)\n# Fill again the lakes over the contour plot\nm.fillcontinents(color=(0, 0, 0, 0), lake_color='#acddfe', zorder=3)\n\nax = plt.gca()\n\n# Workaround to add high resolution oceans \nx0,x1 = ax.get_xlim()\ny0,y1 = ax.get_ylim()\nmap_edges = np.array([[x0,y0],[x1,y0],[x1,y1],[x0,y1]])\n\n# getting all polygons used to draw the coastlines of the map\npolys = [p.boundary for p in m.landpolygons]\npolys = [map_edges]+polys[:]\n\ncodes = [\n [Path.MOVETO] + [Path.LINETO for p in p[1:]]\n for p in polys\n ]\n\npolys_lin = [v for p in polys for v in p] \ncodes_lin = [c for cs in codes for c in cs]\npath = Path(polys_lin, codes_lin)\npatch = PathPatch(path, facecolor='#acddfe', lw=0, zorder=2)\n\nax.add_patch(patch)\n\nm.drawcountries(linewidth=0.6)\nm.readshapefile(f'{SHAPEFILES_DIR}/ITA_adm_shp/ITA_adm2',\n 'ITA_adm2', linewidth=0.1, color='gray', zorder=5)\n\nwhich gives something like this\n\nNot perfect (because the shapefile which defines the coastline has a different resolution), but definitely better than before.\n"
] |
[
2
] |
[] |
[] |
[
"matplotlib_basemap",
"python",
"shapefile"
] |
stackoverflow_0074433797_matplotlib_basemap_python_shapefile.txt
|
Q:
binary_crossentropy vs categorical_crossentropy
I have a dataset with 10 categorical features and one output feature with classes 0 and 1. X_train is a 3D array, so I did label encoding beforehand on the dataset.
I applied categorical_crossentropy but I am getting 26% accuracy with the sigmoid activation function. When I apply binary_crossentropy, the accuracy drastically increases to 98%.
model = Sequential()
model.add(LSTM(256, input_shape=(n_timesteps,n_features),recurrent_activation='hard_sigmoid'))
model.add(Dense(16))
model.add(Dense(n_outputs, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
and dataset is divided as:
X_train: (430000, 5, 10)
y_train: (430000, 1)
I am confused if I am doing the right thing. Please suggest!!
A:
If you want to predict 10 different classes, you will need to use categorical_crossentropy, and the final output layer must have 10 units with the softmax activation function. binary_crossentropy is for binary classification, like cat vs. dog, or yes vs. no.
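To make the pairings concrete, here is a sketch of the matching output layer / loss combinations in Keras (my addition, assuming integer labels):
from tensorflow.keras.layers import Dense

# Binary target (0/1): one sigmoid unit + binary_crossentropy
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Ten exclusive classes: ten softmax units + (sparse_)categorical_crossentropy
# model.add(Dense(10, activation='softmax'))
# model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])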
|
binary_crossentropy vs categorical_crossentropy
|
I have a dataset with 10 categorical features and one output feature with classes 0 and 1. X_train is a 3D array, so I have done label encoding on the dataset beforehand.
I have applied categorical_crossentropy but I am getting 26% accuracy with the sigmoid activation function. When I apply binary_crossentropy, the accuracy drastically increases to 98%.
model = Sequential()
model.add(LSTM(256, input_shape=(n_timesteps,n_features),recurrent_activation='hard_sigmoid'))
model.add(Dense(16))
model.add(Dense(n_outputs, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
and dataset is divided as:
X_train: (430000, 5, 10)
y_train: (430000, 1)
I am confused if I am doing the right thing. Please suggest!!
|
[
"If you want to predict 10 different classes, you will need to use the categorical_crossentropy. The final output layer must have 10 units with the softmax activation function. The binary_crossentrophy is for binary classification like cat and dog, or yes or no.\n"
] |
[
0
] |
[] |
[] |
[
"cross_entropy",
"python"
] |
stackoverflow_0071275268_cross_entropy_python.txt
|
Q:
Cannot set a Categorical with another, without identical categories. Replace almost identical categories
I have the following dataframe
np.random.seed(3)
s = pd.DataFrame((np.random.choice(['Feijão','feijão'],size=[3,2])),dtype='category')
print(s[0].cat.categories)
print(s[1].cat.categories)
As you can see the dataframe is basically two similar strings with one letter in uppercase. What I am trying to do is replace the category 'feijão' with 'Feijão'
When I write the following line of code I get this error
s.loc[s[0].isin(['feijão']),1] = s.loc[s[0].isin(['feijão']),1].replace({'feijão':'Feijão'})
TypeError: Cannot set a Categorical with another, without identical categories
I was wondering what this error means, and I am also genuinely curious whether filtering the matching values and replacing them only there is the optimal way of doing this. Should I just use replace without the filter part?
A:
Use DataFrame.update:
s.update( s.loc[s[0].isin(['feijão']),1].replace({'feijão':'Feijão'}))
print (s)
0 1
0 Feijão Feijão
1 feijão Feijão
2 Feijão Feijão
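The error itself comes from assigning a Categorical whose category set differs from the target column's. A minimal alternative sketch (same toy frame as in the question; the guard is there because 'Feijão' may or may not already be a registered category of column 1) is to register the value first and then assign the plain scalar:
import numpy as np
import pandas as pd

np.random.seed(3)
s = pd.DataFrame(np.random.choice(['Feijão', 'feijão'], size=[3, 2]), dtype='category')

# make sure the target value is a known category of column 1, then assign it
# directly; scalar assignment of a known category does not raise
if 'Feijão' not in s[1].cat.categories:
    s[1] = s[1].cat.add_categories(['Feijão'])
s.loc[s[0].isin(['feijão']), 1] = 'Feijão'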
|
Cannot set a Categorical with another, without identical categories. Replace almost identical categories
|
I have the following dataframe
np.random.seed(3)
s = pd.DataFrame((np.random.choice(['Feijão','feijão'],size=[3,2])),dtype='category')
print(s[0].cat.categories)
print(s[1].cat.categories)
As you can see the dataframe is basically two similar strings with one letter in uppercase. What I am trying to do is replace the category 'feijão' with 'Feijão'
When I write the following line of code I get this error
s.loc[s[0].isin(['feijão']),1] = s.loc[s[0].isin(['feijão']),1].replace({'feijão':'Feijão'})
TypeError: Cannot set a Categorical with another, without identical categories
I was wondering what this error means, and I am also genuinely curious whether filtering the matching values and replacing them only there is the optimal way of doing this. Should I just use replace without the filter part?
|
[
"Use DataFrame.update:\ns.update( s.loc[s[0].isin(['feijão']),1].replace({'feijão':'Feijão'}))\nprint (s)\n 0 1\n0 Feijão Feijão\n1 feijão Feijão\n2 Feijão Feijão\n\n"
] |
[
1
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074461116_pandas_python.txt
|
Q:
intersection and union set between two lists based on a data frame
a = {'A' : [1,2,3,4],
'B' : [[1,4,5,6],[2,3,6],[4,5,6],[4,5,6]],
'C' : [[1,4,6],[3,5],[4,10],[10]]
}
Based on this data, built into a dataframe:
How to find the intersection and union set between column B and C? the output like that:
A B C intersect union
0 1 [1,4,5,6] [1,4,6] [1,4,6] [1,4,5,6]
1 2 [2,3,6] [3,5] [3] [2,3,5,6]
2 3 [4,5,6] [4,10] [4] [4,5,6,10]
3 4 [4,5,6] [10] [] [4,5,6,10]
A:
You can define a custom function that returns two values at a time and apply that function rowwise (building the dict into a DataFrame first):
import pandas as pd

df = pd.DataFrame(a)

def func(row):
    inters = list(set(row['B']).intersection(row['C']))
    uni = list(set(row['B']).union(row['C']))
    return inters, uni

df[['intersect', 'union']] = df.apply(func, axis=1, result_type='expand')
print(df)
A B C intersect union
0 1 [1, 4, 5, 6] [1, 4, 6] [1, 4, 6] [1, 4, 5, 6]
1 2 [2, 3, 6] [3, 5] [3] [2, 3, 5, 6]
2 3 [4, 5, 6] [4, 10] [4] [10, 4, 5, 6]
3 4 [4, 5, 6] [10] [] [10, 4, 5, 6]
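If apply feels heavy on a large frame, a plain comprehension over the two columns gives the same result (sorted only to keep the output deterministic):
df['intersect'] = [sorted(set(b) & set(c)) for b, c in zip(df['B'], df['C'])]
df['union'] = [sorted(set(b) | set(c)) for b, c in zip(df['B'], df['C'])]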
|
intersection and union set between two lists based on a data frame
|
a = {'A' : [1,2,3,4],
'B' : [[1,4,5,6],[2,3,6],[4,5,6],[4,5,6]],
'C' : [[1,4,6],[3,5],[4,10],[10]]
}
Based on this data, built into a dataframe:
How to find the intersection and union set between column B and C? the output like that:
A B C intersect union
0 1 [1,4,5,6] [1,4,6] [1,4,6] [1,4,5,6]
1 2 [2,3,6] [3,5] [3] [2,3,5,6]
2 3 [4,5,6] [4,10] [4] [4,5,6,10]
3 4 [4,5,6] [10] [] [4,5,6,10]
|
[
"You can define a custom function that returns two values at a time and apply that function rowwise.\ndef func(row):\n inters = list(set(row['B']).intersection(row['C']))\n uni = list(set(row['B']).union(row['C']))\n return inters, uni\n\na[['intersect', 'union']] = a.apply(func, axis=1, result_type='expand')\nprint(df)\n\n A B C intersect union\n0 1 [1, 4, 5, 6] [1, 4, 6] [1, 4, 6] [1, 4, 5, 6]\n1 2 [2, 3, 6] [3, 5] [3] [2, 3, 5, 6]\n2 3 [4, 5, 6] [4, 10] [4] [10, 4, 5, 6]\n3 4 [4, 5, 6] [10] [] [10, 4, 5, 6]\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"intersection",
"list",
"python"
] |
stackoverflow_0074461000_dataframe_intersection_list_python.txt
|
Q:
Faster method of copying bounding box content onto canvas with numpy
I have an image with several detections with bounding boxes that overlap. I want to be able to extract different combinations of overlapping boxes onto a blank canvas, then save it as an image.
To visualise, if there are detections like this:
I want to be able to test boxes 1+2, 1+3, 2+3 with the box not included set to white.
To do this I'm using the following code:
import numpy as np
import cv2
import time
#orig = cv2.imread("Orig.jpg")
orig = np.zeros((9536, 13480, 3), dtype=np.uint8) # dummy
print(orig.shape)
rects = [[1000,1000,1100,1100], [1100,1000,1200,1100],[1100,1100,1200,1200]]
loops = 10
starttime = time.perf_counter()
blankCanvas = np.full(orig.shape, (255,255,255), np.uint8)
for i in range(loops):
canvas = blankCanvas.copy()
for rect in rects:
x1,y1,x2,y2 = rect
canvas[y1:y2,x1:x2] = orig[y1:y2, x1:x2]
xs = np.hstack((np.array(rects)[:,0],np.array(rects)[:,2]))
ys = np.hstack((np.array(rects)[:,1],np.array(rects)[:,3]))
cv2.imwrite(str(i) + ".jpg", canvas[min(ys):max(ys), min(xs):max(xs)])
fulltime = time.perf_counter()-starttime
looptime = fulltime/loops
recttime = looptime/len(rects)
print("Time taken per loop:: ", looptime)
print("Time taken per rect:: ", recttime)
Output:::
(9536, 13480, 3)
Time taken per loop:: 0.22753073000000001
Time taken per rect:: 0.07584357666666668
Each loop takes roughly a quarter of a second; the issue is I have to do thousands of loops. I managed to speed it up by copying a blank canvas rather than reinitializing the canvas every loop, but I'm not sure how else I can optimise the process of copying sections across to the canvas.
The majority of time in a loop is spent on this section:
canvas = blankCanvas.copy()
for rect in rects:
x1,y1,x2,y2 = rect
canvas[y1:y2,x1:x2] = orig[y1:y2, x1:x2]
Writing to disk doesn't seem to make any difference to timing.
Thanks in advance.
A:
Loop time is dominated by the np.full (~500 ms) and the .copy() (100 ms).
The actual calculations cost four orders of magnitude less time.
You introduced the .copy() operation purely for the time measurement, so your measurement method disturbed the thing you tried to measure.
You also included constant setup cost (the np.full) when you only tried to determine the cost of the actual calculation.
Please use a profiler: https://docs.python.org/3/library/profile.html
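A sketch of that separation in practice, timing only the rectangle copies (array sizes taken from the question):
import time
import numpy as np

orig = np.zeros((9536, 13480, 3), dtype=np.uint8)
canvas = np.full(orig.shape, 255, np.uint8)  # one-off setup, excluded from the timing
rects = [[1000, 1000, 1100, 1100], [1100, 1000, 1200, 1100], [1100, 1100, 1200, 1200]]

t0 = time.perf_counter()
for x1, y1, x2, y2 in rects:
    canvas[y1:y2, x1:x2] = orig[y1:y2, x1:x2]
print("copy-only time:", time.perf_counter() - t0)  # microseconds, not a quarter second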
|
Faster method of copying bounding box content onto canvas with numpy
|
I have an image with several detections with bounding boxes that overlap. I want to be able to extract different combinations of overlapping boxes onto a blank canvas, then save it as an image.
To visualise, if there are detections like this:
I want to be able to test boxes 1+2, 1+3, 2+3 with the box not included set to white.
To do this I'm using the following code:
import numpy as np
import cv2
import time
#orig = cv2.imread("Orig.jpg")
orig = np.zeros((9536, 13480, 3), dtype=np.uint8) # dummy
print(orig.shape)
rects = [[1000,1000,1100,1100], [1100,1000,1200,1100],[1100,1100,1200,1200]]
loops = 10
starttime = time.perf_counter()
blankCanvas = np.full(orig.shape, (255,255,255), np.uint8)
for i in range(loops):
canvas = blankCanvas.copy()
for rect in rects:
x1,y1,x2,y2 = rect
canvas[y1:y2,x1:x2] = orig[y1:y2, x1:x2]
xs = np.hstack((np.array(rects)[:,0],np.array(rects)[:,2]))
ys = np.hstack((np.array(rects)[:,1],np.array(rects)[:,3]))
cv2.imwrite(str(i) + ".jpg", canvas[min(ys):max(ys), min(xs):max(xs)])
fulltime = time.perf_counter()-starttime
looptime = fulltime/loops
recttime = looptime/len(rects)
print("Time taken per loop:: ", looptime)
print("Time taken per rect:: ", recttime)
Output:::
(9536, 13480, 3)
Time taken per loop:: 0.22753073000000001
Time taken per rect:: 0.07584357666666668
Each loop takes roughly a quarter of a second; the issue is I have to do thousands of loops. I managed to speed it up by copying a blank canvas rather than reinitializing the canvas every loop, but I'm not sure how else I can optimise the process of copying sections across to the canvas.
The majority of time in a loop is spent on this section:
canvas = blankCanvas.copy()
for rect in rects:
x1,y1,x2,y2 = rect
canvas[y1:y2,x1:x2] = orig[y1:y2, x1:x2]
Writing to disk doesn't seem to make any difference to timing.
Thanks in advance.
|
[
"Loop time is dominated by the np.full (~500 ms) and the .copy() (100 ms).\nThe actual calculations cost four orders of magnitude less time.\nYou introduced the .copy() operation purely for the time measurement, so your measurement method disturbed the thing you tried to measure.\nYou also included constant setup cost (the np.full) when you only tried to determine the cost of the actual calculation.\nPlease use a profiler: https://docs.python.org/3/library/profile.html\n"
] |
[
2
] |
[] |
[] |
[
"numpy",
"optimization",
"profiling",
"python"
] |
stackoverflow_0074459739_numpy_optimization_profiling_python.txt
|
Q:
Regex to find multiline comments in Python that contain a certain word
How can I define a regex to find multiline comments in Python that contain the word "xyz"?
Example for a string that should match:
"""
blah blah
blah
xyz
blah blah
"""
I tried this regex:
"""((.|\n)(?!"""))*?xyz(.|\n)*?"""
(grep -i -Pz '"""((.|\n)(?!"""))*?xyz(.|\n)*?"""')
but it was not good enough. For example, for this input
"""
blah blah blah
blah
"""
# xyz
def foo(self):
"""
blah
"""
it matched this string:
"""
# xyz
def foo(self):
"""
The expected behavior in this case is to not match anything, since "xyz" is not inside a comment block.
I wanted it to only find "xyz" within opening quotes and closing quotes, but the string it matches is not inside a quotes block. It matches a string that starts with a quote, has "xyz" in it and ends with a quote, but the matched string is NOT inside a python comment block.
Any idea how to get the required behavior from this regex?
A:
The main challenge is keeping the """ ... """ balance of inside and outside a comment.
Here an idea with PCRE (e.g. PyPI regex with Python) or grep -Pz (like in your example).
(?ims)^"""(?:(?:[^"]|"(?!""))*?(xyz))?.*?^"""(?(1)|(*SKIP)(*F))
See this demo at regex101 (used with i ignorecase, m multiline and s dotall flags)
This works because the search string is matched optionally, to prevent backtracking into another match and losing the overall balance. The simplest pattern for keeping the balance would be """.*?""". But as soon as you want to match some substring inside, the regex engine will try to succeed.
To get around this, the search string is matched optionally, keeping the balance by preventing backtracking. Simplified example: """([^"]*?xyz)?.*?""" VS the unwanted """([^"]*?xyz).*?""".
Now, to still let matches without the search string fail, I used a conditional afterwards together with the PCRE verbs (*SKIP)(*F). If the first group fails (no search string inside), the match just gets skipped.
For usage with grep here is a demo at tio.run, or alternatively: pcregrep -M '(?is)pattern'
As mentioned above in Python this pattern requires PyPI regex, see a Python demo at tio.run.
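A self-contained check of the pattern (this needs the PyPI regex package, since the stdlib re module does not support the (*SKIP)(*F) verbs):
import regex

pattern = r'(?ims)^"""(?:(?:[^"]|"(?!""))*?(xyz))?.*?^"""(?(1)|(*SKIP)(*F))'
text = '"""\nblah\nxyz\n"""\n# xyz\ndef foo(self):\n    """\n    blah\n    """\n'
for m in regex.finditer(pattern, text):
    print(repr(m.group(0)))  # only the docstring that really contains xyz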
|
Regex to find multiline comments in Python that contain a certain word
|
How can I define a regex to find multiline comments in Python that contain the word "xyz"?
Example for a string that should match:
"""
blah blah
blah
xyz
blah blah
"""
I tried this regex:
"""((.|\n)(?!"""))*?xyz(.|\n)*?"""
(grep -i -Pz '"""((.|\n)(?!"""))*?xyz(.|\n)*?"""')
but it was not good enough. For example, for this input
"""
blah blah blah
blah
"""
# xyz
def foo(self):
"""
blah
"""
it matched this string:
"""
# xyz
def foo(self):
"""
The expected behavior in this case is to not match anything, since "xyz" is not inside a comment block.
I wanted it to only find "xyz" within opening quotes and closing quotes, but the string it matches is not inside a quotes block. It matches a string that starts with a quote, has "xyz" in it and ends with a quote, but the matched string is NOT inside a python comment block.
Any idea how to get the required behavior from this regex?
|
[
"The main challenge is keeping the \"\"\" ... \"\"\" balance of inside and outside a comment.\nHere an idea with PCRE (e.g. PyPI regex with Python) or grep -Pz (like in your example).\n(?ims)^\"\"\"(?:(?:[^\"]|\"(?!\"\"))*?(xyz))?.*?^\"\"\"(?(1)|(*SKIP)(*F))\n\nSee this demo at regex101 (used with i ignorecase, m multiline and s dotall flags)\nThis works because the searchstring is matched optional to prevent backtracking into another match and loosing overall balance. The most simple pattern for keeping the balance would be \"\"\".*?\"\"\". But as soon as you want to match some substring inside, the regex engine will try to succeed.\nTo get around this, the searchstring can be matched optionally for keeping balance by preventing backtracking. Simplified example: \"\"\"([^\"]*?xyz)?.*?\"\"\" VS not wanted \"\"\"([^\"]*?xyz).*?\"\"\".\nNow to still let the matches without searchstring fail, I used a conditional afterwards together with PCRE verbs (*SKIP)(*F). If the first group fails (no searchstring inside) the match just gets skipped.\n\nFor usage with grep here is a demo at tio.run, or alternatively: pcregrep -M '(?is)pattern'\nAs mentioned above in Python this pattern requires PyPI regex, see a Python demo at tio.run.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0074459864_python_regex.txt
|
Q:
Visual C++ redist not detected by command line
When trying to install discord.py, I keep getting this error:
error: Microsoft Visual C++ 14.0 or greater is required. Get it with
"Microsoft C++ Build Tools":
https://visualstudio.microsoft.com/visual-cpp-build-tools/
Even though I have Visual C++, I installed the things in build tools, and I added C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC to the Path.
Why does Visual C++ still not work?
A:
I previously had this issue, to solve it you need to head to the link provided. Afterwards, open up the file (should be called vs_BuildTools) that was given to you. You should see a menu appear with multiple options (see image).
The solution is to check Desktop development with C++ and click Install (bottom right).
Afterwards, make sure no errors happen and then restart your computer.
Once that's done, you should be set and the issue should be resolved.
If you've already installed this, try uninstalling and reinstalling (what I had to do since I already had it installed).
|
Visual C++ redist not detected by command line
|
When trying to install discord.py, I keep getting this error:
error: Microsoft Visual C++ 14.0 or greater is required. Get it with
"Microsoft C++ Build Tools":
https://visualstudio.microsoft.com/visual-cpp-build-tools/
Even though I have Visual C++, I installed the things in build tools, and I added C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC to the Path.
Why does Visual C++ still not work?
|
[
"I previously had this issue, to solve it you need to head to the link provided. Afterwards, open up the file (should be called vs_BuildTools) that was given to you. You should see a menu appear with multiple options (see image).\nThe solution is to click the Desktop development with C++ as checked and install (bottom right).\n\nAfterwards, make sure no errors happens and then restart your computer.\nOnce that's done, you should be set and the issue should be resolved.\nIf you've already installed this, try uninstalling and reinstalling (what I had to do since I already had it installed).\n"
] |
[
0
] |
[] |
[] |
[
"discord.py",
"python",
"visual_c++",
"windows"
] |
stackoverflow_0074454466_discord.py_python_visual_c++_windows.txt
|
Q:
Telethon Telegram workers are too busy to respond immediately (caused by SendMultiMediaRequest)
warning
I'm getting this warning every time my reposter sends a post from one channel to another and the post contains more than 8 media files. If it has more than 8, it will divide my post: the first post gets 8 media files and the second one the remaining 1-2 media files without the text (it is left in the first part).
sending message
How can I fix it? I want to get rid of the warning and of posts from the other channel being divided into separate ones when they have more than 8 media files.
I tried everything.
A:
As the error states, "Telegram is having internal issues". This means the issue is outside of your control and you can't really "fix" it.
The library automatically retries a few times by default. You can turn this off if you want to handle the error yourself. You can also check logging's documentation to learn how to filter out messages if all you want is to "turn off" the warning from being printed.
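For the logging route, a minimal sketch with the stdlib logging module (this assumes Telethon's loggers live under the 'telethon' namespace; check your log output if the name differs):
import logging

# raise the threshold for everything under the "telethon" logger hierarchy so
# these warnings are no longer printed (errors still are)
logging.getLogger('telethon').setLevel(logging.ERROR)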
|
Telethon Telegram workers are too busy to respond immediately (caused by SendMultiMediaRequest)
|
warning
I'm getting this warning every time my reposter sends a post from one channel to another and the post contains more than 8 media files. If it has more than 8, it will divide my post: the first post gets 8 media files and the second one the remaining 1-2 media files without the text (it is left in the first part).
sending message
How can I fix it? I want to get rid of the warning and of posts from the other channel being divided into separate ones when they have more than 8 media files.
I tried everything.
|
[
"As the error states, \"Telegram is having internal issues\". This means the issue is outside of your control and can't really \"fix\" it.\nThe library automatically retries a few times by default. You can turn this off if you want to handle the error yourself. You can also check logging's documentation to learn how to filter out messages if all you want is to \"turn off\" the warning from being printed.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"telethon"
] |
stackoverflow_0074457741_python_telethon.txt
|
Q:
Get tags of a commit
Given an object of GitPython Commit, how can I get the tags related to this commit?
I'd enjoy having something like:
next(repo.iter_commits()).tags
A:
The problem is that tags point to commits, not the other way around. To get this information would require a linear scan of all tags to find out which ones point to the given commit. You could probably write something yourself that would do it. The following would get you a commit-to-tags dictionary:
tagmap = {}
for t in repo.tags:
    tagmap.setdefault(t.commit, []).append(t)
And for a given commit, you can get any tags associated with it from:
tags = tagmap[repo.commit(commit_id)]
A:
I believe you can use something like:
git tag --points-at <commit>
Should be quite easy to run from GitPython
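A sketch of that via GitPython's generic command wrapper (the repository path and the "latest commit" lookup are just for illustration):
import git

repo = git.Repo('.')
commit = next(repo.iter_commits())
tag_names = repo.git.tag('--points-at', commit.hexsha).splitlines()
print(tag_names)  # e.g. ['v1.2.0'], or [] if no tag points here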
|
Get tags of a commit
|
Given an object of GitPython Commit, how can I get the tags related to this commit?
I'd enjoy having something like:
next(repo.iter_commits()).tags
|
[
"The problem is that tags point to commits, not the other way around. To get this information would require a linear scan of all tags to find out which ones point to the given commit. You could probably write something yourself that would do it. The following would get you a commit-to-tags dictionary:\ntagmap = {}\nfor t in repo.tags():\n tagmap.setdefault(r.commit(t), []).append(t)\n\nAnd for a given commit, you can get any tags associated with it from:\ntags = tagmap[repo.commit(commit_id)]\n\n",
"I believe you can use something like:\ngit tag --points-at \nShould be quite easy to run from GitPython\n"
] |
[
7,
0
] |
[] |
[] |
[
"commit",
"git",
"git_tag",
"gitpython",
"python"
] |
stackoverflow_0034932306_commit_git_git_tag_gitpython_python.txt
|
Q:
Training a RNN/LSTM model got KeyError equal to the val of the length
Trying to train this model
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)
length = 60
n_features = X_train_s.shape[1]
batch_size = 1
early_stop = EarlyStopping(monitor = 'val_accuracy', mode = 'max', verbose = 1, patience = 5)
generator = TimeseriesGenerator(data = X_train_s,
targets = Y_train[['TARGET_KEEP_LONG',
'TARGET_KEEP_SHORT',
'TARGET_STAY_FLAT']],
length = length,
batch_size = batch_size)
RNN_model = Sequential()
RNN_model.add(LSTM(180, activation = 'relu', input_shape = (length, n_features)))
RNN_model.add(Dense(3))
RNN_model.compile(optimizer = 'adam', loss = 'binary_crossentropy')
validation_generator = TimeseriesGenerator(data = X_test_s,
targets = Y_test[['TARGET_KEEP_LONG',
'TARGET_KEEP_SHORT',
'TARGET_STAY_FLAT']],
length = length,
batch_size = batch_size)
RNN_model.fit(generator,
epochs=20,
validation_data = validation_generator,
callbacks = [early_stop])
I get the error "KeyError: 60" where actually 60 is the value of the variable "length" (if I change it, the error changes accordingly).
The shapes of the training dataset are
X_test_s.shape
(114125, 89)
same for X_train_s.shape as well as n_features == 89.
A:
It was exhausting to find the cause due to the poor and misleading error message. Anyway, the trouble was the form of the target data set: TimeseriesGenerator does not accept pandas DataFrames, just NumPy arrays. Therefore this
generator = TimeseriesGenerator(data = X_train_s,
targets = Y_train[['TARGET_KEEP_LONG', 'TARGET_KEEP_SHORT', 'TARGET_STAY_FLAT']], length = length, batch_size = batch_size)
shall have been written as
generator = TimeseriesGenerator(X_train_s, pd.DataFrame.to_numpy(Y_train[['TARGET_KEEP_LONG', 'TARGET_KEEP_SHORT', 'TARGET_STAY_FLAT']]), length=length, batch_size=batch_size)
in the case of just one target, it was enough to write
generator = TimeseriesGenerator(data = X_train_s, targets = Y_train['TARGET_KEEP_LONG'], length = length, batch_size = batch_size)
just one level of square brackets, not two.
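A quick way to catch this class of problem early: convert explicitly and assert the shapes line up before building the generator (same variables as above):
import numpy as np

targets = Y_train[['TARGET_KEEP_LONG', 'TARGET_KEEP_SHORT', 'TARGET_STAY_FLAT']].to_numpy()
assert isinstance(targets, np.ndarray) and targets.shape[0] == X_train_s.shape[0]
generator = TimeseriesGenerator(X_train_s, targets, length=length, batch_size=batch_size)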
|
Training a RNN/LSTM model got KeyError equal to the val of the length
|
Trying to train this model
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)
length = 60
n_features = X_train_s.shape[1]
batch_size = 1
early_stop = EarlyStopping(monitor = 'val_accuracy', mode = 'max', verbose = 1, patience = 5)
generator = TimeseriesGenerator(data = X_train_s,
targets = Y_train[['TARGET_KEEP_LONG',
'TARGET_KEEP_SHORT',
'TARGET_STAY_FLAT']],
length = length,
batch_size = batch_size)
RNN_model = Sequential()
RNN_model.add(LSTM(180, activation = 'relu', input_shape = (length, n_features)))
RNN_model.add(Dense(3))
RNN_model.compile(optimizer = 'adam', loss = 'binary_crossentropy')
validation_generator = TimeseriesGenerator(data = X_test_s,
targets = Y_test[['TARGET_KEEP_LONG',
'TARGET_KEEP_SHORT',
'TARGET_STAY_FLAT']],
length = length,
batch_size = batch_size)
RNN_model.fit(generator,
epochs=20,
validation_data = validation_generator,
callbacks = [early_stop])
I get the error "KeyError: 60" where actually 60 is the value of the variable "length" (if I change it, the error changes accordingly).
The shapes of the training dataset are
X_test_s.shape
(114125, 89)
same for X_train_s.shape as well as n_features == 89.
|
[
"It was exhausting to find the cause due to the poor and misleading error message. Anyway, the trouble was on the target data set form, the TimeseriesGenerator does not accept panda dataframes, just np.arrays. Therefore this\n generator = TimeseriesGenerator(data = X_train_s, \n targets = Y_train[['TARGET_KEEP_LONG', 'TARGET_KEEP_SHORT', 'TARGET_STAY_FLAT']], length = length, batch_size = batch_size)\n\nshall have been written as\ngenerator = TimeseriesGenerator(X_train_s, pd.DataFrame.to_numpy(Y_train[['TARGET_KEEP_LONG', 'TARGET_KEEP_SHORT', 'TARGET_STAY_FLAT']]), length=length, batch_size=batch_size)\n\nin the case of just one target, it was enough\n generator = TimeseriesGenerator(data = X_train_s, targets = Y_train['TARGET_KEEP_LONG'], length = length, batch_size = batch_size) \n\njust one level of squared brackets, not two.\n"
] |
[
0
] |
[] |
[] |
[
"lstm",
"pandas",
"python",
"recurrent_neural_network"
] |
stackoverflow_0074432853_lstm_pandas_python_recurrent_neural_network.txt
|
Q:
Python | How do I swap two unknown words in an unknown string?
I cannot find how to swap two words in a string using Python, without using any external/imported functions.
What I have is a string that I get from a text document.
For example the string is:
line = "Welcome to your personal dashboard, where you can find an introduction to how GitHub works, tools to help you build software, and help merging your first lines of code."
I find the longest and the shortest words from a list that contains all the words from the line string, without punctuation.
longest = "introduction"
shortest = "to"
What I need to do is to swap the longest and the shortest words, while keeping the punctuation intact.
I tried using replace, but I can only get it to replace one word with the other; the second word remains the same.
I don't know exactly what to use or how.
The string needs to end up from:
"Welcome to your personal dashboard, where you can find an introduction to how GitHub works, tools to help you build software, and help merging your first lines of code."
When swapped:
"Welcome to your personal dashboard, where you can find an to to how GitHub works, tools introduction help you build software, and help merging your first lines of code."
Tried replacing it with:
newline = newline.replace(shortest, longest)
But it will only replace 1 word as mentioned before.
A:
You're going to need to split the text into each word and find the min/max words by their size. Afterwards, iterate through the split words and check if it's equal to either the min/max word. If it is, then you need to replace it with the proper word.
import string #to check for punctuation
line = "Welcome to your personal dashboard, where you can find an introduction to how GitHub works, tools to help you build software, and help merging your first lines of code."
words = line.split() #this includes punctuation attached to words as well
shortest = min(words, key = len) #find the length of the words that is the smallest
longest = max(words, key = len) #opposite of above
for i, word in enumerate(words): #iterate with both the index and word in the list
if word == shortest:
if word[-1] in string.punctuation: #check if the punctuation is at the end since we want to keep it
words[i] = longest + word[-1] #this keeps the punctuation
else:
words[i] = longest
elif word == longest:
if word[-1] in string.punctuation:
words[i] = shortest + word[-1]
else:
words[i] = shortest
line = ' '.join(words) #make a new line that has the replaced words
A:
If this is a homework or an assignment (as in performance does not matter) then my advice to you is to replace word 1 with {word1} and word 2 with {word2} and then do string format. The solution becomes:
line = line.replace(longest, "{word1}").replace(shortest, "{word2}")
line = line.format(word1=shortest,word2=longest)
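One caveat with the replace-based swap: str.replace matches raw substrings, so a short word like "to" also hits the inside of longer words. A quick check on part of the question's sentence:
line = "tools to help"
print(line.replace("to", "{word2}"))  # '{word2}ols {word2} help' -- "tools" gets mangled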
|
Python | How do I swap two unknown words in an unknown string?
|
I cannot find how to swap two words in a string using Python, without using any external/imported functions.
What I have is a string that I get from a text document.
For example the string is:
line = "Welcome to your personal dashboard, where you can find an introduction to how GitHub works, tools to help you build software, and help merging your first lines of code."
I find the longest and the shortest words from a list that contains all the words from the line string, without punctuation.
longest = "introduction"
shortest = "to"
What I need to do is to swap the longest and the shortest words, while keeping the punctuation intact.
I tried using replace, but I can only get it to replace one word with the other; the second word remains the same.
I don't know exactly what to use or how.
The string needs to end up from:
"Welcome to your personal dashboard, where you can find an introduction to how GitHub works, tools to help you build software, and help merging your first lines of code."
When swapped:
"Welcome to your personal dashboard, where you can find an to to how GitHub works, tools introduction help you build software, and help merging your first lines of code."
Tried replacing it with:
newline = newline.replace(shortest, longest)
But it will only replace 1 word as mentioned before.
|
[
"You're going to need to split the text into each word and find the min/max words by their size. Afterwards, iterate through the split words and check if it's equal to either the min/max word. If it is, then you need to replace it with the proper word.\nimport string #to check for punctuation\n\nline = \"Welcome to your personal dashboard, where you can find an introduction to how GitHub works, tools to help you build software, and help merging your first lines of code.\"\n\nwords = line.split() #this includes punctuation attached to words as well\n\nshortest = min(words, key = len) #find the length of the words that is the smallest\n\nlongest = max(words, key = len) #opposite of above\n\nfor i, word in enumerate(words): #iterate with both the index and word in the list\n if word == shortest:\n if word[-1] in string.punctuation: #check if the punctuation is at the end since we want to keep it\n words[i] = longest + word[-1] #this keeps the punctuation\n else:\n words[i] = longest\n elif word == longest:\n if word[-1] in string.punctuation:\n words[i] = shortest + word[-1]\n else:\n words[i] = shortest\n\nline = ' '.join(words) #make a new line that has the replaced words\n\n\n",
"If this is a homework or an assignment (as in performance does not matter) then my advice to you is to replace word 1 with {word1} and word 2 with {word2} and then do string format. The solution becomes:\nline = line.replace(longest, \"{word1}\").replace(shortest, \"{word2}\")\nline = line.format(word1=shortest,word2=longest)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"python",
"replace",
"string",
"swap"
] |
stackoverflow_0074460956_python_replace_string_swap.txt
|
Q:
how to convert array to float insind a list
Hi, I'm new to Python and I was working on a mini project, but I have a problem with some of my list output: I get an array instead of a float in my list. I tried to convert it using astype(float), but nothing changed. Here is my code:
import numpy as np
import scipy.stats as st
import numpy.random as rd
from IPython.display import Markdown, display
import pandas as pd
service=['essence','gasoil','lavage','vidange'] #list of services
prob=[0.5, 0.25, 0.16, 0.09] #probability of the services (50%,25%,16%,9%)
λ = 2 #average cars per minute
n=1440 #simulation for 1440 minutes
model=st.poisson(λ)
#essence and gasoil simulation
e ,g , σ = 2.12,1.99, 0.1 #'e'/'g' is the average time of pumping essence and gasoil / 'σ' is the gap
m=1
model_e=st.norm(e, σ)
model_g=st.norm(g, σ)
#code
J=30 #simulation for 1 month
d=[] # a list for conversion
#the list of different services
essence=[]
gasoil=[]
vidange=[]
lavage=[]
for j in range(J):
c=0 #the number of cars in a day
simulation=model.rvs(n)
for i in range(n):
        c +=simulation[i] #calculating the number of cars in each day
    commandes=rd.choice(service, c, p=prob)
    d= commandes.tolist() #commandes is an ndarray; I need to transform it to a list in order to use "count"
    simulation_e=model_e.rvs(m)
    simulation_e=simulation_e.astype(float) # tried astype(float) to convert the array, but it didn't work
simulation_e=simulation_e*d.count('essence')
essence.append(simulation_e)
gasoil.append(d.count('gasoil')*model_g.rvs(m))
vidange.append(d.count('vidange'))
lavage.append(d.count('lavage'))
print(essence)
print("\n")
print(gasoil)
print("\n")
print(vidange)
This is the result I get when running my code; the array wrapper is always there in my list:
[array([3049.02567421]), array([3115.46971158]), array([3057.74798456]), array([3169.46760693]), array([2993.79610725]), array([3075.71865925]), array([3204.53370577]), array([3129.65493394]), array([2975.22631282]), array([2945.63018474]), array([2843.09430445]), array([3314.12357151]), array([2796.23558937]), array([3123.59352839]), array([2983.00360539]), array([2883.79955281]), array([3056.7536885]), array([2556.95916304]), array([3050.10908716]), array([3226.86445445]), array([3282.64925171]), array([2922.09414665]), array([3127.7556254]), array([2901.03020042]), array([3186.59201801]), array([3100.92830043]), array([2920.10972545]), array([3279.21261218]), array([3189.59323404]), array([3120.11085555])]
[array([1339.12703414]), array([1354.32216511]), array([1467.59247591]), array([1331.76259858]), array([1397.63279282]), array([1452.76958164]), array([1444.76437058]), array([1301.58913082]), array([1361.35320908]), array([1467.51667652]), array([1572.44383252]), array([1252.75929698]), array([1434.85771546]), array([1487.10124071]), array([1334.69536144]), array([1499.65478204]), array([1513.6470695]), array([1531.20406829]), array([1402.24883398]), array([1464.77013383]), array([1566.69506967]), array([1341.01313426]), array([1364.15290992]), array([1477.4313931]), array([1564.68352222]), array([1622.30631325]), array([1340.63426348]), array([1423.24463625]), array([1577.83964284]), array([1533.9487886])]
[268, 247, 260, 246, 276, 263, 284, 270, 265, 280, 279, 244, 281, 247, 283, 257, 248, 241, 259, 240, 250, 237, 273, 273, 288, 256, 272, 254, 259, 279]
A:
If I understand correctly, you want to perform an operation with each item of a standard Python list. For that you can either use list comprehension (eager evaluation) or map function (lazy evaluation).
# input list
l = [1.1, 2.2, 3.3]
# list comprehension, evaluated immediately (eager)
l1 = [round(i) for i in l]
# map, evaluated on demand (lazy)
l2 = map(round, l)
# you can turn this eager by casting to list
l2 = list(map(round, l))
Replace the round() function by whatever your function is.
A:
I think you're referring to the rvs(1) method returning a NumPy array containing a single value, e.g.:
import scipy.stats as st
print([st.norm(2, 0.1).rvs(1)])
outputs [array([2.12212313])] because you've got a NumPy vector inside your Python list. There are a few ways of fixing this. You could just not pass 1 to the method:
print([st.norm(2, 0.1).rvs()])
which outputs [1.876287479148669], i.e. no nested vector. Or you could use destructuring assignment:
[sim_val] = st.norm(2, 0.1).rvs(1)
print([sim_val])
or you could explicitly index the vector:
print(st.norm(2, 0.1).rvs(1)[0])
but I'd suggest doing the first variant.
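Applied to the loop in the question (same variables), dropping the size argument means no astype or unwrapping is needed at all:
simulation_e = model_e.rvs() * d.count('essence')
essence.append(simulation_e)
gasoil.append(d.count('gasoil') * model_g.rvs())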
|
how to convert array to float insind a list
|
Hi, I'm new to Python and I was working on a mini project, but I have a problem with some of my list output: I get an array instead of a float in my list. I tried to convert it using astype(float), but nothing changed. Here is my code:
import numpy as np
import scipy.stats as st
import numpy.random as rd
from IPython.display import Markdown, display
import pandas as pd
service=['essence','gasoil','lavage','vidange'] #list of services
prob=[0.5, 0.25, 0.16, 0.09] #probability of the services (50%,25%,16%,9%)
λ = 2 #average cars per minute
n=1440 #simulation for 1440 minutes
model=st.poisson(λ)
#essence and gasoil simulation
e ,g , σ = 2.12,1.99, 0.1 #'e'/'g' is the average time of pumping essence and gasoil / 'σ' is the gap
m=1
model_e=st.norm(e, σ)
model_g=st.norm(g, σ)
#code
J=30 #simulation for 1 month
d=[] # a list for conversion
#the list of different services
essence=[]
gasoil=[]
vidange=[]
lavage=[]
for j in range(J):
c=0 #the number of cars in a day
simulation=model.rvs(n)
for i in range(n):
        c +=simulation[i] #calculating the number of cars in each day
    commandes=rd.choice(service, c, p=prob)
    d= commandes.tolist() #commandes is an ndarray; I need to transform it to a list in order to use "count"
    simulation_e=model_e.rvs(m)
    simulation_e=simulation_e.astype(float) # tried astype(float) to convert the array, but it didn't work
simulation_e=simulation_e*d.count('essence')
essence.append(simulation_e)
gasoil.append(d.count('gasoil')*model_g.rvs(m))
vidange.append(d.count('vidange'))
lavage.append(d.count('lavage'))
print(essence)
print("\n")
print(gasoil)
print("\n")
print(vidange)
This is the result I get when running my code; the array wrapper is always there in my list:
[array([3049.02567421]), array([3115.46971158]), array([3057.74798456]), array([3169.46760693]), array([2993.79610725]), array([3075.71865925]), array([3204.53370577]), array([3129.65493394]), array([2975.22631282]), array([2945.63018474]), array([2843.09430445]), array([3314.12357151]), array([2796.23558937]), array([3123.59352839]), array([2983.00360539]), array([2883.79955281]), array([3056.7536885]), array([2556.95916304]), array([3050.10908716]), array([3226.86445445]), array([3282.64925171]), array([2922.09414665]), array([3127.7556254]), array([2901.03020042]), array([3186.59201801]), array([3100.92830043]), array([2920.10972545]), array([3279.21261218]), array([3189.59323404]), array([3120.11085555])]
[array([1339.12703414]), array([1354.32216511]), array([1467.59247591]), array([1331.76259858]), array([1397.63279282]), array([1452.76958164]), array([1444.76437058]), array([1301.58913082]), array([1361.35320908]), array([1467.51667652]), array([1572.44383252]), array([1252.75929698]), array([1434.85771546]), array([1487.10124071]), array([1334.69536144]), array([1499.65478204]), array([1513.6470695]), array([1531.20406829]), array([1402.24883398]), array([1464.77013383]), array([1566.69506967]), array([1341.01313426]), array([1364.15290992]), array([1477.4313931]), array([1564.68352222]), array([1622.30631325]), array([1340.63426348]), array([1423.24463625]), array([1577.83964284]), array([1533.9487886])]
[268, 247, 260, 246, 276, 263, 284, 270, 265, 280, 279, 244, 281, 247, 283, 257, 248, 241, 259, 240, 250, 237, 273, 273, 288, 256, 272, 254, 259, 279]
|
[
"If I understand correctly, you want to perform an operation with each item of a standard Python list. For that you can either use list comprehension (eager evaluation) or map function (lazy evaluation).\n# input list\nl = [1.1, 2.2, 3.3]\n\n# list comprehension, evaluated immediately (eager)\nl1 = [round(i) for i in l]\n\n# map, evaluated on demand (lazy)\nl2 = map(round, l)\n# you can turn this eager by casting to list\nl2 = list(map(round, l))\n\nReplace the round() function by whatever your function is.\n",
"I think you're referring to the rvs(1) method returning an Numpy array containing a single value, e.g.:\nimport scipy.stats as st \n\nprint([st.norm(2, 0.1).rvs(1)])\n\noutputs [array([2.12212313])] because you've got a Numpy vector inside your Python list. There are a few ways of fixing this, you could either just not pass 1 to the method:\nprint([st.norm(2, 0.1).rvs()])\n\nwhich outputs [1.876287479148669], i.e. no nested vector. Or your could use destructuring assignment:\n[sim_val] = st.norm(2, 0.1).rvs(1)\nprint([sim_val])\n\nor your could explicitly index the vector:\nprint(st.norm(2, 0.1).rvs(1)[0])\n\nbut I'd suggest doing the first variant.\n"
] |
[
0,
0
] |
[] |
[] |
[
"arrays",
"list",
"numpy",
"python",
"random"
] |
stackoverflow_0074459338_arrays_list_numpy_python_random.txt
|
Q:
Running FastAPI in docker with uvicorn and gunicorn nginx
I am trying to build a FastAPI application with an Ubuntu 22.04 Docker image, Gunicorn, Uvicorn, and nginx as the web server. The Gunicorn and Uvicorn services are started using supervisord.
Python is installed in a virtual environment located in /opt/venv
Dockerfile
FROM ubuntu:22.04
LABEL maintainer="test"
ENV GROUP_ID=1000 \
USER_ID=1000
RUN apt-get update && apt-get install -y apt-transport-https ca-certificates supervisor procps cron python3.10-venv python3-gdbm wget gnupg unzip curl
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN ["python", "-m", "pip", "install", "--upgrade", "pip", "wheel"]
RUN apt-get install -y python3-wheel
COPY ./requirements.txt /app/requirements.txt
RUN ["python", "-m", "pip", "install", "--no-cache-dir", "--upgrade", "-r", "/app/requirements.txt"]
COPY ./app /app
RUN which python
nginx is using a separate docker image. Mongodb is the database and the content of docker-compose is
docker-compose.yml
version: '3.10'
services:
web:
container_name: "fastapi"
build: ./
volumes:
- ./app:/app
ports:
- "8000:8000"
environment:
- DEPLOYMENT_TYPE=production
depends_on:
- mongo
links:
- mongo
nginx:
container_name: "nginx"
restart: always
image: nginx
volumes:
- ./app/nginx/conf.d:/etc/nginx/conf.d
ports:
- 80:80
- 443:443
links:
- web
python packages are specified in requirements.txt
requirements.txt
setuptools>=59.1.1,<59.7.0
fastapi==0.87.0
uvicorn==0.19.0
gunicorn==20.1.0
python-decouple==3.5
nginx configuration file is
app.conf
upstream web {
server web:8000;
}
server {
listen 80;
charset utf-8;
server_name 0.0.0.0;
client_max_body_size 20m;
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;
location / {
proxy_pass http://web/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
uvicorn and gunicorn are started using supervisor
supervisord.conf
[supervisord]
nodaemon=true
[program:fastapi_guni]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=/opt/venv/bin/gunicorn app.main:app --workers 4 --name main --worker-class /opt/venv/bin/uvicorn.workers.UvicornWorker --host 0.0.0.0:8000 --reload
the python main.py is
main.py
import uvicorn
from fastapi.middleware.cors import CORSMiddleware
from fastapi import FastAPI
from app.routes.api import router as api_router
app = FastAPI()
origins = ["http://localhost:8000"]
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
app.include_router(api_router)
if __name__ == '__main__':
uvicorn.run(host='127.0.0.1', debug=True, port=8000, log_level="info", reload=True)
print("running")
When I try to bring up the containers, the fastapi container stops immediately after it starts, with the following message
fastapi exited with code 0
and nginx also stops, throwing the following message
nginx | 2022/11/15 19:10:46 [emerg] 1#1: host not found in upstream "web:8000" in /etc/nginx/conf.d/app.conf:2
nginx | nginx: [emerg] host not found in upstream "web:8000" in /etc/nginx/conf.d/app.conf:2
nginx exited with code 1
I have been sitting on this for several hours; how can I bring the service up and running?
update
Folder structure
A:
In the mentioned Dockerfile, I don't see any command for running the server.
Something like this should work:
CMD ["python", "<path-to>/main.py"]
Also, to make it discoverable within the Docker network, I had to run the application on '0.0.0.0' instead of localhost.
(It might be an issue specific to my system, but I didn't have time to debug that.)
Also, you should consider making the web service a dependency for nginx, so that nginx will start only after web is up and running.
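A minimal sketch of those missing pieces, assuming supervisord.conf sits next to the Dockerfile (note the corrected Gunicorn flags: --worker-class takes a dotted import path and the listen address goes to --bind, since Gunicorn has no --host option):
# Dockerfile: copy the supervisor config and actually start supervisord
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/conf.d/supervisord.conf"]

# supervisord.conf: corrected gunicorn command line
command=/opt/venv/bin/gunicorn app.main:app --workers 4 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000

# docker-compose.yml: start nginx only once web is up
nginx:
  depends_on:
    - web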
|
Running FastAPI in docker with uvicorn and gunicorn nginx
|
I am trying to build a FastAPI application with an Ubuntu 22.04 Docker image, Gunicorn, Uvicorn, and nginx as the web server. The Gunicorn and Uvicorn services are started using supervisord.
Python is installed in a virtual environment located in /opt/venv
Dockerfile
FROM ubuntu:22.04
LABEL maintainer="test"
ENV GROUP_ID=1000 \
USER_ID=1000
RUN apt-get update && apt-get install -y apt-transport-https ca-certificates supervisor procps cron python3.10-venv python3-gdbm wget gnupg unzip curl
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN ["python", "-m", "pip", "install", "--upgrade", "pip", "wheel"]
RUN apt-get install -y python3-wheel
COPY ./requirements.txt /app/requirements.txt
RUN ["python", "-m", "pip", "install", "--no-cache-dir", "--upgrade", "-r", "/app/requirements.txt"]
COPY ./app /app
RUN which python
nginx is using a separate docker image. Mongodb is the database and the content of docker-compose is
docker-compose.yml
version: '3.10'
services:
web:
container_name: "fastapi"
build: ./
volumes:
- ./app:/app
ports:
- "8000:8000"
environment:
- DEPLOYMENT_TYPE=production
depends_on:
- mongo
links:
- mongo
nginx:
container_name: "nginx"
restart: always
image: nginx
volumes:
- ./app/nginx/conf.d:/etc/nginx/conf.d
ports:
- 80:80
- 443:443
links:
- web
python packages are specified in requirements.txt
requirements.txt
setuptools>=59.1.1,<59.7.0
fastapi==0.87.0
uvicorn==0.19.0
gunicorn==20.1.0
python-decouple==3.5
nginx configuration file is
app.conf
upstream web {
server web:8000;
}
server {
listen 80;
charset utf-8;
server_name 0.0.0.0;
client_max_body_size 20m;
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;
location / {
proxy_pass http://web/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
uvicorn and gunicorn are started using supervisor
supervisord.conf
[supervisord]
nodaemon=true
[program:fastapi_guni]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=/opt/venv/bin/gunicorn app.main:app --workers 4 --name main --worker-class /opt/venv/bin/uvicorn.workers.UvicornWorker --host 0.0.0.0:8000 --reload
the python main.py is
main.py
import uvicorn
from fastapi.middleware.cors import CORSMiddleware
from fastapi import FastAPI
from app.routes.api import router as api_router
app = FastAPI()
origins = ["http://localhost:8000"]
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
app.include_router(api_router)
if __name__ == '__main__':
uvicorn.run(host='127.0.0.1', debug=True, port=8000, log_level="info", reload=True)
print("running")
When I try to bring up the containers, the fastapi container stops immediately after it starts, with the following message
fastapi exited with code 0
and nginx also stops, throwing the following message
nginx | 2022/11/15 19:10:46 [emerg] 1#1: host not found in upstream "web:8000" in /etc/nginx/conf.d/app.conf:2
nginx | nginx: [emerg] host not found in upstream "web:8000" in /etc/nginx/conf.d/app.conf:2
nginx exited with code 1
I have been sitting on this for several hours; how can I bring the service up and running?
update
Folder structure
|
[
"In the mentioned Dockerfile, I don't see any command for running the server.\nSomething like this should work:\nCMD [\"python\", \"<path-to>/main.py\"]\n\nAlso, to make it discoverable within the docker network, I had to run the application on '0.0.0.0' instead of localhost.\n(It might be an issue specifically on my system, But I didn't have time to debug that.)\nAlso, you should consider making the web service a dependency for nginx, so that nginx will start only after web is up and running.\n"
] |
[
1
] |
[] |
[] |
[
"docker",
"docker_compose",
"fastapi",
"nginx",
"python"
] |
stackoverflow_0074451135_docker_docker_compose_fastapi_nginx_python.txt
|
Q:
Pandas read_json converts string to decimal (though it has double quotes enclosing the data)
I have a JSON file with a field which is supposed to be a string that represents an NPI Number. The JSON file looks like this:
[{ ...
"npi_109":"1234567891",
...
},
{ ...more records }]
I use pandas to read it in like this:
import pandas as pd
df = pd.read_json("temp/" + file.orig_filename, encoding = 'unicode_escape')
I read it into a dataframe and then use pyarrow to write to Parquet. I see that the field in Parquet gets defined as a decimal. To get around the issue of the field being read as a decimal (despite the enclosing double quotes in the JSON), I am converting that one column to a string as follows:
df['npi_109'] = df['npi_109'].astype(str)
But what ends up happening is the number gets converted to:
"1234567891.0" which is not what we want, so is there a workaround for this issue?
A:
How about:
df['npi_109'] = df['npi_109'].astype(int).astype(str)
Or, if you don't need pandas to infer types when reading the json:
df = pd.read_json(filename, encoding = 'unicode_escape', dtype=False)
Or, force it to be a string column
df = pd.read_json(filename, encoding = 'unicode_escape', dtype={column_name: str})
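A quick end-to-end check of the dtype dict approach (inline payload, column name from the question):
import io
import pandas as pd

payload = '[{"npi_109": "1234567891"}]'
df = pd.read_json(io.StringIO(payload), dtype={'npi_109': str})
print(df['npi_109'].iloc[0])  # '1234567891' -- stays a string, no trailing .0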
|
Pandas read_json converts string to decimal (though it has double quotes enclosing the data)
|
I have a JSON file with a field which is supposed to be a string that represents an NPI Number. The JSON file looks like this:
[{ ...
"npi_109":"1234567891",
...
},
{ ...more records }]
I use pandas to read it in like this:
import pandas as pd
df = pd.read_json("temp/" + file.orig_filename, encoding = 'unicode_escape')
I read it into a dataframe and then use pyarrow to write to Parquet. I see that the field in Parquet gets defined as a decimal. To get around the issue of the field being read as a decimal (despite the enclosing double quotes in the JSON), I am converting that one column to a string as follows:
df['npi_109'] = df['npi_109'].astype(str)
But what ends up happening is the number gets converted to:
"1234567891.0" which is not what we want, so is there a workaround for this issue?
|
[
"How about:\ndf['npi_109'] = df['npi_109'].astype(int).astype(str)\n\nOr, if you don't need pandas to infer types when reading the json:\ndf = pd.read_json(filename, encoding = 'unicode_escape', dtype=False)\n\nOr, force it to be a string column\ndf = pd.read_json(filename, encoding = 'unicode_escape', dtype={column_name: str})\n\n"
] |
[
2
] |
[] |
[] |
[
"pandas",
"pyarrow",
"python"
] |
stackoverflow_0074461241_pandas_pyarrow_python.txt
|
Q:
Slice pandas series for each list in a list without using list comprehension
I have a pandas Series which I want to slice based of list of slice-indices. It's fairly easy using list comprehension like
slizes = [[0,1,2],[4,5,6],[7,8,9]]
series = pd.Series(["a","b","c","d","e","f","g","h","i","j"])
[series.iloc[slize] for slize in slizes] #[["a","b","c"],["e","f","g"],["h","i","j"]]
But since I have 1.5 million rows and 1.5 million slices this takes quite a while. I was wondering if it could be done in a faster, vectorized way?
The result could be anything from numpy-arrays, list, Series, tuples, that doesn't really matter.
A:
This is straightforward with numpy if the sublists all have the same size, just index with them:
a = series.to_numpy()
out = a[slizes]
Output:
array([['a', 'b', 'c'],
['e', 'f', 'g'],
['h', 'i', 'j']], dtype=object)
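If the sublists can have different lengths, the 2-D indexing trick no longer applies, but indexing the underlying NumPy array per slice is still far cheaper than Series.iloc in a loop:
a = series.to_numpy()
out = [a[s] for s in slizes]  # list of 1-D arrays, one per slice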
|
Slice pandas series for each list in a list without using list comprehension
|
I have a pandas Series which I want to slice based of list of slice-indices. It's fairly easy using list comprehension like
slizes = [[0,1,2],[4,5,6],[7,8,9]]
series = pd.Series(["a","b","c","d","e","f","g","h","i","j"])
[series.iloc[slize] for slize in slizes] #[["a","b","c"],["e","f","g"],["h","i","j"]]
But since I have 1.5 million rows and 1.5 million slices this takes quite a while. I was wondering if it could be done in a faster, vectorized way?
The result could be anything from numpy-arrays, list, Series, tuples, that doesn't really matter.
|
[
"This is straightforward with numpy if you have always the same size of sublists, just slice:\na = series.to_numpy()\nout = a[slizes]\n\nOutput:\narray([['a', 'b', 'c'],\n ['e', 'f', 'g'],\n ['h', 'i', 'j']], dtype=object)\n\n"
] |
[
1
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074461309_pandas_python.txt
|
Q:
From a numpy array of coordinates [x,y], remove other coordinates with the same x-value to keep the coordinate which has the maximum y
Suppose I have a Numpy array of a bunch of coordinates [x, y].
I want to filter this array.
For all coordinates in the array with a same x-value, I want to keep only one coordinate: The coordinate with the maximum for the y.
What is the most efficient or Pythonic way to do this?
I will explain with an example below.
coord_arr= array([[10,5], [11,6], [12,6], [10,1], [11,0],[12,2]])
[10, 5] and [10,1] have the same x-value: x=10
maximum for y-values: max(5,1) = 5
So I only keep coordinate [10,5]
Same procedure for x=11 and x=12
So I finally end up with:
filtered_coord_arr= array([[10,5],[11,6],[12,6]])
I have a solution by converting to a list and using list comprehension (see below). But I am looking for a more efficient and elegant solution. (The actual arrays are much larger than in this example.)
My solution:
coord_list = coord_arr.tolist()
x_set = set([coord[0] for coord in coord_list])
coord_max_y_list= []
for x in x_set:
    compare_list = [coord for coord in coord_list if coord[0] == x]
    coord_max = max(compare_list, key=lambda c: c[1])
    coord_max_y_list.append(coord_max)
filtered_coord_arr= np.array(coord_max_y_list)
A:
If your array is small you can just do it in one line:
np.array([[x, max(coord_arr[coord_arr[:,0] == x][:,1])] for x in set(coord_arr[:,0])])

However, that is not optimal complexity. If the array is big and you care about complexity, do it like this:
d = {}
for x, y in coord_arr:
    d[x] = max(d.get(x, float('-Inf')), y)
np.array([[x, y] for x, y in d.items()])
A:
You can refer to the approach below:
Solution :
coord_arr= np.array([[10, 5], [11, 6], [12, 6], [13,7], [10,1], [10,7],[12,2], [13,0]])
df = pd.DataFrame(coord_arr,columns=['a','b'])
df = df.groupby(['a']).agg({'b': ['max']})
df.columns = ['b']
df = df.reset_index()
filtered_coord_arr = np.array(df)
filtered_coord_arr
Output :
array([[10, 7],
[11, 6],
[12, 6],
[13, 7]], dtype=int64)
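A fully vectorized NumPy alternative with no Python-level loop: sort by (x, y), then let np.unique pick the last, i.e. max-y, row for each x:
import numpy as np

coord_arr = np.array([[10, 5], [11, 6], [12, 6], [10, 1], [11, 0], [12, 2]])
order = np.lexsort((coord_arr[:, 1], coord_arr[:, 0]))  # primary key x, secondary y
s = coord_arr[order]
# first occurrence in the reversed x column == last occurrence per x in s
_, idx = np.unique(s[::-1, 0], return_index=True)
filtered_coord_arr = s[len(s) - 1 - idx]
print(filtered_coord_arr)  # [[10 5] [11 6] [12 6]]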
|
From a numpy array of coordinates [x,y], remove other coordinates with the same x-value to keep the coordinate which has the maximum y
|
Suppose I have a Numpy array of a bunch of coordinates [x, y].
I want to filter this array.
For all coordinates in the array with a same x-value, I want to keep only one coordinate: The coordinate with the maximum for the y.
What is the most efficient or Pythonic way to do this?
I will explain with an example below.
coord_arr= array([[10,5], [11,6], [12,6], [10,1], [11,0],[12,2]])
[10, 5] and [10,1] have the same x-value: x=10
maximum for y-values: max(5,1) = 5
So I only keep coordinate [10,5]
Same procedure for x=11 and x=12
So I finally end up with:
filtered_coord_arr= array([[10,5],[11,6],[12,6]])
I have a solution by converting to a list and using list comprehension (see below). But I am looking for a more efficient and elegant solution. (The actual arrays are much larger than in this example.)
My solution:
coord_list = coord_arr.tolist()
x_set = set([coord[0] for coord in coord_list])
coord_max_y_list= []
for x in x_set:
    compare_list = [coord for coord in coord_list if coord[0] == x]
    coord_max = max(compare_list, key=lambda c: c[1])
    coord_max_y_list.append(coord_max)
filtered_coord_arr= np.array(coord_max_y_list)
|
[
"if your array in small you can just do it one line:\nnp.array([[x, max(coord[coord[:,0] == x][:,1])] for x in set(coord[:,0])])\n\nhowever that is not correct complexity, if array is big and you care about correct complexity , do like this:\nd = {}\nfor x, y in coord:\n d[x] = max(d.get(x, float('-Inf')), y)\nnp.array([[x, y] for x,y in d.items()])\n\n",
"you can refer below answer :\nSolution :\ncoord_arr= np.array([[10, 5], [11, 6], [12, 6], [13,7], [10,1], [10,7],[12,2], [13,0]])\n\ndf = pd.DataFrame(coord_arr,columns=['a','b'])\ndf = df.groupby(['a']).agg({'b': ['max']})\ndf.columns = ['b']\ndf = df.reset_index()\nfiltered_coord_arr = np.array(df)\n\nfiltered_coord_arr\n\nOutput :\narray([[10, 7],\n [11, 6],\n [12, 6],\n [13, 7]], dtype=int64)\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"arrays",
"coordinates",
"filter",
"numpy",
"python"
] |
stackoverflow_0074459169_arrays_coordinates_filter_numpy_python.txt
|
Q:
unable to install pyspellcheck module on Linux (Raspbian)
So I am working on Linux (Raspbian) and I am unable to install the pyspellcheck module.
Previously I managed to install it with just
pip install pyspellcheck
but recently I had to factory reset my machine and I am not able to install pyspellcheck anymore.
I get the following error:
ERROR: Could not find a version that satisfies the requirement pyspellcheck (from versions: none)
ERROR: No matching distribution found for pyspellcheck
So I would just like to know how I can install it on my machine.
NOTE: I am working on a Linux machine.
A:
There is no such package on PyPI: https://pypi.org/project/pyspellcheck/ — it returns a 404 error. What are you trying to install? Do you want pyspellchecker?
pip install pyspellchecker
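If pyspellchecker is indeed what you wanted, a minimal sanity check after installing could look like this (a sketch based on the package's documented API — note the import name is spellchecker, not pyspellchecker):
from spellchecker import SpellChecker

spell = SpellChecker()
print(spell.correction("speling"))           # expected: "spelling"
print(spell.unknown(["hapenning", "here"]))  # expected: {"hapenning"}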
|
unable to install pyspellcheck module on Linux (Raspbian)
|
So I am working on Linux (Raspbian) and I am unable to install the pyspellcheck module.
Previously I managed to install it with just
pip install pyspellcheck
but recently I had to factory reset my machine and I am not able to install pyspellcheck anymore.
I get the following error:
ERROR: Could not find a version that satisfies the requirement pyspellcheck (from versions: none)
ERROR: No matching distribution found for pyspellcheck
So I would just like to know how I can install it on my machine.
NOTE: I am working on a Linux machine.
|
[
"There is no such a package at PyPI: https://pypi.org/project/pyspellcheck/ — error 404. What are you trying to install? Do you want pyspellchecker?\npip install pyspellchecker\n\n"
] |
[
0
] |
[] |
[] |
[
"linux",
"pip",
"pyspellchecker",
"python",
"raspbian"
] |
stackoverflow_0074461305_linux_pip_pyspellchecker_python_raspbian.txt
|
Q:
Levenshtein distance substring
Is there a good way to use Levenshtein distance to match one particular string to any region within a second, longer string?
Example:
str1='aaaaa'
str2='bbbbbbaabaabbbb'
if str1 in str2 with a distance < 2:
    return True
So in the above example, part of string 2 is aabaa, and distance(str1, aabaa) < 2, so the statement should return True.
The only way I can think of to do this is to take 5 chars from str2 at a time, compare them with str1, and then repeat, moving through str2. Unfortunately this seems really inefficient, and I need to process a large amount of data this way.
A:
You might have a look at the regex module that supports fuzzy matching:
>>> import regex
>>> regex.search("(aaaaa){s<2}", 'bbbbbbaabaabbbb')
<regex.Match object; span=(6, 11), match='aabaa', fuzzy_counts=(1, 0, 0)>
Since you are looking at strings of equal length, you can also use a Hamming distance, which is likely far faster than a Levenshtein distance on the same two strings:
str1='aaaaa'
str2='bbbbbbaabaabbbb'
for s in [str2[i:i+len(str1)] for i in range(0,len(str2)-len(str1)+1)]:
if sum(a!=b for a,b in zip(str1,s))<2:
        print(s)  # prints 'aabaa'
A:
The trick is to generate all the substrings of appropriate length of b, then compare each one.
def lev_dist(a,b):
length_cost = abs(len(a) - len(b))
diff_cost = sum(1 for (aa, bb) in zip(a,b) if aa != bb)
return diff_cost + length_cost
def all_substr_of_length(n, s):
if n > len(s):
return [s]
else:
return [s[i:i+n] for i in range(0, len(s)-n+1)]
def lev_substr(a, b):
"""Gives minimum lev distance of all substrings of b and
the single string a.
"""
return min(lev_dist(a, bb) for bb in all_substr_of_length(len(a), b))
if lev_substr(str1, str2) < 2:
# it works!
A:
The trick is usually to play with the insert (for shorter) or delete (for longer) costs. You may also want to consider using Damerau-Levenshtein instead.
https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance
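To make the cost-tweaking idea concrete, here is a sketch (not part of the original answer) of the standard DP variant for approximate substring matching: the first row is all zeros, so a match may start anywhere in the long string, and the answer is the minimum over the last row, so it may end anywhere:
def lev_substring(needle, haystack):
    """Minimum Levenshtein distance between needle and any substring of haystack."""
    m, n = len(needle), len(haystack)
    prev = [0] * (n + 1)                      # row 0: free to skip any prefix of haystack
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if needle[i - 1] == haystack[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # delete from needle
                         cur[j - 1] + 1,      # insert into needle
                         prev[j - 1] + cost)  # substitute / match
        prev = cur
    return min(prev)                          # free to skip any suffix of haystack

print(lev_substring('aaaaa', 'bbbbbbaabaabbbb'))  # 1, so "< 2" holds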
A:
I encountered this problem before, and I have not found a solution without involving at least one for loop. I have implemented a solution that returns the number of matches under a given tolerance, calling the Levenshtein distance already implemented in polyleven, which can speed up the calculation.
from polyleven import levenshtein as poly_lev  # pip install polyleven

def count_matches(seq, frag, sim_thresh=0.9):
    cont = 0
    n = len(frag)
    L = len(seq)
    assert(L >= n)
    for m in range(L - n + 1):  # +1 so the last window is included
        sim = 1 - poly_lev(frag, seq[m:m+n]) / n
        if sim >= sim_thresh:
            cont = cont + 1
    return cont
The function calculates a similarity value (between 0 and 1) between a string fragment and all the same-length substrings of a longer sequence, the similarity being 1 - levenshtein(str1, str2)/len(str1). This normalizes over the length of the fragment, so it gives meaningful results for fragments of arbitrary length.
|
Levenshtein distance substring
|
Is there a good way to use Levenshtein distance to match one particular string to any region within a second, longer string?
Example:
str1='aaaaa'
str2='bbbbbbaabaabbbb'
if str1 in str2 with a distance < 2:
    return True
So in the above example, part of string 2 is aabaa, and distance(str1, aabaa) < 2, so the statement should return True.
The only way I can think of to do this is to take 5 chars from str2 at a time, compare them with str1, and then repeat, moving through str2. Unfortunately this seems really inefficient, and I need to process a large amount of data this way.
|
[
"You might have a look at the regex module that supports fuzzy matching:\n>>> import regex\n>>> regex.search(\"(aaaaa){s<2}\", 'bbbbbbaabaabbbb')\n<regex.Match object; span=(6, 11), match='aabaa', fuzzy_counts=(1, 0, 0)>\n\nSince you are looking are strings of equal length, you can also do a a Hamming distance which is likely far faster than a Levenstein distance on the same two strings:\nstr1='aaaaa'\nstr2='bbbbbbaabaabbbb'\nfor s in [str2[i:i+len(str1)] for i in range(0,len(str2)-len(str1)+1)]:\n if sum(a!=b for a,b in zip(str1,s))<2:\n print s # prints 'aabaa'\n\n",
"The trick is to generate all the substrings of appropriate length of b, then compare each one.\ndef lev_dist(a,b):\n length_cost = abs(len(a) - len(b))\n diff_cost = sum(1 for (aa, bb) in zip(a,b) if aa != bb)\n return diff_cost + length_cost\n\ndef all_substr_of_length(n, s):\n if n > len(s):\n return [s]\n else:\n return [s[i:i+n] for i in range(0, len(s)-n+1)]\n\ndef lev_substr(a, b):\n \"\"\"Gives minimum lev distance of all substrings of b and\n the single string a.\n \"\"\"\n\n return min(lev_dist(a, bb) for bb in all_substr_of_length(len(a), b))\n\nif lev_substr(str1, str2) < 2:\n # it works!\n\n",
"The trick is usually to play with the insert (for shorter) or delete (for longer) costs. You may also want to consider using Damerau-Levenshtein instead.\nhttps://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance\n",
"I encountered this problem before, and I have not found a solution without involving at least one for loop. I have implemented a solution that returns the number of matches under a given tolerance calling the already implemented Levenshtein distance in polyleven, which can speed up the calculation.\ndef count_matches(seq,frag,sim_thresh=0.9):\n cont=0\n n = len(frag) \n L = len(seq)\n assert(L>=n)\n for m in range(L-n):\n sim = 1-poly_lev(frag,seq[m:m+n])/n\n if sim >= sim_thresh:\n cont = cont+1\n return cont\n\nThe function calculates a similarity value (between 0 and 1) between a string fragment and all the same-length substrings of a longer sequence, being the similarity 1-levenshtein(str1,str2)/len(str1). This normalizes over the length of the fragment so it can give meaningful results for fragments of arbitrary length.\n"
] |
[
5,
3,
0,
0
] |
[] |
[] |
[
"levenshtein_distance",
"python"
] |
stackoverflow_0044398027_levenshtein_distance_python.txt
|
Q:
return the maximum value of each row with cluster name in dataframe
I have a pandas dataframe (df) that has three columns (user, values, and group name); the values column holds a list of values in each row.
df = pd.DataFrame({'user': ['user_1', 'user_2', 'user_3', 'user_4', 'user_5', 'user_6'],
'values': [[1, 0, 2, 0], [1, 8, 0, 2],[6, 2, 0, 0], [5, 0, 2, 2], [3, 8, 0, 0],[6, 0, 0, 2]],
'group': ['B', 'A', 'C', 'A', 'B', 'B']})
df
output:
user values group
0 user_1 [1, 0, 2, 0] B
1 user_2 [1, 8, 0, 2] A
2 user_3 [6, 2, 0, 0] C
3 user_4 [5, 0, 2, 2] A
4 user_5 [3, 8, 0, 0] B
5 user_6 [6, 0, 0, 2] B
Then I calculate the average of each cluster, which is called a centroid in the dataframe (df1).
df1 = (df.groupby('group', as_index=False)['values']
.agg(lambda x: np.vstack(x).mean(0).round(2))
)
df1
Output:
group values
0 A [3.0, 4.0, 1.0, 2.0]
1 B [3.33, 2.67, 0.67, 0.67]
2 C [6.0, 2.0, 0.0, 0.0]
Finally, I compute the average distance from each user to all clusters in the following code using euclidean distance.
for value in df['values']:
distance_values = []
for centroid in df1['values']:
distance_values.append(distance.euclidean(value, centroid))
print(distance_values)
Output:
[5.0, 3.8439042651970405, 5.744562646538029]
[4.58257569495584, 6.004631545732011, 8.06225774829855]
[4.242640687119285, 2.9112883745860696, 0.0]
[4.58257569495584, 3.668187563361503, 3.605551275463989]
[4.58257569495584, 5.4236150305861495, 6.708203932499369]
[5.0990195135927845, 4.059014658756482, 2.8284271247461903]
So, for each user, I calculate the average distance to the centroid of each cluster.
For example:
For user_1 the average distance to clusters A=5.0, B=3.8439042651970405, and C=5.744562646538029.
How do I return the maximum value of each row in distance values with its cluster name in the dataframe?
For example, the expected output is:
user max_value group
0 user_1 5.744562646538029 C
1 user_2 8.06225774829855 C
2 user_3 4.242640687119285 A
3 user_4 4.58257569495584 A
4 user_5 6.708203932499369 C
5 user_6 5.0990195135927845 A
A:
You can use apply to extract the max values together with their indexes,
and then split them into separate columns with the .str accessor:
df['distance_values'] = [[5.0, 3.8439042651970405, 5.744562646538029],
[4.58257569495584, 6.004631545732011, 8.06225774829855],
[4.242640687119285, 2.9112883745860696, 0.0],
[4.58257569495584, 3.668187563361503, 3.605551275463989],
[4.58257569495584, 5.4236150305861495, 6.708203932499369],
[5.0990195135927845, 4.059014658756482, 2.8284271247461903]]
max_df = df['distance_values'].apply(lambda x: [max(x), x.index(max(x))])
df['max_value'] = max_df.str[0]
df['group'] = max_df.str[1].map(dict(zip(range(4), 'ABC')))
A:
You can also include your Euclidean distance calculation in the function you'll apply, for more efficiency:
def calc_max_dist(value):
dist_series = df1['values'].apply(lambda x: distance.euclidean(value, x))
return dist_series.max(), df1[dist_series == dist_series.max()]['group'].values
df[['max_value', 'closest_group(s)']] = pd.DataFrame(df['values'].apply(calc_max_dist).tolist())
Output:
user values group max_value closest_group(s)
0 user_1 [1, 0, 2, 0] B 5.744563 [C]
1 user_2 [1, 8, 0, 2] A 8.062258 [C]
2 user_3 [6, 2, 0, 0] C 4.242641 [A]
3 user_4 [5, 0, 2, 2] A 4.582576 [A]
4 user_5 [3, 8, 0, 0] B 6.708204 [C]
5 user_6 [6, 0, 0, 2] B 5.099020 [A]
A:
max_dist_idx = []
distant_cluster = []
for value in df['values']:
distance_values = []
for centroid in df1['values']:
distance_values.append(distance.euclidean(value, centroid))
max_dist_idx.append(max(distance_values))
distant_cluster.append(distance_values.index(max(distance_values)))
cluster_map = {0: 'A', 1: 'B', 2: 'C'}
max_group = [cluster_map[i] for i in distant_cluster]
then you can just build your dataframe:
pd.DataFrame(data={'user': df.user,
'max_value': max_dist_idx,
'group': max_group})
user max_value group
0 user_1 5.744563 C
1 user_2 8.062258 C
2 user_3 4.242641 A
3 user_4 4.582576 A
4 user_5 6.708204 C
5 user_6 5.099020 A
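For completeness, the whole distance matrix can be computed in one vectorized call with scipy (a sketch, assuming the df and df1 from the question; cdist's default metric is euclidean):
import numpy as np
import pandas as pd
from scipy.spatial.distance import cdist

user_mat = np.vstack(df['values'])       # shape (6, 4)
centroid_mat = np.vstack(df1['values'])  # shape (3, 4)
dists = cdist(user_mat, centroid_mat)    # shape (6, 3), one row per user

result = pd.DataFrame({
    'user': df['user'],
    'max_value': dists.max(axis=1),
    'group': df1['group'].to_numpy()[dists.argmax(axis=1)],
})
print(result)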
|
return the maximum value of each row with cluster name in dataframe
|
I have a pandas dataframe (df) that has three columns (user, values, and group name); the values column holds a list of values in each row.
df = pd.DataFrame({'user': ['user_1', 'user_2', 'user_3', 'user_4', 'user_5', 'user_6'],
'values': [[1, 0, 2, 0], [1, 8, 0, 2],[6, 2, 0, 0], [5, 0, 2, 2], [3, 8, 0, 0],[6, 0, 0, 2]],
'group': ['B', 'A', 'C', 'A', 'B', 'B']})
df
output:
user values group
0 user_1 [1, 0, 2, 0] B
1 user_2 [1, 8, 0, 2] A
2 user_3 [6, 2, 0, 0] C
3 user_4 [5, 0, 2, 2] A
4 user_5 [3, 8, 0, 0] B
5 user_6 [6, 0, 0, 2] B
Then I calculate the average of each cluster, which is called a centroid in the dataframe (df1).
df1 = (df.groupby('group', as_index=False)['values']
.agg(lambda x: np.vstack(x).mean(0).round(2))
)
df1
Output:
group values
0 A [3.0, 4.0, 1.0, 2.0]
1 B [3.33, 2.67, 0.67, 0.67]
2 C [6.0, 2.0, 0.0, 0.0]
Finally, I compute the average distance from each user to all clusters in the following code using euclidean distance.
for value in df['values']:
distance_values = []
for centroid in df1['values']:
distance_values.append(distance.euclidean(value, centroid))
print(distance_values)
Output:
[5.0, 3.8439042651970405, 5.744562646538029]
[4.58257569495584, 6.004631545732011, 8.06225774829855]
[4.242640687119285, 2.9112883745860696, 0.0]
[4.58257569495584, 3.668187563361503, 3.605551275463989]
[4.58257569495584, 5.4236150305861495, 6.708203932499369]
[5.0990195135927845, 4.059014658756482, 2.8284271247461903]
So, for each user, I calculate the average distance to the centroid of each cluster.
For example:
For user_1 the average distance to clusters A=5.0, B=3.8439042651970405, and C=5.744562646538029.
How do I return the maximum value of each row in distance values with its cluster name in the dataframe?
For example, the expected output is:
user max_value group
0 user_1 5.744562646538029 C
1 user_2 8.06225774829855 C
2 user_3 4.242640687119285 A
3 user_4 4.58257569495584 A
4 user_5 6.708203932499369 C
5 user_6 5.0990195135927845 A
|
[
"You can use apply to extract max values with their indexes\nand then use basic string manipulations:\ndf['distance_values'] = [[5.0, 3.8439042651970405, 5.744562646538029],\n[4.58257569495584, 6.004631545732011, 8.06225774829855],\n[4.242640687119285, 2.9112883745860696, 0.0],\n[4.58257569495584, 3.668187563361503, 3.605551275463989],\n[4.58257569495584, 5.4236150305861495, 6.708203932499369],\n[5.0990195135927845, 4.059014658756482, 2.8284271247461903]] \n\nmax_df = df['distance_values'].apply(lambda x: [max(x), x.index(max(x))])\ndf['max_value'] = max_df.str[0]\ndf['group'] = max_df.str[1].map(dict(zip(range(4), 'ABC')))\n\n",
"You can also include you euclidean distance calculation in the function you'll apply for more efficiency:\ndef calc_max_dist(value):\n dist_series = df1['values'].apply(lambda x: distance.euclidean(value, x))\n return dist_series.max(), df1[dist_series == dist_series.max()]['group'].values\n\ndf[['max_value', 'closest_group(s)']] = pd.DataFrame(df['values'].apply(calc_max_dist).tolist())\n\nOutput:\n user values group max_value closest_group(s)\n0 user_1 [1, 0, 2, 0] B 5.744563 [C]\n1 user_2 [1, 8, 0, 2] A 8.062258 [C]\n2 user_3 [6, 2, 0, 0] C 4.242641 [A]\n3 user_4 [5, 0, 2, 2] A 4.582576 [A]\n4 user_5 [3, 8, 0, 0] B 6.708204 [C]\n5 user_6 [6, 0, 0, 2] B 5.099020 [A]\n\n",
"max_dist_idx = []\ndistant_cluster = []\n\nfor value in df['values']:\n distance_values = []\n\n for centroid in df1['values']:\n distance_values.append(distance.euclidean(value, centroid))\n\n max_dist_idx.append(max(distance_values))\n distant_cluster.append(distance_values.index(max(distance_values)))\n\ncluster_map = {0: 'A', 1: 'B', 2: 'C'}\nmax_group = [cluster_map[i] for i in distant_cluster]\n\nthen you can just mount your dataframe:\n\npd.DataFrame(data={'user': df.user,\n 'max_value': max_dist_idx,\n 'group': max_group})\n\n user max_value group\n0 user_1 5.744563 C\n1 user_2 8.062258 C\n2 user_3 4.242641 A\n3 user_4 4.582576 A\n4 user_5 6.708204 C\n5 user_6 5.099020 A\n\n"
] |
[
2,
2,
1
] |
[] |
[] |
[
"dataframe",
"group_by",
"pandas",
"python"
] |
stackoverflow_0074460679_dataframe_group_by_pandas_python.txt
|
Q:
How to put data into a tempfile and post as CSV on SFTP
Goal is
Create a temporary CSV file filled with data and upload it to an SFTP. The data to fill it with is TheList, of class list.
What I am able to achieve
Create the connection to the SFTP
Push a file to the SFTP
What happens with the code below
There is a file created/put on the SFTP, but the file is empty and has 0 bytes.
Question
How can I achieve that I get a CSV file on the SFTP with the content of TheList?
import paramiko
import tempfile
import csv
# code part to make and open sftp connection
TheList = [['name', 'address'], [ 'peter', 'london']]
csvfile = tempfile.NamedTemporaryFile(suffix='.csv', mode='w', delete=False)
filewriter = csv.writer(csvfile)
filewriter.writerows(TheList)
sftp.put(csvfile.name, SftpPath + "anewfile.csv")
# code part to close sftp connection
A:
You do not need to create a temporary file. You can use csv.writer to write the rows directly to the SFTP, using a file-like object opened with SFTPClient.open:
with sftp.open(SftpPath + "anewfile.csv", mode='w', bufsize=32768) as csvfile:
    writer = csv.writer(csvfile, delimiter=',')
    writer.writerows(TheList)
See also pysftp putfo creates an empty file on SFTP server but not streaming the content from StringIO
To answer your literal question: I believe you need to flush the temporary file before trying to upload it (note that flush is a method of the file object, not of the csv.writer):
csvfile.flush()
See How to use tempfile.NamedTemporaryFile() in Python
Though a better option would be to use Paramiko SFTPClient.putfo to upload the NamedTemporaryFile object, rather than trying to refer to the temporary file via the filename (which allegedly would not work, at least on Windows, anyway):
csvfile.seek(0)
sftp.putfo(csvfile, SftpPath + "anewfile.csv")
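If the temporary file only exists to hold the CSV bytes, it can be skipped entirely by building the payload in memory; a sketch, assuming the open sftp client and TheList from the question:
import csv
import io

buf = io.StringIO()
csv.writer(buf).writerows(TheList)

# encode to bytes so Paramiko writes raw data regardless of text/bytes rules
payload = io.BytesIO(buf.getvalue().encode("utf-8"))
sftp.putfo(payload, SftpPath + "anewfile.csv")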
|
How to put data into a tempfile and post as CSV on SFTP
|
Goal is
Create a temporary CSV file filled with data and upload it to an SFTP. The data to fill it with is TheList, of class list.
What I am able to achieve
Create the connection to the SFTP
Push a file to the SFTP
What happens with the code below
There is a file created/put on the SFTP, but the file is empty and has 0 bytes.
Question
How can I achieve that I get a CSV file on the SFTP with the content of TheList?
import paramiko
import tempfile
import csv
# code part to make and open sftp connection
TheList = [['name', 'address'], [ 'peter', 'london']]
csvfile = tempfile.NamedTemporaryFile(suffix='.csv', mode='w', delete=False)
filewriter = csv.writer(csvfile)
filewriter.writerows(TheList)
sftp.put(csvfile.name, SftpPath + "anewfile.csv")
# code part to close sftp connection
|
[
"You do not need to create a temporary file. You can use csv.writer to write the rows directly to the SFTP with use of file-like object opened using SFTPClient.open:\nwith sftp.open(SftpPath + \"anewfile.csv\", mode='w', bufsize=32768) as csvfile:\n writer = csv.writer(csvfile, delimiter=',')\n filewriter.writerows(TheList)\n\nSee also pysftp putfo creates an empty file on SFTP server but not streaming the content from StringIO\n\nTo answer your literal question: I believe you need to flush the temporary file before trying to upload it:\nfilewriter.flush()\n\nSee How to use tempfile.NamedTemporaryFile() in Python\nThough better option would be to use Paramiko SFTPClient.putfo to upload the NamedTemporaryFile object, rather then trying to refer to the temporary file via the filename (what allegedly would not work at least on Windows anyway):\ncsvfile.seek(0)\nsftp.putfo(csvfile, SftpPath + \"anewfile.csv\")\n\n"
] |
[
1
] |
[] |
[] |
[
"csv",
"paramiko",
"python",
"sftp",
"temporary_files"
] |
stackoverflow_0074461295_csv_paramiko_python_sftp_temporary_files.txt
|
Q:
reformat my repetitive code into a while or for loop for minesweeper game in python using oop
So, for the game Minesweeper, when you click a box with 0 surrounding mines, not only is that cell revealed but also all surrounding cells. I want to make my code so that if another 0 was revealed, all cells around that 0 are also revealed.
I.e. if there are two 0s together (side by side, say), when clicked, not just the 8 cells surrounding the clicked one are revealed, but those 8 plus the 3 revealed by the other 0 (assuming it's not near an edge).
I'm using OOP; this is the Cell class, and the cells are the instances.
def single_click(self, event):
    # function called when the cell is single-clicked (reveal mine or number of surrounding mines)
    if self.is_mine:
        self.show_mine()
    # if the cell is a mine it is revealed. if not ...
    # (surrounded_mines is the number of neighbouring mines)
    else:
        if self.surrounded_mines == 0:
            for cell in self.surrounded_cells:
                cell.show_cell()
                # first lot of neighbouring cells are revealed if the cell clicked is 0
                if cell.surrounded_mines == 0:
                    for cell2 in cell.surrounded_cells:
                        cell2.show_cell()
                        # second lot of neighbouring cells are revealed if a 0 was revealed in the step before
                        if cell2.surrounded_mines == 0:
                            for cell3 in cell2.surrounded_cells:
                                cell3.show_cell()
                                # and again
        self.show_cell()
And I could do this a few times because it's unlikely there will be that many 0s together, but I'd rather know how to code it properly.
I'm usually OK with while loops, but I'm just a bit confused as I don't know what I'd use for the condition.
If there is a better way to format my code I'd love to hear it!
While I'm at it: I'm using the tkinter package for the GUI, and since I don't have a left/right-click mouse (Mac) I wanted to play the game using single and double clicks; however, I haven't been able to find out how to double-click without also calling the single click.
THANKS!
A:
Here's a recursive solution. You might need to tweak it to suit your needs.
import random
# 10x10 mine field
# "-" = not clicked, " " = cleared, "m" = mine
game = [['-'] * 10 for _ in range(10)]
# Returns r,c of every space around row,col
def get_surrounding_spaces(row, col):
spaces = []
for r in range(row-1, row+2):
for c in range(col-1, col+2):
if r >= 0 and r < len(game) and c >= 0 and c < len(game[r]) and (r,c) != (row,col):
spaces.append((r,c,))
return spaces
# Checks the cell. If it has no mines around it, clear all around.
# Calls recursively on those
def clear_space(row, col):
if game[row][col] != '-': return
game[row][col] = ' '
surrounding_spaces = get_surrounding_spaces(row, col)
has_mines = any(True for r,c in surrounding_spaces if game[r][c] == 'm')
if not has_mines:
# Recursive clear surrounding spaces
for r,c in surrounding_spaces:
clear_space(r, c)
# Helper function to print minefield
def print_game():
print(f" {' '.join(f'{x:2}' for x in range(10))}")
for i,row in enumerate(game):
print(f"{i:<2} {' '.join(row)}")
# 15 or so random mines
for _ in range(15):
r = random.randint(0, len(game)-1)
c = random.randint(0, len(game[0])-1)
game[r][c] = 'm'
# Test
print_game()
r,c = map(int, input("row,col: ").split(','))
clear_space(r, c)
print_game()
Sample run:
0 1 2 3 4 5 6 7 8 9
0 m - - - - - m - - -
1 - - - m - - - - - -
2 - - - - - - m - m -
3 - - - - - - - - - -
4 - - - - - - - - - -
5 - - - - m - - - - -
6 - - - - m - - - - -
7 m - - - - - m - m -
8 - - - - m - m - m -
9 - - - m - - m - - -
row,col: 3,2
0 1 2 3 4 5 6 7 8 9
0 m - - - - - m - - -
1 m - - - - - -
2 m - m -
3 - - - -
4 - - - -
5 m - - - - -
6 m - - - - -
7 m - - m - m -
8 - m - m - m -
9 - - - m - - m - - -
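The same flood fill can also be written iteratively (no recursion-depth limit) against the question's Cell class; a sketch, untested, reusing the attribute names from the question:
def single_click(self, event):
    if self.is_mine:
        self.show_mine()
        return
    to_visit = [self]   # cells whose neighbours may still need revealing
    seen = {self}
    while to_visit:
        cell = to_visit.pop()
        cell.show_cell()
        if cell.surrounded_mines == 0:
            for neighbour in cell.surrounded_cells:
                if neighbour not in seen and not neighbour.is_mine:
                    seen.add(neighbour)
                    to_visit.append(neighbour)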
|
reformat my repetitive code into a while or for loop for minesweeper game in python using oop
|
So, for the game Minesweeper, when you click a box with 0 surrounding mines, not only is that cell revealed but also all surrounding cells. I want to make my code so that if another 0 was revealed, all cells around that 0 are also revealed.
I.e. if there are two 0s together (side by side, say), when clicked, not just the 8 cells surrounding the clicked one are revealed, but those 8 plus the 3 revealed by the other 0 (assuming it's not near an edge).
I'm using OOP; this is the Cell class, and the cells are the instances.
def single_click(self, event):
    # function called when the cell is single-clicked (reveal mine or number of surrounding mines)
    if self.is_mine:
        self.show_mine()
    # if the cell is a mine it is revealed. if not ...
    # (surrounded_mines is the number of neighbouring mines)
    else:
        if self.surrounded_mines == 0:
            for cell in self.surrounded_cells:
                cell.show_cell()
                # first lot of neighbouring cells are revealed if the cell clicked is 0
                if cell.surrounded_mines == 0:
                    for cell2 in cell.surrounded_cells:
                        cell2.show_cell()
                        # second lot of neighbouring cells are revealed if a 0 was revealed in the step before
                        if cell2.surrounded_mines == 0:
                            for cell3 in cell2.surrounded_cells:
                                cell3.show_cell()
                                # and again
        self.show_cell()
And I could do this a few times because it's unlikely there will be that many 0s together, but I'd rather know how to code it properly.
I'm usually OK with while loops, but I'm just a bit confused as I don't know what I'd use for the condition.
If there is a better way to format my code I'd love to hear it!
While I'm at it: I'm using the tkinter package for the GUI, and since I don't have a left/right-click mouse (Mac) I wanted to play the game using single and double clicks; however, I haven't been able to find out how to double-click without also calling the single click.
THANKS!
|
[
"Here's a recursive solution. You might need to tweak it to suit your needs.\nimport random\n\n# 10x10 mine field\n# \"-\" = not clicked, \" \" = cleared, \"m\" = mine\ngame = [['-'] * 10 for _ in range(10)]\n\n# Returns r,c of every space around row,col\ndef get_surrounding_spaces(row, col):\n spaces = []\n for r in range(row-1, row+2):\n for c in range(col-1, col+2):\n if r >= 0 and r < len(game) and c >= 0 and c < len(game[r]) and (r,c) != (row,col):\n spaces.append((r,c,))\n return spaces\n\n# Checks the cell. If it has no mines around it, clear all around.\n# Calls recursively on those\ndef clear_space(row, col):\n if game[row][col] != '-': return\n game[row][col] = ' '\n surrounding_spaces = get_surrounding_spaces(row, col)\n has_mines = any(True for r,c in surrounding_spaces if game[r][c] == 'm')\n if not has_mines:\n # Recursive clear surrounding spaces\n for r,c in surrounding_spaces:\n clear_space(r, c)\n\n# Helper function to print minefield\ndef print_game():\n print(f\" {' '.join(f'{x:2}' for x in range(10))}\")\n for i,row in enumerate(game):\n print(f\"{i:<2} {' '.join(row)}\")\n\n# 15 or so random mines\nfor _ in range(15):\n r = random.randint(0, len(game)-1)\n c = random.randint(0, len(game[0])-1)\n game[r][c] = 'm'\n\n# Test\nprint_game()\nr,c = map(int, input(\"row,col: \").split(','))\nclear_space(r, c)\nprint_game()\n\nSample run:\n 0 1 2 3 4 5 6 7 8 9\n0 m - - - - - m - - -\n1 - - - m - - - - - -\n2 - - - - - - m - m -\n3 - - - - - - - - - -\n4 - - - - - - - - - -\n5 - - - - m - - - - -\n6 - - - - m - - - - -\n7 m - - - - - m - m -\n8 - - - - m - m - m -\n9 - - - m - - m - - -\nrow,col: 3,2\n 0 1 2 3 4 5 6 7 8 9\n0 m - - - - - m - - -\n1 m - - - - - -\n2 m - m -\n3 - - - -\n4 - - - -\n5 m - - - - -\n6 m - - - - -\n7 m - - m - m -\n8 - m - m - m -\n9 - - - m - - m - - -\n\n"
] |
[
0
] |
[] |
[] |
[
"for_loop",
"oop",
"python",
"python_3.x",
"while_loop"
] |
stackoverflow_0074452700_for_loop_oop_python_python_3.x_while_loop.txt
|
Q:
Check if string starts with any of two (sub) strings
I'm trying to pass a number of options to a boolean function and I wrote it like this:
s = 'https://www.youtube.com/watch?v=nVNG8jjZN7k'
s.startswith('http://') or s.startswith('https://')
But I was wondering if there's a more efficient way to write it,
something like:
s.startswith('http://' or 'https://')
A:
str.startswith can take a tuple of strings as an argument. It will return true if the string starts with any of them.
s.startswith(('http://', 'https://'))
However, it might be simpler to use a regular expression to capture the idea of the s being optional:
bool(re.match('https?://', s))
If the match succeeds, you get back a truthy re.Match object. If the match fails, you get back the falsy value None.
A:
you can use urllib.parse.urlparse
from urllib.parse import urlparse
url = 'https://www.youtube.com/watch?v=nVNG8jjZN7k'
if urlparse(url).scheme in ("http", "https"):
...
More useful methods in the docs https://docs.python.org/3/library/urllib.parse.html#module-urllib.parse
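For reference, urlparse normalizes the scheme to lowercase, so the in check also handles URLs with an upper-case scheme:
from urllib.parse import urlparse

print(urlparse('HTTPS://www.youtube.com/watch?v=nVNG8jjZN7k').scheme)  # 'https'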
|
Check if string starts with any of two (sub) strings
|
I'm trying to pass a number of options to a boolean function and I wrote it like this:
s = 'https://www.youtube.com/watch?v=nVNG8jjZN7k'
s.startswith('http://') or s.startswith('https://')
But I was wondering if there's a more efficient way to write it,
something like:
s.startswith('http://' or 'https://')
|
[
"str.startswith can take a tuple of strings as an argument. It will return true if the string starts with any of them.\ns.startswith(('http://', 'https://'))\n\nHowever, it might be simpler to use a regular expression to capture the idea of the s being optional:\nbool(re.match('https?://', s))\n\nIf the match succeeds, you get back a truthy re.Match object. If the match fails, you get back the falsy value None.\n",
"you can use urllib.parse.urlparse\nfrom urllib.parse import urlparse\n\n\nurl = 'https://www.youtube.com/watch?v=nVNG8jjZN7k'\n\nif urlparse(url).scheme in (\"http\", \"https\"):\n ...\n\nMore useful methods in the docs https://docs.python.org/3/library/urllib.parse.html#module-urllib.parse\n"
] |
[
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074460317_python.txt
|
Q:
AttributeError: 'module' object has no attribute 'set_start_method'
The code below starts fine in PyCharm.
But when started from the command line with:
"python field_basket_design_uwr.py"
it gives this error:
Traceback (most recent call last):
File "field_basket_design_uwr.py", line 677, in <module>
mp.set_start_method('spawn')
AttributeError: 'module' object has no attribute 'set_start_method'
Does somebody have an idea how to make the script start without the error?
#!/usr/bin/python3.5
import math
import sys
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk as gtk, Gdk as gdk, GLib, GObject as gobject
import string
import os
import subprocess
import glob
from datetime import datetime, timedelta
import time
import numpy as np
import matplotlib; matplotlib.use('Gtk3Agg')
import matplotlib.animation as animation
from mpl_toolkits.mplot3d.proj3d import proj_transform
from matplotlib.text import Annotation
from matplotlib.backends.backend_gtk3cairo import FigureCanvasGTK3Cairo as FigureCanvas
import matplotlib.pyplot as plt
import multiprocessing as mp
class Annotation3D(Annotation):
'''Annotate the point xyz with text s'''
def __init__(self, s, xyz, *args, **kwargs):
Annotation.__init__(self,s, xy=(0,0), *args, **kwargs)
self._verts3d = xyz
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.xy=(xs,ys)
Annotation.draw(self, renderer)
#
def annotate3D(ax, s, *args, **kwargs):
'''add annotation text s to Axes3d ax'''
tag = Annotation3D(s, *args, **kwargs)
ax.add_artist(tag)
#
def draw_basket(ax1, x, y, z, h, color='black'):
'''add basket to the ax1 figure'''
t = np.linspace(0, np.pi * 2, 16)
ax1.plot(x+0.24*np.cos(t), y+0.24*np.sin(t), z, linewidth=1, color=color)
ax1.plot(x+0.16*np.cos(t), y+0.16*np.sin(t), z, linewidth=1, color=color)
ax1.plot(x+0.24*np.cos(t), y+0.24*np.sin(t), z+h, linewidth=1, color=color)
A=0
while A < 16:
xBar = [x+ 0.16 * math.sin(A*22.5*np.pi/180),x+ 0.24 * math.sin(A*22.5*np.pi/180)]
yBar = [y+ 0.16 * math.cos(A*22.5*np.pi/180),y+ 0.24 * math.cos(A*22.5*np.pi/180)]
zBar = [0,h]
ax1.plot(xBar, yBar, zBar, color=color)
A = A+1
def draw_halfsphere (ax1, x, y, z, sph_radius, color=(0,0,1,1)):
''' add free distance surface to Axes3d ax1 '''
u, v = np.mgrid[0:2 * np.pi:20j, 0:np.pi/2:10j]
xP1 = x + sph_radius * np.cos(u) * np.sin(v)
yP1 = y + sph_radius * np.sin(u) * np.sin(v)
zP1 = z - sph_radius * np.cos(v)
halffreesphere = ax1.plot_wireframe(xP1, yP1, zP1, color=color, alpha=0.3)
return halffreesphere
def OnClick(event):
global selected_coord
global clicked_coord
clicked_coord [0, 0] = clicked_coord [1, 0]
clicked_coord [0, 1] = clicked_coord [1, 1]
clicked_coord [0, 2] = clicked_coord [1, 2]
clicked_coord [1, 0] = selected_coord[0]
clicked_coord [1, 1] = selected_coord[1]
clicked_coord [1, 2] = selected_coord[2]
print ("selected position X: %5.2f Y: %5.2f Z: %5.2f" % (selected_coord[0], selected_coord[1],selected_coord[2]))
print ("distance between selected points: %5.2f", np.sqrt ((clicked_coord [0, 0] - clicked_coord [1, 0])**2
+ (clicked_coord [0, 1]- clicked_coord [1, 1])**2
+ (clicked_coord [0, 2] - clicked_coord [1, 2])**2))
def distance(point, event):
"""Return distance between mouse position and given data point
Args:
point (np.array): np.array of shape (3,), with x,y,z in data coords
event (MouseEvent): mouse event (which contains mouse position in .x and .xdata)
Returns:
distance (np.float64): distance (in screen coords) between mouse pos and data point
"""
x2, y2, _ = proj_transform(point[0], point[1], point[2], plt.gca().get_proj())
x3, y3 = ax1.transData.transform((x2, y2))
return np.sqrt ((x3 - event.x)**2 + (y3 - event.y)**2)
def calcClosestDatapoint(X, event):
""""Calculate which data point is closest to the mouse position.
Args:
X (np.array) - array of points, of shape (numPoints, 3)
event (MouseEvent) - mouse event (containing mouse position)
returns:
smallestIndex (int) - the index (into the array of points X) of the element closest to the mouse position
"""
distances = [distance (X[i, 0:3], event) for i in range(X.shape[0])]
return np.argmin(distances),np.amin(distances)
def annotatePlot(X, index):
global selected_coord
"""Create popover label in 3d chart
Args:
X (np.array) - array of points, of shape (numPoints, 3)
index (int) - index (into points array X) of item which should be printed
Returns:
None
"""
# If we have previously displayed another label, remove it first
if hasattr(annotatePlot, 'label'):
annotatePlot.label.remove()
# Get data point from array of points X, at position index
x2, y2, _ = proj_transform(X[index, 0], X[index, 1], X[index, 2], ax1.get_proj())
annotatePlot.label = plt.annotate( "Select %d" % (index+1),
xy = (x2, y2), xytext = (-20, 20), textcoords = 'offset points', ha = 'right', va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5),
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))
# make coord from label available global for other function like distance measurement between points
selected_coord[0]=X[index, 0]
selected_coord[1]=X[index, 1]
selected_coord[2]=X[index, 2]
#
fig.canvas.draw()
def onMouseMotion(event):
global pos_pb_now, pos_pw_now
"""Event that is triggered when mouse is moved. Shows text annotation over data point closest to mouse."""
closestIndexW,LowestDistanceW = calcClosestDatapoint(pos_pw_now, event)
closestIndexB,LowestDistanceB = calcClosestDatapoint(pos_pb_now, event)
if LowestDistanceW < LowestDistanceB:
annotatePlot (pos_pw_now, closestIndexW)
else:
annotatePlot (pos_pb_now, closestIndexB)
#
def OneWindow(s_w_shared,s_d_shared,s_l_shared,el_w_shared,elevation_shared, azimut_shared, pb,
pw, ball):
import numpy as np
import matplotlib.pyplot as plt
''' Sub-processed Plot viewer of the main windows; copy/paste in one; it helps for PC with 2 monitors
The main windows remain the control window of the trainer. This window is the view windows of the trained player'''
#
def animate_one(i):
p_b_one._offsets3d = pos_pb_now_one[:, 0], pos_pb_now_one[:, 1], pos_pb_now_one[:, 2]
p_w_one._offsets3d = pos_pw_now_one[:, 0], pos_pw_now_one[:, 1], pos_pw_now_one[:, 2]
p_ball_one._offsets3d = pos_ball_now_one[:, 0], pos_ball_now_one[:, 1], pos_ball_now_one[:, 2]
ax1_one.view_init(elev=elevation_shared.value, azim=azimut_shared.value)
fig_one = plt.figure()
ax1_one = fig_one.add_subplot(111,projection='3d')
#
arrpb = np.frombuffer(pb.get_obj(), dtype='f')
pos_pb_now_one = np.reshape(arrpb, (6, 3))
#
arrpw = np.frombuffer(pw.get_obj(), dtype='f')
pos_pw_now_one = np.reshape(arrpw, (6, 3))
#
arrball = np.frombuffer(ball.get_obj(), dtype='f')
pos_ball_now_one = np.reshape(arrball, (1, 3))
xG = [0,s_w_shared.value,s_w_shared.value,0,0, 0,s_w_shared.value,s_w_shared.value,s_w_shared.value,
s_w_shared.value,s_w_shared.value, 0, 0,0, 0,s_w_shared.value]
yG = [0, 0, 0,0,0,s_l_shared.value,s_l_shared.value, 0, 0,s_l_shared.value,s_l_shared.value,s_l_shared.value,
s_l_shared.value,0,s_l_shared.value,s_l_shared.value]
zG = [0, 0, s_d_shared.value,s_d_shared.value,0, 0, 0, 0, s_d_shared.value, s_d_shared.value, 0, 0,
s_d_shared.value,s_d_shared.value, s_d_shared.value, s_d_shared.value]
ax1_one.plot_wireframe (xG,yG,zG,colors= (0,0,1,1)) # blue line game area
xW = [s_w_shared.value,s_w_shared.value+el_w_shared.value,s_w_shared.value+el_w_shared.value,s_w_shared.value,
s_w_shared.value,s_w_shared.value,s_w_shared.value+el_w_shared.value,s_w_shared.value+el_w_shared.value,
s_w_shared.value+el_w_shared.value,s_w_shared.value+el_w_shared.value,s_w_shared.value+el_w_shared.value,
s_w_shared.value,s_w_shared.value,s_w_shared.value,s_w_shared.value,s_w_shared.value+el_w_shared.value]
yW = [0, 0, 0, 0, 0,s_l_shared.value,s_l_shared.value, 0, 0,s_l_shared.value,s_l_shared.value,s_l_shared.value,
s_l_shared.value, 0,s_l_shared.value,s_l_shared.value]
zW = [0, 0, s_d_shared.value, s_d_shared.value, 0, 0, 0, 0, s_d_shared.value, s_d_shared.value, 0, 0,
s_d_shared.value, s_d_shared.value, s_d_shared.value, s_d_shared.value]
ax1_one.plot_wireframe (xW,yW,zW,colors= (0,1,1,1)) # light blue line exchange area
#
ax1_one.set_xlabel('Wide')
ax1_one.set_ylabel('Length')
ax1_one.set_zlabel('Water')
#
# draw the 2 lines which show the depth
xG1 = [0, s_w_shared.value]
yG1 = [s_d_shared.value, s_d_shared.value]
zG1 = [0, 0]
ax1_one.plot_wireframe(xG1, yG1, zG1, colors=(0, 0, 1, 1),linestyle=':') # blue line
xG2 = [0, s_w_shared.value]
yG2 = [s_l_shared.value-s_d_shared.value, s_l_shared.value-s_d_shared.value]
zG2 = [0, 0]
ax1_one.plot_wireframe(xG2, yG2, zG2, colors=(0, 0, 1, 1),linestyle=':') # blue line
#
# put the axis fix
ax1_one.set_xlim3d(0, s_w_shared.value+el_w_shared.value)
ax1_one.set_ylim3d(0, s_l_shared.value)
ax1_one.set_zlim3d(0, s_d_shared.value)
ax1_one.set_aspect(aspect=0.222)
draw_basket(ax1_one, s_w_shared.value / 2, 0.24, 0., 0.45)
draw_basket(ax1_one, s_w_shared.value / 2, s_l_shared.value - 0.24, 0., 0.45)
#
p_b_one = ax1_one.scatter(pos_pb_now_one[:, 0], pos_pb_now_one[:, 1], pos_pb_now_one[:, 2],
s=400, alpha = 0.5, c=(0, 0, 1, 1))
p_w_one = ax1_one.scatter(pos_pw_now_one[:, 0], pos_pw_now_one[:, 1],
pos_pw_now_one[:, 2], s=400, alpha = 0.5, c="darkgrey")
p_ball_one = ax1_one.scatter(pos_ball_now_one[:,0], pos_ball_now_one[:,1],
pos_ball_now_one[:,2], s=100, alpha = 0.5, c="red")
for j, xyz_ in enumerate(pos_pb_now_one):
annotate3D(ax1_one, s=str(j+1), xyz=xyz_, fontsize=10, xytext=(-3,3),
textcoords='offset points', ha='right',va='bottom')
for j, xyz_ in enumerate(pos_pw_now_one):
annotate3D(ax1_one, s=str(j+1), xyz=xyz_, fontsize=10, xytext=(-3,3),
textcoords='offset points', ha='right', va='bottom')
Frame = 10
ani1_one = animation.FuncAnimation(fig_one, animate_one, frames=Frame, interval=600, blit=False, repeat=True,
repeat_delay=500)
#
plt.pause(0.001)
plt.show()
def animate(i):
global pos_pb_now, pos_pb_now_shared, pos_pb_target, p_b, pos_pb_deltamove
global pos_pw_now, pos_pw_now_shared, pos_pw_target, p_w, pos_pw_deltamove
global pos_ball_now, pos_ball_now_shared, pos_ball_target, p_ball, pos_ball_deltamove
global Frame
global count_iter
global video_page_iter
global azimut_shared
global elevation_shared
global video_file_name
# global EmitPosOneWin
# global EmitPosFourWin
global ax1
global free_sphere
#
azimut, elevation = ax1.azim, ax1.elev
# print ("azimut from main",azimut)
azimut_shared.value = azimut
# print ("azimut_shared value from main",azimut_shared.value)
elevation_shared.value = elevation
pos_ball_now[0,0] += (1. / Frame) * pos_ball_deltamove[0,0]
pos_ball_now[0,1] += (1. / Frame) * pos_ball_deltamove[0,1]
pos_ball_now[0,2] += (1. / Frame) * pos_ball_deltamove[0,2]
#
# EmitPosOneWin.put(['bp', 0, pos_ball_now[0,0], pos_ball_now[0,1], pos_ball_now[0,2]])
# EmitPosFourWin.put(['bp', 0, pos_ball_now[0,0], pos_ball_now[0,1], pos_ball_now[0,2]])
pos_ball_now_shared[0] = pos_ball_now[0, 0]
pos_ball_now_shared[1] = pos_ball_now[0, 1]
pos_ball_now_shared[2] = pos_ball_now[0, 2]
for j in range(6):
pos_pb_now[j, 0] += (1. / Frame) * pos_pb_deltamove[j, 0]
pos_pb_now[j, 1] += (1. / Frame) * pos_pb_deltamove[j, 1]
pos_pb_now[j, 2] += (1. / Frame) * pos_pb_deltamove[j, 2]
pos_pw_now[j, 0] += (1. / Frame) * pos_pw_deltamove[j, 0]
pos_pw_now[j, 1] += (1. / Frame) * pos_pw_deltamove[j, 1]
pos_pw_now[j, 2] += (1. / Frame) * pos_pw_deltamove[j, 2]
#
# feed the queue; queue because that animation could be paused
# EmitPosOneWin.put(['pb', j, pos_pb_now[j, 0], pos_pb_now[j, 1], pos_pb_now[j, 2]])
# EmitPosOneWin.put(['pw', j, pos_pw_now[j, 0], pos_pw_now[j, 1], pos_pw_now[j, 2]])
# EmitPosFourWin.put(['pb', j, pos_pb_now[j, 0], pos_pb_now[j, 1], pos_pb_now[j, 2]])
# EmitPosFourWin.put(['pw', j, pos_pw_now[j, 0], pos_pw_now[j, 1], pos_pw_now[j, 2]])
pos_pb_now_shared[j*3] = pos_pb_now[j,0]
pos_pb_now_shared[j*3+1] = pos_pb_now[j,1]
pos_pb_now_shared[j*3+2] = pos_pb_now[j,2]
pos_pw_now_shared[j*3] = pos_pw_now[j,0]
pos_pw_now_shared[j*3+1] = pos_pw_now[j,1]
pos_pw_now_shared[j*3+2] = pos_pw_now[j,2]
#
p_b._offsets3d = pos_pb_now[:, 0], pos_pb_now[:, 1], pos_pb_now[:, 2]
p_w._offsets3d = pos_pw_now[:, 0], pos_pw_now[:, 1], pos_pw_now[:, 2]
p_ball._offsets3d = pos_ball_now[:,0],pos_ball_now[:,1],pos_ball_now[:,2]
#
video_page_iter = video_page_iter+1 # if video is on
plt.savefig("/home/family/Bilder" + "/file%03d.png" % video_page_iter) # if video is on
#
if video_page_iter==100: # or if command store video
os.chdir("/home/family/Bilder")
subprocess.call([
'ffmpeg', '-framerate', '8', '-i', 'file%03d.png', '-r', '30', '-pix_fmt', 'yuv420p',
# 'video_name.mp4'
video_file_name
]) # add -y to overwrite test this
for file_name in glob.glob("*.png"):
os.remove(file_name)
video_page_iter = 0
# simulate the deletion of the free domain. Will be activated later by a GUI
free_sphere.remove()
# fig.canvas.draw()
if i == (Frame - 1):
# reset the deltamove to a clean zero for last position in case of rounding elements
# or set to next step of dynamic move
count_iter = count_iter+1
m, s = divmod(count_iter, 2)
if s == 1:
free_sphere.remove()
fig.canvas.draw()
pos_ball_deltamove[0,0] = -1.
pos_ball_deltamove[0,1] = -1.
pos_ball_deltamove[0,2] = -1.
for k in range(6):
pos_pb_deltamove[k, 0] = -1.
pos_pb_deltamove[k, 1] = -1.
pos_pb_deltamove[k, 2] = -1.
pos_pw_deltamove[k, 0] = -1.
pos_pw_deltamove[k, 1] = -1.
pos_pw_deltamove[k, 2] = -1.
else:
free_sphere = draw_halfsphere(ax1, 5., 9., 4., 2.)
pos_ball_deltamove[0,0] = 1.
pos_ball_deltamove[0,1] = 1.
pos_ball_deltamove[0,2] = 1.
for k in range(6):
pos_pb_deltamove[k, 0] = 1.
pos_pb_deltamove[k, 1] = 1.
pos_pb_deltamove[k, 2] = 1.
pos_pw_deltamove[k, 0] = 1.
pos_pw_deltamove[k, 1] = 1.
pos_pw_deltamove[k, 2] = 1.
pos_ball_now[0,0] = pos_ball_target[0,0]
pos_ball_now[0,1] = pos_ball_target[0,1]
pos_ball_now[0,2] = pos_ball_target[0,2]
pos_ball_now_shared[0] = pos_ball_now[0, 0]
pos_ball_now_shared[1] = pos_ball_now[0, 1]
pos_ball_now_shared[2] = pos_ball_now[0, 2]
for k in range(6):
pos_pb_now[k, 0] = pos_pb_target[k, 0]
pos_pb_now[k, 1] = pos_pb_target[k, 1]
pos_pb_now[k, 2] = pos_pb_target[k, 2]
pos_pw_now[k, 0] = pos_pw_target[k, 0]
pos_pw_now[k, 1] = pos_pw_target[k, 1]
pos_pw_now[k, 2] = pos_pw_target[k, 2]
pos_pb_now_shared[k * 3] = pos_pb_now[k, 0]
pos_pb_now_shared[k * 3 + 1] = pos_pb_now[k, 1]
pos_pb_now_shared[k * 3 + 2] = pos_pb_now[k, 2]
pos_pw_now_shared[k * 3] = pos_pw_now[k, 0]
pos_pw_now_shared[k * 3 + 1] = pos_pw_now[k, 1]
pos_pw_now_shared[k * 3 + 2] = pos_pw_now[k, 2]
#
if __name__=="__main__":
#
######## define the queues for the 2 detached plot processes
mp.set_start_method('spawn')
#
s_w = 10.0
# s_w_shared = Value('d', 10.0)
s_w_shared = mp.Value('f', 10.0)
#
s_d = 4.0
s_d_shared = mp.Value('f', 4.0)
#
s_l = 18.0
s_l_shared = mp.Value('f', 18.0)
# exchange lane width
el_w = 1.0 # normally 3
el_w_shared = mp.Value('f', 1.0) # just 1m in order to show the side
# ball radius
# b_r = 0.53 / (2 * math.pi)
# b_r_shared = Value('d', 0.53 / (2 * math.pi))
#
elevation_shared = mp.Value('f', 10.)
azimut_shared = mp.Value('f', 30.)
#
# define/initiate teams blue and white; array
pos_pb_now = []
pos_pb_now_shared = mp.Array('f',3*6)
pos_pb_target = []
pos_pw_now = []
pos_pw_now_shared = mp.Array('f',3*6)
pos_pw_target = []
pos_pb_deltamove = []
pos_pw_deltamove = []
#
pos_ball_now = []
pos_ball_now_shared = mp.Array('f',3)
pos_ball_target = []
pos_ball_deltamove = []
#
clicked_coord = [] # matrix 2x3 for storing coord of clicked points for distance calculation
clicked_coord.append([0., 0., 0.])
clicked_coord.append([0., 0., 0.])
#
selected_coord = [0., 0., 0.]
#
numb_seq = 0
video_page_iter = 0
video_file_name = "test_video_name.mp4"
#
pos_ball_now.append([5.,9.,0.2]) # ball in the middle
pos_ball_target.append([5.,9.,0.2])
pos_ball_deltamove.append([0., 0., 0.])
#
for i in range(6):
# distribute the players at the side with the same distance
# at game start
pos_pb_now.append([((s_w/6)/2)+i*(s_w/6),1.0, s_d])
pos_pb_target.append([((s_w/6)/2)+i*(s_w/6),1.0, s_d])
pos_pw_now.append([s_w - ((s_w / 6) / 2) - i * (s_w / 6), s_l - 1.0, s_d])
pos_pw_target.append([s_w - ((s_w / 6) / 2) - i * (s_w / 6), s_l - 1.0, s_d])
pos_pb_deltamove.append([0., 0., 0.])
pos_pw_deltamove.append([0., 0., 0.])
#
# Define numpy array which is faster to work with
pos_pb_now = np.array(pos_pb_now, dtype='f')
pos_pb_target = np.array(pos_pb_target, dtype='f')
pos_pw_now = np.array(pos_pw_now, dtype='f')
pos_pw_target = np.array(pos_pw_target, dtype='f')
pos_pb_deltamove = np.array(pos_pb_deltamove, dtype='f')
pos_pw_deltamove = np.array(pos_pw_deltamove, dtype='f')
#
pos_ball_now = np.array(pos_ball_now, dtype='f')
pos_ball_target = np.array(pos_ball_target, dtype='f')
pos_ball_deltamove = np.array(pos_ball_deltamove, dtype='f')
#
clicked_coord = np.array(clicked_coord, dtype='f')
selected_coord = np.array(selected_coord, dtype='f')
#
fig = plt.figure()
ax1 = fig.add_subplot(111,projection='3d')
# field
xG = [0,s_w,s_w,0,0, 0,s_w,s_w,s_w,s_w,s_w, 0, 0,0, 0,s_w]
yG = [0, 0, 0,0,0,s_l,s_l, 0, 0,s_l,s_l,s_l,s_l,0,s_l,s_l]
zG = [0, 0, s_d,s_d,0, 0, 0, 0, s_d, s_d, 0, 0, s_d,s_d, s_d, s_d]
ax1.plot_wireframe (xG,yG,zG,colors= (0,0,1,1)) # blue line game area
# exchange area
xW = [s_w,s_w+el_w,s_w+el_w,s_w,s_w,s_w,s_w+el_w,s_w+el_w,s_w+el_w,s_w+el_w,s_w+el_w,s_w,s_w,s_w,s_w,s_w+el_w]
yW = [0, 0, 0, 0, 0,s_l,s_l, 0, 0,s_l,s_l,s_l,s_l, 0,s_l,s_l]
zW = [0, 0, s_d, s_d, 0, 0, 0, 0, s_d, s_d, 0, 0, s_d, s_d, s_d, s_d]
ax1.plot_wireframe (xW,yW,zW,colors= (0,1,1,1)) # light blue line exchange area
#
ax1.set_xlabel('Wide')
ax1.set_ylabel('Length')
ax1.set_zlabel('Water')
#
# draw the 2 lines which show the depth
xG1 = [0, s_w]
yG1 = [s_d, s_d]
zG1 = [0, 0]
ax1.plot_wireframe(xG1, yG1, zG1, colors=(0, 0, 1, 1),linestyle=':') # blue line
xG2 = [0, s_w]
yG2 = [s_l-s_d, s_l-s_d]
zG2 = [0, 0]
ax1.plot_wireframe(xG2, yG2, zG2, colors=(0, 0, 1, 1),linestyle=':') # blue line
#
# put the axis fix
ax1.set_xlim3d(0, s_w+el_w)
ax1.set_ylim3d(0, s_l)
ax1.set_zlim3d(0, s_d)
ax1.set_aspect(aspect=0.15) # the best
draw_basket(ax1, s_w / 2, 0.24, 0., 0.45)
draw_basket(ax1, s_w / 2, s_l - 0.24, 0., 0.45)
free_sphere = draw_halfsphere(ax1, 5., 9., 4., 2.)
p_b = ax1.scatter(pos_pb_now[:, 0], pos_pb_now[:, 1], pos_pb_now[:, 2],
s=400, alpha = 0.5, c=(0, 0, 1, 1))
p_w = ax1.scatter(pos_pw_now[:, 0], pos_pw_now[:, 1],
pos_pw_now[:, 2], s=400, alpha = 0.5, c="darkgrey")
p_ball = ax1.scatter(pos_ball_now[:,0], pos_ball_now[:,1],
pos_ball_now[:,2], s=100, alpha = 0.5, c="red")
for j, xyz_ in enumerate(pos_pb_now):
annotate3D(ax1, s=str(j+1), xyz=xyz_, fontsize=10, xytext=(-3,3),
textcoords='offset points', ha='right',va='bottom')
for j, xyz_ in enumerate(pos_pw_now):
annotate3D(ax1, s=str(j+1), xyz=xyz_, fontsize=10, xytext=(-3,3),
textcoords='offset points', ha='right', va='bottom')
Frame = 5
for j in range(6):
pos_pb_deltamove[j, 0] = 1.
pos_pb_deltamove[j, 1] = 1.
pos_pb_deltamove[j, 2] = 1.
pos_pw_deltamove[j, 0] = 1.
pos_pw_deltamove[j, 1] = 1.
pos_pw_deltamove[j, 2] = 1.
pos_ball_deltamove[0,0] = 1.
pos_ball_deltamove[0,1] = 1.
pos_ball_deltamove[0,2] = 1.
count_iter = 0
ani1 = animation.FuncAnimation(fig, animate, frames=Frame, interval=1000, blit=False, repeat=True, repeat_delay=1000)
plt.pause(0.001)
p1 = mp.Process(target=OneWindow, args=(s_w_shared, s_d_shared, s_l_shared, el_w_shared,elevation_shared,
azimut_shared, pos_pb_now_shared, pos_pw_now_shared, pos_ball_now_shared))
p1.start()
fig.canvas.mpl_connect('motion_notify_event', onMouseMotion)
fig.canvas.mpl_connect('button_press_event', OnClick)
plt.show()
EDIT1:
"python3 field_basket_design_uwr.py" works.
An error is still coming; perhaps a subject for a new thread (not disturbing for the moment); anyway, any comment on how to take this away is welcome. Thanks.
/usr/lib/python3/dist-packages/matplotlib/backend_bases.py:2445: MatplotlibDeprecationWarning: Using default event loop until function specific to this GUI is implemented
warnings.warn(str, mplDeprecation)
/usr/lib/python3/dist-packages/cairocffi/surfaces.py:651: UserWarning: implicit cast from 'char *' to a different pointer type: will be forbidden in the future (check that the types are as you expect; use an explicit ffi.cast() if they are correct)
ffi.cast('char*', address), format, width, height, stride)
A:
The set_start_method in multiprocessing was introduced in Python version 3.4
The error you are facing is due to the fact that you are using an older version of Python. Upgrading to Python 3.4 and above will fix the error.
For more information, refer to -
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.set_start_method
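A quick way to check which interpreter and multiprocessing the command line actually picks up (a diagnostic sketch, not from the original answer):
import sys
import multiprocessing as mp

print(sys.version)                      # set_start_method needs Python >= 3.4
print(hasattr(mp, "set_start_method"))  # False on older interpreters

Given the #!/usr/bin/python3.5 shebang in the question, running python3 field_basket_design_uwr.py picks the intended interpreter, which matches the EDIT1 observation.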
A:
I had the same issue, but it was not a version problem.
The problem was the file name, which was multiprocessing.py in my own project.
When I imported multiprocessing, it was importing the wrong file (my own file), so I just changed the file name. I know it is a bit silly, but it may help others...
Edit: here is an example. If you have a multiprocessing.py file, and the output of cat multiprocessing.py is:
import multiprocessing
if __name__ == '__main__':
multiprocessing.set_start_method('fork')
you get this error. This is obviously because you import your own file instead of the real multiprocessing library. The solution is simply to change your file name to a different one.
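A one-line check for this kind of shadowing (the printed path should point into the standard library, not into your own project):
import multiprocessing
print(multiprocessing.__file__)  # e.g. /usr/lib/python3.5/multiprocessing/__init__.py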
|
AttributeError: 'module' object has no attribute 'set_start_method'
|
The code below starts fine in PyCharm.
But when started from the command line with:
"python field_basket_design_uwr.py"
it gives this error:
Traceback (most recent call last):
File "field_basket_design_uwr.py", line 677, in <module>
mp.set_start_method('spawn')
AttributeError: 'module' object has no attribute 'set_start_method'
Does somebody have an idea how to make the script start without the error?
#!/usr/bin/python3.5
import math
import sys
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk as gtk, Gdk as gdk, GLib, GObject as gobject
import string
import os
import subprocess
import glob
from datetime import datetime, timedelta
import time
import numpy as np
import matplotlib; matplotlib.use('Gtk3Agg')
import matplotlib.animation as animation
from mpl_toolkits.mplot3d.proj3d import proj_transform
from matplotlib.text import Annotation
from matplotlib.backends.backend_gtk3cairo import FigureCanvasGTK3Cairo as FigureCanvas
import matplotlib.pyplot as plt
import multiprocessing as mp
class Annotation3D(Annotation):
'''Annotate the point xyz with text s'''
def __init__(self, s, xyz, *args, **kwargs):
Annotation.__init__(self,s, xy=(0,0), *args, **kwargs)
self._verts3d = xyz
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.xy=(xs,ys)
Annotation.draw(self, renderer)
#
def annotate3D(ax, s, *args, **kwargs):
'''add annotation text s to Axes3d ax'''
tag = Annotation3D(s, *args, **kwargs)
ax.add_artist(tag)
#
def draw_basket(ax1, x, y, z, h, color='black'):
'''add basket to the ax1 figure'''
t = np.linspace(0, np.pi * 2, 16)
ax1.plot(x+0.24*np.cos(t), y+0.24*np.sin(t), z, linewidth=1, color=color)
ax1.plot(x+0.16*np.cos(t), y+0.16*np.sin(t), z, linewidth=1, color=color)
ax1.plot(x+0.24*np.cos(t), y+0.24*np.sin(t), z+h, linewidth=1, color=color)
A=0
while A < 16:
xBar = [x+ 0.16 * math.sin(A*22.5*np.pi/180),x+ 0.24 * math.sin(A*22.5*np.pi/180)]
yBar = [y+ 0.16 * math.cos(A*22.5*np.pi/180),y+ 0.24 * math.cos(A*22.5*np.pi/180)]
zBar = [0,h]
ax1.plot(xBar, yBar, zBar, color=color)
A = A+1
def draw_halfsphere (ax1, x, y, z, sph_radius, color=(0,0,1,1)):
''' add free distance surface to Axes3d ax1 '''
u, v = np.mgrid[0:2 * np.pi:20j, 0:np.pi/2:10j]
xP1 = x + sph_radius * np.cos(u) * np.sin(v)
yP1 = y + sph_radius * np.sin(u) * np.sin(v)
zP1 = z - sph_radius * np.cos(v)
halffreesphere = ax1.plot_wireframe(xP1, yP1, zP1, color=color, alpha=0.3)
return halffreesphere
def OnClick(event):
global selected_coord
global clicked_coord
clicked_coord [0, 0] = clicked_coord [1, 0]
clicked_coord [0, 1] = clicked_coord [1, 1]
clicked_coord [0, 2] = clicked_coord [1, 2]
clicked_coord [1, 0] = selected_coord[0]
clicked_coord [1, 1] = selected_coord[1]
clicked_coord [1, 2] = selected_coord[2]
print ("selected position X: %5.2f Y: %5.2f Z: %5.2f" % (selected_coord[0], selected_coord[1],selected_coord[2]))
print ("distance between selected points: %5.2f", np.sqrt ((clicked_coord [0, 0] - clicked_coord [1, 0])**2
+ (clicked_coord [0, 1]- clicked_coord [1, 1])**2
+ (clicked_coord [0, 2] - clicked_coord [1, 2])**2))
def distance(point, event):
"""Return distance between mouse position and given data point
Args:
point (np.array): np.array of shape (3,), with x,y,z in data coords
event (MouseEvent): mouse event (which contains mouse position in .x and .xdata)
Returns:
distance (np.float64): distance (in screen coords) between mouse pos and data point
"""
x2, y2, _ = proj_transform(point[0], point[1], point[2], plt.gca().get_proj())
x3, y3 = ax1.transData.transform((x2, y2))
return np.sqrt ((x3 - event.x)**2 + (y3 - event.y)**2)
def calcClosestDatapoint(X, event):
""""Calculate which data point is closest to the mouse position.
Args:
X (np.array) - array of points, of shape (numPoints, 3)
event (MouseEvent) - mouse event (containing mouse position)
returns:
smallestIndex (int) - the index (into the array of points X) of the element closest to the mouse position
"""
distances = [distance (X[i, 0:3], event) for i in range(X.shape[0])]
return np.argmin(distances),np.amin(distances)
def annotatePlot(X, index):
global selected_coord
"""Create popover label in 3d chart
Args:
X (np.array) - array of points, of shape (numPoints, 3)
index (int) - index (into points array X) of item which should be printed
Returns:
None
"""
# If we have previously displayed another label, remove it first
if hasattr(annotatePlot, 'label'):
annotatePlot.label.remove()
# Get data point from array of points X, at position index
x2, y2, _ = proj_transform(X[index, 0], X[index, 1], X[index, 2], ax1.get_proj())
annotatePlot.label = plt.annotate( "Select %d" % (index+1),
xy = (x2, y2), xytext = (-20, 20), textcoords = 'offset points', ha = 'right', va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5),
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))
# make coord from label available global for other function like distance measurement between points
selected_coord[0]=X[index, 0]
selected_coord[1]=X[index, 1]
selected_coord[2]=X[index, 2]
#
fig.canvas.draw()
def onMouseMotion(event):
global pos_pb_now, pos_pw_now
"""Event that is triggered when mouse is moved. Shows text annotation over data point closest to mouse."""
closestIndexW,LowestDistanceW = calcClosestDatapoint(pos_pw_now, event)
closestIndexB,LowestDistanceB = calcClosestDatapoint(pos_pb_now, event)
if LowestDistanceW < LowestDistanceB:
annotatePlot (pos_pw_now, closestIndexW)
else:
annotatePlot (pos_pb_now, closestIndexB)
#
def OneWindow(s_w_shared,s_d_shared,s_l_shared,el_w_shared,elevation_shared, azimut_shared, pb,
pw, ball):
import numpy as np
import matplotlib.pyplot as plt
''' Sub-processed Plot viewer of the main windows; copy/paste in one; it helps for PC with 2 monitors
The main windows remain the control window of the trainer. This window is the view windows of the trained player'''
#
def animate_one(i):
p_b_one._offsets3d = pos_pb_now_one[:, 0], pos_pb_now_one[:, 1], pos_pb_now_one[:, 2]
p_w_one._offsets3d = pos_pw_now_one[:, 0], pos_pw_now_one[:, 1], pos_pw_now_one[:, 2]
p_ball_one._offsets3d = pos_ball_now_one[:, 0], pos_ball_now_one[:, 1], pos_ball_now_one[:, 2]
ax1_one.view_init(elev=elevation_shared.value, azim=azimut_shared.value)
fig_one = plt.figure()
ax1_one = fig_one.add_subplot(111,projection='3d')
#
arrpb = np.frombuffer(pb.get_obj(), dtype='f')
pos_pb_now_one = np.reshape(arrpb, (6, 3))
#
arrpw = np.frombuffer(pw.get_obj(), dtype='f')
pos_pw_now_one = np.reshape(arrpw, (6, 3))
#
arrball = np.frombuffer(ball.get_obj(), dtype='f')
pos_ball_now_one = np.reshape(arrball, (1, 3))
xG = [0,s_w_shared.value,s_w_shared.value,0,0, 0,s_w_shared.value,s_w_shared.value,s_w_shared.value,
s_w_shared.value,s_w_shared.value, 0, 0,0, 0,s_w_shared.value]
yG = [0, 0, 0,0,0,s_l_shared.value,s_l_shared.value, 0, 0,s_l_shared.value,s_l_shared.value,s_l_shared.value,
s_l_shared.value,0,s_l_shared.value,s_l_shared.value]
zG = [0, 0, s_d_shared.value,s_d_shared.value,0, 0, 0, 0, s_d_shared.value, s_d_shared.value, 0, 0,
s_d_shared.value,s_d_shared.value, s_d_shared.value, s_d_shared.value]
ax1_one.plot_wireframe (xG,yG,zG,colors= (0,0,1,1)) # blue line game area
xW = [s_w_shared.value,s_w_shared.value+el_w_shared.value,s_w_shared.value+el_w_shared.value,s_w_shared.value,
s_w_shared.value,s_w_shared.value,s_w_shared.value+el_w_shared.value,s_w_shared.value+el_w_shared.value,
s_w_shared.value+el_w_shared.value,s_w_shared.value+el_w_shared.value,s_w_shared.value+el_w_shared.value,
s_w_shared.value,s_w_shared.value,s_w_shared.value,s_w_shared.value,s_w_shared.value+el_w_shared.value]
yW = [0, 0, 0, 0, 0,s_l_shared.value,s_l_shared.value, 0, 0,s_l_shared.value,s_l_shared.value,s_l_shared.value,
s_l_shared.value, 0,s_l_shared.value,s_l_shared.value]
zW = [0, 0, s_d_shared.value, s_d_shared.value, 0, 0, 0, 0, s_d_shared.value, s_d_shared.value, 0, 0,
s_d_shared.value, s_d_shared.value, s_d_shared.value, s_d_shared.value]
ax1_one.plot_wireframe (xW,yW,zW,colors= (0,1,1,1)) # light blue line exchange area
#
ax1_one.set_xlabel('Wide')
ax1_one.set_ylabel('Length')
ax1_one.set_zlabel('Water')
#
# draw the 2 lines which show the depth
xG1 = [0, s_w_shared.value]
yG1 = [s_d_shared.value, s_d_shared.value]
zG1 = [0, 0]
ax1_one.plot_wireframe(xG1, yG1, zG1, colors=(0, 0, 1, 1),linestyle=':') # blue line
xG2 = [0, s_w_shared.value]
yG2 = [s_l_shared.value-s_d_shared.value, s_l_shared.value-s_d_shared.value]
zG2 = [0, 0]
ax1_one.plot_wireframe(xG2, yG2, zG2, colors=(0, 0, 1, 1),linestyle=':') # blue line
#
# put the axis fix
ax1_one.set_xlim3d(0, s_w_shared.value+el_w_shared.value)
ax1_one.set_ylim3d(0, s_l_shared.value)
ax1_one.set_zlim3d(0, s_d_shared.value)
ax1_one.set_aspect(aspect=0.222)
draw_basket(ax1_one, s_w_shared.value / 2, 0.24, 0., 0.45)
draw_basket(ax1_one, s_w_shared.value / 2, s_l_shared.value - 0.24, 0., 0.45)
#
p_b_one = ax1_one.scatter(pos_pb_now_one[:, 0], pos_pb_now_one[:, 1], pos_pb_now_one[:, 2],
s=400, alpha = 0.5, c=(0, 0, 1, 1))
p_w_one = ax1_one.scatter(pos_pw_now_one[:, 0], pos_pw_now_one[:, 1],
pos_pw_now_one[:, 2], s=400, alpha = 0.5, c="darkgrey")
p_ball_one = ax1_one.scatter(pos_ball_now_one[:,0], pos_ball_now_one[:,1],
pos_ball_now_one[:,2], s=100, alpha = 0.5, c="red")
for j, xyz_ in enumerate(pos_pb_now_one):
annotate3D(ax1_one, s=str(j+1), xyz=xyz_, fontsize=10, xytext=(-3,3),
textcoords='offset points', ha='right',va='bottom')
for j, xyz_ in enumerate(pos_pw_now_one):
annotate3D(ax1_one, s=str(j+1), xyz=xyz_, fontsize=10, xytext=(-3,3),
textcoords='offset points', ha='right', va='bottom')
Frame = 10
ani1_one = animation.FuncAnimation(fig_one, animate_one, frames=Frame, interval=600, blit=False, repeat=True,
repeat_delay=500)
#
plt.pause(0.001)
plt.show()
def animate(i):
global pos_pb_now, pos_pb_now_shared, pos_pb_target, p_b, pos_pb_deltamove
global pos_pw_now, pos_pw_now_shared, pos_pw_target, p_w, pos_pw_deltamove
global pos_ball_now, pos_ball_now_shared, pos_ball_target, p_ball, pos_ball_deltamove
global Frame
global count_iter
global video_page_iter
global azimut_shared
global elevation_shared
global video_file_name
# global EmitPosOneWin
# global EmitPosFourWin
global ax1
global free_sphere
#
azimut, elevation = ax1.azim, ax1.elev
# print ("azimut from main",azimut)
azimut_shared.value = azimut
# print ("azimut_shared value from main",azimut_shared.value)
elevation_shared.value = elevation
pos_ball_now[0,0] += (1. / Frame) * pos_ball_deltamove[0,0]
pos_ball_now[0,1] += (1. / Frame) * pos_ball_deltamove[0,1]
pos_ball_now[0,2] += (1. / Frame) * pos_ball_deltamove[0,2]
#
# EmitPosOneWin.put(['bp', 0, pos_ball_now[0,0], pos_ball_now[0,1], pos_ball_now[0,2]])
# EmitPosFourWin.put(['bp', 0, pos_ball_now[0,0], pos_ball_now[0,1], pos_ball_now[0,2]])
pos_ball_now_shared[0] = pos_ball_now[0, 0]
pos_ball_now_shared[1] = pos_ball_now[0, 1]
pos_ball_now_shared[2] = pos_ball_now[0, 2]
for j in range(6):
pos_pb_now[j, 0] += (1. / Frame) * pos_pb_deltamove[j, 0]
pos_pb_now[j, 1] += (1. / Frame) * pos_pb_deltamove[j, 1]
pos_pb_now[j, 2] += (1. / Frame) * pos_pb_deltamove[j, 2]
pos_pw_now[j, 0] += (1. / Frame) * pos_pw_deltamove[j, 0]
pos_pw_now[j, 1] += (1. / Frame) * pos_pw_deltamove[j, 1]
pos_pw_now[j, 2] += (1. / Frame) * pos_pw_deltamove[j, 2]
#
# feed the queue; queue because that animation could be paused
# EmitPosOneWin.put(['pb', j, pos_pb_now[j, 0], pos_pb_now[j, 1], pos_pb_now[j, 2]])
# EmitPosOneWin.put(['pw', j, pos_pw_now[j, 0], pos_pw_now[j, 1], pos_pw_now[j, 2]])
# EmitPosFourWin.put(['pb', j, pos_pb_now[j, 0], pos_pb_now[j, 1], pos_pb_now[j, 2]])
# EmitPosFourWin.put(['pw', j, pos_pw_now[j, 0], pos_pw_now[j, 1], pos_pw_now[j, 2]])
pos_pb_now_shared[j*3] = pos_pb_now[j,0]
pos_pb_now_shared[j*3+1] = pos_pb_now[j,1]
pos_pb_now_shared[j*3+2] = pos_pb_now[j,2]
pos_pw_now_shared[j*3] = pos_pw_now[j,0]
pos_pw_now_shared[j*3+1] = pos_pw_now[j,1]
pos_pw_now_shared[j*3+2] = pos_pw_now[j,2]
#
p_b._offsets3d = pos_pb_now[:, 0], pos_pb_now[:, 1], pos_pb_now[:, 2]
p_w._offsets3d = pos_pw_now[:, 0], pos_pw_now[:, 1], pos_pw_now[:, 2]
p_ball._offsets3d = pos_ball_now[:,0],pos_ball_now[:,1],pos_ball_now[:,2]
#
video_page_iter = video_page_iter+1 # if video is on
plt.savefig("/home/family/Bilder" + "/file%03d.png" % video_page_iter) # if video is on
#
if video_page_iter==100: # or if command store video
os.chdir("/home/family/Bilder")
subprocess.call([
'ffmpeg', '-framerate', '8', '-i', 'file%03d.png', '-r', '30', '-pix_fmt', 'yuv420p',
# 'video_name.mp4'
video_file_name
]) # add -y to overwrite test this
for file_name in glob.glob("*.png"):
os.remove(file_name)
video_page_iter = 0
# simulate the deletion of the free domain. Will be activated later by a GUI
free_sphere.remove()
# fig.canvas.draw()
if i == (Frame - 1):
# reset the deltamove to a clean zero for last position in case of rounding elements
# or set to next step of dynamic move
count_iter = count_iter+1
m, s = divmod(count_iter, 2)
if s == 1:
free_sphere.remove()
fig.canvas.draw()
pos_ball_deltamove[0,0] = -1.
pos_ball_deltamove[0,1] = -1.
pos_ball_deltamove[0,2] = -1.
for k in range(6):
pos_pb_deltamove[k, 0] = -1.
pos_pb_deltamove[k, 1] = -1.
pos_pb_deltamove[k, 2] = -1.
pos_pw_deltamove[k, 0] = -1.
pos_pw_deltamove[k, 1] = -1.
pos_pw_deltamove[k, 2] = -1.
else:
free_sphere = draw_halfsphere(ax1, 5., 9., 4., 2.)
pos_ball_deltamove[0,0] = 1.
pos_ball_deltamove[0,1] = 1.
pos_ball_deltamove[0,2] = 1.
for k in range(6):
pos_pb_deltamove[k, 0] = 1.
pos_pb_deltamove[k, 1] = 1.
pos_pb_deltamove[k, 2] = 1.
pos_pw_deltamove[k, 0] = 1.
pos_pw_deltamove[k, 1] = 1.
pos_pw_deltamove[k, 2] = 1.
pos_ball_now[0,0] = pos_ball_target[0,0]
pos_ball_now[0,1] = pos_ball_target[0,1]
pos_ball_now[0,2] = pos_ball_target[0,2]
pos_ball_now_shared[0] = pos_ball_now[0, 0]
pos_ball_now_shared[1] = pos_ball_now[0, 1]
pos_ball_now_shared[2] = pos_ball_now[0, 2]
for k in range(6):
pos_pb_now[k, 0] = pos_pb_target[k, 0]
pos_pb_now[k, 1] = pos_pb_target[k, 1]
pos_pb_now[k, 2] = pos_pb_target[k, 2]
pos_pw_now[k, 0] = pos_pw_target[k, 0]
pos_pw_now[k, 1] = pos_pw_target[k, 1]
pos_pw_now[k, 2] = pos_pw_target[k, 2]
pos_pb_now_shared[k * 3] = pos_pb_now[k, 0]
pos_pb_now_shared[k * 3 + 1] = pos_pb_now[k, 1]
pos_pb_now_shared[k * 3 + 2] = pos_pb_now[k, 2]
pos_pw_now_shared[k * 3] = pos_pw_now[k, 0]
pos_pw_now_shared[k * 3 + 1] = pos_pw_now[k, 1]
pos_pw_now_shared[k * 3 + 2] = pos_pw_now[k, 2]
#
if __name__=="__main__":
#
######## define the queues for the 2 detached plot processes
mp.set_start_method('spawn')
#
s_w = 10.0
# s_w_shared = Value('d', 10.0)
s_w_shared = mp.Value('f', 10.0)
#
s_d = 4.0
s_d_shared = mp.Value('f', 4.0)
#
s_l = 18.0
s_l_shared = mp.Value('f', 18.0)
# exchange lane width
el_w = 1.0 # normally 3
el_w_shared = mp.Value('f', 1.0) # just 1m in order to show the side
# ball radius
# b_r = 0.53 / (2 * math.pi)
# b_r_shared = Value('d', 0.53 / (2 * math.pi))
#
elevation_shared = mp.Value('f', 10.)
azimut_shared = mp.Value('f', 30.)
#
# define/initiate teams blue and white; array
pos_pb_now = []
pos_pb_now_shared = mp.Array('f',3*6)
pos_pb_target = []
pos_pw_now = []
pos_pw_now_shared = mp.Array('f',3*6)
pos_pw_target = []
pos_pb_deltamove = []
pos_pw_deltamove = []
#
pos_ball_now = []
pos_ball_now_shared = mp.Array('f',3)
pos_ball_target = []
pos_ball_deltamove = []
#
clicked_coord = [] # matrix 2x3 for storing coord of clicked points for distance calculation
clicked_coord.append([0., 0., 0.])
clicked_coord.append([0., 0., 0.])
#
selected_coord = [0., 0., 0.]
#
numb_seq = 0
video_page_iter = 0
video_file_name = "test_video_name.mp4"
#
pos_ball_now.append([5.,9.,0.2]) # ball in the middle
pos_ball_target.append([5.,9.,0.2])
pos_ball_deltamove.append([0., 0., 0.])
#
for i in range(6):
# distribute the players at the side with the same distance
# at game start
pos_pb_now.append([((s_w/6)/2)+i*(s_w/6),1.0, s_d])
pos_pb_target.append([((s_w/6)/2)+i*(s_w/6),1.0, s_d])
pos_pw_now.append([s_w - ((s_w / 6) / 2) - i * (s_w / 6), s_l - 1.0, s_d])
pos_pw_target.append([s_w - ((s_w / 6) / 2) - i * (s_w / 6), s_l - 1.0, s_d])
pos_pb_deltamove.append([0., 0., 0.])
pos_pw_deltamove.append([0., 0., 0.])
#
# Define numpy array which is faster to work with
pos_pb_now = np.array(pos_pb_now, dtype='f')
pos_pb_target = np.array(pos_pb_target, dtype='f')
pos_pw_now = np.array(pos_pw_now, dtype='f')
pos_pw_target = np.array(pos_pw_target, dtype='f')
pos_pb_deltamove = np.array(pos_pb_deltamove, dtype='f')
pos_pw_deltamove = np.array(pos_pw_deltamove, dtype='f')
#
pos_ball_now = np.array(pos_ball_now, dtype='f')
pos_ball_target = np.array(pos_ball_target, dtype='f')
pos_ball_deltamove = np.array(pos_ball_deltamove, dtype='f')
#
clicked_coord = np.array(clicked_coord, dtype='f')
selected_coord = np.array(selected_coord, dtype='f')
#
fig = plt.figure()
ax1 = fig.add_subplot(111,projection='3d')
# field
xG = [0,s_w,s_w,0,0, 0,s_w,s_w,s_w,s_w,s_w, 0, 0,0, 0,s_w]
yG = [0, 0, 0,0,0,s_l,s_l, 0, 0,s_l,s_l,s_l,s_l,0,s_l,s_l]
zG = [0, 0, s_d,s_d,0, 0, 0, 0, s_d, s_d, 0, 0, s_d,s_d, s_d, s_d]
ax1.plot_wireframe (xG,yG,zG,colors= (0,0,1,1)) # blue line game area
# exchange area
xW = [s_w,s_w+el_w,s_w+el_w,s_w,s_w,s_w,s_w+el_w,s_w+el_w,s_w+el_w,s_w+el_w,s_w+el_w,s_w,s_w,s_w,s_w,s_w+el_w]
yW = [0, 0, 0, 0, 0,s_l,s_l, 0, 0,s_l,s_l,s_l,s_l, 0,s_l,s_l]
zW = [0, 0, s_d, s_d, 0, 0, 0, 0, s_d, s_d, 0, 0, s_d, s_d, s_d, s_d]
ax1.plot_wireframe (xW,yW,zW,colors= (0,1,1,1)) # light blue line exchange area
#
ax1.set_xlabel('Wide')
ax1.set_ylabel('Length')
ax1.set_zlabel('Water')
#
# draw the 2 lines which show the depth
xG1 = [0, s_w]
yG1 = [s_d, s_d]
zG1 = [0, 0]
ax1.plot_wireframe(xG1, yG1, zG1, colors=(0, 0, 1, 1),linestyle=':') # blue line
xG2 = [0, s_w]
yG2 = [s_l-s_d, s_l-s_d]
zG2 = [0, 0]
ax1.plot_wireframe(xG2, yG2, zG2, colors=(0, 0, 1, 1),linestyle=':') # blue line
#
# put the axis fix
ax1.set_xlim3d(0, s_w+el_w)
ax1.set_ylim3d(0, s_l)
ax1.set_zlim3d(0, s_d)
ax1.set_aspect(aspect=0.15) # the best
draw_basket(ax1, s_w / 2, 0.24, 0., 0.45)
draw_basket(ax1, s_w / 2, s_l - 0.24, 0., 0.45)
free_sphere = draw_halfsphere(ax1, 5., 9., 4., 2.)
p_b = ax1.scatter(pos_pb_now[:, 0], pos_pb_now[:, 1], pos_pb_now[:, 2],
s=400, alpha = 0.5, c=(0, 0, 1, 1))
p_w = ax1.scatter(pos_pw_now[:, 0], pos_pw_now[:, 1],
pos_pw_now[:, 2], s=400, alpha = 0.5, c="darkgrey")
p_ball = ax1.scatter(pos_ball_now[:,0], pos_ball_now[:,1],
pos_ball_now[:,2], s=100, alpha = 0.5, c="red")
for j, xyz_ in enumerate(pos_pb_now):
annotate3D(ax1, s=str(j+1), xyz=xyz_, fontsize=10, xytext=(-3,3),
textcoords='offset points', ha='right',va='bottom')
for j, xyz_ in enumerate(pos_pw_now):
annotate3D(ax1, s=str(j+1), xyz=xyz_, fontsize=10, xytext=(-3,3),
textcoords='offset points', ha='right', va='bottom')
Frame = 5
for j in range(6):
pos_pb_deltamove[j, 0] = 1.
pos_pb_deltamove[j, 1] = 1.
pos_pb_deltamove[j, 2] = 1.
pos_pw_deltamove[j, 0] = 1.
pos_pw_deltamove[j, 1] = 1.
pos_pw_deltamove[j, 2] = 1.
pos_ball_deltamove[0,0] = 1.
pos_ball_deltamove[0,1] = 1.
pos_ball_deltamove[0,2] = 1.
count_iter = 0
ani1 = animation.FuncAnimation(fig, animate, frames=Frame, interval=1000, blit=False, repeat=True, repeat_delay=1000)
plt.pause(0.001)
p1 = mp.Process(target=OneWindow, args=(s_w_shared, s_d_shared, s_l_shared, el_w_shared,elevation_shared,
azimut_shared, pos_pb_now_shared, pos_pw_now_shared, pos_ball_now_shared))
p1.start()
fig.canvas.mpl_connect('motion_notify_event', onMouseMotion)
fig.canvas.mpl_connect('button_press_event', OnClick)
plt.show()
EDIT1:
"python3 field_basket_design_uwr.py" works.
The following warnings are still coming; they are perhaps subject to a new thread (not disturbing for the moment); anyway, any comment on how to take them away is welcome. Thanks.
/usr/lib/python3/dist-packages/matplotlib/backend_bases.py:2445: MatplotlibDeprecationWarning: Using default event loop until function specific to this GUI is implemented
warnings.warn(str, mplDeprecation)
/usr/lib/python3/dist-packages/cairocffi/surfaces.py:651: UserWarning: implicit cast from 'char *' to a different pointer type: will be forbidden in the future (check that the types are as you expect; use an explicit ffi.cast() if they are correct)
ffi.cast('char*', address), format, width, height, stride)
|
[
"The set_start_method in multiprocessing was introduced in Python version 3.4\nThe error you are facing is due to the fact that you are using an older version of Python. Upgrading to Python 3.4 and above will fix the error.\nFor more information, refer to -\nhttps://docs.python.org/3/library/multiprocessing.html#multiprocessing.set_start_method\n",
"I had the same issue, but it was not a version problem.\nThe problem was the file name which is multiprocessing.py in my own library.\nWhen I import multiprocessing, it was importing the wrong file (my own file). So, I just changed the file name. I know it is a bit silly, but it may help others...\nEdit: Here is an example. If you have multiprocessing.py file, and cat multiprocessing.py output is:\nimport multiprocessing\n\n\nif __name__ == '__main__':\n multiprocessing.set_start_method('fork')\n\nyou get this error. This is obviously because you include your own file instead of the real multiprocessing library. The solution is simply to change your file name to a different one.\n"
] |
[
3,
0
] |
[] |
[] |
[
"matplotlib",
"multiprocessing",
"python"
] |
stackoverflow_0049597563_matplotlib_multiprocessing_python.txt
|
Q:
Install old spaCy release in a MAC computer
I would like to install spaCy V3.2.1 in my virtual environment (MacBook Air, Apple M1 processor, macOS Ventura 13.0). The commands I run, inspired by the spaCy widget and the specific information for Apple computers, are:
# Create and activate virtual environment
python -m venv venv
source venv/bin/activate
# Install latest pip version
python -m pip install --upgrade pip
# Install spaCy with my needed requirements
pip install setuptools wheel
pip install -U 'spacy[apple]'==3.2.1
python -m spacy download en_core_web_sm
The previous commands get me the following error:
1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
----------------------------------------
ERROR: Failed building wheel for spacy
Failed to build spacy
ERROR: Could not build wheels for spacy which use PEP 517 and cannot be installed directly
UPDATE: The full error can be temporarily found here.
If I run pip install -U 'spacy[apple]' alone, this will (successfully) install spaCy V3.4.3, which is the latest release when this post was written, BUT this is NOT what I am looking for.
IMPORTANT: It is preferred to install spaCy V3.2.1 via pip, however not mandatory (i.e., as long as I have that spaCy version installed in venv and can be successfully imported from Python scripts, it will work for me).
Thanks!
A:
As suggested in the comment section, the error was solved after upgrading the OS to macOS Ventura 13, which implicitly upgraded Xcode to version 14.1.
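For example, a quick way to confirm the active toolchain before retrying the install (a minimal sketch; it assumes Xcode's command line tools are set up):
xcode-select -p     # prints the active developer directory
clang --version     # should report the Apple clang shipped with Xcode 14.1
pip install -U 'spacy[apple]'==3.2.1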
|
Install old spaCy release in a MAC computer
|
I would like to install spaCy V3.2.1 in my virtual environment (MacBook Air, Apple M1 processor, macOS Ventura 13.0). The commands I run, inspired by the spaCy widget and the specific information for Apple computers, are:
# Create and activate virtual environment
python -m venv venv
source venv/bin/activate
# Install latest pip version
python -m pip install --upgrade pip
# Install spaCy with my needed requirements
pip install setuptools wheel
pip install -U 'spacy[apple]'==3.2.1
python -m spacy download en_core_web_sm
The previous commands get me the following error:
1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
----------------------------------------
ERROR: Failed building wheel for spacy
Failed to build spacy
ERROR: Could not build wheels for spacy which use PEP 517 and cannot be installed directly
UPDATE: The full error can be temporarily found here.
If I run pip install -U 'spacy[apple]' alone, this will (successfully) install spaCy V3.4.3, which is the latest release when this post was written, BUT this is NOT what I am looking for.
IMPORTANT: It is preferred to install spaCy V3.2.1 via pip, however not mandatory (i.e., as long as I have that spaCy version installed in venv and can be successfully imported from Python scripts, it will work for me).
Thanks!
|
[
"As suggested in the comment section, the error was solved after upgrading the OS to macOS Ventura 13, which implicitly upgraded Xcode to version 14.1.\n"
] |
[
0
] |
[] |
[] |
[
"pip",
"python",
"spacy_3"
] |
stackoverflow_0074185326_pip_python_spacy_3.txt
|
Q:
python cut row in pandas df
I have dataframe
0 г. Санкт-Петербург, ул. Карпинского,
1 г. Челябинск, проспект Комсомольский,
2 г. Екатеринбург, ул. Щербакова,
3 г. Санкт-Петербург, ул. Латышских Стрелков,
4 г. Москва, вн.тер.г. муниципальный округ Измай...
I want everything between 'г.' and ',', like
0 Санкт-Петербург
1 Челябинск
2 Екатеринбург
3 Санкт-Петербург
4 Москва
I have the code data['col'] = data['address'].str.extract('(г.*,)') but it doesn't give me the desired result
A:
You can use str.extract with:
data['col'] = data['address'].str.extract(r'г. *([^,]+),', expand=False)
output:
address col
0 г. Санкт-Петербург, ул. Карпинского, Санкт-Петербург
1 г. Челябинск, проспект Комсомольский, Челябинск
2 г. Екатеринбург, ул. Щербакова, Екатеринбург
3 г. Санкт-Петербург, ул. Латышских Стрелков, Санкт-Петербург
4 г. Москва, вн.тер.г. муниципальный округ Измай... Москва
A:
Consider using split() for this case given the pattern:
data['col'] = [x.split()[1][:-1] for x in data['address']]
Returning:
col
0 Санкт-Петербург
1 Челябинск
2 Екатеринбург
3 Санкт-Петербург
4 Москва
|
python cut row in pandas df
|
I have dataframe
0 г. Санкт-Петербург, ул. Карпинского,
1 г. Челябинск, проспект Комсомольский,
2 г. Екатеринбург, ул. Щербакова,
3 г. Санкт-Петербург, ул. Латышских Стрелков,
4 г. Москва, вн.тер.г. муниципальный округ Измай...
I want everything between 'г.' and ',', like
0 Санкт-Петербург
1 Челябинск
2 Екатеринбург
3 Санкт-Петербург
4 Москва
I have the code data['col'] = data['address'].str.extract('(г.*,)') but it doesn't give me the desired result
|
[
"You can use str.extract with:\ndata['col'] = data['address'].str.extract(r'г. *([^,]+),', expand=False)\n\noutput:\n address col\n0 г. Санкт-Петербург, ул. Карпинского, Санкт-Петербург\n1 г. Челябинск, проспект Комсомольский, Челябинск\n2 г. Екатеринбург, ул. Щербакова, Екатеринбург\n3 г. Санкт-Петербург, ул. Латышских Стрелков, Санкт-Петербург\n4 г. Москва, вн.тер.г. муниципальный округ Измай... Москва\n\n",
"Consider using split() for this case given the pattern:\ndata['col'] = [x.split()[1][:-1] for x in data['address']]\n\nReturning:\n col\n0 Санкт-Петербург\n1 Челябинск\n2 Екатеринбург\n3 Санкт-Петербург\n4 Москва\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074461445_pandas_python.txt
|
Q:
Apply function to each element of list and rename
I would like to run two separate loops on df. In the first step, I would like to filter the df by sex (male, female) and year (yrs 2008:2013) and save these dataframes in a list. In the second step, I would like to do some kind of analysis to each element of the list and name the output based on which sex & year combination it came from.
I realize I can do this in one step, but my actual code is significantly more complex and throws an error, which stops the loop so it never advances to the second stage. Consequently, I need to break it up into two steps. This is what I have so far. I would like to ask for help on the second stage. How do I run the make_graph function on each element of the list and name the output according to the sex & year combination?
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df_toy=pd.DataFrame([])
df_toy['value'] = np.random.randint(low=1, high=1000, size=100000)
df_toy['age'] = np.random.choice(range(0, 92), 100000)
df_toy['sex'] = np.random.choice([0, 1], 100000)
df_toy['year'] = np.random.randint(low=2008, high=2013, size=100000)
def format_data(df_toy, SEX, YEAR):
df_toy = df_toy[(df_toy["sex"] == SEX) & (df_toy["year"] == YEAR) ]
return df_toy
def make_graph(df_):
plt.scatter(age, value)
return df_toy
dfs = []
for SEX in range(0,3):
for YEAR in range(2008,2014):
dfs.append(format_data(df_toy, SEX, YEAR))
for i in range(len(dfs)):
df_=dfs[i]
make_graph(df_)
df_YEAR_SEX=df_
A:
IIUC you could filter, plot and save the data like this. Since I don't know the actual data I don't know why you need to do it in 2 steps, but here is how you could do it with a few changes.
# Input data
df_toy = pd.DataFrame({
'value' : np.random.randint(low=1, high=1000, size=100000),
'age' : np.random.choice(range(0, 92), 100000),
'sex' : np.random.choice([0, 1], 100000),
'year' : np.random.randint(low=2008, high=2013, size=100000)
})
def filter_and_plot(df, SEX, YEAR):
# filter the df for sex and year
tmp = df[(df["sex"] == SEX) & (df["year"] == YEAR)]
# create a new plot for each filtered df and plot it
fig, ax = plt.subplots()
ax.scatter(x=tmp['age'], y=tmp['value'], s=0.4)
# return the filtered df
return tmp
result_dict = {}
for SEX in range(0,2):
for YEAR in range(2008, 2013):
# use a f-string to build a key in a dictionary which includes sex and year
# keys look like this: "df_1_2009", the value to each key is the filtered dataframe
result_dict[f"df_{SEX}_{YEAR}"] = filter_and_plot(df_toy, SEX, YEAR)
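Each filtered frame can then be pulled back out of the dictionary by its key, for example:
# sex == 1, year == 2009
df_sex1_2009 = result_dict["df_1_2009"]
print(df_sex1_2009.shape)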
|
Apply function to each element of list and rename
|
I would like to run two separate loops on df. In the first step, I would like to filter the df by sex (male, female) and year (yrs 2008:2013) and save these dataframes in a list. In the second step, I would like to do some kind of analysis to each element of the list and name the output based on which sex & year combination it came from.
I realize I can do this in one step, but my actual code is significantly more complex and throws an error, which stops the loop so it never advances to the second stage. Consequently, I need to break it up into two steps. This is what I have so far. I would like to ask for help on the second stage. How do I run the make_graph function on each element of the list and name the output according to the sex & year combination?
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df_toy=pd.DataFrame([])
df_toy['value'] = np.random.randint(low=1, high=1000, size=100000)
df_toy['age'] = np.random.choice(range(0, 92), 100000)
df_toy['sex'] = np.random.choice([0, 1], 100000)
df_toy['year'] = np.random.randint(low=2008, high=2013, size=100000)
def format_data(df_toy, SEX, YEAR):
df_toy = df_toy[(df_toy["sex"] == SEX) & (df_toy["year"] == YEAR) ]
return df_toy
def make_graph(df_):
plt.scatter(age, value)
return df_toy
dfs = []
for SEX in range(0,3):
for YEAR in range(2008,2014):
dfs.append(format_data(df_toy, SEX, YEAR))
for i in range(len(dfs)):
df_=dfs[i]
make_graph(df_)
df_YEAR_SEX=df_
|
[
"IIUC you could filter plot and save the data like this. Since I don't know the actual data I don't know why you need to do it in 2 steps, here is how you could do it with a few changes.\n# Input data\ndf_toy = pd.DataFrame({\n 'value' : np.random.randint(low=1, high=1000, size=100000),\n 'age' : np.random.choice(range(0, 92), 100000),\n 'sex' : np.random.choice([0, 1], 100000),\n 'year' : np.random.randint(low=2008, high=2013, size=100000)\n})\n\ndef filter_and_plot(df, SEX, YEAR):\n # filter the df for sex and year\n tmp = df[(df[\"sex\"] == SEX) & (df[\"year\"] == YEAR)]\n\n # create a new plot for each filtered df and plot it\n fig, ax = plt.subplots()\n ax.scatter(x=tmp['age'], y=tmp['value'], s=0.4)\n\n # return the filtered df\n return tmp\n\nresult_dict = {}\nfor SEX in range(0,2):\n for YEAR in range(2008, 2013):\n # use a f-string to build a key in a dictionary which includes sex and year\n # keys look like this: \"df_1_2009\", the value to each key is the filtered dataframe\n result_dict[f\"df_{SEX}_{YEAR}\"] = filter_and_plot(df_toy, SEX, YEAR)\n\n"
] |
[
0
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0074461006_list_python.txt
|
Q:
How to terminate / stop a for loop
I am making a survey in Spyder. I need to make it so the output does not allow for anyone under 18 to complete the survey.... I can get it to print the error message but the survey still continues...
As you can probably tell, I am a beginner.
excluded_ages= '17''16''15''14''13''12''11''10''9''8''7''6''5''4''3''2''1''0'
age_input=input('Enter your age: ')
print(input)
if age_input in excluded_ages:
print('You may not proceed with this survey')
break
postcode_input=input('Enter your postcode: ')
print(input)
I don't even know if break is the right function here, either way, it is showing up as an error because it is outside the loop... everything I type is either outside the loop or outside a function!
A:
First there is no loop to break from so the break statement does nothing. Let me suggest a way to simplify this. My example will simply exit the program.
import sys
age = int(input('Enter your age: '))
if age < 18:
print('You may not proceed with this survey')
sys.exit()
without using packages you can do this:
age = int(input('Enter your age: '))
if age < 18:
print('You may not proceed with this survey')
else:
print('Starting survey...')
# survey code here
A:
Break is used to forcefully exit a loop, but an if statement is not a loop, which is why you're getting that error. Just put the rest of your survey code in an else statement:
if age_input in excluded_ages:
print('You may not proceed with this survey')
else:
print('Continuing survey')
...
|
How to terminate / stop a for loop
|
I am making a survey in Spyder. I need to make it so the output does not allow for anyone under 18 to complete the survey.... I can get it to print the error message but the survey still continues...
As you can probably tell, I am a beginner.
excluded_ages= '17''16''15''14''13''12''11''10''9''8''7''6''5''4''3''2''1''0'
age_input=input('Enter your age: ')
print(input)
if age_input in excluded_ages:
print('You may not proceed with this survey')
break
postcode_input=input('Enter your postcode: ')
print(input)
I don't even know if break is the right function here, either way, it is showing up as an error because it is outside the loop... everything I type is either outside the loop or outside a function!
|
[
"First there is no loop to break from so the break statement does nothing. Let me suggest a way to simplify this. My example will simply exit the program.\nimport sys\n\nage = int(input('Enter your age: '))\nif age < 18:\n print('You may not proceed with this survey')\n sys.exit()\n\nwithout using packages you can do this:\nage = int((input('Enter your age: '))\nif age < 18:\n print('You may not proceed with this survey')\nelse:\n print('Starting survey...')\n # survey code here\n\n",
"Break is used to forcefully exit a loop, but an if statement is not a loop, which is why you're getting that error. Just put the rest of your survey code in an else statement:\nif age_input in excluded_ages:\n print('You may not proceed with this survey')\nelse:\n print('Continuing survey')\n ...\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"loops",
"python",
"survey",
"terminate"
] |
stackoverflow_0074461398_loops_python_survey_terminate.txt
|
Q:
Shift each row of pandas dataframe independently
I have a dataframe
df1 = pd.DataFrame({
'uid': [11, 22],
1: [0.001, 0.005],
2: [0.004, 0.006],
}).set_index('uid')
and another df that specifies the left shift we need to make for each uid
s_df = pd.DataFrame({
'uid': [11, 22],
'shift_val': [0, 1],
}).set_index('uid')
I want to left shift the ids 1 and 2 by the corresponding shift_val
out = pd.DataFrame({
'uid': [11, 22],
1: [0.001, 0.006],
2: [0.004, np.nan],
}).set_index('uid')
Please suggest
Thanks
A:
shift doesn't support multiple periods, so you have to loop.
You can use:
df1.apply(lambda s: s.shift(-s_df['shift_val'].get(s.name, 0)), axis=1)
Or, with concat:
pd.concat([df1.loc[x].shift(-s_df['shift_val'].get(x, 0))
for x in df1.index], axis=1).T
Output:
1 2
uid
11 0.001 0.004
22 0.006 NaN
|
Shift each row of pandas dataframe independently
|
I have a dataframe
df1 = pd.DataFrame({
'uid': [11, 22],
1: [0.001, 0.005],
2: [0.004, 0.006],
}).set_index('uid')
and another df that specifies the left shift we need to make for each uid
s_df = pd.DataFrame({
'uid': [11, 22],
'shift_val': [0, 1],
}).set_index('uid')
I want to left shift the ids 1 and 2 by the corresponding shift_val
out = pd.DataFrame({
'uid': [11, 22],
1: [0.001, 0.006],
2: [0.004, np.nan],
}).set_index('uid')
Please suggest
Thanks
|
[
"shift doesn't support multiple periods, so you have to loop.\nYou can use:\ndf1.apply(lambda s: s.shift(-s_df['shift_val'].get(s.name, 0)), axis=1)\n\nOr, with concat:\npd.concat([df1.loc[x].shift(-s_df['shift_val'].get(x, 0))\n for x in df1.index], axis=1).T\n\nOutput:\n 1 2\nuid \n11 0.001 0.004\n22 0.006 NaN\n\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074461515_pandas_python.txt
|
Q:
Using eval to create datestamp
Trying to get eval to work on a dictionary that comprises a datetime field. I'm attempting to do the following:
from datetime import datetime as datetime
print(eval("{'datestamp': datetime.today()}", {}, {}))
gives the following:
NameError: name 'datetime' is not defined
I want to return a string with a date computed from the datetime function. How do I do this?
A:
Pass the copy of datetime you imported as a global, rather than telling eval to pass empty sets of both locals and globals, when you want that copy of datetime to be accessible within the eval'd code:
from datetime import datetime
print(eval("{'datestamp': datetime.today()}", {'datetime': datetime}))
Alternately, you can avoid depending on the import at all by using __import__ to pull the module in from within the eval'd string:
print(eval("{'datestamp': __import__('datetime').datetime.today()}", {}, {}))
...or you can just stop overriding the set of variables exposed to the eval:
from datetime import datetime
print(eval("{'datestamp': datetime.today()}"))
|
Using eval to create datestamp
|
Trying to get eval to work on a dictionary that comprises a datetime field. I'm attempting to do the following:
from datetime import datetime as datetime
print(eval("{'datestamp': datetime.today()}", {}, {}))
gives the following:
NameError: name 'datetime' is not defined
I want to return a string with a date computed from the datetime function. How do I do this?
|
[
"Pass the copy of datetime you imported as a global, rather than telling eval to pass empty sets of both locals and globals, when you want that copy of datetime to be accessible within the eval'd code:\nfrom datetime import datetime\nprint(eval(\"{'datestamp': datetime.today()}\", {'datetime': datetime}))\n\nAlternately, you can avoid depending on the import at all by using __import__ to pull the module in from within the eval'd string:\nprint(eval(\"{'datestamp': __import__('datetime').datetime.today()}\", {}, {}))\n\n...or you can just stop overriding the set of variables exposed to the eval:\nfrom datetime import datetime\nprint(eval(\"{'datestamp': datetime.today()}\"))\n\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074461580_python.txt
|
Q:
Regex to match a float after a particular expression
I'm trying to extract, in my Python script, from a long document all the floats that follow a particular expression, that is
>250
After ">250" there are a certain number of spaces and the float can be in the form
12.34
or
12
An example is:
word word 150 175 200 225 >250 12.3 word word
and 12.3 should be matched
I managed to build a regex of the type
\b>250\s+\S+
but I do not know what to put in place of \S+
A:
Try:
>250\s+(-?\d+\.?\d*)
Regex demo.
>250 - match >250
\s+ - match one or more spaces
(-?\d+\.?\d*) - match an int/float into a capturing group
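Applied in Python, a minimal sketch with re.findall (the sample text is taken from the question):
import re

text = "word word 150 175 200 225 >250 12.3 word word"
# one capture group, so findall returns just the captured numbers
matches = re.findall(r'>250\s+(-?\d+\.?\d*)', text)
print(matches)  # ['12.3']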
|
Regex to match a float after a particular expression
|
I'm trying to extract, in my Python script, from a long document all the floats that follow a particular expression, that is
>250
After ">250" there are a certain number of spaces and the float can be in the form
12.34
or
12
An example is:
word word 150 175 200 225 >250 12.3 word word
and 12.3 should be matched
I managed to build a regex of the type
\b>250\s+\S+
but I do not know what to put in place of \S+
|
[
"Try:\n>250\\s+(-?\\d+\\.?\\d*)\n\nRegex demo.\n>250 - match >250\n\\s+ - match 1 or more number of spaces\n(-?\\d+\\.?\\d*) - match a int/float into a capturing group\n"
] |
[
1
] |
[] |
[] |
[
"match",
"python",
"regex"
] |
stackoverflow_0074461498_match_python_regex.txt
|
Q:
Convert a list to a matrix
I want to convert a list with numbers to a matrix. This is my code:
def converttomtx(mylist, rows, columns):
mtx = []
for r in range(rows):
lrow = []
for c in range(columns):
lrow.append(mylist[rows * r + c])
mtx.append(lrow)
return mtx
Assuming I use the following list:
l = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
The code works if I set the rows to 3 and columns to 4, but when I set rows to 4 and columns to 3 then it throws an error that the list index is out of range. I cannot see why. The same happens when I use 2x6 and 6x2, 2x6 works but 6x2 doesn't.
A:
You can try the robust numpy library for any type of list reshaping, for example:
>>> import numpy as np
>>> li = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
>>> li = np.array(li) # convert to an ndarray
>>> li.reshape(2, 6)
array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11]])
>>> li.reshape(6, 2)
array([[ 0, 1],
[ 2, 3],
[ 4, 5],
[ 6, 7],
[ 8, 9],
[10, 11]])
Let's assume you don't know one of the dimensions; you can also use li.reshape(3, -1), which will compute the missing dimension for you.
>>> li.reshape(3, -1)
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
A:
mylist[rows * r + c] should be mylist[columns * r + c], as columns (the number of elements per row) is the correct stride into mylist.
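A quick check of the two formulas for a 6x2 shape makes the error visible (a minimal sketch):
mylist = list(range(12))
rows, columns = 6, 2
r, c = 5, 1                  # last row, last column
print(columns * r + c)       # 11 -> last valid index of mylist
print(rows * r + c)          # 31 -> IndexError when used as mylist[...]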
A:
The issue is with how you are calculating the index of the list item to be inserted in the matrix: rows * r + c can exceed 11 (for example when rows = 6), so it should be columns * r + c (or rows * c + r if you want to fill the matrix in column-major order).
You can try printing columns * r + c and rows * r + c in your for loop to understand the bug in your original solution.
Here's the updated code -
def converttomtx(mylist, rows, columns):
mtx = []
for r in range(rows):
lrow = []
for c in range(columns):
lrow.append(mylist[columns * r + c]) # updated
mtx.append(lrow)
return mtx
print(converttomtx([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], 2, 6))
Output:
[[0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11]]
For
print(converttomtx([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], 6, 2))
Output:
[[0, 1], [2, 3], [4, 5], [6, 7], [8, 9], [10, 11]]
A:
Another approach:
def l2m(mylist, rows, cols):
if len(mylist) != rows * cols:
print('wrong shape')
return
return [mylist[row*cols:(row+1)*cols] for row in range(rows)]
mylist = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
print(l2m(mylist, 12, 1))
print(l2m(mylist, 3, 4))
print(l2m(mylist, 9, 3))
This returns:
[[0], [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11]]
[[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
wrong shape
None
A:
You just used the wrong formula for index in line 6.
Instead of mylist[rows * r + c]
It should be mylist[columns * r + c]
after this your code would look like:
def converttomtx(mylist, rows, columns):
mtx = []
for r in range(rows):
lrow = []
for c in range(columns):
lrow.append(mylist[columns * r + c])
mtx.append(lrow)
return mtx
|
Convert a list to a matrix
|
I want to convert a list with numbers to a matrix. This is my code:
def converttomtx(mylist, rows, columns):
mtx = []
for r in range(rows):
lrow = []
for c in range(columns):
lrow.append(mylist[rows * r + c])
mtx.append(lrow)
return mtx
Assuming I use the following list:
l = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
The code works if I set the rows to 3 and columns to 4, but when I set rows to 4 and columns to 3 then it throws an error that the list index is out of range. I cannot see why. The same happens when I use 2x6 and 6x2, 2x6 works but 6x2 doesn't.
|
[
"You can try the robust numpy library for any type of list reshaping, for example:\n>>> import numpy as np\n>>> li = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]\n>>> li = np.array(li) # convert to an ndarray\n\n>>> li.reshape(2, 6)\narray([[ 0, 1, 2, 3, 4, 5],\n [ 6, 7, 8, 9, 10, 11]])\n\n>>> li.reshape(6, 2)\narray([[ 0, 1],\n [ 2, 3],\n [ 4, 5],\n [ 6, 7],\n [ 8, 9],\n [10, 11]])\n\nLet's assume, you don't know any of the dimensions you can also use: li.reshape(3, -1) which will compute the dimension for you.\n>>> li.reshape(3, -1)\narray([[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]])\n\n",
"mylist[rows * r + c] should be mylist[columns * r + c] as columns is the correct offset in mylist.\n",
"The issue is with how you are calculating the index of the list item to be inserted in the matrix, rows * r + c would be more than 11 if rows = 6 for example, so it should be columns * r + c or rows * c + r\nYou can try printing columns * r + c and rows * r + c in your for loop for understanding the bug in your original solution.\nHere's the updated code -\ndef converttomtx(mylist, rows, columns):\n mtx = []\n for r in range(rows):\n lrow = []\n for c in range(columns):\n lrow.append(mylist[columns * r + c]) # updated\n mtx.append(lrow)\n return mtx\n\nprint(converttomtx([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], 2, 6))\n\nOutput:\n[[0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11]]\n\nFor\nprint(converttomtx([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], 6, 2))\n\nOutput:\n[[0, 1], [2, 3], [4, 5], [6, 7], [8, 9], [10, 11]]\n\n",
"Another approach:\ndef l2m(mylist, rows, cols):\n if len(mylist) != rows * cols:\n print('wrong shape')\n return\n return [mylist[row*cols:(row+1)*cols] for row in range(rows)]\n\n\nmylist = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]\n\nprint(l2m(mylist, 12, 1))\nprint(l2m(mylist, 3, 4))\nprint(l2m(mylist, 9, 3)\n\n\nThis returns:\n[[0], [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11]]\n[[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]\nwrong shape\nNone\n\n",
"You just used the wrong formula for index in line 6.\nInstead of mylist[rows * r + c]\nIt should be mylist[columns * r + c]\nafter this your code would look like:\ndef converttomtx(mylist, rows, columns):\n mtx = []\n for r in range(rows):\n lrow = []\n for c in range(columns):\n lrow.append(mylist[columns * r + c])\n mtx.append(lrow)\n return mtx\n\n"
] |
[
1,
0,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074461185_python.txt
|
Q:
How to check if an object has an attribute?
How do I check if an object has some attribute? For example:
>>> a = SomeClass()
>>> a.property
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: SomeClass instance has no attribute 'property'
How do I tell if a has the attribute property before using it?
A:
Try hasattr():
if hasattr(a, 'property'):
a.property
See zweiterlinde's answer below, who offers good advice about asking forgiveness! A very pythonic approach!
The general practice in python is that, if the property is likely to be there most of the time, simply call it and either let the exception propagate, or trap it with a try/except block. This will likely be faster than hasattr. If the property is likely to not be there most of the time, or you're not sure, using hasattr will probably be faster than repeatedly falling into an exception block.
A:
As Jarret Hardie answered, hasattr will do the trick. I would like to add, though, that many in the Python community recommend a strategy of "easier to ask for forgiveness than permission" (EAFP) rather than "look before you leap" (LBYL). See these references:
EAFP vs LBYL (was Re: A little disappointed so far)
EAFP vs. LBYL @Code Like a Pythonista: Idiomatic Python
ie:
try:
doStuff(a.property)
except AttributeError:
otherStuff()
... is preferred to:
if hasattr(a, 'property'):
doStuff(a.property)
else:
otherStuff()
A:
You can use hasattr() or catch AttributeError, but if you really just want the value of the attribute with a default if it isn't there, the best option is just to use getattr():
getattr(a, 'property', 'default value')
A:
I think what you are looking for is hasattr. However, I'd recommend something like this if you want to detect python properties-
try:
getattr(someObject, 'someProperty')
except AttributeError:
print "Doesn't exist"
else:
print "Exists"
The disadvantage here is that attribute errors in the properties __get__ code are also caught.
Otherwise, do-
if hasattr(someObject, 'someProp'):
#Access someProp/ set someProp
pass
Docs:http://docs.python.org/library/functions.html
Warning:
The reason for my recommendation is that hasattr doesn't detect properties.
Link:http://mail.python.org/pipermail/python-dev/2005-December/058498.html
A:
According to pydoc, hasattr(obj, prop) simply calls getattr(obj, prop) and catches exceptions. So, it is just as valid to wrap the attribute access with a try statement and catch AttributeError as it is to use hasattr() beforehand.
a = SomeClass()
try:
return a.fake_prop
except AttributeError:
return default_value
A:
I would like to suggest avoiding this:
try:
doStuff(a.property)
except AttributeError:
otherStuff()
The user @jpalecek mentioned it: If an AttributeError occurs inside doStuff(), you are lost.
Maybe this approach is better:
try:
val = a.property
except AttributeError:
otherStuff()
else:
doStuff(val)
A:
EDIT:This approach has serious limitation. It should work if the object is an iterable one. Please check the comments below.
If you are using Python 3.6 or higher like me there is a convenient alternative to check whether an object has a particular attribute:
if 'attr1' in obj1:
print("attr1 = {}".format(obj1["attr1"]))
However, I'm not sure which is the best approach right now. using hasattr(), using getattr() or using in. Comments are welcome.
A:
hasattr() is the right answer. What I want to add is that hasattr() can be used well in conjunction with assert (to avoid unnecessary if statements and make the code more readable):
assert hasattr(a, 'property'), 'object lacks property'
print(a.property)
In case that the property is missing, the program will exit with an AssertionError and printing out the provided error message (object lacks property in this case).
As stated in another answer on SO:
Asserts should be used to test conditions that should never happen.
The purpose is to crash early in the case of a corrupt program state.
Often this is the case when a property is missing and then assert is very appropriate.
A:
For objects other than dictionaries:
if hasattr(a, 'property'):
a.property
For checking a dictionary key, hasattr() will not work, because it checks attributes rather than keys.
Many people suggest has_key() for dictionaries, but it is deprecated (and removed in Python 3).
So for a dictionary, test for the key directly:
if 'property' in a:
a['property']
A:
You may be expecting hasattr(), but try to avoid hasattr() and prefer getattr(): a single getattr() call is faster than a hasattr() check followed by a separate attribute access.
Using hasattr():
if hasattr(a, 'property'):
print a.property
The same with getattr, which returns None when the property does not exist:
property = getattr(a, "property", None)
if property:
print property
A:
Depending on the situation you can check with isinstance what kind of object you have, and then use the corresponding attributes. With the introduction of abstract base classes in Python 2.6/3.0 this approach has also become much more powerful (basically ABCs allow for a more sophisticated way of duck typing).
One situation where this is useful would be if two different objects have an attribute with the same name, but with a different meaning. Using only hasattr might then lead to strange errors.
One nice example is the distinction between iterators and iterables (see this question). The __iter__ methods in an iterator and an iterable have the same name but are semantically quite different! So hasattr is useless, but isinstance together with ABC's provides a clean solution.
However, I agree that in most situations the hasattr approach (described in other answers) is the most appropriate solution.
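For example, with the abstract base classes from collections.abc (a minimal sketch in modern Python):
from collections.abc import Iterable, Iterator

print(isinstance([1, 2, 3], Iterable))        # True: lists are iterable
print(isinstance([1, 2, 3], Iterator))        # False: a list is not its own iterator
print(isinstance(iter([1, 2, 3]), Iterator))  # True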
A:
Here's a very intuitive approach :
if 'property' in dir(a):
a.property
A:
This is super simple, just use dir(object)
This will return a list of every available function and attribute of the object.
A:
You can check whether an object contains an attribute by using the hasattr builtin method.
For instance, if your object is a and you want to check for the attribute stuff:
>>> class a:
... stuff = "something"
...
>>> hasattr(a,'stuff')
True
>>> hasattr(a,'other_stuff')
False
The method signature itself is hasattr(object, name) -> bool, which means that if object has the attribute passed as the second argument, hasattr gives the boolean True or False according to the presence of the name attribute in the object.
A:
You can use hasattr() to check if object or class has an attribute in Python.
For example, there is Person class as shown below:
class Person:
greeting = "Hello"
def __init__(self, name, age):
self.name = name
self.age = age
def test(self):
print("Test")
Then, you can use hasattr() for object as shown below:
obj = Person("John", 27)
obj.gender = "Male"
print("greeting:", hasattr(obj, 'greeting'))
print("name:", hasattr(obj, 'name'))
print("age:", hasattr(obj, 'age'))
print("gender:", hasattr(obj, 'gender'))
print("test:", hasattr(obj, 'test'))
print("__init__:", hasattr(obj, '__init__'))
print("__str__:", hasattr(obj, '__str__'))
print("__module__:", hasattr(obj, '__module__'))
Output:
greeting: True
name: True
age: True
gender: True
test: True
__init__: True
__str__: True
__module__: True
And, you can also use hasattr() directly for class name as shown below:
print("greeting:", hasattr(Person, 'greeting'))
print("name:", hasattr(Person, 'name'))
print("age:", hasattr(Person, 'age'))
print("gender:", hasattr(Person, 'gender'))
print("test:", hasattr(Person, 'test'))
print("__init__:", hasattr(Person, '__init__'))
print("__str__:", hasattr(Person, '__str__'))
print("__module__:", hasattr(Person, '__module__'))
Output:
greeting: True
name: False
age: False
gender: False
test: True
__init__: True
__str__: True
__module__: True
A:
Another possible option, but it depends on what you mean by "before":
undefined = object()
class Widget:
def __init__(self):
self.bar = 1
def zoom(self):
print("zoom!")
a = Widget()
bar = getattr(a, "bar", undefined)
if bar is not undefined:
print("bar:%s" % (bar))
foo = getattr(a, "foo", undefined)
if foo is not undefined:
print("foo:%s" % (foo))
zoom = getattr(a, "zoom", undefined)
if zoom is not undefined:
zoom()
output:
bar:1
zoom!
This allows you to even check for None-valued attributes.
But! Be very careful you don't accidentally instantiate and compare undefined in multiple places, because the is comparison will never match in that case.
Update:
Because of what I was warning about in the above paragraph (having multiple undefineds that never match), I have recently slightly modified this pattern:
undefined = NotImplemented
NotImplemented, not to be confused with NotImplementedError, is a built-in: it semi-matches the intent of a JS undefined and you can reuse its definition everywhere and it will always match. The drawback is that it is "truthy" in booleans and it can look weird in logs and stack traces (but you quickly get over it when you know it only appears in this context).
|
How to check if an object has an attribute?
|
How do I check if an object has some attribute? For example:
>>> a = SomeClass()
>>> a.property
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: SomeClass instance has no attribute 'property'
How do I tell if a has the attribute property before using it?
|
[
"Try hasattr():\nif hasattr(a, 'property'):\n a.property\n\nSee zweiterlinde's answer below, who offers good advice about asking forgiveness! A very pythonic approach!\nThe general practice in python is that, if the property is likely to be there most of the time, simply call it and either let the exception propagate, or trap it with a try/except block. This will likely be faster than hasattr. If the property is likely to not be there most of the time, or you're not sure, using hasattr will probably be faster than repeatedly falling into an exception block.\n",
"As Jarret Hardie answered, hasattr will do the trick. I would like to add, though, that many in the Python community recommend a strategy of \"easier to ask for forgiveness than permission\" (EAFP) rather than \"look before you leap\" (LBYL). See these references:\nEAFP vs LBYL (was Re: A little disappointed so far)\nEAFP vs. LBYL @Code Like a Pythonista: Idiomatic Python\nie:\ntry:\n doStuff(a.property)\nexcept AttributeError:\n otherStuff()\n\n... is preferred to:\nif hasattr(a, 'property'):\n doStuff(a.property)\nelse:\n otherStuff()\n\n",
"You can use hasattr() or catch AttributeError, but if you really just want the value of the attribute with a default if it isn't there, the best option is just to use getattr():\ngetattr(a, 'property', 'default value')\n\n",
"I think what you are looking for is hasattr. However, I'd recommend something like this if you want to detect python properties-\ntry:\n getattr(someObject, 'someProperty') \nexcept AttributeError:\n print \"Doesn't exist\"\nelse\n print \"Exists\"\n\nThe disadvantage here is that attribute errors in the properties __get__ code are also caught.\nOtherwise, do-\nif hasattr(someObject, 'someProp'):\n #Access someProp/ set someProp\n pass\n\nDocs:http://docs.python.org/library/functions.html\nWarning:\nThe reason for my recommendation is that hasattr doesn't detect properties.\nLink:http://mail.python.org/pipermail/python-dev/2005-December/058498.html\n",
"According to pydoc, hasattr(obj, prop) simply calls getattr(obj, prop) and catches exceptions. So, it is just as valid to wrap the attribute access with a try statement and catch AttributeError as it is to use hasattr() beforehand.\na = SomeClass()\ntry:\n return a.fake_prop\nexcept AttributeError:\n return default_value\n\n",
"I would like to suggest avoid this:\ntry:\n doStuff(a.property)\nexcept AttributeError:\n otherStuff()\n\nThe user @jpalecek mentioned it: If an AttributeError occurs inside doStuff(), you are lost.\nMaybe this approach is better:\ntry:\n val = a.property\nexcept AttributeError:\n otherStuff()\nelse:\n doStuff(val)\n\n",
"EDIT:This approach has serious limitation. It should work if the object is an iterable one. Please check the comments below.\nIf you are using Python 3.6 or higher like me there is a convenient alternative to check whether an object has a particular attribute:\nif 'attr1' in obj1:\n print(\"attr1 = {}\".format(obj1[\"attr1\"]))\n\nHowever, I'm not sure which is the best approach right now. using hasattr(), using getattr() or using in. Comments are welcome.\n",
"hasattr() is the right answer. What I want to add is that hasattr() can be used well in conjunction with assert (to avoid unnecessary if statements and make the code more readable):\nassert hasattr(a, 'property'), 'object lacks property' \nprint(a.property)\n\nIn case that the property is missing, the program will exit with an AssertionError and printing out the provided error message (object lacks property in this case).\nAs stated in another answer on SO:\n\nAsserts should be used to test conditions that should never happen.\nThe purpose is to crash early in the case of a corrupt program state.\n\nOften this is the case when a property is missing and then assert is very appropriate.\n",
"For objects other than dictonary:\nif hasattr(a, 'property'):\n a.property\n\nFor dictionary, hasattr() will not work.\nMany people are telling to use has_key() for dictionary, but it is depreciated.\nSo for dictionary, you have to use has_attr()\nif a.has_attr('property'):\n a['property']\n \n\nOr you can also use\nif 'property' in a:\n\n",
"Hope you expecting hasattr(), but try to avoid hasattr() and please prefer getattr(). getattr() is faster than hasattr()\nusing hasattr():\n if hasattr(a, 'property'):\n print a.property\n\nsame here i am using getattr to get property if there is no property it return none\n property = getattr(a,\"property\",None)\n if property:\n print property\n\n",
"Depending on the situation you can check with isinstance what kind of object you have, and then use the corresponding attributes. With the introduction of abstract base classes in Python 2.6/3.0 this approach has also become much more powerful (basically ABCs allow for a more sophisticated way of duck typing).\nOne situation were this is useful would be if two different objects have an attribute with the same name, but with different meaning. Using only hasattr might then lead to strange errors.\nOne nice example is the distinction between iterators and iterables (see this question). The __iter__ methods in an iterator and an iterable have the same name but are semantically quite different! So hasattr is useless, but isinstance together with ABC's provides a clean solution.\nHowever, I agree that in most situations the hasattr approach (described in other answers) is the most appropriate solution.\n",
"Here's a very intuitive approach :\nif 'property' in dir(a):\n a.property\n\n",
"This is super simple, just use dir(object)\nThis will return a list of every available function and attribute of the object.\n",
"You can check whether object contains attribute by using hasattr builtin method.\nFor an instance if your object is a and you want to check for attribute stuff\n>>> class a:\n... stuff = \"something\"\n... \n>>> hasattr(a,'stuff')\nTrue\n>>> hasattr(a,'other_stuff')\nFalse\n\nThe method signature itself is hasattr(object, name) -> bool which mean if object has attribute which is passed to second argument in hasattr than it gives boolean True or False according to the presence of name attribute in object.\n",
"You can use hasattr() to check if object or class has an attribute in Python.\nFor example, there is Person class as shown below:\nclass Person:\n greeting = \"Hello\"\n\n def __init__(self, name, age):\n self.name = name\n self.age = age\n\n def test(self):\n print(\"Test\")\n\nThen, you can use hasattr() for object as shown below:\nobj = Person(\"John\", 27)\nobj.gender = \"Male\"\nprint(\"greeting:\", hasattr(obj, 'greeting'))\nprint(\"name:\", hasattr(obj, 'name'))\nprint(\"age:\", hasattr(obj, 'age'))\nprint(\"gender:\", hasattr(obj, 'gender'))\nprint(\"test:\", hasattr(obj, 'test'))\nprint(\"__init__:\", hasattr(obj, '__init__'))\nprint(\"__str__:\", hasattr(obj, '__str__'))\nprint(\"__module__:\", hasattr(obj, '__module__'))\n\nOutput:\ngreeting: True\nname: True\nage: True\ngender: True\ntest: True\n__init__: True\n__str__: True\n__module__: True\n\nAnd, you can also use hasattr() directly for class name as shown below:\nprint(\"greeting:\", hasattr(Person, 'greeting'))\nprint(\"name:\", hasattr(Person, 'name'))\nprint(\"age:\", hasattr(Person, 'age'))\nprint(\"gender:\", hasattr(Person, 'gender'))\nprint(\"test:\", hasattr(Person, 'test'))\nprint(\"__init__:\", hasattr(Person, '__init__'))\nprint(\"__str__:\", hasattr(Person, '__str__'))\nprint(\"__module__:\", hasattr(Person, '__module__'))\n\nOutput:\ngreeting: True\nname: False\nage: False\ngender: False\ntest: True\n__init__: True\n__str__: True\n__module__: True\n\n",
"Another possible option, but it depends if what you mean by before:\nundefined = object()\n\nclass Widget:\n\n def __init__(self):\n self.bar = 1\n\n def zoom(self):\n print(\"zoom!\")\n\na = Widget()\n\nbar = getattr(a, \"bar\", undefined)\nif bar is not undefined:\n print(\"bar:%s\" % (bar))\n\nfoo = getattr(a, \"foo\", undefined)\nif foo is not undefined:\n print(\"foo:%s\" % (foo))\n\nzoom = getattr(a, \"zoom\", undefined)\nif zoom is not undefined:\n zoom()\n\noutput:\nbar:1\nzoom!\n\nThis allows you to even check for None-valued attributes.\nBut! Be very careful you don't accidentally instantiate and compare undefined multiple places because the is will never work in that case.\nUpdate:\nbecause of what I was warning about in the above paragraph, having multiple undefineds that never match, I have recently slightly modified this pattern:\nundefined = NotImplemented\nNotImplemented, not to be confused with NotImplementedError, is a built-in: it semi-matches the intent of a JS undefined and you can reuse its definition everywhere and it will always match. The drawbacks is that it is \"truthy\" in booleans and it can look weird in logs and stack traces (but you quickly get over it when you know it only appears in this context).\n"
] |
[
3204,
796,
630,
53,
40,
34,
20,
20,
20,
17,
16,
15,
3,
2,
1,
0
] |
[] |
[] |
[
"attributes",
"class_attributes",
"object",
"python",
"python_3.x"
] |
stackoverflow_0000610883_attributes_class_attributes_object_python_python_3.x.txt
|
Q:
VS Code: ModuleNotFoundError: No module named 'pandas'
Tried to import pandas in VS Code with
import pandas
and got
Traceback (most recent call last):
File "c:\Users\xxxx\hello\sqltest.py", line 2, in <module>
import pandas
ModuleNotFoundError: No module named 'pandas'
Tried to install pandas with
pip install pandas
pip3 install pandas
python -m pip install pandas
separately which returned
(.venv) PS C:\Users\xxxx\hello> pip3 install pandas
Requirement already satisfied: pandas in c:\users\xxxx\hello\.venv\lib\site-packages (1.1.0)
Requirement already satisfied: pytz>=2017.2 in c:\users\xxxx\hello\.venv\lib\site-packages (from pandas) (2020.1)
Requirement already satisfied: numpy>=1.15.4 in c:\users\xxxx\hello\.venv\lib\site-packages (from pandas) (1.19.1)
Requirement already satisfied: python-dateutil>=2.7.3 in c:\users\xxxx\hello\.venv\lib\site-packages (from pandas) (2.8.1)
Requirement already satisfied: six>=1.5 in c:\users\xxxx\hello\.venv\lib\site-packages (from python-dateutil>=2.7.3->pandas) (1.15.0)
Tried:
sudo pip install pandas
and got
(.venv) PS C:\Users\xxxx\hello> sudo pip install pandas
sudo : The term 'sudo' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ sudo pip install pandas
+ ~~~~
+ CategoryInfo : ObjectNotFound: (sudo:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
I also tried to change the Python path under workspace settings following this answer, with C:\Users\xxxx\AppData\Local\Microsoft\WindowsApps\python.exe, which is the Python path I found in Command Prompt using where python, but it didn't work.
Then I tried
python -m venv .venv
which returned
(.venv) PS C:\Users\xxxx\hello> python -m venv .venv
Error: [Errno 13] Permission denied: 'C:\\Users\\xxxx\\hello\\.venv\\Scripts\\python.exe'
Update:
Tried
python3.8.5 -m pip install pandas
and returned
(.venv) PS C:\Users\xxxx\hello> python3.8.5 -m pip install pandas
python3.8.5 : The term 'python3.8.5' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ python3.8.5 -m pip install pandas
+ ~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (python3.8.5:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
A:
It's easier than we imagine:
This image explains how to solve this problem.
A:
Download anaconda interpreter from this link
After installation, open anaconda prompt (anaconda3) and execute this code conda install ipykernel. It will install all necessary packages.
Restart VS Code and change the interpreter to base conda and voilà!
A:
Seems to have worked with
pip install pandas --user
in Command Prompt.
Additional note:
For IPython.display,
pip install IPython --user
in Command Prompt, then
from IPython.display import display
in VS Code.
Helpful links:
pip --user
Display() in Python
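If you're unsure where --user installs end up, a quick stdlib-only check (a minimal sketch; nothing here is from the original answer):
import site
print(site.getusersitepackages())  # per-user site-packages directory used by pip install --user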
A:
I have just run VSCode as administrator!
A:
I had the same problem and running the below command solved it:
pip3 install pandas --upgrade
A:
The solution seems fairly simple! First things first though!
From looking at your post, you seem to have followed a guide to installing Pandas. Nothing is wrong with that, but based on the information you provided, you seem to be running Windows PowerShell PS C:\Users\xxxx\hello> and the error format matches PowerShell. Therefore, sudo isn't recognized: sudo is the admin command for Unix-based systems like Debian, Ubuntu, and so on, which is why it's not a valid command here!
But here's how to properly install: (I assume you're running Windows, but if that's not the case, correct me and I'll give you the Unix version!)
1 - Windows key, search up CMD and run it as administrator this is important to avoid permissions issues!
2 - Run pip3 install pandas OR python3 -m pip install pandas
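A minimal sketch to confirm which interpreter VS Code is actually running (if pandas was installed for a different Python, this will show it; nothing here is from the original answer):
import sys
print(sys.executable)   # path of the active interpreter
import pandas           # raises ModuleNotFoundError if pandas isn't on this interpreter
print(pandas.__version__)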
A:
The problem (at least in my case) was that I had installed a package under the default Python version but had set the interpreter to a different Python version in Visual Studio Code (VS Code).
There are 2 options to resolve this.
Change the VS Code Interpreter: VS Code -> View -> Command Palette... (Ctrl+Shift+P) -> Python: Select Interpreter -> select "Python: Select Interpreter" (or Enter) -> select an interpreter based on our chosen Python version under which you have installed the package.
Install package under the correct Python version which means to change your default Python version and repeat the process of installation again.
To change your default Python version (for Windows 10):
Right click on This PC -> Properties -> Advanced System Settings (in the right panel) -> Environment Variables -> System variables (the bottom part of the window) -> double-click on "Path" -> Select the 1st row for the wanted Python version and move it up and then do the same with the 2nd row. I recommend to restart (close and open again) your Command Prompt session if you want to see/work with the new default Python version.
Note on installation: Following command (in Command Prompt) worked for me:
pip3 install pandas --user
A:
If you have multiple versions of python installed and/or have something like anaconda installed, you'll have conflicts with the interpreter location in vscode.
To change the settings in vscode:
Ctrl + P
Search for python: select interpreter and then select 'recommended' option and it should work again.
A:
If you don't want to use Anaconda, I've tried many things and only this worked for me.
In windows search, find "This PC", right click and click Properties -> Advanced system settings -> Advanced (tab) -> Environment variables -> Path
For me
Add or Edit Path to: C:\Users\your_user_name\AppData\Local\Programs\Python\Python310\Scripts
A:
I had the same issue using vscode on ubuntu 22.04 with anaconda3. The solution for me was: Open settings, type 'python: default Interpreter Path' and enter the path where the python executable is /home/user/anaconda3/bin/python
|
VS Code: ModuleNotFoundError: No module named 'pandas'
|
Tried to import pandas in VS Code with
import pandas
and got
Traceback (most recent call last):
File "c:\Users\xxxx\hello\sqltest.py", line 2, in <module>
import pandas
ModuleNotFoundError: No module named 'pandas'
Tried to install pandas with
pip install pandas
pip3 install pandas
python -m pip install pandas
separately which returned
(.venv) PS C:\Users\xxxx\hello> pip3 install pandas
Requirement already satisfied: pandas in c:\users\xxxx\hello\.venv\lib\site-packages (1.1.0)
Requirement already satisfied: pytz>=2017.2 in c:\users\xxxx\hello\.venv\lib\site-packages (from pandas) (2020.1)
Requirement already satisfied: numpy>=1.15.4 in c:\users\xxxx\hello\.venv\lib\site-packages (from pandas) (1.19.1)
Requirement already satisfied: python-dateutil>=2.7.3 in c:\users\xxxx\hello\.venv\lib\site-packages (from pandas) (2.8.1)
Requirement already satisfied: six>=1.5 in c:\users\xxxx\hello\.venv\lib\site-packages (from python-dateutil>=2.7.3->pandas) (1.15.0)
Tried:
sudo pip install pandas
and got
(.venv) PS C:\Users\xxxx\hello> sudo pip install pandas
sudo : The term 'sudo' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ sudo pip install pandas
+ ~~~~
+ CategoryInfo : ObjectNotFound: (sudo:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
I also tried to change the python path under workspace settings following this answer. with C:\Users\xxxx\AppData\Local\Microsoft\WindowsApps\python.exe which is the python path I found in Command Prompt using where python but didn't work.
Then I tried
python -m venv .venv
which returned
(.venv) PS C:\Users\xxxx\hello> python -m venv .venv
Error: [Errno 13] Permission denied: 'C:\\Users\\xxxx\\hello\\.venv\\Scripts\\python.exe'
Update:
Tried
python3.8.5 -m pip install pandas
and returned
(.venv) PS C:\Users\xxxx\hello> python3.8.5 -m pip install pandas
python3.8.5 : The term 'python3.8.5' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ python3.8.5 -m pip install pandas
+ ~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (python3.8.5:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
|
[
"It's easier than we imagine:\n\nThis image explains how to solve this problem.\n",
"\nDownload anaconda interpreter from this link\nAfter installation, open anaconda prompt (anaconda3) and execute this code conda install ipykernel. It will install all necessary packages.\nRestart vs code and change interpreter to base conda and voala!\n\n",
"Seems to have worked with\npip install pandas --user\n\nin Command Prompt.\n\nAdditional note:\nFor IPython.display,\npip install IPython--user\n\nin Command Prompt, then\nfrom IPython.display import display\n\nin VS Code.\nHelpful links:\npip --user\nDisplay() in Python\n",
"I have just run VSCode as administrator!\n",
"I had the same problem and running the below command solved it:\npip3 install pandas --upgrade\n\n",
"The solution seems fairly simple! First things first though!\nFrom looking at your post, you seem to have followed a guide into installing Pandas. Nothing is wrong about that but I must point out first based on your information that you provided to us, you seem to run Windows Powershell PS C:\\Users\\xxxx\\hello> and the error format matches Powershell. Therefore, sudo isn't recognized because sudo is the admin command for Unix-based systems like Debian, Ubuntu, and so on which is why it's not a valid command!\nBut here's how to properly install: (I assume you're running Windows but if that's not the case, correct me and Ill give you the Unix version!)\n1 - Windows key, search up CMD and run it as administrator this is important to avoid permissions issues!\n2 - Run pip3 install pandas OR python3 -m pip3 install pandas\n",
"The problem (at least in my case) was that I have installed a package under the default Python version but I have set the interpreter for the different Python version in Visual Studio Code (VS Code).\nThere are 2 options to resolve this.\n\nChange the VS Code Interpreter: VS Code -> View -> Command Palette... (Ctrl+Shift+P) -> Python: Select Interpreter -> select \"Python: Select Interpreter\" (or Enter) -> select an interpreter based on our chosen Python version under which you have installed the package.\nInstall package under the correct Python version which means to change your default Python version and repeat the process of installation again.\nTo change your default Python version (for Windows 10):\nRight click on This PC -> Properties -> Advanced System Settings (in the right panel) -> Environment Variables -> System variables (the bottom part of the window) -> double-click on \"Path\" -> Select the 1st row for the wanted Python version and move it up and then do the same with the 2nd row. I recommend to restart (close and open again) your Command Prompt session if you want to see/work with the new default Python version.\n\nNote on installation: Following command (in Command Prompt) worked for me:\npip3 install pandas --user\n",
"If you have multiple versions of python installed and/or have something like acaconda installed, you'll have conflicts with the interpreter location in vscode.\nTo change the settings in vscode:\nCtrl + P\nSearch for python: select interpreter and then select 'recommended' option and it should work again.\n",
"If you don't want to use Anaconda, I've tried many things and only this worked for me.\nIn windows search, find \"This PC\", right click and click properties-> Advanced system settings -> Advanced(tab) -> Environment viriables -> Path\nFor me\nAdd or Edit Path to: C:\\Users\\your_user_name\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\n",
"I had the same issue using vscode on ubuntu 22.04 with anaconda3. The solution for me was: Open settings, type 'python: default Interpreter Path' and enter the path where the python executable is /home/user/anaconda3/bin/python\n"
] |
[
20,
5,
2,
2,
2,
1,
1,
1,
1,
0
] |
[] |
[] |
[
"python",
"visual_studio_code"
] |
stackoverflow_0063388135_python_visual_studio_code.txt
|
Q:
How to load custom model in pytorch
I'm trying to load my pretrained model (yolov5n) and test it with the following code in PyTorch:
import os
import torch
model = torch.load(os.getcwd()+'/weights/last.pt')
# Images
imgs = ['https://example.com/img.jpg']
# Inference
results = model(imgs)
# Results
results.print()
results.save() # or .show()
results.xyxy[0] # img1 predictions (tensor)
results.pandas().xyxy[0] # img1 predictions (pandas)
and I'm getting the following error:
ModuleNotFoundError Traceback (most recent call
last) in
3 import torch
4
----> 5 model = torch.load(os.getcwd()+'/weights/last.pt')
My model is located in the folder /weights/last.py; I'm not sure what I'm doing wrong. Could you please tell me what's missing in my code.
A:
You should be able to find the weights in this directory: yolov5/runs/train/exp/weights/last.pt
Then you load the weights with a line like this:
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5/runs/train/exp/weights/last.pt', force_reload=True)
I have an example of a notebook that loads custom models and from that directory after training the model here https://github.com/pylabel-project/samples/blob/main/pylabeler.ipynb
A:
In order to load your model's weights, you should first import your model script. I guess it is located in /weights/last.py. Afterwards, you can load your model's weights.
Example code might be as below:
import os
import torch
from weights.last import Model # I assume you named your model as Model, change it accordingly
model = Model() # Then in here instantiate your model
model.load_state_dict(torch.load(os.path.join(os.getcwd(), 'weights', 'last.pt'))) # Then load your model's weights.
# Images
imgs = ['https://example.com/img.jpg']
# Inference
results = model(imgs)
# Results
results.print()
results.save() # or .show()
results.xyxy[0] # img1 predictions (tensor)
results.pandas().xyxy[0] # img1 predictions (pandas)
In this solution, do not forget that you should run your program from your current working directory, if you run it from the weights folder you might receive errors.
A:
If you wanna load your local saved model you can try this
import torch
model = torch.hub.load('.', 'custom', 'yourmodel.pt', source='local')
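A minimal sanity check after loading, assuming model was obtained via one of the torch.hub approaches above (the image URL is the placeholder from the question):
model.eval()  # disable dropout / batch-norm updates for inference
results = model('https://example.com/img.jpg')  # YOLOv5 hub models accept URLs or file paths
results.print()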
|
How to load custom model in pytorch
|
I'm trying to load my pretrained model (yolov5n) and test it with the following code in PyTorch:
import os
import torch
model = torch.load(os.getcwd()+'/weights/last.pt')
# Images
imgs = ['https://example.com/img.jpg']
# Inference
results = model(imgs)
# Results
results.print()
results.save() # or .show()
results.xyxy[0] # img1 predictions (tensor)
results.pandas().xyxy[0] # img1 predictions (pandas)
and I'm getting the following error:
ModuleNotFoundError Traceback (most recent call
last) in
3 import torch
4
----> 5 model = torch.load(os.getcwd()+'/weights/last.pt')
My model is located in the folder /weights/last.py; I'm not sure what I'm doing wrong. Could you please tell me what's missing in my code.
|
[
"You should be able to find the weights in this directory: yolov5/runs/train/exp/weights/last.pt\nThen you load the weights with a line like this:\nmodel = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5/runs/train/exp/weights/last.pt', force_reload=True) \n\nI have an example of a notebook that loads custom models and from that directory after training the model here https://github.com/pylabel-project/samples/blob/main/pylabeler.ipynb\n",
"In order to load your model's weights, you should first import your model script. I guess it is located in /weights/last.py. Afterwards, you can load your model's weights.\nExample code might be as below:\nimport os \nimport torch \nfrom weights.last import Model # I assume you named your model as Model, change it accordingly\n\nmodel = Model() # Then in here instantiate your model\nmodel.load_state_dict(torch.load(ospath.join(os.getcwd()+'/weights/last.pt'))) # Then load your model's weights.\n\n \n\n# Images\nimgs = ['https://example.com/img.jpg'] \n# Inference\nresults = model(imgs)\n\n# Results\nresults.print()\nresults.save() # or .show()\n\nresults.xyxy[0] # img1 predictions (tensor)\nresults.pandas().xyxy[0] # img1 predictions (pandas)\n\nIn this solution, do not forget that you should run your program from your current working directory, if you run it from the weights folder you might receive errors.\n",
"If you wanna load your local saved model you can try this\nimport torch\n\nmodel = torch.hub.load('.', 'custom', 'yourmodel.pt', source='local')\n\n"
] |
[
8,
1,
0
] |
[] |
[] |
[
"python",
"pytorch",
"yolo",
"yolov5"
] |
stackoverflow_0070167811_python_pytorch_yolo_yolov5.txt
|
Q:
Finding element from Inspect Chrome
So basically I'm trying to find how many brooches are on this site:
https://www.swarovski.com/en-RO/c-0107/Categories/Jewelry/Brooches/
and my code is this:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
driver = webdriver.Chrome()
driver.implicitly_wait(10)
driver.get("https://www.swarovski.com/en-RO/c-0107/Categories/Jewelry/Brooches/")
wait = WebDriverWait(driver, 10)
all_products = driver.find_elements(By.XPATH, '/html/body/main/div[2]/section[2]/div[2]/div/div/div/div[1]/div/a/div[2]/p/span[1]')
print(f"Number of products: {len(all_products)}")
for product in all_products:
product_name = product.find_elements(By.XPATH, '/html/body/main/div[2]/section[2]/div[2]/div/div/div/div[1]/div/a/div[2]/p/span[1]')
product_name = product_name.text
print(product_name)
Output: AttributeError: 'list' object has no attribute 'text'
Any solutions would be much appreciated :)
I tried to change it to
print(product_name.get_attribute("innerHTML"))
Didn't work because now it shows: AttributeError: 'list' object has no attribute 'get_attribute'. Did you mean: 'getattribute'?
I tried to change it to CSS_SELECTOR, same error AttributeError: 'list' object has no attribute 'text'
A:
find_elements method returns a list of web elements.
Your mistake is with this line:
product_name = product.find_elements(By.XPATH, '/html/body/main/div[2]/section[2]/div[2]/div/div/div/div[1]/div/a/div[2]/p/span[1]')
You need to use find_element method here, not find_elements.
The following code will not give the error you mentioned, while your code still will not work correctly...
for product in all_products:
product_name = product.find_element(By.XPATH, '/html/body/main/div[2]/section[2]/div[2]/div/div/div/div[1]/div/a/div[2]/p/span[1]')
product_name = product_name.text
print(product_name)
To make your code work, I used WebDriverWait expected_conditions explicit waits to wait for the elements to appear on the page. Then I got the number of products on the page and extracted the title from each product.
This is the code:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 20)
url = "https://www.swarovski.com/en-RO/c-0107/Categories/Jewelry/Brooches/"
driver.get(url)
items = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "div.swa-product-tile-plp")))
print(str(len(items)) + " products found")
for item in items:
name = item.find_element(By.CSS_SELECTOR, ".swa-product-sans--name").text
print(name)
The output is:
6 products found
Eternal Flower pendant and brooch
Stella brooch
Eternal Flower pendant and brooch
Curiosa brooch
Dellium Brooch
Stella brooch
|
Finding element from Inspect Chrome
|
So basically I'm trying to find how many brooches are on this site:
https://www.swarovski.com/en-RO/c-0107/Categories/Jewelry/Brooches/
and my code is this:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
driver = webdriver.Chrome()
driver.implicitly_wait(10)
driver.get("https://www.swarovski.com/en-RO/c-0107/Categories/Jewelry/Brooches/")
wait = WebDriverWait(driver, 10)
all_products = driver.find_elements(By.XPATH, '/html/body/main/div[2]/section[2]/div[2]/div/div/div/div[1]/div/a/div[2]/p/span[1]')
print(f"Number of products: {len(all_products)}")
for product in all_products:
product_name = product.find_elements(By.XPATH, '/html/body/main/div[2]/section[2]/div[2]/div/div/div/div[1]/div/a/div[2]/p/span[1]')
product_name = product_name.text
print(product_name)
Output: AttributeError: 'list' object has no attribute 'text'
Any solutions would be much appreciated :)
I tried to change it to
print(product_name.get_attribute("innerHTML"))
Didn't work because now it shows: AttributeError: 'list' object has no attribute 'get_attribute'. Did you mean: 'getattribute'?
I tried to change it to CSS_SELECTOR, same error AttributeError: 'list' object has no attribute 'text'
|
[
"find_elements method returns a list of web elements.\nYour mistake is with this line:\nproduct_name = product.find_elements(By.XPATH, '/html/body/main/div[2]/section[2]/div[2]/div/div/div/div[1]/div/a/div[2]/p/span[1]')\n\nYou need to use find_element method here, not find_elements.\nThe following code will not give the error you mentioned, while your code still will not work correctly...\nfor product in all_products:\n product_name = product.find_element(By.XPATH, '/html/body/main/div[2]/section[2]/div[2]/div/div/div/div[1]/div/a/div[2]/p/span[1]')\n product_name = product_name.text\n print(product_name)\n\nTo make your code working I used WebDriverWait expected_conditions explicit waits to wait for elements to appear on the page. Then I get the amount of the products on the page and extracted the title name from each product.\nThis is the code:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 20)\n\n\nurl = \"https://www.swarovski.com/en-RO/c-0107/Categories/Jewelry/Brooches/\"\ndriver.get(url)\n\n\nitems = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, \"div.swa-product-tile-plp\")))\nprint(str(len(items)) + \" products found\")\nfor item in items:\n name = item.find_element(By.CSS_SELECTOR, \".swa-product-sans--name\").text\n print(name)\n\nThe output is:\n6 products found\nEternal Flower pendant and brooch\nStella brooch\nEternal Flower pendant and brooch\nCuriosa brooch\nDellium Brooch\nStella brooch\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_3.x",
"selenium",
"selenium_chromedriver",
"web_scraping"
] |
stackoverflow_0074461620_python_python_3.x_selenium_selenium_chromedriver_web_scraping.txt
|
Q:
Do a specific search for dicts in a list in Python
I am capturing network traffic from a website. I want to get the JSON file of a location on Google Maps, so I need to extract a JSON URL from the network traffic. The traffic I receive is recorded as a list. This list contains entries, and every time I refresh the web page, their positions in the list change.
Here is my code:
import time
import json
from selenium import webdriver
from bs4 import BeautifulSoup
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
caps = DesiredCapabilities.CHROME
caps['goog:loggingPrefs'] = {'performance': 'ALL'}
driver = webdriver.Chrome(desired_capabilities=caps)
driver.get("websitelinkhere.com")
while True:
ready = input("Ready?")
if ready =="y" or "Y":
html = driver.page_source
time.sleep(2)
# finds the request that downloads the metadata file
timings = driver.execute_script("return window.performance.getEntries();")
print(type(timings))
#print(timings)
for i in range(len(timings)):
print(i,timings[i])
print("-------------")
# close web browser
browser.close()
There are about 500 data in the list.
Output Example :
140 {'connectEnd': 0, 'connectStart': 0, 'decodedBodySize': 0, 'domainLookupEnd': 0, 'domainLookupStart': 0, 'duration': 98.70000000018626, 'encodedBodySize': 0, 'entryType': 'resource', 'fetchStart': 49603, 'initiatorType': 'script', 'name': 'https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata?pb=!1m4!1sapiv3!11m2!1m1!1b0!2m2!1str-TR!2sUS!3m3!1m2!1e2!2s6BOFuzJhNCDJbDNl_f4GVA!4m57!1e1!1e2!1e3!1e4!1e5!1e6!1e8!1e12!2m1!1e1!4m1!1i48!5m1!1e1!5m1!1e2!6m1!1e1!6m1!1e2!9m36!1m3!1e2!2b1!3e2!1m3!1e2!2b0!3e3!1m3!1e3!2b1!3e2!1m3!1e3!2b0!3e3!1m3!1e8!2b0!3e3!1m3!1e1!2b0!3e3!1m3!1e4!2b0!3e3!1m3!1e10!2b1!3e2!1m3!1e10!2b0!3e3&callback=_callbacks____0lajjuohz', 'nextHopProtocol': '', 'redirectEnd': 0, 'redirectStart': 0, 'renderBlockingStatus': 'non-blocking', 'requestStart': 0, 'responseEnd': 49701.700000000186, 'responseStart': 0, 'secureConnectionStart': 0, 'serverTiming': [], 'startTime': 49603, 'transferSize': 0, 'workerStart': 0}
-------------
This time I found the data I wanted in row 140 of the list ("https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata"), but every time I repeat this process, its position in the list changes.
and the only constant part I want in the above example is ("https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata"). I need to get the rest of this link("https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata?pb=!1m4!1sapiv3!11m2!1m1!1b0!2m2!1str-TR!2sUS!3m3%20!1m2!1e2!2s6BOFuzJhNCDJbDNl_f4GVA!4m57!1e1!1e2!1e3!1e4!1e5!1e6!1e8!1e12!2m1!1e1!4m1!1i48!5m1!1e1!5m1!1!1!1!!1m3!1e2!2b1!3e2!1m3!1e2!2b0!3e3!1m3!1e3!2b1!3e2!1m3!1e3!2b0!3e3!1m3!1e8!2b0!3e3!1m3!1e1!2b0!3e!1e4!2b0!3e3!1m3!1e10!2b1!3e2!1m3!1e10!2b0!3e3&callback=_callbacks____0lajjuohz").
How can I search the list and extract the link I want?
A:
Since timings is a list, we can simply iterate over it to find the desired element and then extract the rest of the link as follows:
for item in timings:
if 'https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata' in item:
the_rest_of_the_link = item.split("https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata",1)[1]
break
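One caveat: if the entries in timings are dicts (as the output above suggests), 'https://...' in item tests the dict's keys, not the URL. A minimal sketch that matches on the 'name' value instead (assuming each entry is a dict with a 'name' key holding the request URL):
prefix = 'https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata'
urls = [t['name'] for t in timings if str(t.get('name', '')).startswith(prefix)]
if urls:
    print(urls[0])  # full URL including the query string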
A:
I found a solution like this
import time
import json
from selenium import webdriver
from bs4 import BeautifulSoup
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
caps = DesiredCapabilities.CHROME
caps['goog:loggingPrefs'] = {'performance': 'ALL'}
driver = webdriver.Chrome(desired_capabilities=caps)
driver.get("xxxxxxx")
while True:
ready = input("Ready?")
if ready =="y" or "Y":
html = driver.page_source
time.sleep(2)
# finds the request that downloads the metadata file
timings = driver.execute_script("return window.performance.getEntries();")
print(type(timings))
#print(timings)
for i in range(len(timings)):
for y in timings[i]:
url= timings[i][y]
alfa = str(url)
if (alfa.startswith('https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata?') == True):
realurl = url
print (realurl)
# close web browser
driver.close()
|
Do a specific search for dicts in a list in Python
|
I am capturing network traffic from a website. I want to get the JSON file of a location on Google Maps, so I need to extract a JSON URL from the network traffic. The traffic I receive is recorded as a list. This list contains entries, and every time I refresh the web page, their positions in the list change.
Here is my code:
import time
import json
from selenium import webdriver
from bs4 import BeautifulSoup
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
caps = DesiredCapabilities.CHROME
caps['goog:loggingPrefs'] = {'performance': 'ALL'}
driver = webdriver.Chrome(desired_capabilities=caps)
driver.get("websitelinkhere.com")
while True:
ready = input("Ready?")
if ready =="y" or "Y":
html = driver.page_source
time.sleep(2)
# finds the request that downloads the metadata file
timings = driver.execute_script("return window.performance.getEntries();")
print(type(timings))
#print(timings)
for i in range(len(timings)):
print(i,timings[i])
print("-------------")
# close web browser
browser.close()
There are about 500 data in the list.
Output Example :
140 {'connectEnd': 0, 'connectStart': 0, 'decodedBodySize': 0, 'domainLookupEnd': 0, 'domainLookupStart': 0, 'duration': 98.70000000018626, 'encodedBodySize': 0, 'entryType': 'resource', 'fetchStart': 49603, 'initiatorType': 'script', 'name': 'https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata?pb=!1m4!1sapiv3!11m2!1m1!1b0!2m2!1str-TR!2sUS!3m3!1m2!1e2!2s6BOFuzJhNCDJbDNl_f4GVA!4m57!1e1!1e2!1e3!1e4!1e5!1e6!1e8!1e12!2m1!1e1!4m1!1i48!5m1!1e1!5m1!1e2!6m1!1e1!6m1!1e2!9m36!1m3!1e2!2b1!3e2!1m3!1e2!2b0!3e3!1m3!1e3!2b1!3e2!1m3!1e3!2b0!3e3!1m3!1e8!2b0!3e3!1m3!1e1!2b0!3e3!1m3!1e4!2b0!3e3!1m3!1e10!2b1!3e2!1m3!1e10!2b0!3e3&callback=_callbacks____0lajjuohz', 'nextHopProtocol': '', 'redirectEnd': 0, 'redirectStart': 0, 'renderBlockingStatus': 'non-blocking', 'requestStart': 0, 'responseEnd': 49701.700000000186, 'responseStart': 0, 'secureConnectionStart': 0, 'serverTiming': [], 'startTime': 49603, 'transferSize': 0, 'workerStart': 0}
-------------
This time I found the data I wanted in row 140 of the list ("https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata"), but every time I repeat this process, its position in the list changes.
and the only constant part I want in the above example is ("https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata"). I need to get the rest of this link("https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata?pb=!1m4!1sapiv3!11m2!1m1!1b0!2m2!1str-TR!2sUS!3m3%20!1m2!1e2!2s6BOFuzJhNCDJbDNl_f4GVA!4m57!1e1!1e2!1e3!1e4!1e5!1e6!1e8!1e12!2m1!1e1!4m1!1i48!5m1!1e1!5m1!1!1!1!!1m3!1e2!2b1!3e2!1m3!1e2!2b0!3e3!1m3!1e3!2b1!3e2!1m3!1e3!2b0!3e3!1m3!1e8!2b0!3e3!1m3!1e1!2b0!3e!1e4!2b0!3e3!1m3!1e10!2b1!3e2!1m3!1e10!2b0!3e3&callback=_callbacks____0lajjuohz").
How can I search the list and extract the link I want?
|
[
"Since timings is a list we can simply iterate over it to find the desired element in the list and the to extract the rest of the link as following:\nfor item in timings:\n if 'https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata' in item:\n the_rest_of_the_link = item.split(\"https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata\",1)[1]\n break\n\n",
"I found a solution like this\nimport time\nimport json\nfrom selenium import webdriver\nfrom bs4 import BeautifulSoup\nfrom selenium.webdriver.common.desired_capabilities import DesiredCapabilities\ncaps = DesiredCapabilities.CHROME\ncaps['goog:loggingPrefs'] = {'performance': 'ALL'}\ndriver = webdriver.Chrome(desired_capabilities=caps)\ndriver.get(\"xxxxxxx\")\n\nwhile True:\n ready = input(\"Ready?\")\n if ready ==\"y\" or \"Y\":\n html = driver.page_source\n time.sleep(2)\n\n\n #metadata dosyasını indiren yeri buluyor.\n timings = driver.execute_script(\"return window.performance.getEntries();\")\n print(type(timings))\n #print(timings)\n for i in range(len(timings)):\n for y in timings[i]:\n url= timings[i][y]\n \n alfa = str(url)\n if (alfa.startswith('https://maps.googleapis.com/maps/api/js/GeoPhotoService.GetMetadata?') == True):\n realurl = url\n print (realurl)\n\n \n\n \n # close web browser\nbrowser.close()\n\n\n\n\n\n\n\n\n\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"network_traffic",
"python",
"selenium"
] |
stackoverflow_0074460051_network_traffic_python_selenium.txt
|
Q:
Getting each row of a dataframe without column names
I'm trying to add a column to a dataframe containing a hash value of each row's values.
I originally tried this:
df['hash'] = pd.Series((hash(tuple(row)) for _, row in df_to_hash.iterrows()))
However, when I ran this on two different DataFrames, I was encountering an issue when the column names didn't exactly match.
For example:
DF1:
Name Age
0 Tom 12
1 Pat 15
DF2:
FirstName Age
0 Tom 12
1 Pat 15
When I hashed the above DataFrames, row 0 in each dataframe had a different value due to the columns being different.
Is there a way I can hash the row values only, excluding the column names?
I also tried this with no success:
df['hash'] = df_to_hash.apply(lambda x: hash(tuple(x)), axis=1)
A:
What about using the underlying numpy array:
pd.Series((hash(tuple(row)) for row in df_to_hash.to_numpy()))
Output:
0 2606281096150585092
1 -1842928179554038127
dtype: int64
You can also use pandas.util.hash_pandas_object with index=False:
pd.util.hash_pandas_object(df_to_hash, index=False)
Output:
0 17445307237601047733
1 15658167368827391476
dtype: uint64
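To attach the result as the hash column the question asked for, a minimal usage sketch:
# index=False makes the hash depend only on row contents, not on the index
df_to_hash['hash'] = pd.util.hash_pandas_object(df_to_hash, index=False)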
|
Getting each row of a dataframe without column names
|
I'm trying to add a column to a dataframe containing a hash value of each row's values.
I originally tried this:
df['hash'] = pd.Series((hash(tuple(row)) for _, row in df_to_hash.iterrows()))
However, when I ran this on two different DataFrames, I was encountering an issue when the column names didn't exactly match.
For example:
DF1:
Name Age
0 Tom 12
1 Pat 15
DF2:
FirstName Age
0 Tom 12
1 Pat 15
When I hashed the above DataFrames, row 0 in each dataframe had a different value due to the columns being different.
Is there a way I can hash the row values only, excluding the column names?
I also tried this with no success:
df['hash'] = df_to_hash.apply(lambda x: hash(tuple(x)), axis=1)
|
[
"What about using the underlying numpy array:\npd.Series((hash(tuple(row)) for row in df_to_hash.to_numpy()))\n\nOutput:\n0 2606281096150585092\n1 -1842928179554038127\ndtype: int64\n\nYou can also use pandas.util.hash_pandas_object with index=False:\npd.util.hash_pandas_object(df_to_hash, index=False)\n\nOutput:\n0 17445307237601047733\n1 15658167368827391476\ndtype: uint64\n\n"
] |
[
1
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074461681_pandas_python.txt
|
Q:
Access denied error installing MySql Python connector 64-bit on Windows 10 via .msi
I have successfully installed most of MySql on Windows 10, and have a working database. The only part that will not install is the 64-bit Python connector.
I am successfully using the connector via pip install, but it is unclear if I have a 64-bit or 32-bit version (as I am having issues with the int64 python type).
First question ... do I need to run this msi at all (mysql-connector-python-8.0.23-windows-x86-64bit.msi)?
Second question ... even if I try to run as an Administrator, it still fails. If I look at the folder in question (c:\program files\Windows Apps), Administrators only have View rights. A "Trusted Installer" has full rights it seems. Is there a special trick to this?
Apologies if this is a duplicate, but my SO search hasn't returned anything that covers this precise issue.
A:
I had the same problem with this, but I was able to fix it!
It turns out the problem is the Python version from windows Store, that has a restricted folder with no access to modify it, due to windows restrictions to prevent piracy.
The WindowsApps folder is one of the few that doesn't allow modification from users.
To solve this problem, I would recommend to just simply install Python from their offical website, and in that way you won't have this problem :)
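As for the OP's side question about 32- vs 64-bit: a quick stdlib check of the running interpreter (a minimal sketch, not tied to the connector):
import struct
print(struct.calcsize("P") * 8)  # prints 64 on a 64-bit Python, 32 otherwise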
A:
Check your permissions of the folder
Step 1 -: Locate the installation directory that is giving you problems. Right-click it and choose Properties. Keep in mind that sometimes you might have to change the security permissions for the parent folder as well in order to fix this problem.
Step 2-: Go to the Security tab and click Edit.
Step 3-: In Group or user names section select SYSTEM or Everyone and click the Full control in the Allow column. If you don’t have SYSTEM or Everyone available, you’ll need to add it. To do that, click the Add button.
Step 4-: Select Users or Groups window will now appear. In the Enter the object names to select field enter Everyone or SYSTEM and click Check Names button. If your input is valid, click the OK button.
Step 5-: SYSTEM or Everyone group will now be added. Select it and check Full control in the Allow column.
Step 6-: Click Apply and OK to save changes.
A:
I faced exactly the same issue. Instead of installing the latest version in the MySql installer, I first installed a very old version of the connector, then upgraded it to the latest again using the MySql installer, and it worked pretty well.
|
Access denied error installing MySql Python connector 64-bit on Windows 10 via .msi
|
I have successfully installed most of MySql on Windows 10, and have a working database. The only part that will not install is the 64-bit Python connector.
I am successfully using the connector via pip install, but it is unclear if I have a 64-bit or 32-bit version (as I am having issues with the int64 python type).
First question ... do I need to run this msi at all (mysql-connector-python-8.0.23-windows-x86-64bit.msi)?
Second question ... even if I try to run as an Administrator, it still fails. If I look at the folder in question (c:\program files\Windows Apps), Administrators only have View rights. A "Trusted Installer" has full rights it seems. Is there a special trick to this?
Apologies if this is a duplicate, but my SO search hasn't returned anything that covers this precise issue.
|
[
"I had the same problem with this, but I was able to fix it!\nIt turns out the problem is the Python version from windows Store, that has a restricted folder with no access to modify it, due to windows restrictions to prevent piracy.\nThe WindowsApps folder is one of the few that doesn't allow modification from users.\nTo solve this problem, I would recommend to just simply install Python from their offical website, and in that way you won't have this problem :)\n",
"Check your permissions of the folder\nStep 1 -: Locate the installation directory that is giving you problems. Right-click it and choose Properties. Keep in mind that sometimes you might have to change the security permissions for the parent folder as well in order to fix this problem.\nStep 2-: Go to the Security tab and click Edit.\nStep 3-: In Group or user names section select SYSTEM or Everyone and click the Full control in the Allow column. If you don’t have SYSTEM or Everyone available, you’ll need to add it. To do that, click the Add button.\nStep 4-: Select Users or Groups window will now appear. In the Enter the object names to select field enter Everyone or SYSTEM and click Check Names button. If your input is valid, click the OK button.\nStep 5-: SYSTEM or Everyone group will now be added. Select it and check Full control in the Allow column.\nStep 6-: Click Apply and OK to save changes.\n",
"I faced exactly the same issue. Instead of installing the latest version in MySql installer, I installed the very old version of the connector firstly. Then upgraded it to the latest again using mysql installer and it worked pretty well.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"installation",
"mysql",
"python",
"windows_10"
] |
stackoverflow_0066925897_installation_mysql_python_windows_10.txt
|
Q:
How to efficiently create an index-like Polars DataFrame from multiple sparse series?
I would like to create a DataFrame that has an "index" (integer) from a number of (sparse) Series, where the index (or primary key) is NOT necessarily consecutive integers. Each Series is like a vector of (index, value) tuple or {index: value} mapping.
(1) A small example
In Pandas, this is very easy as we can create a DataFrame at a time, like
>>> pd.DataFrame({
"A": {0: 'a', 20: 'b', 40: 'c'},
"B": {10: 'd', 20: 'e', 30: 'f'},
"C": {20: 'g', 30: 'h'},
}).sort_index()
A B C
0 a NaN NaN
10 NaN d NaN
20 b e g
30 NaN f h
40 c NaN NaN
but I can't find an easy way to achieve a similar result with Polars. As described in Coming from Pandas, Polars does not use an index unlike Pandas, and each row is indexed by its integer position in the table; so I might need to represent an "indexed" Series with a 2-column DataFrame:
A = pl.DataFrame({ "index": [0, 20, 40], "A": ['a', 'b', 'c'] })
B = pl.DataFrame({ "index": [10, 20, 30], "B": ['d', 'e', 'f'] })
C = pl.DataFrame({ "index": [20, 30], "C": ['g', 'h'] })
I tried to combine these multiple DataFrames, joining on the index column:
>>> A.join(B, on='index', how='outer').join(C, on='index', how='outer').sort(by='index')
shape: (5, 4)
┌───────┬──────┬──────┬──────┐
│ index ┆ A ┆ B ┆ C │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ str ┆ str ┆ str │
╞═══════╪══════╪══════╪══════╡
│ 0 ┆ a ┆ null ┆ null │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 10 ┆ null ┆ d ┆ null │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 20 ┆ b ┆ e ┆ g │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 30 ┆ null ┆ f ┆ h │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 40 ┆ c ┆ null ┆ null │
└───────┴──────┴──────┴──────┘
This gives the result I want, but I wonder:
(i) if there is a more concise way to do this over many columns, and
(ii) how to make this operation as efficient as possible.
Alternatives?
I also tried outer joins as this is one way to combine Dataframes with different number of columns and rows, as described above.
Other alternatives I tried includes diagonal concatenation, but this does not deduplicate or join on index:
>>> pl.concat([A, B, C], how='diagonal')
index A B C
0 0 a None None
1 20 b None None
2 40 c None None
3 10 None d None
4 20 None e None
5 30 None f None
6 20 None None g
7 30 None None h
(2) Efficiently Building a Large Table
The approach I found above gives the desired results, but I feel there must be a better way in terms of performance. Consider a case with larger tables; say 300,000 rows and 20 columns:
N, C = 300000, 20
pls = []
pds = []
for i in range(C):
A = pl.DataFrame({
"index": np.linspace(i, N*3-i, num=N, dtype=np.int32),
f"A{i}": np.arange(N, dtype=np.float32),
})
pls.append(A)
B = A.to_pandas().set_index("index")
pds.append(B)
The approach of joining the frames pairwise is somewhat slower than I expected:
%%time
F = functools.reduce(lambda a, b: a.join(b, on='index', how='outer'), pls)
F.sort(by='index')
CPU times: user 1.49 s, sys: 97.8 ms, total: 1.59 s
Wall time: 611 ms
or than one-pass creation in pd.DataFrame:
%%time
pd.DataFrame({
f"A{i}": pds[i][f'A{i}'] for i in range(C)
}).sort_index()
CPU times: user 230 ms, sys: 50.7 ms, total: 281 ms
Wall time: 281 ms
A:
Following your example, but additionally informing polars that the "index" column is sorted (polars will use fast paths if data is sorted).
You can use align_frames together with functools.reduce to get what you want.
This is your data creation snippet:
import functools
import polars as pl
N, C = 300000, 20
pls = []
pds = []
for i in range(C):
A = pl.DataFrame({
"index": np.linspace(i, N*3-i, num=N, dtype=np.int32),
f"A{i}": np.arange(N, dtype=np.float32),
}).with_column(pl.col("index").set_sorted())
pls.append(A)
B = A.to_pandas().set_index("index")
pds.append(B)
Creating the frame aligned by index. We need to use functools.reduce because align_frames returns a list of new DataFrame objects that are aligned by index.
frames = pl.align_frames(*pls, on="index")
functools.reduce(lambda a, b: a.with_columns(b.get_columns()), frames)
Performance
The performance is better than the pandas sort_index method.
Pandas
>>> %%timeit
>>> pd.DataFrame({
... f"A{i}": pds[i][f'A{i}'] for i in range(C)
... }).sort_index()
389 ms ± 8.96 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Polars
>>> %%timeit
>>> frames = pl.align_frames(*pls, on="index")
>>> functools.reduce(lambda a, b: a.with_columns(b.get_columns()), frames)
348 ms ± 11.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
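For reference, the same pattern applied to the small A/B/C example from the question (a minimal sketch, reusing the frames defined there):
import functools
import polars as pl

frames = pl.align_frames(A, B, C, on="index")
combined = functools.reduce(lambda a, b: a.with_columns(b.get_columns()), frames)
print(combined)  # one row per index value, nulls where a frame has no entry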
A:
Quick solution
I've tried a couple of examples but I think pl.from_pandas is faster than any native polars solution I could find.
dt = pl.from_pandas(
pd.DataFrame({
"A": {0: 'a', 20: 'b', 40: 'c'},
"B": {10: 'd', 20: 'e', 30: 'f'},
"C": {20: 'g', 30: 'h'},
}).sort_index().reset_index()
)
shape: (5, 4)
┌───────┬──────┬──────┬──────┐
│ index ┆ A ┆ B ┆ C │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ str ┆ str ┆ str │
╞═══════╪══════╪══════╪══════╡
│ 0 ┆ a ┆ null ┆ null │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 10 ┆ null ┆ d ┆ null │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 20 ┆ b ┆ e ┆ g │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 30 ┆ null ┆ f ┆ h │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 40 ┆ c ┆ null ┆ null │
└───────┴──────┴──────┴──────┘
UPDATE
Finally found a faster solution using lazyframe joins. You can shave more time if you happen to know all the values of the index ahead of time, instead of constructing it with unique().
# Convert tables to lazy tables
pls = [dt.lazy() for dt in pls]
%%time
out = (
pl.concat([dt.melt('index') for dt in pls])
.select(pl.col('index').unique())
)
for i in range(20):
out = out.join(pls[i], on='index', how='left')
out.collect()
CPU times: total: 766 ms
Wall time: 277 ms
|
How to efficiently create an index-like Polars DataFrame from multiple sparse series?
|
I would like to create a DataFrame that has an "index" (integer) from a number of (sparse) Series, where the index (or primary key) is NOT necessarily consecutive integers. Each Series is like a vector of (index, value) tuple or {index: value} mapping.
(1) A small example
In Pandas, this is very easy as we can create a DataFrame at a time, like
>>> pd.DataFrame({
"A": {0: 'a', 20: 'b', 40: 'c'},
"B": {10: 'd', 20: 'e', 30: 'f'},
"C": {20: 'g', 30: 'h'},
}).sort_index()
A B C
0 a NaN NaN
10 NaN d NaN
20 b e g
30 NaN f h
40 c NaN NaN
but I can't find an easy way to achieve a similar result with Polars. As described in Coming from Pandas, Polars does not use an index unlike Pandas, and each row is indexed by its integer position in the table; so I might need to represent an "indexed" Series with a 2-column DataFrame:
A = pl.DataFrame({ "index": [0, 20, 40], "A": ['a', 'b', 'c'] })
B = pl.DataFrame({ "index": [10, 20, 30], "B": ['d', 'e', 'f'] })
C = pl.DataFrame({ "index": [20, 30], "C": ['g', 'h'] })
I tried to combine these multiple DataFrames, joining on the index column:
>>> A.join(B, on='index', how='outer').join(C, on='index', how='outer').sort(by='index')
shape: (5, 4)
┌───────┬──────┬──────┬──────┐
│ index ┆ A ┆ B ┆ C │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ str ┆ str ┆ str │
╞═══════╪══════╪══════╪══════╡
│ 0 ┆ a ┆ null ┆ null │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 10 ┆ null ┆ d ┆ null │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 20 ┆ b ┆ e ┆ g │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 30 ┆ null ┆ f ┆ h │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 40 ┆ c ┆ null ┆ null │
└───────┴──────┴──────┴──────┘
This gives the result I want, but I wonder:
(i) if there is a more concise way to do this over many columns, and
(ii) how to make this operation as efficient as possible.
Alternatives?
I also tried outer joins as this is one way to combine Dataframes with different number of columns and rows, as described above.
Other alternatives I tried includes diagonal concatenation, but this does not deduplicate or join on index:
>>> pl.concat([A, B, C], how='diagonal')
index A B C
0 0 a None None
1 20 b None None
2 40 c None None
3 10 None d None
4 20 None e None
5 30 None f None
6 20 None None g
7 30 None None h
(2) Efficiently Building a Large Table
The approach I found above gives the desired results, but I feel there must be a better way in terms of performance. Consider a case with larger tables; say 300,000 rows and 20 columns:
N, C = 300000, 20
pls = []
pds = []
for i in range(C):
A = pl.DataFrame({
"index": np.linspace(i, N*3-i, num=N, dtype=np.int32),
f"A{i}": np.arange(N, dtype=np.float32),
})
pls.append(A)
B = A.to_pandas().set_index("index")
pds.append(B)
The approach of joining the frames pairwise is somewhat slower than I expected:
%%time
F = functools.reduce(lambda a, b: a.join(b, on='index', how='outer'), pls)
F.sort(by='index')
CPU times: user 1.49 s, sys: 97.8 ms, total: 1.59 s
Wall time: 611 ms
or than one-pass creation in pd.DataFrame:
%%time
pd.DataFrame({
f"A{i}": pds[i][f'A{i}'] for i in range(C)
}).sort_index()
CPU times: user 230 ms, sys: 50.7 ms, total: 281 ms
Wall time: 281 ms
|
[
"Following your example, but only informing polars on the fact that the \"index\" column is sorted (polars will use fast paths if data is sorted).\nYou can use align_frames together with functools.reduce to get what you want.\nThis is your data creation snippet:\nimport functools\nimport polars as pl\n\nN, C = 300000, 20\npls = []\npds = []\n\nfor i in range(C):\n A = pl.DataFrame({\n \"index\": np.linspace(i, N*3-i, num=N, dtype=np.int32),\n f\"A{i}\": np.arange(N, dtype=np.float32),\n }).with_column(pl.col(\"index\").set_sorted())\n \n pls.append(A)\n \n B = A.to_pandas().set_index(\"index\")\n pds.append(B)\n\nCreating the frame aligned by index. We need to use functools.reduce because align_frames returns a list of new DataFrame objects that are aligned by index.\nframes = pl.align_frames(*pls, on=\"index\")\nfunctools.reduce(lambda a, b: a.with_columns(b.get_columns()), frames)\n\nPerformance\nThe performance is better than the pandas sort_index method.\nPandas\n>>> %%timeit\n>>> pd.DataFrame({\n... f\"A{i}\": pds[i][f'A{i}'] for i in range(C)\n... }).sort_index()\n389 ms ± 8.96 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nPolars\n>>> %%timeit\n>>> frames = pl.align_frames(*pls, on=\"index\")\n>>> functools.reduce(lambda a, b: a.with_columns(b.get_columns()), frames)\n348 ms ± 11.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\n",
"Quick solution\nI've tried a couple of examples but I think pl.from_pandas is faster than any native polars solution I could find.\ndt = pl.from_pandas(\n pd.DataFrame({\n \"A\": {0: 'a', 20: 'b', 40: 'c'},\n \"B\": {10: 'd', 20: 'e', 30: 'f'},\n \"C\": {20: 'g', 30: 'h'},\n }).sort_index().reset_index()\n)\n\nshape: (5, 4)\n┌───────┬──────┬──────┬──────┐\n│ index ┆ A ┆ B ┆ C │\n│ --- ┆ --- ┆ --- ┆ --- │\n│ i64 ┆ str ┆ str ┆ str │\n╞═══════╪══════╪══════╪══════╡\n│ 0 ┆ a ┆ null ┆ null │\n├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤\n│ 10 ┆ null ┆ d ┆ null │\n├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤\n│ 20 ┆ b ┆ e ┆ g │\n├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤\n│ 30 ┆ null ┆ f ┆ h │\n├╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤\n│ 40 ┆ c ┆ null ┆ null │\n└───────┴──────┴──────┴──────┘\n\nUPDATE\nFinally found a faster solution using lazyframe joins. You can shave more time if you happen to know all the values of the index ahead of time, instead of constructing it from an unique.\n# Convert tables to lazy tables\npls = [dt.lazy() for dt in pls]\n\n%%time\nout = (\n pl.concat([dt.melt('index') for dt in pls])\n .select(pl.col('index').unique())\n)\n\nfor i in range(20):\n out = out.join(pls[i], on='index', how='left')\n\nout.collect()\n\nCPU times: total: 766 ms\nWall time: 277 ms\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"dataframe",
"python",
"python_polars",
"rust_polars"
] |
stackoverflow_0074450537_dataframe_python_python_polars_rust_polars.txt
|
Q:
How to combine every 4 lines in a txt file?
I have a txt.file that looks like this:
data1 data2 data3
data4 data5 data6
data7 data8 data9
data10 data11 data12
data13 data14 data15
data16 data17 data18
data19 data20 data21
data22 data23 data24
.
.
.
and I want to rearrange my txt file so that data1 to data12 will be the first line, data13 to data24 will be the second line, and so on. It basically combines every 4 lines into 1 line. The desired output should look like this:
data1 data2 data3 data4 data5 data6 data7 data8 data9 data10 data11 data12
data13 data14 data15 data16 data17 data18 data19 data20 data21 data22 data23 data24
How can I do this in Python?
Thank you for any advices,
Baris
I tried methods shared under various posts but none of them actually worked.
A:
You could try something like this:
with open("text.txt", "r") as f: # load the data
lines = f.readlines()
newlines = []
for i in range(0, len(lines), 4): # step through in blocks of four
newline = lines[i].strip() + " " + lines[i+1].strip() + " " + lines[i+2].strip() + " " + lines[i+3].strip() + " " # add the lines together after stripping the newline characters at the end
newlines.append(newline + "\n") # save them to a list
You would need to add some extra handling for any trailing lines if the number is not evenly divisible by 4.
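A slightly more compact sketch that also copes with a trailing block of fewer than 4 lines (the file names are assumptions, not from the question):
with open("text.txt") as f:
    lines = [line.strip() for line in f if line.strip()]  # drop blank lines
merged = [" ".join(lines[i:i + 4]) for i in range(0, len(lines), 4)]  # the last chunk may be shorter
with open("output.txt", "w") as f:
    f.write("\n".join(merged) + "\n")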
A:
If you have a number of items to form a rectangular array, you can use a numpy reshape:
N = 4
df = pd.read_csv('your_file', sep='\s+', header=None)
df2 = pd.DataFrame(df.to_numpy().reshape(-1, N*df.shape[1]))
Else, a pandas reshape is needed:
N = 4
df = (pd.read_csv('your_file', sep='\s+', header=None)
.stack(dropna=False).to_frame()
.assign(idx=lambda d: d.index.get_level_values(0)//N,
col=lambda d: d.groupby('idx').cumcount(),
)
.pivot(index='idx', columns='col', values=0)
)
Output:
0 1 2 3 4 5 6 7 8 9 10 11
0 data1 data2 data3 data4 data5 data6 data7 data8 data9 data10 data11 data12
1 data13 data14 data15 data16 data17 data18 data19 data20 data21 data22 data23 data24
A:
You may use numpy. It will be just a single reshape operation on your data
import numpy as np
# data.txt:
# data1 data2 data3
# data4 data5 data6
# data7 data8 data9
# data10 data11 data12
# data13 data14 data15
# data16 data17 data18
# data19 data20 data21
# data22 data23 data24
data = np.loadtxt('data.txt', dtype='str')
data_reshaped = data.reshape((2, 12))
print(data_reshaped)
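To avoid hard-coding (2, 12), a small generalization that derives the target shape from the data (assuming the row count is a multiple of 4):
rows, cols = data.shape
data_reshaped = data.reshape(rows // 4, cols * 4)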
|
How to combine every 4 lines in a txt file?
|
I have a txt.file that looks like this:
data1 data2 data3
data4 data5 data6
data7 data8 data9
data10 data11 data12
data13 data14 data15
data16 data17 data18
data19 data20 data21
data22 data23 data24
.
.
.
and I want to rearrange my txt file so that data1 to data12 will be the first line, data13 to data24 will be the second line, and so on. It basically combines every 4 lines into 1 line. The desired output should look like this:
data1 data2 data3 data4 data5 data6 data7 data8 data9 data10 data11 data12
data13 data14 data15 data16 data17 data18 data19 data20 data21 data22 data23 data24
How can I do this in Python?
Thank you for any advices,
Baris
I tried methods shared under various posts but none of them actually worked.
|
[
"You could try something like this:\nwith open(\"text.txt\" \"r\") as f: # load data\n lines = f.readlines()\n\nnewlines = []\nfor i in range(0, len(lines), 4): # step through in blocks of four\n newline = lines[i].strip() + \" \" + lines[i+1].strip() + \" \" + lines[i+2].strip() + \" \" + lines[i+3].strip() + \" \" # add the lines together after stripping the newline characters at the end\n newlines.append(newline + \"\\n\") # save them to a list\n\nYou would need to add some extra handling for any trailing lines if the number is not evenly divisible by 4.\n",
"If you have a number of items to form a rectangular array, you can use a numpy reshape:\nN = 4\ndf = pd.read_csv('your_file', sep='\\s+', header=None)\ndf2 = pd.DataFrame(df.to_numpy().reshape(-1, N*df.shape[1]))\n\nElse, a pandas reshape is needed:\nN = 4\ndf = (pd.read_csv('your_file', sep='\\s+', header=None)\n .stack(dropna=False).to_frame()\n .assign(idx=lambda d: d.index.get_level_values(0)//N,\n col=lambda d: d.groupby('idx').cumcount(),\n )\n .pivot(index='idx', columns='col', values=0)\n \n)\n\nOutput:\n 0 1 2 3 4 5 6 7 8 9 10 11\n0 data1 data2 data3 data4 data5 data6 data7 data8 data9 data10 data11 data12\n1 data13 data14 data15 data16 data17 data18 data19 data20 data21 data22 data23 data24\n\n",
"You may use numpy. It will be just a single reshape operation on your data\nimport numpy as np\n\n# data.txt:\n# data1 data2 data3 \n# data4 data5 data6 \n# data7 data8 data9 \n# data10 data11 data12 \n# data13 data14 data15 \n# data16 data17 data18 \n# data19 data20 data21\n# data22 data23 data24\n\ndata = np.loadtxt('data.txt', dtype='str')\ndata_reshaped = data.reshape((2, 12))\nprint(data_reshaped)\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"pandas",
"python",
"txt"
] |
stackoverflow_0074461272_pandas_python_txt.txt
|
Q:
how to parse mysql database name from database_url
DATABASE_URL- MYSQL://username:password@host:port/database_name
Error: database_name has no attributes.
if 'DATABASE_URL' in os.environ:
url = urlparse(os.getenv['DATABASE_URL'])
g['db'] = mysql.connector.connect(user=url.username,password=url.password, host=url.hostname ,port=url.port,path=url.path[1:])
A:
First of all, url.host would result in:
AttributeError: 'ParseResult' object has no attribute 'host'
use url.hostname instead.
To get the database_name out of the provided URL, use path:
url.path[1:]
An alternative "Don't reinvent the wheel" way to approach the problem would be to use sqlalchemy's make_url(), which is regexp-based:
In [1]: from sqlalchemy.engine.url import make_url
In [2]: url = make_url("MYSQL://username:password@host:100/database_name")
In [3]: print url.username, url.password, url.host, url.port, url.database
username password host 100 database_name
A:
Using standard python3 lib
from urllib.parse import urlparse
dbc = urlparse('mysql://username:password@host:port/database_name')
print(dbc.scheme, dbc.hostname, dbc.username, dbc.password, dbc.path.lstrip('/'))
#output: mysql host username password database_name
A:
Changing the keyword from path to database, i.e. database=url.path[1:], worked for me; mysql.connector.connect expects a database argument, not path.
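To make that concrete, here is a minimal sketch of the corrected connect call from the question:
import os
from urllib.parse import urlparse

import mysql.connector

url = urlparse(os.environ['DATABASE_URL'])  # note: os.getenv('DATABASE_URL') with parentheses also works
db = mysql.connector.connect(user=url.username, password=url.password,
                             host=url.hostname, port=url.port,
                             database=url.path[1:])  # 'database', not 'path'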
A:
update 2022/11/16
Python 3.10.6 (main, Aug 11 2022, 13:47:18) [Clang 12.0.0 (clang-1200.0.32.29)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from sqlalchemy.engine.url import make_url
>>> url = make_url("mysql+pymysql://username:password@host:3306/database_name?charset=utf8")
>>> print(url.get_backend_name(),
... url.get_driver_name(),
... url.username,
... url.password,
... url.host,
... url.port,
... url.database,
... url.query)
mysql pymysql username password host 3306 database_name immutabledict({'charset': 'utf8'})
>>>
|
how to parse mysql database name from database_url
|
DATABASE_URL- MYSQL://username:password@host:port/database_name
Error: database_name has no attributes.
if 'DATABASE_URL' in os.environ:
url = urlparse(os.getenv['DATABASE_URL'])
g['db'] = mysql.connector.connect(user=url.username,password=url.password, host=url.hostname ,port=url.port,path=url.path[1:])
|
[
"First of all, url.host would result into:\n\nAttributeError: 'ParseResult' object has no attribute 'host'\n\nuse url.hostname instead.\nTo get the database_name out of the provided URL, use path:\nurl.path[1:]\n\n\nAn alternative \"Don't reinvent the wheel\" way to approach the problem would be to use sqlalachemy's make_url(), which is regexp-based:\nIn [1]: from sqlalchemy.engine.url import make_url\n\nIn [2]: url = make_url(\"MYSQL://username:password@host:100/database_name\")\n\nIn [3]: print url.username, url.password, url.host, url.port, url.database\nusername password host 100 database_name\n\n",
"Using standard python3 lib\nfrom urllib.parse import urlparse\n\ndbc = urlparse('mysql://username:password@host:port/database_name')\nprint(dbc.scheme, dbc.hostname, dbc.username, dbc.password, dbc.path.lstrip('/'))\n#output: mysql host username password database_name\n\n",
"changing path to 'database':url.path[1:] worked for me.\n",
"update 2022/11/16\nPython 3.10.6 (main, Aug 11 2022, 13:47:18) [Clang 12.0.0 (clang-1200.0.32.29)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from sqlalchemy.engine.url import make_url\n>>> url = make_url(\"mysql+pymysql://username:password@host:3306/database_name?charset=utf8\")\n>>> print(url.get_backend_name(),\n... url.get_driver_name(),\n... url.username,\n... url.password,\n... url.host,\n... url.port,\n... url.database,\n... url.query)\nmysql pymysql username password host 3306 database_name immutabledict({'charset': 'utf8'})\n>>>\n\n"
] |
[
23,
11,
1,
0
] |
[] |
[] |
[
"database_connection",
"mysql",
"python",
"urlparse"
] |
stackoverflow_0031036453_database_connection_mysql_python_urlparse.txt
|
Q:
I need to retrieve historical information from the http://service.iris.edu/fdsnws/dataselect/docs/1/builder/ API
I need to retrieve historical data for the earthquakes in Japan and Chile, and I know this website has an API. Nevertheless, I cannot seem to figure out how to use it correctly.
Help will be truly appreciated.
A:
You can use the URL builder that is on the page you posted. Then you make a GET request in your Python code to the generated URL.
This is a tutorial on how to make a GET request in Python: https://www.geeksforgeeks.org/get-post-requests-using-python/
Then you will receive a response with the desired data.
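As a minimal sketch, assuming you already generated a query URL with the builder page (the network/station/channel/time values below are placeholders; replace them with the ones from your own query):
import requests

url = ('http://service.iris.edu/fdsnws/dataselect/1/query'
       '?net=IU&sta=ANMO&loc=00&cha=BHZ'
       '&starttime=2010-02-27T06:30:00&endtime=2010-02-27T10:30:00')
response = requests.get(url)
if response.ok:
    with open('data.mseed', 'wb') as f:  # dataselect returns binary miniSEED waveform data
        f.write(response.content)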
|
I need to retrieve historical information from the http://service.iris.edu/fdsnws/dataselect/docs/1/builder/ API
|
I need to retrieve historical data for the earthquakes in Japan and Chile, and I know this website has an API. Nevertheless, I cannot seem to figure out how to use it correctly.
Help will be truly appreciated.
|
[
"You can use the URL builder that i on the page you posted. Than you make GET request in you python code to the generated url.\nThis is tutorial how to make GET request in Python: https://www.geeksforgeeks.org/get-post-requests-using-python/\nThen you will recieve response with desired data.\n"
] |
[
0
] |
[] |
[] |
[
"api",
"python"
] |
stackoverflow_0074461685_api_python.txt
|
Q:
How do I get my python code that I transferred to pc to work?
I have a problem with my Python. I have copied and pasted everything from my Raspberry Pi to my PC, downloaded Visual Studio Code to run it, and installed Python and guizero on my PC via the command terminal. But even then, when I run my code, it opens a window for about 1 second without loading and then immediately shuts. Can anyone help? My code is here:
from guizero import App, Combo, Text, CheckBox, ButtonGroup, PushButton, info
def do_booking():
info('Booking', 'Thank you for Booking')
print( film_choice.value )
print( vip_seat.value )
print( row_choice.value )
app = App(title="My second GUI app", width=500, height=200, layout="grid")
film_discription = Text(app, text='Which film do you want to watch?', grid=[0,0], align='left')
film_choice = Combo(app, options=['Star Wars', 'Harry Potter', 'Frozen', 'Lion King'], grid=[1,0], align='left')
vip_seat_discription = Text(app, text='VIP is more comfy and can go down', grid=[0,1], align='left')
vip_seat = CheckBox(app, text='VIP seat?', grid=[1,1], align='left')
row_choice_disciption = Text(app, text='Where do you want to sit?', grid=[0,2], align='left')
row_choice = ButtonGroup(app, options=[ ['Front', 'F'], ['Middle', 'M'], ['Back', 'B'] ], selected='M', horizontal=True, grid=[1,2], align='left')
book_seats = PushButton(app, command=do_booking, text='Book Seat', grid=[1,3], align='left')
app.display
I have tried importing a wait command and using that to stop it from closing, but it just says "not responding", and then when the wait is up, it shuts.
A:
You're missing the open/close brackets (parentheses) at the end of app.display() :)
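For reference, only the last line needs to change:
app.display()  # the parentheses actually call the method; without them the window is never kept open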
|
How do I get my python code that I transferred to pc to work?
|
I have a problem with my Python. I have copied and pasted everything from my Raspberry Pi to my PC, downloaded Visual Studio Code to run it, and installed Python and guizero on my PC via the command terminal. But even then, when I run my code, it opens a window for about 1 second without loading and then immediately shuts. Can anyone help? My code is here:
from guizero import App, Combo, Text, CheckBox, ButtonGroup, PushButton, info
def do_booking():
info('Booking', 'Thank you for Booking')
print( film_choice.value )
print( vip_seat.value )
print( row_choice.value )
app = App(title="My second GUI app", width=500, height=200, layout="grid")
film_discription = Text(app, text='Which film do you want to watch?', grid=[0,0], align='left')
film_choice = Combo(app, options=['Star Wars', 'Harry Potter', 'Frozen', 'Lion King'], grid=[1,0], align='left')
vip_seat_discription = Text(app, text='VIP is more comfy and can go down', grid=[0,1], align='left')
vip_seat = CheckBox(app, text='VIP seat?', grid=[1,1], align='left')
row_choice_disciption = Text(app, text='Where do you want to sit?', grid=[0,2], align='left')
row_choice = ButtonGroup(app, options=[ ['Front', 'F'], ['Middle', 'M'], ['Back', 'B'] ], selected='M', horizontal=True, grid=[1,2], align='left')
book_seats = PushButton(app, command=do_booking, text='Book Seat', grid=[1,3], align='left')
app.display
I have tried importing a wait command and using that to stop it from closing, but it just says "not responding", and then when the wait is up, it shuts.
|
[
"You're missing the open close brackets at the end of app.display() :)\n"
] |
[
1
] |
[] |
[] |
[
"guizero",
"python",
"windows"
] |
stackoverflow_0074430459_guizero_python_windows.txt
|
Q:
Tkinter - Use characters/bytes offset as index for text widget
I want to delete part of a text widget's content, using only character offset (or bytes if possible).
I know how to do it for lines, words, etc. I looked through a lot of documentation:
https://www.tcl.tk/man/tcl8.6/TkCmd/text.html#M24
https://tkdocs.com/tutorial/text.html
https://anzeljg.github.io/rin2/book2/2405/docs/tkinter/text-methods.html
https://web.archive.org/web/20120112185338/http://effbot.org/tkinterbook/text.htm
Here is an example mre:
import tkinter as tk
root = tk.Tk()
text = tk.Text(root)
txt = """Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Suspendisse enim lorem, aliquam quis quam sit amet, pharetra porta lectus.
Nam commodo imperdiet sapien, in maximus nibh vestibulum nec.
Quisque rutrum massa eget viverra viverra. Vivamus hendrerit ultricies nibh, ac tincidunt nibh eleifend a. Nulla in dolor consequat, fermentum quam quis, euismod dui.
Nam at gravida nisi. Cras ut varius odio, viverra molestie arcu.
Pellentesque scelerisque eros sit amet sollicitudin venenatis.
Proin fermentum vestibulum risus, quis suscipit velit rutrum id.
Phasellus nisl justo, bibendum non dictum vel, fermentum quis ipsum.
Nunc rutrum nulla quam, ac pretium felis dictum in. Sed ut vestibulum risus, suscipit tempus enim.
Nunc a imperdiet augue.
Nullam iaculis consectetur sodales.
Praesent neque turpis, accumsan ultricies diam in, fermentum semper nibh.
Nullam eget aliquet urna, at interdum odio. Nulla in mi elementum, finibus risus aliquam, sodales ante.
Aenean ut tristique urna, sit amet condimentum quam. Mauris ac mollis nisi.
Proin rhoncus, ex venenatis varius sollicitudin, urna nibh fringilla sapien, eu porttitor felis urna eu mi.
Aliquam aliquam metus non lobortis consequat.
Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Aenean id orci dui."""
text.insert(tk.INSERT, txt)
def test_delete(event=None):
text.delete() # change this line here
text.pack(fill="both", expand=1)
text.pack_propagate(0)
text.bind('<Control-e>', test_delete)
root.mainloop()
It displays example text from a variable inside a text widget. I use a single key binding to test some of the possible ways to do what I want on that piece of text.
I tried a lot of things, both from the documentation(s) and my own desperation:
text.delete(0.X): where X is any number. I thought since lines were 1.0, maybe using 0.X would work on chars only. It only works on a single char, regardless of what X is (even with a big number).
text.delete(1.1, 1.3): This acts on the same line, because I was trying to see if it would delete 3 chars in any direction on the same line. It deletes 2 chars instead of 3, and it does so by omitting one char at the start of the first line and deleting 2 chars after that.
text.delete("end - 9c"): only works at the end (last line); it skips 7 chars back from EOF and then deletes a single char after that.
text.delete(0.1, 0.2): Does not do anything. Same result for other 0.X, 0.X combinations.
Example of what I am trying to achieve:
Using the example text above would take too long, so let's consider a smaller string, say "hello world".
Now let's say we use an index that starts at 1 (it doesn't matter, but it makes things easier to explain); the first char is "h" and the last one is "d". So say I use a chars range such as "2-7": that would be "ello w". Say I want to do "1-8"? -> "hello wo", and now starting from the end, "11-2" -> "ello world".
This is basically similar to what f.tell() and f.seek() do. I want to do something like that but using only the content inside of the text widget, and then do something on those bytes/chars ranges (in the example above, I'm deleting them, etc).
A:
TL;DR
You can use a relative index similar to f.tell() by giving a starting index and then add or remove lines or characters. For example, text.delete("1.0", "1.0+11c") ("1.0" plus 11 characters)
The canonical documentation for text widget indexes is in the tcl/tk man pages in a section named Indices.
text.delete(0.X): where X is any number. I thought since lines were 1.0, maybe using 0.X would work on chars only. It only work with a single char, regardless of what X is (even with a big number).
I don't know what you mean by "since lines were 1.0". The first part of the index is the line number, the second is the character number. Lines start counting at 1, characters at zero. So, the first character of the widget is "1.0". The first character of line 2 is "2.0", etc.
But yes, text.delete with a single index will only delete one character. That is the defined behavior.
text.delete(1.1, 1.3): This act on the same line, because I was trying to see if it would delete 3 chars in any direction on the same line.
The delete method is documented to delete from the first index to the character before the last index:
"Delete a range of characters from the text. If both index1 and index2 are specified, then delete all the characters starting with the one given by index1 and stopping just before index2"
text.delete("end - 9c"): only work at the end (last line), and omit 7 chars starting from EOF, and then delete a single char after that.
Yes. Again, a single index given to delete will delete just a single character.
text.delete(0.1, 0.2): Does not do anything. Same result for other 0.X, 0.X combination.
0.1 is an invalid index. An index is a string, not a floating point number, and the first number should be 1 or greater. Tkinter has to convert that number to a whole number greater than or equal to 1. So, 0.1 and 0.2 are both converted to mean "1.0". Like I said earlier, the delete method stops before the second index, so you're deleting everything before character "1.0".
Using the example text above would take too long, so let's consider a smaller string, say "hello world". Now let's say we use an index that start with 1 (doesn't matter but make things easier to explain), the first char is "h" and the last one is "d". So say I use chars range such as "2-7", that would be "ello wo". Say I want to do "1-8"? -> "hello wo", and now starting from the end, "11-2", "ello world".
If "hello world" starts at character position "1.0", and you want to use a relative index to delete a range of characters, you can delete it with something like text.delete("1.0", "1.0+11c") ("1.0" plus 11 characters).
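To tie this back to the "hello world" example, a minimal runnable sketch (the 1-based inclusive range 2-7 becomes "1.0+1c" through "1.0+7c", because the second index is exclusive):
import tkinter as tk

root = tk.Tk()
text = tk.Text(root)
text.insert("1.0", "hello world")
text.delete("1.0+1c", "1.0+7c")   # removes "ello w"
print(text.get("1.0", "end-1c"))  # -> horld
root.destroy()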
A:
I emulated f.seek and combined it with text.delete. It seems what you were basically missing is that you need to take the insertion cursor into account. See the comments in the code:
def seek_delete(offset, whence):
if whence == 0: #from the beginning
start = '1.0'
end = f'{start} +{offset} chars'
elif whence == 1:# from insertion cursor
current = 'insert'
if offset >= 0:#positive offset
start = current
end = f'{start} +{offset} chars'
else:#negative offset
start = f'{current} {offset} chars'
end = current
elif whence == 2:#from the end
start = f'end {offset} chars'
end = 'end'
text.delete(start, end)
I have tested it with different values with this binding:
text.bind('<Control-e>', lambda e:seek_delete(-2,1))
As a bonus, you can emulate f.tell quite easily like this:
def tell(event):
print(text.index('insert'))
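If you want an absolute character offset (closer to what f.tell returns) rather than a line.column index, a minimal sketch is to count the characters before the insertion cursor:
def tell_offset(event=None):
    # number of characters between the start of the widget and the insertion cursor
    print(len(text.get('1.0', 'insert')))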
A:
Based on my own relentless testing and other answers here, I managed to get to a solution.
import tkinter as tk
from tkinter import messagebox # https://stackoverflow.com/a/29780454/12349101
root = tk.Tk()
main_text = tk.Text(root)
box_text = tk.Text(root, height=1, width=10)
box_text.pack()
txt = """hello world"""
len_txt = len(
txt) # get the total length of the text content. Can be replaced by `os.path.getsize` or other alternatives for files
main_text.insert(tk.INSERT, txt)
def offset():
inputValue = box_text.get("1.0",
"end-1c") # get the input of the text widget without newline (since it's added by default)
# focusing the other text widget, deleting and re-insert the original text so that the selection/tag is updated (no need to move the mouse to the other widget in this example)
main_text.focus()
main_text.delete("1.0", tk.END)
main_text.insert(tk.INSERT, txt)
to_do = inputValue.split("-")
if len(to_do) == 1: # if length is 1, it probably is a single offset for a single byte/char
to_do.append(to_do[0])
if not to_do[0].isdigit() or not to_do[1].isdigit(): # Only integers are supported
messagebox.showerror("error", "Only integers are supported")
return # trick to prevent the failing range to be executed
if int(to_do[0]) > len_txt or int(to_do[1]) > len_txt: # total length is the maximum range
messagebox.showerror("error",
"One of the integers in the range seems to be bigger than the total length")
return # trick to prevent the failing range to be executed
if to_do[0] == "0" or to_do[1] == "0": # since we don't use a 0 index, this isn't needed
messagebox.showerror("error", "Using zero in this range isn't useful")
return # trick to prevent the failing range to be executed
if int(to_do[0]) > int(to_do[1]): # This is to support reverse range offset, so 11-2 -> 2-11, etc
first = int(to_do[1]) - 1
first = str(first).split("-")[-1:][0]
second = (int(to_do[0]) - len_txt) - 1
second = str(second).split("-")[-1:][0]
else: # use the offset range normally
first = int(to_do[0]) - 1
first = str(first).split("-")[-1:][0]
second = (int(to_do[1]) - len_txt) - 1
second = str(second).split("-")[-1:][0]
print(first, second)
main_text.tag_add("sel", '1.0 + {}c'.format(first), 'end - {}c'.format(second))
buttonCommit = tk.Button(root, text="use offset",
command=offset)
buttonCommit.pack()
main_text.pack(fill="both", expand=1)
main_text.pack_propagate(0)
root.mainloop()
Now the above works, as described in the "hello world" example in my post. It isn't a 1:1 clone/emulation of f.tell() or f.seek(), but I feel like it's close.
The above does not use text.delete but instead select the text, so it's visually less confusing (at least to me).
It works with the following offset type:
reverse range: 11-2 -> 2-11 so the order does not matter
normal range: 2-11, 1-8, 8-10...
single offset: 10 or 10-10 so it can support single char/byte
Now the main thing I noticed is that '1.0 + {}c', 'end - {}c', where {} is the range, works by omitting its given range.
If you were to use 1-3 as a range on the string hello world it would select ello wor. You could say it omitted h and ld\n, with the added newline by Tkinter (which we ignore in the code above unless it's part of the total length variable). The correct offset (or at least the one following the example I gave in the post above) would be 2-9.
P.S.: For this example, you need to click the button after entering the offset range.
|
Tkinter - Use characters/bytes offset as index for text widget
|
I want to delete part of a text widget's content, using only character offset (or bytes if possible).
I know how to do it for lines, words, etc. I looked through a lot of documentation:
https://www.tcl.tk/man/tcl8.6/TkCmd/text.html#M24
https://tkdocs.com/tutorial/text.html
https://anzeljg.github.io/rin2/book2/2405/docs/tkinter/text-methods.html
https://web.archive.org/web/20120112185338/http://effbot.org/tkinterbook/text.htm
Here is an example mre:
import tkinter as tk
root = tk.Tk()
text = tk.Text(root)
txt = """Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Suspendisse enim lorem, aliquam quis quam sit amet, pharetra porta lectus.
Nam commodo imperdiet sapien, in maximus nibh vestibulum nec.
Quisque rutrum massa eget viverra viverra. Vivamus hendrerit ultricies nibh, ac tincidunt nibh eleifend a. Nulla in dolor consequat, fermentum quam quis, euismod dui.
Nam at gravida nisi. Cras ut varius odio, viverra molestie arcu.
Pellentesque scelerisque eros sit amet sollicitudin venenatis.
Proin fermentum vestibulum risus, quis suscipit velit rutrum id.
Phasellus nisl justo, bibendum non dictum vel, fermentum quis ipsum.
Nunc rutrum nulla quam, ac pretium felis dictum in. Sed ut vestibulum risus, suscipit tempus enim.
Nunc a imperdiet augue.
Nullam iaculis consectetur sodales.
Praesent neque turpis, accumsan ultricies diam in, fermentum semper nibh.
Nullam eget aliquet urna, at interdum odio. Nulla in mi elementum, finibus risus aliquam, sodales ante.
Aenean ut tristique urna, sit amet condimentum quam. Mauris ac mollis nisi.
Proin rhoncus, ex venenatis varius sollicitudin, urna nibh fringilla sapien, eu porttitor felis urna eu mi.
Aliquam aliquam metus non lobortis consequat.
Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Aenean id orci dui."""
text.insert(tk.INSERT, txt)
def test_delete(event=None):
text.delete() # change this line here
text.pack(fill="both", expand=1)
text.pack_propagate(0)
text.bind('<Control-e>', test_delete)
root.mainloop()
It displays example text from a variable inside a text widget. I use a single key binding to test some of the possible ways to do what I want on that piece of text.
I tried a lot of things, both from the documentation(s) and my own desperation:
text.delete(0.X): where X is any number. I thought since lines were 1.0, maybe using 0.X would work on chars only. It only works on a single char, regardless of what X is (even with a big number).
text.delete(1.1, 1.3): This acts on the same line, because I was trying to see if it would delete 3 chars in any direction on the same line. It deletes 2 chars instead of 3, and it does so by omitting one char at the start of the first line and deleting 2 chars after that.
text.delete("end - 9c"): only works at the end (last line); it skips 7 chars back from EOF and then deletes a single char after that.
text.delete(0.1, 0.2): Does not do anything. Same result for other 0.X, 0.X combinations.
Example of what I am trying to achieve:
Using the example text above would take too long, so let's consider a smaller string, say "hello world".
Now let's say we use an index that starts at 1 (it doesn't matter, but it makes things easier to explain); the first char is "h" and the last one is "d". So say I use a chars range such as "2-7": that would be "ello w". Say I want to do "1-8"? -> "hello wo", and now starting from the end, "11-2" -> "ello world".
This is basically similar to what f.tell() and f.seek() do. I want to do something like that but using only the content inside of the text widget, and then do something on those bytes/chars ranges (in the example above, I'm deleting them, etc).
|
[
"TL;DR\nYou can use a relative index similar to f.tell() by giving a starting index and then add or remove lines or characters. For example, text.delete(\"1.0\", \"1.0+11c\") (\"1.0\" plus 11 characters)\nThe canonical documentation for text widget indexes is in the tcl/tk man pages in a section named Indices.\n\n\ntext.delete(0.X): where X is any number. I thought since lines were 1.0, maybe using 0.X would work on chars only. It only work with a single char, regardless of what X is (even with a big number).\n\nI don't know what you mean by \"since lines were 1.0\". The first part of the index is the line number, the second is the character number. Lines start counting at 1, characters at zero. So, the first character of the widget is \"1.0\". The first character of line 2 is \"2.0\", etc.\nBut yes, text.delete with a single index will only delete one character. That is the defined behavior.\n\ntext.delete(1.1, 1.3): This act on the same line, because I was trying to see if it would delete 3 chars in any direction on the same line.\n\nThe delete method is documented to delete from the first index to the character before the last index:\n\"Delete a range of characters from the text. If both index1 and index2 are specified, then delete all the characters starting with the one given by index1 and stopping just before index2\"\n\ntext.delete(\"end - 9c\"): only work at the end (last line), and omit 7 chars starting from EOF, and then delete a single char after that.\n\nYes. Again, a single index given to delete will delete just a single character.\n\ntext.delete(0.1, 0.2): Does not do anything. Same result for other 0.X, 0.X combination.\n\n0.1 is an invalid index. An index is a string, not a floating point number, and the first number should be 1 or greater. Tkinter has to convert that number to a whole number greater than or equal to 1.So, both 0.1 and 0.2 are both converted to mean \"1.0\". Like I said earlier, the delete method stops before the second index, so you're deleting everything before character \"1.0\".\n\nUsing the example text above would take too long, so let's consider a smaller string, say \"hello world\". Now let's say we use an index that start with 1 (doesn't matter but make things easier to explain), the first char is \"h\" and the last one is \"d\". So say I use chars range such as \"2-7\", that would be \"ello wo\". Say I want to do \"1-8\"? -> \"hello wo\", and now starting from the end, \"11-2\", \"ello world\".\n\nIf \"hello world\" starts at character position \"1.0\", and you want to use a relative index to delete a range characters, you can delete it with something like text.delete(\"1.0\", \"1.0+11c\") (\"1.0\" plus 11 characters)\n",
"I emulated f.seek and combined it with text.delete. It seems what you basically was missing was that you need to take the insertion cursor into account. See the comments in the code\ndef seek_delete(offset, whence):\n if whence == 0: #from the beginning\n start = '1.0'\n end = f'{start} +{offset} chars'\n elif whence == 1:# from insertion cursor\n current = 'insert'\n if offset >= 0:#positive offset\n start = current\n end = f'{start} +{offset} chars'\n else:#negative offset\n start = f'{current} {offset} chars'\n end = current\n elif whence == 2:#from the end\n start = f'end {offset} chars'\n end = 'end'\n text.delete(start, end)\n\nI have tested it with different values with this binding:\ntext.bind('<Control-e>', lambda e:seek_delete(-2,1))\n\nAs a bonus, you can emulate f.tell quite easy like this:\ndef tell(event):\n print(text.index('insert'))\n\n",
"Based on my own relentless testing and other answers here, I managed to get to a solution.\nimport tkinter as tk\nfrom tkinter import messagebox # https://stackoverflow.com/a/29780454/12349101\n\nroot = tk.Tk()\n\nmain_text = tk.Text(root)\n\nbox_text = tk.Text(root, height=1, width=10)\nbox_text.pack()\n\ntxt = \"\"\"hello world\"\"\"\n\nlen_txt = len(\n txt) # get the total length of the text content. Can be replaced by `os.path.getsize` or other alternatives for files\n\nmain_text.insert(tk.INSERT, txt)\n\n\ndef offset():\n inputValue = box_text.get(\"1.0\",\n \"end-1c\") # get the input of the text widget without newline (since it's added by default)\n\n # focusing the other text widget, deleting and re-insert the original text so that the selection/tag is updated (no need to move the mouse to the other widget in this example)\n main_text.focus()\n main_text.delete(\"1.0\", tk.END)\n main_text.insert(tk.INSERT, txt)\n\n\n to_do = inputValue.split(\"-\")\n\n if len(to_do) == 1: # if length is 1, it probably is a single offset for a single byte/char\n to_do.append(to_do[0])\n\n if not to_do[0].isdigit() or not to_do[1].isdigit(): # Only integers are supported\n messagebox.showerror(\"error\", \"Only integers are supported\")\n return # trick to prevent the failing range to be executed\n\n if int(to_do[0]) > len_txt or int(to_do[1]) > len_txt: # total length is the maximum range\n messagebox.showerror(\"error\",\n \"One of the integers in the range seems to be bigger than the total length\")\n return # trick to prevent the failing range to be executed\n\n if to_do[0] == \"0\" or to_do[1] == \"0\": # since we don't use a 0 index, this isn't needed\n messagebox.showerror(\"error\", \"Using zero in this range isn't useful\")\n return # trick to prevent the failing range to be executed\n\n if int(to_do[0]) > int(to_do[1]): # This is to support reverse range offset, so 11-2 -> 2-11, etc\n first = int(to_do[1]) - 1\n first = str(first).split(\"-\")[-1:][0]\n\n second = (int(to_do[0]) - len_txt) - 1\n second = str(second).split(\"-\")[-1:][0]\n else: # use the offset range normally\n first = int(to_do[0]) - 1\n first = str(first).split(\"-\")[-1:][0]\n\n second = (int(to_do[1]) - len_txt) - 1\n second = str(second).split(\"-\")[-1:][0]\n\n print(first, second)\n main_text.tag_add(\"sel\", '1.0 + {}c'.format(first), 'end - {}c'.format(second))\n\n\nbuttonCommit = tk.Button(root, text=\"use offset\",\n command=offset)\nbuttonCommit.pack()\nmain_text.pack(fill=\"both\", expand=1)\nmain_text.pack_propagate(0)\nroot.mainloop()\n\nNow the above works, as described in the \"hello world\" example in my post. It isn't a 1:1 clone/emulation of f.tell() or f.seek(), but I feel like it's close.\nThe above does not use text.delete but instead select the text, so it's visually less confusing (at least to me).\nIt works with the following offset type:\n\nreverse range: 11-2 -> 2-11 so the order does not matter\nnormal range: 2-11, 1-8, 8-10...\nsingle offset: 10 or 10-10 so it can support single char/byte\n\nNow the main thing I noticed, is that '1.0 + {}c', 'end - {}c' where {} is the range, works by omitting its given range.\nIf you were to use 1-3 as a range on the string hello world it would select ello wor. You could say it omitted h and ld\\n, with the added newline by Tkinter (which we ignore in the code above unless it's part of the total length variable). 
The correct offset (or at least the one following the example I gave in the post above) would be 2-9.\nP.S: For this example, clicking on the button after entering the offsets range is needed.\n"
] |
[
5,
2,
1
] |
[] |
[] |
[
"offset",
"python",
"text",
"tkinter"
] |
stackoverflow_0074447766_offset_python_text_tkinter.txt
|
Q:
Python: How to search for multiple items in a list
So I have a list with number values such as
my_num = [1,2,2,3,4,5]
What I want is code that will check if 1, 2, and 3 are in the list.
What I had in mind was:
if 1 and 2 and 3 in my_num:
do something
but the problem is, if 1 and 3 are in the list, the do something code executes anyway, even without the 2 being there.
A:
Check out the standard library functions any and all. You can write this:
if any(a in my_num for a in (1, 2, 3)):
# do something if one of the numbers is in the list
if all(a in my_num for a in (1, 2, 3)):
# do something if all of them are in the list
A:
Try this:
nums = [1,2,3,4]
>>> if (1 in nums) and (2 in nums) and (3 in nums):
... print('ok')
...
ok
>>> if (1 in nums) and (2 in nums) and (9 in nums):
... print('ok')
...
>>>
A:
if 1 and 2 and 3 in my_num:
is not doing what you think it does: it tests if 1 (which is truthy), and if 2 (which is also truthy), and then if 3 in my_num.
You must test for each condition individually:
if 1 in my_num and 2 in my_num and 3 in my_num:
A:
If the lists are long:
nums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
_in = [2, 3, 4]
if len(set(_in)) == len(set(nums)&set(_in)):
print("ok")
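The same all-items check can also be written with issubset, which reads closer to the intent:
if set(_in).issubset(nums):
    print("ok")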
A:
Building on Paul Cornelius's answer, I added some improvements to make it more understandable.
number_list = [1, 2, 3, 4, 5]
search_nums = [1, 2]
if any(num in number_list for num in search_nums):
# Is any number in search_nums inside of number_list do something
if all(num in number_list for num in search_nums):
# Is all numbers in search_nums inside of number_list do something
Searching list in list of lists
list_number_list = [[1,2,3,4], [5,6,7,8]]
search_nums = [1, 2]
for number_list in list_number_list:
if any(num in number_list for num in search_nums):
# Is any number in search_nums inside of number_list do something
if all(num in number_list for num in search_nums):
# Is all numbers in search_nums inside of number_list do something
|
Python: How to search for multiple items in a list
|
So I have a list with number values such as
my_num = [1,2,2,3,4,5]
What I want is code that will check if 1, 2, and 3 are in the list.
What I had in mind was:
if 1 and 2 and 3 in my_num:
do something
but the problem is, if 1 and 3 are in the list, the do something code executes anyway, even without the 2 being there.
|
[
"Check out the standard library functions any and all. You can write this:\nif any(a in my_num for a in (1, 2, 3)):\n # do something if one of the numbers is in the list\nif all(a in my_num for a in (1, 2, 3)):\n # do something if all of them are in the list\n\n",
"Try this:\nnums = [1,2,3,4]\n>>> if (1 in nums) and (2 in nums) and (3 in nums):\n... print('ok')\n...\nok\n>>> if (1 in nums) and (2 in nums) and (9 in nums):\n... print('ok')\n...\n>>>\n\n",
"if 1 and 2 and 3 in my_num: \n\nis not doing what you think it does: it tests if 1 which is True, and if 2, which is also True, then if 3 in my_num\nYou must test for each condition individually:\nif 1 and in my_num and 2 in my_num and 3 in my_num:\n\n",
"If lists lenghs are long:\nnums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n_in = [2, 3, 4]\nif len(set(_in)) == len(set(nums)&set(_in)):\n print(\"ok\")\n\n",
"Depend on Paul Cornelius answer, I add something and some improvements to make it more understandable\nnumber_list = [1, 2, 3, 4, 5]\nsearch_nums = [1, 2]\nif any(num in number_list for num in search_nums):\n # Is any number in search_nums inside of number_list do something\nif all(num in number_list for num in search_nums):\n # Is all numbers in search_nums inside of number_list do something\n\nSearching list in list of lists\nlist_number_list = [[1,2,3,4], [5,6,7,8]]\nsearch_nums = [1, 2]\nfor number_list in list_number_list:\n if any(num in number_list for num in search_nums):\n # Is any number in search_nums inside of number_list do something\n if all(num in number_list for num in search_nums):\n # Is all numbers in search_nums inside of number_list do something\n\n"
] |
[
5,
2,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0046985602_python.txt
|
Q:
Why define constants in a metaclass?
I've recently inherited some code. It has a class called SystemConfig that acts as a grab-bag of constants that are used across the code base. But while a few of the constants are defined directly on that class, a big pile of them are defined as properties of a metaclass of that class. Like this:
class _MetaSystemConfig(type):
@property
def CONSTANT_1(cls):
return "value 1"
@property
def CONSTANT_2(cls):
return "value 2"
...
class SystemConfig(metaclass=_MetaSystemConfig):
CONSTANT_3 = "value 3"
...
The class is never instantiated; the values are just used as SystemConfig.CONSTANT_1 and so on.
No-one who is still involved in the project seems to have any idea why it was done this way, except that someone seems to think the guy who did it thought it made unit testing easier.
Can someone explain to me any advantages of doing it this way and why I shouldn't just move all the properties to the SystemConfig class and delete the metaclass?
Edit to add: The metaclass definition doesn't contain anything other than properties.
A:
So I figured out why it was done this way. These properties were defined as properties because a number of them depended on each other - one for a directory, another for a subdirectory of that directory, several for files spread across the directories and so forth.
But @property doesn't work on classmethods. Python 3.9 changed @classmethod so that it could be stacked on top of @property, but this support was deprecated again in Python 3.11. So, as a workaround, he put the properties in a metaclass (presumably after seeing this question).
However, implementing a property decorator that works on classmethods is not exactly rocket science, so for the good of whoever comes after me and has to figure out what's going on, I've replaced the metaclass properties with class properties on the SystemConfig class. For anyone else who's trying to figure this out, this works as a decorator:
class class_property:
def __init__(self, _g):
self._g = _g
    def __get__(self, obj, cls):  # obj is None for class-level access; dispatch to the wrapped function with the class
        return self._g(cls)
Implementing a setter appears to be much more difficult, as __set__ is not used when assigning to class variables. But I don't need it.
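A minimal usage sketch with dependent values, using the class_property decorator above (the paths are made-up examples):
import os

class SystemConfig:
    BASE_DIR = "/opt/app"

    @class_property
    def DATA_DIR(cls):  # derived from another constant, which is what motivated the properties
        return os.path.join(cls.BASE_DIR, "data")

print(SystemConfig.DATA_DIR)  # -> /opt/app/data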
A:
Adding a set of constants to a class can be done with a simple decorator and no properties.
def add_constants(cls):
    cls.CONSTANT_1 = "value 1"
    cls.CONSTANT_2 = "value 2"
    return cls  # without returning cls, @add_constants would rebind SystemConfig to None
@add_constants
class SystemConfig:
CONSTANT_3 = "value 3"
I'm not concerned about users shooting themselves in the foot by explicitly assigning a new value to any of the "constants", so I consider jumping through hoops just to add read-only class properties more trouble than it's worth.
The problem with metaclasses is that they don't compose. If C1 uses metaclass M1 and C2 uses metaclass M2, you can't assume that class C3(C1, C2): ... will work, because the two metaclasses may not be compatible. The more metaclasses you introduce to do things you could have done without a metaclass, the more problems like this can arise. Use metaclasses when you have no other choice, not just because you think it's a cooler alternative to inheritance or decorators.
|
Why define constants in a metaclass?
|
I've recently inherited some code. It has a class called SystemConfig that acts as a grab-bag of constants that are used across the code base. But while a few of the constants are defined directly on that class, a big pile of them are defined as properties of a metaclass of that class. Like this:
class _MetaSystemConfig(type):
@property
def CONSTANT_1(cls):
return "value 1"
@property
def CONSTANT_2(cls):
return "value 2"
...
class SystemConfig(metaclass=_MetaSystemConfig):
CONSTANT_3 = "value 3"
...
The class is never instantiated; the values are just used as SystemConfig.CONSTANT_1 and so on.
No-one who is still involved in the project seems to have any idea why it was done this way, except that someone seems to think the guy who did it thought it made unit testing easier.
Can someone explain to me any advantages of doing it this way and why I shouldn't just move all the properties to the SystemConfig class and delete the metaclass?
Edit to add: The metaclass definition doesn't contain anything other than properties.
|
[
"So I figured out why it was done this way. These properties were defined as properties because a number of them depended on each other - one for a directory, another for a subdirectory of that directory, several for files spread across the directories and so forth.\nBut @property doesn't work on classmethods. Python 3.9 fixed @classmethod so that it could be stacked on top of @property but this was removed again in Python 3.11. So, as a workaround, he put the properties in a metaclass (presumably after seeing this question).\nHowever, implementing a property decorator that works on classmethods is not exactly rocket science, so for the good of whoever comes after me and has to figure out what's going on, I've replaced the metaclass properties with class properties on the SystemConfig class. For anyone else who's trying to figure this out, this works as a decorator:\nclass class_property:\n def __init__(self, _g):\n self._g = _g\n\n def __get__(_, _, cls):\n return self._g(cls)\n\nImplementing a setter appears to be much more difficult, as __set__ is not used when assigning to class variables. But I don't need it.\n",
"Adding a set of constants to a class can be done with a simple decorator and no properties.\ndef add_constants(cls):\n cls.CONSTANT_1 = \"value 1\"\n cls.CONSTANT_2 = \"value 2\"\n\n\n@add_constants\nclass SystemConfig:\n CONSTANT_3 = \"value 3\"\n\nI'm not concerned about users shooting themselves in the foot by explicitly assigning a new value to any of the \"constants\", so I consider jumping through hoops just to add read-only class properties more trouble than it's worth.\nThe problem with metaclasses is that they don't compose. If C1 uses metaclass M1 and C2 uses metaclass M2, you can't assume that class C3(C1, C2): ... will work, because the two metaclasses may not be compatible. The more metaclasses you introduce to do things you could have done without a metaclass, the more problems like this can arise. Use metaclasses when you have no other choice, not just because you think it's a cooler alternative to inheritance or decorators.\n"
] |
[
1,
0
] |
[] |
[] |
[
"pytest",
"pytest_mock",
"python",
"python_3.x",
"python_unittest"
] |
stackoverflow_0074445635_pytest_pytest_mock_python_python_3.x_python_unittest.txt
|
Q:
pywhatkit opening the same youtube video
When trying to get pywhatkit to open a YouTube video, it works, except it opens the same video every time, one that I did not request:
if 'play' in command:
song = command.replace('play', '')
talk('playing' + song)
pywhatkit.playonyt('song')
It keeps opening this link https://www.youtube.com/watch?v=Jq_WDQsKYu8&ab_channel=PopularMusic
A:
Remove the quotes around song:
pywhatkit.playonyt(song)
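With the quotes removed, the snippet becomes:
if 'play' in command:
    song = command.replace('play', '')
    talk('playing' + song)
    pywhatkit.playonyt(song)  # pass the variable, not the literal string 'song'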
|
pywhatkit opening the same youtube video
|
When trying to get pywhatkit to open a YouTube video, it works, except it opens the same video every time, one that I did not request:
if 'play' in command:
song = command.replace('play', '')
talk('playing' + song)
pywhatkit.playonyt('song')
It keeps opening this link https://www.youtube.com/watch?v=Jq_WDQsKYu8&ab_channel=PopularMusic
|
[
"Remove the quotes around song:\npywhatkit.playonyt(song)\n\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0069992718_python.txt
|
Q:
How do I get time of a Python program's execution?
I have a command line program in Python that takes a while to finish. I want to know the exact time it takes to finish running.
I've looked at the timeit module, but it seems it's only for small snippets of code. I want to time the whole program.
A:
The simplest way in Python:
import time
start_time = time.time()
main()
print("--- %s seconds ---" % (time.time() - start_time))
This assumes that your program takes at least a tenth of a second to run.
Prints:
--- 0.764891862869 seconds ---
A:
In Linux or Unix:
$ time python yourprogram.py
In Windows, see this StackOverflow question: How do I measure execution time of a command on the Windows command line?
For more verbose output,
$ /usr/bin/time -v python yourprogram.py
Command being timed: "python3 yourprogram.py"
User time (seconds): 0.08
System time (seconds): 0.02
Percent of CPU this job got: 98%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.10
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 9480
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 1114
Voluntary context switches: 0
Involuntary context switches: 22
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
A:
I put this timing.py module into my own site-packages directory, and just insert import timing at the top of my module:
import atexit
from time import clock
def secondsToStr(t):
return "%d:%02d:%02d.%03d" % \
reduce(lambda ll,b : divmod(ll[0],b) + ll[1:],
[(t*1000,),1000,60,60])
line = "="*40
def log(s, elapsed=None):
print line
print secondsToStr(clock()), '-', s
if elapsed:
print "Elapsed time:", elapsed
print line
print
def endlog():
end = clock()
elapsed = end-start
log("End Program", secondsToStr(elapsed))
def now():
return secondsToStr(clock())
start = clock()
atexit.register(endlog)
log("Start Program")
I can also call timing.log from within my program if there are significant stages within the program I want to show. But just including import timing will print the start and end times, and overall elapsed time. (Forgive my obscure secondsToStr function, it just formats a floating point number of seconds to hh:mm:ss.sss form.)
Note: A Python 3 version of the above code can be found here or here.
A:
I like the output the datetime module provides, where time delta objects show days, hours, minutes, etc. as necessary in a human-readable way.
For example:
from datetime import datetime
start_time = datetime.now()
# do your work here
end_time = datetime.now()
print('Duration: {}'.format(end_time - start_time))
Sample output e.g.
Duration: 0:00:08.309267
or
Duration: 1 day, 1:51:24.269711
As J.F. Sebastian mentioned, this approach might encounter some tricky cases with local time, so it's safer to use:
import time
from datetime import timedelta
start_time = time.monotonic()
end_time = time.monotonic()
print(timedelta(seconds=end_time - start_time))
A:
import time
start_time = time.clock()
main()
print(time.clock() - start_time, "seconds")
time.clock() returns the processor time, which allows us to calculate only the time used by this process (on Unix anyway). The documentation says "in any case, this is the function to use for benchmarking Python or timing algorithms"
A:
I really like Paul McGuire's answer, but I use Python 3. So for those who are interested: here's a modification of his answer that works with Python 3 on *nix (I imagine, under Windows, that clock() should be used instead of time()):
#python3
import atexit
from time import time, strftime, localtime
from datetime import timedelta
def secondsToStr(elapsed=None):
if elapsed is None:
return strftime("%Y-%m-%d %H:%M:%S", localtime())
else:
return str(timedelta(seconds=elapsed))
def log(s, elapsed=None):
line = "="*40
print(line)
print(secondsToStr(), '-', s)
if elapsed:
print("Elapsed time:", elapsed)
print(line)
print()
def endlog():
end = time()
elapsed = end-start
log("End Program", secondsToStr(elapsed))
start = time()
atexit.register(endlog)
log("Start Program")
If you find this useful, you should still up-vote his answer instead of this one, as he did most of the work ;).
A:
You can use the Python profiler cProfile to measure CPU time and additionally how much time is spent inside each function and how many times each function is called. This is very useful if you want to improve performance of your script without knowing where to start. This answer to another Stack Overflow question is pretty good. It's always good to have a look in the documentation too.
Here's an example how to profile a script using cProfile from a command line:
$ python -m cProfile euler048.py
1007 function calls in 0.061 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.061 0.061 <string>:1(<module>)
1000 0.051 0.000 0.051 0.000 euler048.py:2(<lambda>)
1 0.005 0.005 0.061 0.061 euler048.py:2(<module>)
1 0.000 0.000 0.061 0.061 {execfile}
1 0.002 0.002 0.053 0.053 {map}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1 0.000 0.000 0.000 0.000 {range}
1 0.003 0.003 0.003 0.003 {sum}
A:
Just use the timeit module. It works with both Python 2 and Python 3.
import timeit
start = timeit.default_timer()
# All the program statements
stop = timeit.default_timer()
execution_time = stop - start
print("Program Executed in "+str(execution_time)) # It returns time in seconds
It returns the time in seconds. It is simple, but you should write these lines in the main function that starts program execution. If you want to get the execution time even when an error is raised, pass the start parameter into the function and compute it there, like:
def sample_function(start,**kwargs):
try:
# Your statements
except:
# except statements run when your statements raise an exception
stop = timeit.default_timer()
execution_time = stop - start
print("Program executed in " + str(execution_time))
A:
time.clock()
Deprecated since version 3.3: The behavior of this function depends
on the platform: use perf_counter() or process_time() instead,
depending on your requirements, to have a well-defined behavior.
time.perf_counter()
Return the value (in fractional seconds) of a performance counter,
i.e. a clock with the highest available resolution to measure a short
duration. It does include time elapsed during sleep and is
system-wide.
time.process_time()
Return the value (in fractional seconds) of the sum of the system and
user CPU time of the current process. It does not include time elapsed
during sleep.
start = time.process_time()
... do something
elapsed = (time.process_time() - start)
A:
time.clock has been deprecated in Python 3.3 and will be removed from Python 3.8: use time.perf_counter or time.process_time instead
import time
start_time = time.perf_counter()
for x in range(1, 100):
print(x)
end_time = time.perf_counter()
print(end_time - start_time, "seconds")
A:
For the data folks using Jupyter Notebook
In a cell, you can use Jupyter's %%time magic command to measure the execution time:
%%time
[ x**2 for x in range(10000)]
Output
CPU times: user 4.54 ms, sys: 0 ns, total: 4.54 ms
Wall time: 4.12 ms
This will only capture the execution time of a particular cell. If you'd like to capture the execution time of the whole notebook (i.e. program), you can create a new notebook in the same directory and in the new notebook execute all cells:
Suppose the notebook above is called example_notebook.ipynb. In a new notebook within the same directory:
# Convert your notebook to a .py script:
!jupyter nbconvert --to script example_notebook.ipynb
# Run the example_notebook with -t flag for time
%run -t example_notebook
Output
IPython CPU timings (estimated):
User : 0.00 s.
System : 0.00 s.
Wall time: 0.00 s.
A:
The following snippet prints elapsed time in a nice human readable <HH:MM:SS> format.
import time
from datetime import timedelta
start_time = time.time()
#
# Perform lots of computations.
#
elapsed_time_secs = time.time() - start_time
msg = "Execution took: %s secs (Wall clock time)" % timedelta(seconds=round(elapsed_time_secs))
print(msg)
A:
Similar to the response from @rogeriopvl, I added a slight modification to convert to hours/minutes/seconds using the same library, for long-running jobs.
import time
start_time = time.time()
main()
seconds = time.time() - start_time
print('Time Taken:', time.strftime("%H:%M:%S",time.gmtime(seconds)))
Sample Output
Time Taken: 00:00:08
A:
For functions, I suggest using this simple decorator I created.
def timeit(method):
def timed(*args, **kwargs):
ts = time.time()
result = method(*args, **kwargs)
te = time.time()
if 'log_time' in kwargs:
name = kwargs.get('log_name', method.__name__.upper())
kwargs['log_time'][name] = int((te - ts) * 1000)
else:
print('%r %2.22f ms' % (method.__name__, (te - ts) * 1000))
return result
return timed
@timeit
def foo():
do_some_work()
# foo()
# 'foo' 0.000953 ms
A:
from time import time
start_time = time()
...
end_time = time()
time_taken = end_time - start_time # time_taken is in seconds
hours, rest = divmod(time_taken,3600)
minutes, seconds = divmod(rest, 60)
A:
I was having the same problem in many places, so I created a convenience package horology. You can install it with pip install horology and then do it in the elegant way:
from horology import Timing
with Timing(name='Important calculations: '):
prepare()
do_your_stuff()
finish_sth()
will output:
Important calculations: 12.43 ms
Or even simpler (if you have one function):
from horology import timed
@timed
def main():
...
will output:
main: 7.12 h
It takes care of units and rounding. It works with python 3.6 or newer.
A:
I've looked at the timeit module, but it seems it's only for small snippets of code. I want to time the whole program.
$ python -mtimeit -n1 -r1 -t -s "from your_module import main" "main()"
It runs the your_module.main() function one time and prints the elapsed time, using the time.time() function as a timer.
To emulate /usr/bin/time in Python see Python subprocess with /usr/bin/time: how to capture timing info but ignore all other output?.
To measure CPU time (e.g., don't include time during time.sleep()) for each function, you could use profile module (cProfile on Python 2):
$ python3 -mprofile your_module.py
You could pass -p to the timeit command above if you want to use the same timer that the profile module uses.
See How can you profile a Python script?
A:
I liked Paul McGuire's answer too and came up with a context manager form which suited my needs more.
import datetime as dt
import timeit
class TimingManager(object):
"""Context Manager used with the statement 'with' to time some execution.
Example:
with TimingManager() as t:
# Code to time
"""
clock = timeit.default_timer
def __enter__(self):
"""
"""
self.start = self.clock()
self.log('\n=> Start Timing: {}')
return self
def __exit__(self, exc_type, exc_val, exc_tb):
"""
"""
self.endlog()
return False
def log(self, s, elapsed=None):
"""Log current time and elapsed time if present.
:param s: Text to display, use '{}' to format the text with
the current time.
:param elapsed: Elapsed time to display. Dafault: None, no display.
"""
print s.format(self._secondsToStr(self.clock()))
if(elapsed is not None):
print 'Elapsed time: {}\n'.format(elapsed)
def endlog(self):
"""Log time for the end of execution with elapsed time.
"""
self.log('=> End Timing: {}', self.now())
def now(self):
"""Return current elapsed time as hh:mm:ss string.
:return: String.
"""
return str(dt.timedelta(seconds = self.clock() - self.start))
def _secondsToStr(self, sec):
"""Convert timestamp to h:mm:ss string.
:param sec: Timestamp.
"""
return str(dt.datetime.fromtimestamp(sec))
A:
In IPython, "timeit" any script:
def foo():
%run bar.py
timeit foo()
A:
I used a very simple function to time a part of code execution:
import time
def timing():
start_time = time.time()
return lambda x: print("[{:.2f}s] {}".format(time.time() - start_time, x))
To use it, call it before the code you want to measure to retrieve the timing function, then call that function after the code, passing a comment. The time will appear in front of the comment. For example:
t = timing()
train = pd.read_csv('train.csv',
dtype={
'id': str,
'vendor_id': str,
'pickup_datetime': str,
'dropoff_datetime': str,
'passenger_count': int,
'pickup_longitude': np.float64,
'pickup_latitude': np.float64,
'dropoff_longitude': np.float64,
'dropoff_latitude': np.float64,
'store_and_fwd_flag': str,
'trip_duration': int,
},
parse_dates = ['pickup_datetime', 'dropoff_datetime'],
)
t("Loaded {} rows data from 'train'".format(len(train)))
Then the output will look like this:
[9.35s] Loaded 1458644 rows data from 'train'
A:
Use line_profiler.
line_profiler will profile the time individual lines of code take to execute. The profiler is implemented in C via Cython in order to reduce the overhead of profiling.
from line_profiler import LineProfiler
import random
def do_stuff(numbers):
s = sum(numbers)
l = [numbers[i]/43 for i in range(len(numbers))]
m = ['hello'+str(numbers[i]) for i in range(len(numbers))]
numbers = [random.randint(1,100) for i in range(1000)]
lp = LineProfiler()
lp_wrapper = lp(do_stuff)
lp_wrapper(numbers)
lp.print_stats()
The results will be:
Timer unit: 1e-06 s
Total time: 0.000649 s
File: <ipython-input-2-2e060b054fea>
Function: do_stuff at line 4
Line # Hits Time Per Hit % Time Line Contents
==============================================================
4 def do_stuff(numbers):
5 1 10 10.0 1.5 s = sum(numbers)
6 1 186 186.0 28.7 l = [numbers[i]/43 for i in range(len(numbers))]
7 1 453 453.0 69.8 m = ['hello'+str(numbers[i]) for i in range(len(numbers))]
A:
I tried it and measured the time difference using the following script:
import time
start_time = time.perf_counter()
[main code here]
print (time.perf_counter() - start_time, "seconds")
A:
timeit is a module in Python used to measure the execution time of small blocks of code.
default_timer is a function in this module that measures wall-clock time, not CPU execution time. Other process execution might therefore interfere with it, which is why it is best suited to small blocks of code.
A sample of the code is as follows:
from timeit import default_timer as timer
start= timer()
# Some logic
end = timer()
print("Time taken:", end-start)
A:
You can do this simply in Python. There is no need to make it complicated. Note, though, that subtracting the raw struct_time fields goes negative whenever the clock rolls over a minute or hour boundary, so this only works for short, simple cases.
import time

start = time.localtime()
# ... your code here ...
end = time.localtime()

# Total execution time in minutes
print(end.tm_min - start.tm_min)

# Total execution time in seconds
print(end.tm_sec - start.tm_sec)
A:
Later answer, but I use the built-in timeit:
import timeit
code_to_test = """
a = range(100000)
b = []
for i in a:
b.append(i*2)
"""
elapsed_time = timeit.timeit(code_to_test, number=500)
print(elapsed_time)
# 10.159821493085474
Wrap all your code, including any imports you may have, inside code_to_test.
The number argument specifies how many times the code should be repeated.
Demo
A:
First, install the humanfriendly package by opening Command Prompt (CMD) as administrator and typing:
pip install humanfriendly
Code:
from humanfriendly import format_timespan
import time
begin_time = time.time()
# Put your code here
end_time = time.time() - begin_time
print("Total execution time: ", format_timespan(end_time))
Output: a human-readable duration, e.g. "Total execution time: 2.5 seconds"
A:
There is a timeit module which can be used to time the execution times of Python code.
It has detailed documentation and examples in Python documentation, 26.6. timeit — Measure execution time of small code snippets.
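For reference, a minimal sketch of using the module directly (the statement and repetition count here are illustrative):
import timeit
elapsed = timeit.timeit("sum(range(100))", number=10000)
print(elapsed)  # total seconds for all 10000 runs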
A:
Following this answer created a simple but convenient instrument.
import time
from datetime import timedelta
def start_time_measure(message=None):
if message:
print(message)
return time.monotonic()
def end_time_measure(start_time, print_prefix=None):
end_time = time.monotonic()
if print_prefix:
print(print_prefix + str(timedelta(seconds=end_time - start_time)))
return end_time
Usage:
total_start_time = start_time_measure()
start_time = start_time_measure('Doing something...')
# Do something
end_time_measure(start_time, 'Done in: ')
start_time = start_time_measure('Doing something else...')
# Do something else
end_time_measure(start_time, 'Done in: ')
end_time_measure(total_start_time, 'Total time: ')
The output:
Doing something...
Done in: 0:00:01.218000
Doing something else...
Done in: 0:00:01.313000
Total time: 0:00:02.672000
A:
This is Paul McGuire's answer that works for me. Just in case someone was having trouble running that one.
import atexit
from time import perf_counter as clock  # time.clock was removed in Python 3.8
def reduce(function, iterable, initializer=None):
it = iter(iterable)
if initializer is None:
value = next(it)
else:
value = initializer
for element in it:
value = function(value, element)
return value
def secondsToStr(t):
return "%d:%02d:%02d.%03d" % \
reduce(lambda ll,b : divmod(ll[0],b) + ll[1:],
[(t*1000,),1000,60,60])
line = "="*40
def log(s, elapsed=None):
print (line)
print (secondsToStr(clock()), '-', s)
if elapsed:
print ("Elapsed time:", elapsed)
print (line)
def endlog():
end = clock()
elapsed = end-start
log("End Program", secondsToStr(elapsed))
def now():
return secondsToStr(clock())
def main():
    global start  # endlog() above reads this module-level name
    start = clock()
    atexit.register(endlog)
    log("Start Program")
Call timing.main() from your program after importing the file.
A:
Measuring the execution time of a Python program can be inconsistent, because:
Same program can be evaluated using different algorithms
Running time varies between algorithms
Running time varies between implementations
Running time varies between computers
Running time is not predictable based on small inputs
Because of this, the most reliable approach is to reason about the order of growth and learn Big-O notation to do it properly.
Anyway, you can try to evaluate the performance of any Python program on a specific machine, counting steps per second, using this simple algorithm:
adapt this to the program you want to evaluate
import time
now = time.time()
future = now + 10
step = 4 # Why 4 steps? Because four operations have already been executed up to here
while time.time() < future:
step += 3 # Why 3 again? Because a while loop executes one comparison and one plus equal statement
step += 4 # Why 4 more? Because of the final while comparison when time is over, plus this assignment and the print statement
print(str(int(step / 10)) + " steps per second")
A:
This is the simplest way to get the elapsed time for the program:
Write the following code at the end of your program.
import time
print(time.process_time())  # time.clock() was removed in Python 3.8; use process_time() or perf_counter() instead
A:
I use tic and toc from ttictoc.
pip install ttictoc
Then you can use in your script:
from ttictoc import tic,toc
tic()
# foo()
print(toc())
A:
To use metakermit's updated answer for Python 2.7, you will require the monotonic package.
The code would then be as follows:
from datetime import timedelta
from monotonic import monotonic
start_time = monotonic()
end_time = monotonic()
print(timedelta(seconds=end_time - start_time))
A:
If you want to measure time in microseconds, then you can use the following version, based completely on the answers of Paul McGuire and Nicojo - it's Python 3 code. I've also added some colour to it:
import atexit
from time import time
from datetime import timedelta, datetime
def seconds_to_str(elapsed=None):
if elapsed is None:
return datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f")
else:
return str(timedelta(seconds=elapsed))
def log(txt, elapsed=None):
colour_cyan = '\033[36m'
colour_reset = '\033[0;0;39m'
colour_red = '\033[31m'
print('\n ' + colour_cyan + ' [TIMING]> [' + seconds_to_str() + '] ----> ' + txt + '\n' + colour_reset)
if elapsed:
print("\n " + colour_red + " [TIMING]> Elapsed time ==> " + elapsed + "\n" + colour_reset)
def end_log():
end = time()
elapsed = end-start
log("End Program", seconds_to_str(elapsed))
start = time()
atexit.register(end_log)
log("Start Program")
log() => function that prints out the timing information.
txt ==> first argument to log; the string used to label the timing.
atexit ==> Python module to register functions that you can call when the program exits.
A:
By default, Linux or Unix system (tested on macOS) comes with the time command on the terminal that you can use to run the Python script and get the real, user, sys time information for the execution of the running script.
However, the default output isn't very clear (at least for me), and the default time command doesn't even take any options as arguments to format the output. That's because there are two versions of time - one is built into bash that provides just the minimal version and another one is located on /usr/bin/time.
The /usr/bin/time command accepts additional arguments like -al, -h, -p, and -o. My favorite is -p which shows the output in a new line like the following:
real 2.18
user 17.92
sys 2.71
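For example, assuming the script from the question is named yourprogram.py, output like the above comes from:
/usr/bin/time -p python yourprogram.py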
A:
I encountered this problem while measuring the running time of two different methods for finding all the prime numbers <= a number, when user input was taken in the program.
WRONG APPROACH
#Sample input for a number 20
#Sample output [2, 3, 5, 7, 11, 13, 17, 19]
#Total Running time = 0.634 seconds
import time
start_time = time.time()
#Method 1 to find all the prime numbers <= a Number
# Function to check whether a number is prime or not.
def prime_no(num):
    if num < 2:
        return False
    else:
        for i in range(2, num//2 + 1):
            if num % i == 0:
                return False
        return True
#To print all the values <= n
def Prime_under_num(n):
a = [2]
if n <2:
print("None")
elif n==2:
print(2)
    else:
        # Neglecting all even numbers, as even numbers won't be prime, in order to reduce the time complexity.
        for i in range(3, n+1, 2):
            if prime_no(i):
                a.append(i)
    print(a)
"When Method 1 is only used outputs of running time for different inputs"
#Total Running time = 2.73761 seconds #n = 100
#Total Running time = 3.14781 seconds #n = 1000
#Total Running time = 8.69278 seconds #n = 10000
#Total Running time = 18.73701 seconds #n = 100000
#Method 2 to find all the prime numbers <= a Number
def Prime_under_num(n):
a = [2]
if n <2:
print("None")
elif n==2:
print(2)
else:
for i in range(3, n+1, 2):
if n%i ==0:
pass
else:
a.append(i)
print(a)
"When Method 2 is only used outputs of running time for different inputs"
# Total Running time = 2.75935 seconds #n = 100
# Total Running time = 2.86332 seconds #n = 1000
# Total Running time = 4.59884 seconds #n = 10000
# Total Running time = 8.55057 seconds #n = 100000
if __name__ == "__main__" :
n = int(input())
Prime_under_num(n)
print("Total Running time = {:.5f} seconds".format(time.time() - start_time))
The different running times obtained for all the above cases are wrong. For problems where we take an input, we have to start the timer only after taking the input. Here the time taken by the user to type the input is also included in the running time.
CORRECT APPROACH
We have to remove the start_time = time.time() from the beginning and add it in the main block.
if __name__ == "__main__" :
n = int(input())
start_time = time.time()
Prime_under_num(n)
print("Total Running time = {:.3f} seconds".format(time.time() - start_time))
Thus the output for each of the two methods when used alone will be as follows:
# Method 1
# Total Running time = 0.00159 seconds #n = 100
# Total Running time = 0.00506 seconds #n = 1000
# Total Running time = 0.22987 seconds #n = 10000
# Total Running time = 18.55819 seconds #n = 100000
# Method 2
# Total Running time = 0.00011 seconds #n = 100
# Total Running time = 0.00118 seconds #n = 1000
# Total Running time = 0.00302 seconds #n = 10000
# Total Running time = 0.01450 seconds #n = 100000
Now we can see there is a significant difference in total running time compared with the WRONG APPROACH. Even though method 2 performs better than method 1 under both approaches, the first approach (WRONG APPROACH) is wrong.
A:
I think this is the best and easiest way to do it:
from time import monotonic
start_time = monotonic()
# something
print(f"Run time {monotonic() - start_time} seconds")
Or with a decorator:
from time import monotonic
def record_time(function):
def wrap(*args, **kwargs):
start_time = monotonic()
function_return = function(*args, **kwargs)
print(f"Run time {monotonic() - start_time} seconds")
return function_return
return wrap
@record_time
def your_function():
# something
|
How do I get time of a Python program's execution?
|
I have a command line program in Python that takes a while to finish. I want to know the exact time it takes to finish running.
I've looked at the timeit module, but it seems it's only for small snippets of code. I want to time the whole program.
|
[
"The simplest way in Python:\nimport time\nstart_time = time.time()\nmain()\nprint(\"--- %s seconds ---\" % (time.time() - start_time))\n\nThis assumes that your program takes at least a tenth of second to run.\nPrints:\n--- 0.764891862869 seconds ---\n\n",
"In Linux or Unix:\n$ time python yourprogram.py\n\nIn Windows, see this StackOverflow question: How do I measure execution time of a command on the Windows command line?\nFor more verbose output, \n$ time -v python yourprogram.py\n Command being timed: \"python3 yourprogram.py\"\n User time (seconds): 0.08\n System time (seconds): 0.02\n Percent of CPU this job got: 98%\n Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.10\n Average shared text size (kbytes): 0\n Average unshared data size (kbytes): 0\n Average stack size (kbytes): 0\n Average total size (kbytes): 0\n Maximum resident set size (kbytes): 9480\n Average resident set size (kbytes): 0\n Major (requiring I/O) page faults: 0\n Minor (reclaiming a frame) page faults: 1114\n Voluntary context switches: 0\n Involuntary context switches: 22\n Swaps: 0\n File system inputs: 0\n File system outputs: 0\n Socket messages sent: 0\n Socket messages received: 0\n Signals delivered: 0\n Page size (bytes): 4096\n Exit status: 0\n\n",
"I put this timing.py module into my own site-packages directory, and just insert import timing at the top of my module:\nimport atexit\nfrom time import clock\n\ndef secondsToStr(t):\n return \"%d:%02d:%02d.%03d\" % \\\n reduce(lambda ll,b : divmod(ll[0],b) + ll[1:],\n [(t*1000,),1000,60,60])\n\nline = \"=\"*40\ndef log(s, elapsed=None):\n print line\n print secondsToStr(clock()), '-', s\n if elapsed:\n print \"Elapsed time:\", elapsed\n print line\n print\n\ndef endlog():\n end = clock()\n elapsed = end-start\n log(\"End Program\", secondsToStr(elapsed))\n\ndef now():\n return secondsToStr(clock())\n\nstart = clock()\natexit.register(endlog)\nlog(\"Start Program\")\n\nI can also call timing.log from within my program if there are significant stages within the program I want to show. But just including import timing will print the start and end times, and overall elapsed time. (Forgive my obscure secondsToStr function, it just formats a floating point number of seconds to hh:mm:ss.sss form.)\nNote: A Python 3 version of the above code can be found here or here.\n",
"I like the output the datetime module provides, where time delta objects show days, hours, minutes, etc. as necessary in a human-readable way.\nFor example:\nfrom datetime import datetime\nstart_time = datetime.now()\n# do your work here\nend_time = datetime.now()\nprint('Duration: {}'.format(end_time - start_time))\n\nSample output e.g.\nDuration: 0:00:08.309267\n\nor\nDuration: 1 day, 1:51:24.269711\n\nAs J.F. Sebastian mentioned, this approach might encounter some tricky cases with local time, so it's safer to use:\nimport time\nfrom datetime import timedelta\nstart_time = time.monotonic()\nend_time = time.monotonic()\nprint(timedelta(seconds=end_time - start_time))\n\n",
"import time\n\nstart_time = time.clock()\nmain()\nprint(time.clock() - start_time, \"seconds\")\n\ntime.clock() returns the processor time, which allows us to calculate only the time used by this process (on Unix anyway). The documentation says \"in any case, this is the function to use for benchmarking Python or timing algorithms\"\n",
"I really like Paul McGuire's answer, but I use Python 3. So for those who are interested: here's a modification of his answer that works with Python 3 on *nix (I imagine, under Windows, that clock() should be used instead of time()):\n#python3\nimport atexit\nfrom time import time, strftime, localtime\nfrom datetime import timedelta\n\ndef secondsToStr(elapsed=None):\n if elapsed is None:\n return strftime(\"%Y-%m-%d %H:%M:%S\", localtime())\n else:\n return str(timedelta(seconds=elapsed))\n\ndef log(s, elapsed=None):\n line = \"=\"*40\n print(line)\n print(secondsToStr(), '-', s)\n if elapsed:\n print(\"Elapsed time:\", elapsed)\n print(line)\n print()\n\ndef endlog():\n end = time()\n elapsed = end-start\n log(\"End Program\", secondsToStr(elapsed))\n\nstart = time()\natexit.register(endlog)\nlog(\"Start Program\")\n\nIf you find this useful, you should still up-vote his answer instead of this one, as he did most of the work ;).\n",
"You can use the Python profiler cProfile to measure CPU time and additionally how much time is spent inside each function and how many times each function is called. This is very useful if you want to improve performance of your script without knowing where to start. This answer to another Stack Overflow question is pretty good. It's always good to have a look in the documentation too.\nHere's an example how to profile a script using cProfile from a command line:\n$ python -m cProfile euler048.py\n\n1007 function calls in 0.061 CPU seconds\n\nOrdered by: standard name\nncalls tottime percall cumtime percall filename:lineno(function)\n 1 0.000 0.000 0.061 0.061 <string>:1(<module>)\n 1000 0.051 0.000 0.051 0.000 euler048.py:2(<lambda>)\n 1 0.005 0.005 0.061 0.061 euler048.py:2(<module>)\n 1 0.000 0.000 0.061 0.061 {execfile}\n 1 0.002 0.002 0.053 0.053 {map}\n 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler objects}\n 1 0.000 0.000 0.000 0.000 {range}\n 1 0.003 0.003 0.003 0.003 {sum}\n\n",
"Just use the timeit module. It works with both Python 2 and Python 3.\nimport timeit\n\nstart = timeit.default_timer()\n\n# All the program statements\nstop = timeit.default_timer()\nexecution_time = stop - start\n\nprint(\"Program Executed in \"+str(execution_time)) # It returns time in seconds\n\nIt returns in seconds and you can have your execution time. It is simple, but you should write these in thew main function which starts program execution. If you want to get the execution time even when you get an error then take your parameter \"Start\" to it and calculate there like:\ndef sample_function(start,**kwargs):\n try:\n # Your statements\n except:\n # except statements run when your statements raise an exception\n stop = timeit.default_timer()\n execution_time = stop - start\n print(\"Program executed in \" + str(execution_time))\n\n",
"time.clock()\n\nDeprecated since version 3.3: The behavior of this function depends\n on the platform: use perf_counter() or process_time() instead,\n depending on your requirements, to have a well-defined behavior.\n\ntime.perf_counter()\n\nReturn the value (in fractional seconds) of a performance counter,\n i.e. a clock with the highest available resolution to measure a short\n duration. It does include time elapsed during sleep and is\n system-wide.\n\ntime.process_time()\n\nReturn the value (in fractional seconds) of the sum of the system and\n user CPU time of the current process. It does not include time elapsed\n during sleep.\n\nstart = time.process_time()\n... do something\nelapsed = (time.process_time() - start)\n\n",
"time.clock has been deprecated in Python 3.3 and will be removed from Python 3.8: use time.perf_counter or time.process_time instead\nimport time\nstart_time = time.perf_counter ()\nfor x in range(1, 100):\n print(x)\nend_time = time.perf_counter ()\nprint(end_time - start_time, \"seconds\")\n\n",
"For the data folks using Jupyter Notebook\nIn a cell, you can use Jupyter's %%time magic command to measure the execution time:\n%%time\n[ x**2 for x in range(10000)]\n\nOutput\nCPU times: user 4.54 ms, sys: 0 ns, total: 4.54 ms\nWall time: 4.12 ms\n\nThis will only capture the execution time of a particular cell. If you'd like to capture the execution time of the whole notebook (i.e. program), you can create a new notebook in the same directory and in the new notebook execute all cells:\nSuppose the notebook above is called example_notebook.ipynb. In a new notebook within the same directory:\n# Convert your notebook to a .py script:\n!jupyter nbconvert --to script example_notebook.ipynb\n\n# Run the example_notebook with -t flag for time\n%run -t example_notebook\n\nOutput\nIPython CPU timings (estimated):\n User : 0.00 s.\n System : 0.00 s.\nWall time: 0.00 s.\n\n",
"The following snippet prints elapsed time in a nice human readable <HH:MM:SS> format.\nimport time\nfrom datetime import timedelta\n\nstart_time = time.time()\n\n#\n# Perform lots of computations.\n#\n\nelapsed_time_secs = time.time() - start_time\n\nmsg = \"Execution took: %s secs (Wall clock time)\" % timedelta(seconds=round(elapsed_time_secs))\n\nprint(msg) \n\n",
"Similar to the response from @rogeriopvl I added a slight modification to convert to hour minute seconds using the same library for long running jobs.\nimport time\nstart_time = time.time()\nmain()\nseconds = time.time() - start_time\nprint('Time Taken:', time.strftime(\"%H:%M:%S\",time.gmtime(seconds)))\n\nSample Output\nTime Taken: 00:00:08\n\n",
"For functions, I suggest using this simple decorator I created.\ndef timeit(method):\n def timed(*args, **kwargs):\n ts = time.time()\n result = method(*args, **kwargs)\n te = time.time()\n if 'log_time' in kwargs:\n name = kwargs.get('log_name', method.__name__.upper())\n kwargs['log_time'][name] = int((te - ts) * 1000)\n else:\n print('%r %2.22f ms' % (method.__name__, (te - ts) * 1000))\n return result\n return timed\n\n@timeit\ndef foo():\n do_some_work()\n\n# foo()\n# 'foo' 0.000953 ms\n\n",
"from time import time\nstart_time = time()\n...\nend_time = time()\ntime_taken = end_time - start_time # time_taken is in seconds\nhours, rest = divmod(time_taken,3600)\nminutes, seconds = divmod(rest, 60)\n\n",
"I was having the same problem in many places, so I created a convenience package horology. You can install it with pip install horology and then do it in the elegant way:\nfrom horology import Timing\n\nwith Timing(name='Important calculations: '):\n prepare()\n do_your_stuff()\n finish_sth()\n\nwill output:\nImportant calculations: 12.43 ms\n\nOr even simpler (if you have one function):\nfrom horology import timed\n\n@timed\ndef main():\n ...\n\nwill output:\nmain: 7.12 h\n\nIt takes care of units and rounding. It works with python 3.6 or newer.\n",
"\nI've looked at the timeit module, but it seems it's only for small snippets of code. I want to time the whole program.\n\n$ python -mtimeit -n1 -r1 -t -s \"from your_module import main\" \"main()\"\n\nIt runs your_module.main() function one time and print the elapsed time using time.time() function as a timer.\nTo emulate /usr/bin/time in Python see Python subprocess with /usr/bin/time: how to capture timing info but ignore all other output?.\nTo measure CPU time (e.g., don't include time during time.sleep()) for each function, you could use profile module (cProfile on Python 2):\n$ python3 -mprofile your_module.py\n\nYou could pass -p to timeit command above if you want to use the same timer as profile module uses.\nSee How can you profile a Python script?\n",
"I liked Paul McGuire's answer too and came up with a context manager form which suited my needs more.\nimport datetime as dt\nimport timeit\n\nclass TimingManager(object):\n \"\"\"Context Manager used with the statement 'with' to time some execution.\n\n Example:\n\n with TimingManager() as t:\n # Code to time\n \"\"\"\n\n clock = timeit.default_timer\n\n def __enter__(self):\n \"\"\"\n \"\"\"\n self.start = self.clock()\n self.log('\\n=> Start Timing: {}')\n\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n \"\"\"\n \"\"\"\n self.endlog()\n\n return False\n\n def log(self, s, elapsed=None):\n \"\"\"Log current time and elapsed time if present.\n :param s: Text to display, use '{}' to format the text with\n the current time.\n :param elapsed: Elapsed time to display. Dafault: None, no display.\n \"\"\"\n print s.format(self._secondsToStr(self.clock()))\n\n if(elapsed is not None):\n print 'Elapsed time: {}\\n'.format(elapsed)\n\n def endlog(self):\n \"\"\"Log time for the end of execution with elapsed time.\n \"\"\"\n self.log('=> End Timing: {}', self.now())\n\n def now(self):\n \"\"\"Return current elapsed time as hh:mm:ss string.\n :return: String.\n \"\"\"\n return str(dt.timedelta(seconds = self.clock() - self.start))\n\n def _secondsToStr(self, sec):\n \"\"\"Convert timestamp to h:mm:ss string.\n :param sec: Timestamp.\n \"\"\"\n return str(dt.datetime.fromtimestamp(sec))\n\n",
"In IPython, \"timeit\" any script: \ndef foo():\n %run bar.py\ntimeit foo()\n\n",
"I used a very simple function to time a part of code execution:\nimport time\ndef timing():\n start_time = time.time()\n return lambda x: print(\"[{:.2f}s] {}\".format(time.time() - start_time, x))\n\nAnd to use it, just call it before the code to measure to retrieve function timing, and then call the function after the code with comments. The time will appear in front of the comments. For example:\nt = timing()\ntrain = pd.read_csv('train.csv',\n dtype={\n 'id': str,\n 'vendor_id': str,\n 'pickup_datetime': str,\n 'dropoff_datetime': str,\n 'passenger_count': int,\n 'pickup_longitude': np.float64,\n 'pickup_latitude': np.float64,\n 'dropoff_longitude': np.float64,\n 'dropoff_latitude': np.float64,\n 'store_and_fwd_flag': str,\n 'trip_duration': int,\n },\n parse_dates = ['pickup_datetime', 'dropoff_datetime'],\n )\nt(\"Loaded {} rows data from 'train'\".format(len(train)))\n\nThen the output will look like this:\n[9.35s] Loaded 1458644 rows data from 'train'\n\n",
"Use line_profiler.\nline_profiler will profile the time individual lines of code take to execute. The profiler is implemented in C via Cython in order to reduce the overhead of profiling.\nfrom line_profiler import LineProfiler\nimport random\n\ndef do_stuff(numbers):\n s = sum(numbers)\n l = [numbers[i]/43 for i in range(len(numbers))]\n m = ['hello'+str(numbers[i]) for i in range(len(numbers))]\n\nnumbers = [random.randint(1,100) for i in range(1000)]\nlp = LineProfiler()\nlp_wrapper = lp(do_stuff)\nlp_wrapper(numbers)\nlp.print_stats()\n\nThe results will be:\nTimer unit: 1e-06 s\n\nTotal time: 0.000649 s\nFile: <ipython-input-2-2e060b054fea>\nFunction: do_stuff at line 4\n\nLine # Hits Time Per Hit % Time Line Contents\n==============================================================\n 4 def do_stuff(numbers):\n 5 1 10 10.0 1.5 s = sum(numbers)\n 6 1 186 186.0 28.7 l = [numbers[i]/43 for i in range(len(numbers))]\n 7 1 453 453.0 69.8 m = ['hello'+str(numbers[i]) for i in range(len(numbers))]\n\n",
"I tried and found time difference using the following scripts.\nimport time\n\nstart_time = time.perf_counter()\n[main code here]\nprint (time.perf_counter() - start_time, \"seconds\")\n\n",
"Timeit is a class in Python used to calculate the execution time of small blocks of code.\nDefault_timer is a method in this class which is used to measure the wall clock timing, not CPU execution time. Thus other process execution might interfere with this. Thus it is useful for small blocks of code.\nA sample of the code is as follows:\nfrom timeit import default_timer as timer\n\nstart= timer()\n\n# Some logic\n\nend = timer()\n\nprint(\"Time taken:\", end-start)\n\n",
"You do this simply in Python. There is no need to make it complicated.\nimport time\n\nstart = time.localtime()\nend = time.localtime()\n\"\"\"Total execution time in minutes$ \"\"\"\nprint(end.tm_min - start.tm_min)\n\"\"\"Total execution time in seconds$ \"\"\"\nprint(end.tm_sec - start.tm_sec)\n\n",
"Later answer, but I use the built-in timeit:\nimport timeit\ncode_to_test = \"\"\"\na = range(100000)\nb = []\nfor i in a:\n b.append(i*2)\n\"\"\"\nelapsed_time = timeit.timeit(code_to_test, number=500)\nprint(elapsed_time)\n# 10.159821493085474\n\n\n\nWrap all your code, including any imports you may have, inside code_to_test.\nnumber argument specifies the amount of times the code should repeat.\nDemo\n\n",
"First, install humanfriendly package by opening Command Prompt (CMD) as administrator and type there -\npip install humanfriendly\nCode:\nfrom humanfriendly import format_timespan\nimport time\nbegin_time = time.time()\n# Put your code here\nend_time = time.time() - begin_time\nprint(\"Total execution time: \", format_timespan(end_time))\n\nOutput:\n\n",
"There is a timeit module which can be used to time the execution times of Python code.\nIt has detailed documentation and examples in Python documentation, 26.6. timeit — Measure execution time of small code snippets.\n",
"Following this answer created a simple but convenient instrument.\nimport time\nfrom datetime import timedelta\n\ndef start_time_measure(message=None):\n if message:\n print(message)\n return time.monotonic()\n\ndef end_time_measure(start_time, print_prefix=None):\n end_time = time.monotonic()\n if print_prefix:\n print(print_prefix + str(timedelta(seconds=end_time - start_time)))\n return end_time\n\nUsage:\ntotal_start_time = start_time_measure() \nstart_time = start_time_measure('Doing something...')\n# Do something\nend_time_measure(start_time, 'Done in: ')\nstart_time = start_time_measure('Doing something else...')\n# Do something else\nend_time_measure(start_time, 'Done in: ')\nend_time_measure(total_start_time, 'Total time: ')\n\nThe output:\nDoing something...\nDone in: 0:00:01.218000\nDoing something else...\nDone in: 0:00:01.313000\nTotal time: 0:00:02.672000\n\n",
"This is Paul McGuire's answer that works for me. Just in case someone was having trouble running that one.\nimport atexit\nfrom time import clock\n\ndef reduce(function, iterable, initializer=None):\n it = iter(iterable)\n if initializer is None:\n value = next(it)\n else:\n value = initializer\n for element in it:\n value = function(value, element)\n return value\n\ndef secondsToStr(t):\n return \"%d:%02d:%02d.%03d\" % \\\n reduce(lambda ll,b : divmod(ll[0],b) + ll[1:],\n [(t*1000,),1000,60,60])\n\nline = \"=\"*40\ndef log(s, elapsed=None):\n print (line)\n print (secondsToStr(clock()), '-', s)\n if elapsed:\n print (\"Elapsed time:\", elapsed)\n print (line)\n\ndef endlog():\n end = clock()\n elapsed = end-start\n log(\"End Program\", secondsToStr(elapsed))\n\ndef now():\n return secondsToStr(clock())\n\ndef main():\n start = clock()\n atexit.register(endlog)\n log(\"Start Program\")\n\nCall timing.main() from your program after importing the file.\n",
"The time of a Python program's execution measure could be inconsistent depending on:\n\nSame program can be evaluated using different algorithms\nRunning time varies between algorithms\nRunning time varies between implementations\nRunning time varies between computers\nRunning time is not predictable based on small inputs\n\nThis is because the most effective way is using the \"Order of Growth\" and learn the Big \"O\" notation to do it properly.\nAnyway, you can try to evaluate the performance of any Python program in specific machine counting steps per second using this simple algorithm:\nadapt this to the program you want to evaluate\nimport time\n\nnow = time.time()\nfuture = now + 10\nstep = 4 # Why 4 steps? Because until here already four operations executed\nwhile time.time() < future:\n step += 3 # Why 3 again? Because a while loop executes one comparison and one plus equal statement\nstep += 4 # Why 3 more? Because one comparison starting while when time is over plus the final assignment of step + 1 and print statement\nprint(str(int(step / 10)) + \" steps per second\")\n\n",
"This is the simplest way to get the elapsed time for the program:\nWrite the following code at the end of your program.\nimport time\nprint(time.clock())\n\n",
"I use tic and toc from ttictoc.\npip install ttictoc\n\nThen you can use in your script:\nfrom ttictoc import tic,toc\ntic()\n\n# foo()\n\nprint(toc())\n\n",
"To use metakermit's updated answer for Python 2.7, you will require the monotonic package.\nThe code would then be as follows:\nfrom datetime import timedelta\nfrom monotonic import monotonic\n\nstart_time = monotonic()\nend_time = monotonic()\nprint(timedelta(seconds=end_time - start_time))\n\n",
"If you want to measure time in microseconds, then you can use the following version, based completely on the answers of Paul McGuire and Nicojo - it's Python 3 code. I've also added some colour to it:\nimport atexit\nfrom time import time\nfrom datetime import timedelta, datetime\n\n\ndef seconds_to_str(elapsed=None):\n if elapsed is None:\n return datetime.now().strftime(\"%Y-%m-%d %H:%M:%S.%f\")\n else:\n return str(timedelta(seconds=elapsed))\n\n\ndef log(txt, elapsed=None):\n colour_cyan = '\\033[36m'\n colour_reset = '\\033[0;0;39m'\n colour_red = '\\033[31m'\n print('\\n ' + colour_cyan + ' [TIMING]> [' + seconds_to_str() + '] ----> ' + txt + '\\n' + colour_reset)\n if elapsed:\n print(\"\\n \" + colour_red + \" [TIMING]> Elapsed time ==> \" + elapsed + \"\\n\" + colour_reset)\n\n\ndef end_log():\n end = time()\n elapsed = end-start\n log(\"End Program\", seconds_to_str(elapsed))\n\n\nstart = time()\natexit.register(end_log)\nlog(\"Start Program\")\n\nlog() => function that prints out the timing information.\ntxt ==> first argument to log, and its string to mark timing.\natexit ==> Python module to register functions that you can call when the program exits.\n",
"By default, Linux or Unix system (tested on macOS) comes with the time command on the terminal that you can use to run the Python script and get the real, user, sys time information for the execution of the running script.\nHowever, the default output isn't very clear (at least for me), and the default time command doesn't even take any options as arguments to format the output. That's because there are two versions of time - one is built into bash that provides just the minimal version and another one is located on /usr/bin/time.\nThe /usr/bin/time command accepts additional arguments like -al, -h, -p, and -o. My favorite is -p which shows the output in a new line like the following:\nreal 2.18\nuser 17.92\nsys 2.71\n\n",
"The problem I encountered while finding the running time of two different methods for finding all the prime numbers <= a number. when a user input was taken in the program.\nWRONG APPROACH\n#Sample input for a number 20 \n#Sample output [2, 3, 5, 7, 11, 13, 17, 19]\n#Total Running time = 0.634 seconds\n\nimport time\n\nstart_time = time.time()\n\n#Method 1 to find all the prime numbers <= a Number\n\n# Function to check whether a number is prime or not.\ndef prime_no(num):\nif num<2:\n return False\nelse:\n for i in range(2, num//2+1):\n if num % i == 0:\n return False\n return True\n\n#To print all the values <= n\ndef Prime_under_num(n):\n a = [2]\n if n <2:\n print(\"None\")\n elif n==2:\n print(2)\n else:\n\"Neglecting all even numbers as even numbers won't be prime in order to reduce the time complexity.\"\n for i in range(3, n+1, 2): \n if prime_no(i):\n a.append(i)\n print(a)\n\n\n\"When Method 1 is only used outputs of running time for different inputs\"\n#Total Running time = 2.73761 seconds #n = 100\n#Total Running time = 3.14781 seconds #n = 1000\n#Total Running time = 8.69278 seconds #n = 10000\n#Total Running time = 18.73701 seconds #n = 100000\n\n#Method 2 to find all the prime numbers <= a Number\n\ndef Prime_under_num(n):\n a = [2]\n if n <2:\n print(\"None\")\n elif n==2:\n print(2)\n else:\n for i in range(3, n+1, 2): \n if n%i ==0:\n pass\n else:\n a.append(i)\n print(a)\n\n\"When Method 2 is only used outputs of running time for different inputs\"\n# Total Running time = 2.75935 seconds #n = 100\n# Total Running time = 2.86332 seconds #n = 1000\n# Total Running time = 4.59884 seconds #n = 10000\n# Total Running time = 8.55057 seconds #n = 100000\n\nif __name__ == \"__main__\" :\n n = int(input())\n Prime_under_num(n)\n print(\"Total Running time = {:.5f} seconds\".format(time.time() - start_time))\n\nThe different running time obtained for all the above cases are wrong. For problems where we are taking an input, we have to start the time only after taking the input. Here the time taken by the user to type the input is also calculated along with the running time.\nCORRECT APPROACH\nWe have to remove the start_time = time.time() from the begining and add it in the main block.\nif __name__ == \"__main__\" :\n n = int(input())\n start_time = time.time()\n Prime_under_num(n)\n print(\"Total Running time = {:.3f} seconds\".format(time.time() - start_time))\n\nThus the output for the each of the two methods when used alone will be as follows:-\n# Method 1\n\n# Total Running time = 0.00159 seconds #n = 100\n# Total Running time = 0.00506 seconds #n = 1000\n# Total Running time = 0.22987 seconds #n = 10000\n# Total Running time = 18.55819 seconds #n = 100000\n\n# Method 2\n\n# Total Running time = 0.00011 seconds #n = 100\n# Total Running time = 0.00118 seconds #n = 1000\n# Total Running time = 0.00302 seconds #n = 10000\n# Total Running time = 0.01450 seconds #n = 100000\n\nNow we can see there is a significant difference in total running time when compared with WRONG APPROACH. Even though the method 2 is performing better than method 1 in the both approach first approach(WRONG APPROACH) is wrong.\n",
"I think this is the best and easiest way to do it:\nfrom time import monotonic\n\nstart_time = monotonic()\n# something\nprint(f\"Run time {monotonic() - start_time} seconds\")\n\nOr with a decorator:\nfrom time import monotonic\n \ndef record_time(function):\n def wrap(*args, **kwargs):\n start_time = monotonic()\n function_return = function(*args, **kwargs)\n print(f\"Run time {monotonic() - start_time} seconds\")\n return function_return\n return wrap\n\n@record_time\ndef your_function():\n # something\n\n"
] |
[
2728,
254,
248,
131,
112,
82,
76,
51,
41,
29,
26,
20,
17,
16,
12,
11,
10,
9,
9,
9,
8,
8,
6,
6,
6,
6,
5,
4,
3,
3,
3,
3,
1,
1,
0,
0,
0
] |
[
"I define the following Python decorator:\ndef profile(fct):\n def wrapper(*args, **kw):\n start_time = time.time()\n ret = fct(*args, **kw)\n print(\"{} {} {} return {} in {} seconds\".format(args[0].__class__.__name__,\n args[0].__class__.__module__,\n fct.__name__,\n ret,\n time.time() - start_time))\n return ret\n return wrapper\n\nand use it on functions or class/methods:\n@profile\ndef main()\n ...\n\n"
] |
[
-2
] |
[
"execution_time",
"python",
"time"
] |
stackoverflow_0001557571_execution_time_python_time.txt
|
Q:
Is the python XOR bitwise operator not just a regular operator?
Other questions on this site suggest that python has no XOR operator, only a bitwise operator ^. But when I try this operator on booleans the result is also a boolean (Python 3.9.12)
True ^ False
>> True
If it was a bitwise operator I would expect it to first cast the inputs to integers, resulting in an integer as output. Is bitwise XOR still an appropriate name for ^? And why doesn't python implement an XOR keyword to make it more python-esque?
A:
Boolean values are the two constant objects False and True. They are used to represent truth values (although other values can also be considered false or true). In numeric contexts (for example when used as the argument to an arithmetic operator), they behave like the integers 0 and 1, respectively. The built-in function bool() can be used to convert any value to a Boolean, if the value can be interpreted as a truth value (see section Truth Value Testing above).
it is as if you are writing:
1^0
#output
1
https://docs.python.org/3/library/stdtypes.html#boolean-values
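A small illustrative example (not from the linked docs): bool overrides the bitwise operators to return a bool when both operands are bools, which is why you see True rather than 1; mixing a bool with a plain int falls back to the integer result.
print(True ^ False)  # True  (bool ^ bool stays bool)
print(True ^ 0)      # 1     (bool ^ int gives an int)
print(type(True ^ False), type(True ^ 0))  # <class 'bool'> <class 'int'>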
|
Is the python XOR bitwise operator not just a regular operator?
|
Other questions on this site suggest that python has no XOR operator, only a bitwise operator ^. But when I try this operator on booleans the result is also a boolean (Python 3.9.12)
True ^ False
>> True
If it was a bitwise operator I would expect it to first cast the inputs to integers, resulting in an integer as output. Is bitwise XOR still an appropriate name for ^? And why doesn't python implement an XOR keyword to make it more python-esque?
|
[
"Boolean values are the two constant objects False and True. They are used to represent truth values (although other values can also be considered false or true). In numeric contexts (for example when used as the argument to an arithmetic operator), they behave like the integers 0 and 1, respectively. The built-in function bool() can be used to convert any value to a Boolean, if the value can be interpreted as a truth value (see section Truth Value Testing above).\nit is as if you are writing:\n1^0\n#output\n1\n\nhttps://docs.python.org/3/library/stdtypes.html#boolean-values\n"
] |
[
0
] |
[] |
[] |
[
"bitwise_xor",
"operators",
"python"
] |
stackoverflow_0074461775_bitwise_xor_operators_python.txt
|
Q:
Unable to successfully patch functions of Azure ContainerClient
I have been trying to patch the list_blobs() function of ContainerClient, but have not been able to do so successfully: this code outputs a MagicMock - the function isn't patched as I would expect it to be (I am trying to patch it to return the list ['Blob1', 'Blob2']).
#################Script File
import sys
from datetime import datetime, timedelta
import pyspark
import pytz
import yaml
# from azure.storage.blob import BlobServiceClient, ContainerClient
from pyspark.dbutils import DBUtils as dbutils
import azure.storage.blob
# Open Config
def main():
spark_context = pyspark.SparkContext.getOrCreate()
spark_context.addFile(sys.argv[1])
stream = None
stream = open(sys.argv[1], "r")
config = yaml.load(stream, Loader=yaml.FullLoader)
stream.close()
account_key = dbutils.secrets.get(scope=config["Secrets"]["Scope"], key=config["Secrets"]["Key Name"])
target_container = config["Storage Configuration"]["Container"]
target_account = config["Storage Configuration"]["Account"]
days_history_to_keep = config["Storage Configuration"]["Days History To Keep"]
connection_string = (
"DefaultEndpointsProtocol=https;AccountName="
+ target_account
+ ";AccountKey="
+ account_key
+ ";EndpointSuffix=core.windows.net"
)
blob_service_client: azure.storage.blob.BlobServiceClient = (
azure.storage.blob.BlobServiceClient.from_connection_string(connection_string)
)
container_client: azure.storage.blob.ContainerClient = (
blob_service_client.get_container_client(target_container)
)
blobs = container_client.list_blobs()
print(blobs)
print(blobs)
utc = pytz.UTC
delete_before_date = utc.localize(
datetime.today() - timedelta(days=days_history_to_keep)
)
for blob in blobs:
if blob.creation_time < delete_before_date:
print("Deleting Blob: " + blob.name)
container_client.delete_blob(blob, delete_snapshots="include")
if __name__ == "__main__":
main()
#################Test File
import unittest
from unittest import mock
import DeleteOldBlobs
class DeleteBlobsTest(unittest.TestCase):
def setUp(self):
pass
@mock.patch("DeleteOldBlobs.azure.storage.blob.ContainerClient")
@mock.patch("DeleteOldBlobs.azure.storage.blob.BlobServiceClient")
@mock.patch("DeleteOldBlobs.dbutils")
@mock.patch("DeleteOldBlobs.sys")
@mock.patch('DeleteOldBlobs.pyspark')
def test_main(self, mock_pyspark, mock_sys, mock_dbutils, mock_blobserviceclient, mock_containerclient):
# mock setup
config_file = "Delete_Old_Blobs_UnitTest.yml"
mock_sys.argv = ["unused_arg", config_file]
mock_dbutils.secrets.get.return_value = "A Secret"
mock_containerclient.list_blobs.return_value = ["ablob1", "ablob2"]
# execute test
DeleteOldBlobs.main()
# TODO assert actions taken
# mock_sys.argv.__get__.assert_called_with()
# dbutils.secrets.get(scope=config['Secrets']['Scope'], key=config['Secrets']['Key Name'])
if __name__ == "__main__":
unittest.main()
Output:
<MagicMock name='BlobServiceClient.from_connection_string().get_container_client().list_blobs()' id='1143355577232'>
What am I doing incorrectly here?
A:
I'm not able to execute your code at the moment, but I have tried to simulate it. To do this I have created the following 3 files in the path: /<path-to>/pkg/sub_pkg1 (where pkg and sub_pkg1 are packages).
File ContainerClient.py
def list_blobs(self):
return "blob1"
File DeleteOldBlobs.py
from pkg.sub_pkg1 import ContainerClient
# Open Config
def main():
blobs = ContainerClient.list_blobs()
print(blobs)
print(blobs)
File DeleteBlobsTest.py
import unittest
from unittest import mock
from pkg.sub_pkg1 import DeleteOldBlobs
class DeleteBlobsTest(unittest.TestCase):
def setUp(self):
pass
def test_main(self):
mock_containerclient = mock.MagicMock()
with mock.patch("DeleteOldBlobs.ContainerClient.list_blobs", mock_containerclient.list_blobs):
mock_containerclient.list_blobs.return_value = ["ablob1", "ablob2"]
DeleteOldBlobs.main()
if __name__ == '__main__':
unittest.main()
If you execute the test code you obtain the output:
['ablob1', 'ablob2']
['ablob1', 'ablob2']
This output means that the function list_blobs() is mocked by mock_containerclient.list_blobs.
I don't know if the content of this post will be useful for you, but I'm not able to simulate your code more closely at the moment.
I hope my code can inspire you to find your real solution.
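For the original test above, a likely minimal fix (a sketch, assuming the decorators and mock names from the question) is to configure the chained calls on the mocked BlobServiceClient, since the production code reaches list_blobs() through from_connection_string(...).get_container_client(...) rather than through the ContainerClient class directly:
# inside test_main, instead of mock_containerclient.list_blobs.return_value = ...
mock_blobserviceclient.from_connection_string.return_value \
    .get_container_client.return_value \
    .list_blobs.return_value = ["ablob1", "ablob2"]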
|
Unable to successfully patch functions of Azure ContainerClient
|
I have been trying to patch the list_blobs() function of ContainerClient, but have not been able to do so successfully: this code outputs a MagicMock - the function isn't patched as I would expect it to be (I am trying to patch it to return the list ['Blob1', 'Blob2']).
#################Script File
import sys
from datetime import datetime, timedelta
import pyspark
import pytz
import yaml
# from azure.storage.blob import BlobServiceClient, ContainerClient
from pyspark.dbutils import DBUtils as dbutils
import azure.storage.blob
# Open Config
def main():
spark_context = pyspark.SparkContext.getOrCreate()
spark_context.addFile(sys.argv[1])
stream = None
stream = open(sys.argv[1], "r")
config = yaml.load(stream, Loader=yaml.FullLoader)
stream.close()
account_key = dbutils.secrets.get(scope=config["Secrets"]["Scope"], key=config["Secrets"]["Key Name"])
target_container = config["Storage Configuration"]["Container"]
target_account = config["Storage Configuration"]["Account"]
days_history_to_keep = config["Storage Configuration"]["Days History To Keep"]
connection_string = (
"DefaultEndpointsProtocol=https;AccountName="
+ target_account
+ ";AccountKey="
+ account_key
+ ";EndpointSuffix=core.windows.net"
)
blob_service_client: azure.storage.blob.BlobServiceClient = (
azure.storage.blob.BlobServiceClient.from_connection_string(connection_string)
)
container_client: azure.storage.blob.ContainerClient = (
blob_service_client.get_container_client(target_container)
)
blobs = container_client.list_blobs()
print(blobs)
print(blobs)
utc = pytz.UTC
delete_before_date = utc.localize(
datetime.today() - timedelta(days=days_history_to_keep)
)
for blob in blobs:
if blob.creation_time < delete_before_date:
print("Deleting Blob: " + blob.name)
container_client.delete_blob(blob, delete_snapshots="include")
if __name__ == "__main__":
main()
#################Test File
import unittest
from unittest import mock
import DeleteOldBlobs
class DeleteBlobsTest(unittest.TestCase):
def setUp(self):
pass
@mock.patch("DeleteOldBlobs.azure.storage.blob.ContainerClient")
@mock.patch("DeleteOldBlobs.azure.storage.blob.BlobServiceClient")
@mock.patch("DeleteOldBlobs.dbutils")
@mock.patch("DeleteOldBlobs.sys")
@mock.patch('DeleteOldBlobs.pyspark')
def test_main(self, mock_pyspark, mock_sys, mock_dbutils, mock_blobserviceclient, mock_containerclient):
# mock setup
config_file = "Delete_Old_Blobs_UnitTest.yml"
mock_sys.argv = ["unused_arg", config_file]
mock_dbutils.secrets.get.return_value = "A Secret"
mock_containerclient.list_blobs.return_value = ["ablob1", "ablob2"]
# execute test
DeleteOldBlobs.main()
# TODO assert actions taken
# mock_sys.argv.__get__.assert_called_with()
# dbutils.secrets.get(scope=config['Secrets']['Scope'], key=config['Secrets']['Key Name'])
if __name__ == "__main__":
unittest.main()
Output:
<MagicMock name='BlobServiceClient.from_connection_string().get_container_client().list_blobs()' id='1143355577232'>
What am I doing incorrectly here?
|
[
"I'm not able to execute your code in this moment, but I have tried to simulate it. To do this I have created the following 3 files in the path: /<path-to>/pkg/sub_pkg1 (where pkg and sub_pkg1 are packages).\nFile ContainerClient.py\ndef list_blobs(self):\n return \"blob1\"\n\nFile DeleteOldBlobs.py\nfrom pkg.sub_pkg1 import ContainerClient\n\n# Open Config\ndef main():\n blobs = ContainerClient.list_blobs()\n print(blobs)\n print(blobs)\n\nFile DeleteBlobsTest.py\nimport unittest\nfrom unittest import mock\nfrom pkg.sub_pkg1 import DeleteOldBlobs\n\nclass DeleteBlobsTest(unittest.TestCase):\n def setUp(self):\n pass\n\n def test_main(self):\n mock_containerclient = mock.MagicMock()\n with mock.patch(\"DeleteOldBlobs.ContainerClient.list_blobs\", mock_containerclient.list_blobs):\n mock_containerclient.list_blobs.return_value = [\"ablob1\", \"ablob2\"]\n DeleteOldBlobs.main()\n\nif __name__ == '__main__':\n unittest.main()\n\n\nIf you execute the test code you obtain the output:\n['ablob1', 'ablob2']\n['ablob1', 'ablob2']\n\nThis output means that the function list_blobs() is mocked by mock_containerclient.list_blobs.\nI don't know if the content of this post can be useful for you, but I'm not able to simulate better your code in this moment.\nI hope you can inspire to my code to find your real solution.\n"
] |
[
0
] |
[] |
[] |
[
"azure",
"mocking",
"python",
"python_unittest"
] |
stackoverflow_0074447151_azure_mocking_python_python_unittest.txt
|
Q:
Break long lines of python code programmatically
What is a/the way to auto-format an existing (potentially large) Python codebase to conform to a given max line length?
Autoformatters like black, yapf and autopep8 do change too much as they also change other things.
A:
This seems like a thing that can be easily solved using an .editorconfig file. I don't know what IDE/Code editor you use, but from my experience, pyCharm supports it very well. The config should look something like this:
root = true
[*.py]
max_line_length = 88
For more info, check https://editorconfig.org/.
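If the goal is to rewrite an existing codebase and change only the line length, another option worth trying (a sketch, assuming autopep8's --select flag, which restricts it to the listed error codes) is to limit autopep8 to the line-too-long fix, E501:
autopep8 --in-place --recursive --select=E501 --max-line-length 88 .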
|
Break long lines of python code programmatically
|
What is a/the way to auto-format an existing (potentially large) Python codebase to conform to a given max line length?
Autoformatters like black, yapf and autopep8 do change too much as they also change other things.
|
[
"This seems like a thing that can be easily solved using an .editorconfig file. I don't know what IDE/Code editor you use, but from my experience, pyCharm supports it very well. The config should look something like this:\nroot = true\n\n[*.py]\nmax_line_length = 88\n\nFor more info, check https://editorconfig.org/.\n"
] |
[
0
] |
[] |
[] |
[
"autoformatting",
"python"
] |
stackoverflow_0074461544_autoformatting_python.txt
|
Q:
Variable input from lists for find_element selenium function
Hi StackOverflow gurus,
I am new to coding and Python but very enthusiastic about it. Your support and opinions will be a huge addition to my development.
I am trying to write Python code where, using Selenium's find_element(By.LINK_TEXT, ""), I need to identify company names and click on them. This action should be repeated for all the companies on the list (in total I have around 60 entities on the list, but for this example I am using only 3). For this I used a loop.
But as a result I am getting an error:
driver.find_element(By.LINK_TEXT,format(str(company))).click() #Select the entity. This input must later be variable. Items are found with link text
TypeError: 'str' object is not callable
These actions should be performed in Google Chrome browser.
This is what I have documented so far:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
from time import sleep  # needed for the sleep(1) call below
company = ['Apple Inc','Microsoft','Tesla']
url = "I did not include the link due to security reasons"
driver = webdriver.Chrome(r"C:\Users\Downloads\chromedriver_win32\chromedriver.exe")
driver.get(url)
drop = Select(driver.find_element(By.ID,'ctl00_Cont_uxProjectTTIDropDownList')) #select project from drop-down list
drop.select_by_visible_text ('2022 Q4 - Projects')
sleep(1)
for i in range (len(company)):
driver.find_element(By.LINK_TEXT,format(str(company))).click()
I am getting an error on this last line:
for i in range (len(company)):
driver.find_element(By.LINK_TEXT,format(str(company))).click()
If I manually include the value it works e.g.:
driver.find_element(By.LINK_TEXT,'Tesla').click()
Could you share your suggestions how to fix this?
A:
You're using the entire company list as your text. Use the index you created in the for loop to grab only one element in the list:
for i in range (len(company)):
driver.find_element(By.LINK_TEXT,company[i]).click()
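As a side note, iterating over the list directly is slightly more idiomatic than indexing when the index itself isn't needed:
for name in company:
    driver.find_element(By.LINK_TEXT, name).click()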
|
Variable input from lists for find_element selenium function
|
Hi StackOverflow gurus,
I am new to coding and Python but very enthusiastic about it. Your support and opinions will be a huge addition to my development.
I am trying to write Python code where, using Selenium's find_element(By.LINK_TEXT, ""), I need to identify company names and click on them. This action should be repeated for all the companies on the list (in total I have around 60 entities on the list, but for this example I am using only 3). For this I used a loop.
But as a result I am getting an error:
driver.find_element(By.LINK_TEXT,format(str(company))).click() #Select the entity. This input must later be variable. Items are found with link text
TypeError: 'str' object is not callable
These actions should be performed in Google Chrome browser.
This is what I have documented so far:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
from time import sleep  # needed for the sleep(1) call below
company = ['Apple Inc','Microsoft','Tesla']
url = "I did not include the link due to security reasons"
driver = webdriver.Chrome(r"C:\Users\Downloads\chromedriver_win32\chromedriver.exe")
driver.get(url)
drop = Select(driver.find_element(By.ID,'ctl00_Cont_uxProjectTTIDropDownList')) #select project from drop-down list
drop.select_by_visible_text ('2022 Q4 - Projects')
sleep(1)
for i in range (len(company)):
driver.find_element(By.LINK_TEXT,format(str(company))).click()
I am getting an error on this last line:
for i in range (len(company)):
driver.find_element(By.LINK_TEXT,format(str(company))).click()
If I manually include the value it works e.g.:
driver.find_element(By.LINK_TEXT,'Tesla').click()
Could you share your suggestions how to fix this?
|
[
"You're using the entire company list as your text. Use the index you created in the for loop to grab only one element in the list:\nfor i in range (len(company)):\n driver.find_element(By.LINK_TEXT,company[i]).click()\n\n"
] |
[
0
] |
[] |
[] |
[
"dynamic",
"findelement",
"list",
"python",
"selenium"
] |
stackoverflow_0074460747_dynamic_findelement_list_python_selenium.txt
|
Q:
Can't access specific element using xpath with selenium Python
I am trying to parse the wind direction using selenium and I think using xpath is the easiest way to get this info.
There is a table with all the information and the xpath of the elements within this table follow the same structure, hence my following code:
wind_directions = [browser.find_element_by_xpath(f'//*[@id="archive_results"]/table/tbody/tr/td/table/tbody/tr[3]/td[{i}]').text for i in range(14,25)]
Indeed, the structure of the data on the site is the following:
My issue is that I would like to get the content "rotate(494, 50, 50) translate(0,5)" from the picture above but I can't:
If I try to replace the previous f-string with f'//*[@id="archive_results"]/table/tbody/tr/td/table/tbody/tr[3]/td[{i}]/svg/g'],
Python tells me Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="archive_results"]/table/tbody/tr/td/table/tbody/tr[3]/td[14]/svg/g"}.
Any idea why I get such a message, while this is the exact XPath that appears when I inspect the element in Chrome? (I triple-checked the indexes in the f-string and they are not the source of the error.)
A:
svg, g, etc. are SVG elements; they live in the SVG XML namespace, so XPath does not match them by their bare tag names.
To locate such nodes with XPath you can change your XPath expression as follows:
'//*[@id="archive_results"]/table/tbody/tr/td/table/tbody/tr[3]/td[{i}]/*[name()="svg"]/*[name()="g"]'
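Put back into the loop from the question (a sketch, assuming the goal is the transform attribute such as "rotate(494, 50, 50) translate(0,5)"):
transforms = [
    browser.find_element(
        By.XPATH,
        f'//*[@id="archive_results"]/table/tbody/tr/td/table/tbody/tr[3]/td[{i}]/*[name()="svg"]/*[name()="g"]'
    ).get_attribute("transform")
    for i in range(14, 25)
]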
|
Can't access specific element using xpath with selenium Python
|
I am trying to parse the wind direction using selenium and I think using xpath is the easiest way to get this info.
There is a table with all the information and the xpath of the elements within this table follow the same structure, hence my following code:
wind_directions = [browser.find_element_by_xpath(f'//*[@id="archive_results"]/table/tbody/tr/td/table/tbody/tr[3]/td[{i}]').text for i in range(14,25)]
Indeed, the structure of the data on the site is the following:
My issue is that I would like to get the content "rotate(494, 50, 50) translate(0,5)" from the picture above but I can't:
If I try to replace the previous f-string with f'//*[@id="archive_results"]/table/tbody/tr/td/table/tbody/tr[3]/td[{i}]/svg/g'],
Python tells me Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="archive_results"]/table/tbody/tr/td/table/tbody/tr[3]/td[14]/svg/g"}.
Any idea why I get such a message, while this is the exact XPath that appears when I inspect the element in Chrome? (I triple-checked the indexes in the f-string and they are not the source of the error.)
|
[
"svg g etc. are special tag names.\nTo locate such nodes with XPath you can change your XPath expression as following:\n'//*[@id=\"archive_results\"]/table/tbody/tr/td/table/tbody/tr[3]/td[{i}]/*[name()=\"svg\"]/*[name()=\"g\"]'\n\n"
] |
[
1
] |
[] |
[] |
[
"html",
"python",
"selenium",
"xpath"
] |
stackoverflow_0074461752_html_python_selenium_xpath.txt
|
Q:
difference of summary between sklearn and statsmodels OLS
The goal is to detect and fix why the summary report from my sklearn implementation does not match the results of statsmodels OLS. The only thing that matches is the beta coefficients.
import pandas as pd
import numpy as np
from statsmodels.regression.linear_model import OLS
from sklearn import linear_model
from scipy.stats import t
class LinearRegression(linear_model.LinearRegression):
"""
LinearRegression class after sklearn's, but calculate t-statistics
and p-values for model coefficients (betas).
Additional attributes available after .fit()
are `t` and `p` which are of the shape (y.shape[1], X.shape[1])
which is (n_features, n_coefs)
This class sets the intercept to 0 by default, since usually we include it
in X.
"""
def __init__(self, *args, **kwargs):
if not "fit_intercept" in kwargs:
kwargs['fit_intercept'] = False
super(LinearRegression, self)\
.__init__(*args, **kwargs)
def fit(self, X, y, n_jobs=1):
self = super(LinearRegression, self).fit(X, y, n_jobs)
# std errors
uhat = (y-(X@self.coef_).ravel())
k = np.shape(X)[1]
s2 = (uhat.T@uhat)/(y.shape[0])
var = s2*np.linalg.inv(X.T@X)
self.se = np.sqrt(np.diag(var))
# T-Stat
self.t_stats = self.coef_/self.se
# p-values
self.df = y.shape[0] - k # -1 degrees of freedom: N minus number of parameters
self.p_values = 2*(t.sf(abs(self.t_stats),self.df))
# Rsquared
tss = (y-np.mean(y)).T@(y-np.mean(y))
rss = uhat.T@uhat
self.rsq = 1 - rss/tss
self.summary = pd.DataFrame({
"beta":self.coef_.reshape(1,-1).tolist()[0],
"se":self.se.reshape(1,-1).tolist()[0],
"t_stats":self.t_stats.reshape(1,-1).tolist()[0],
"p_values":self.p_values.reshape(1,-1).tolist()[0],
})
return self
Running the function on a toy dataset, we can test the results:
import statsmodels.api as sm
data = sm.datasets.longley.load_pandas()
# Estimate statsmodels OLS
model = OLS(endog=data.endog,exog=data.exog).fit()
# Estimate Sklearn with report like statsmodels OLS
model2 = LinearRegression(fit_intercept=False).fit(data.exog,np.array(data.endog))
model2.summary
I am worried that one of my formulas does not match the correct one.
A:
Here you have a class that you can use in order to obtain a LinearRegression model summary using Scikit-learn:
import numpy as np
import pandas as pd
from scipy.stats import t
from sklearn import linear_model
class LinearRegression(linear_model.LinearRegression):
def __init__(self, *args, **kwargs):
if not "fit_intercept" in kwargs:
self.with_intercept = True
kwargs['fit_intercept'] = True
else:
self.with_intercept = False
kwargs['fit_intercept'] = False
super(LinearRegression, self).__init__(*args, **kwargs)
def fit(self, X, y):
self = super(LinearRegression, self).fit(X, y)
y_pred = self.predict(X)
residuals = y - y_pred
residual_sum_of_squares = residuals.T @ residuals
if self.with_intercept:
coefficients = [coef for coef in self.coef_]
self.coefficients = [self.intercept_] + coefficients
p = len(X.columns) + 1 # plus one because an intercept is added
new_X = np.empty(shape=(len(X), p), dtype=float)
new_X[:, 0] = 1
new_X[:, 1:p] = X.values
else:
self.coefficients = self.coef_
p = len(X.columns)
new_X = X.values
# standard errors
sigma_squared_hat = residual_sum_of_squares / (len(X) - p)
var_beta_hat = np.linalg.inv(new_X.T @ new_X) * sigma_squared_hat
self.std_errors = [var_beta_hat[p_, p_] ** 0.5 for p_ in range(p)]
# t_values
self.t_values = np.array(self.coefficients)/self.std_errors
# p values
freedom_degree = y.shape[0] - X.shape[1]
self.p_values = 2*(t.sf(abs(self.t_values), freedom_degree))
# summary
self.summary = pd.DataFrame()
self.summary['Coefficients'] = self.coefficients
self.summary['Standard errors'] = self.std_errors
self.summary['t values'] = self.t_values
self.summary['p values'] = self.p_values
return self
This is how you can try the class and fit a model:
import statsmodels.api as sm
data = sm.datasets.longley.load_pandas()
X = pd.DataFrame(data.exog)
y = np.array(data.endog)
# in case you want to ignore the intercept include fit_intercept=False as a parameter
model_linear_regression = LinearRegression().fit(X, y)
model_linear_regression.summary
Result:
Coefficients Standard errors t values p values
0 -3.482259e+06 890420.379 -3.911 0.003
1 1.506190e+01 84.915 0.177 0.863
2 -3.580000e-02 0.033 -1.070 0.310
3 -2.020200e+00 0.488 -4.136 0.002
4 -1.033200e+00 0.214 -4.822 0.001
5 -5.110000e-02 0.226 -0.226 0.826
6 1.829151e+03 455.478 4.016 0.002
You can train an OLS model, making the necessary changes in case you want to add the intercept/constant, as follows:
with_intercept = True
if with_intercept:
p = len(X.columns) + 1 # plus one because an intercept is added
new_X = np.empty(shape=(len(X), p), dtype=float)
new_X[:, 0] = 1
new_X[:, 1:p] = X.values
model_statsmodel = sm.OLS(endog=y, exog=new_X).fit()
else:
model_statsmodel = sm.OLS(endog=y, exog=X).fit()
To finish, you can verify the results are the same this way:
assert np.allclose(np.round(model_statsmodel.params, 3), np.round(model_linear_regression.coefficients, 3))
assert np.allclose(np.round(model_statsmodel.bse, 3), np.round(model_linear_regression.std_errors, 3))
assert np.allclose(np.round(model_statsmodel.tvalues, 3), np.round(model_linear_regression.t_values, 3))
# the following assertion might return False, due to a difference in the number of decimals
assert np.allclose(np.round(model_statsmodel.pvalues, 3), np.round(model_linear_regression.p_values, 3))
Hope it is helpful :)
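For reference, the main numerical difference in the question's version is the variance estimator: it divides the residual sum of squares by N (y.shape[0]), while statsmodels divides by the residual degrees of freedom N - k. A one-line sketch of that fix in the original fit method:
s2 = (uhat.T @ uhat) / (y.shape[0] - k)  # divide by N - k, matching statsmodels' scale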
|
difference of summary between sklearn and statsmodels OLS
|
The goal is to find and fix why the report from my sklearn "summary" implementation does not match the results of statsmodels OLS. The only thing that matches is the beta coefficients.
import pandas as pd
import numpy as np
from statsmodels.regression.linear_model import OLS
from sklearn import linear_model
from scipy.stats import t
class LinearRegression(linear_model.LinearRegression):
"""
LinearRegression class after sklearn's, but calculate t-statistics
and p-values for model coefficients (betas).
Additional attributes available after .fit()
are `t` and `p` which are of the shape (y.shape[1], X.shape[1])
which is (n_features, n_coefs)
This class sets the intercept to 0 by default, since usually we include it
in X.
"""
def __init__(self, *args, **kwargs):
if not "fit_intercept" in kwargs:
kwargs['fit_intercept'] = False
super(LinearRegression, self)\
.__init__(*args, **kwargs)
def fit(self, X, y, n_jobs=1):
self = super(LinearRegression, self).fit(X, y, n_jobs)
# std errors
uhat = (y-(X@self.coef_).ravel())
k = np.shape(X)[1]
s2 = (uhat.T@uhat)/(y.shape[0])
var = s2*np.linalg.inv(X.T@X)
self.se = np.sqrt(np.diag(var))
# T-Stat
self.t_stats = self.coef_/self.se
# p-values
self.df = y.shape[0] - k # -1 degrees of freedom: N minus number of parameters
self.p_values = 2*(t.sf(abs(self.t_stats),self.df))
# Rsquared
tss = (y-np.mean(y)).T@(y-np.mean(y))
rss = uhat.T@uhat
self.rsq = 1 - rss/tss
self.summary = pd.DataFrame({
"beta":self.coef_.reshape(1,-1).tolist()[0],
"se":self.se.reshape(1,-1).tolist()[0],
"t_stats":self.t_stats.reshape(1,-1).tolist()[0],
"p_values":self.p_values.reshape(1,-1).tolist()[0],
})
return self
Running the function on a toy dataset, we can test the results:
import statsmodels.api as sm
data = sm.datasets.longley.load_pandas()
# Estimate statsmodels OLS
model = OLS(endog=data.endog,exog=data.exog).fit()
# Estimate Sklearn with report like statsmodels OLS
model2 = LinearRegression(fit_intercept=False).fit(data.exog,np.array(data.endog))
model2.summary
I am worried that one of my formulas does not match the correct one.
|
[
"Here you have a class that you can use in order to obtain a LinearRegression model summary using Scikit-learn:\nimport numpy as np\nimport pandas as pd\nfrom scipy.stats import t\nfrom sklearn import linear_model\n\nclass LinearRegression(linear_model.LinearRegression):\n\n def __init__(self, *args, **kwargs):\n if not \"fit_intercept\" in kwargs:\n self.with_intercept = True\n kwargs['fit_intercept'] = True\n else:\n self.with_intercept = False\n kwargs['fit_intercept'] = False\n super(LinearRegression, self).__init__(*args, **kwargs)\n\n def fit(self, X, y):\n self = super(LinearRegression, self).fit(X, y)\n\n y_pred = self.predict(X)\n residuals = y - y_pred\n residual_sum_of_squares = residuals.T @ residuals\n\n if self.with_intercept:\n coefficients = [coef for coef in self.coef_]\n self.coefficients = [self.intercept_] + coefficients\n\n p = len(X.columns) + 1 # plus one because an intercept is added\n new_X = np.empty(shape=(len(X), p), dtype=float)\n new_X[:, 0] = 1\n new_X[:, 1:p] = X.values\n \n else:\n self.coefficients = self.coef_\n\n p = len(X.columns)\n new_X = X.values\n\n # standard errors\n sigma_squared_hat = residual_sum_of_squares / (len(X) - p)\n var_beta_hat = np.linalg.inv(new_X.T @ new_X) * sigma_squared_hat\n self.std_errors = [var_beta_hat[p_, p_] ** 0.5 for p_ in range(p)]\n\n # t_values\n self.t_values = np.array(self.coefficients)/self.std_errors\n\n # p values\n freedom_degree = y.shape[0] - X.shape[1]\n self.p_values = 2*(t.sf(abs(self.t_values), freedom_degree))\n\n # summary\n self.summary = pd.DataFrame()\n self.summary['Coefficients'] = self.coefficients\n self.summary['Standard errors'] = self.std_errors\n self.summary['t values'] = self.t_values\n self.summary['p values'] = self.p_values\n \n return self\n\nThis is how you can try the class and fit a model:\nimport statsmodels.api as sm\n\ndata = sm.datasets.longley.load_pandas()\n\nX = pd.DataFrame(data.exog)\ny = np.array(data.endog)\n\n# in case you want to ignore the intercept include fit_intercept=False as a parameter\nmodel_linear_regression = LinearRegression().fit(X, y)\nmodel_linear_regression.summary\n\nResult:\n Coefficients Standard errors t values p values\n0 -3.482259e+06 890420.379 -3.911 0.003\n1 1.506190e+01 84.915 0.177 0.863\n2 -3.580000e-02 0.033 -1.070 0.310\n3 -2.020200e+00 0.488 -4.136 0.002\n4 -1.033200e+00 0.214 -4.822 0.001\n5 -5.110000e-02 0.226 -0.226 0.826\n6 1.829151e+03 455.478 4.016 0.002\n\nYou can train an OLS model, making the necessary changes in case you want to add the intercept/constant, as follows:\nwith_intercept = True\nif with_intercept:\n p = len(X.columns) + 1 # plus one because an intercept is added\n new_X = np.empty(shape=(len(X), p), dtype=float)\n new_X[:, 0] = 1\n new_X[:, 1:p] = X.values\n model_statsmodel = sm.OLS(endog=y, exog=new_X).fit()\nelse:\n model_statsmodel = sm.OLS(endog=y, exog=X).fit()\n\nTo finish, you can verify the results are the same this way:\nassert np.allclose(np.round(model_statsmodel.params, 3), np.round(model_linear_regression.coefficients, 3))\nassert np.allclose(np.round(model_statsmodel.bse, 3), np.round(model_linear_regression.std_errors, 3))\nassert np.allclose(np.round(model_statsmodel.tvalues, 3), np.round(model_linear_regression.t_values, 3))\n# the following assertion might return False, due to a difference in the number of decimals\nassert np.allclose(np.round(model_statsmodel.pvalues, 3), np.round(model_linear_regression.p_values, 3))\n\nHope it is helpful :)\n"
] |
[
2
] |
[] |
[] |
[
"least_squares",
"p_value",
"python",
"scikit_learn",
"statsmodels"
] |
stackoverflow_0074412143_least_squares_p_value_python_scikit_learn_statsmodels.txt
|
Q:
can you make a regular python class frozen?
It's useful to be able to create frozen dataclasses. I'm wondering if there is a way to do something similar for regular python classes (ones with an __init__ function with complex logic possibly). It would be good to prevent modification after construction in some kind of elegant way, like frozen dataclasses.
A:
Yes.
All attribute access in Python is highly customizable, and this is just a feature dataclasses make use of.
The easiest way to control attribute setting is to create a custom __setattr__ method in your class. If you want to be able to create attributes during __init__, one way is to have a specific flag that records whether each instance is frozen yet, and to freeze it at the end of __init__:
class MyFrozen:
_frozen = False
def __init__(self, ...):
...
self._frozen = True
def __setattr__(self, attr, value):
if getattr(self, "_frozen"):
raise AttributeError("Trying to set attribute on a frozen instance")
return super().__setattr__(attr, value)
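A quick usage sketch of the pattern, with the __init__ body filled in (the FrozenPoint name and its attributes are purely for illustration):
class FrozenPoint:
    _frozen = False

    def __init__(self, x, y):
        self.x = x           # allowed: _frozen is still False here
        self.y = y
        self._frozen = True  # freeze at the end of __init__

    def __setattr__(self, attr, value):
        if getattr(self, "_frozen"):
            raise AttributeError("Trying to set attribute on a frozen instance")
        return super().__setattr__(attr, value)


p = FrozenPoint(1, 2)
p.x = 10  # raises AttributeError: Trying to set attribute on a frozen instance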
|
can you make a regular python class frozen?
|
It's useful to be able to create frozen dataclasses. I'm wondering if there is a way to do something similar for regular python classes (ones with an __init__ function with complex logic possibly). It would be good to prevent modification after construction in some kind of elegant way, like frozen dataclasses.
|
[
"yes.\nAll attribute access in Python is highly customizable, and this is just a feature dataclasses make use of.\nThe easiest way to control attribute setting is to create a custom __setattr__ method in your class - if you want to be able to create attributes during __init__ one of the ways is to have an specific parameter to control whether each instance is frozen already, and freeze it at the end of __init__:\nclass MyFrozen:\n _frozen = False\n def __init__(self, ...):\n ...\n self._frozen = True\n\n def __setattr__(self, attr, value):\n if getattr(self, \"_frozen\"):\n raise AttributeError(\"Trying to set attribute on a frozen instance\")\n return super().__setattr__(attr, value) \n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_dataclasses"
] |
stackoverflow_0074312939_python_python_dataclasses.txt
|
Q:
Spyder IDE with python 3.10 seems freezing when click run button, but it works fine if run a single line beforehand running the entire script
I have trouble with the latest version of Spyder (5.4.0) and the latest version of Python (3.10.6).
Spyder version: 5.4.0 (conda)
Python version: 3.10.6 64-bit
Qt version: 5.15.2
PyQt5 version: 5.15.7
Operating System: Windows 10
Even if running a script like
print('Hello world')
when I click on the green play button, the IPython console seems to freeze on this script, and it does not run for hours.
If I select this line of code and run the current selection or the current cell, it works fine. From then on, Spyder seems to work fine, until at a certain point a run seems to freeze again. I have to start a new console, and before running the script I have to run a single line or a single cell.
It seems that I have to 'activate' the Python console in some way so that Spyder will run the script.
Does anyone have the same issue? How can I solve it?
I have tried to uninstall and reinstall both spyder and python many times, but it is useless.
A:
After many trials, I noticed that there is something strange with the IPython console: when it hangs after running some code, deleting all user variables makes it work fine.
Then I tried deleting all variables before execution, and it worked fine.
So a solution that worked for me is to go to Preferences -> Run and untick the option 'Remove all variables before execution'.
It is quite annoying because I have to do it manually every time, but this way Spyder does not appear to hang anymore! I hope the Spyder developers will solve it soon.
-
I automatically solved by typing at the beginning of any script these lines, inspired from the question Code to clear console and variables in Spyder :
try:
from IPython import get_ipython
get_ipython().magic('clear')
get_ipython().magic('reset -f')
import matplotlib.pyplot as plt
plt.close('all')
except:
pass
similar to Matlab in which you normally start your code with
clc
close all
clear all
|
Spyder IDE with python 3.10 seems freezing when click run button, but it works fine if run a single line beforehand running the entire script
|
I have trouble with the latest version of Spyder (5.4.0) and the latest version of Python (3.10.6).
Spyder version: 5.4.0 (conda)
Python version: 3.10.6 64-bit
Qt version: 5.15.2
PyQt5 version: 5.15.7
Operating System: Windows 10
Even if running a script like
print('Hello world')
when I click on the green play button, the IPython console seems to freeze on this script, and it does not run for hours.
If I select this line of code and run the current selection or the current cell, it works fine. From then on, Spyder seems to work fine, until at a certain point a run seems to freeze again. I have to start a new console, and before running the script I have to run a single line or a single cell.
It seems that I have to 'activate' the Python console in some way so that Spyder will run the script.
Does anyone have the same issue? How can I solve it?
I have tried to uninstall and reinstall both spyder and python many times, but it is useless.
|
[
"After many trials, I noticed that there is something strange with IPython console. I noticed that when it hangs after running a code, if I delete all user variables, it worked fine.\nThen I tryed to delete all variables before execution, and it work fine.\nTherefore I discovered that a solution that worked for me is to go to preferences -> Run -> and untick the option 'Remove all variables before execution'\nIt is quite annoying because I have to do it manually every time before running, but in this way the spyder does not appear to hang anymore! I hope that the Spyder developers will solve it soon.\n-\nI automatically solved by typing at the beginning of any script these lines, inspired from the question Code to clear console and variables in Spyder :\ntry:\n from IPython import get_ipython\n get_ipython().magic('clear')\n get_ipython().magic('reset -f')\n import matplotlib.pyplot as plt\n plt.close('all')\nexcept:\n pass\n\nsimilar to Matlab in which you normally start your code with\nclc\nclose all\nclear all\n\n"
] |
[
0
] |
[] |
[] |
[
"anaconda",
"ide",
"miniconda",
"python",
"spyder"
] |
stackoverflow_0074459113_anaconda_ide_miniconda_python_spyder.txt
|
Q:
Python: Detect most similar list from list of lists
I want to detect the most similar list from list of lists in the fastest way.
My searching list:
[1,2,3,4]
The list of lists:
[[1],[2],[1,2],[1,2,3,4,5,6],[1,2,3],[1,2,3,4,5]]
Most similar result:
[1,2,3]
I was trying to do that with some common operators in Python, but it's too slow on my data. I have about 2 million lists in the list of lists that I want to search.
A:
The following function returns the most similar lists according to their length:
def most_similar_acc_length(my_list, range_of_lists, length_range):
"""most similar series according to length
Parameters
----------
my_list : The list of interest
range_of_lists: List of lists where we search the most similar to 'my_list'
length_range: Range of series length to be considered as similar to the one of my_list
Returns:
--------
List of most similar lists in terms of length
"""
sim_lists=[x for x in range_of_lists if len(x)>=(len(my_list)-length_range) and len(x)<=(len(my_list)+length_range)]
return sim_lists
If we try it on the lists you shared with length_range=1, we get:
range_of_lists=[[1],[2],[1,2],[1,2,3,4,5,6],[1,2,3],[1,2,3,4,5]]
my_list=[1,2,3,4]
sim_list=most_similar_acc_length(my_list, range_of_lists, 1)
Output
[[1, 2, 3], [1, 2, 3, 4, 5]]
Second step
Having selected lists of similar length, we set up another function:
def most_similar_list(my_list, range_of_lists, length_range):
# We start with a first selection similar lists in terms of length
sim_list=most_similar_acc_length(my_list, range_of_lists, length_range)
new_list=[] # Binary values ==1 when value is same and ==0 when not
temp_list=[] # Temporary list to be appended to 'new_list'
for x in sim_list:
for i in range(min(len(x), len(my_list))):
if i==min(len(x)-1, len(my_list)-1):
if x[i]==my_list[i]:
temp_list.append(1)
else:
temp_list.append(0)
new_list.append(temp_list)
temp_list=[]
else:
if x[i]==my_list[i]:
temp_list.append(1)
else:
temp_list.append(0)
max_list=[sum(x) for x in new_list]
ind_max=max_list.index(max(max_list))
return sim_list[ind_max]
Let's try this function:
range_of_lists=[[1],[2],[1,2],[1,2,3,4,5,6],[1,2,3],[1,2,3,4,5]]
my_list=[1,2,3,4]
similar_list=most_similar_list(my_list, range_of_lists, 1)
similar_list
Output
[1, 2, 3, 4, 5]
|
Python: Detect most similar list from list of lists
|
I want to detect the most similar list from list of lists in the fastest way.
My searching list:
[1,2,3,4]
The list of lists:
[[1],[2],[1,2],[1,2,3,4,5,6],[1,2,3],[1,2,3,4,5]]
Most similar result:
[1,2,3]
I was trying to do that with some common operators in Python, but it's too slow on my data. I have about 2 million lists in the list of lists that I want to search.
|
[
"The following fonction returns the most similar lists according to the length\ndef most_similar_acc_length(my_list, range_of_lists, length_range):\n \"\"\"most similar series according to length\n Parameters\n ----------\n my_list : The list of interest\n range_of_lists: List of lists where we search the most similar to 'my_list'\n length_range: Range of series length to be considered as similar to the one of my_list\n \n Returns:\n --------\n List of most similar lists in terms of length\n \"\"\"\n \n sim_lists=[x for x in range_of_lists if len(x)>=(len(my_list)-length_range) and len(x)<=(len(my_list)+length_range)]\n return sim_lists\n\nIf we try it on the lists you shared with length_range length_range=1 we get:\nrange_of_lists=[[1],[2],[1,2],[1,2,3,4,5,6],[1,2,3],[1,2,3,4,5]]\nmy_list=[1,2,3,4]\n\nsim_list=most_similar_acc_length(my_list, range_of_lists, 1)\n\nOutput\n[[1, 2, 3], [1, 2, 3, 4, 5]]\n\nSecond step\nWe set up another function after having similar lists according to length\ndef most_similar_list(my_list, range_of_lists, length_range):\n # We start with a first selection similar lists in terms of length\n sim_list=most_similar_acc_length(my_list, range_of_lists, length_range)\n \n new_list=[] # Binary values ==1 when value is same and ==0 when not\n temp_list=[] # Temprary list to be appended to 'new_list'\n \n for x in sim_list:\n for i in range(min(len(x), len(my_list))):\n if i==min(len(x)-1, len(my_list)-1):\n if x[i]==my_list[i]:\n temp_list.append(1)\n else:\n temp_list.append(0)\n new_list.append(temp_list)\n temp_list=[]\n else:\n if x[i]==my_list[i]:\n temp_list.append(1)\n else:\n temp_list.append(0)\n\n max_list=[sum(x) for x in new_list]\n ind_max=max_list.index(max(max_list))\n \n return sim_list[ind_max]\n\nLet's try this function:\nrange_of_lists=[[1],[2],[1,2],[1,2,3,4,5,6],[1,2,3],[1,2,3,4,5]]\nmy_list=[1,2,3,4]\n\nsimilar_list=most_similar_list(my_list, range_of_lists, 1)\n\nsimilar_list\n\nOutput\n[1, 2, 3, 4, 5]\n\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074461893_python.txt
|
Q:
pandas groupby.apply is slow, even on small DataSets
I want to aggregate a pandas DataFrame by two group variables and do calculations on each group. As I want to mix columns, I use dataframe.groupby.apply
The following code works but is inexplicably slow. 3 seconds to aggregate 4000 rows.
When I change the code to one group variable, it is just half the time, maybe a little less.
Any ideas why it is so slow?
import random
df = pd.DataFrame(np.random.rand(4000,4), columns=list('abcd'))
df['group'] = random.choices([0, 0, 1, 1],k=4000)
df["grupp"]= random.choices([2, 3, 4, 2],k=4000)
df
def f(x):
d = {}
d['c_d_prodsum'] = (x['c'] * x['d']).sum()
return pd.Series(d, index=['c_d_prodsum'])
import time
start = time.time()
%timeit b=df.groupby(['group','grupp']).apply(f)
end = time.time()
print(end - start)
On my machine, it shows 33.2 ms ± 2.03 ms per loop and 2.77 as the number of seconds
A:
You'll get better performance if you restrict yourself to only those functions provided by pandas.
For instance...
def totime():
df['c*d'] = df['c']*df['d']
d = df.groupby(['group','grupp'])['c*d'].sum().rename('c_d_prodsum')
%timeit totime()
shows 842 µs ± 3.67 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
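A variant of the same idea that avoids adding a helper column to df, computing the product on the fly (a sketch using the same group keys):
# group a derived Series by two key Series without mutating df
d = (df['c'] * df['d']).groupby([df['group'], df['grupp']]).sum().rename('c_d_prodsum')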
|
pandas groupby.apply is slow, even on small DataSets
|
I want to aggregate a pandas DataFrame by two group variables and do calculations on each group. As I want to mix columns, I use dataframe.groupby.apply
The following code works but is inexplicably slow. 3 seconds to aggregate 4000 rows.
When I change the code to one group variable, it is just half the time, maybe a little less.
Any ideas why it is so slow?
import random
df = pd.DataFrame(np.random.rand(4000,4), columns=list('abcd'))
df['group'] = random.choices([0, 0, 1, 1],k=4000)
df["grupp"]= random.choices([2, 3, 4, 2],k=4000)
df
def f(x):
d = {}
d['c_d_prodsum'] = (x['c'] * x['d']).sum()
return pd.Series(d, index=['c_d_prodsum'])
import time
start = time.time()
%timeit b=df.groupby(['group','grupp']).apply(f)
end = time.time()
print(end - start)
On my machine, it shows 33.2 ms ± 2.03 ms per loop and 2.77 as the number of seconds
|
[
"You'll get better performance if you restrict yourself to only those functions provided by pandas.\nFor instance...\ndef totime():\n df['c*d'] = df['c']*df['d']\n d = df.groupby(['group','grupp'])['c*d'].sum().rename('c_d_prodsum')\n\n%timeit totime()\n\nshows 842 µs ± 3.67 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074455854_dataframe_pandas_python.txt
|
Q:
How to convert nested object to nested dictionary in python
I have an object Entry with the fields id, scene_info and rating. As can be seen, the object has attributes that are instances of other classes, Scene and Item. I want to convert this object to a dictionary.
Entry(id=None, scene_info=Scene(Recipes=[Item(ID='rec.chicky-nuggies', SpawnerIdx=0), Item(ID='rec.impossible-burger', SpawnerIdx=1)], Decor=[Item(ID='dec.plate-large-orange', SpawnerIdx=2), Item(ID='dec.plate-small-green', SpawnerIdx=3)]), rating=None)
(Pdb) vars(self)
{'id': None, 'scene_info': Scene(Recipes=[Item(ID='rec.chicky-nuggies', SpawnerIndex=0), Item(ID='rec.impossible-burger', SpawnerIdx=1)], Decor=[Item(ID='dec.plate-large-orange', SpawnerIdx=2), Item(ID='dec.plate-small-green', SpawnerIdx=3)]), 'rating': None}
EXPECTED RESULT
{'id': None, 'scene_info':{'Recipes': [{'ID': 'rec.chicky-nuggies', 'SpawnerIdx': 0}, {'ID': 'rec.impossible-burger', 'SpawnerIdx': 1}], 'Decor': [{'ID': 'dec.plate-large-orange', 'SpawnerIndex': 2}, {'ID': 'dec.plate-small-green', 'SpawnerIdx': 3}]}, 'rating': None}
I tried vars, but it only converts the outer object to a dict, not the inner objects. How can I convert the nested ones?
A:
I usually do it this way:
class Bar:
# child class
# some init code...
def encode(self):
return vars(self)
class Foo:
# parent class
# some init code...
def encode(self):
return vars(self)
def to_json(self, indent=None):
return json.dumps(self, default=lambda o: o.encode(), indent=indent)
to_json() will give you a JSON string for the class and its nested objects if they are simple enough; you can also use marshmallow to do this with more control. You could just do return json.dumps(self, default=lambda o: vars(o), indent=indent) in the parent class and not have the encode() method, but using the encode method allows you to customize the output.
Here is some random, silly code to show how it might be used and the output:
import json
class Ingredient:
def __init__(self, name, cost=0):
self.name = name
self.cost = cost
def encode(self):
return vars(self)
class Recipe:
def __init__(self, name, prep_time=0, cook_time=0, ingredients=None,
instructions=None):
self.name = name
self.prep_time = prep_time
self.cook_time = cook_time
self.ingredients = ingredients or []
self.instructions = instructions or {}
def encode(self):
return vars(self)
def to_json(self, indent=None):
return json.dumps(self, default=lambda o: o.encode(), indent=indent)
lettuce = Ingredient('Lettuce', 1.3)
tomato = Ingredient('Tomato', 5.2)
salad = Recipe('Salad', prep_time=5, cook_time=0)
salad.ingredients = [
lettuce,
tomato
]
salad.instructions = {
'Step 1': 'Get the ingredients out',
'Step 2': 'Mix them together',
'Step 3': 'Eat'
}
print(salad.to_json(4))
Output:
{
"name": "Salad",
"prep_time": 5,
"cook_time": 0,
"ingredients": [
{
"name": "Lettuce",
"cost": 1.3
},
{
"name": "Tomato",
"cost": 5.2
}
],
"instructions": {
"Step 1": "Get the ingredients out",
"Step 2": "Mix tem together",
"Step 3": "Eat"
}
}
A:
The preferred way to go would be modifying the class definition as stated by Tenacious B, but if you want a fast solution you can use the recursive function below.
def class2dict(instance):
    # Base case: values without a __dict__ are returned as-is
    if not hasattr(instance, "__dict__"):
        return instance
    # Copy vars() so the instance's own __dict__ is not mutated in place
    new_subdic = dict(vars(instance))
    for key, value in new_subdic.items():
        new_subdic[key] = class2dict(value)
    return new_subdic
Example:
# Class definitions
class Scene:
def __init__(self, time_dur, tag):
self.time_dur = time_dur
self.tag = tag
class Movie:
def __init__(self, scene1, scene2):
self.scene1 = scene1
self.scene2 = scene2
class Entry:
def __init__(self, movie):
self.movie = movie
In [2]: entry = Entry(Movie(Scene('1 minute', 'action'), Scene('2 hours', 'comedy')))
In [3]: class2dict(entry)
Out[3]:
{'movie': {
'scene1': {'time_dur': '1 minute', 'tag': 'action'},
'scene2': {'time_dur': '2 hours', 'tag': 'comedy'}}
}
A:
For the class types (Entry/Scene/Item), you can create a function that returns the arguments as a dictionary.
Try this code:
def getargs(**kwargs):
return kwargs # already a dictionary
Entry = Scene = Item = getargs # all functions do same thing
x = Entry(id=None, scene_info=Scene(Recipes=[Item(ID='rec.chicky-nuggies', SpawnerIdx=0), Item(ID='rec.impossible-burger', SpawnerIdx=1)], Decor=[Item(ID='dec.plate-large-orange', SpawnerIdx=2), Item(ID='dec.plate-small-green', SpawnerIdx=3)]), rating=None)
print(x)
Output
{'id': None, 'scene_info': {'Recipes': [{'ID': 'rec.chicky-nuggies', 'SpawnerIdx': 0}, {'ID': 'rec.impossible-burger', 'SpawnerIdx': 1}], 'Decor': [{'ID': 'dec.plate-large-orange', 'SpawnerIdx': 2}, {'ID': 'dec.plate-small-green', 'SpawnerIdx': 3}]}, 'rating': None}
A:
You can use the Pydantic model's Entry.json(); this will convert everything, including nested models, to a string, which can then be converted back to a dictionary using something like json.loads().
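A minimal sketch of that round trip, assuming the classes are declared as Pydantic models (v1-style .json() API; field names taken from the question):
import json
from typing import List, Optional

from pydantic import BaseModel


class Item(BaseModel):
    ID: str
    SpawnerIdx: int


class Scene(BaseModel):
    Recipes: List[Item]
    Decor: List[Item]


class Entry(BaseModel):
    id: Optional[int] = None
    scene_info: Scene
    rating: Optional[float] = None


entry = Entry(scene_info=Scene(
    Recipes=[Item(ID='rec.chicky-nuggies', SpawnerIdx=0)],
    Decor=[Item(ID='dec.plate-small-green', SpawnerIdx=3)],
))

# .json() serializes the nested models; json.loads() yields nested plain dicts
print(json.loads(entry.json()))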
|
How to convert nested object to nested dictionary in python
|
I have an object Entry with the fields id, scene_info and rating. As can be seen, the object has attributes that are instances of other classes, Scene and Item. I want to convert this object to a dictionary.
Entry(id=None, scene_info=Scene(Recipes=[Item(ID='rec.chicky-nuggies', SpawnerIdx=0), Item(ID='rec.impossible-burger', SpawnerIdx=1)], Decor=[Item(ID='dec.plate-large-orange', SpawnerIdx=2), Item(ID='dec.plate-small-green', SpawnerIdx=3)]), rating=None)
(Pdb) vars(self)
{'id': None, 'scene_info': Scene(Recipes=[Item(ID='rec.chicky-nuggies', SpawnerIndex=0), Item(ID='rec.impossible-burger', SpawnerIdx=1)], Decor=[Item(ID='dec.plate-large-orange', SpawnerIdx=2), Item(ID='dec.plate-small-green', SpawnerIdx=3)]), 'rating': None}
EXPECTED RESULT
{'id': None, 'scene_info':{'Recipes': [{'ID': 'rec.chicky-nuggies', 'SpawnerIdx': 0}, {'ID': 'rec.impossible-burger', 'SpawnerIdx': 1}], 'Decor': [{'ID': 'dec.plate-large-orange', 'SpawnerIndex': 2}, {'ID': 'dec.plate-small-green', 'SpawnerIdx': 3}]}, 'rating': None}
I tried vars, but it only converts the outer object to a dict, not the inner objects. How can I convert the nested ones?
|
[
"I usually do it this way:\nclass Bar:\n # child class\n # some init code...\n\n def encode(self):\n return vars(self)\n\nclass Foo:\n # parent class\n # some init code...\n\n def encode(self):\n return vars(self)\n\n def to_json(self, indent=None):\n return json.dumps(self, default=lambda o: o.encode(), indent=indent)\n\nto_json() will give you a json string for the class and its nested objects if they are simple enough, you can also use marshmallow to do this with more control. You could just do return json.dumps(self, default=lambda o: vars(o), indent=indent) in the parent class and not have the encode() method but using the encode method allows you to customize the output.\nHere is some random, silly code to show how it might be used and the output:\nimport json\n\n\nclass Ingredient:\n def __init__(self, name, cost=0):\n self.name = name\n self.cost = cost\n\n def encode(self):\n return vars(self)\n\n\nclass Recipe:\n def __init__(self, name, prep_time=0, cook_time=0, ingredients=None,\n instructions=None):\n self.name = name\n self.prep_time = prep_time\n self.cook_time = cook_time\n self.ingredients = ingredients or []\n self.instructions = instructions or {}\n\n def encode(self):\n return vars(self)\n\n def to_json(self, indent=None):\n return json.dumps(self, default=lambda o: o.encode(), indent=indent)\n\n\nlettuce = Ingredient('Lettuce', 1.3)\ntomato = Ingredient('Tomato', 5.2)\n\nsalad = Recipe('Salad', prep_time=5, cook_time=0)\n\nsalad.ingredients = [\n lettuce,\n tomato\n]\n\nsalad.instructions = {\n 'Step 1': 'Get the ingredients out',\n 'Step 2': 'Mix tem together',\n 'Step 3': 'Eat' \n}\n\nprint(salad.to_json(4))\n\n\nOutput:\n{\n \"name\": \"Salad\",\n \"prep_time\": 5,\n \"cook_time\": 0,\n \"ingredients\": [\n {\n \"name\": \"Lettuce\",\n \"cost\": 1.3\n },\n {\n \"name\": \"Tomato\",\n \"cost\": 5.2\n }\n ],\n \"instructions\": {\n \"Step 1\": \"Get the ingredients out\",\n \"Step 2\": \"Mix tem together\",\n \"Step 3\": \"Eat\"\n }\n}\n\n",
"The prefered way to go would be using modifing class definition as stated by Tenacious B, but if you want a fast solution you can use the recursive function stated below.\ndef class2dict(instance, built_dict={}):\n if not hasattr(instance, \"__dict__\"):\n return instance\n new_subdic = vars(instance)\n for key, value in new_subdic.items():\n new_subdic[key] = class2dict(value)\n return new_subdic\n\nExample:\n# Class definitions\nclass Scene:\n def __init__(self, time_dur, tag):\n self.time_dur = time_dur\n self.tag = tag\n\n\nclass Movie:\n def __init__(self, scene1, scene2):\n self.scene1 = scene1\n self.scene2 = scene2\n\n\nclass Entry:\n def __init__(self, movie):\n self.movie = movie\n\nIn [2]: entry = Entry(Movie(Scene('1 minute', 'action'), Scene('2 hours', 'comedy'))) \nIn [3]: class2dict(entry) \nOut[3]: \n{'movie': {\n 'scene1': {'time_dur': '1 minute', 'tag': 'action'}, \n 'scene2': {'time_dur': '2 hours', 'tag': 'comedy'}}\n} \n \n\n \n\n",
"For the class types (Entry\\Scene\\Item), you can create a function the returns the arguments as a dictionary.\nTry this code:\ndef getargs(**kwargs):\n return kwargs # already a dictionary\n\nEntry = Scene = Item = getargs # all functions do same thing\n\nx = Entry(id=None, scene_info=Scene(Recipes=[Item(ID='rec.chicky-nuggies', SpawnerIdx=0), Item(ID='rec.impossible-burger', SpawnerIdx=1)], Decor=[Item(ID='dec.plate-large-orange', SpawnerIdx=2), Item(ID='dec.plate-small-green', SpawnerIdx=3)]), rating=None)\n\nprint(x)\n\nOutput\n{'id': None, 'scene_info': {'Recipes': [{'ID': 'rec.chicky-nuggies', 'SpawnerIdx': 0}, {'ID': 'rec.impossible-burger', 'SpawnerIdx': 1}], 'Decor': [{'ID': 'dec.plate-large-orange', 'SpawnerIdx': 2}, {'ID': 'dec.plate-small-green', 'SpawnerIdx': 3}]}, 'rating': None}\n\n",
"You can use the Pydantic model's Entry.json(), this will convert everything including nested models to a string which can then be converted back to a dictionary by using something like json.loads()\n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"dictionary",
"pydantic",
"python"
] |
stackoverflow_0063893843_dictionary_pydantic_python.txt
|
Q:
How can I skip "Please enter your phone (or bot token)"?
I have several telegram accounts, and at startup, some are asked to enter data. How can I skip this input so that the script continues to run?
My example is not working:
for f in glob.iglob("*.session"): # generator, search immediate subdirectories
print(f.rsplit('.', 1)[0])
name_file = f.rsplit('.', 1)[0]
try:
client = TelegramClient(session=name_file, api_id=api_id, api_hash=api_hash)
await send_mes_to_users(client)
except errors.rpcerrorlist.PhoneNumberInvalidError:
print('fail session')
continue
A:
Done.
try:
    client = TelegramClient(session=name_file, api_id=api_id, api_hash=api_hash)
    await client.connect()
    if not await client.is_user_authorized():
        print("Error authorisation")
        continue
    await send_mes_to_users(client)
except errors.rpcerrorlist.PhoneNumberInvalidError:
    print('fail session')
    continue
|
How can I skip "Please enter your phone (or bot token)"?
|
I have several telegram accounts, and at startup, some are asked to enter data. How can I skip this input so that the script continues to run?
My example is not working:
for f in glob.iglob("*.session"): # generator, search immediate subdirectories
print(f.rsplit('.', 1)[0])
name_file = f.rsplit('.', 1)[0]
try:
client = TelegramClient(session=name_file, api_id=api_id, api_hash=api_hash)
await send_mes_to_users(client)
except errors.rpcerrorlist.PhoneNumberInvalidError:
print('fail session')
continue
|
[
"Done\ntry:\n\nclient = TelegramClient(session=name_file, api_id=api_id, api_hash=api_hash)\nawait client.connect()\nif not await client.is_user_authorized():\n print(\"Error authorisation\")\n continue\n await send_mes_to_users(client)\nexcept errors.rpcerrorlist.PhoneNumberInvalidError:\n print('fail session')\n continue\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"telegram",
"telethon"
] |
stackoverflow_0074461392_python_telegram_telethon.txt
|
Q:
Python list of tuples: increase number of tuple members
I have a list of tuples with the pattern "id", "text", "language" like this:
a = [('1', 'hello', 'en'), ...]
I would like to increase number of tuple members to "id", "text", "language", "color":
b = [('1', 'hello', 'en', 'red'), ...]
What is the correct way of doing this?
Thank you.
A:
Since a tuple is immutable you have to create new tuples. I assume you want to add this additional value to every tuple in the list.
a = [('1', 'hello', 'en'), ('2', 'hi', 'en')]
color = 'red'
a = [(x + (color,)) for x in a]
print(a)
The result is [('1', 'hello', 'en', 'red'), ('2', 'hi', 'en', 'red')].
If you have multiple colors in a list with as many entries as you have in your list with the tuples you can zip both sets of data.
a = [('1', 'hello', 'en'), ('2', 'hi', 'en'), ('3', 'oy', 'en')]
colors = ['red', 'green', 'blue']
a = [(x + (color,)) for x, color in zip(a, colors)]
print(a)
Now the result is
[('1', 'hello', 'en', 'red'), ('2', 'hi', 'en', 'green'), ('3', 'oy', 'en', 'blue')]
A:
Tuples are immutable, so you cannot append() to them.
If you want to add items, you should use Python lists.
Hope that helps!
A:
You can convert the tuple to a list, change it, and then convert it back to a tuple:
a[0] = list(a[0])
a[0].append("red")
a[0] = tuple(a[0])
Just loop this for the entire list and it should work
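A sketch of that loop applied to the whole list (the color value is hard-coded purely for illustration):
a = [('1', 'hello', 'en'), ('2', 'hi', 'en')]

for i in range(len(a)):
    row = list(a[i])   # tuple -> mutable list
    row.append('red')
    a[i] = tuple(row)  # back to a tuple

print(a)  # [('1', 'hello', 'en', 'red'), ('2', 'hi', 'en', 'red')]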
|
Python list of tuples: increase number of tuple members
|
I have a list of tuples with the pattern "id", "text", "language" like this:
a = [('1', 'hello', 'en'), ...]
I would like to increase number of tuple members to "id", "text", "language", "color":
b = [('1', 'hello', 'en', 'red'), ...]
What is the correct way of doing this?
Thank you.
|
[
"Since a tuple is immutable you have to create new tuples. I assume you want to add this additional value to every tuple in the list.\na = [('1', 'hello', 'en'), ('2', 'hi', 'en')]\ncolor = 'red'\n\na = [(x + (color,)) for x in a]\nprint(a)\n\nThe result is [('1', 'hello', 'en', 'red'), ('2', 'hi', 'en', 'red')].\n\nIf you have multiple colors in a list with as many entries as you have in your list with the tuples you can zip both sets of data.\na = [('1', 'hello', 'en'), ('2', 'hi', 'en'), ('3', 'oy', 'en')]\ncolors = ['red', 'green', 'blue']\n\na = [(x + (color,)) for x, color in zip(a, colors)]\nprint(a)\n\nNow the result is\n[('1', 'hello', 'en', 'red'), ('2', 'hi', 'en', 'green'), ('3', 'oy', 'en', 'blue')]\n\n",
"tuples are immutable so you cannot append().\nIf you want to add stuffs you should use python lists.\nHope, that might help you!\n",
"You can convert the tuple to a list, change it, and then converting back to a tuple\na[0] = list(a[0])\na[0].append(\"red\")\na[0] = tuple(a[0])\n\nJust loop this for the entire list and it should work\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"list",
"python",
"tuples"
] |
stackoverflow_0074462061_list_python_tuples.txt
|
Q:
Pygame ball bounce left to right of screen
My Python code has a circle that moves from the right of the screen to the left, but then it stops. I would like it to bounce off the left edge and continue moving to the right, then bounce off the right edge back to the left, and so on. I think I'm missing a line. I have tried several things, but nothing seems to work. Please see the code below; I would be grateful for any advice.
import pygame
pygame.init()
size = width, height = 400, 300
screen = pygame.display.set_mode(size)
x_pos = 380
y_pos = 280
r = 20
running = True
while running: # game cycle
screen.fill((0, 0, 0))
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
pygame.draw.circle(screen, (0, 255, 0), (x_pos, y_pos), r)
if x_pos > 20: # do not let the ball roll out of the screen
x_pos -= 1
pygame.time.delay(5) # delay in milliseconds
pygame.display.flip()
pygame.quit()
I think I'm missing an additional if statement that allows it to bounce off the edge. I would like to keep the code I have; I'm looking for just one or two lines that can hopefully solve the problem, not a complete revamp.
A:
You could define a variable xSpeed which is initially positive.
Every frame you would add xSpeed to the current x position.
Whenever the ball hits the right or left wall, xSpeed's sign should be flipped.
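A minimal sketch of that idea merged into the question's loop (the x_speed name and the r-based bounce bounds are assumptions):
import pygame

pygame.init()

size = width, height = 400, 300
screen = pygame.display.set_mode(size)

x_pos, y_pos, r = 380, 280, 20
x_speed = -1  # assumed starting speed: move left first; the sign flips on each bounce

running = True
while running:
    screen.fill((0, 0, 0))
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    pygame.draw.circle(screen, (0, 255, 0), (x_pos, y_pos), r)
    x_pos += x_speed
    if x_pos <= r or x_pos >= width - r:  # hit the left or right edge
        x_speed *= -1                     # reverse direction
    pygame.time.delay(5)
    pygame.display.flip()
pygame.quit()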
A:
You need to change your move method a bit: remove the else blocks, as they mess up the ball's movement, and just always move the ball once per call to move. You can also combine the edge checks for an axis into one line (using or):
def move(self):
if self.top <= 0 or self.bottom >= HEIGHT:
self.speedY *= -1
if self.left <= 0 or self.right >= WIDTH:
self.speedX *= -1
self.y += self.speedY
self.x += self.speedX
|
Pygame ball bounce left to right of screen
|
My Python code has a circle that moves from the right of the screen to the left, but then it stops. I would like it to bounce off the left edge and continue moving to the right, then bounce off the right edge back to the left, and so on. I think I'm missing a line. I have tried several things, but nothing seems to work. Please see the code below; I would be grateful for any advice.
import pygame
pygame.init()
size = width, height = 400, 300
screen = pygame.display.set_mode(size)
x_pos = 380
y_pos = 280
r = 20
running = True
while running: # game cycle
screen.fill((0, 0, 0))
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
pygame.draw.circle(screen, (0, 255, 0), (x_pos, y_pos), r)
if x_pos > 20: # do not let the ball roll out of the screen
x_pos -= 1
pygame.time.delay(5) # delay in milliseconds
pygame.display.flip()
pygame.quit()
I think I'm missing an additional if statement that allows it to bounce off the edge. I would like to keep the code I have; I'm looking for just one or two lines that can hopefully solve the problem, not a complete revamp.
|
[
"you could define a veriable xSpeed which is initially positive.\nevery frame you would add xSpeed to the current x Position.\nwhen ever the ball hits the right or left wall xSpeed's sign should get flipped.\n",
"You need to change your move method a bit, you need to remove the else blocks as they mess up ball's movement, just always move the ball once when calling move. You can also combine checking whether ball is on edge for an axis in one line (using or):\ndef move(self):\n\n if self.top <= 0 or self.bottom >= HEIGHT:\n self.speedY *= -1\n\n if self.left <= 0 or self.right >= WIDTH:\n self.speedX *= -1\n\n self.y += self.speedY\n self.x += self.speedX\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"pygame",
"python"
] |
stackoverflow_0074462066_pygame_python.txt
|
Q:
Grey box appearing instead of image in Tkinter
I am trying to add a small icon next to one of my buttons for my app; however, when I import the image and place it in the window, it is just a grey box. The image I am adding is not transparent and is in JPG format; I have also tried PNG, and ideally I would want it to accept a transparent PNG. My window has 2 frames, one for the sidebar and one for the main screen. I am using the customtkinter library, which makes tkinter look better (https://github.com/TomSchimansky/CustomTkinter). The image is in the same directory as the py file. The following is my code and images of the GUI before and after adding the code for the image.
code:
home_button = customtkinter.CTkButton(left_frame, text="Home", width=195, height=40,
corner_radius=10,
text_font=["", "16"],
fg_color=["#ffffff", "#000000"],
hover_color=["#f4f5fa", "#1d1e20"],
command=home
)
home_button.place(x=50, y=80)
home_button.text_label.grid(sticky="w")
home_ico = ImageTk.PhotoImage(Image.open("home_def.jpg"))
home_label = Label(image=home_ico)
home_label.place(x=20, y=80)
|
Grey box appearing instead of image in Tkinter
|
I am trying to add a small icon next to one of my buttons for my app; however, when I import the image and place it in the window, it is just a grey box. The image I am adding is not transparent and is in JPG format; I have also tried PNG, and ideally I would want it to accept a transparent PNG. My window has 2 frames, one for the sidebar and one for the main screen. I am using the customtkinter library, which makes tkinter look better (https://github.com/TomSchimansky/CustomTkinter). The image is in the same directory as the py file. The following is my code and images of the GUI before and after adding the code for the image.
code:
home_button = customtkinter.CTkButton(left_frame, text="Home", width=195, height=40,
corner_radius=10,
text_font=["", "16"],
fg_color=["#ffffff", "#000000"],
hover_color=["#f4f5fa", "#1d1e20"],
command=home
)
home_button.place(x=50, y=80)
home_button.text_label.grid(sticky="w")
home_ico = ImageTk.PhotoImage(Image.open("home_def.jpg"))
home_label = Label(image=home_ico)
home_label.place(x=20, y=80)
|
[] |
[] |
[
"You are probably making a bad reference to the path of the image file.\nI bet this image is on the same folder of the .py file but that's not how python works.\nYou will need to get OS. PATH and make a reference to your source folder.\n"
] |
[
-1
] |
[
"image",
"python",
"tkinter"
] |
stackoverflow_0074462156_image_python_tkinter.txt
|
Q:
Removing decimals from strings
I'm taking an introductory course in Python right now and I've run into some trouble with a task.
I have two strings in format:
a b c d e
f g h i l
I need to read these strings from a .txt file and convert them, as a matrix, to a vertical format like this:
a f
b g
c h
d i
e l
and put them into another .txt file, without using the numpy and pandas libraries. The problem is that from a matrix like this:
1 2 3 4 5
6 7 8 9 10
where each number doesn't have to be an integer, I need to get this matrix:
1 6
2 7
3 8
4 9
5 10
and right now I can only get it with decimals:
1.0 6.0
2.0 7.0
3.0 8.0
4.0 9.0
5.0 10.0
So, from my point of view, I need to somehow remove the .0 from the final result, but I don't know how to remove the decimals from strings made of float numbers.
Here goes my code:
with open('input.txt') as f:
Matrix = [list(map(float, row.split())) for row in f.readlines()]
TrMatrix=[[Matrix[j][i] for j in range(len(Matrix))] for i in range(len(Matrix[0]))]
file=open('output.txt','w')
for i in range(len(TrMatrix)):
print(*TrMatrix[i],file=file)
A:
Change float to int. float contains decimals. int does not.
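A sketch of that change applied to the question's parsing line; note that int() truncates, so non-integer inputs like 4.6 become 4:
with open('input.txt') as f:
    # int(float(x)) also accepts inputs written as "4.0"; plain int("4.0") raises
    Matrix = [[int(float(x)) for x in row.split()] for row in f.readlines()]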
A:
Here is a solution, as far as I understand your problem:
with open('input.txt') as f:
cols = []
for row in f.readlines():
col = [int(float(i)) for i in row.split()]
cols.append(col)
new_rows = []
for i in range(len(cols[0])):
new_rows.append(' '.join([str(col[i]) for col in cols]))
Tr_matrix = '\n'.join(new_rows)
with open('output.txt','w') as file:
file.write(Tr_matrix)
print(Tr_matrix)
Input:
1 2 3 4.6 5.4
6 7 8 9 10
Output:
1 6
2 7
3 8
4 9
5 10
|
Removing decimals from strings
|
I'm taking an introductory course in Python right now and I've run into some trouble with a task.
I have two strings in format:
a b c d e
f g h i l
I need to read these strings from a .txt file and convert them, as a matrix, to a vertical format like this:
a f
b g
c h
d i
e l
and put them into another .txt file, without using the numpy and pandas libraries. The problem is that from a matrix like this:
1 2 3 4 5
6 7 8 9 10
where each number doesn't have to be an integer, I need to get this matrix:
1 6
2 7
3 8
4 9
5 10
and right now I can only get it with decimals:
1.0 6.0
2.0 7.0
3.0 8.0
4.0 9.0
5.0 10.0
So, from my point of view, I need to somehow remove the .0 from the final result, but I don't know how to remove the decimals from strings made of float numbers.
Here goes my code:
with open('input.txt') as f:
Matrix = [list(map(float, row.split())) for row in f.readlines()]
TrMatrix=[[Matrix[j][i] for j in range(len(Matrix))] for i in range(len(Matrix[0]))]
file=open('output.txt','w')
for i in range(len(TrMatrix)):
print(*TrMatrix[i],file=file)
|
[
"Change float to int. float contains decimals. int does not.\n",
"Here is the solution as much as I understand your problem\nwith open('input.txt') as f:\n cols = []\n for row in f.readlines():\n col = [int(float(i)) for i in row.split()]\n cols.append(col)\nnew_rows = []\nfor i in range(len(cols[0])):\n new_rows.append(' '.join([str(col[i]) for col in cols]))\nTr_matrix = '\\n'.join(new_rows)\nwith open('output.txt','w') as file:\n file.write(Tr_matrix)\nprint(Tr_matrix)\n\nInput:\n1 2 3 4.6 5.4 \n6 7 8 9 10 \n\nOutput:\n1 6\n2 7\n3 8\n4 9\n5 10\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"decimal",
"matrix",
"python",
"string"
] |
stackoverflow_0074461301_decimal_matrix_python_string.txt
|
Q:
Single log line
As a developer, I want a single log line with OpenTelemetry Logs. Using the following example, I am able to use OTel _logs, but each record is emitted across several lines, which makes correlation difficult.
common.py
import logging
from opentelemetry.sdk._logs import (
LogEmitterProvider,
LoggingHandler,
set_log_emitter_provider,
)
from opentelemetry.sdk._logs.export import BatchLogProcessor, ConsoleLogExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.semconv.resource import ResourceAttributes
from local_machine_resource_detector import LocalMachineResourceDetector
def configure_logger(name, version):
local_resource = LocalMachineResourceDetector().detect()
resource = local_resource.merge(
Resource.create(
{
ResourceAttributes.SERVICE_NAME: name,
ResourceAttributes.SERVICE_VERSION: version,
}
)
)
provider = LogEmitterProvider(resource=resource)
set_log_emitter_provider(provider)
exporter = ConsoleLogExporter()
provider.add_log_processor(BatchLogProcessor(exporter))
logger = logging.getLogger(name)
logger.setLevel(logging.DEBUG)
handler = LoggingHandler()
logger.addHandler(handler)
return logger
common_runner.py
from common import configure_logger
logger = configure_logger("common_runner", "6.6.6")
logger.debug(
"common_runner.py module has been run",
extra={
"username": "Sid Vicous",
},
)
Here's the output:
{
"body": "common_runner.py module has been run",
"severity_number": "<SeverityNumber.DEBUG: 5>",
"severity_text": "DEBUG",
"attributes": {
"username": "Sid Vicous"
},
"timestamp": "2022-07-13T14:40:08.595698Z",
"trace_id": "0x00000000000000000000000000000000",
"span_id": "0x0000000000000000",
"trace_flags": 0,
"resource": "BoundedAttributes({'telemetry.sdk.language': 'python', 'telemetry.sdk.name': 'opentelemetry', 'telemetry.sdk.version': '1.11.1', 'net.host.name': 'Doug.Ramirez', 'net.host.ip': '127.0.0.1', 'service.name': 'common_runner', 'service.version': '6.6.6'}, maxlen=None)"
}
Here's what I would like to see in my APM platform (Datadog):
{"body": "common_runner.py module has been run", "severity_number": "<SeverityNumber.DEBUG: 5>", "severity_text": "DEBUG", "attributes": {"username": "Sid Vicous"}, "timestamp": "2022-07-13T14:40:08.595698Z", "trace_id": "0x00000000000000000000000000000000", "span_id": "0x0000000000000000", "trace_flags": 0, "resource": "BoundedAttributes({'telemetry.sdk.language': 'python', 'telemetry.sdk.name': 'opentelemetry', 'telemetry.sdk.version': '1.11.1', 'net.host.name': 'Doug.Ramirez', 'net.host.ip': '127.0.0.1', 'service.name': 'common_runner', 'service.version': '6.6.6'}, maxlen=None)"}
A:
I had a similar issue with the ConsoleSpanExporter and solved this by writing a custom formatter:
from os import linesep
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import ReadableSpan, TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
def log_formatter_oneline(span: ReadableSpan):
return span.to_json(indent=None) + linesep
tracer_provider = TracerProvider(resource=Resource.create({"service.name": "my-service"}))
tracer_provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter(formatter=log_formatter_oneline)))
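The same idea should carry over to the log side of the question, assuming your SDK version's ConsoleLogExporter also accepts a formatter callable (check its signature first; in some versions the callable receives the LogRecord directly, in others a LogData wrapper whose record is under .log_record):
from os import linesep
from opentelemetry.sdk._logs.export import ConsoleLogExporter

# hypothetical one-line formatter, mirroring the span formatter above;
# swap `record` for `log_data.log_record` if your version passes LogData
def log_formatter_oneline(record):
    return record.to_json(indent=None) + linesep

exporter = ConsoleLogExporter(formatter=log_formatter_oneline)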
|
Single log line
|
As a developer, I want a single log line with OpenTelemetry Logs. Using the following example, I am able to use OTel _logs, but each record is emitted across several lines, which makes correlation difficult.
common.py
import logging
from opentelemetry.sdk._logs import (
LogEmitterProvider,
LoggingHandler,
set_log_emitter_provider,
)
from opentelemetry.sdk._logs.export import BatchLogProcessor, ConsoleLogExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.semconv.resource import ResourceAttributes
from local_machine_resource_detector import LocalMachineResourceDetector
def configure_logger(name, version):
local_resource = LocalMachineResourceDetector().detect()
resource = local_resource.merge(
Resource.create(
{
ResourceAttributes.SERVICE_NAME: name,
ResourceAttributes.SERVICE_VERSION: version,
}
)
)
provider = LogEmitterProvider(resource=resource)
set_log_emitter_provider(provider)
exporter = ConsoleLogExporter()
provider.add_log_processor(BatchLogProcessor(exporter))
logger = logging.getLogger(name)
logger.setLevel(logging.DEBUG)
handler = LoggingHandler()
logger.addHandler(handler)
return logger
common_runner.py
from common import configure_logger
logger = configure_logger("common_runner", "6.6.6")
logger.debug(
"common_runner.py module has been run",
extra={
"username": "Sid Vicous",
},
)
Here's the output:
{
"body": "common_runner.py module has been run",
"severity_number": "<SeverityNumber.DEBUG: 5>",
"severity_text": "DEBUG",
"attributes": {
"username": "Sid Vicous"
},
"timestamp": "2022-07-13T14:40:08.595698Z",
"trace_id": "0x00000000000000000000000000000000",
"span_id": "0x0000000000000000",
"trace_flags": 0,
"resource": "BoundedAttributes({'telemetry.sdk.language': 'python', 'telemetry.sdk.name': 'opentelemetry', 'telemetry.sdk.version': '1.11.1', 'net.host.name': 'Doug.Ramirez', 'net.host.ip': '127.0.0.1', 'service.name': 'common_runner', 'service.version': '6.6.6'}, maxlen=None)"
}
Here's what I would like to see in my APM platform (Datadog):
{"body": "common_runner.py module has been run", "severity_number": "<SeverityNumber.DEBUG: 5>", "severity_text": "DEBUG", "attributes": {"username": "Sid Vicous"}, "timestamp": "2022-07-13T14:40:08.595698Z", "trace_id": "0x00000000000000000000000000000000", "span_id": "0x0000000000000000", "trace_flags": 0, "resource": "BoundedAttributes({'telemetry.sdk.language': 'python', 'telemetry.sdk.name': 'opentelemetry', 'telemetry.sdk.version': '1.11.1', 'net.host.name': 'Doug.Ramirez', 'net.host.ip': '127.0.0.1', 'service.name': 'common_runner', 'service.version': '6.6.6'}, maxlen=None)"}
|
[
"I had a similar issue with the ConsoleSpanExporter and solved this by writing a custom formatter:\nfrom os import linesep\n\nfrom opentelemetry.sdk.trace import ReadableSpan, TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter\n\ndef log_formatter_oneline(span: ReadableSpan):\n return span.to_json(indent=None) + linesep\n\ntracer_provider = TracerProvider(resource=Resource.create({\"service.name\": \"my-service\"}))\ntracer_provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter(formatter=log_formatter_oneline)))\n\n"
] |
[
0
] |
[
"You can use a combination of stringifying and replacing your log, like so!\nlog_as_str = str(log)\nprint(log_as_str.replace(\"\\n\", \"\"))\n\nIf your log is a dict(), it should work fine. In case, you can use json.dumps(log)\n"
] |
[
-1
] |
[
"open_telemetry",
"python"
] |
stackoverflow_0072968235_open_telemetry_python.txt
|
Q:
the user created in the admin panel cannot log in to the admin panel
I have a custom user model:
class CustomUser(AbstractUser):
ACCESS_LEVELS = (
('user', 'Авторизованный пользователь'),
('admin', 'Администратор')
)
email = models.EmailField(
max_length=254,
unique=True,
verbose_name='Эл. почта'
)
access_level = models.CharField(
max_length=150,
choices=ACCESS_LEVELS,
blank=True,
default='user',
verbose_name='Уровень доступа',
)
@property
def is_admin(self):
return self.is_superuser or self.access_level == 'admin'
class Meta:
verbose_name = 'Пользователь'
verbose_name_plural = 'Пользователи'
def __str__(self):
return (
f'email: {self.email}, '
f'access_level: {self.access_level}'
)
Registered in the admin panel:
@admin.register(CustomUser)
class UserAdmin(admin.ModelAdmin):
list_display = ('username', 'email', 'access_level')
search_fields = ('email', 'access_level')
list_filter = ('email',)
def save_model(self, request, obj, form, change):
if obj.is_admin:
obj.is_staff = True
obj.save()
When I create a superuser or a user with staff status and try to log in, a message appears:
Please enter the correct username and password for a staff account. Note that both fields may be case-sensitive.
So I Googled the issue and tried everything I could. Here are all the problems I investigated:
Database not synced: I synced it and nothing changed.
No django_session table: I checked; it's there.
Problematic settings: I just added the created apps to INSTALLED_APPS.
User not configured correctly: is_staff, is_superuser, and is_active are all True.
Old sessions: I checked the django_session table and it's empty.
Missing or wrong URL pattern: Currently I have url('admin/', admin.site.urls) in mysite/urls.py.
Wrong server command: I'm using python manage.py runserver.
Something wrong with database: I tried deleting the database and then reapplying the migrations, but nothing changed.
A:
Use Django's built-in UserAdmin (see the Django source on GitHub) to register the user model. It provides the functionality to hash the password when you enter it in the admin panel:
from django.contrib.auth.admin import UserAdmin
from .models import CustomUser
class CustomUserAdmin(UserAdmin):
pass
admin.site.register(CustomUser, CustomUserAdmin)
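If you also want to keep the custom columns and the extra field from the question, a minimal sketch could look like this; the fieldsets shown are an assumption, so adjust them to your model:
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from .models import CustomUser

@admin.register(CustomUser)
class CustomUserAdmin(UserAdmin):
    # keep the columns/filters from the original ModelAdmin
    list_display = ('username', 'email', 'access_level')
    search_fields = ('email', 'access_level')
    list_filter = ('email',)
    # expose the extra field on the change form; everything else
    # (including the password-hashing forms) is inherited from UserAdmin
    fieldsets = UserAdmin.fieldsets + (
        ('Access', {'fields': ('access_level',)}),
    )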
|
the user created in the admin panel cannot log in to the admin panel
|
I have a custom user model:
class CustomUser(AbstractUser):
ACCESS_LEVELS = (
('user', 'Авторизованный пользователь'),
('admin', 'Администратор')
)
email = models.EmailField(
max_length=254,
unique=True,
verbose_name='Эл. почта'
)
access_level = models.CharField(
max_length=150,
choices=ACCESS_LEVELS,
blank=True,
default='user',
verbose_name='Уровень доступа',
)
@property
def is_admin(self):
return self.is_superuser or self.access_level == 'admin'
class Meta:
verbose_name = 'Пользователь'
verbose_name_plural = 'Пользователи'
def __str__(self):
return (
f'email: {self.email}, '
f'access_level: {self.access_level}'
)
Registered in the admin panel:
@admin.register(CustomUser)
class UserAdmin(admin.ModelAdmin):
list_display = ('username', 'email', 'access_level')
search_fields = ('email', 'access_level')
list_filter = ('email',)
def save_model(self, request, obj, form, change):
if obj.is_admin:
obj.is_staff = True
obj.save()
When I create a superuser or a user with staff status and try to log in, a message appears:
Please enter the correct username and password for a staff account. Note that both fields may be case-sensitive.
So I Googled the issue and tried everything I could. Here are all the problems I investigated:
Database not synced: I synced it and nothing changed.
No django_session table: I checked; it's there.
Problematic settings: I just added the created apps to INSTALLED_APPS.
User not configured correctly: is_staff, is_superuser, and is_active are all True.
Old sessions: I checked the django_session table and it's empty.
Missing or wrong URL pattern: Currently I have url('admin/', admin.site.urls) in mysite/urls.py.
Wrong server command: I'm using python manage.py runserver.
Something wrong with database: I tried deleting the database and then reapplying the migrations, but nothing changed.
|
[
"Use UserAdmin[Django-GitHub] to register UserModel. It will provide the functionality to hash the password when you enter the password in admin panel so:\nfrom django.contrib.auth.admin import UserAdmin\nfrom .models import CustomUser\n\nclass CustomUserAdmin(UserAdmin):\n pass\n\nadmin.site.register(CustomUser, CustomUserAdmin)\n\n"
] |
[
2
] |
[] |
[] |
[
"django",
"django_4.1",
"django_admin",
"python"
] |
stackoverflow_0074461881_django_django_4.1_django_admin_python.txt
|
Q:
Python Selenium driver.get() not working within a for loop
The code below logs into a YouTube account, and once logged in, it should visit a few YouTube videos.
The issue is:
If I do a simple direct link like here, it works
driver.get('https://www.youtube.com/watch?v=FFDDN1C1MEQ')
If I do a loop to visit multiple links I get an error:
raise exception_class(message, screen, stacktrace)
InvalidArgumentException: invalid type: sequence, expected a string at line 1 column 8
the full code is below
import time
import numpy as np
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
def youtube_login():
email = 'email@email.com'
password = 'emailPassword'
# Browser
driver = webdriver.Firefox()
driver.get('https://accounts.google.com/ServiceLogin?hl=en&continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Fhl%3Den%26feature%3Dsign_in_button%26app%3Ddesktop%26action_handle_signin%3Dtrue%26next%3D%252F&uilel=3&passive=true&service=youtube#identifier')
# log in
driver.find_element_by_id('identifierId').send_keys(email)
driver.find_element_by_class_name('CwaK9').click()
WebDriverWait(driver, 500).until(EC.element_to_be_clickable((By.NAME, "password")))
driver.find_element_by_name('password').send_keys(password)
driver.find_element_by_class_name('CwaK9').click()
WebDriverWait(driver, 500).until(EC.element_to_be_clickable((By.ID, "identity-prompt-confirm-button")))
driver.find_element_by_id('identity-prompt-confirm-button').click()
#driver.get('https://www.youtube.com/watch?v=FFDDN1C1MEQ') # If I do a simple direct link like here, it works
urls = []
# You can add in a file and import from there
inp = open ("urls.txt","r")
for line in inp.readlines():
urls.append(line.split())
for url in urls:
driver.get(url)
youtube_login()
A:
I think you have bad URL format in urls.txt
Try to debug URL like this:
from selenium.common.exceptions import InvalidArgumentException
for url in urls:
try:
driver.get(url)
except InvalidArgumentException:
print(url)
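In this particular script the sequence error is most likely produced before the requests are even made: urls.append(line.split()) stores a list for every line, because str.split() returns a list, so each driver.get(url) call receives a list instead of a string. A minimal sketch of the fix, assuming urls.txt holds one URL per line:
urls = []
with open("urls.txt") as inp:
    for line in inp:
        url = line.strip()  # drop the trailing newline, keep the URL as a string
        if url:  # skip blank lines
            urls.append(url)

for url in urls:
    driver.get(url)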
|
Python Selenium driver.get() not working within a for loop
|
The code below logs into a YouTube account, and once logged in, it should visit a few YouTube videos.
The issue is:
If I do a simple direct link like here, it works
driver.get('https://www.youtube.com/watch?v=FFDDN1C1MEQ')
If I do a loop to visit multiple links I get an error:
raise exception_class(message, screen, stacktrace)
InvalidArgumentException: invalid type: sequence, expected a string at line 1 column 8
the full code is below
import time
import numpy as np
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
def youtube_login():
email = 'email@email.com'
password = 'emailPassword'
# Browser
driver = webdriver.Firefox()
driver.get('https://accounts.google.com/ServiceLogin?hl=en&continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Fhl%3Den%26feature%3Dsign_in_button%26app%3Ddesktop%26action_handle_signin%3Dtrue%26next%3D%252F&uilel=3&passive=true&service=youtube#identifier')
# log in
driver.find_element_by_id('identifierId').send_keys(email)
driver.find_element_by_class_name('CwaK9').click()
WebDriverWait(driver, 500).until(EC.element_to_be_clickable((By.NAME, "password")))
driver.find_element_by_name('password').send_keys(password)
driver.find_element_by_class_name('CwaK9').click()
WebDriverWait(driver, 500).until(EC.element_to_be_clickable((By.ID, "identity-prompt-confirm-button")))
driver.find_element_by_id('identity-prompt-confirm-button').click()
#driver.get('https://www.youtube.com/watch?v=FFDDN1C1MEQ') # If I do a simple direct link like here, it works
urls = []
# You can add in a file and import from there
inp = open ("urls.txt","r")
for line in inp.readlines():
urls.append(line.split())
for url in urls:
driver.get(url)
youtube_login()
|
[
"I think you have bad URL format in urls.txt\nTry to debug URL like this:\nfrom selenium.common.exceptions import InvalidArgumentException\n\nfor url in urls:\n try: \n driver.get(url)\n except InvalidArgumentException:\n print(url)\n\n"
] |
[
1
] |
[
"I solved this with import time, time.sleep(0.5) before driver.get().\n"
] |
[
-1
] |
[
"python",
"selenium",
"selenium_webdriver"
] |
stackoverflow_0053032043_python_selenium_selenium_webdriver.txt
|
Q:
Colors not displaying properly matplotlib bar chart
I am trying to create a bar chart with one colour per bar.
Dataset: Dataset Link
When I use the color parameter in a matplotlib bar chart, the colours do not assign one to each bar. They randomly distribute throughout all the bars, with no explicit pattern.
This is the code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#Read file
df = pd.read_excel('CAR DETAILS FROM CAR DEKHO.xlsx')
#Change price column
df["price_sterling"] = df["selling_price"].apply(lambda x: x*0.011)
#PLOT BAR
plt.bar(df.owner,df.price_sterling,color=['red', 'yellow', 'black', 'blue', 'orange'])
plt.xticks(rotation="vertical")
plt.show()
Bar chart output
I don't know what I'm doing wrong. Any ideas?
I have not seen this problem in any of the many tutorials and SO pages I have visited...
A:
plt.bar draws one bar per row of your data and cycles the color list across those rows, so with repeated owner values the colors end up looking random. If you need to assign a specific color to each bar, simply use .set_color on each bar as in the example below
graph = plt.bar([1,2,3,4], [10,11,12,13])
graph[0].set_color('red')
graph[1].set_color('green')
graph[2].set_color('orange')
graph[3].set_color('blue')
plt.show()
or another way
a = [1, 2, 3, 4]
b = [10, 11, 12, 13]
figure, axes = plt.subplots()
axes.bar(a, b, color=['red', 'green', 'orange', 'blue'])
I think you are trying to calculate the total "selling price" per "owner"? If so, then you can do the following
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#Read file
df = pd.read_excel('CAR DETAILS FROM CAR DEKHO.xlsx')
#Change price column
df["price_sterling"] = df["selling_price"].apply(lambda x: x*0.011)
new_df = df.groupby(['owner']).sum()
#PLOT BAR
plt.bar(new_df.index,new_df.price_sterling,color=['red', 'yellow', 'black', 'blue', 'orange'])
plt.xticks(rotation="vertical")
plt.show()
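If the bars should keep the same color even when the set of owners changes, a dictionary lookup is more robust than a positional color list. A sketch; the owner labels below are assumptions, so substitute the actual values from df.owner:
# map each owner category to a fixed color; unknown categories fall back to grey
palette = {'First Owner': 'red', 'Second Owner': 'yellow',
           'Third Owner': 'black', 'Fourth & Above Owner': 'blue',
           'Test Drive Car': 'orange'}
colors = [palette.get(owner, 'grey') for owner in new_df.index]
plt.bar(new_df.index, new_df.price_sterling, color=colors)
plt.xticks(rotation='vertical')
plt.show()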
|
Colors not displaying properly matplotlib bar chart
|
I am trying to create a bar chart with one colour per bar.
Dataset: Dataset Link
When I use the color parameter in a matplotlib bar chart, the colours do not assign one to each bar. They randomly distribute throughout all the bars, with no explicit pattern.
This is the code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#Read file
df = pd.read_excel('CAR DETAILS FROM CAR DEKHO.xlsx')
#Change price column
df["price_sterling"] = df["selling_price"].apply(lambda x: x*0.011)
#PLOT BAR
plt.bar(df.owner,df.price_sterling,color=['red', 'yellow', 'black', 'blue', 'orange'])
plt.xticks(rotation="vertical")
plt.show()
Bar chart output
I don't know what I'm doing wrong. Any ideas?
I have not seen this problem in any of the many tutorials and SO pages I have visited...
|
[
"Because it will depict the value of your df.price and df.owner in color randomly. I don't know what you're plotting to represent but what if you need to assign a color to each bar. Simply use .set_color for each bar as below example\n graph = plt.bar([1,2,3,4], [10,11,12,13])\n graph[0].set_color('red')\n graph[1].set_color('green')\n graph[2].set_color('orange')\n graph[3].set_color('blue')\n plt.show()\n\nor another way\na = [1, 2, 3, 4]\nb = [10, 11, 12, 13]\nfigure, axes = plt.subplots()\naxes.bar(a, b, color=['red', 'green', 'orange', 'blue'])\n\nI think you are trying to calculate the total \"selling price\" of the \"owner\"? If so then you can add the following\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n#Read file\ndf = pd.read_excel('CAR DETAILS FROM CAR DEKHO.xlsx')\n\n#Change price column\ndf[\"price_sterling\"] = df[\"selling_price\"].apply(lambda x: x*0.011)\nnew_df = df.groupby(['owner']).sum()\n\n#PLOT BAR\nplt.bar(new_df.index,new_df.price_sterling,color=['red', 'yellow', 'black', 'blue', 'orange'])\nplt.xticks(rotation=\"vertical\")\nplt.show()\n\n"
] |
[
1
] |
[] |
[] |
[
"bar_chart",
"matplotlib",
"python"
] |
stackoverflow_0074461007_bar_chart_matplotlib_python.txt
|
Q:
Python matplotlib barbs/quiver map colors to different sets of values
I am trying to create a barb vector plot in matplotlib and map some colors to specific magnitudes: for example, to have vectors with magnitudes between 10 and 20 plotted as blue, and between 20 and 30 as rgb(0,15,40), and so on. The documentation for the barbs and quiver functions (they are similar) mentions the C input arg:
barb(X, Y, U, V, C, **kw)
Arguments:
X, Y:
The x and y coordinates of the barb locations (default is head of barb; see pivot kwarg)
U, V:
Give the x and y components of the barb shaft
C:
An optional array used to map colors to the barbs
However, this is very vague, and after searching all over Google, I am no closer to understanding how to use this color array in specific ways. I managed to discover that by setting C equal to the array of vector magnitudes and specifying the "cmap" kwarg, it will map the barbs to the specified colormap, as in the example code below. However, this is not what I want. I want to control the colors of specific groups of magnitudes. Any help would be appreciated.
Example code:
from matplotlib import pyplot as plt
from numpy import arange,meshgrid,sqrt
u,v = arange(-50,51,10),arange(-50,51,10)
u,v = meshgrid(u,v)
x,y = u,v
C = sqrt(u**2 + v**2)
plt.barbs(x,y,u,v,C,cmap=plt.cm.jet)
plt.show()
Resulting plot image link: (sorry can't post images directly yet)
http://i49.tinypic.com/xombmc.jpg
A:
You can get it by discretizing the map.
import matplotlib as mpl
import matplotlib.pyplot as plt
from numpy import arange,meshgrid,sqrt
u,v = arange(-50,51,10),arange(-50,51,10)
u,v = meshgrid(u,v)
x,y = u,v
C = sqrt(u**2 + v**2)
cmap=plt.cm.jet
bounds = [10, 20, 40, 60]
norm = mpl.colors.BoundaryNorm(bounds, cmap.N)
img=plt.barbs(x,y,u,v,C,cmap=cmap,norm=norm)
plt.colorbar(img, cmap=cmap, norm=norm, boundaries=bounds, ticks=bounds)
plt.show()
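To pin specific colors to specific magnitude bands, as asked (e.g. blue for 10-20 and rgb(0, 15, 40) for 20-30), a ListedColormap can be combined with the same BoundaryNorm. A sketch assuming four bands; the band edges and colors are illustrative:
import matplotlib as mpl
import matplotlib.pyplot as plt
from numpy import arange, meshgrid, sqrt

u, v = arange(-50, 51, 10), arange(-50, 51, 10)
u, v = meshgrid(u, v)
x, y = u, v
C = sqrt(u**2 + v**2)

# one color per interval: [0,10), [10,20), [20,30), [30,max]
cmap = mpl.colors.ListedColormap(['grey', 'blue', (0/255, 15/255, 40/255), 'red'])
bounds = [0, 10, 20, 30, float(C.max())]
norm = mpl.colors.BoundaryNorm(bounds, cmap.N)
img = plt.barbs(x, y, u, v, C, cmap=cmap, norm=norm)
plt.colorbar(img, ticks=bounds)
plt.show()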
A:
sx = 0
ex = 135
sy = 0
ey = 234
plt.barbs(x[sx:ex:5, sy:ey:5], y[sx:ex:5, sy:ey:5],
u[sx:ex:5, sy:ey:5], v[sx:ex:5, sy:ey:5],
u[sx:ex:5, sy:ey:5], cmap='coolwarm',
linewidth=1)
try this for "different color barbs w.r.t wind speed"
|
Python matplotlib barbs/quiver map colors to different sets of values
|
I am trying to create a barb vector plot in matplotlib and map some colors to specific magnitudes: for example, to have vectors with magnitudes between 10 and 20 plotted as blue, and between 20 and 30 as rgb(0,15,40), and so on. The documentation for the barbs and quiver functions (they are similar) mentions the C input arg:
barb(X, Y, U, V, C, **kw)
Arguments:
X, Y:
The x and y coordinates of the barb locations (default is head of barb; see pivot kwarg)
U, V:
Give the x and y components of the barb shaft
C:
An optional array used to map colors to the barbs
However, this is very vague, and after searching all over Google, I am no closer to understanding how to use this color array in specific ways. I managed to discover that by setting C equal to the array of vector magnitudes and specifying the "cmap" kwarg, it will map the barbs to the specified colormap, as in the example code below. However, this is not what I want. I want to control the colors of specific groups of magnitudes. Any help would be appreciated.
Example code:
from matplotlib import pyplot as plt
from numpy import arange,meshgrid,sqrt
u,v = arange(-50,51,10),arange(-50,51,10)
u,v = meshgrid(u,v)
x,y = u,v
C = sqrt(u**2 + v**2)
plt.barbs(x,y,u,v,C,cmap=plt.cm.jet)
plt.show()
Resulting plot image link: (sorry can't post images directly yet)
http://i49.tinypic.com/xombmc.jpg
|
[
"You can get it by discretizing the map.\nimport matplotlib as mpl \nimport pyplot as plt\nfrom numpy import arange,meshgrid,sqrt\n\nu,v = arange(-50,51,10),arange(-50,51,10)\nu,v = meshgrid(u,v)\nx,y = u,v\nC = sqrt(u**2 + v**2)\ncmap=plt.cm.jet\nbounds = [10, 20, 40, 60]\nnorm = mpl.colors.BoundaryNorm(bounds, cmap.N)\nimg=plt.barbs(x,y,u,v,C,cmap=cmap,norm=norm)\nplt.colorbar(img, cmap=cmap, norm=norm, boundaries=bounds, ticks=bounds)\nplt.show()\n\n\n",
"sx = 0\nex = 135\nsy = 0\ney = 234\nplt.barbs(x[sx:ex:5, sy:ey:5], y[sx:ex:5, sy:ey:5],\n u[sx:ex:5, sy:ey:5], v[sx:ex:5, sy:ey:5],\n u[sx:ex:5, sy:ey:5], cmap='coolwarm',\n linewidth=1)\n\ntry this for \"different color barbs w.r.t wind speed\"\n"
] |
[
3,
0
] |
[] |
[] |
[
"colors",
"matplotlib",
"python",
"vector"
] |
stackoverflow_0011476752_colors_matplotlib_python_vector.txt
|
Q:
Computing log-likelihood on a validation / test set
Regression results from Python's statsmodels library include the value llf, which is, I reckon, the log-likelihood obtained during fitting. I am, however, interested in the log-likelihood on new data, i.e. the data I use in predict(). Is there a function (even if undocumented) I can call to obtain it? In particular, I am interested in the log-likelihood for OrderedModel.
A:
Computing loglikelihood on new data is not directly possible in statsmodels.
(see for example https://github.com/statsmodels/statsmodels/issues/7947 )
The model loglike method always uses the data (endog, exog and other model-specific arrays) that are attached to the model as attributes.
Several models like GLM and standard discrete models like Logit, Poisson have a get_distribution method (in statsmodels 0.14) that returns a scipy stats compatible distribution instance for new data similar to predict. This distribution instance has a pdf and logpdf method that can be used to compute the loglikelihood for predictions.
However, that is not yet available for models like OrderedModel.
Two possible workarounds that might work in most cases (I have not checked them for OrderedModel):
Create a new model with the predict data and then evaluate model.loglike with params from the estimated model. This will use nobs and degrees of freedom based on the prediction data and not the original model. So results that depend on those might not be appropriate for some use cases.
Change the data attributes of the underlying model. That is, assign the new data to model.endog, model.exog and, if necessary, other arrays. Then call the model.loglike method with the estimated parameters.
Both of those are hacks that might work for loglike but might not work for some other model or results statistics.
A proper way would be to write new functions that either compute the loglike directly, or that convert predicted probabilities to create a multinomial distribution instance.
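For reference, a minimal sketch of the first workaround for OrderedModel; the variable names and the train/validation split are placeholders:
from statsmodels.miscmodels.ordinal_model import OrderedModel

# fit on the training data
mod = OrderedModel(y_train, X_train, distr='logit')
res = mod.fit(method='bfgs')

# build a second model around the validation data and evaluate its
# log-likelihood at the parameters estimated on the training data
mod_val = OrderedModel(y_val, X_val, distr='logit')
llf_val = mod_val.loglike(res.params)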
|
Computing log-likelihood on a validation / test set
|
Regression results from Python's statsmodels library include the value llf, which is, I reckon, the log-likelihood obtained during fitting. I am, however, interested in the log-likelihood on new data, i.e. the data I use in predict(). Is there a function (even if undocumented) I can call to obtain it? In particular, I am interested in the log-likelihood for OrderedModel.
|
[
"Computing loglikelihood on new data is not directly possible in statsmodels.\n(see for example https://github.com/statsmodels/statsmodels/issues/7947 )\nThe model loglike method always uses the data, endog, exog and other model specific arrays, that is attached to the model as attributes.\nSeveral models like GLM and standard discrete models like Logit, Poisson have a get_distribution method (in statsmodels 0.14) that returns a scipy stats compatible distribution instance for new data similar to predict. This distribution instance has a pdf and logpdf method that can be use to compute the loglikelihood for predictions.\nHowever, that is not yet available for models like OrderedModel.\nTwo possible workarounds, that might work for most cases (I have not checked for OrderedModel)\n\nCreate a new model with the predict data and then evaluate model.loglike with params from the estimated model. This will use nobs and degrees of freedom based on the prediction data and not the original model. So results that depend on those might not be appropriate for some usecases.\nChange the data attributes of the underlying model. That is, assign the new data to model.endog, model.exog and, if necessary, other arrays. Then call the model.loglike method with the estimated parameters.\n\nBoth of those are hacks that might work for loglike but might not work for some other model or results statistics.\nA proper way would be to write new functions that either compute the loglike directly, or that convert predicted probabilities to create a multinomial distribution instance.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"statsmodels"
] |
stackoverflow_0074459317_python_statsmodels.txt
|
Q:
Cannot load file containing pickled data - Python .npy I/O
I am trying to save a dataframe and a matrix as .npy files with np.save() and then read them using np.load() but I get the following error:
File "/Users/sofiafarina/opt/anaconda3/lib/python3.7/site-packages/numpy/lib/npyio.py", line 457, in load
raise ValueError("Cannot load file containing pickled data "
ValueError: Cannot load file containing pickled data when allow_pickle=False
Even if I write allow_pickle=True I get an error:
File "/Users/sofiafarina/opt/anaconda3/lib/python3.7/site-packages/numpy/lib/npyio.py", line 463, in load
"Failed to interpret file %s as a pickle" % repr(file))
OSError: Failed to interpret file 'finaldf_p_85_12.npy' as a pickle
So how could I save a df from a python script and then load it in another one? Should I use other functions?
Thank you!
A:
I used the syntax below to load the .npy file and it worked.
np.load("finaldf_p_85_12.npy",allow_pickle=True)
I think you need to add allow_pickle=True parameter.
A:
TLDR;
After hundreds of search and hours of debugging I found out that the issue was with git-lfs, my files did not get pulled using git-lfs.
git lfs install
git lfs pull
I think numpy needs to report this correctly
I had the exact same issue. dtype in my .npz file was uint8, so not an Object, technically allow_pickle should not be required. My numpy version is 1.20.x
Got the following when using allow_pickle=False
ValueError: Cannot load file containing pickled data when allow_pickle=False
And with allow_pickle=True I got
OSError: Failed to interpret file 'finaldf_p_85_12.npy' as a pickle
A:
Python uses a native data serialization module called Pickle. Nested data (like a list of lists) is serialized using pickle and NumPy warns against pickling.
Warning:
Loading files that contain object arrays uses the pickle module, which is not secure against erroneous or maliciously constructed data. Consider passing allow_pickle=False to load data that is known not to contain object arrays for the safer handling of untrusted sources.
You might be saving an array which consists a single dataFrame. This causes pickling. Example:
x = array([[ 0.1, 0.1, 0.1],
[ 0.1, 0.1, 0.1],
[ 0.1, 0.1, 0.1],
[ 0.1, 0.1, 0.1],
[ 0.1, 0.1, 0.1],
[ 0.1, 0.1, 0.1],
[ 0.1, 0.1, 0.1]])
In that case, try saving just the numpy array as np.save(filename, x[0]). This will not use any pickling to save your data and resolves the issue.
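Since the original goal was to move a DataFrame between scripts, it may be simpler to let pandas do the serialization instead of np.save; the file name below is a placeholder:
import pandas as pd

# script 1: save
df.to_pickle('finaldf_p_85_12.pkl')  # or df.to_csv(...) / df.to_parquet(...)

# script 2: load
df = pd.read_pickle('finaldf_p_85_12.pkl')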
A:
The OSError suggests you could be having a python 2/python 3 issue. I had the same problem and errors when I was trying to read a file with python 3 that had been written in python 2. For me, using the np.load command with the following arguments worked:
np.load('file.npy',allow_pickle=True,fix_imports=True,encoding='latin1')
The doc for numpy.load says about the encoding argument, "Only useful when loading Python 2 generated pickled files in Python 3, which includes npy/npz files containing object arrays."
A:
The existing answers are all useful. Just a note that I just got this error from someone else's pickle file when the file itself was corrupted. As above, allow_pickle=False complained about pickle being disabled, and allow_pickle=True complained about it not being a valid pickle file. The fix was just redownloading the file in my case.
A:
I had the same issue. Try np.loadtxt instead.
A:
I was dealing with this problem for a long time. I tried all of the solutions listed here, but none of them worked. I tried different versions of Python, such as 3.7, 2.7 and 3.9, and the result was the same.
Finally I noticed that the file with the extension .npy was corrupted, which is why it gives this error. Here is the line giving the error.
npyFile = np.load('file1.npy')
So whoever comes across the same problem should first check that the .npy file itself is intact.
A:
I uploaded my files to Google Drive and then loaded them from the drive.
That solved it.
from google.colab import drive
drive.mount("/content/drive")
label = np.load("path/labels.npy")
|
Cannot load file containing pickled data - Python .npy I/O
|
I am trying to save a dataframe and a matrix as .npy files with np.save() and then read them using np.load() but I get the following error:
File "/Users/sofiafarina/opt/anaconda3/lib/python3.7/site-packages/numpy/lib/npyio.py", line 457, in load
raise ValueError("Cannot load file containing pickled data "
ValueError: Cannot load file containing pickled data when allow_pickle=False
Even if I write allow_pickle=True I get an error:
File "/Users/sofiafarina/opt/anaconda3/lib/python3.7/site-packages/numpy/lib/npyio.py", line 463, in load
"Failed to interpret file %s as a pickle" % repr(file))
OSError: Failed to interpret file 'finaldf_p_85_12.npy' as a pickle
So how could I save a df from a python script and then load it in another one? Should I use other functions?
Thank you!
|
[
"I used the syntax below to load the .npy file and it worked.\nnp.load(\"finaldf_p_85_12.npy\",allow_pickle=True)\n\nI think you need to add allow_pickle=True parameter.\n",
"TLDR;\nAfter hundreds of search and hours of debugging I found out that the issue was with git-lfs, my files did not get pulled using git-lfs.\ngit lfs install\ngit lfs pull\n\nI think numpy needs to report this correctly\n\nI had the exact same issue. dtype in my .npz file was uint8, so not an Object, technically allow_pickle should not be required. My numpy version is 1.20.x\nGot the following when using allow_pickle=False\nValueError: Cannot load file containing pickled data when allow_pickle=False\nAnd with allow_pickle=True I got\nOSError: Failed to interpret file 'finaldf_p_85_12.npy' as a pickle\n",
"Python uses a native data serialization module called Pickle. Nested data (like a list of lists) is serialized using pickle and NumPy warns against pickling.\n\nWarning:\nLoading files that contain object arrays uses the pickle module, which is not secure against erroneous or maliciously constructed data. Consider passing allow_pickle=False to load data that is known not to contain object arrays for the safer handling of untrusted sources.\n\nYou might be saving an array which consists a single dataFrame. This causes pickling. Example:\nx = array([[ 0.1, 0.1, 0.1],\n [ 0.1, 0.1, 0.1],\n [ 0.1, 0.1, 0.1],\n [ 0.1, 0.1, 0.1],\n [ 0.1, 0.1, 0.1],\n [ 0.1, 0.1, 0.1],\n [ 0.1, 0.1, 0.1]])\n\nIn that case, try saving just the numpy array as np.save(filename, x[0]). This will not use any pickling to save your data and resolves the issue.\n",
"The OSError suggests you could be having a python 2/python 3 issue. I had the same problem and errors when I was trying to read a file with python 3 that had been written in python 2. For me, using the np.load command with the following arguments worked:\nnp.load('file.npy',allow_pickle=True,fix_imports=True,encoding='latin1')\n\nThe doc for numpy.load says about the encoding argument, \"Only useful when loading Python 2 generated pickled files in Python 3, which includes npy/npz files containing object arrays.\"\n",
"The existing answers are all useful. Just a note that I just got this error from someone else's pickle file when the file itself was corrupted. As above, allow_pickle=False complained about pickle being disabled, and allow_pickle=True complained about it not being a valid pickle file. The fix was just redownloading the file in my case.\n",
"I had the same issue. Try np.loadtxt instead.\n",
"I was dealing with the problem long time. I have tried all of the solutions which are listed here however they all didn't work. I have tried different versions of python such as 3.7, 2.7, 3.9 and the result was same.\nFinally I have noticed that the file with the extension .npy is corrupted so it gives out this error. Here is the line giving the error.\nnpyFile = np.load('file1.npy')\n\nSo whoever come accross the same problem first of all it would be better to check the .npy file.\n",
"I uploaded my documents to drive and I uploaded the documents from the drive.\nIt is solved.\nfrom google.colab import drive\ndrive.mount(\"/content/drive\")\nlabel = np.load(\"path/labels.npy\") \n\n"
] |
[
10,
9,
2,
1,
1,
0,
0,
0
] |
[
"Just make sure the file isn't corrupted.\n"
] |
[
-2
] |
[
"io",
"numpy",
"python"
] |
stackoverflow_0060191681_io_numpy_python.txt
|
Q:
Getting different Values when using groupby(column)["id"].nunique and trying to add a column using transform
I'm trying to count the individual values per group in a dataset and add them as a new column to a table. The first one works, the second one produces wrong values.
When I use the following code
unique_id_per_column = source_table.groupby("disease").some_id.nunique()
I'll get
| | disease | some_id |
|---:|:------------------------|--------:|
| 0 | disease1 | 121 |
| 1 | disease2 | 1 |
| 2 | disease3 | 5 |
| 3 | disease4 | 9 |
| 4 | disease5 | 77 |
These numbers seem to check out, but I want to add them to another table where I already have a column with all values per group.
So I used the following code
table["unique_ids"] = source_table.groupby("disease").uniqe_id.transform("nunique")
and I get the following table, with wrong numbers for every row except the first.
| | disease |some_id | unique_ids |
|---:|:------------------------|-------:|------------------:|
| 0 | disease1 | 151 | 121 |
| 1 | disease2 | 1 | 121 |
| 2 | disease3 | 5 | 121 |
| 3 | disease4 | 9 | 121 |
| 4 | disease5 | 91 | 121 |
I expected to get the same results as in the first table. Does anyone know why I get the number from the first row repeated instead of the correct numbers?
A:
Solution with Series.map if need create column in another DataFrame:
s = source_table.groupby("disease").some_id.nunique()
table["unique_ids"] = table["disease"].map(s)
|
Getting different Values when using groupby(column)["id"].nunique and trying to add a column using transform
|
I'm trying to count the individual values per group in a dataset and add them as a new column to a table. The first one works, the second one produces wrong values.
When I use the following code
unique_id_per_column = source_table.groupby("disease").some_id.nunique()
I'll get
| | disease | some_id |
|---:|:------------------------|--------:|
| 0 | disease1 | 121 |
| 1 | disease2 | 1 |
| 2 | disease3 | 5 |
| 3 | disease4 | 9 |
| 4 | disease5 | 77 |
These numbers seem to check out, but I want to add them to another table where I already have a column with all values per group.
So I used the following code
table["unique_ids"] = source_table.groupby("disease").uniqe_id.transform("nunique")
and I get the following table, with wrong numbers for every row except the first.
| | disease |some_id | unique_ids |
|---:|:------------------------|-------:|------------------:|
| 0 | disease1 | 151 | 121 |
| 1 | disease2 | 1 | 121 |
| 2 | disease3 | 5 | 121 |
| 3 | disease4 | 9 | 121 |
| 4 | disease5 | 91 | 121 |
I expected to get the same results as in the first table. Does anyone know why I get the number from the first row repeated instead of the correct numbers?
|
[
"Solution with Series.map if need create column in another DataFrame:\ns = source_table.groupby(\"disease\").some_id.nunique()\n\ntable[\"unique_ids\"] = table[\"disease\"].map(s) \n\n"
] |
[
1
] |
[] |
[] |
[
"group_by",
"pandas",
"python"
] |
stackoverflow_0074462377_group_by_pandas_python.txt
|
Q:
Create columns from strings that are in a list
I am trying to create a set of columns from a list taking a string from another column.
I have found a temporary solution in this post but it only creates one column if, for example, I have in String1 "I have a dog and a cat".
In [7]: df["animal"] = df["String1"].map(lambda s: next((animal for animal in search_list if animal in s), "other"))
...:
In [8]: df
Out[8]:
weight String1 animal
0 70 Labrador is a dog dog
1 10 Abyssinian is a cat cat
2 65 German Shepard is a dog dog
3 1 pigeon is a bird other
How could I create two columns, like ['animal_1'] and ['animal_2'] to have both "dog" (in ['animal_1']) and "cat" in ['animal_2']?
Desired output would be like below:
weight String1 animal_1 animal_2
0 70 Labrador is a dog dog
1 10 Abyssinian is a cat cat
2 65 German Shepard is a dog dog
3 1 pigeon is a bird other
4 30 I have a dog and a cat dog cat
A:
You can use:
animals = ['dog', 'cat']
regex = '|'.join(animals)
out = (df.join(
df['String1'].str.extractall(fr'\b({regex})\b')[0].unstack()
.rename(columns=lambda x: f'animal_{x+1}')
)
.fillna({'animal_1': 'other'})
)
Output:
weight String1 animal_1 animal_2
0 70 Labrador is a dog dog NaN
1 10 Abyssinian is a cat cat NaN
2 65 German Shepard is a dog dog NaN
3 1 pigeon is a bird other NaN
4 30 I have a dog and a cat dog cat
A:
It's a good idea to compile the regex at the beginning and use the compiled regex in the loop.
import re
import pandas as pd
ANIMALS = {"dog", "cat"}
PATTERN = re.compile("|".join(rf"\b{x}\b" for x in ANIMALS))
data = {"String1": ["Labrador is a dog", "Abyssinian is a cat", "German Shepard is a dog", "pigeon is a bird", "I have a dog and a cat"]}
df = pd.DataFrame(data)
for ix, item in df["String1"].items():
for i, animal in enumerate(PATTERN.findall(item)):
df.loc[ix, f"animal_{i+1}"] = animal
df.fillna({"animal_1": "other"}, inplace=True)
|
Create columns from strings that are in a list
|
I am trying to create a set of columns from a list taking a string from another column.
I have found a temporary solution in this post but it only creates one column if, for example, I have in String1 "I have a dog and a cat".
In [7]: df["animal"] = df["String1"].map(lambda s: next((animal for animal in search_list if animal in s), "other"))
...:
In [8]: df
Out[8]:
weight String1 animal
0 70 Labrador is a dog dog
1 10 Abyssinian is a cat cat
2 65 German Shepard is a dog dog
3 1 pigeon is a bird other
How could I create two columns, like ['animal_1'] and ['animal_2'] to have both "dog" (in ['animal_1']) and "cat" in ['animal_2']?
Desired output would be like below:
weight String1 animal_1 animal_2
0 70 Labrador is a dog dog
1 10 Abyssinian is a cat cat
2 65 German Shepard is a dog dog
3 1 pigeon is a bird other
4 30 I have a dog and a cat dog cat
|
[
"You can use:\nanimals = ['dog', 'cat']\nregex = '|'.join(animals)\n\nout = (df.join(\n df['String1'].str.extractall(fr'\\b({regex})\\b')[0].unstack()\n .rename(columns=lambda x: f'animal_{x+1}')\n )\n .fillna({'animal_1': 'other'})\n )\n\nOutput:\n weight String1 animal_1 animal_2\n0 70 Labrador is a dog dog NaN\n1 10 Abyssinian is a cat cat NaN\n2 65 German Shepard is a dog dog NaN\n3 1 pigeon is a bird other NaN\n4 30 I have a dog and a cat dog cat\n\n",
"It's a good idea to compile the regex at the beginning and use the compiled regex in the loop.\nimport re\nimport pandas as pd\n\nANIMALS = {\"dog\", \"cat\"}\nPATTERN = re.compile(\"|\".join(rf\"\\b{x}\\b\" for x in ANIMALS))\n\ndata = {\"String1\": [\"Labrador is a dog\", \"Abyssinian is a cat\", \"German Shepard is a dog\", \"pigeon is a bird\", \"I have a dog and a cat\"]}\ndf = pd.DataFrame(data)\n\nfor ix, item in df[\"String1\"].items():\n for i, animal in enumerate(pattern.findall(item)):\n df.loc[ix, f\"animal_{i+1}\"] = animal\ndf.fillna({\"animal_1\": \"other\"}, inplace=True)\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"list",
"pandas",
"python",
"text_extraction"
] |
stackoverflow_0074459380_list_pandas_python_text_extraction.txt
|
Q:
How can I generate my own Fernet key in Python?
I've tried something like this but it doesn't seem to be working:
from cryptography.fernet import Fernet
from base64 import urlsafe_b64encode as b64e
bytes_gen = b64e(PASSWORD.encode())
if len(bytes_gen) < 32:
bytes_gen += b'=' * (32 - len(bytes_gen))
elif len(bytes_gen) > 32:
bytes_gen = bytes_gen[:32]
print(bytes_gen, len(bytes_gen))
f = Fernet(bytes_gen)
Terminal:
ValueError: Fernet key must be 32 url-safe base64-encoded bytes.
A:
You are reading it wrong. The key is a cryptographic key; such keys are usually 16, 24 or 32 bytes in size. So the phrase "Fernet key must be 32 url-safe base64-encoded bytes." doesn't mean that the encoding needs to be 32 characters in size; it means that there are 32 bytes that need to be encoded.
You seem to want to use a password instead of a key though. In that case you need to read the appropriate section in the manual that explains how PBKDF2 can be used to derive a key from a password.
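A sketch of that password-based derivation, following the recipe in the cryptography documentation; the iteration count is tunable, and the salt must be stored and reused to re-derive the same key later:
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

password = b'my secret password'
salt = os.urandom(16)  # keep this next to the ciphertext

kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                 salt=salt, iterations=480000)
key = base64.urlsafe_b64encode(kdf.derive(password))  # 32 bytes -> valid Fernet key

f = Fernet(key)
token = f.encrypt(b'hello')
print(f.decrypt(token))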
|
How can I generate my own Fernet key in Python?
|
I've tried something like this but it doesn't seem to be working:
from cryptography.fernet import Fernet
from base64 import urlsafe_b64encode as b64e
bytes_gen = b64e(PASSWORD.encode())
if len(bytes_gen) < 32:
bytes_gen += b'=' * (32 - len(bytes_gen))
elif len(bytes_gen) > 32:
bytes_gen = bytes_gen[:32]
print(bytes_gen, len(bytes_gen))
f = Fernet(bytes_gen)
Terminal:
ValueError: Fernet key must be 32 url-safe base64-encoded bytes.
|
[
"You are reading it wrong. The key is a cryptographic keys, which are usually 16, 24 or 32 bytes in size. So the phrase \"Fernet key must be 32 url-safe base64-encoded bytes.\" doesn't mean that the encoding needs to be 32 characters in size, it means that there are 32 bytes that need to be encoded.\nYou seem to want to use a password instead of a key though. In that case you need to read the appropriate section in the manual that explains how PBKDF2 can be used to derive a key from a password.\n"
] |
[
1
] |
[] |
[] |
[
"byte",
"cryptography",
"fernet",
"python",
"valueerror"
] |
stackoverflow_0074461900_byte_cryptography_fernet_python_valueerror.txt
|
Q:
Changing level of some columns in multi index
I have a data frame that is looking like this (DATA is the year and month of the order) :
CUSTOMER_ID
NAME
DATA
COFFEE_SOLD(KG)
WATER_SOLD(L)
10000
ALEX
2022 - 01
3
4
10000
ALEX
2022 - 01
5
6
10000
ALEX
2022 - 02
7
8
10001
JOE
2022 - 02
1
1
10001
JOE
2022 - 03
1
0
I pivoted the df with :
df_rap = df_rap.pivot_table(index=["CUSTOMER_ID",'NAME',],columns=["DATA"], values=['COFFEE_SOLD(KG)','WATER_SOLD(L)'], aggfunc='sum').reset_index()
The result :
CUSTOMER_ID
NAME
COFFEE_SOLD(KG)
COFFEE_SOLD(KG)
COFFEE_SOLD(KG)
WATER_SOLD(L)
WATER_SOLD(L)
WATER_SOLD(L)
DATA
2022 - 01
2022 - 02
2022 - 03
2022 - 01
2022 - 02
2022 - 03
0
10000
ALEX
8
7
0
10
8
0
1
10001
JOE
0
1
1
0
1
0
The format is ok but I want to export it to excel. For that I need the data frame to look like this :
COFFEE_SOLD(KG)
COFFEE_SOLD(KG)
COFFEE_SOLD(KG)
WATER_SOLD(L)
WATER_SOLD(L)
WATER_SOLD(L)
DATA
CUSTOMER_ID
NAME
2022 - 01
2022 - 02
2022 - 03
2022 - 01
2022 - 02
2022 - 03
0
10000
ALEX
8
7
0
10
8
0
1
10001
JOE
0
1
1
0
1
0
In other words, I would like to lower the level of the first 2 columns (in the header), to save it to Excel properly.
I tried :
df.reset_index()
And it doesn't work.
EDIT :
With :
display( df_copy.columns)
I saw the format of the columns :
MultiIndex([('CUSTOMER_ID', ''),
('NAME', ''),
('COFFEE_SOLD(KG)', '2022 - 01'),
('COFFEE_SOLD(KG)', '2022 - 02'),
('COFFEE_SOLD(KG)', '2022 - 03'),
('WATER_SOLD(L)', '2022 - 01'),
('WATER_SOLD(L)', '2022 - 02'),
('WATER_SOLD(L)', '2022 - 03'),],
names=[None, 'DATA'])
I expected to be :
MultiIndex([('', 'CUSTOMER_ID'),
('', 'NAME'),
('COFFEE_SOLD(KG)', '2022 - 01'),
('COFFEE_SOLD(KG)', '2022 - 02'),
('COFFEE_SOLD(KG)', '2022 - 03'),
('WATER_SOLD(L)', '2022 - 01'),
('WATER_SOLD(L)', '2022 - 02'),
('WATER_SOLD(L)', '2022 - 03'),],
names=[None, 'DATA'])
Thank you !
A:
A possible approach is to overwrite the column values:
cols = [('', 'CUSTOMER_ID'), ('', 'NAME'),]
for t in df_rap.columns[2:]:
cols.append(t)
df_rap.columns = pd.MultiIndex.from_tuples(cols)
This leads to a data frame without the word DATA in it, which somehow makes sense, as DATA has lost some of its meaning - it is now just the name of the index column. If you nevertheless need to keep DATA, you could create a new column with the corresponding values, rename all columns and move the DATA column to the front (and save the data frame without the new index column):
# create a new DATA column
df_rap['DATA'] = df_rap.index
# set new values for the column headers (this time including the new DATA column)
cols = [('', 'CUSTOMER_ID'), ('', 'NAME'),]
for t in df_rap.columns[2:-1]:
cols.append(t)
cols.append(('', 'DATA'))
df_rap.columns = pd.MultiIndex.from_tuples(cols)
# reorder columns ('DATA' to the front)
cols = cols[-1:] + cols[:-1]
df_rap = df_rap[cols]
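If the only reason for flattening the header is the Excel export, a shorter route is to skip reset_index() in the pivot: CUSTOMER_ID and NAME then stay in the row index, and to_excel writes them as the leading columns with the DATA level across the top. A sketch; the output file name is a placeholder:
df_rap = df.pivot_table(index=['CUSTOMER_ID', 'NAME'],
                        columns='DATA',
                        values=['COFFEE_SOLD(KG)', 'WATER_SOLD(L)'],
                        aggfunc='sum')
df_rap.to_excel('report.xlsx')  # index levels become the first two columns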
|
Changing level of some columns in multi index
|
I have a data frame that is looking like this (DATA is the year and month of the order) :
CUSTOMER_ID
NAME
DATA
COFFEE_SOLD(KG)
WATER_SOLD(L)
10000
ALEX
2022 - 01
3
4
10000
ALEX
2022 - 01
5
6
10000
ALEX
2022 - 02
7
8
10001
JOE
2022 - 02
1
1
10001
JOE
2022 - 03
1
0
I pivoted the df with :
df_rap = df_rap.pivot_table(index=["CUSTOMER_ID",'NAME',],columns=["DATA"], values=['COFFEE_SOLD(KG)','WATER_SOLD(L)'], aggfunc='sum').reset_index()
The result :
CUSTOMER_ID
NAME
COFFEE_SOLD(KG)
COFFEE_SOLD(KG)
COFFEE_SOLD(KG)
WATER_SOLD(L)
WATER_SOLD(L)
WATER_SOLD(L)
DATA
2022 - 01
2022 - 02
2022 - 03
2022 - 01
2022 - 02
2022 - 03
0
10000
ALEX
8
7
0
10
8
0
1
10001
JOE
0
1
1
0
1
0
The format is ok but I want to export it to excel. For that I need the data frame to look like this :
COFFEE_SOLD(KG)
COFFEE_SOLD(KG)
COFFEE_SOLD(KG)
WATER_SOLD(L)
WATER_SOLD(L)
WATER_SOLD(L)
DATA
CUSTOMER_ID
NAME
2022 - 01
2022 - 02
2022 - 03
2022 - 01
2022 - 02
2022 - 03
0
10000
ALEX
8
7
0
10
8
0
1
10001
JOE
0
1
1
0
1
0
In other words, I would like to lower the level of the first 2 columns (in the header), to save it to Excel properly.
I tried :
df.reset_index()
And it doesn't work.
EDIT :
With :
display( df_copy.columns)
I saw the format of the columns :
MultiIndex([('CUSTOMER_ID', ''),
('NAME', ''),
('COFFEE_SOLD(KG)', '2022 - 01'),
('COFFEE_SOLD(KG)', '2022 - 02'),
('COFFEE_SOLD(KG)', '2022 - 03'),
('WATER_SOLD(L)', '2022 - 01'),
('WATER_SOLD(L)', '2022 - 02'),
('WATER_SOLD(L)', '2022 - 03'),],
names=[None, 'DATA'])
I expected to be :
MultiIndex([('', 'CUSTOMER_ID'),
('', 'NAME'),
('COFFEE_SOLD(KG)', '2022 - 01'),
('COFFEE_SOLD(KG)', '2022 - 02'),
('COFFEE_SOLD(KG)', '2022 - 03'),
('WATER_SOLD(L)', '2022 - 01'),
('WATER_SOLD(L)', '2022 - 02'),
('WATER_SOLD(L)', '2022 - 03'),],
names=[None, 'DATA'])
Thank you !
|
[
"A possible approach is to overwrite the column values:\ncols = [('', 'CUSTOMER_ID'), ('', 'NAME'),]\nfor t in df_rap.columns[2:]:\n cols.append(t)\n \ndf_rap.columns = pd.MultiIndex.from_tuples(cols)\n\nThis leads to a data frame without the word DATA in it. Which somehow makes sense, as DATA has lost some of it's meaning - now being just the name of the index column. If you nevertheless need to keep DATA, you could create a new column with the corresponding values, rename all columns and move the DATA column to the front (and save the data frame without the new index column) :\n# create a new DATA column\ndf_rap['DATA'] = df_rap.index\n# set new values for the column headers (this time including the new DATA column)\ncols = [('', 'CUSTOMER_ID'), ('', 'NAME'),]\nfor t in df_rap.columns[2:-1]:\n cols.append(t)\ncols.append(('', 'DATA'))\ndf_rap.columns = pd.MultiIndex.from_tuples(cols)\n# reorder columns ('DATA' to the front)\ncols = cols[-1:] + cols[:-1]\ndf_rap = df_rap[cols]\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"python"
] |
stackoverflow_0074402448_dataframe_python.txt
|
Q:
Match strings of different length in two lists of different length
Say I have two flat lists of strings:
a = ["today", "I", "want", "to", "eat", "some", "cake."]
b = ["to", "da", "y", "I", "wa", "nt", "to", "ea", "t", "some", "ca", "ke", "."]
Where in list b some strings (not all) of list a are split into multiple substrings. Note that the substrings in b that correspond to the strings in a are adjacent and in the same order, as in the example above.
I want to obtain a list c where the substrings in b that correspond to a single string in a are put together in a sublist:
c = [["to", "da", "y"], ["I"], ["wa", "nt"], ["to"], ["ea", "t"], ["some"], ["ca", "ke", "."]]
Unfortunately I don't have any code to share since I don't know how to approach this problem.
Thanks!
A:
a = ["today", "I", "want", "to", "eat", "some", "cake."]
b = ["to", "da", "y", "I", "wa", "nt", "to", "ea", "t", "some", "ca", "ke", "."]
c = []
for element in a:
temp_list = []
while "".join(temp_list) != element:
temp_list.append(b.pop(0))
c.append(temp_list)
Value of c:
[['to', 'da', 'y'],
['I'],
['wa', 'nt'],
['to'],
['ea', 't'],
['some'],
['ca', 'ke', '.']]
I don't know if there is any other clever way to do it. Just use .pop(0) to save you a bit of code
A:
@Ben.S.'s answer works but costs O(m x n x k) in time complexity, where m and n are the lengths of the two lists and k is the average length of the words.
A more efficient approach that solves the problem in a time complexity of O(n) is to keep appending word fragments from b to the last sub-list of a new list, but keep track of the total length of fragments appended to the last sub-list and append a new sub-list when the total length equals the length of the corresponding word in a:
c = []
for i in b:
if not c or length == target:
length = 0
target = len(a[len(c)])
c.append([])
length += len(i)
c[-1].append(i)
Demo: https://replit.com/@blhsing/ExoticConsiderateSearchservice
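For completeness, a quick check of the O(n) approach against the example data from the question:
a = ["today", "I", "want", "to", "eat", "some", "cake."]
b = ["to", "da", "y", "I", "wa", "nt", "to", "ea", "t", "some", "ca", "ke", "."]

c = []
for i in b:
    if not c or length == target:
        length = 0
        target = len(a[len(c)])  # length of the next word to reassemble
        c.append([])
    length += len(i)
    c[-1].append(i)

print(c)  # [['to', 'da', 'y'], ['I'], ['wa', 'nt'], ['to'], ['ea', 't'], ['some'], ['ca', 'ke', '.']]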
|
Match strings of different length in two lists of different length
|
Say I have two flat lists of strings:
a = ["today", "I", "want", "to", "eat", "some", "cake."]
b = ["to", "da", "y", "I", "wa", "nt", "to", "ea", "t", "some", "ca", "ke", "."]
Where in list b some strings (not all) of list a are split into multiple substrings. Note that the substrings in b that correspond to the strings in a are adjacent and in the same order, as in the example above.
I want to obtain a list c where the substrings in b that correspond to a single string in a are put together in a sublist:
c = [["to", "da", "y"], ["I"], ["wa", "nt"], ["to"], ["ea", "t"], ["some"], ["ca", "ke", "."]]
Unfortunately I don't have any code to share since I don't know how to approach this problem.
Thanks!
|
[
"a = [\"today\", \"I\", \"want\", \"to\", \"eat\", \"some\", \"cake.\"]\nb = [\"to\", \"da\", \"y\", \"I\", \"wa\", \"nt\", \"to\", \"ea\", \"t\", \"some\", \"ca\", \"ke\", \".\"]\nc = []\n\nfor element in a:\n temp_list = []\n while \"\".join(temp_list) != element:\n temp_list.append(b.pop(0))\n c.append(temp_list)\n\nValue of c:\n[['to', 'da', 'y'],\n ['I'],\n ['wa', 'nt'],\n ['to'],\n ['ea', 't'],\n ['some'],\n ['ca', 'ke', '.']]\n\nI don't know if there is any other clever way to do it. Just use .pop(0) to save u a bit of code\n",
"@Ben.S.'s answer works but costs O(m x n x k) in time complexity, where m and n are the lengths of the two lists and k is the average length of the words.\nA more efficient approach that solves the problem in a time complexity of O(n) is to keep appending word fragments from b to the last sub-list of a new list, but keep track of the total length of fragments appended to the last sub-list and append a new sub-list when the total length equals the length of the corresponding word in a:\nc = []\nfor i in b:\n if not c or length == target:\n length = 0\n target = len(a[len(c)])\n c.append([])\n length += len(i)\n c[-1].append(i)\n\nDemo: https://replit.com/@blhsing/ExoticConsiderateSearchservice\n"
] |
[
2,
0
] |
[] |
[] |
[
"list",
"python",
"string"
] |
stackoverflow_0074458282_list_python_string.txt
|
Q:
Fill pandas dataframe with dictionary elements
I have a dataframe df structured as well:
Name Surname Nationality
Joe Tippy Italian
Adam Wesker American
I would like to create a new record based on a dictionary whose keys corresponds to the column names:
new_record = {'Name': 'Jimmy', 'Surname': 'Turner', 'Nationality': 'Australian'}
How can I do that? I tried with a simple:
df = df.append(new_record, ignore_index=True)
but if I have a missing value in my record, the dataframe doesn't get filled with a space; instead it leaves the last column empty.
A:
IIUC replace missing values in next step:
new_record = {'Surname': 'Turner', 'Nationality': 'Australian'}
df = pd.concat([df, pd.DataFrame([new_record])], ignore_index=True).fillna('')
print (df)
Name Surname Nationality
0 Joe Tippy Italian
1 Adam Wesker American
2 Turner Australian
Or use DataFrame.reindex:
df = pd.concat([df, pd.DataFrame([new_record]).reindex(df.columns, fill_value='', axis=1)], ignore_index=True)
A:
A simple way if you have a range index:
df.loc[len(df)] = new_record
Updated dataframe:
Name Surname Nationality
0 Joe Tippy Italian
1 Adam Wesker American
2 Jimmy Turner Australian
If you have a missing key (for example 'Surname'):
Name Surname Nationality
0 Joe Tippy Italian
1 Adam Wesker American
2 Jimmy NaN Australian
If you want empty strings:
df.loc[len(df)] = pd.Series(new_record).reindex(df.columns, fill_value='')
Output:
Name Surname Nationality
0 Joe Tippy Italian
1 Adam Wesker American
2 Jimmy Australian
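As a side note on the append in the question: DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so pd.concat or .loc (as above) is the way forward. A tiny sketch of the .loc variant with a missing key, assuming a default RangeIndex:
partial_record = {'Name': 'Jimmy', 'Nationality': 'Australian'}  # no 'Surname'
df.loc[len(df)] = pd.Series(partial_record).reindex(df.columns, fill_value='')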
|
Fill pandas dataframe with dictionary elements
|
I have a dataframe df structured as follows:
Name Surname Nationality
Joe Tippy Italian
Adam Wesker American
I would like to create a new record based on a dictionary whose keys corresponds to the column names:
new_record = {'Name': 'Jimmy', 'Surname': 'Turner', 'Nationality': 'Australian'}
How can I do that? I tried with a simple:
df = df.append(new_record, ignore_index=True)
but if I have a missing value in my record, the dataframe doesn't get filled with a space; instead it leaves the last column empty.
|
[
"IIUC replace missing values in next step:\nnew_record = {'Surname': 'Turner', 'Nationality': 'Australian'}\ndf = pd.concat([df, pd.DataFrame([new_record])], ignore_index=True).fillna('')\n\nprint (df)\n Name Surname Nationality\n0 Joe Tippy Italian\n1 Adam Wesker American\n2 Turner Australian\n\nOr use DataFrame.reindex:\ndf = pd.concat([df, pd.DataFrame([new_record])].reindex(df.columns, fill_value='', axis=1), ignore_index=True)\n\n",
"A simple way if you have a range index:\ndf.loc[len(df)] = new_record\n\nUpdated dataframe:\n Name Surname Nationality\n0 Joe Tippy Italian\n1 Adam Wesker American\n2 Jimmy Turner Australian\n\nIf you have a missing key (for example 'Surname'):\n Name Surname Nationality\n0 Joe Tippy Italian\n1 Adam Wesker American\n2 Jimmy NaN Australian\n\nIf you want empty strings:\ndf.loc[len(df)] = pd.Series(new_record).reindex(df.columns, fill_value='')\n\nOutput:\n Name Surname Nationality\n0 Joe Tippy Italian\n1 Adam Wesker American\n2 Jimmy Australian\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"dataframe",
"dictionary",
"pandas",
"python"
] |
stackoverflow_0074462398_dataframe_dictionary_pandas_python.txt
|
Q:
TypeError: unhashable type: 'numpy.ndarray' when trying to append to a dictionary
I'm trying to append values to my dictionary, but I can't solve this error.
This is my dictionary:
groups = {'group1': array([450, 449.]), 'group2': array([490, 489.]), 'group3': array([568, 567.])}
then I have a txt file (loaded using np.loadtxt) with a lot of data; I have to iterate over this file, and if a certain condition is met I should add that line to the correct key of my dictionary.
I used the if statement and I called the data that met the condition "parent".
parent = [[449. 448.]]
[[489. 488.]]
[[567. 566.]]
I tried this:
for i, x in enumerate(parent):
groups.setdefault(x, []).append(i)
expected output:
groups = {'group1': array([450, 449.], [449, 448]), 'group2': array([490, 489.], [489, 488]), 'group3': array([568, 567.], [567, 566])}
but I get this error:
TypeError: unhashable type: 'numpy.ndarray'
A:
The reason why you got this error is that you tried to use data of unhashable type numpy.ndarray as the key of a dictionary. The links below are useful for your question.
Mapping Types - dict
A mapping object maps hashable values to arbitrary objects.
dict.setdefault(key[, default]) - It invokes a hash operation on key.
exception TypeError
Raised when an operation or function is applied to an object of inappropriate type.
enumerate(iterable, start=0)
In your for loop, i is of type int which is hashable while x is of type numpy.ndarray which is unhashable. Therefore not using x as the key argument of groups.setdefault solves your TypeError problem.
P.S., there is still a way to go to get the expected groups. I don't show you the code, because it's not clear what the expected values of the dictionary are, a 2D array as value or a list of 1D arrays.
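If an array value really has to act as a dictionary key, one generic fix for this TypeError is to convert it to a tuple first, since tuples are hashable. A sketch with illustrative data; whether a tuple key is the right model for this grouping problem depends on the matching logic:
import numpy as np

groups = {}
parent = [np.array([449., 448.]), np.array([489., 488.])]

for i, x in enumerate(parent):
    key = tuple(np.ravel(x))  # flatten, then convert: tuples are hashable
    groups.setdefault(key, []).append(i)

print(groups)  # {(449.0, 448.0): [0], (489.0, 488.0): [1]}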
A:
As ILS noted, the problem description is unclear:
are the output values a list or a 2D array?
is parent a list of lists or some sort of array?
is parent's size the same as groups's size?
Assuming same size lists, is this helping? -
groups = {'group1': [[450., 449.]], 'group2': [[490., 489.]], 'group3': [[568., 567.]]}
parent = [[449., 448.],
[489., 488.],
[567., 566.]]
for i, x in enumerate(parent):
groups[f'group{i+1}'].append(x)
groups
Returns:
{'group1': [[450.0, 449.0], [449.0, 448.0]],
'group2': [[490.0, 489.0], [489.0, 488.0]],
'group3': [[568.0, 567.0], [567.0, 566.0]]}
How about now? -
import numpy as np
groups = {'group1': np.array([[450., 449.], [0,0]]), 'group2': np.array([[490., 489.], [0,0]]), 'group3': np.array([[568., 567.], [0,0]])}
parent = [[449., 448.],
[489., 488.],
[567., 566.]]
for i, x in enumerate(parent):
groups[f'group{i+1}'][1] = x
groups
Returns:
{'group1': array([[450., 449.],
[449., 448.]]),
'group2': array([[490., 489.],
[489., 488.]]),
'group3': array([[568., 567.],
[567., 566.]])}
Note that I initialized groups to a 2D array to avoid resizing.
Or, as an exercise, if you prefer using np.append:
groups = {'group1': np.array([[450., 449.]]), 'group2': np.array([[490., 489.]]), 'group3': np.array([[568., 567.]])}
parent = [[449., 448.],
[489., 488.],
[567., 566.]]
for i, x in enumerate(parent):
groups[f'group{i+1}'] = np.append(groups[f'group{i+1}'], np.array(x)[np.newaxis, ...], axis=0)
groups
Returns again:
{'group1': array([[450., 449.],
[449., 448.]]),
'group2': array([[490., 489.],
[489., 488.]]),
'group3': array([[568., 567.],
[567., 566.]])}
An alternative with a different parent format:
groups = {'group1': np.array([[450., 449.]]), 'group2': np.array([[490., 489.]]), 'group3': np.array([[568., 567.]])}
parent = [[[449., 448.]],
[[489., 488.]],
[[567., 566.]]]
for i, x in enumerate(parent):
groups[f'group{i+1}'] = np.append(groups[f'group{i+1}'], x, axis=0)
groups
Please confirm what your data look like... I'm unfortunately just guessing.
Another approach, assuming parent is an array of size (1, 3, 2):
groups = {'group1': np.array([[450., 449.]]), 'group2': np.array([[490., 489.]]), 'group3': np.array([[568., 567.]])}
parent = np.array([[[449., 448.],
[489., 488.],
[567., 566.]]])
parent.shape
parent[0,1]
for i, _ in enumerate(groups):
groups[f'group{i+1}'] = np.append(groups[f'group{i+1}'], np.array(parent[0, i])[np.newaxis, ...], axis=0)
groups
Returns:
{'group1': array([[450., 449.],
[449., 448.]]),
'group2': array([[490., 489.],
[489., 488.]]),
'group3': array([[568., 567.],
[567., 566.]])}
|
TypeError: unhashable type: 'numpy.ndarray' when trying to append to a dictionary
|
I'm trying to append values to my dictionary, but I can't solve this error.
This is my dictionary:
groups = {'group1': array([450, 449.]), 'group2': array([490, 489.]), 'group3': array([568, 567.])}
then I have a txt file (loaded using np.loadtxt) with a lot of data; I have to iterate over this file, and if a certain condition is met I should add that line to the correct key of my dictionary.
I used the if statement and I called the data that met the condition "parent".
parent = [[449. 448.]]
[[489. 488.]]
[[567. 566.]]
I tried this:
for i, x in enumerate(parent):
groups.setdefault(x, []).append(i)
expected output:
groups = {'group1': array([450, 449.], [449, 448]), 'group2': array([490, 489.], [489, 488]), 'group3': array([568, 567.], [567, 566])}
but I get this error:
TypeError: unhashable type: 'numpy.ndarray'
|
[
"The reason why you got this error is that you tried to use data of unhashable type numpy.ndarray as the key of a dictionary. The links below are useful for your question.\n\nMapping Types - dict\n\n\nA mapping object maps hashable values to arbitrary objects.\n\n\ndict.setdefault(key[, default]) - It invokes a hash operation on key.\nexception TypeError\n\n\nRaised when an operation or function is applied to an object of inappropriate type.\n\n\nenumerate(iterable, start=0)\n\n\nIn your for loop, i is of type int which is hashable while x is of type numpy.ndarray which is unhashable. Therefore not using x as the key argument of groups.setdefault solves your TypeError problem.\nP.S., there is still a way to go to get the expected groups. I don't show you the code, because it's not clear what the expected values of the dictionary are, a 2D array as value or a list of 1D arrays.\n",
"As ILS noted, the problem description is unclear:\n\nare the output values a list or a 2D array?\nis parent a list of lists or some sort of array?\nis parent's size the same as groups's size?\n\nAssuming same size lists, is this helping? -\ngroups = {'group1': [[450., 449.]], 'group2': [[490., 489.]], 'group3': [[568., 567.]]}\nparent = [[449., 448.],\n[489., 488.],\n[567., 566.]]\nfor i, x in enumerate(parent):\n groups[f'group{i+1}'].append(x)\ngroups\n\nReturns:\n{'group1': [[450.0, 449.0], [449.0, 448.0]],\n 'group2': [[490.0, 489.0], [489.0, 488.0]],\n 'group3': [[568.0, 567.0], [567.0, 566.0]]}\n\n\nHow about now? -\nimport numpy as np\n\ngroups = {'group1': np.array([[450., 449.], [0,0]]), 'group2': np.array([[490., 489.], [0,0]]), 'group3': np.array([[568., 567.], [0,0]])}\nparent = [[449., 448.],\n[489., 488.],\n[567., 566.]]\nfor i, x in enumerate(parent):\n groups[f'group{i+1}'][1] = x\ngroups\n\nReturns:\n{'group1': array([[450., 449.],\n [449., 448.]]),\n 'group2': array([[490., 489.],\n [489., 488.]]),\n 'group3': array([[568., 567.],\n [567., 566.]])}\n\nNote that I initialized groups to a 2D array to avoid resizing.\n\nOr, as an exercise, if you prefer using np.append:\ngroups = {'group1': np.array([[450., 449.]]), 'group2': np.array([[490., 489.]]), 'group3': np.array([[568., 567.]])}\nparent = [[449., 448.],\n[489., 488.],\n[567., 566.]]\nfor i, x in enumerate(parent):\n groups[f'group{i+1}'] = np.append(groups[f'group{i+1}'], np.array(x)[np.newaxis, ...], axis=0)\ngroups\n\nReturns again:\n{'group1': array([[450., 449.],\n [449., 448.]]),\n 'group2': array([[490., 489.],\n [489., 488.]]),\n 'group3': array([[568., 567.],\n [567., 566.]])}\n\n\nAn alternative with a different parent format:\ngroups = {'group1': np.array([[450., 449.]]), 'group2': np.array([[490., 489.]]), 'group3': np.array([[568., 567.]])}\nparent = [[[449., 448.]],\n[[489., 488.]],\n[[567., 566.]]]\nfor i, x in enumerate(parent):\n groups[f'group{i+1}'] = np.append(groups[f'group{i+1}'], x, axis=0)\ngroups\n\nPlease confirm what your data look like... I'm unfortunately just guessing.\n\nAnother approach, assuming parent is an array of size (1, 3, 2):\ngroups = {'group1': np.array([[450., 449.]]), 'group2': np.array([[490., 489.]]), 'group3': np.array([[568., 567.]])}\nparent = np.array([[[449., 448.],\n[489., 488.],\n[567., 566.]]])\nparent.shape\nparent[0,1]\nfor i, _ in enumerate(groups):\n groups[f'group{i+1}'] = np.append(groups[f'group{i+1}'], np.array(parent[0, i])[np.newaxis, ...], axis=0)\ngroups\n\nReturns:\n{'group1': array([[450., 449.],\n [449., 448.]]),\n 'group2': array([[490., 489.],\n [489., 488.]]),\n 'group3': array([[568., 567.],\n [567., 566.]])}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"append",
"dictionary",
"numpy_ndarray",
"python"
] |
stackoverflow_0074447402_append_dictionary_numpy_ndarray_python.txt
|