Q:
Fetch the latest tweet from Twitter with Tweepy
I want to fetch the latest tweet from a bunch of users on Twitter in real time, whenever it matches a keyword.
This code fetches the latest tweet if the 'Twitter' keyword is matched, stores it in the "store" variable every 5 seconds, and goes on forever.
Is there a way to make it only fetch the tweet if it isn't already present in the store variable? If it's already there, it should keep watching for the next tweet without fetching the duplicate.
import tweepy
import time
api = 'APIKEY'
apisq = 'APISQ'
acc_tok = 'TOK'
acc_sq = 'TOkSQ'
auth = tweepy.OAuthHandler(api, apisq)
auth.set_access_token(acc_tok, acc_sq)
api = tweepy.API(auth)
store = []
username = 'somename'
while True:
    first = []
    get_tweets = api.user_timeline(screen_name=username, count=1)
    test = get_tweets[0]
    first.append(test.text)
    time.sleep(5)
    if any('Twitter' in word for word in first):
        store.append(first)
        print(store)
    else:
        continue
I've tried some conditional statements but haven't been very successful yet.
A:
I think the important piece of data would be the 'id' field returned in the list. You could either add the tweets to a dictionary where the key would be the 'id' and the value the text of the tweet, or create a second list that contains the 'id' and then create a filter condition to validate that the 'id' isn't present in the other list before adding the new tweet.
The dictionary method is likely the quickest computationally, but the second list method is likely the easiest conceptually.
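A minimal sketch of the dictionary method, assuming the same authenticated api object, username and keyword check as in the question (the variable names are illustrative):
import time

seen = {}  # tweet id -> tweet text

while True:
    tweet = api.user_timeline(screen_name='somename', count=1)[0]
    # only store the tweet if its id has not been seen before
    if 'Twitter' in tweet.text and tweet.id not in seen:
        seen[tweet.id] = tweet.text
        print(list(seen.values()))
    time.sleep(5)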
Q:
Order queryset by the number of foreign key instances in a Django field
I am trying to return the objects relating to a through table which counts the number of reactions on a blog post.
I have an Article model, Sentiment model and Reactions model. The sentiment is simply a 1 or 2, 1 representing like and 2 for dislike. On the frontend users can react to an article and their reactions are stored in a Reactions table.
Reactions model
class Reaction(models.Model):
    user_id = models.ForeignKey(User, related_name='user_id', on_delete=models.CASCADE)
    article_id = models.ForeignKey(Article, related_name='article_id', on_delete=models.CASCADE)
    sentiment = models.ForeignKey(Sentiment, related_name='sentiment', on_delete=models.CASCADE)
I'd like to find the 2 most liked articles, so I have written a view to handle the GET request:
views.py
class MostPopularView(generics.RetrieveAPIView):
    queryset = Reaction.objects.annotate(num_likes=Count('sentiment_id')).order_by('num_likes')
    serializer_class = MostPopularSerializer
and a serializer to transform the data
serializers.py
class MostPopularSerializer(serializers.Serializer):
    class Meta:
        fields = (
            'id',
            'title',
        )
        model = Article
As the code stands now, I'm getting a response
<QuerySet [<Reaction: d745e09b-5685-4592-ab43-766f47c73bef San Francisco Bay 1>, <Reaction: d745e09b-5685-4592-ab43-766f47c73bef The Golden Gate Bridge 1>, <Reaction: dd512e6d-5015-4a70-ac42-3afcb1747050 San Francisco Bay 1>, <Reaction: dd512e6d-5015-4a70-ac42-3afcb1747050 The Golden Gate Bridge 2>]>
Showing San Francisco Bay has 2 likes and The Golden Gate Bridge has 1 like and 1 dislike.
I've tried multiple methods to get the correct response including filtering by sentiment=1 but can't get any further than this.
What I'm looking for is a way to count the number of sentiment=1 fields which correspond to each article id and order them in descending order, so most liked at the top.
Edit
I've rethought my approach, although I have not yet found a solution:
1. Filter the Reaction table by sentiment=1
2. Order by count of article_id
3. Serialize with MostPopularSerializer
I changed the View to be a ModelViewSet
class MostPopularView(viewsets.ModelViewSet):
    articles = Reaction.objects.filter(sentiment=1).annotate(num_likes=Count('article_id')).order_by('num_likes')[:4]
    # queryset = Article.objects.filter(id=articles['article_id'])
    # Doesn't work, but hypothetically it's what I'm thinking
    for article in articles:
        queryset = Article.objects.filter(id=article['article_id'])
    serializer_class = MostPopularSerializer
And the serializer to be a ModelSerializer
class MostPopularSerializer(serializers.ModelSerializer):
    class Meta:
        fields = (
            'id',
            'title',
            'tags',
        )
        model = Article
and an updated URL for good measure
path('popular', views.MostPopularView.as_view({'get': 'list'}))
Any tips on achieving these steps would be much appreciated, thank you
A:
I solved a similar problem a different way.
In my case, I wanted to sort a queryset of Person by how often the Country was used.
I added a property to the model:
class Country(models.Model):
    ...

    def _get_count(self):
        count = len(Person.objects.filter(country=self.id))
        return count or 0

    count = property(_get_count)

In the view I have this queryset:
qs = sorted(Country.objects.all(), key=lambda country: country.count * -1)

I needed to use Python's sorted because Django's qs.order_by cannot sort by a property.
The * -1 is for descending order.
A:
It makes no sense to use Reaction as the queryset; you should use Article as the queryset, so:
from django.db.models import Case, Sum, Value, When


class MostPopularView(generics.RetrieveAPIView):
    queryset = Article.objects.annotate(
        sentiment=Sum(
            Case(
                When(article_id__sentiment_id=1, then=Value(1)),
                When(article_id__sentiment_id=2, then=Value(-1)),
            )
        )
    ).order_by('-sentiment')
    serializer_class = MostPopularSerializer
Note: It is normally better to use settings.AUTH_USER_MODEL [Django-doc] to refer to the user model than to use the User model [Django-doc] directly. For more information you can see the referencing the User model section of the documentation.
Note: Normally one does not add an …_id suffix to a ForeignKey field, since Django will automatically add a "twin" field with an …_id suffix. Therefore it should be user instead of user_id.
Note: The related_name=… parameter [Django-doc] is the name of the relation in reverse, so from the User model to the Reaction model in this case. Therefore it (often) does not make much sense to name it the same as the forward relation. You thus might want to consider renaming the user relation to reactions.
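Putting those notes together, a cleaned-up version of the model could look as follows (a sketch; the Article and Sentiment imports are assumed, and the related_name choices are suggestions):
from django.conf import settings
from django.db import models


class Reaction(models.Model):
    # no _id suffix: Django creates the user_id/article_id columns itself
    user = models.ForeignKey(settings.AUTH_USER_MODEL, related_name='reactions', on_delete=models.CASCADE)
    article = models.ForeignKey(Article, related_name='reactions', on_delete=models.CASCADE)
    sentiment = models.ForeignKey(Sentiment, related_name='reactions', on_delete=models.CASCADE)

With this naming, the annotation above would traverse reactions__sentiment_id instead of article_id__sentiment_id.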
Q:
How do I capture the result of assertion in a variable?
In pytest, I would like to capture, for example, the result of something like assert a==b in a variable.
Any idea how do I do that?
var = assert fruit1 == fruit2
does not capture the assert value in var.
Thanks in advance!
Tried
var = assert fruit1 == fruit2
Expecting the value of assert (True or False) to be captured so that I can post the result to a database.
A:
In newer versions of Python, you can use the Walrus operator:
https://realpython.com/python-walrus-operator/
assert (var := (fruit1 == fruit2))
print('var = ', var)
# output: var = True # otherwise, the code would have already crashed :)
The Walrus operator can also be used inside if-statements, nested expressions, arithmetic operations, etc.
A:
Assert doesn't really have a value. To capture the value you need two lines:
var = fruit1 == fruit2
assert var
Since you're trying to log to a database, the fact that assert doesn't evaluate to anything won't make your code less concise, as you can just bake it into your logging function:
def add_to_database(expression, con="default_file.txt"):
    # not sure what database you're appending to so I'm going to write to a text file.
    with open(con, 'a') as db:
        db.write(str(expression))
    return expression

assert add_to_database(fruit_1 == fruit_2)
This approach may give you the concision you're looking for while retaining the ability to add a second argument to inform the assert error text. The other approach would be to put assert within the function, which would be more concise if you want the same error text every time.
If you're trying to capture error text rather than the output of the expression you might try this:
def log_assert(expression, error_text=""):
    try:
        assert expression, error_text
    except AssertionError as error:
        print(f"AssertionError: {error_text}")  # replace print with whatever function writes to your database of errors
        raise error

fruit_1 = 'apple'
fruit_2 = 'Apple.'

log_assert(fruit_1 == fruit_2, "Not equal")
returns
AssertionError: Not equal <-- from the print function, this is what would be logged
--------------------------------------------------------------------------- AssertionError Traceback (most recent call
last) Cell In [41], line 11
8 fruit_1 == 'apple'
9 fruit_2 == 'Apple.'
---> 11 log_assert(fruit_1 == fruit_2, "Not equal")
Cell In [41], line 6, in log_assert(expression, error_text)
4 except AssertionError as error:
5 print(f"AssertionError: {error_text}") # replace print with whatever function writes to your database of errors
----> 6 raise error
Cell In [41], line 3, in log_assert(expression, error_text)
1 def log_assert(expression, error_text = ""):
2 try:
----> 3 assert expression, error_text
4 except AssertionError as error:
5 print(f"AssertionError: {error_text}") # replace print with whatever function writes to your database of errors
AssertionError: Not equal
Assert only raises the error text you feed into it and will return nothing in a try/except block if you don't give it a second argument.
You can get more informative error messages by wrapping this function into a more specialized one
def log_assert_equal(var1, var2):
    log_assert(var1 == var2, f"{var1} is not equal to {var2}")
log_assert_equal(fruit_1, fruit_2)
returns:
AssertionError: apple is not equal to Apple.
--------------------------------------------------------------------------- AssertionError Traceback (most recent call
last) Cell In [47], line 1
----> 1 log_assert_equal(fruit_1, fruit_2)
Cell In [42], line 2, in log_assert_equal(var1, var2)
1 def log_assert_equal(var1, var2):
----> 2 log_assert(var1 == var2, f"{var1} is not equal to {var2}")
Cell In [41], line 6, in log_assert(expression, error_text)
4 except AssertionError as error:
5 print(f"AssertionError: {error_text}") # replace print with whatever function writes to your database of errors
----> 6 raise error
Cell In [41], line 3, in log_assert(expression, error_text)
1 def log_assert(expression, error_text = ""):
2 try:
----> 3 assert expression, error_text
4 except AssertionError as error:
5 print(f"AssertionError: {error_text}") # replace print with whatever function writes to your database of errors
AssertionError: apple is not equal to Apple.
I hope this wide net answered your question.
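Since the question mentions pytest: instead of wrapping every assert, a conftest.py hook can post each test's pass/fail outcome centrally. A sketch, where post_result_to_db is a hypothetical helper you would replace with your real database write:
# conftest.py
import pytest

def post_result_to_db(test_id, passed):
    # hypothetical helper: replace with your real database write
    print(f"{test_id}: {'PASS' if passed else 'FAIL'}")

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call":  # ignore setup/teardown phases
        post_result_to_db(item.nodeid, report.passed)

This keeps the tests themselves as plain assert statements.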
Q:
Avoiding console prints by Libvirt Qemu python APIs
I am trying to check whether a domain exists by using the libvirt Python API lookupByName(). If the domain does not exist, it prints an error message on the console saying "Domain not found".
I need the errors or logs only in syslog. I have tried redirecting stderr and stdout, but it doesn't have any effect. I have also tried playing around with the libvirt logging settings described in https://libvirt.org/logging.html. No effect again. The "stdio_handler" flag in /etc/libvirt/qemu.conf is set to "file" as well.
Following is my test code:
import os, sys
import syslog  # used below; missing from the original snippet
import libvirt

conn = libvirt.open('qemu:///system')

# Find the application in the virsh domain
try:
    sys.stdout = open(os.devnull, "w")
    sys.stderr = open(os.devnull, "w")
    dom = conn.lookupByName('abcd')
    sys.stdout = sys.__stdout__
    sys.stderr = sys.__stderr__
except Exception as e:
    syslog.syslog(syslog.LOG_ERR, 'Could not find the domain. ERROR: %s.' % (e))
    sys.stdout = sys.__stdout__
    sys.stderr = sys.__stderr__
Output:
$ python test.py
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'abcd'
$
Is there a way to avoid this console print?
A:
This is a historical design mistake of libvirt, which we unfortunately can't remove without breaking back-compat for apps relying on this mis-feature. So you need to manually turn off printing to the console using:
def libvirt_callback(userdata, err):
    pass

libvirt.registerErrorHandler(f=libvirt_callback, ctx=None)
A:
A lambda version of Danielb's answer
libvirt.registerErrorHandler(lambda userdata, err: None, ctx=None)
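Combined with the test code from the question, the whole thing might look like this (a sketch; the URI and domain name are taken from the question):
import syslog

import libvirt

# silence libvirt's default print-to-stderr error handler
libvirt.registerErrorHandler(lambda userdata, err: None, ctx=None)

conn = libvirt.open('qemu:///system')
try:
    dom = conn.lookupByName('abcd')
except libvirt.libvirtError as e:
    syslog.syslog(syslog.LOG_ERR, 'Could not find the domain. ERROR: %s.' % e)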
Q:
I'm trying to get links from a TXT file, but it ends with 0 results
So I have a txt file that contains several links along with other text. More specifically, it is a list of Twitter "like" data (tweets that I have liked), and I'm trying to compile the image links (t.co links) into a single txt file. So I made this script.
import re  # missing from the original snippet

FileObject = open(r"like.txt", "r")
word = str(FileObject)
link = []
result = re.search('https://t.co', word)
while True:
    try:
        result_string = result.group(0)
        link.append(result_string)
        word = word.replace(result_string, "")
        result = re.search('https://t.co', word)
        FileObject2 = open(r"list.txt", "r+")
        if link(None):
            print("No Image URLS Found")
        else:
            FileObject2.write(link + "\n")
        FileObject2.close("list.txt")
        result = re.search('https://t.co', word)
    except:
        break
However, upon running this, nothing is added to list.txt. Please help.
Here's a couple of lines from the text file:
"like" :
"tweetId" : "1594749508147191808"
"fullText" : "@tragicbirdapp https://t(dot)co/LTEe5qrv0B"
"expandedUrl" : "https://twitter.com/i/web/status/1594749508147191808"
"like" :
"tweetId" : "1594880996431781890"
"fullText" : "New Drawing https://t(dot)co/kLziQSpbrT"
"expandedUrl" : "https://twitter.com/i/web/status/1594880996431781890"
A:
Try this for the file reading:
https://www.tutorialkart.com/python/python-read-file-as-string/
# open text file in read mode
text_file = open("D:/data.txt", "r")

# read whole file to a string
word = text_file.read()
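Building on that, a hedged sketch of one way to pull every t.co link out of the file in a single pass (the pattern and filenames are illustrative):
import re

# read the whole file into a string; open() alone returns a file object,
# not its contents, which is the main bug in the original script
with open("like.txt", "r") as f:
    text = f.read()

# match the full short link, not just the literal prefix
links = re.findall(r"https://t\.co/\w+", text)

with open("list.txt", "w") as out:
    if not links:
        print("No Image URLS Found")
    for link in links:
        out.write(link + "\n")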
Q:
How to import a module to a script in a sub-directory
I have a basic directory structure
model_folder
|
|
------- model_modules
| |
| ---- __init__.py
| |
| ---- foo.py
| |
| ---- bar.py
|
|
------- research
| |
| ----- training.ipynb
| |
| ----- eda.ipynb
|
|
------- main.py
Where model_modules is a folder containing two functions that I use in my model, research is a folder where I put modeling research, etc., and main.py is my main script that tests new data.
My question regards how to import a module, i.e. model_modules, into a script in a sub-directory. I can import model_modules into main.py just fine by doing the following:
from model_modules.foo import Foo
from model_modules.bar import Bar
But when I try the same thing in training.ipynb or eda.ipynb, I get the error
ModuleNotFoundError: No module named 'model_modules'
I want my directory to be "clean" and don't want all my research scripts in the root directory. Is there a fix for this so that I can import model_modules into scripts in research? Or do I need to approach the architecture of this directory differently?
A:
I believe your ipynb is not in the same directory as your module.
In this case, you must add the module path, as in the code below.
Prepare the absolute path of the model_folder.
I suggest the code below:
import sys
sys.path.append('/absolute/path/model_folder')
from model_modules.foo import Foo
from model_modules.bar import Bar
Let's say your module path, which has submodule directories and Python files, is '/Users/username/custom_module':
import sys
sys.path.append('/Users/username/custom_module')
from model_modules.foo import Foo
from model_modules.bar import Bar
Or you can use the way below, but if you don't want to dive into the Bash and Linux system world, try the first method I suggested:
export PYTHONPATH='/Users/username/custom_module'
more detailed explanation
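If you would rather not hardcode an absolute path, a relative variant computed from the notebook's working directory should also work, assuming the notebooks run from research/ directly under model_folder as in the question:
import sys
from pathlib import Path

# research/ sits one level below model_folder, so its parent is the project root
sys.path.append(str(Path.cwd().parent))

from model_modules.foo import Foo
from model_modules.bar import Bar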
Q:
How to remove extra parentheses, if and only if, in between they contain a regex pattern?
import re, datetime
input_text = "hhhh ((44_-_44)) ggj ((2022_-_02_-_18 20:00 pm)) ((((2022_-_02_-_18 20:00 pm))) (2022_-_02_-_18 00:00 am)"
identify_dates_regex_00 = r"(?P<year>\d*)_-_(?P<month>\d{2})_-_(?P<startDay>\d{2})"
identify_time_regex = r"(?P<hh>\d{2}):(?P<mm>\d{2})[\s|]*(?P<am_or_pm>(?:am|pm))"
restructuring_structure_00 = "(" + r"\g<year>_-_\g<month>_-_\g<startDay>" + r" \g<hh>:\g<mm> \g<am_or_pm>" + ")"
input_text = re.sub("\(" + identify_dates_regex_00 + " " + identify_time_regex + "\)", restructuring_structure_00, input_text)
print(repr(input_text)) # --> output
This is the wrong output that I get:
'hhhh ((44_-_44)) ggj ((2022_-_02_-_18 20:00 pm)) ((((2022_-_02_-_18 20:00 pm))) (2022_-_02_-_18 00:00 am)'
This is the correct output, without the extra parentheses, that I want:
'hhhh ((44_-_44)) ggj (2022_-_02_-_18 20:00 pm) (2022_-_02_-_18 20:00 pm) (2022_-_02_-_18 00:00 am)'
I need it to remove the unnecessary parentheses if, and only if, they contain the structure year_-_month_-_day hour:minute am or pm. With capture groups, that structure can be written in regex as "(?P<year>\d*)_-_(?P<month>\d{2})_-_(?P<startDay>\d{2})" plus identify_time_regex = r"(?P<hh>\d{2}):(?P<mm>\d{2})[\s|]*(?P<am_or_pm>(?:am|pm))". Without capture groups (losing the possibility of capturing the data), it could be written as the simpler regex "\d*_-_\d{2}_-_\d{2} \d{2}:\d{2}[\s|]*[ap]m".
A:
You can use a single capture group to capture the date and time format between parentheses, and then remove any surrounding parentheses.
For the replacement, you don't need the named capture groups.
In the replacement, use capture group 1.
\(*(\(\d{4}_-_\d{2}_-_\d{2} \d{2}:\d{2}[\s|]*[ap]m\))\)*
Regex demo
Example code:
import re
input_text = "hhhh ((44_-_44)) ggj ((2022_-_02_-_18 20:00 pm)) ((((2022_-_02_-_18 20:00 pm))) (2022_-_02_-_18 00:00 am)"
pattern = r"\(*(\(\d{4}_-_\d{2}_-_\d{2} \d{2}:\d{2}[\s|]*[ap]m\))\)*"
print(re.sub(pattern, r"\1", input_text))
Output
hhhh ((44_-_44)) ggj (2022_-_02_-_18 20:00 pm) (2022_-_02_-_18 20:00 pm) (2022_-_02_-_18 00:00 am)
Q:
Create line graph from database that assigns lines to each name
I have an SQLite table I want to make a line graph from:
import sqlite3
conn = sqlite3.connect('sales_sheet.db')
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS sales(id INTEGER PRIMARY KEY NOT NULL,
sales_rep TEXT,
client TEXT,
number_of_sales INTEGER)""")
case1 = ("Jon","Apple", 5)
case2 = ("Judy","Amazon", 6)
case3 = ("Jon","Walmart", 4)
case4 = ("Don","Twitter", 8)
case5 = ("Don","Walmart", 4)
case6 = ("Judy","Google", 7)
case7 = ("Judy","Tesla", 3)
case8 = ("Jon","Microsoft", 7)
case9 = ("Don", "SpaceX", 5)
insert = """INSERT INTO sales(sales_rep, client, number_of_sales) VALUES(?,?,?)"""
cur.execute(insert, case1)
cur.execute(insert, case2)
cur.execute(insert, case3)
cur.execute(insert, case4)
cur.execute(insert, case5)
cur.execute(insert, case6)
cur.execute(insert, case7)
cur.execute(insert, case8)
cur.execute(insert, case9)
conn.commit()
conn.close()
Table:
I want the id column numbers on the x-axis and the number of sales on the y-axis, with a color-coded line for each sales rep. How do I do this?
A:
You can use pandas.read_sql or pandas.read_sql_query to read the sqlite table as a dataframe, then seaborn.lineplot to make the multicolor line graph.
import pandas as pd
import seaborn as sns
df = pd.read_sql_query("SELECT * FROM sales LIMIT 0,30", conn)
sns.lineplot(data=df, x='id', y='number_of_sales', hue='sales_rep')
# Output :
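One caveat: the question's script calls conn.close() at the end, so the connection has to be (re)opened before read_sql_query runs. A minimal end-to-end sketch:
import sqlite3

import pandas as pd
import seaborn as sns

conn = sqlite3.connect('sales_sheet.db')  # reopen the closed connection
df = pd.read_sql_query("SELECT * FROM sales", conn)
sns.lineplot(data=df, x='id', y='number_of_sales', hue='sales_rep')
conn.close()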
Q:
other way to solve Least common multiple of two integers a and b
PPCM, the least common multiple (lowest common multiple, or smallest common multiple) of two integers a and b, is the smallest positive integer that is divisible by both a and b. Since division of integers by zero is undefined, this definition has meaning only if a and b are both different from zero. However, some authors define lcm(a,0) as 0 for all a, since 0 is the only common multiple of a and 0.
a = int(input("Valeur de a ?"))
b = int(input("Valeur de b ?"))
print('les diviseures de a : ')
tab_a = []
tab_b = []
tab_c = []
for i in range(1, a + 1):
    if (a % i == 0):
        tab_a.append(i)
print(tab_a)
print('les diviseures de b : ')
for j in range(1, b + 1):
    if (b % j == 0):
        tab_b.append(j)
print(tab_b)
l = 0
if (a > b):
    sh = len(tab_b)
    lg = len(tab_a)
    arr_sh = tab_b
    arr_lg = tab_a
else:
    sh = len(tab_a)
    lg = len(tab_b)
    arr_sh = tab_a
    arr_lg = tab_b
for i in range(0, sh):
    for j in range(0, lg):
        if (arr_sh[i] == arr_lg[j]):
            tab_c.append(arr_sh[i])
print(tab_c)
print('PPCM est :', tab_c[0])
I think my approach is long; how can I improve it?
A:
The lcm is computed from the gcd, and the gcd is computed using Euclid's algorithm.
def gcd(a, b):
    while b > 0:
        a, b = b, a % b
    return a

def lcm(a, b):
    return a * b // gcd(a, b)
(The trivial cases are not handled.)
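For reference, recent Python versions ship both functions in the standard library, so no hand-rolled loop is needed:
import math

print(math.gcd(20, 16))  # 4  (available since Python 3.5)
print(math.lcm(20, 16))  # 80 (available since Python 3.9)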
A:
You can do this with a single loop; there's no need to build a bunch of lists and make multiple passes over them or anything like that:
>>> def lcm(a, b):
...     a, b = sorted((a, b))
...     return next(i for i in range(b, a * b + 1, b) if i % a == 0)
...
>>> lcm(2, 4)
4
>>> lcm(20, 16)
80
This is O(min(a, b)) in time and O(1) in space.
Q:
How to create a virtual environment in python (venv) and add libraries from anaconda installed in the operating system? Without internet connection
Is it possible to create a python virtual environment (venv) from the local anaconda repository and add packages from there?
I have anaconda distribution installed here:
C:\ProgramData\Anaconda3
I want to create a virtual environment for a new project. Here:
C:\new_project\venv
For example, I want to add pandas and numpy to location 2) from location 1).
Important! I want to add them from the location in point 1). I don't want to connect to the internet.
Is it even possible? If not, how can I create a virtual environment based on the anaconda packages installed in the operating system?
I know you can add local libraries via pip, but I don't know how to do that with anaconda.
https://packaging.python.org/en/latest/tutorials/installing-packages/#installing-from-a-local-src-tree
Maybe there is a standard?
A:
I did it.
https://docs.conda.io/projects/conda/en/latest/commands/install.html
[SOLUTION]
I solved it as follows ("path" is your path, "libs" are the library names):
conda create -p "path" --copy
conda install -p "path" "libs" --offline --use-local
Example:
1. conda create -p c:\my_project\venv
2. conda install -p c:\my_project\venv scipy numpy pandas --offline --use-local
Q:
Pafy+youtube_dl+OpenCV is slow to display videos
I am trying a basic example of displaying a YouTube video using OpenCV, and I seem to get 50% or less of the framerate compared to the browser. Eventually I want to make a real-time computer vision application from YouTube streams (well, a fixed delay is fine, but I want it to be able to keep up), so if just displaying a video is slow, I'm not sure how that is going to happen. Does anyone know which part of this is slow? And is there a way to speed it up?
import cv2
import pafy
import youtube_dl

url = 'https://youtu.be/1AbfRENy3OQ'
urlPafy = pafy.new(url)
videoplay = urlPafy.getbest()

cap = cv2.VideoCapture(videoplay.url)
while (True):
    # Capture image frame-by-frame
    ret, frame = cap.read()

    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
A:
Turns out there was a bug in youtube_dl: https://github.com/ytdl-org/youtube-dl/issues/29326 that essentially caused youtube to throttle the connection.
Q:
How to get column titles from Google Sheets to print in Python?
Basically I'm using Python to pull information off a Google spreadsheet.
(screenshot of the spreadsheet omitted)
I have no problem pulling the information I need, but when I start to break it down into specific categories like "goals scored", I get the information but can't print it to the terminal with the column headings. Example below:
(screenshot of the terminal output omitted)
So basically I want to bring down the above information but also with the column headings:
'player, position, appearances..... etc'
This is what my code looks like to get the information posted above:
data = {
    "man united": SHEET.worksheet("man_utd").get_all_values(),
    "man city": SHEET.worksheet("man_city").get_all_values(),
    "chelsea": SHEET.worksheet('chelsea').get_all_values(),
    "liverpool": SHEET.worksheet('liverpool').get_all_values()
}

team_name = ""
position = ""
top_stats = ""


def user_commands():
    """
    gives commands the user is able to input to receive different data sets
    """
    options = 'Man United, Man City, Liverpool, Chelsea'
    print(f"1: {options}")
    team_name = input("Please Enter A Team Name:\n").casefold()
    print(f"You Have Entered {team_name}\n")
    while team_name not in data:
        print("You Entered a Wrong Option, Please Enter A Correct Option")
        print(f"1: {options}")
        team_name = input()
    print(tabulate(data[team_name]))
    return team_name


def user_commands_2(team_name):
    """
    function to see players of a set position
    from data received from first input.
    Players can be goalkeepers, defenders, midfielders or forwards
    """
    options_1 = 'goalkeeper, defender, midfielder,\nforward, home'
    print(f"1: {options_1}")
    position = input("\nPlease Enter a Position:\n").casefold()
    print(f"You Have Entered {position}\n")
    while position.casefold() not in (options_1):
        print("\nYou Entered a Wrong Option, Please Enter a Correct Option")
        print(f"1: {options_1}")
        position = input()
        if position.casefold() == 'home':
            print("Hi! Welcome to a Football Stats Generator")
            print("The Available Options Are As Follows:")
            main()
    res = [i for i in data[team_name] if position.capitalize() in i]
    print(tabulate(res))


print("Hi! Welcome To a Football Stats Generator\n")
print("The Available Options Are As Follows:\n")
Any help would be greatly appreciated.
Thanks
A:
Use "get_all_records()" instead of "get_all_values()".
Q:
Django runserver_plus pyOpenSSL not installed error, although it is
Linux Mint 19.3, Python 3.8 virtual environment.
So I try to run runserver_plus using ssl:
python manage.py runserver_plus --cert-file cert.crt
Then I get following error:
CommandError: Python OpenSSL Library is required to use runserver_plus with ssl support. Install via pip (pip install pyOpenSSL).
But the deal is that pyOpenSSL is already installed within my environment. Here is pip list output:
asgiref (3.5.2)
certifi (2022.9.24)
cffi (1.15.1)
charset-normalizer (2.1.1)
cryptography (38.0.3)
defusedxml (0.7.1)
Django (3.0.14)
django-extensions (2.2.5)
idna (3.4)
oauthlib (3.2.2)
Pillow (7.0.0)
pip (9.0.1)
pkg-resources (0.0.0)
pycparser (2.21)
PyJWT (2.6.0)
pyOpenSSL (19.0.0)
python3-openid (3.2.0)
pytz (2022.6)
requests (2.28.1)
requests-oauthlib (1.3.1)
setuptools (39.0.1)
six (1.16.0)
social-auth-app-django (3.1.0)
social-auth-core (4.3.0)
sqlparse (0.4.3)
urllib3 (1.26.12)
Werkzeug (0.16.0)
wheel (0.38.4)
Thanks in advance for any help!
I've tried to install different versions of pyOpenSSL, both earlier and later, unsuccessfully.
runserver_plus starts successfully without additional parameters, but my point is to access the virtual server securely.
A:
Such an issue can appear when you need to recompile cryptography against the correct openssl.
To do that you can check the cryptography docs.
$ pip uninstall pyopenssl
$ pip uninstall cryptography
$ env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" pip install cryptography
$ pip install pyopenssl
$ python manage.py shell_plus
$ python manage.py runserver_plus --cert=foo.cert
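Note that the env LDFLAGS/CFLAGS line uses brew paths, which only exist on macOS. On Linux Mint, a rough equivalent (package names assumed for Debian-based distros; newer cryptography releases also need a Rust toolchain to build from source) is to install the OpenSSL headers and rebuild cryptography:
$ sudo apt-get install build-essential libssl-dev libffi-dev python3-dev
$ pip uninstall pyopenssl cryptography
$ pip install --no-binary cryptography cryptography
$ pip install pyopenssl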
Q:
how to change the python version from default 3.5 to 3.8 of google colab
I downloaded python version 3.8 on google colab using:
!apt-get install python3.8
Now I want to change the default Python version Google Colab uses from 3.6 to 3.8. How do I do it?
I have read a few answers, but there are no updates...
A:
Colab has default python 3.7 and alternative 3.6 (on 26.07.2021)
#**Add python version you wish** to list
!sudo apt-get update -y
!sudo apt-get install python3.8
from IPython.display import clear_output
clear_output()
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
# Choose one of the given alternatives:
!sudo update-alternatives --config python3
# This one used to work but now NOT(for me)!
# !sudo update-alternatives --config python
# Check the result
!python3 --version
# Attention: Install pip (... needed!)
!sudo apt install python3-pip
A:
There is a way to use any version of python you want, without having to run a kernel locally or going through an ngrok proxy.
Download the colab notebook. Open a text editor to change the kernel specification to:
"kernelspec": {
"name": "py38",
"display_name": "Python 3.8"
}
This is the same trick as the one used with Javascript, Java, and Golang.
Then upload the edited notebook to Google Drive. Open the notebook in Google Colab. It cannot find the py38 kernel, so it uses the normal python3 kernel.
You need to install a python 3.8, the google-colab package and the ipykernel under the name you defined above: "py38":
!wget -O mini.sh https://repo.anaconda.com/miniconda/Miniconda3-py38_4.8.2-Linux-x86_64.sh
!chmod +x mini.sh
!bash ./mini.sh -b -f -p /usr/local
!conda install -q -y jupyter
!conda install -q -y google-colab -c conda-forge
!python -m ipykernel install --name "py38" --user
Reload the page, and voilà, you can test the version is correct:
import sys
print("User Current Version:-", sys.version)
A working example can be found there.
A:
try these commands
!update-alternatives --install /usr/bin/python python /usr/bin/python3.8 1
then
!update-alternatives --list python
this must display your downloaded python version
after that
!sudo update-alternatives --config python
## !Set python3.8 as default.
finally
!sudo update-alternatives --set python /usr/bin/python3.8
then check your default python version on colab
!python3 --version
A:
In my opinion there is no "good" way to do this. What you can do is start your script with a shebang line. A shebang line will set the python version for the following code.
Find some related answers and information here.
How do I tell a Python script to use a particular version
Find here some information on how to use shebang in colab.
https://colab.research.google.com/github/jhermann/blog/blob/master/_notebooks/2020-02-28-env_with_arguments.ipynb#scrollTo=SYv4FagrzLVu
When you have scripts for multiple versions of Python you might come across this issue.
Dealing with multiple python versions when python files have to use #!/bin/env python
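For example, a minimal sketch of the shebang approach in a Colab cell (assuming python3.8 was already installed via apt, as in the other answers; the file name is arbitrary):
%%writefile check38.py
#!/usr/bin/env python3.8
import sys
print(sys.version)  # should report 3.8.x

and then, in a second cell:
!chmod +x check38.py && ./check38.py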
A:
This is what I have tried with success:
!sudo apt-get update -y
!sudo apt-get install python3.8
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8
!update-alternatives --install /usr/bin/python python /usr/bin/python3.8
!update-alternatives --list python
!sudo update-alternatives --config python
!sudo update-alternatives --set python /usr/bin/python3.8
!python3 --version
A:
Here's my solution which completely changes the runtime version, not just the interpreter:
https://stackoverflow.com/a/74538231/9738112
|
how to change the python version from default 3.5 to 3.8 of google colab
|
I downloaded python version 3.8 on google colab using:
!apt-get install python3.8
Now I want to change the default python version that google colab uses from 3.6 to 3.8. How can I do it?
I have read a few answers but there are no updates...
|
[
"Colab has default python 3.7 and alternative 3.6 (on 26.07.2021)\n#**Add python version you wish** to list\n!sudo apt-get update -y\n!sudo apt-get install python3.8\nfrom IPython.display import clear_output \nclear_output()\n!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1\n\n# Choose one of the given alternatives:\n!sudo update-alternatives --config python3\n\n# This one used to work but now NOT(for me)!\n# !sudo update-alternatives --config python\n\n# Check the result\n!python3 --version\n\n# Attention: Install pip (... needed!)\n!sudo apt install python3-pip\n\n",
"There is a way to use any version of python you want, without having to run a kernel locally or going through an ngrok proxy.\nDownload the colab notebook. Open a text editor to change the kernel specification to:\n\"kernelspec\": {\n \"name\": \"py38\",\n \"display_name\": \"Python 3.8\"\n}\n\nThis is the same trick as the one used with Javascript, Java, and Golang.\nThen upload the edited notebook to Google Drive. Open the notebook in Google Colab. It cannot find the py38 kernel, so it use normal python3 kernel.\nYou need to install a python 3.8, the google-colab package and the ipykernel under the name you defined above: \"py38\":\n!wget -O mini.sh https://repo.anaconda.com/miniconda/Miniconda3-py38_4.8.2-Linux-x86_64.sh\n!chmod +x mini.sh\n!bash ./mini.sh -b -f -p /usr/local\n!conda install -q -y jupyter\n!conda install -q -y google-colab -c conda-forge\n!python -m ipykernel install --name \"py38\" --user\n\nReload the page, and voilà, you can test the version is correct:\nimport sys\nprint(\"User Current Version:-\", sys.version)\n\nA working example can be found there.\n",
"try these commands\n!update-alternatives --install /usr/bin/python python /usr/bin/python3.8 1\n\nthen\n!update-alternatives --list python\n\nthis must display your downloaded python version\nafter that\n!sudo update-alternatives --config python\n## !Set python3.8 as default.\n\nfinally\n!sudo update-alternatives --set python /usr/bin/python3.8\n\nthen check your default python version on colab\n!python3 --version\n\n",
"In my opinion there is no \"good\" way to do this. What you can do is start your script with a shebang line. A shebang line will set the python version for the following code.\nFind some related answers and informations here.\nHow do I tell a Python script to use a particular version\nFind here some informations on how to use shebang in colab.\nhttps://colab.research.google.com/github/jhermann/blog/blob/master/_notebooks/2020-02-28-env_with_arguments.ipynb#scrollTo=SYv4FagrzLVu\nWhen you have script for more versions of python you might come across this issue.\nDealing with multiple python versions when python files have to use #!/bin/env python\n",
"This is what I have tried with success:\n!sudo apt-get update -y\n!sudo apt-get install python3.8\n!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8\n\n!update-alternatives --install /usr/bin/python python /usr/bin/python3.8\n!update-alternatives --list python\n!sudo update-alternatives --config python\n!sudo update-alternatives --set python /usr/bin/python3.8\n!python3 --version\n\n",
"Here's my solution which completely changes the runtime version, not just the interpreter:\nhttps://stackoverflow.com/a/74538231/9738112\n"
] |
[
13,
5,
3,
0,
0,
0
] |
[] |
[] |
[
"google_colaboratory",
"python"
] |
stackoverflow_0063168301_google_colaboratory_python.txt
|
Q:
How can I add new dimensions to a Numpy array?
I'm starting off with a numpy array of an image.
In[1]:img = cv2.imread('test.jpg')
The shape is what you might expect for a 640x480 RGB image.
In[2]:img.shape
Out[2]: (480, 640, 3)
However, this image that I have is a frame of a video, which is 100 frames long. Ideally, I would like to have a single array that contains all the data from this video such that img.shape returns (480, 640, 3, 100).
What is the best way to add the next frame -- that is, the next set of image data, another 480 x 640 x 3 array -- to my initial array?
A:
A dimension can be added to a numpy array as follows:
image = image[..., np.newaxis]
A:
Alternatively to
image = image[..., np.newaxis]
in @dbliss' answer, you can also use numpy.expand_dims like
image = np.expand_dims(image, <your desired dimension>)
For example (taken from the link above):
x = np.array([1, 2])
print(x.shape) # prints (2,)
Then
y = np.expand_dims(x, axis=0)
yields
array([[1, 2]])
and
y.shape
gives
(1, 2)
A:
You could just create an array of the correct size up-front and fill it:
frames = np.empty((480, 640, 3, 100))
for k in xrange(nframes):
    frames[:,:,:,k] = cv2.imread('frame_{}.jpg'.format(k))
if the frames were individual jpg files named in some particular way (in the example, frame_0.jpg, frame_1.jpg, etc).
Just a note, you might consider using a (nframes, 480,640,3) shaped array, instead.
A:
Pythonic
X = X[:, :, None]
which is equivalent to
X = X[:, :, numpy.newaxis] and
X = numpy.expand_dims(X, axis=-1)
But as you are explicitly asking about stacking images,
I would recommend going for stacking the list of images np.stack([X1, X2, X3]) that you may have collected in a loop.
If you do not like the order of the dimensions you can rearrange with np.transpose()
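For instance, a small sketch (img1, img2 and img3 stand for hypothetical (h, w, 3) image arrays):
imgs = np.stack([img1, img2, img3])      # shape (nimages, h, w, 3)
imgs = np.transpose(imgs, (1, 2, 3, 0))  # shape (h, w, 3, nimages)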
A:
You can use np.concatenate() use the axis parameter to specify the dimension that should be concatenated. If the arrays being concatenated do not have this dimension, you can use np.newaxis to indicate where the new dimension should be added:
import numpy as np
movie = np.concatenate((img1[:,np.newaxis], img2[:,np.newaxis]), axis=3)
If you are reading from many files:
import glob
movie = np.concatenate([cv2.imread(p)[:,np.newaxis] for p in glob.glob('*.jpg')], axis=3)
A:
Consider Approach 1 with reshape method and Approach 2 with np.newaxis method that produce the same outcome:
#Lets suppose, we have:
x = [1,2,3,4,5,6,7,8,9]
print('I. x',x)
xNpArr = np.array(x)
print('II. xNpArr',xNpArr)
print('III. xNpArr', xNpArr.shape)
xNpArr_3x3 = xNpArr.reshape((3,3))
print('IV. xNpArr_3x3.shape', xNpArr_3x3.shape)
print('V. xNpArr_3x3', xNpArr_3x3)
#Approach 1 with reshape method
xNpArrRs_1x3x3x1 = xNpArr_3x3.reshape((1,3,3,1))
print('VI. xNpArrRs_1x3x3x1.shape', xNpArrRs_1x3x3x1.shape)
print('VII. xNpArrRs_1x3x3x1', xNpArrRs_1x3x3x1)
#Approach 2 with np.newaxis method
xNpArrNa_1x3x3x1 = xNpArr_3x3[np.newaxis, ..., np.newaxis]
print('VIII. xNpArrNa_1x3x3x1.shape', xNpArrNa_1x3x3x1.shape)
print('IX. xNpArrNa_1x3x3x1', xNpArrNa_1x3x3x1)
We have as outcome:
I. x [1, 2, 3, 4, 5, 6, 7, 8, 9]
II. xNpArr [1 2 3 4 5 6 7 8 9]
III. xNpArr (9,)
IV. xNpArr_3x3.shape (3, 3)
V. xNpArr_3x3 [[1 2 3]
[4 5 6]
[7 8 9]]
VI. xNpArrRs_1x3x3x1.shape (1, 3, 3, 1)
VII. xNpArrRs_1x3x3x1 [[[[1]
[2]
[3]]
[[4]
[5]
[6]]
[[7]
[8]
[9]]]]
VIII. xNpArrNa_1x3x3x1.shape (1, 3, 3, 1)
IX. xNpArrNa_1x3x3x1 [[[[1]
[2]
[3]]
[[4]
[5]
[6]]
[[7]
[8]
[9]]]]
A:
There is no structure in numpy that allows you to append more data later.
Instead, numpy puts all of your data into a contiguous chunk of numbers (basically; a C array), and any resize requires allocating a new chunk of memory to hold it. Numpy's speed comes from being able to keep all the data in a numpy array in the same chunk of memory; e.g. mathematical operations can be parallelized for speed and you get less cache misses.
So you will have two kinds of solutions:
Pre-allocate the memory for the numpy array and fill in the values, like in JoshAdel's answer, or
Keep your data in a normal python list until it's actually needed to put them all together (see below)
images = []
for i in range(100):
    new_image = # pull image from somewhere
    images.append(new_image)
images = np.stack(images, axis=3)
Note that there is no need to expand the dimensions of the individual image arrays first, nor do you need to know how many images you expect ahead of time.
A:
You can use stack with the axis parameter:
img.shape # h,w,3
imgs = np.stack([img1,img2,img3,img4], axis=-1) # -1 = new axis is last
imgs.shape # h,w,3,nimages
For example: to convert grayscale to color:
>>> d = np.zeros((5,4), dtype=int) # 5x4
>>> d[2,3] = 1

>>> d3 = np.stack([d,d,d], axis=-1) # 5x4x3, -1 = new axis is last
>>> d3.shape
Out[30]: (5, 4, 3)

>>> d3[2,3]
Out[32]: array([1, 1, 1])
A:
I followed this approach:
import numpy as np
import cv2
ls = []
for image in image_paths:
    ls.append(cv2.imread('test.jpg'))
img_np = np.array(ls) # shape (100, 480, 640, 3)
img_np = np.rollaxis(img_np, 0, 4) # shape (480, 640, 3, 100).
A:
This worked for me:
image = image[..., None]
A:
This will help you add axis anywhere you want
import numpy as np
signal = np.array([[0.3394572666491664, 0.3089068053925853, 0.3516359279582483], [0.33932706934615525, 0.3094755563319447, 0.3511973743219001], [0.3394407172182317, 0.30889042266755573, 0.35166886011421256], [0.3394407172182317, 0.30889042266755573, 0.35166886011421256]])
print(signal.shape)
#(4,3)
print(signal[..., np.newaxis].shape)  # or signal[..., None].shape
#(4, 3, 1)
print(signal[:, np.newaxis, :].shape)  # or signal[:, None, :].shape
#(4, 1, 3)
A:
There are three ways of adding new dimensions to an ndarray.
first: using "np.newaxis" (something like @dbliss' answer)
np.newaxis is just an alias for None, making it easier to understand. If you replace np.newaxis with None, it works the same way, but it's better to use np.newaxis for being more explicit.
import numpy as np
my_arr = np.array([2, 3])
new_arr = my_arr[..., np.newaxis]
print("old shape", my_arr.shape)
print("new shape", new_arr.shape)
>>> old shape (2,)
>>> new shape (2, 1)
second: using "np.expand_dims()"
Specify the original ndarray in the first argument and the position to add the dimension in the second argument, axis.
my_arr = np.array([2, 3])
new_arr = np.expand_dims(my_arr, -1)
print("old shape", my_arr.shape)
print("new shape", new_arr.shape)
>>> old shape (2,)
>>> new shape (2, 1)
third: using "reshape()"
my_arr = np.array([2, 3])
new_arr = my_arr.reshape(*my_arr.shape, 1)
print("old shape", my_arr.shape)
print("new shape", new_arr.shape)
>>> old shape (2,)
>>> new shape (2, 1)
A:
a = np.expand_dims(a, axis=-1)
or
a = a[:, np.newaxis]
or
a = a.reshape(a.shape + (1,))
|
How can I add new dimensions to a Numpy array?
|
I'm starting off with a numpy array of an image.
In[1]:img = cv2.imread('test.jpg')
The shape is what you might expect for a 640x480 RGB image.
In[2]:img.shape
Out[2]: (480, 640, 3)
However, this image that I have is a frame of a video, which is 100 frames long. Ideally, I would like to have a single array that contains all the data from this video such that img.shape returns (480, 640, 3, 100).
What is the best way to add the next frame -- that is, the next set of image data, another 480 x 640 x 3 array -- to my initial array?
|
[
"A dimension can be added to a numpy array as follows:\nimage = image[..., np.newaxis]\n\n",
"Alternatively to \nimage = image[..., np.newaxis]\n\nin @dbliss' answer, you can also use numpy.expand_dims like\nimage = np.expand_dims(image, <your desired dimension>)\n\nFor example (taken from the link above):\nx = np.array([1, 2])\n\nprint(x.shape) # prints (2,)\n\nThen \ny = np.expand_dims(x, axis=0)\n\nyields\narray([[1, 2]])\n\nand\ny.shape\n\ngives\n(1, 2)\n\n",
"You could just create an array of the correct size up-front and fill it:\nframes = np.empty((480, 640, 3, 100))\n\nfor k in xrange(nframes):\n frames[:,:,:,k] = cv2.imread('frame_{}.jpg'.format(k))\n\nif the frames were individual jpg file that were named in some particular way (in the example, frame_0.jpg, frame_1.jpg, etc).\nJust a note, you might consider using a (nframes, 480,640,3) shaped array, instead.\n",
"Pythonic\nX = X[:, :, None]\nwhich is equivalent to\nX = X[:, :, numpy.newaxis] and\nX = numpy.expand_dims(X, axis=-1)\nBut as you are explicitly asking about stacking images,\nI would recommend going for stacking the list of images np.stack([X1, X2, X3]) that you may have collected in a loop.\nIf you do not like the order of the dimensions you can rearrange with np.transpose()\n",
"You can use np.concatenate() use the axis parameter to specify the dimension that should be concatenated. If the arrays being concatenated do not have this dimension, you can use np.newaxis to indicate where the new dimension should be added:\nimport numpy as np\nmovie = np.concatenate((img1[:,np.newaxis], img2[:,np.newaxis]), axis=3)\n\nIf you are reading from many files:\nimport glob\nmovie = np.concatenate([cv2.imread(p)[:,np.newaxis] for p in glob.glob('*.jpg')], axis=3)\n\n",
"Consider Approach 1 with reshape method and Approach 2 with np.newaxis method that produce the same outcome:\n#Lets suppose, we have:\nx = [1,2,3,4,5,6,7,8,9]\nprint('I. x',x)\n\nxNpArr = np.array(x)\nprint('II. xNpArr',xNpArr)\nprint('III. xNpArr', xNpArr.shape)\n\nxNpArr_3x3 = xNpArr.reshape((3,3))\nprint('IV. xNpArr_3x3.shape', xNpArr_3x3.shape)\nprint('V. xNpArr_3x3', xNpArr_3x3)\n\n#Approach 1 with reshape method\nxNpArrRs_1x3x3x1 = xNpArr_3x3.reshape((1,3,3,1))\nprint('VI. xNpArrRs_1x3x3x1.shape', xNpArrRs_1x3x3x1.shape)\nprint('VII. xNpArrRs_1x3x3x1', xNpArrRs_1x3x3x1)\n\n#Approach 2 with np.newaxis method\nxNpArrNa_1x3x3x1 = xNpArr_3x3[np.newaxis, ..., np.newaxis]\nprint('VIII. xNpArrNa_1x3x3x1.shape', xNpArrNa_1x3x3x1.shape)\nprint('IX. xNpArrNa_1x3x3x1', xNpArrNa_1x3x3x1)\n\nWe have as outcome:\nI. x [1, 2, 3, 4, 5, 6, 7, 8, 9]\n\nII. xNpArr [1 2 3 4 5 6 7 8 9]\n\nIII. xNpArr (9,)\n\nIV. xNpArr_3x3.shape (3, 3)\n\nV. xNpArr_3x3 [[1 2 3]\n [4 5 6]\n [7 8 9]]\n\nVI. xNpArrRs_1x3x3x1.shape (1, 3, 3, 1)\n\nVII. xNpArrRs_1x3x3x1 [[[[1]\n [2]\n [3]]\n\n [[4]\n [5]\n [6]]\n\n [[7]\n [8]\n [9]]]]\n\nVIII. xNpArrNa_1x3x3x1.shape (1, 3, 3, 1)\n\nIX. xNpArrNa_1x3x3x1 [[[[1]\n [2]\n [3]]\n\n [[4]\n [5]\n [6]]\n\n [[7]\n [8]\n [9]]]]\n\n",
"There is no structure in numpy that allows you to append more data later. \nInstead, numpy puts all of your data into a contiguous chunk of numbers (basically; a C array), and any resize requires allocating a new chunk of memory to hold it. Numpy's speed comes from being able to keep all the data in a numpy array in the same chunk of memory; e.g. mathematical operations can be parallelized for speed and you get less cache misses. \nSo you will have two kinds of solutions:\n\nPre-allocate the memory for the numpy array and fill in the values, like in JoshAdel's answer, or\nKeep your data in a normal python list until it's actually needed to put them all together (see below)\n\n\nimages = []\nfor i in range(100):\n new_image = # pull image from somewhere\n images.append(new_image)\nimages = np.stack(images, axis=3)\n\n\nNote that there is no need to expand the dimensions of the individual image arrays first, nor do you need to know how many images you expect ahead of time.\n",
"You can use stack with the axis parameter:\nimg.shape # h,w,3\nimgs = np.stack([img1,img2,img3,img4], axis=-1) # -1 = new axis is last\nimgs.shape # h,w,3,nimages\n\nFor example: to convert grayscale to color:\n>>> d = np.zeros((5,4), dtype=int) # 5x4\n>>> d[2,3] = 1\n\n>>> d3.shape\nOut[30]: (5, 4, 3)\n\n>>> d3 = np.stack([d,d,d], axis=-2) # 5x4x3 -1=as last axis\n>>> d3[2,3]\nOut[32]: array([1, 1, 1])\n\n",
"I followed this approach:\nimport numpy as np\nimport cv2\n\nls = []\n\nfor image in image_paths:\n ls.append(cv2.imread('test.jpg'))\n\nimg_np = np.array(ls) # shape (100, 480, 640, 3)\nimg_np = np.rollaxis(img_np, 0, 4) # shape (480, 640, 3, 100).\n\n",
"This worked for me:\nimage = image[..., None]\n\n",
"This will help you add axis anywhere you want\n import numpy as np\n signal = np.array([[0.3394572666491664, 0.3089068053925853, 0.3516359279582483], [0.33932706934615525, 0.3094755563319447, 0.3511973743219001], [0.3394407172182317, 0.30889042266755573, 0.35166886011421256], [0.3394407172182317, 0.30889042266755573, 0.35166886011421256]])\n \n print(signal.shape)\n#(4,3)\n print(signal[...,np.newaxis].shape) or signal[...:none]\n#(4, 3, 1) \n print(signal[:, np.newaxis, :].shape) or signal[:,none, :]\n\n#(4, 1, 3)\n\n",
"there is three-way for adding new dimensions to ndarray .\nfirst: using \"np.newaxis\" (something like @dbliss answer)\n\nnp.newaxis is just given an alias to None for making it easier to\nunderstand. If you replace np.newaxis with None, it works the same\nway. but it's better to use np.newaxis for being more explicit.\n\nimport numpy as np\n\nmy_arr = np.array([2, 3])\nnew_arr = my_arr[..., np.newaxis]\n\nprint(\"old shape\", my_arr.shape)\nprint(\"new shape\", new_arr.shape)\n\n>>> old shape (2,)\n>>> new shape (2, 1)\n\nsecond: using \"np.expand_dims()\"\n\nSpecify the original ndarray in the first argument and the position\nto add the dimension in the second argument axis.\n\nmy_arr = np.array([2, 3])\nnew_arr = np.expand_dims(my_arr, -1)\n\nprint(\"old shape\", my_arr.shape)\nprint(\"new shape\", new_arr.shape)\n\n>>> old shape (2,)\n>>> new shape (2, 1)\n\nthird: using \"reshape()\"\nmy_arr = np.array([2, 3])\nnew_arr = my_arr.reshape(*my_arr.shape, 1)\n\nprint(\"old shape\", my_arr.shape)\nprint(\"new shape\", new_arr.shape)\n\n>>> old shape (2,)\n>>> new shape (2, 1)\n\n",
"a = np.expand_dims(a, axis=-1) \n\nor\na = a[:, np.newaxis] \n\nor\na = a.reshape(a.shape + (1,))\n\n"
] |
[
182,
97,
33,
23,
9,
3,
2,
2,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"arrays",
"numpy",
"python"
] |
stackoverflow_0017394882_arrays_numpy_python.txt
|
Q:
How can I ignore comments in a string based on compiler design?
I want to ignore every comment like { comments } and // comments.
I have a pointer named peek that checks my string character by character. I know how to ignore newlines, tabs, and spaces but I don't know how to ignore comments.
string = """ beGIn west WEST north//comment1 \n
north north west East east south\n
// comment west\n
{\n
    comment\n
}\n end
"""
tokens = []
tmp = ''
for i, peek in enumerate(string.lower()):
    if peek == ' ' or peek == '\n':
        tokens.append(tmp)
        # ignoring WS's and comments
        if(len(tmp)>0):
            print(tmp)
        tmp = ''
    else:
        tmp += peek
Here is my result:
begin
west
west
north//
comment1
north
north
west
east
east
south
{
comment
}
end
As you can see, spaces are ignored but comments aren't.
How can I get a result like below?
begin
west
west
north
north
north
west
east
east
south
end
A:
Simply use global variable skip = False and set it True when you get { and set False when you get } and the rest of your if/else run in if not skip:
string = """ beGIn west WEST north//comment1 \n
north north west East east south\n
// comment west\n
{\n
    comment\n
}\n end
"""
tokens = []
tmp = ''
skip = False
for i, peek in enumerate(string.lower()):

    if peek == '{':
        skip = True
    elif peek == '}':
        skip = False
    elif not skip:

        if peek == ' ' or peek == '\n':
            tokens.append(tmp)
            # ignoring WS's and comments
            if(len(tmp)>0):
                print(tmp)
            tmp = ''
        else:
            tmp += peek
Because you may have nested { { } } like
{\n
    { comment1 }\n
    comment2\n
    { comment3 }\n
}\n
so better use skip to count { }
string = """ beGIn west WEST north//comment1 \n
north north west East east south\n
// comment west\n
{\n
    { comment1 }\n
    comment2\n
    { comment3 }\n
}\n end
"""
tokens = []
tmp = ''
skip = 0
for i, peek in enumerate(string.lower()):

    if peek == '{':
        skip += 1
    elif peek == '}':
        skip -= 1
    elif not skip:  # elif skip == 0:

        if peek == ' ' or peek == '\n':
            tokens.append(tmp)
            # ignoring WS's and comments
            if(len(tmp)>0):
                print(tmp)
            tmp = ''
        else:
            tmp += peek
But maybe it would be better to get all as tokens and later filter tokens. But I skip this idea.
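A rough sketch of that skipped idea (assuming the tokenizer emitted hypothetical (kind, value) pairs, with comment tokens tagged 'COMMENT'):
words = [value for kind, value in tokens if kind != 'COMMENT']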
EDIT:
Version using the Python module sly, which works similarly to the C/C++ tools lex/yacc.
I found the regex for MULTI_LINE_COMMENT in another parser-building tool - lark:
syntax for multiline comments
from sly import Lexer, Parser

class MyLexer(Lexer):
    # Create it before defining regex for Tokens
    tokens = { NAME, ONE_LINE_COMMENT, MULTI_LINE_COMMENT }

    ignore = ' \t'

    # Tokens
    NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'
    ONE_LINE_COMMENT = '\/\/.*'
    MULTI_LINE_COMMENT = '{(.|\n)*}'

    # Ignored pattern
    ignore_newline = r'\n+'

    # Extra action for newlines
    def ignore_newline(self, t):
        self.lineno += t.value.count('\n')

    # Work with errors
    def error(self, t):
        print("Illegal character '%s'" % t.value[0])
        self.index += 1

if __name__ == '__main__':

    text = """ beGIn west WEST north//comment1 
north north west East east south
// comment west
{
    { comment1 }
    comment2
    { comment3 }
}
 end
"""

    lexer = MyLexer()
    tokens = lexer.tokenize(text)
    for item in tokens:
        print(item.type, ':', item.value)
Result:
NAME : beGIn
NAME : west
NAME : WEST
NAME : north
ONE_LINE_COMMENT : //comment1
NAME : north
NAME : north
NAME : west
NAME : East
NAME : east
NAME : south
ONE_LINE_COMMENT : // comment west
MULTI_LINE_COMMENT : {
{ comment1 }
comment2
{ comment3 }
}
NAME : end
A:
@furas answer works, but to make it count newlines properly, use the _ decorator:
@_('{(.|\n)*}')
def MULTILINE_COMMENT(self, t):
    self.lineno += t.value.count('\n')
    return t
|
How can I ignore comments in a string based on compiler design?
|
I want to ignore every comment like { comments } and // comments.
I have a pointer named peek that checks my string character by character. I know how to ignore newlines, tabs, and spaces but I don't know how to ignore comments.
string = """ beGIn west WEST north//comment1 \n
north north west East east south\n
// comment west\n
{\n
    comment\n
}\n end
"""
tokens = []
tmp = ''
for i, peek in enumerate(string.lower()):
    if peek == ' ' or peek == '\n':
        tokens.append(tmp)
        # ignoring WS's and comments
        if(len(tmp)>0):
            print(tmp)
        tmp = ''
    else:
        tmp += peek
Here is my result:
begin
west
west
north//
comment1
north
north
west
east
east
south
{
comment
}
end
As you can see, spaces are ignored but comments aren't.
How can I get a result like below?
begin
west
west
north
north
north
west
east
east
south
end
|
[
"Simply use global variable skip = False and set it True when you get { and set False when you get } and the rest of your if/else run in if not skip:\nstring = \"\"\" beGIn west WEST north//comment1 \\n\nnorth north west East east south\\n\n// comment west\\n\n{\\n\n comment\\n\n}\\n end\n\"\"\"\n\ntokens = []\ntmp = ''\nskip = False\n\nfor i, peek in enumerate(string.lower()):\n\n if peek == '{':\n skip = True\n elif peek == '}':\n skip = False\n elif not skip:\n\n if peek == ' ' or peek == '\\n':\n tokens.append(tmp)\n # ignoing WS's and comments\n if(len(tmp)>0): \n print(tmp)\n tmp = ''\n else:\n tmp += peek\n\nBecause you may have nested { { } } like\n{\\n\n { comment1 }\\n\n comment2\\n\n { comment3 }\\n\n}\\n\n\nso better use skip to count { }\nstring = \"\"\" beGIn west WEST north//comment1 \\n\nnorth north west East east south\\n\n// comment west\\n\n{\\n\n { comment1 }\\n\n comment2\\n\n { comment3 }\\n\n}\\n end\n\"\"\"\n\ntokens = []\ntmp = ''\nskip = 0\n\nfor i, peek in enumerate(string.lower()):\n\n if peek == '{':\n skip += 1\n elif peek == '}':\n skip -= 1\n elif not skip: # elif skip == 0:\n\n if peek == ' ' or peek == '\\n':\n tokens.append(tmp)\n # ignoing WS's and comments\n if(len(tmp)>0): \n print(tmp)\n tmp = ''\n else:\n tmp += peek\n\nBut maybe it would be better to get all as tokens and later filter tokens. But I skip this idea.\n\nEDIT:\nVersion using Python module sly which works similar to C/C++ tools lex/yacc\nRegex for MULTI_LINE_COMMENT I found in other tool for building parsers - lark:\nsyntax for multiline comments\nfrom sly import Lexer, Parser\n\nclass MyLexer(Lexer):\n # Create it befor defining regex for Tokens\n tokens = { NAME, ONE_LINE_COMMENT, MULTI_LINE_COMMENT }\n\n ignore = ' \\t'\n\n # Tokens\n NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'\n ONE_LINE_COMMENT = '\\/\\/.*'\n MULTI_LINE_COMMENT = '{(.|\\n)*}'\n\n # Ignored pattern\n ignore_newline = r'\\n+'\n\n # Extra action for newlines\n def ignore_newline(self, t):\n self.lineno += t.value.count('\\n')\n\n # Work with errors\n def error(self, t):\n print(\"Illegal character '%s'\" % t.value[0])\n self.index += 1\n\nif __name__ == '__main__':\n \n text = \"\"\" beGIn west WEST north//comment1 \nnorth north west East east south\n// comment west\n{\n { comment1 }\n comment2\n { comment3 }\n}\n end\n\"\"\"\n \n lexer = MyLexer()\n tokens = lexer.tokenize(text)\n for item in tokens:\n print(item.type, ':', item.value)\n\nResult:\nNAME : beGIn\nNAME : west\nNAME : WEST\nNAME : north\nONE_LINE_COMMENT : //comment1 \nNAME : north\nNAME : north\nNAME : west\nNAME : East\nNAME : east\nNAME : south\nONE_LINE_COMMENT : // comment west\nMULTI_LINE_COMMENT : {\n { comment1 }\n comment2\n { comment3 }\n}\nNAME : end\n\n",
"@furas answer works, but to make it count newlines properly, use the _ decorator:\n@_('{(.|\\n)*}')\ndef MULTILINE_COMMENT(self, t):\n self.lineno += t.value.count('\\n')\n return t\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"compiler_construction",
"lexical_analysis",
"python"
] |
stackoverflow_0070069741_compiler_construction_lexical_analysis_python.txt
|
Q:
Scoping in Python 'for' loops
I'm not asking about Python's scoping rules; I understand generally how scoping works in Python for loops. My question is why the design decisions were made in this way. For example (no pun intended):
for foo in xrange(10):
    bar = 2
print(foo, bar)
The above will print (9,2).
This strikes me as weird: 'foo' is really just controlling the loop, and 'bar' was defined inside the loop. I can understand why it might be necessary for 'bar' to be accessible outside the loop (otherwise, for loops would have very limited functionality). What I don't understand is why it is necessary for the control variable to remain in scope after the loop exits. In my experience, it simply clutters the global namespace and makes it harder to track down errors that would be caught by interpreters in other languages.
A:
The likeliest answer is that it just keeps the grammar simple, hasn't been a stumbling block for adoption, and many have been happy with not having to disambiguate the scope to which a name belongs when assigning to it within a loop construct. Variables are not declared within a scope, it is implied by the location of assignment statements. The global keyword exists just for this reason (to signify that assignment is done at a global scope).
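A quick illustration of that last point:
counter = 0

def bump():
    global counter     # without this, the assignment below would create a new local name
    counter = counter + 1

bump()
print(counter)  # 1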
Update
Here's a good discussion on the topic: http://mail.python.org/pipermail/python-ideas/2008-October/002109.html
Previous proposals to make for-loop variables local to the loop have stumbled on the problem of existing code that relies on the loop variable keeping its value after exiting the loop, and it seems that this is regarded as a desirable feature.
In short, you can probably blame it on the Python community :P
A:
Python does not have blocks, as do some other languages (such as C/C++ or Java). Therefore, scoping unit in Python is a function.
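For example:
def f():
    for i in range(3):
        x = i * 2   # i and x belong to f's scope, not to a loop-local block
    return i, x     # both are still visible after the loop

print(f())  # prints (2, 4)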
A:
A really useful case for this is when using enumerate and you want the total count in the end:
for count, x in enumerate(someiterator, start=1):
    dosomething(count, x)
print "I did something {0} times".format(count)
Is this necessary? No. But, it sure is convenient.
Another thing to be aware of: in Python 2, variables in list comprehensions are leaked as well:
>>> [x**2 for x in range(10)]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>> x
9
But, the same does not apply to Python 3.
A:
One of the primary influences for Python is ABC, a language developed in the Netherlands for teaching programming concepts to beginners. Python's creator, Guido van Rossum, worked on ABC for several years in the 1980s. I know almost nothing about ABC, but as it is intended for beginners, I suppose it must have a limited number of scopes, much like early BASICs.
A:
If you have a break statement in the loop (and want to use the iteration value later, perhaps to pick back up, index something, or give status), it saves you one line of code and one assignment, so there's a convenience.
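A quick sketch (names and target are hypothetical):
for i, name in enumerate(names):
    if name == target:
        break
print("stopped at index", i)  # i keeps its value from the iteration that broke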
A:
It is a design choice in Python, which often makes some tasks easier than in other languages with the typical block scope behavior.
But oftentimes you would still miss the typical block scopes, because, say, you might have large temporary arrays which should be freed as soon as possible. It could be done by temporary function/class tricks but still there is a neater solution achieved with directly manipulating the interpreter state.
from scoping import scoping
a = 2

with scoping():
    assert(2 == a)
    a = 3
    b = 4
    scoping.keep('b')
    assert(3 == a)

assert(2 == a)
assert(4 == b)
https://github.com/l74d/scoping
A:
I might be wrong, but if I am certain that I don't need to access foo outside the loop, I would write it in this way
for _foo in xrange(10):
    bar = 2
|
Scoping in Python 'for' loops
|
I'm not asking about Python's scoping rules; I understand generally how scoping works in Python for loops. My question is why the design decisions were made in this way. For example (no pun intended):
for foo in xrange(10):
    bar = 2
print(foo, bar)
The above will print (9,2).
This strikes me as weird: 'foo' is really just controlling the loop, and 'bar' was defined inside the loop. I can understand why it might be necessary for 'bar' to be accessible outside the loop (otherwise, for loops would have very limited functionality). What I don't understand is why it is necessary for the control variable to remain in scope after the loop exits. In my experience, it simply clutters the global namespace and makes it harder to track down errors that would be caught by interpreters in other languages.
|
[
"The likeliest answer is that it just keeps the grammar simple, hasn't been a stumbling block for adoption, and many have been happy with not having to disambiguate the scope to which a name belongs when assigning to it within a loop construct. Variables are not declared within a scope, it is implied by the location of assignment statements. The global keyword exists just for this reason (to signify that assignment is done at a global scope).\nUpdate\nHere's a good discussion on the topic: http://mail.python.org/pipermail/python-ideas/2008-October/002109.html\n\nPrevious proposals to make for-loop\nvariables local to the loop have\nstumbled on the problem of existing\ncode that relies on the loop variable\nkeeping its value after exiting the\nloop, and it seems that this is\nregarded as a desirable feature.\n\nIn short, you can probably blame it on the Python community :P\n",
"Python does not have blocks, as do some other languages (such as C/C++ or Java). Therefore, scoping unit in Python is a function.\n",
"A really useful case for this is when using enumerate and you want the total count in the end:\nfor count, x in enumerate(someiterator, start=1):\n dosomething(count, x)\nprint \"I did something {0} times\".format(count)\n\nIs this necessary? No. But, it sure is convenient.\nAnother thing to be aware of: in Python 2, variables in list comprehensions are leaked as well:\n>>> [x**2 for x in range(10)]\n[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]\n>>> x\n9\n\nBut, the same does not apply to Python 3.\n",
"One of the primary influences for Python is ABC, a language developed in the Netherlands for teaching programming concepts to beginners. Python's creator, Guido van Rossum, worked on ABC for several years in the 1980s. I know almost nothing about ABC, but as it is intended for beginners, I suppose it must have a limited number of scopes, much like early BASICs.\n",
"If you have a break statement in the loop (and want to use the iteration value later, perhaps to pick back up, index something, or give status), it saves you one line of code and one assignment, so there's a convenience.\n",
"It is a design choice in Python, which often makes some tasks easier than in other languages with the typical block scope behavior.\nBut oftentimes you would still miss the typical block scopes, because, say, you might have large temporary arrays which should be freed as soon as possible. It could be done by temporary function/class tricks but still there is a neater solution achieved with directly manipulating the interpreter state.\nfrom scoping import scoping\na = 2 \n\nwith scoping():\n assert(2 == a)\n a = 3\n b = 4\n scoping.keep('b')\n assert(3 == a) \n\nassert(2 == a) \nassert(4 == b)\n\nhttps://github.com/l74d/scoping\n",
"I might be wrong, but if I am certain that I don't need to access foo outside the loop, I would write it in this way\nfor _foo in xrange(10):\n bar = 2\n\n"
] |
[
143,
74,
48,
3,
1,
1,
0
] |
[
"For starters, if variables were local to loops, those loops would be useless for most real-world programming.\nIn the current situation:\n# Sum the values 0..9\ntotal = 0\nfor foo in xrange(10):\n total = total + foo\nprint total\n\nyields 45. Now, consider how assignment works in Python. If loop variables were strictly local:\n# Sum the values 0..9?\ntotal = 0\nfor foo in xrange(10):\n # Create a new integer object with value \"total + foo\" and bind it to a new\n # loop-local variable named \"total\".\n total = total + foo\nprint total\n\nyields 0, because total inside the loop after the assignment is not the same variable as total outside the loop. This would not be optimal or expected behavior.\n"
] |
[
-8
] |
[
"python",
"scope"
] |
stackoverflow_0003611760_python_scope.txt
|
Q:
trying to make subclass but nothing seems to work :(
So I'm basically trying to fetch some data from the duolingo api and make all the different parts accessible via a class (I think that's the best way to make the data accessible in other files?)
I currently have this code:
class DuoData:
    def __init__(self, username):
        self.username = username
        self.URL = "https://www.duolingo.com/2017-06-30/users?username={username}"
        self.data = requests.get(self.URL.format(username=self.username))
        self.data_json = self.data.json()

    def get_streak(self):
        return self.data_json['users'][0]['streak']

    class ActiveLanguage:
        def __init__(self, data_json):
            super().__init__()
            self.active_language = data_json['users'][0]['courses'][0]

        def get_name(self):
            return self.active_language['title']

        def get_xp(self):
            return self.active_language['xp']

        def get_crowns(self):
            return self.active_language['crowns']
the get_streak function works perfectly, so
duo = DuoData("username")
print(duo.get_streak())
prints the streak number like I want, but the following code doesn't work:
print(duo.ActiveLanguage.get_name())
I want it so that duo.ActiveLanguage.get_name() returns the name of the language, but it doesn't work like this; I get the following error:
TypeError: DuoData.ActiveLanguage.get_name() missing 1 required positional argument: 'self'
I already tried lots of different things and this was my best approach, but it still doesn't work. Can anyone help me? This is my first time working with classes (in Python).
I think maybe subclasses aren't the right approach?
My question is: can I have a class or whatever with a few categories that each have different values?
like: data.userdata.streak and data.userdata.id and data.activelanguage.name and so on?
A:
This is how you would do a subclass. A subclass means that ActiveLanguage is a specific kind of DuoData.
However, in this particular case, I'm not sure that's what you want. It may be you want "encapsulation", where ActiveLanguage is a class that stands alone and USES an instance of DuoData to do its work.
class DuoData:
    def __init__(self, username):
        self.username = username
        self.URL = "https://www.duolingo.com/2017-06-30/users?username={username}"
        self.data = requests.get(self.URL.format(username=self.username))
        self.data_json = self.data.json()

    def get_streak(self):
        return self.data_json['users'][0]['streak']

class ActiveLanguage(DuoData):
    def __init__(self, username):
        super().__init__(username)
        self.active_language = self.data_json['users'][0]['courses'][0]

    def get_name(self):
        return self.active_language['title']

    def get_xp(self):
        return self.active_language['xp']

    def get_crowns(self):
        return self.active_language['crowns']
duo = DuoData("username")
print(duo.get_streak())
acl = ActiveLanguage("username")
print(acl.get_streak())
print(acl.get_name())
A:
Are you sure you want a subclass? If ActiveLanguage is a subclass of DuoData then you are, in effect, saying that ActiveLanguage "is-a" DuoData which does not seem to be your intent. If your intent is to say that DuoData "has-a" ActiveLanguage attribute then you may want to use composition rather than inheritance:
class ActiveLanguage:
    def __init__(self, active_language):
        self.active_language = active_language

    def get_name(self):
        return self.active_language['title']

    def get_xp(self):
        return self.active_language['xp']

    def get_crowns(self):
        return self.active_language['crowns']

class DuoData:
    def __init__(self, username):
        self.username = username
        self.URL = "https://www.duolingo.com/2017-06-30/users?username={username}"
        self.data = requests.get(self.URL.format(username=self.username))
        self.data_json = self.data.json()

    def get_streak(self):
        return self.data_json['users'][0]['streak']

    def get_active_language(self):
        return ActiveLanguage(self.data_json['users'][0]['courses'][0])
duo = DuoData("username")
print(duo.get_streak())
acl = duo.get_active_language()
print(acl.get_name())
Edit: Fixed a cut and paste error by changing:
return self.ActiveLanguage(self.data_json['users'][0]['courses'][0])
to:
return ActiveLanguage(self.data_json['users'][0]['courses'][0])
Thanks to @Tim and @Infinibyte for spotting it.
A:
First of all, thanks to Tim and Rhurwitz, I threw your code together and it magically worked! This is the code:
import requests
class DuoData:
    def __init__(self, username):
        self.username = username
        self.URL = "https://www.duolingo.com/2017-06-30/users?username={username}"
        self.data = requests.get(self.URL.format(username=self.username))
        self.data_json = self.data.json()
        self.active = self.ActiveLanguage(self.data_json['users'][0]['courses'][0])

    def get_streak(self):
        return self.data_json['users'][0]['streak']

    class ActiveLanguage:
        def __init__(self, active_language):
            self.active_language = active_language

        def get_name(self):
            return self.active_language['title']

        def get_xp(self):
            return self.active_language['xp']

        def get_crowns(self):
            return self.active_language['crowns']
duo = DuoData("Infinibyte")
print(duo.get_streak())
print(duo.active.get_name())
print(duo.active.get_xp())
print(duo.active.get_crowns())
It works just like I want. However, I do not understand what I did (subclasses, nested classes, ... idk) so any explanation would be welcome ;)
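Briefly: what this code uses is a nested class, not a subclass. ActiveLanguage just lives inside DuoData's namespace, and __init__ stores a single instance of it as self.active; no inheritance is involved. A minimal sketch of the distinction (Outer and Child are hypothetical names):
class Outer:
    class Nested:       # nested class: it only lives in Outer's namespace
        pass

class Child(Outer):     # subclass: Child inherits everything Outer defines
    pass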
|
trying to make subclass but nothing seems to work :(
|
So I'm basically trying to fetch some data from the duolingo api and make all the different parts accessible via a class (I think that's the best way to make the data accessible in other files?)
I currently have this code:
class DuoData:
    def __init__(self, username):
        self.username = username
        self.URL = "https://www.duolingo.com/2017-06-30/users?username={username}"
        self.data = requests.get(self.URL.format(username=self.username))
        self.data_json = self.data.json()

    def get_streak(self):
        return self.data_json['users'][0]['streak']

    class ActiveLanguage:
        def __init__(self, data_json):
            super().__init__()
            self.active_language = data_json['users'][0]['courses'][0]

        def get_name(self):
            return self.active_language['title']

        def get_xp(self):
            return self.active_language['xp']

        def get_crowns(self):
            return self.active_language['crowns']
the get_streak function works perfectly, so
duo = DuoData("username")
print(duo.get_streak())
prints the streak number like I want, but the following code doesn't work:
print(duo.ActiveLanguage.get_name())
I want it so that duo.ActiveLanguage.get_name() returns the name of the language, but it doesn't work like this; I get the following error:
TypeError: DuoData.ActiveLanguage.get_name() missing 1 required positional argument: 'self'
I already tried lots of different things and this was my best approach, but it still doesn't work. Can anyone help me? This is my first time working with classes (in Python).
I think maybe subclasses aren't the right approach?
My question is: can I have a class or whatever with a few categories that each have different values?
like: data.userdata.streak and data.userdata.id and data.activelanguage.name and so on?
|
[
"This is how you would do a subclass. A subclass means that ActiveLanguage is a specific kind of DuoData.\nHowever, in this particular case, I'm not sure that's what you want. It may be you want \"encapsulation\", where ActiveLanguage is a class that stands alone and USES an instance of DuoData to do its work.\nclass DuoData:\n def __init__(self, username):\n self.username = username\n self.URL = \"https://www.duolingo.com/2017-06-30/users?username={username}\"\n self.data = requests.get(self.URL.format(username=self.username))\n self.data_json = self.data.json()\n\n def get_streak(self):\n return self.data_json['users'][0]['streak']\n\nclass ActiveLanguage(DuoData)\n def __init__(self, username):\n super().__init__(username)\n self.active_language = self.data_json['users'][0]['courses'][0]\n \n def get_name(self):\n return self.active_language['title']\n\n def get_xp(self):\n return self.active_language['xp']\n \n def get_crowns(self):\n return self.active_language['crowns']\n\nduo = DuoData(\"username\")\nprint(duo.get_streak())\nacl = ActiveLanguage(\"username\")\nprint(acl.get_streak())\nprint(acl.get_name())\n\n",
"Are you sure you want a subclass? If ActiveLanguage is a subclass of DuoData then you are, in effect, saying that ActiveLanguage \"is-a\" DuoData which does not seem to be your intent. If your intent is to say that DuoData \"has-a\" ActiveLanguage attribute then you may want to use composition rather than inheritance:\nclass ActiveLanguage:\n def __init__(self, active_language):\n self.active_language = active_language\n \n def get_name(self):\n return self.active_language['title']\n\n def get_xp(self):\n return self.active_language['xp']\n \n def get_crowns(self):\n return self.active_language['crowns']\n\nclass DuoData:\n def __init__(self, username):\n self.username = username\n self.URL = \"https://www.duolingo.com/2017-06-30/users?username={username}\"\n self.data = requests.get(self.URL.format(username=self.username))\n self.data_json = self.data.json()\n\n def get_streak(self):\n return self.data_json['users'][0]['streak']\n\n def get_active_language(self):\n return ActiveLanguage(self.data_json['users'][0]['courses'][0])\n\n\nduo = DuoData(\"username\")\nprint(duo.get_streak())\nacl = duo.get_active_language()\nprint(acl.get_name())\n\nEdit: Fixed a cut and paste error by changing:\nreturn self.ActiveLanguage(self.data_json['users'][0]['courses'][0])\n\nto:\nreturn ActiveLanguage(self.data_json['users'][0]['courses'][0])\n\nThanks to @Tim and @Infinibyte for spotting it.\n",
"First of all, thanks to Tim and Rhurwitz, I threw your code together and it magically worked! This is the code:\nimport requests\n\nclass DuoData:\n def __init__(self, username):\n self.username = username\n self.URL = \"https://www.duolingo.com/2017-06-30/users?username={username}\"\n self.data = requests.get(self.URL.format(username=self.username))\n self.data_json = self.data.json()\n self.active = self.ActiveLanguage(self.data_json['users'][0]['courses'][0])\n\n def get_streak(self):\n return self.data_json['users'][0]['streak']\n\n class ActiveLanguage:\n def __init__(self, active_language):\n self.active_language = active_language\n \n def get_name(self):\n return self.active_language['title']\n\n def get_xp(self):\n return self.active_language['xp']\n \n def get_crowns(self):\n return self.active_language['crowns']\n\n\nduo = DuoData(\"Infinibyte\")\nprint(duo.get_streak())\nprint(duo.active.get_name())\nprint(duo.active.get_xp())\nprint(duo.active.get_crowns())\n\nIt works just like I want. However, I do not understand what I did (subclasses, nested classes, ... idk) so any explanation would be welcome ;)\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"class",
"oop",
"python"
] |
stackoverflow_0074538048_class_oop_python.txt
|
Q:
Fill form input with Scrapy
I'm trying to input a word to search products with Scrapy, this is the url = https://www.mercadolivre.com.br/
The problem is that I can't even get past the input form, receiving the following error:
'[scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.mercadolivre.com.br/jm/search?as_word=&cb1-edit=smartphone> (failed 2 times): 502 Bad Gateway'
My code is this:
class MlSpider(scrapy.Spider):
    name = 'ml'
    allowed_domains = ['www.mercadolivre.com.br']
    start_urls = ['https://www.mercadolivre.com.br/']

    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formdata={'cb1-edit': "smartphone"},
            callback=self.scrape_data
        )

    def scrape_data(self, response):
        for element in response.xpath('//li[@class="ui-search-layout__item shops__layout-item"]'):
            item = element.xpath('//li[@class="ui-search-layout__item shops__layout-item"]//h2/text()').get()
            price = element.xpath('//div[@class="ui-search-price__second-line shops__price-second-line"]').getall()
            link = element.xpath('./a/@href').get()
            yield {
                "item": item,
                "price": price,
                "link": link
            }
I believe I'm passing the wrong parameters to formdata but can't figure out what it is.
I tried to use formxpath "/html/body/header/div/form" before formdata, but still got a bad gateway.
A:
There is no need to use the FormRequest.
Their search API works by simply adding the search term as the last path segment of the URL.
For example:
import scrapy

search_terms = ['smartphone', 'charger']

class Mlspider(scrapy.Spider):
    name = 'ml'
    start_urls = ['https://lista.mercadolivre.com.br/' + i for i in search_terms]

    def parse(self, response):
        for title in response.xpath('//h2[contains(@class,"shops__item-title")]/text()').getall():
            yield {'title': title}
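One caveat worth noting (an assumption on my part, not something the answer above tested): a search term containing spaces should probably be URL-encoded first, e.g.:
from urllib.parse import quote
start_urls = ['https://lista.mercadolivre.com.br/' + quote(i) for i in search_terms]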
Output:
...
https://lista.mercadolivre.com.br/charger>
{'title': 'Cabo Carregador Smartwatch P68 P70 P70s P70 Pro P80 Usb'}
2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Rápido Turbo 30w Pd Baseus Tipo-c & 2 Usb Compact'}
2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Turbo 20w Baseus Para iPhone 13 12 11 + Cabo 1m'}
2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Usb Smartwatch Relógio Amazfit Gts Mini 2e/gts 2'}
2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Para Relógio Amazfit Gts Gtr 42mm 47mm T-rex 1918'}
2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Cabo Usb-c Para Tipo-c 2 Metros Super Charge 100w Power'}
2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Turbo Usb + Tipo C 20w Baseus Quick Charger 3.0'}
2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Bateria Original Jbl Charge 3 Gsp1029102a 6000mah Greatpo'}
2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Cabo Carregador Smartwatch Xiaomi Mi Band 3 Usb Carga Rápida'}
2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Suporte Tablet Mesa Apoio Universal Videoconferência Live'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Usb Celular De Moto Universal Impermeavel Charge'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': '2 Baterias Com Cabo Carregador Para Controle Xbox One Charge'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Cabo Usb Tipo C Em L Turbo Quick Charge Gamer Mcdodo 1.8m '}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Suporte Tablet iPad Universal De Mesa Grande Resistente Pro'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Bateria Xbox One Xbox S Com Cabo Carregador Controle Charge'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Kit 10 Cabos Tipo C Android Celular Atacado Revenda'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': '44 Tomada Usb Carregador 2 Saídas Comum Van Onibus'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Kit 10 Cabos Compatível P/ iPhone Lightning Atacado Revenda'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Portátil Power Bank Magsafe Fast Charge 10000mah'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Baterias Automotivo 12v Pr10 Até 200 Amp + Brinde'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Cabo Usb Carregador Magnetico Para Huawei Band 6 Pro'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Cabo Carregador Para Smartwatch Hw12 Hw16 Hw22 Hw26 Hw26+'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Kit Play And Charge Xbox Series Bateria + Cabo Usb-c'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Tomada Usb Veicular Qc3.0 Turbo Voltímetro Aluminium On/off'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Bateira Original Jbl Charge 3 Gsp1029102a 7200mah + (brinde)'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Bateria Automotivo 2a Automático Visor Completo'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Cabo Usb Type C Para C/ Jbl Pulse 4 Flip 5 Charge 4 Jr Pop'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Bateria 12-24v Automotivo Carro Moto Caminhao 10a'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Samsung Super Rápido S21 Plus Ultra 25w'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Cabo Tubo Compativel iPhone iPad Lightning Baseus Zinco Led'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Cabo Usb Tipo C 2 Metros Fast Quick Charge 3a Trançado Forte'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Cabo Usb-c Para Tipo-c 1 Metro Quick Charge 60w Power'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Turbo Tipo C Para Samsung A20 A30 A50 A70 A80 S10'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Cabo Carregador Smartwatch Xiaomi Miband 3 Usb Relógio'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador De Parede Turbo 20w Tipo-c Baseus + Cabo iPhone'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Bateria Estacionária 8ah 12v Certificado Inmetro'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Para Relógio Garmin Forerruner 245 745 935 945 '}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Turbo Quick Charge 20w Pd Ultra Rápido Duplo'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Tomada Adaptador 12v Veicular Isqueiro Carro Casa'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Kit 20 Cabos Compatível P/ iPhone Lightning Atacado Revenda'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Cabo Usb Tipo C Em L Turbo Charge Espiral Gamer 1.8m Mcdodo'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Cabo Micro Usb Reforçado 2a Fast Charge Baseus 3 Metros'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Portátil Power Bank Magsafe Fast Charge 10000mah'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Wireless Sem Fio Indução Samsung Branco Preto'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Inteligente Bateria 4-6ah Lcd Dessulfatador 12v'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador De Bateria 12v Sylc Inteligente 12v Carro E Moto'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador De Bateria 6a 12v Inteligente Digital P/ Carro'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Inteligente Bateria Automotivo Portatil 12v 10a'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Relógio T80 T80s T80 Pro Magnético Smartwatch Usb'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Kit 2 Cabos Usb Tipo C Quick Charge Turbo Original Baseus'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Batmax Sony 3 Np-fw50+charge Duplo A7rii/a7sii/a6400/a6500'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Adaptador De Isqueiro Para Tomada Veicular 12v Carro Energia'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Bateria Original Jbl Charge 3 Gsp1029102a 6000mah Greatpo'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Bat-eria Compatível Jbl Charge 3 Original 1 Ano De Garantia '}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>
{'title': 'Carregador Samsung Super Rápido S21 Plus Ultra 25w'}
2022-11-22 12:12:36 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://lista.mercadolivre.com.br/smartphone> (referer: None)
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Samsung Galaxy M13 Dual SIM 128 GB azul 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Samsung Galaxy A03 Dual SIM 64 GB preto 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Samsung Galaxy S20 FE 5G Dual SIM 128 GB cloud navy 6 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Samsung Galaxy A03s Dual SIM 64 GB azul 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': ' Moto E20 Dual SIM 32 GB cinza 2 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': ' Moto G22 Dual SIM 128 GB mint green 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': ' Moto G9 Play Dual SIM 64 GB rosa-quartzo 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Apple iPhone 13 (512 GB) - Meia-noite'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Celular De Idoso Multilaser Flip Vita Duo Sim 32 Mb Preto'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Celular De Idoso Multilaser Flip Vita Duo Sim 32 Mb Dourado'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Samsung Galaxy A13 Dual SIM 128 GB azul 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': ' Moto G41 Dual SIM 128 GB pearl gold 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Smartphone Galaxy A03 Core 32gb 2gb Verde Samsung'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Samsung Galaxy S21+ 5G Dual SIM 256 GB violeta 8 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Smartphone Motorola Moto E22 64gb 4gb 6.5 Dual Chip Azul'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Samsung Galaxy Note20 5G Dual SIM 256 GB verde-místico 8 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Samsung Galaxy Galaxy M53 5G Dual SIM 128 GB verde 8 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': ' Moto G5 Dual SIM 32 GB cinza-lunar 2 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Celular LG 360 Retrô Simples P/ Idosos Números Grandes Flip'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Telefone Celular LG 360 Retrô Simples P Idosos Número Grande'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': ' Moto G22 Dual SIM 64 GB cosmic black 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Celular Moto G20 Xt2128 Azul 64gb Mem. Interna 4gb Ram'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Multilaser Flip Vita Dual SIM 32 MB azul 32 MB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Xiaomi Redmi Note 10S Dual SIM 128 GB ocean blue 6 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Xiaomi Redmi Note 11 (Snapdragon) Dual SIM 128 GB graphite gray 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Realme C35 Dual SIM 64 GB glowing green 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Xiaomi Pocophone Poco X4 Pro 5G Dual SIM 256 GB laser black 8 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Samsung Galaxy A22 Dual SIM 128 GB preto 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'TCL 103 Dual SIM 64 GB power grey 2 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Smartphone Tcl L7 32gb Preto'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Telefone Celular LG Antigo Simples Para Idosos E Rural 2g'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Samsung Galaxy A53 5G Dual SIM 128 GB branco 8 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': ' Moto E32 Dual SIM 64 GB rosa 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Xiaomi Pocophone Poco M3 Pro 5G Dual SIM 128 GB power black 6 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Xiaomi Redmi 10 Dual SIM 128 GB carbon gray 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Smartphone Moto Edge 30 256gb 8gb Ram Rosê Motorola'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Xiaomi Pocophone Poco M4 Pro Dual SIM 128 GB power black 6 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Apple iPhone 13 Pro Max (256 GB) - Azul-Sierra'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': ' Moto E32 64 GB azul 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Smartphone Galaxy S21 Fe 5g 6,4 128gb 6gb Ram Branco Samsung'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Samsung Galaxy A52 Dual SIM 128 GB lilás 6 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Samsung Galaxy A32 Dual SIM 128 GB preto 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': ' Moto G31 Dual SIM 128 GB azul 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Asus ZenFone 8 ZS590KS Dual SIM 128 GB obsidian black 8 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Positivo P26 Dual SIM 32 MB preto 32 MB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Multilaser F Dual SIM 32 GB café 1 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': ' Moto E40 Dual SIM 64 GB grafite 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Samsung Galaxy Z Flip4 5G 256 GB preto 8 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': 'Multilaser Vita IV Dual SIM preto'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
{'title': ' Moto G20 128 GB verde 4 GB RAM'}
2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>
...
|
Fill form input with Scrapy
|
I'm trying to input a word to search for products with Scrapy; this is the URL: https://www.mercadolivre.com.br/
The problem is that I can't even get past the input form; I receive the following error:
'[scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.mercadolivre.com.br/jm/search?as_word=&cb1-edit=smartphone> (failed 2 times): 502 Bad Gateway'
My code is this:
class MlSpider(scrapy.Spider):
    name = 'ml'
    allowed_domains = ['www.mercadolivre.com.br']
    start_urls = ['https://www.mercadolivre.com.br/']

    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formdata={'cb1-edit': "smartphone"},
            callback=self.scrape_data
        )

    def scrape_data(self, response):
        for element in response.xpath('//li[@class="ui-search-layout__item shops__layout-item"]'):
            item = element.xpath('//li[@class="ui-search-layout__item shops__layout-item"]//h2/text()').get()
            price = element.xpath('//div[@class="ui-search-price__second-line shops__price-second-line"]').getall()
            link = element.xpath('./a/@href').get()
            yield {
                "item": item,
                "price": price,
                "link": link
            }
I believe I'm passing the wrong parameters to formdata but can't figure out what it is.
I tried to use formxpath "/html/body/header/div/form" before formdata, but I still get a Bad Gateway.
|
[
"There is no need to use the FormRequest.\nTheir search api is to just add the search term as the last path of the url.\nfor example:\nimport scrapy\n\nsearch_terms = ['smartphone', 'charger']\n\nclass Mlspider(scrapy.Spider):\n name = 'ml'\n start_urls = ['https://lista.mercadolivre.com.br/' + i for i in search_terms]\n\n def parse(self, response):\n for title in response.xpath('//h2[contains(@class,\"shops__item-title\")]/text()').getall():\n yield {'title': title}\n\nOuptut:\n...\nhttps://lista.mercadolivre.com.br/charger>\n{'title': 'Cabo Carregador Smartwatch P68 P70 P70s P70 Pro P80 Usb'}\n2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Rápido Turbo 30w Pd Baseus Tipo-c & 2 Usb Compact'}\n2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Turbo 20w Baseus Para iPhone 13 12 11 + Cabo 1m'}\n2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Usb Smartwatch Relógio Amazfit Gts Mini 2e/gts 2'}\n2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Para Relógio Amazfit Gts Gtr 42mm 47mm T-rex 1918'}\n2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Cabo Usb-c Para Tipo-c 2 Metros Super Charge 100w Power'}\n2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Turbo Usb + Tipo C 20w Baseus Quick Charger 3.0'}\n2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Bateria Original Jbl Charge 3 Gsp1029102a 6000mah Greatpo'}\n2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Cabo Carregador Smartwatch Xiaomi Mi Band 3 Usb Carga Rápida'}\n2022-11-22 12:12:35 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Suporte Tablet Mesa Apoio Universal Videoconferência Live'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Usb Celular De Moto Universal Impermeavel Charge'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': '2 Baterias Com Cabo Carregador Para Controle Xbox One Charge'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Cabo Usb Tipo C Em L Turbo Quick Charge Gamer Mcdodo 1.8m '}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Suporte Tablet iPad Universal De Mesa Grande Resistente Pro'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Bateria Xbox One Xbox S Com Cabo Carregador Controle Charge'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Kit 10 Cabos Tipo C Android Celular Atacado Revenda'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': '44 Tomada Usb Carregador 2 Saídas Comum Van Onibus'}\n2022-11-22 
12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Kit 10 Cabos Compatível P/ iPhone Lightning Atacado Revenda'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Portátil Power Bank Magsafe Fast Charge 10000mah'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Baterias Automotivo 12v Pr10 Até 200 Amp + Brinde'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Cabo Usb Carregador Magnetico Para Huawei Band 6 Pro'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Cabo Carregador Para Smartwatch Hw12 Hw16 Hw22 Hw26 Hw26+'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Kit Play And Charge Xbox Series Bateria + Cabo Usb-c'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Tomada Usb Veicular Qc3.0 Turbo Voltímetro Aluminium On/off'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Bateira Original Jbl Charge 3 Gsp1029102a 7200mah + (brinde)'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Bateria Automotivo 2a Automático Visor Completo'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Cabo Usb Type C Para C/ Jbl Pulse 4 Flip 5 Charge 4 Jr Pop'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Bateria 12-24v Automotivo Carro Moto Caminhao 10a'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Samsung Super Rápido S21 Plus Ultra 25w'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Cabo Tubo Compativel iPhone iPad Lightning Baseus Zinco Led'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Cabo Usb Tipo C 2 Metros Fast Quick Charge 3a Trançado Forte'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Cabo Usb-c Para Tipo-c 1 Metro Quick Charge 60w Power'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Turbo Tipo C Para Samsung A20 A30 A50 A70 A80 S10'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Cabo Carregador Smartwatch Xiaomi Miband 3 Usb Relógio'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador De Parede Turbo 20w Tipo-c Baseus + Cabo iPhone'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Bateria Estacionária 8ah 12v Certificado Inmetro'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 
https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Para Relógio Garmin Forerruner 245 745 935 945 '}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Turbo Quick Charge 20w Pd Ultra Rápido Duplo'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Tomada Adaptador 12v Veicular Isqueiro Carro Casa'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Kit 20 Cabos Compatível P/ iPhone Lightning Atacado Revenda'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Cabo Usb Tipo C Em L Turbo Charge Espiral Gamer 1.8m Mcdodo'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Cabo Micro Usb Reforçado 2a Fast Charge Baseus 3 Metros'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Portátil Power Bank Magsafe Fast Charge 10000mah'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Wireless Sem Fio Indução Samsung Branco Preto'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Inteligente Bateria 4-6ah Lcd Dessulfatador 12v'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador De Bateria 12v Sylc Inteligente 12v Carro E Moto'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador De Bateria 6a 12v Inteligente Digital P/ Carro'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Inteligente Bateria Automotivo Portatil 12v 10a'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Relógio T80 T80s T80 Pro Magnético Smartwatch Usb'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Kit 2 Cabos Usb Tipo C Quick Charge Turbo Original Baseus'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Batmax Sony 3 Np-fw50+charge Duplo A7rii/a7sii/a6400/a6500'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Adaptador De Isqueiro Para Tomada Veicular 12v Carro Energia'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Bateria Original Jbl Charge 3 Gsp1029102a 6000mah Greatpo'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Bat-eria Compatível Jbl Charge 3 Original 1 Ano De Garantia '}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/charger>\n{'title': 'Carregador Samsung Super Rápido S21 Plus Ultra 25w'}\n2022-11-22 12:12:36 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://lista.mercadolivre.com.br/smartphone> (referer: 
None)\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Samsung Galaxy M13 Dual SIM 128 GB azul 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Samsung Galaxy A03 Dual SIM 64 GB preto 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Samsung Galaxy S20 FE 5G Dual SIM 128 GB cloud navy 6 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Samsung Galaxy A03s Dual SIM 64 GB azul 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': ' Moto E20 Dual SIM 32 GB cinza 2 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': ' Moto G22 Dual SIM 128 GB mint green 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': ' Moto G9 Play Dual SIM 64 GB rosa-quartzo 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Apple iPhone 13 (512 GB) - Meia-noite'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Celular De Idoso Multilaser Flip Vita Duo Sim 32 Mb Preto'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Celular De Idoso Multilaser Flip Vita Duo Sim 32 Mb Dourado'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Samsung Galaxy A13 Dual SIM 128 GB azul 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': ' Moto G41 Dual SIM 128 GB pearl gold 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Smartphone Galaxy A03 Core 32gb 2gb Verde Samsung'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Samsung Galaxy S21+ 5G Dual SIM 256 GB violeta 8 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Smartphone Motorola Moto E22 64gb 4gb 6.5 Dual Chip Azul'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Samsung Galaxy Note20 5G Dual SIM 256 GB verde-místico 8 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Samsung Galaxy Galaxy M53 5G Dual SIM 128 GB verde 8 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': ' Moto G5 Dual SIM 32 GB cinza-lunar 2 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Celular LG 360 Retrô Simples P/ Idosos Números Grandes Flip'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 
'Telefone Celular LG 360 Retrô Simples P Idosos Número Grande'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': ' Moto G22 Dual SIM 64 GB cosmic black 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Celular Moto G20 Xt2128 Azul 64gb Mem. Interna 4gb Ram'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Multilaser Flip Vita Dual SIM 32 MB azul 32 MB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Xiaomi Redmi Note 10S Dual SIM 128 GB ocean blue 6 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Xiaomi Redmi Note 11 (Snapdragon) Dual SIM 128 GB graphite gray 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Realme C35 Dual SIM 64 GB glowing green 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Xiaomi Pocophone Poco X4 Pro 5G Dual SIM 256 GB laser black 8 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Samsung Galaxy A22 Dual SIM 128 GB preto 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'TCL 103 Dual SIM 64 GB power grey 2 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Smartphone Tcl L7 32gb Preto'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Telefone Celular LG Antigo Simples Para Idosos E Rural 2g'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Samsung Galaxy A53 5G Dual SIM 128 GB branco 8 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': ' Moto E32 Dual SIM 64 GB rosa 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Xiaomi Pocophone Poco M3 Pro 5G Dual SIM 128 GB power black 6 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Xiaomi Redmi 10 Dual SIM 128 GB carbon gray 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Smartphone Moto Edge 30 256gb 8gb Ram Rosê Motorola'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Xiaomi Pocophone Poco M4 Pro Dual SIM 128 GB power black 6 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Apple iPhone 13 Pro Max (256 GB) - Azul-Sierra'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': ' Moto E32 64 GB azul 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from 
<200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Smartphone Galaxy S21 Fe 5g 6,4 128gb 6gb Ram Branco Samsung'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Samsung Galaxy A52 Dual SIM 128 GB lilás 6 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Samsung Galaxy A32 Dual SIM 128 GB preto 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': ' Moto G31 Dual SIM 128 GB azul 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Asus ZenFone 8 ZS590KS Dual SIM 128 GB obsidian black 8 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Positivo P26 Dual SIM 32 MB preto 32 MB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Multilaser F Dual SIM 32 GB café 1 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': ' Moto E40 Dual SIM 64 GB grafite 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Samsung Galaxy Z Flip4 5G 256 GB preto 8 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': 'Multilaser Vita IV Dual SIM preto'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n{'title': ' Moto G20 128 GB verde 4 GB RAM'}\n2022-11-22 12:12:36 [scrapy.core.scraper] DEBUG: Scraped from <200 https://lista.mercadolivre.com.br/smartphone>\n...\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"scrapy"
] |
stackoverflow_0074535831_python_scrapy.txt
|
Q:
How to work with MultipleCheckBox with Django?
I'm new to Django and I'm trying to make an application that registers the attendance of entrepreneurs (I'm currently working on this). There are some services that I would like to select, and sometimes the same person requires more than one service per appointment. Part of the application uses the Models and part uses the Forms, and I'd like to keep the two separate to keep the code organized, but I have no idea how to do it. I even created a separate class just for the tuple that holds the values, but I haven't managed to implement it. Can anyone help me? Here is the code:
models.py
from django.db import models
from django_cpf_cnpj.fields import CPFField, CNPJField
class CadastroEmpreendedor(models.Model):
    ABERTURA = 'ABERTURA MEI'
    ALTERACAO = 'ALTERAÇÃO CADASTRAL'
    INFO = 'INFORMAÇÕES'
    DAS = 'EMISSÃO DAS'
    PARC = 'PARCELAMENTO'
    EMISSAO_PARC = 'EMISSÃO DE PARCELA'
    CODIGO = 'CÓDIGO DE ACESSO'
    REGULARIZE = 'REGULARIZE'
    BAIXA = 'BAIXA MEI'
    CANCELADO = 'REGISTRO BAIXADO'

    descricao_atendimento = (
        (ABERTURA, 'FORMALIZAÇÃO'),
        (ALTERACAO, 'ALTERAÇÃO CADASTRAL'),
        (INFO, 'INFORMAÇÕES'),
        (DAS, 'EMISSÃO DAS'),
        (PARC, 'PARCELAMENTO'),
        (EMISSAO_PARC, 'EMISSÃO DE PARCELA'),
        (CODIGO, 'CÓDIGO DE ACESSO'),
        (REGULARIZE, 'REGULARIZE'),
        (BAIXA, 'BAIXA MEI'),
        (CANCELADO, 'REGISTRO BAIXADO'),
    )

    cnpj = CNPJField('CNPJ')
    cpf = CPFField('CPF')
    nome = models.CharField('Nome', max_length=120)
    nascimento = models.DateField()
    email = models.EmailField('Email', max_length=100)
    telefone_principal = models.CharField(max_length=11)
    telefone_alternativo = models.CharField(max_length=11, blank=True)
    descricao_atendimento

    def __str__(self) -> str:
        return self.nome


class DescricaoAtendimento(models.Model):
    descricao = models.ForeignKey(CadastroEmpreendedor, on_delete=models.CASCADE)
forms.py
from django import forms
from .models import DescricaoAtendimento
class EmpreendedorForm(forms.ModelForm):
    class Meta:
        model = DescricaoAtendimento
        fields = ['descricao']
        widgets = {'descricao': forms.CheckboxSelectMultiple()}
views.py
from django.shortcuts import render
from django.contrib import messages
from .forms import EmpreendedorForm
def cadastro_empreendedor(request):
    if str(request.method) == 'POST':
        form = EmpreendedorForm(request.POST, request.FILES)
        if form.is_valid():
            form.save()
            messages.success(request, 'Produto salvo com sucesso!')
            form = EmpreendedorForm()
        else:
            messages.success(request, 'Erro ao salvar produto!')
    else:
        form = EmpreendedorForm()
    context = {
        'form': form
    }
    return render(request, 'empreendedor.html', context)
If you have any tips, I really appreciate it; I started with Django almost a month ago, so there's a long way to go.
P.S.: I integrated with PostgreSQL and in the Django administration part I can save all the fields in the DB, but I can't implement that part of the checkbox.
At this moment, I get the error:
It is impossible to add a non-nullable field 'descricao' to descricaoatendimento without specifying a default. This is because the database needs something to populate existing rows.
Please select a fix:
1) Provide a one-off default now (will be set on all existing rows with a null value for this column)
2) Quit and manually define a default value in models.py.
In the template, I'm going to work with Bootstrap 4 to create the forms, but I'd like to resolve this first. I'm still learning English, so sorry for any mistakes.
A:
It sounds like you have many Entrepreneurs, each of which can choose many Services. This is a ManyToMany relationship, and you can create it in Django by having one model for each and creating the link between them like this:
class CadastroEmpreendedor(models.Model):
    ...
    # a string reference works here because DescricaoAtendimento is defined below
    descricao_atendimento = models.ManyToManyField('DescricaoAtendimento')


class DescricaoAtendimento(models.Model):
    nome = models.CharField('Nome', max_length=120, default="unnamed service")
In this case, every object/row in DescricaoAtendimento is a service. Each entrepreneur can have many services associated with them.
This way you don't need to create a model form for DescricaoAtendimento to choose services for an entrepreneur. As it's linked to them by a many-to-many relationship, you can have a CadastroEmpreendedor model form with just the descricao_atendimento field, and the various services become available as options.
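A minimal sketch of such a form (assuming the ManyToMany models above; adjust names to your app):
from django import forms
from .models import CadastroEmpreendedor

class EmpreendedorForm(forms.ModelForm):
    class Meta:
        model = CadastroEmpreendedor
        fields = ['descricao_atendimento']
        # renders the available services as checkboxes
        widgets = {'descricao_atendimento': forms.CheckboxSelectMultiple()}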
Django handles this by creating a 'through table', which is basically a table with two foreign key fields, one pointing to an entrepreneur and the other to a service. You can also create this table yourself as a through table - which is useful if you want to extend data about the relationship - e.g., a begin and end date for the entrepreneur's use of a service.
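For illustration, an explicit through table with that extra data might look like this (a sketch; the model name Atendimento and the date fields are hypothetical):
class Atendimento(models.Model):
    empreendedor = models.ForeignKey('CadastroEmpreendedor', on_delete=models.CASCADE)
    servico = models.ForeignKey('DescricaoAtendimento', on_delete=models.CASCADE)
    inicio = models.DateField()                    # hypothetical: when the service started
    fim = models.DateField(null=True, blank=True)  # hypothetical: when it ended
You would then point the ManyToManyField at it with through='Atendimento'.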
The error you are getting when you migrate isn't an error, per se. It seems you created a number of DescricaoAtendimento objects and then added the descricao field later. When you then try to migrate, Django wants you to provide a default value for the already existing rows, or allow the field to be empty (via blank=True in the model). You can assign a dummy value and then go back and change it in /admin later, or, if you don't have a lot of data, recreate your database and remigrate. Above I've used a default value to avoid this situation.
However, if you are dead set against extra tables, you might want to look at an extension like django-multiselectfield.
|
How to work with MultipleCheckBox with Django?
|
I'm new to Django and I'm trying to make an application that registers the attendance of entrepreneurs (I'm currently working on this). There are some services that I would like to select, and sometimes the same person requires more than one service per appointment. Part of the application uses the Models and part uses the Forms, and I'd like to keep the two separate to keep the code organized, but I have no idea how to do it. I even created a separate class just for the tuple that holds the values, but I haven't managed to implement it. Can anyone help me? Here is the code:
models.py
from django.db import models
from django_cpf_cnpj.fields import CPFField, CNPJField
class CadastroEmpreendedor(models.Model):
    ABERTURA = 'ABERTURA MEI'
    ALTERACAO = 'ALTERAÇÃO CADASTRAL'
    INFO = 'INFORMAÇÕES'
    DAS = 'EMISSÃO DAS'
    PARC = 'PARCELAMENTO'
    EMISSAO_PARC = 'EMISSÃO DE PARCELA'
    CODIGO = 'CÓDIGO DE ACESSO'
    REGULARIZE = 'REGULARIZE'
    BAIXA = 'BAIXA MEI'
    CANCELADO = 'REGISTRO BAIXADO'

    descricao_atendimento = (
        (ABERTURA, 'FORMALIZAÇÃO'),
        (ALTERACAO, 'ALTERAÇÃO CADASTRAL'),
        (INFO, 'INFORMAÇÕES'),
        (DAS, 'EMISSÃO DAS'),
        (PARC, 'PARCELAMENTO'),
        (EMISSAO_PARC, 'EMISSÃO DE PARCELA'),
        (CODIGO, 'CÓDIGO DE ACESSO'),
        (REGULARIZE, 'REGULARIZE'),
        (BAIXA, 'BAIXA MEI'),
        (CANCELADO, 'REGISTRO BAIXADO'),
    )

    cnpj = CNPJField('CNPJ')
    cpf = CPFField('CPF')
    nome = models.CharField('Nome', max_length=120)
    nascimento = models.DateField()
    email = models.EmailField('Email', max_length=100)
    telefone_principal = models.CharField(max_length=11)
    telefone_alternativo = models.CharField(max_length=11, blank=True)
    descricao_atendimento

    def __str__(self) -> str:
        return self.nome


class DescricaoAtendimento(models.Model):
    descricao = models.ForeignKey(CadastroEmpreendedor, on_delete=models.CASCADE)
forms.py
from django import forms
from .models import DescricaoAtendimento
class EmpreendedorForm(forms.ModelForm):
    class Meta:
        model = DescricaoAtendimento
        fields = ['descricao']
        widgets = {'descricao': forms.CheckboxSelectMultiple()}
views.py
from django.shortcuts import render
from django.contrib import messages
from .forms import EmpreendedorForm
def cadastro_empreendedor(request):
    if str(request.method) == 'POST':
        form = EmpreendedorForm(request.POST, request.FILES)
        if form.is_valid():
            form.save()
            messages.success(request, 'Produto salvo com sucesso!')
            form = EmpreendedorForm()
        else:
            messages.success(request, 'Erro ao salvar produto!')
    else:
        form = EmpreendedorForm()
    context = {
        'form': form
    }
    return render(request, 'empreendedor.html', context)
If you have any tips, I really appreciate it; I started with Django almost a month ago, so there's a long way to go.
P.S.: I integrated with PostgreSQL and in the Django administration part I can save all the fields in the DB, but I can't implement that part of the checkbox.
At this moment, I get the error:
It is impossible to add a non-nullable field 'descricao' to descricaoatendimento without specifying a default. This is because the database needs something to populate existing rows.
Please select a fix:
1) Provide a one-off default now (will be set on all existing rows with a null value for this column)
2) Quit and manually define a default value in models.py.
In the template, I'm going to work with Bootstrap 4 to create the forms, but I'd like to resolve this first. I'm still learning English, so sorry for any mistakes.
|
[
"It sounds like you have many Entrepreneurs, each of which can choose many Services. This is a ManyToMany Relationship and you can create it in Django by having one model for each and creating the link between them like this\nclass CadastroEmpreendedor(models.Model):\n ...\n descricao_atendimento = models.ManyToManyField(DescricaoAtendimento)\n\nclass DescricaoAtendimento(models.Model):\n nome = models.CharField('Nome', max_length=120, default=\"unnamed service\")\n \n\nIn this case, every object/row in DescricaoAtendimento is a service. Each entrepeneur can have many services associated with them.\nThis way you don't need to create a model form for DescricaoAtendimento to choose services for an entrepeneur. As it's linked to them by a manytomany relationship you can have an CadastroEmpreendedor model form with just the escricao_atendimento field and the various services become available as options.\nDjango handles this by creating a 'through table' which is basically a table with two fields of foreign keys, one pointing to an entrepeneur, and the other to a service. You can also create this table yourself as a through table - which is useful if you want to extend data about the relationship - eg, a begin and end date for the entrepeneur's use of a service.\nThe error you are getting when you migrate isn't an error, per se. It seems you created an number of DescricaoAtendimento objects and then added the descricao field later. When you then try and migrate, django wants you to provide a default value for the already existing rows, or allow the field to be empty (via blank=True in the model). You can assign a dummy value and then go back and change it in /admin later, or, if you don't have a lot of data, recreate your database and remigrate. Above I've used a default value to avoid this situation.\nHowever, if you are dead set against extra tables, you might want to look at an extension like django multiselectfield\n"
] |
[
0
] |
[] |
[] |
[
"checkbox",
"django",
"django_models",
"frameworks",
"python"
] |
stackoverflow_0074538014_checkbox_django_django_models_frameworks_python.txt
|
Q:
How to skip a tag when using Beautifulsoup find_all?
I want to edit an HTML document and parse some text using BeautifulSoup. I'm interested in <span> tags, but only the ones that are NOT inside a <table> element. I want to skip all tables when finding the <span> elements.
I've tried finding all <span> elements first and then filtering out the ones that have a <table> at any parent level. Here is the code, but this is too slow.
for tag in soup.find_all('span'):
    ancestor_tables = [x for x in tag.find_all_previous(name='table')]
    if len(ancestor_tables) > 0:
        continue
    text = tag.text
Is there a more efficient alternative? Is it possible to 'hide' / skip tags while searching for <span> in find_all method?
A:
The way I would approach this would be to simply remove all tables from the html before doing my find_all on span.
Here is a thread I found on removing tables. I like the accepted answer because .extract() gives you the opportunity to capture the removed tables, though .decompose() would be better if you don't care about anything in the tables.
Here is the code from this answer:
for table in soup.find_all("table"):
    table.extract()
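For completeness, the .decompose() variant mentioned above looks the same, but destroys the tables instead of returning them:
for table in soup.find_all("table"):
    table.decompose()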
A:
You can use .find_parent():
for tag in soup.find_all("span"):
    if tag.find_parent("table"):
        continue
    # we are not inside <table>
    # ...
|
How to skip a tag when using Beautifulsoup find_all?
|
I want to edit an HTML document and parse some text using BeautifulSoup. I'm interested in <span> tags, but only the ones that are NOT inside a <table> element. I want to skip all tables when finding the <span> elements.
I've tried finding all <span> elements first and then filtering out the ones that have a <table> at any parent level. Here is the code, but this is too slow.
for tag in soup.find_all('span'):
    ancestor_tables = [x for x in tag.find_all_previous(name='table')]
    if len(ancestor_tables) > 0:
        continue
    text = tag.text
Is there a more efficient alternative? Is it possible to 'hide' / skip tags while searching for <span> in find_all method?
|
[
"The way I would approach this would be to simply remove all tables from the html before doing my find_all on span.\nHere is a thread I found on removing tables. I like the accepted answer because .extract() gives you the opportunity to capture the removed tables, though .decompose() would be better if you don't care about anything in the tables.\nHere is the code from this answer:\n\nfor table in soup.find_all(\"table\"):\n table.extract()\n\n\n",
"You can use .find_parent():\nfor tag in soup.find_all(\"span\"):\n if tag.find_parent(\"table\"):\n continue\n # we are not inside <table>\n # ...\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"beautifulsoup",
"html",
"python"
] |
stackoverflow_0074538402_beautifulsoup_html_python.txt
|
Q:
Failed to upload photos on FaceBook 2022 using .send_keys() with selenium python
I'm trying to upload or post an image on Facebook with Selenium and Python.
For that I tried this.
This is the XPath of the "Add Photos/Videos" section:
post = driver.find_element_by_xpath('//*[@id="mount_0_0_fQ"]/div/div[1]/div/div[4]/div/div/div[1]/div/div[2]/div/div/div/form/div/div[1]/div/div/div/div[2]/div[1]/div[2]/div/div[1]/div/div[1]/div/div[1]/div/div')
post.send_keys("G:\PY SCRIPTS\IMAGES\img.png")
Apparently this worked in 2020; by now the structure of Facebook has changed and I get this error:
ElementNotInteractableException
A:
If you use free fb it's easier:
free fb
post = driver.find_element_by_name("view_post")
post.click()
post.send_keys(r"G:\PY SCRIPTS\IMAGES\img.png")
A:
This xpath worked for me trying to upload a video:
element = bot.find_element(By.XPATH, '//form[contains(@method, "POST")]//input[contains(@accept, "video")]')
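Used the same way as the snippets above - sending the file path straight to that input (the path here is a placeholder):
element.send_keys(r"C:\path\to\video.mp4")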
I'll try the API anyway cause I don't wanna get banned.
|
Failed to upload photos on FaceBook 2022 using .send_keys() with selenium python
|
I'm trying to upload or post an image on Facebook with Selenium and Python.
For that I tried this.
This is the XPath of the "Add Photos/Videos" section:
post = driver.find_element_by_xpath('//*[@id="mount_0_0_fQ"]/div/div[1]/div/div[4]/div/div/div[1]/div/div[2]/div/div/div/form/div/div[1]/div/div/div/div[2]/div[1]/div[2]/div/div[1]/div/div[1]/div/div[1]/div/div')
post.send_keys("G:\PY SCRIPTS\IMAGES\img.png")
Apparently this worked in 2020; by now the structure of Facebook has changed and I get this error:
ElementNotInteractableException
|
[
"if you use free fb It's easier\nfree fb\npost=driver.find_element_by_name(\"view_post\").click() \npost.send_keys(r\"G:\\PY SCRIPTS\\IMAGES\\img.png\")\n\n",
"This xpath worked for me trying to upload a video:\nelement = bot.find_element(By.XPATH, '//form[contains(@method, \"POST\")]//input[contains(@accept, \"video\")]')\n\nI'll try the API anyway cause I don't wanna get banned.\n"
] |
[
0,
0
] |
[] |
[] |
[
"facebook",
"python",
"selenium_webdriver",
"sendkeys"
] |
stackoverflow_0071058452_facebook_python_selenium_webdriver_sendkeys.txt
|
Q:
Typehint Union Dictionary branching error
I want to implement a function like the one below, but it throws a type hint warning.
def test(flag: bool) -> Dict[str, int] | Dict[str, str]:
    a: Dict[str, str] | Dict[str, int] = {}
    if flag:
        a['a'] = 1
    else:
        a['a'] = 'hello'
    return post_process(a)
With the following warning by Pylance:
Argument of type "Literal[1]" cannot be assigned to parameter "__value" of type "str" in function "__setitem__"
"Literal[1]" is incompatible with "str"PylancereportGeneralTypeIssues
I know the following is a solution, but it is not very semantically pleasing: a and b are semantically the same, yet they appear to be different.
def test(flag: bool) -> Dict[str, int] | Dict[str, str]:
    if flag:
        a: Dict[str, int] = {'a': 1}
        return post_process(a)
    else:
        b: Dict[str, str] = {'a': 'hello'}
        return post_process(b)
Is there a solution that looks like the code below and does not throw type hint warnings?
def test(flag: bool) -> Dict[str, int] | Dict[str, str]:
    a: Dict[str, str] | Dict[str, int] = {}
    if flag:
        a: Dict[str, int]
        a['a'] = 1
    else:
        a: Dict[str, str]
        a['a'] = 'hello'
    return post_process(a)
Note that if a weren't a dictionary but a literal type, it would work:
def test(flag: bool) -> int | str:
    a: int | str
    if flag:
        a = 1
    else:
        a = 'hello'
    return post_process(a)
I want an implementation that does not conflict with the type hint system, without sacrificing the readability of the actual code. Thank you.
I know type hint warnings are not part of the Python language itself, but I am using the Pylance linting system.
A:
Edit:
Since your comment outlined that you truly want the return value to be only ever a dict of strings or a dict of ints, and never a mix of both, you can use the @overload decorator to determine return types based on the boolean flag:
from typing import Dict, overload, Literal


@overload
def test(flag: Literal[True]) -> Dict[str, int]:
    ...


@overload
def test(flag: Literal[False]) -> Dict[str, str]:
    ...


def test(flag):
    a = {}
    if flag:
        a["a"] = 1
    else:
        a["a"] = "hello"
    return a


b = test(True)   # Intellisensed as Dict[str, int]

c = test(False)  # Intellisensed as Dict[str, str]
Original Post:
I was able to recreate your error and resolve it with Dict[str, int | str] like so:
def test(flag: bool) -> Dict[str, int | str]:
    a: Dict[str, int | str] = {}
    if flag:
        a['a'] = 1
    else:
        a['a'] = 'hello'
    return a
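Calling it then yields a single value type either way (union member order doesn't matter to the checker):
d = test(True)  # Intellisensed as Dict[str, int | str]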
Notice the difference from your original annotation: Dict[str, str] | Dict[str, int] says the returned dict is either ONLY a dict of strings or ONLY a dict of ints, so once the checker pins a down to one of the two, assigning the other value type to a['a'] fails. Dict[str, int | str], by contrast, is a single dict type whose values may be strings AND integers, so both assignments type-check. If you want the return type to be a dict ONLY containing strings or ONLY containing integers based on the flag, then check out the @overload decorator from the typing library, as shown above.
|
Typehint Union Dictionary branching error
|
I want to implement a function like the one below, but it throws a type hint warning.
def test(flag: bool) -> Dict[str, int] | Dict[str, str]:
    a: Dict[str, str] | Dict[str, int] = {}
    if flag:
        a['a'] = 1
    else:
        a['a'] = 'hello'
    return post_process(a)
With the following warning by Pylance:
Argument of type "Literal[1]" cannot be assigned to parameter "__value" of type "str" in function "__setitem__"
"Literal[1]" is incompatible with "str"PylancereportGeneralTypeIssues
I know the following is a solution, but it is not very semantically pleasing: a and b are semantically the same, yet they appear to be different.
def test(flag: bool) -> Dict[str, int] | Dict[str, str]:
    if flag:
        a: Dict[str, int] = {'a': 1}
        return post_process(a)
    else:
        b: Dict[str, str] = {'a': 'hello'}
        return post_process(b)
Is there a solution that looks like the code below and does not throw type hint warnings?
def test(flag: bool) -> Dict[str, int] | Dict[str, str]:
    a: Dict[str, str] | Dict[str, int] = {}
    if flag:
        a: Dict[str, int]
        a['a'] = 1
    else:
        a: Dict[str, str]
        a['a'] = 'hello'
    return post_process(a)
Note that if a weren't a dictionary but a literal type, it would work:
def test(flag: bool) -> int | str:
    a: int | str
    if flag:
        a = 1
    else:
        a = 'hello'
    return post_process(a)
I want an implementation that does not conflict with the type hint system, without sacrificing the readability of the actual code. Thank you.
I know type hint warnings are not part of the Python language itself, but I am using the Pylance linting system.
|
[
"Edit:\nSince your comment outlined that you truly want the return value to be only ever a dict of strings or a dict of ints, and never a mix of both, you can use the @overload decorator to determine return types based on the boolean flag:\nfrom typing import Dict, overload, Literal\n\n\n@overload\ndef test(flag: Literal[True]) -> Dict[str, int]:\n ...\n\n\n@overload\ndef test(flag: Literal[False]) -> Dict[str, str]:\n ...\n\n\ndef test(flag):\n a = {}\n if flag:\n a[\"a\"] = 1\n else:\n a[\"a\"] = \"hello\"\n return a\n\n\nb = test(True) # Intellisensed as Dict[str, int]\n\nc = test(False) # Intellisensed as Dict[str, str]\n\n\nOriginal Post:\nI was able to recreate your error and resolve it with Dict[str, str | int] like so:\ndef test(flag: bool)->Dict[str, int | str]:\n a: Dict[str, int | str] = {}\n if flag:\n a['a'] = 1\n else:\n a['a'] = 'hello'\n return a\n\nNotice that if you swap the order of the int and str in a: Dict[str, int | str] = {}, then either assignment of a['a'] will fail. This helps indicate that your type hint suggests that the return value is either ONLY a dict of strings, or ONLY a dict of ints, but when intellisense reads your code, it sees that there's a possibility of a variable dict type, that of strings AND integers. If you wanted to type the return to be either a dict ONLY containing strings based on a flag, or ONLY containing integers based on the same flag, then check out the @overload decorator from the typing library.\n"
] |
[
1
] |
[] |
[] |
[
"dictionary",
"python",
"type_hinting"
] |
stackoverflow_0074536240_dictionary_python_type_hinting.txt
|
Q:
How to check if second value of slice() is a certain value?
I have a function that takes in a slice. What I want to do is to check if the end value in this slice is equal to -1. If that is the case, I want to reset the slice to another value. I can't find supporting documentation and do not know how to proceed.
datalist=[None, 'Grey', 'EE20-700', 'EE-42-01', 'EE15-767', 'EE0-70650', 'B&B', 1, 1, 1, 1, 1, '300R', True, 'Nov. 2, 2022', 'Nov. 2, 2022', 1, 1, 1, True, 3]
#If we just apply the given slice_dimensions, the following happens:
slice_dimensions=slice(1, -1)
datalist[slice_dimensions]
['Grey', 'EE20-700', 'EE-42-01', 'EE15-767', 'EE0-70650', 'B&B', 1, 1, 1, 1, 1, '300R', True, 'Nov. 2, 2022', 'Nov. 2, 2022', 1, 1, 1, True]
As you can see this is problematic, because the last element of my list (3) is omitted with the current slice dimension. I have seen many solutions saying I should use len(datalist)-1 or something similar instead, but this is not a workable option because the list lengths differ, and I want to keep it simple. If we can just check whether the end of the slice is equal to -1 and change it to None, the problem is solved:
#If we just apply the given slice_dimensions, the following happens:
slice_dimensions=slice(1, None)
datalist[slice_dimensions]
['Grey', 'EE20-700', 'EE-42-01', 'EE15-767', 'EE0-70650', 'B&B', 1, 1, 1, 1, 1, '300R', True, 'Nov. 2, 2022', 'Nov. 2, 2022', 1, 1, 1, True, 3]
I would like to make a function to do so, but do not know how to proceed, something like this:
def slicer(slice_dimensions):
    if slice_dimensions[1] == -1:  # This part I do not know how to proceed
        slice_dimensions = slice(1, None)
    data_of_interest = datalist[slice_dimensions]
    return data_of_interest
How should I proceed in doing so?
A:
You could define a wrapper for slice(). Something like:
def nonwrapping_slice(start=0, stop=None, stride=1):
    if stop is not None and stop < 0 and stride > 0:
        stop = None
    return slice(start, stop, stride)
Then you can call nonwrapping_slice(1, -1) and it will return slice(1, None, 1)
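A quick check with the datalist from the question:
>>> datalist[nonwrapping_slice(1, -1)][-1]
3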
A:
A slice object exposes all its parameters as start, stop and step attributes (read-only). So you can just create a new slice if necessary:
def slicer(objects, slice_dimensions):
    if slice_dimensions.stop == -1:
        # Create a copy of the slice with stop replaced by None
        slice_dimensions = slice(slice_dimensions.start, None, slice_dimensions.step)
    return objects[slice_dimensions]
Also, slices have a useful indices method that takes a sequence length as its argument and outputs a tuple of the actual (start, stop, step) values (without None and negative values). You can read more in the Python docs for slice (the indices note sits one screen above the slice anchor - it does not have its own heading to link to).
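For example, with the 21-element datalist from the question:
>>> slice(1, -1).indices(21)
(1, 20, 1)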
|
How to check if second value of slice() is a certain value?
|
I have a function that takes in a slice. What I want to do is to check if the end value in this slice is equal to -1. If that is the case, I want to reset the slice to another value. I can't find supporting documentation and do not know how to proceed.
datalist=[None, 'Grey', 'EE20-700', 'EE-42-01', 'EE15-767', 'EE0-70650', 'B&B', 1, 1, 1, 1, 1, '300R', True, 'Nov. 2, 2022', 'Nov. 2, 2022', 1, 1, 1, True, 3]
#If we just apply the given slice_dimensions, the following happens:
slice_dimensions=slice(1, -1)
datalist[slice_dimensions]
['Grey', 'EE20-700', 'EE-42-01', 'EE15-767', 'EE0-70650', 'B&B', 1, 1, 1, 1, 1, '300R', True, 'Nov. 2, 2022', 'Nov. 2, 2022', 1, 1, 1, True]
As you can see this is problematic, because the last element of my list (3) is omitted with the current slice dimension. I have seen many solutions saying I should use len(datalist)-1 or something similar instead, but this is not a workable option because the list lengths differ, and I want to keep it simple. If we can just check if end of slice is equal to -1 and change it to None, the problem is solved:
#If we just apply the given slice_dimensions, the following happens:
slice_dimensions=slice(1, None)
datalist[slice_dimensions]
['Grey', 'EE20-700', 'EE-42-01', 'EE15-767', 'EE0-70650', 'B&B', 1, 1, 1, 1, 1, '300R', True, 'Nov. 2, 2022', 'Nov. 2, 2022', 1, 1, 1, True, 3]
I would like to make a function to do so, but do not know how to proceed, something like this:
def slicer(slice_dimensions):
if slice_dimensions[1] == -1: #This part I do not know how to proceed
slice_dimensions= slice(1, None)
data_of_interest=datalist[slice_dimensions]
return(data_of_interest)
How should I proceed in doing so?
|
[
"You could define a wrapper for slice(). Something like:\ndef nonwrapping_slice(start=0, stop=None, stride=1):\n if stop is not None and stop < 0 and stride > 0:\n stop = None\n return slice(start, stop, stride)\n\nThen you can call nonwrapping_slice(1, -1) and it will return slice(1, None, 1)\n",
"slice object exposes all its parameters as start, stop and step attributes (readonly). So you can just create a new slice, if necessary:\ndef slicer(objects, slice_dimensions):\n if slice_dimensions.end == -1:\n # Create a copy slice with end replaced with None\n slice_dimensions = slice(slice_dimensions.start, None, slice_dimensions.step)\n return objects[slice_dimensions]\n\nAlso, slices have a useful indices method, that takes sequence length as argument and outputs a tuple of actual (start, stop, step) (without None and negative values). You can read more in docs (one screen above this anchor - they do not have own heading with href to an anchor)\n"
] |
[
2,
1
] |
[] |
[] |
[
"function",
"list",
"python",
"slice"
] |
stackoverflow_0074538403_function_list_python_slice.txt
|
Q:
Converting int arrays to string arrays in numpy without truncation
Trying to convert int arrays to string arrays in numpy
In [66]: a=array([0,33,4444522])
In [67]: a.astype(str)
Out[67]:
array(['0', '3', '4'],
dtype='|S1')
Not what I intended
In [68]: a.astype('S10')
Out[68]:
array(['0', '33', '4444522'],
dtype='|S10')
This works but I had to know 10 was big enough to hold my longest string. Is there a way of doing this easily without knowing ahead of time what size string you need? It seems a little dangerous that it just quietly truncates your string without throwing an error.
A:
Again, this can be solved in pure Python:
>>> map(str, [0,33,4444522])
['0', '33', '4444522']
Or if you need to convert back and forth:
>>> a = np.array([0,33,4444522])
>>> np.array(map(str, a))
array(['0', '33', '4444522'],
dtype='|S7')
A:
You can stay in numpy, doing
np.char.mod('%d', a)
This is twice faster than map or list comprehensions for 10 elements, four times faster for 100. This and other string operations are documented here.
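A quick usage sketch (the dtype shown is what NumPy typically picks on Python 3; it may differ by version):
import numpy as np

a = np.array([0, 33, 4444522])
print(np.char.mod('%d', a))
# array(['0', '33', '4444522'], dtype='<U7')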
A:
Use arr.astype(str), as int to str conversion is now supported by numpy with the desired outcome:
import numpy as np
a = np.array([0,33,4444522])
res = a.astype(str)
print(res)
array(['0', '33', '4444522'],
dtype='<U11')
A:
You can find the smallest sufficient width like so:
In [3]: max(len(str(x)) for x in [0,33,4444522])
Out[3]: 7
Alternatively, just construct the ndarray from a list of strings:
In [7]: np.array([str(x) for x in [0,33,4444522]])
Out[7]:
array(['0', '33', '4444522'],
dtype='|S7')
or, using map():
In [8]: np.array(map(str, [0,33,4444522]))
Out[8]:
array(['0', '33', '4444522'],
dtype='|S7')
A:
np.apply_along_axis(lambda y: [str(i) for i in y], 0, x)
Example
>>> import numpy as np
>>> x = np.array([-1]*10+[0]*10+[1]*10)
array([-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
>>> np.apply_along_axis(lambda y: [str(i) for i in y], 0, x).tolist()
['-1', '-1', '-1', '-1', '-1', '-1', '-1', '-1', '-1', '-1', '0', '0',
'0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1',
'1', '1', '1', '1']
A:
For those working with Python 3.9, the command should be:
list(map(str, [1,2,3]))
|
Converting int arrays to string arrays in numpy without truncation
|
Trying to convert int arrays to string arrays in numpy
In [66]: a=array([0,33,4444522])
In [67]: a.astype(str)
Out[67]:
array(['0', '3', '4'],
dtype='|S1')
Not what I intended
In [68]: a.astype('S10')
Out[68]:
array(['0', '33', '4444522'],
dtype='|S10')
This works but I had to know 10 was big enough to hold my longest string. Is there a way of doing this easily without knowing ahead of time what size string you need? It seems a little dangerous that it just quietly truncates your string without throwing an error.
|
[
"Again, this can be solved in pure Python:\n>>> map(str, [0,33,4444522])\n['0', '33', '4444522']\n\nOr if you need to convert back and forth:\n>>> a = np.array([0,33,4444522])\n>>> np.array(map(str, a))\narray(['0', '33', '4444522'], \n dtype='|S7')\n\n",
"You can stay in numpy, doing\nnp.char.mod('%d', a)\n\nThis is twice faster than map or list comprehensions for 10 elements, four times faster for 100. This and other string operations are documented here.\n",
"Use arr.astype(str), as int to str conversion is now supported by numpy with the desired outcome:\nimport numpy as np\n\na = np.array([0,33,4444522])\n\nres = a.astype(str)\n\nprint(res)\n\narray(['0', '33', '4444522'], \n dtype='<U11')\n\n",
"You can find the smallest sufficient width like so:\nIn [3]: max(len(str(x)) for x in [0,33,4444522])\nOut[3]: 7\n\nAlternatively, just construct the ndarray from a list of strings:\nIn [7]: np.array([str(x) for x in [0,33,4444522]])\nOut[7]: \narray(['0', '33', '4444522'], \n dtype='|S7')\n\nor, using map():\nIn [8]: np.array(map(str, [0,33,4444522]))\nOut[8]: \narray(['0', '33', '4444522'], \n dtype='|S7')\n\n",
"np.apply_along_axis(lambda y: [str(i) for i in y], 0, x)\nExample\n>>> import numpy as np\n\n>>> x = np.array([-1]*10+[0]*10+[1]*10)\narray([-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 0, 0, 0, 0, 0, 0,\n 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])\n\n>>> np.apply_along_axis(lambda y: [str(i) for i in y], 0, x).tolist()\n['-1', '-1', '-1', '-1', '-1', '-1', '-1', '-1', '-1', '-1', '0', '0',\n '0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1',\n '1', '1', '1', '1']\n\n",
"For those working with Python 3.9, the command should be:\nlist(map(str, [1,2,3]))\n\n"
] |
[
53,
50,
16,
3,
0,
0
] |
[] |
[] |
[
"arrays",
"numpy",
"python",
"string"
] |
stackoverflow_0009958846_arrays_numpy_python_string.txt
|
Q:
Is it possible to recursively traverse nested data classes \ convert them to a nested dictionary without expanding some types of dataclasses?
I have a nested set of dataclasses that I want to convert to a dictionary
however, some classes should remain as a class, and not be converted to a dataclass
(the full structure is deeper and more complex)
in this example:
from dataclasses import dataclass, field, asdict
@dataclass
class C:
x: int = 1
@dataclass
class B:
c: C = C()
@dataclass
class A:
b: B = B()
asdict(A())
# returns
# {'b': {'c': {'x': 1}}}
# I want
custom_asdict(A())
# should return:
# {'b': {'c': C(x=1)}}
marking the class C as "do not expand" could be either as a parameter to custom_asdict or as a prameter to the dataclass decorator
A:
Although dataclasses.asdict allows for a "dict_factory" parameter, its use is limited, as it is only called for pairs of name/value for each field recursively, but "depth first": meaning all dataclass values are already serialized to a dict when the custom factory is called.
So, it is very hard to customize a "dict_factory" that would provide the needed behavior - on the other hand, it is possible to simply wrap "asdict" (or the inner function it calls) so that it does not serialize the classes you do not want.
That is way more straightforward, and, if needed, can be designed in a way to be turned "on or off" (for example, using unittest.mock.patch.)
Otherwise, just pick an attribute name to indicate the classes you don't want to serialize as dicts, and call the function below prior to calling asdict(). (This code checks for a _dont_expand attribute)
def patch():
import dataclasses
if getattr(dataclasses, "_patched", False):
return
original = dataclasses._asdict_inner
def new_asdict_inner(obj, factory):
if dataclasses._is_dataclass_instance(obj) and getattr(obj, "_dont_expand", False):
return obj
return original(obj, factory)
dataclasses._asdict_inner = new_asdict_inner
dataclasses._patched = True
I tested this in a Python shell with classes like yours and it works like a charm:
In [75]: @dataclasses.dataclass
...: class C:
...: _dont_expand = True
...: x: int = 1
...:
In [76]: @dataclasses.dataclass
...: class B:
...: c: C = dataclasses.field(default_factory=C)
...:
In [77]: @dataclasses.dataclass
...: class A:
...: b: B = dataclasses.field(default_factory=B)
...:
In [78]: a = A()
In [79]: a
Out[79]: A(b=B(c=C(x=1)))
In [80]: patch()
In [81]: dataclasses.asdict(a)
Out[81]: {'b': {'c': C(x=1)}}
(note: with this code you can set the _dont_expand attribute directly on instances you don't want to serialize: it will work just for those instances, while their class keeps the normal behavior)
|
Is it possible to recursively traverse nested data classes \ convert them to a nested dictionary without expanding some types of dataclasses?
|
I have a nested set of dataclasses that I want to convert to a dictionary
however, some classes should remain as a class, and not be converted to a dataclass
(the full structure is deeper and more complex)
in this example:
from dataclasses import dataclass, field, asdict
@dataclass
class C:
x: int = 1
@dataclass
class B:
c: C = C()
@dataclass
class A:
b: B = B()
asdict(A())
# returns
# {'b': {'c': {'x': 1}}}
# I want
custom_asdict(A())
# should return:
# {'b': {'c': C(x=1)}}
marking the class C as "do not expand" could be either as a parameter to custom_asdict or as a prameter to the dataclass decorator
|
[
"Although dataclasses.asdict allows for a \"dict_factory\" parameter, its use is limited, as it is only called for pairs of name/value for each field recursively, but \"depth first\": meaning all dataclass values are already serialized to a dict when the custom factory is called.\nSo, it is very hard to customize a \"dict_factory\" that would provide the needed behavior - on the other hand, it is possible to simply wrap \"asdict\" (or the inner function it calls) to do not serialize the classes you do not want.\nThat is way more straightforward, and, if needed, can be designed in a way to be turned \"on or off\" (for example, using unittest.mock.patch\".)\nOtherwise, just set for an attribute name to indicate the classes you don't want to serialize as dicts, and call the function bellow prior to calling asdict(). (This code checks for a _dont_expand attribute)\ndef patch():\n import dataclasses\n if getattr(dataclasses, \"_patched\", False):\n return\n original = dataclasses._asdict_inner\n def new_asdict_inner(obj, factory):\n if dataclasses._is_dataclass_instance(obj) and getattr(obj, \"_dont_expand\", False):\n return obj\n return original(obj, factory)\n dataclasses._asdict_inner = new_asdict_inner\n dataclasses._patched = True\n\nI tested this in a Python shell with classes like yours and it works like a charm:\n\nIn [75]: @dataclasses.dataclass\n ...: class C:\n ...: _dont_expand = True\n ...: x: int = 1\n ...: \n\nIn [76]: @dataclasses.dataclass\n ...: class B:\n ...: c: C = dataclasses.field(default_factory=C)\n ...: \n\nIn [77]: @dataclasses.dataclass\n ...: class A:\n ...: b: B = dataclasses.field(default_factory=B)\n ...: \n\nIn [78]: a = A()\n\nIn [79]: a\nOut[79]: A(b=B(c=C(x=1)))\n\nIn [80]: patch()\n\nIn [81]: dataclasses.asdict(a)\nOut[81]: {'b': {'c': C(x=1)}}\n\n\n(note: with this code you can set the _dont_expand attribute directly on instances you don't want to serialize: it will work just for those instances, while their class keep the normal behavior)\n"
] |
[
1
] |
[] |
[] |
[
"python",
"python_dataclasses"
] |
stackoverflow_0074535848_python_python_dataclasses.txt
|
Q:
How do I perform query filtering in django templates
I need to perform a filtered query from within a django template, to get a set of objects equivalent to python code within a view:
queryset = Modelclass.objects.filter(somekey=foo)
In my template I would like to do
{% for object in data.somekey_set.FILTER %}
but I just can't seem to find out how to write FILTER.
A:
You can't do this, which is by design. The Django framework authors intended a strict separation of presentation code from data logic. Filtering models is data logic, and outputting HTML is presentation logic.
So you have several options. The easiest is to do the filtering, then pass the result to render_to_response. Or you could write a method in your model so that you can say {% for object in data.filtered_set %}. Finally, you could write your own template tag, although in this specific case I would advise against that.
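As a hedged sketch of the "write a method in your model" option (the model name and filter value are hypothetical, reusing the question's somekey naming):
from django.db import models

class Data(models.Model):
    def filtered_set(self):
        # data logic stays on the model, not in the template
        return self.somekey_set.filter(somekey='foo')

The template then simply iterates {% for object in data.filtered_set %}.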
A:
I just add an extra template tag like this:
@register.filter
def in_category(things, category):
return things.filter(category=category)
Then I can do:
{% for category in categories %}
{% for thing in things|in_category:category %}
{{ thing }}
{% endfor %}
{% endfor %}
A:
I run into this problem on a regular basis and often use the "add a method" solution. However, there are definitely cases where "add a method" or "compute it in the view" don't work (or don't work well). E.g. when you are caching template fragments and need some non-trivial DB computation to produce it. You don't want to do the DB work unless you need to, but you won't know if you need to until you are deep in the template logic.
Some other possible solutions:
Use the {% expr <expression> as <var_name> %} template tag found at http://www.djangosnippets.org/snippets/9/ The expression is any legal Python expression with your template's Context as your local scope.
Change your template processor. Jinja2 (http://jinja.pocoo.org/2/) has syntax that is almost identical to the Django template language, but with full Python power available. It's also faster. You can do this wholesale, or you might limit its use to templates that you are working on, but use Django's "safer" templates for designer-maintained pages.
A:
The other option is that if you have a filter that you always want applied, to add a custom manager on the model in question which always applies the filter to the results returned.
A good example of this is a Event model, where for 90% of the queries you do on the model you are going to want something like Event.objects.filter(date__gte=now), i.e. you're normally interested in Events that are upcoming. This would look like:
class EventManager(models.Manager):
def get_query_set(self):
now = datetime.now()
return super(EventManager,self).get_query_set().filter(date__gte=now)
And in the model:
class Event(models.Model):
...
objects = EventManager()
But again, this applies the same filter against all default queries done on the Event model and so isn't as flexible as some of the techniques described above.
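A common mitigation (a sketch, not from the answer above; note that on modern Django the manager hook is get_queryset rather than get_query_set) is to keep the default manager unfiltered and expose the filtered one under a second name:
class Event(models.Model):
    ...
    objects = models.Manager()   # unfiltered default manager
    upcoming = EventManager()    # Event.upcoming.all() applies the date filter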
A:
This can be solved with an assignment tag:
from django import template
register = template.Library()
@register.assignment_tag
def query(qs, **kwargs):
""" template tag which allows queryset filtering. Usage:
{% query books author=author as mybooks %}
{% for book in mybooks %}
...
{% endfor %}
"""
return qs.filter(**kwargs)
EDIT: assignment_tag was removed in Django 2.0, this will no longer work.
A:
For anyone looking for an answer in 2020.
This worked for me.
In Views:
class InstancesView(generic.ListView):
model = AlarmInstance
context_object_name = 'settings_context'
queryset = Group.objects.all()
template_name = 'insta_list.html'
@register.filter
def filter_unknown(self, aVal):
result = aVal.filter(is_known=False)
return result
@register.filter
def filter_known(self, aVal):
result = aVal.filter(is_known=True)
return result
In template:
{% for instance in alarm.qar_alarm_instances|filter_unknown:alarm.qar_alarm_instances %}
In pseudocode:
For each in model.child_object|view_filter:filter_arg
Hope that helps.
A:
This is my approach:
@register.filter()
def query_filter(value, attr):
return value.filter(**eval(attr))
In the template:
{{ queryset|query_filter:'{"cod_tipoinmueble":1,"des_operacion": "alquiler"}'|length }}
|
How do I perform query filtering in django templates
|
I need to perform a filtered query from within a django template, to get a set of objects equivalent to python code within a view:
queryset = Modelclass.objects.filter(somekey=foo)
In my template I would like to do
{% for object in data.somekey_set.FILTER %}
but I just can't seem to find out how to write FILTER.
|
[
"You can't do this, which is by design. The Django framework authors intended a strict separation of presentation code from data logic. Filtering models is data logic, and outputting HTML is presentation logic.\nSo you have several options. The easiest is to do the filtering, then pass the result to render_to_response. Or you could write a method in your model so that you can say {% for object in data.filtered_set %}. Finally, you could write your own template tag, although in this specific case I would advise against that.\n",
"I just add an extra template tag like this:\n@register.filter\ndef in_category(things, category):\n return things.filter(category=category)\n\nThen I can do:\n{% for category in categories %}\n {% for thing in things|in_category:category %}\n {{ thing }}\n {% endfor %}\n{% endfor %}\n\n",
"I run into this problem on a regular basis and often use the \"add a method\" solution. However, there are definitely cases where \"add a method\" or \"compute it in the view\" don't work (or don't work well). E.g. when you are caching template fragments and need some non-trivial DB computation to produce it. You don't want to do the DB work unless you need to, but you won't know if you need to until you are deep in the template logic.\nSome other possible solutions:\n\nUse the {% expr <expression> as <var_name> %} template tag found at http://www.djangosnippets.org/snippets/9/ The expression is any legal Python expression with your template's Context as your local scope.\nChange your template processor. Jinja2 (http://jinja.pocoo.org/2/) has syntax that is almost identical to the Django template language, but with full Python power available. It's also faster. You can do this wholesale, or you might limit its use to templates that you are working on, but use Django's \"safer\" templates for designer-maintained pages.\n\n",
"The other option is that if you have a filter that you always want applied, to add a custom manager on the model in question which always applies the filter to the results returned.\nA good example of this is a Event model, where for 90% of the queries you do on the model you are going to want something like Event.objects.filter(date__gte=now), i.e. you're normally interested in Events that are upcoming. This would look like:\nclass EventManager(models.Manager):\n def get_query_set(self):\n now = datetime.now()\n return super(EventManager,self).get_query_set().filter(date__gte=now)\n\nAnd in the model:\nclass Event(models.Model):\n ...\n objects = EventManager()\n\nBut again, this applies the same filter against all default queries done on the Event model and so isn't as flexible some of the techniques described above. \n",
"This can be solved with an assignment tag:\nfrom django import template\n\nregister = template.Library()\n\n@register.assignment_tag\ndef query(qs, **kwargs):\n \"\"\" template tag which allows queryset filtering. Usage:\n {% query books author=author as mybooks %}\n {% for book in mybooks %}\n ...\n {% endfor %}\n \"\"\"\n return qs.filter(**kwargs)\n\nEDIT: assignment_tag was removed in Django 2.0, this will no longer work.\n",
"For anyone looking for an answer in 2020.\nThis worked for me.\nIn Views:\n class InstancesView(generic.ListView):\n model = AlarmInstance\n context_object_name = 'settings_context'\n queryset = Group.objects.all()\n template_name = 'insta_list.html'\n\n @register.filter\n def filter_unknown(self, aVal):\n result = aVal.filter(is_known=False)\n return result\n\n @register.filter\n def filter_known(self, aVal):\n result = aVal.filter(is_known=True)\n return result\n\nIn template:\n{% for instance in alarm.qar_alarm_instances|filter_unknown:alarm.qar_alarm_instances %}\n\nIn pseudocode:\nFor each in model.child_object|view_filter:filter_arg\n\nHope that helps.\n",
"This is my approach:\n@register.filter()\ndef query_filter(value, attr):\n return value.filter(**eval(attr))\n\nIn the template:\n{{ queryset|query_filter:'{\"cod_tipoinmueble\":1,\"des_operacion\": \"alquiler\"}'|length }}\n\n"
] |
[
137,
50,
13,
12,
9,
1,
0
] |
[] |
[] |
[
"django",
"django_templates",
"python"
] |
stackoverflow_0000223990_django_django_templates_python.txt
|
Q:
generate a set of of all combinations of special characters and numbers around a string - python
I am attempting to generate all combinations of special characters and numbers around a string. For example, suppose the string is 'notebook' and the special characters are @, #, $, %, & and numbers 0-9. This could generate: $#notebook12, notebook8, @5notebook0&. I am assuming no repeats of characters.
Thanks in advance.
So far I can only generate:
special = ['@','#','$','%','&',' ',0,1,2,3,4,5,6,7,8,9,' ']
choice = list(permutations(special, 2))
word = ['notebook']
pw_choice = word + choice
test = list(permutations(pw_choice, 2))
print(test)
But this results in a list of lists that I would have to manipulate further. Is there an easier workaround to produce the set of _ _ notebook _ _ ?
A:
Try this:
from itertools import combinations, permutations
result = [
''.join(p)
for n_chars in range(len(special) + 1)
for chars in combinations(special, n_chars)
for p in permutations(('notebook',) + chars)
]
Example with special = ['@','#','$']:
['notebook', 'notebook@', '@notebook', 'notebook#', '#notebook',
'notebook$', '$notebook', 'notebook@#', 'notebook#@',
'@notebook#', '@#notebook', '#notebook@', '#@notebook',
'notebook@$', 'notebook$@', '@notebook$', '@$notebook',
'$notebook@', '$@notebook', 'notebook#$', 'notebook$#',
'#notebook$', '#$notebook', '$notebook#', '$#notebook',
'notebook@#$', 'notebook@$#', 'notebook#@$', 'notebook#$@',
'notebook$@#', 'notebook$#@', '@notebook#$', '@notebook$#',
'@#notebook$', '@#$notebook', '@$notebook#', '@$#notebook',
'#notebook@$', '#notebook$@', '#@notebook$', '#@$notebook',
'#$notebook@', '#$@notebook', '$notebook@#', '$notebook#@',
'$@notebook#', '$@#notebook', '$#notebook@', '$#@notebook']
If you only want at most two chars before and after the "notebook" string:
from itertools import combinations
result = [
f'{"{}"*b}notebook{"{}"*a}'.format(*c)
for b in range(3)
for a in range(3)
for c in combinations(special, a + b)
]
This is the result (with special = ['@','#','$']):
['notebook', 'notebook@', 'notebook#', 'notebook$', 'notebook@#',
'notebook@$', 'notebook#$', '@notebook', '#notebook', '$notebook',
'@notebook#', '@notebook$', '#notebook$', '@notebook#$',
'@#notebook', '@$notebook', '#$notebook', '@#notebook$']
A:
Based on your comment: "at most 2 before and after the chosen word", you only have 7 templates. I categorized those in a tuple in advance and format them in a for loop later.
The outer loop is there to generate the r values for the permutations.
from itertools import permutations
from time import sleep
def generate_word(word: str):
numbers_and_symbols = list("0123456789") + list("@#$%&")
templates = (
(f"{{}}{word}", f"{word}{{}}"), # 1 item
(f"{{}}{word}{{}}", f"{word}{{}}{{}}", f"{{}}{{}}{word}"), # 2 items
(f"{{}}{{}}{word}{{}}", f"{{}}{word}{{}}{{}}"), # 3 items
(f"{{}}{{}}{word}{{}}{{}}",), # 4 items
)
for i in range(1, 5):
for t in permutations(numbers_and_symbols, r=i):
for string in templates[i-1]:
yield string.format(*t)
sleep(0.05)
for i in generate_word("notebook"):
print(i)
I've added sleep so that you can see what is generated.
A:
Same thing but different - long form.
import itertools
q = ['@', '#', '$', '%', '&',0,1,2,3,4,5,6,7,8,9]
word = 'notebook'
results = []
# one at a time
for c in q:
results.append(f'{c}{word}')
results.append(f'{word}{c}')
for a,b in itertools.combinations(q,2):
results.append(f'{a}{b}{word}')
results.append(f'{word}{a}{b}')
results.append(f'{a}{word}{b}')
for a,b,c in itertools.combinations(q,3):
results.append(f'{a}{b}{word}{c}')
results.append(f'{a}{word}{b}{c}')
for a,b,c,d in itertools.combinations(q,4):
results.append(f'{a}{b}{word}{c}{d}')
Not very flexible; I like Ricardo Bucco's answer.
A:
Simply select 4 unique elements from the main list and inject them using f-strings.
import itertools
special = ['@','#','$','%','&',' ',0,1,2,3,4,5,6,7,8,9,' ']
lis=[]
word = ['notebook']
word = ''.join(word)
for comb in itertools.combinations(special, 4):
first = str(comb[0])+str(comb[1])
last = str(comb[2])+str(comb[3])
final =f'{first}{word}{last}'
if final.startswith("@"):
pass
else:
lis.append(final)
print(lis)
Some sample outputs:
['#$notebook%&', '#$notebook% ', '#$notebook%0', '#$notebook%1',
'#$notebook%2', '#$notebook%3', '#$notebook%4', '#$notebook%5',
'#$notebook%6', '#$notebook%7', '#$notebook%8', '#$notebook%9',
'#$notebook% ', '#$notebook& ', '#$notebook&0', '#$notebook&1',
'#$notebook&2', '#$notebook&3', '#$notebook&4', '#$notebook&5',,,,,,,,,,,,,,,,,,,,,
In this case it produced:
2380 combinations
If we ignore those starting with @, it produces:
1820 combinations
|
generate a set of of all combinations of special characters and numbers around a string - python
|
I am attempting to generate all combinations of special characters and numbers around a string. For example, suppose the string is 'notebook' and the special characters are @, #, $, %, & and numbers 0-9. This could generate: $#notebook12, notebook8, @5notebook0&. I am assuming no repeats of characters.
Thanks in advance.
So far I can only generate:
special = ['@','#','$','%','&',' ',0,1,2,3,4,5,6,7,8,9,' ']
choice = list(permutations(special, 2))
word = ['notebook']
pw_choice = word + choice
test = list(permutations(pw_choice, 2))
print(test)
But this results in a list of lists that I would have to manipulate further. Is there an easier workaround to produce the set of _ _ notebook _ _ ?
|
[
"Try this:\nfrom itertools import combinations, permutations\n\nresult = [\n ''.join(p)\n for n_chars in range(len(special) + 1)\n for chars in combinations(special, n_chars)\n for p in permutations(('notebook',) + chars)\n]\n\nExample with special = ['@','#','$']:\n['notebook', 'notebook@', '@notebook', 'notebook#', '#notebook',\n 'notebook$', '$notebook', 'notebook@#', 'notebook#@',\n '@notebook#', '@#notebook', '#notebook@', '#@notebook',\n 'notebook@$', 'notebook$@', '@notebook$', '@$notebook',\n '$notebook@', '$@notebook', 'notebook#$', 'notebook$#',\n '#notebook$', '#$notebook', '$notebook#', '$#notebook',\n 'notebook@#$', 'notebook@$#', 'notebook#@$', 'notebook#$@',\n 'notebook$@#', 'notebook$#@', '@notebook#$', '@notebook$#',\n '@#notebook$', '@#$notebook', '@$notebook#', '@$#notebook',\n '#notebook@$', '#notebook$@', '#@notebook$', '#@$notebook',\n '#$notebook@', '#$@notebook', '$notebook@#', '$notebook#@',\n '$@notebook#', '$@#notebook', '$#notebook@', '$#@notebook']\n\nIf you only want at most two chars before and after the \"notebook\" string:\nfrom itertools import combinations\n\nresult = [\n f'{\"{}\"*b}notebook{\"{}\"*a}'.format(*c)\n for b in range(3)\n for a in range(3)\n for c in combinations(special, a + b)\n]\n\nThis is the result (with special = ['@','#','$']):\n['notebook', 'notebook@', 'notebook#', 'notebook$', 'notebook@#',\n 'notebook@$', 'notebook#$', '@notebook', '#notebook', '$notebook',\n '@notebook#', '@notebook$', '#notebook$', '@notebook#$',\n '@#notebook', '@$notebook', '#$notebook', '@#notebook$']\n\n",
"Based on your comment: \"at most 2 before and after the chose word\", you only have 7 templates. I categorized those in a tuple in advance and format them in for loop later.\n\nThe outer loop is there to generate rs for the permutations.\n\nfrom itertools import permutations\nfrom time import sleep\n\n\ndef generate_word(word: str):\n numbers_and_symbols = list(\"0123456789\") + list(\"@#$%&\")\n\n templates = (\n (f\"{{}}{word}\", f\"{word}{{}}\"), # 1 item\n (f\"{{}}{word}{{}}\", f\"{word}{{}}{{}}\", f\"{{}}{{}}{word}\"), # 2 items\n (f\"{{}}{{}}{word}{{}}\", f\"{{}}{word}{{}}{{}}\"), # 3 items\n (f\"{{}}{{}}{word}{{}}{{}}\",), # 4 items\n )\n\n for i in range(1, 5):\n for t in permutations(numbers_and_symbols, r=i):\n for string in templates[i-1]:\n yield string.format(*t)\n sleep(0.05)\n\n\nfor i in generate_word(\"notebook\"):\n print(i)\n\nI've added sleep so that you can see what is generated.\n",
"Same thing but different - long form.\nimport itertools\n\nq = ['@', '#', '$', '%', '&',0,1,2,3,4,5,6,7,8,9]\nword = 'notebook'\nresults = []\n# one at a time\nfor c in q:\n results.append(f'{c}{word}')\n results.append(f'{word}{c}')\n\nfor a,b in itertools.combinations(q,2):\n results.append(f'{a}{b}{word}')\n results.append(f'{word}{a}{b}')\n results.append(f'{a}{word}{b}')\n \nfor a,b,c in itertools.combinations(q,3):\n results.append(f'{a}{b}{word}{c}')\n results.append(f'{a}{word}{b}{c}')\n\nfor a,b,c,d in itertools.combinations(q,4):\n results.append(f'{a}{b}{word}{c}{d}')\n\n\nNot very flexible; I like Ricardo Bucco's answer.\n",
"Simply select 4 random unique elemnts from main list & Inject uisng fstrings.\nimport itertools\nspecial = ['@','#','$','%','&',' ',0,1,2,3,4,5,6,7,8,9,' ']\nlis=[]\n\nword = ['notebook']\nword = ''.join(word)\n\n\nfor comb in itertools.combinations(special, 4):\n first = str(comb[0])+str(comb[1])\n last = str(comb[2])+str(comb[3])\n\n \n final =f'{first}{word}{last}'\n if final.startswith(\"@\"):\n pass\n else:\n \n lis.append(final)\n\n\nprint(lis)\n\nsome sample outputs #\n\n['#$notebook%&', '#$notebook% ', '#$notebook%0', '#$notebook%1',\n'#$notebook%2', '#$notebook%3', '#$notebook%4', '#$notebook%5',\n'#$notebook%6', '#$notebook%7', '#$notebook%8', '#$notebook%9',\n'#$notebook% ', '#$notebook& ', '#$notebook&0', '#$notebook&1',\n'#$notebook&2', '#$notebook&3', '#$notebook&4', '#$notebook&5',,,,,,,,,,,,,,,,,,,,,\n\nIn this case Produced\n2380 combination\n\nIf we ignore starts with @ produces.\n1820 combination\n\n"
] |
[
3,
2,
2,
1
] |
[] |
[] |
[
"combinations",
"permutation",
"python"
] |
stackoverflow_0074536936_combinations_permutation_python.txt
|
Q:
Assign involving both reducing & non-reducing operations in Pandas
I'm an R/Tidyverse guy getting my feet wet in python/pandas and having trouble discerning if there is a way to do the following as elegantly in pandas as tidyverse:
(
dat
%>% group_by(grp)
%>% mutate(
value = value/max(value)
)
)
So, there's a grouped mutate that involves a non-reducing operation (division) that in turn involves the result of a reducing operation (max). I know the following is possible:
import pandas as pd
import numpy as np
df = pd.DataFrame({'grp': np.random.randint(0,5, 10), 'value': np.random.randn(10)}).sort_values('grp')
tmp = (
df
.groupby('grp')
.agg('max')
)
(
df
.merge(tmp,on='grp')
.assign(
value = lambda x: x.value_x / x.value_y
)
)
But I feel like there must be a way to avoid the creation of the temporary variable tmp to achieve this in one expression like I can achieve in tidyverse. Am I wrong?
Update: I'm marking @PaulS's answer as correct as it indeed addresses the question as posed. On using it on something other than my minimal example, I realized there was further implicit behaviour in tidyverse I hadn't accounted for; specifically, that columns not involved in the series of specified operations are kept in the tidyverse case and dropped in @PaulS's answer. So here instead is an example & solution that more closely emulates tidyverse:
df = (
pd.DataFrame({
'grp': np.random.randint(0,5, 10) #to be used for grouping
, 'time': np.random.normal(0,1,10) #extra column not involved in computation
, 'value': np.random.randn(10) #to be used for calculations
})
.sort_values(['grp','time'])
.reset_index()
)
#computing a grouped non-reduced-divided-by-reduced:
(
df
.groupby('grp', group_keys=False)
.apply(
lambda x: (
x.assign(
value = (
x.value
/ x.value.max()
)
)
)
)
.reset_index()
.drop(['index','level_0'],axis=1)
)
I also discovered that if I want to index into one column during the assignment, I have to tweak things a bit, for example:
#this time the reduced compute involves getting the value at the time closest to zero:
(
df
.groupby('grp', group_keys=False)
.apply(
lambda x: (
x.assign(
value = (
x.value
/ x.value.values[np.argmin(np.abs(x.time))] #note use of .values[]
)
)
)
)
.reset_index()
.drop(['index','level_0'],axis=1)
)
A:
For this specific case, a transform is a better fit, and should be more performant than apply:
df.assign(value = df.value/df.groupby('grp').value.transform('max'))
grp value
1 0 1.000000
2 1 -0.290494
3 1 1.000000
4 1 0.214848
6 2 8.242604
7 2 1.000000
8 2 1.156246
0 3 0.655760
9 3 1.000000
5 4 1.000000
A:
A possible solution:
(df.groupby('grp')
.apply(lambda g: g['value'].div(g['value'].max()))
.droplevel(1)
.reset_index())
Output:
grp value
0 0 1.000000
1 1 1.000000
2 1 1.052922
3 2 1.000000
4 2 5.873499
5 3 10.009542
6 3 1.000000
7 4 1.000000
8 4 -0.842420
9 4 0.410153
|
Assign involving both reducing & non-reducing operations in Pandas
|
I'm an R/Tidyverse guy getting my feet wet in python/pandas and having trouble discerning if there is a way to do the following as elegantly in pandas as tidyverse:
(
dat
%>% group_by(grp)
%>% mutate(
value = value/max(value)
)
)
So, there's a grouped mutate that involves a non-reducing operation (division) that in turn involves the result of a reducing operation (max). I know the following is possible:
import pandas as pd
import numpy as np
df = pd.DataFrame({'grp': np.random.randint(0,5, 10), 'value': np.random.randn(10)}).sort_values('grp')
tmp = (
df
.groupby('grp')
.agg('max')
)
(
df
.merge(tmp,on='grp')
.assign(
value = lambda x: x.value_x / x.value_y
)
)
But I feel like there must be a way to avoid the creation of the temporary variable tmp to achieve this in one expression like I can achieve in tidyverse. Am I wrong?
Update: I'm marking @PaulS's answer as correct as it indeed addresses the question as posed. On using it on something other than my minimal example, I realized there was further implicit behaviour in tidyverse I hadn't accounted for; specifically, that columns not involved in the series of specified operations are kept in the tidyverse case and dropped in @PaulS's answer. So here instead is an example & solution that more closely emulates tidyverse:
df = (
pd.DataFrame({
'grp': np.random.randint(0,5, 10) #to be used for grouping
, 'time': np.random.normal(0,1,10) #extra column not involved in computation
, 'value': np.random.randn(10) #to be used for calculations
})
.sort_values(['grp','time'])
.reset_index()
)
#computing a grouped non-reduced-divided-by-reduced:
(
df
.groupby('grp', group_keys=False)
.apply(
lambda x: (
x.assign(
value = (
x.value
/ x.value.max()
)
)
)
)
.reset_index()
.drop(['index','level_0'],axis=1)
)
I also discovered that if I want to index into one column during the assignment, I have to tweak things a bit, for example:
#this time the reduced compute involves getting the value at the time closest to zero:
(
df
.groupby('grp', group_keys=False)
.apply(
lambda x: (
x.assign(
value = (
x.value
/ x.value.values[np.argmin(np.abs(x.time))] #note use of .values[]
)
)
)
)
.reset_index()
.drop(['index','level_0'],axis=1)
)
|
[
"For this specific case, a transform is a better fit, and should be more performant than apply:\ndf.assign(value = df.value/df.groupby('grp').value.transform('max'))\n grp value\n1 0 1.000000\n2 1 -0.290494\n3 1 1.000000\n4 1 0.214848\n6 2 8.242604\n7 2 1.000000\n8 2 1.156246\n0 3 0.655760\n9 3 1.000000\n5 4 1.000000\n\n",
"A possible solution:\n(df.groupby('grp')\n .apply(lambda g: g['value'].div(g['value'].max()))\n .droplevel(1)\n .reset_index())\n\nOutput:\n grp value\n0 0 1.000000\n1 1 1.000000\n2 1 1.052922\n3 2 1.000000\n4 2 5.873499\n5 3 10.009542\n6 3 1.000000\n7 4 1.000000\n8 4 -0.842420\n9 4 0.410153\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"pandas",
"python",
"tidyverse"
] |
stackoverflow_0074536116_pandas_python_tidyverse.txt
|
Q:
XPath get one attribute or another?
I have a Python XML XPath expression ancestor-or-self::*[@foo]/@foo, and I need to modify it to get attribute @foo if it exists otherwise get attribute @bar.
I've tried to use or operator similar to condition like [@foo or @bar], but got an expression error.
A:
XPath 1.0
For a correct XPath_Prefix, this XPath,
XPath_Prefix/@*[name()='foo' or name()='bar'][1]
will select the foo attribute if available; otherwise, the bar attribute.
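A minimal sketch of evaluating this from Python with lxml (assumed here because the stdlib ElementTree supports only a limited XPath subset without name(); attribute order follows the source document, as the answer relies on):
from lxml import etree

both = etree.fromstring('<item foo="1" bar="2"/>')
only_bar = etree.fromstring('<item bar="2"/>')
expr = "@*[name()='foo' or name()='bar'][1]"
print(both.xpath(expr))      # ['1'] - @foo wins when both exist
print(only_bar.xpath(expr))  # ['2'] - falls back to @bar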
|
XPath get one attribute or another?
|
I have a Python XML XPath expression ancestor-or-self::*[@foo]/@foo, and I need to modify it to get attribute @foo if it exists otherwise get attribute @bar.
I've tried to use or operator similar to condition like [@foo or @bar], but got an expression error.
|
[
"XPath 1.0\nFor a correct XPath_Prefix, this XPath,\n\nXPath_Prefix/@*[name()='foo' or name()='bar'][1]\n\nwill select the foo attribute if available; otherwise, the bar attribute.\n"
] |
[
2
] |
[] |
[] |
[
"elementtree",
"python",
"xml",
"xpath"
] |
stackoverflow_0074538241_elementtree_python_xml_xpath.txt
|
Q:
How to add or remove hairs from a hair curve in Blender Python (bpy)
I am trying to write my own scripts for working with hair in blender. I can already modify the position of points on a blender hair curve object like this:
bpy.data.objects["HairCurves"].data.curves[0].points[0].position = (1., 1., 1.)
But how can I add or remove curves and points from this hair_curve object? I have tried stuff like:
bpy.data.objects["HairCurves"].data.curves.new()
Traceback (most recent call last):
File "<blender_console>", line 1, in <module>
AttributeError: 'bpy_prop_collection' object has no attribute 'new'
I am at a loss.
A:
Follow this: [https://developer.blender.org/T68981]
Note that the box next to the Python API is not checked yet.
|
How to add or remove hairs from a hair curve in Blender Python (bpy)
|
I am trying to write my own scripts for working with hair in blender. I can already modify the position of points on a blender hair curve object like this:
bpy.data.objects["HairCurves"].data.curves[0].points[0].position = (1., 1., 1.)
But how can I add or remove curves and points from this hair_curve object? I have tried stuff like:
bpy.data.objects["HairCurves"].data.curves.new()
Traceback (most recent call last):
File "<blender_console>", line 1, in <module>
AttributeError: 'bpy_prop_collection' object has no attribute 'new'
I am at a loss.
|
[
"Follow this: [https://developer.blender.org/T68981]\nNote that the box next to the Python API is not checked yet.\n"
] |
[
1
] |
[] |
[] |
[
"blender",
"bpy",
"python"
] |
stackoverflow_0074526682_blender_bpy_python.txt
|
Q:
How ensure subprocess is killed on timeout when using `run`?
I am using the following code to launch a subprocess :
# Run the program
subprocess_result = subprocess.run(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
check=False,
timeout=timeout,
cwd=directory,
env=env,
preexec_fn=set_memory_limits,
)
The launched subprocess is also a Python program, with a shebang.
This subprocess may last for longer than the specified timeout.
The subprocess does heavy computations, writes results to a file, and does not contain any signal handler.
According to the documentation https://docs.python.org/3/library/subprocess.html#subprocess.run, subprocess.run kills a child that times out:
The timeout argument is passed to Popen.communicate(). If the timeout
expires, the child process will be killed and waited for. The
TimeoutExpired exception will be re-raised after the child process has
terminated.
When my subprocess times out, I always receive the subprocess.TimeoutExpired exception, but from time to time the subprocess is not killed, hence still consuming resources on my machine.
So my question is, am I doing something wrong here ? If yes, what and if no, why do I have this issue and how can I solve it ?
Note : I am using Python 3.10 on Ubuntu 22_04
A:
The most likely culprit for the behaviour you see is that the subprocess you are spawning is probably using multiprocessing and spawning its own child processes. Killing the parent process does not automatically kill the whole set of descendants. The grandchildren are inherited by the init process (i.e. the process with PID 1) and will continue to run.
You can verify from the source code of subprocess.run :
with Popen(*popenargs, **kwargs) as process:
try:
stdout, stderr = process.communicate(input, timeout=timeout)
except TimeoutExpired as exc:
process.kill()
if _mswindows:
# Windows accumulates the output in a single blocking
# read() call run on child threads, with the timeout
# being done in a join() on those threads. communicate()
# _after_ kill() is required to collect that and add it
# to the exception.
exc.stdout, exc.stderr = process.communicate()
else:
# POSIX _communicate already populated the output so
# far into the TimeoutExpired exception.
process.wait()
raise
except: # Including KeyboardInterrupt, communicate handled that.
process.kill()
# We don't call process.wait() as .__exit__ does that for us.
raise
Here you can see at line 550 that the timeout is set on the communicate call; if it fires, at line 552 the subprocess is .kill()ed. The kill method sends a SIGKILL, which immediately kills the subprocess without any cleanup. It's a signal that cannot be caught by the subprocess, so it's not possible that the child is somehow ignoring it.
The TimeoutException is then re-raised at line 564, so if your parent process sees this exception the subprocess is already dead.
This however says nothing of grandchildren processes. Those will continue to run as children of PID 1.
I don't see any way in which you can customize how subprocess.run handles subprocess termination. For example, if it used SIGTERM instead of SIGKILL you could modify your child process or write a wrapper process that will catch the signal and properly kill all its descendants. But SIGKILL doesn't give you this luxury.
So I believe that for your use case you cannot use the subprocess.run facade but you should use Popen directly. You can look at the subprocess.run implementation and take just the things that you need, maybe dropping support for platforms you don't use.
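As an illustration of that route (a hedged sketch, not the author's code; cmd and timeout are the variables from the question's snippet), on POSIX you can start the child in its own session and signal the whole process group on timeout:
import os
import signal
import subprocess

proc = subprocess.Popen(cmd, start_new_session=True)  # child becomes its own process-group leader
try:
    proc.wait(timeout=timeout)
except subprocess.TimeoutExpired:
    # SIGKILL the entire group: the child and any descendants it spawned
    os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
    proc.wait()
    raise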
Note: There are extremely rare situations in which the subprocesses won't die immediately on SIGKILL. I believe the only situation in which this happens is if the subprocess is performing a very long system call or other kernel operation, which might not be interrupted immediately. If the operation is in deadlock this might prevent the process from terminating forever. However I don't think that this is your case, since you did not mention that the process is stuck doing nothing, but from what you said the process simply seems to continue running.
|
How ensure subprocess is killed on timeout when using `run`?
|
I am using the following code to launch a subprocess :
# Run the program
subprocess_result = subprocess.run(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
check=False,
timeout=timeout,
cwd=directory,
env=env,
preexec_fn=set_memory_limits,
)
The launched subprocess is also a Python program, with a shebang.
This subprocess may last for longer than the specified timeout.
The subprocess does heavy computations, writes results to a file, and does not contain any signal handler.
According to the documentation https://docs.python.org/3/library/subprocess.html#subprocess.run, subprocess.run kills a child that times out:
The timeout argument is passed to Popen.communicate(). If the timeout
expires, the child process will be killed and waited for. The
TimeoutExpired exception will be re-raised after the child process has
terminated.
When my subprocess times out, I always receive the subprocess.TimeoutExpired exception, but from time to time the subprocess is not killed, hence still consuming resources on my machine.
So my question is, am I doing something wrong here ? If yes, what and if no, why do I have this issue and how can I solve it ?
Note: I am using Python 3.10 on Ubuntu 22.04
|
[
"The most likely culprit for the behaviour you see is that the subprocess you are spawning is probably using multiprocessing and spawning its own child processes. Killing the parent process does not automatically kill the whole set of descendants. The granchildren are inherited by the init process (i.e. the process with PID 1) and will continue to run.\nYou can verify from the source code of suprocess.run :\nwith Popen(*popenargs, **kwargs) as process:\n try:\n stdout, stderr = process.communicate(input, timeout=timeout)\n except TimeoutExpired as exc:\n process.kill()\n if _mswindows:\n # Windows accumulates the output in a single blocking\n # read() call run on child threads, with the timeout\n # being done in a join() on those threads. communicate()\n # _after_ kill() is required to collect that and add it\n # to the exception.\n exc.stdout, exc.stderr = process.communicate()\n else:\n # POSIX _communicate already populated the output so\n # far into the TimeoutExpired exception.\n process.wait()\n raise\n except: # Including KeyboardInterrupt, communicate handled that.\n process.kill()\n # We don't call process.wait() as .__exit__ does that for us.\n raise\n\nHere you can see at line 550 the timeout is set on the communicate call, if it fires at line 552 the subprocess is .kill()ed. The kill method sends a SIGKILL which immediately kills the subprocess without any cleanup. It's a signal that cannot be caught by the subprocess, so it's not possible that the child is somehow ignoring it.\nThe TimeoutException is then re-raised at line 564, so if your parent process sees this exception the subprocess is already dead.\nThis however says nothing of granchildren processes. Those will continue to run as children of PID 1.\nI don't see any way in which you can customize how subprocess.run handles subprocess termination. For example, if it used SIGTERM instead of SIGKILL you could modify your child process or write a wrapper process that will catch the signal and properly kill all its descendants. But SIGKILL doesn't give you this luxury.\nSo I believe that for your use case you cannot use the subprocess.run facade but you should use Popen directly. You can look at the subprocess.run implementation and take just the things that you need, maybe dropping support for platforms you don't use.\n\nNote: There are extremely rare situations in which the subprocesses won't die immediately on SIGKILL. I believe the only situation in which this happens is if the subprocess is performing a very long system call or other kernel operation, which might not be interrupted immediately. If the operation is in deadlock this might prevent the process from terminating forever. However I don't think that this is your case, since you did not mention that the process is stuck doing nothing, but from what you said the process simply seems to continue running.\n"
] |
[
0
] |
[] |
[] |
[
"kill_process",
"python",
"subprocess"
] |
stackoverflow_0074524193_kill_process_python_subprocess.txt
|
Q:
Why is plotly express so much more performant than plotly graph_objects?
I'm visualizing scatterplots with between 400K and 2.5M points. I expected to need to downsample before visualizing, but to see just how much, I ran a pilot test with a 400k dataset in plotly express, and the plot popped up quickly, beautifully, and responsively.
In order to make the interactive figure I really need to use plotly.graph_objects, as I need multiple traces with different colorscales, so I made basically the same graph with graph_objects and it wasn't just slower, it crashed my computer.
I'd really like to downsample as little as possible and I'm surprised by the sheer performance difference between these two approaches so I guess that boils down to my question:
Why is there such a performance difference and is it possible to change layout/figure/whatever parameters in graph_objects so to close the gap?
Here is a snippet to show what I mean by basically the same graph:
graph_objects
fig = go.Figure()
fig.add_trace(go.Scatter(x = x_values, y = y_values, opacity = opacity, marker = {
'size': size,
'color': community,
'colorscale': colorscale
}))
express
pacmap_map = px.scatter(x = x_values, y = y_values, color_continuous_scale=colorscale, opacity = opacity, color = community)
pacmap_map.update_traces(marker = {
'size': size
})
I would have expected performance to either be identical or at least in the same ballpark, but express works like a dream and graph_objects crashes the jupyter kernel and whatever IDE it is running from, so a large difference.
A:
Running the following simple example:
import numpy as np
import plotly.graph_objects as go
import plotly.express as px
x = np.linspace(-2, 2, 100000)
y = np.cos(x)
fig = go.Figure(data=[go.Scatter(x=x, y=y)])
fig2 = px.scatter(x=x, y=y)
type(fig.data[0]), type(fig2.data[0])
# out: (plotly.graph_objs._scatter.Scatter, plotly.graph_objs._scattergl.Scattergl)
As you can see, plotly express appears to switch to Scattergl when the number of points is higher than some threshold. Scattergl renders on an html5 canvas, hence it uses the GPU (hence efficiency). Whereas Scatter creates svg objects that get inserted in the current document, consuming muuuuuch more memory.
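So a minimal sketch of closing the gap in graph_objects is to opt into WebGL explicitly via go.Scattergl (same trace options as go.Scatter; x and y reused from the example above):
fig3 = go.Figure(data=[go.Scattergl(x=x, y=y, mode='markers')])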
|
Why is plotly express so much more performant than plotly graph_objects?
|
I'm visualizing scatterplots with between 400K and 2.5M points. I expected to need to downsample before visualizing, but to see just how much, I ran a pilot test with a 400k dataset in plotly express, and the plot popped up quickly, beautifully, and responsively.
In order to make the interactive figure I really need to use plotly.graph_objects, as I need multiple traces with different colorscales, so I made basically the same graph with graph_objects and it wasn't just slower, it crashed my computer.
I'd really like to downsample as little as possible and I'm surprised by the sheer performance difference between these two approaches so I guess that boils down to my question:
Why is there such a performance difference and is it possible to change layout/figure/whatever parameters in graph_objects so to close the gap?
Here is a snippet to show what I mean by basically the same graph:
graph_objects
fig = go.Figure()
fig.add_trace(go.Scatter(x = x_values, y = y_values, opacity = opacity, marker = {
'size': size,
'color': community,
'colorscale': colorscale
}))
express
pacmap_map = px.scatter(x = x_values, y = y_values, color_continuous_scale=colorscale, opacity = opacity, color = community)
pacmap_map.update_traces(marker = {
'size': size
})
I would have expected performance to either be identical or at least in the same ballpark, but express works like a dream and graph_objects crashes the jupyter kernel and whatever IDE it is running from, so a large difference.
|
[
"Running the following simple example:\nimport numpy as np\nimport plotly.graph_objects as go\nimport plotly.express as px\n\nx = np.linspace(-2, 2, 100000)\ny = np.cos(x)\n\nfig = go.Figure(data=[go.Scatter(x=x, y=y)])\nfig2 = px.scatter(x=x, y=y)\n\ntype(fig.data[0]), type(fig2.data[0])\n# out: (plotly.graph_objs._scatter.Scatter, plotly.graph_objs._scattergl.Scattergl)\n\nAs you can see, plotly express appears to switch to Scattergl when the number of points is higher than some threshold. Scattergl renders on an html5 canvas, hence it uses the GPU (hence efficiency). Whereas Scatter creates svg objects that get inserted in the current document, consuming muuuuuch more memory.\n"
] |
[
3
] |
[] |
[] |
[
"plotly",
"plotly_express",
"python"
] |
stackoverflow_0074536056_plotly_plotly_express_python.txt
|
Q:
How to generate a unique name for each uploaded file?
I am building a Flask website, and I want to save a path of a file to my sqlite database
I have a "create" view, where user uploads an image and it gets stored in a folder
@app.route('/create', methods = ["GET", "POST"])
@login_required
def create():
if request.method == "POST":
file = request.files['file']
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
return render_template('create.html')
I want to store a 'path' to each image in my sqlite database. To do this, I think, every filename should be unique. So how should I generate a unique name for each uploaded file?
A:
An easy way to generate a unique file name would be to just use a numbering system (first file being 1, then increasing by 1). Like so:
counter = 0 #put this at the beginning of your script
then when creating file name:
 counter += 1
 filename = str(counter)  # convert to str so it can be used as a file name
If you do not want your files to be named just numbers you could hash the number to generate a unique code, like so:
import hashlib

counter += 1
filename = str(counter).encode("utf-8")  # str() first: an int has no .encode()
filename = hashlib.sha224(filename).hexdigest()
If doing the latter, you would want to save this code somewhere associated with the user so you can open it again.
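As a side note (not part of the answer above), a minimal sketch using the standard library's uuid module avoids keeping any counter state while preserving the original extension (file comes from the question's view):
import os
import uuid

ext = os.path.splitext(file.filename)[1]
filename = uuid.uuid4().hex + ext  # e.g. '3f2a9c...d1.png', unique in practice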
|
How to generate a unique name for each uploaded file?
|
I am building a Flask website, and I want to save a path of a file to my sqlite database
I have a "create" view, where user uploads an image and it gets stored in a folder
@app.route('/create', methods = ["GET", "POST"])
@login_required
def create():
if request.method == "POST":
file = request.files['file']
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
return render_template('create.html')
I want to store a 'path' to each image in my sqlite database. To do this, I think, every filename should be unique. So how should I generate a unique name for each uploaded file?
|
[
"An easy way to generate a unique file name would be to just use a numbering system (first file being 1, then increasing by 1). Like so:\n counter = 0 #put this at the beginning of your script\n\nthen when creating file name:\n counter += 1\n filename = counter\n\nIf you do not want your files to be named just numbers you could hash the number to generate a unique code, like so:\ncounter += 1\nfilename = counter.encode(\"utf-8\")\nfilename = hashlib.sha224(filename).hexdigest()\n\nIf doing the latter, you would want to save this code somewhere associated with the user so you can open it again.\n"
] |
[
1
] |
[] |
[] |
[
"flask",
"python"
] |
stackoverflow_0074538676_flask_python.txt
|
Q:
How do I control where pip3 installs packages?
I updated pip3 and now packages are being installed for python 3.8 and not 3.9. What do I do to make it so that packages are installed to where they used to be installed?
I updated pip3 today using the command pip3 install --upgrade pip and then installed a new package with pip3 install statsmodels which did indeed install the package. I checked the install location with pip3 list -v and saw that the package and its dependencies were installed to /Users/myusername/Library/Python/3.8/lib/python/site-packages.
The problem is... this is not where my other packages are. pip3 list -v doesn't show any of the packages I've installed in the past (for example, I know I have matplotlib installed but it doesn't show up). I only use python 3.9, so I don't want any of my packages being installed for python 3.8 but don't know how to fix this.
A:
If you want to change your default installation target
The --target switch is the thing you're looking for:
pip config set global.target /Users/Bob/Library/Python/3.8/lib/python/site-packages
Installs packages into the directory provided. By default this will not replace existing files/folders in the directory. Use --upgrade to replace existing packages with new versions.
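Alternatively, the same option can be passed on a single install instead of being set globally (the path here is just an example):
pip3 install --target=/Users/Bob/Library/Python/3.9/lib/python/site-packages statsmodels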
|
How do I control where pip3 installs packages?
|
I updated pip3 and now packages are being installed for python 3.8 and not 3.9. What do I do to make it so that packages are installed to where they used to be installed?
I updated pip3 today using the command pip3 install --upgrade pip and then installed a new package with pip3 install statsmodels which did indeed install the package. I checked the install location with pip3 list -v and saw that the package and its dependencies were installed to /Users/myusername/Library/Python/3.8/lib/python/site-packages.
The problem is... this is not where my other packages are. pip3 list -v doesn't show any of the packages I've installed in the past (for example, I know I have matplotlib installed but it doesn't show up). I only use python 3.9, so I don't want any of my packages being installed for python 3.8 but don't know how to fix this.
|
[
"If you want to change your default installation target\nThe --target switch is the thing you're looking for:\npip config set global.target /Users/Bob/Library/Python/3.8/lib/python/site-packages\n\nInstalls packages into directory provied. By default this will not replace existing files/folders in the directory. Use --upgrade to replace existing packages in with new versions.\n"
] |
[
0
] |
[] |
[] |
[
"pip",
"python"
] |
stackoverflow_0074538632_pip_python.txt
|
Q:
Using Tweepy and Twitter API v2 to retrieve replies from a single Tweet
I'm currently trying to generate the replies for a single tweet, but can't retrieve all of them. While it works to retrieve some, adding .flatten(limit=1000) breaks my code and will return an error.
I need to return all replies from a single tweet and am using paginator to do so, but for some reason am only seeing 6 replies of the multiple hundred.
import csv
from multiprocessing.connection import Client
import tweepy
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
# Oauth and client creation
bearer_token = "XXX"
client = tweepy.Client(bearer_token)
# update these for the tweet you want to process replies to 'name' = the account username and you can find the tweet id within the tweet URL
#name = 'XXX'
#tweet_id = 'XXX'
q = 'conversation_id:XXX'
for tweet_batch in tweepy.Paginator(client.search_recent_tweets, query=q,
tweet_fields=['context_annotations','created_at', 'public_metrics', 'author_id'],
user_fields=['name','username','location','verified','description'],
max_results=100, expansions='author_id'):
tweets = tweet_batch.data
users = tweet_batch.includes["users"]
users = {user["id"]: user for user in users}
print(len(tweets),len(users))
with open('replies_clean.csv', 'w') as f:
csv_writer = csv.DictWriter(f, fieldnames=('user', 'text'))
csv_writer.writeheader()
for tweet in tweets:
user = users[tweet.author_id]
row = {'user': '@' + user.username, 'text': tweet.text.replace('\n', ' ')}
csv_writer.writerow(row)
A:
It's a small issue with your code. In the for loop you are overwriting the tweets and users lists on each pass instead of appending to them. Corrected loop:
q = 'conversation_id:XXX'
tweets = []
users = []
for tweet_batch in tweepy.Paginator(client.search_recent_tweets, query=q,
tweet_fields=['context_annotations','created_at', 'public_metrics', 'author_id'],
user_fields=['name','username','location','verified','description'],
max_results=100, expansions='author_id'):
for tweet in tweet_batch.data:
tweets.append(tweet)
for user in tweet_batch.includes["users"]:
users.append(user)
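With the lists accumulated across all pages, the CSV writing from the question can run once after the loop; a sketch reusing the question's own column layout:
import csv

users_by_id = {user["id"]: user for user in users}

with open('replies_clean.csv', 'w') as f:
    csv_writer = csv.DictWriter(f, fieldnames=('user', 'text'))
    csv_writer.writeheader()
    for tweet in tweets:
        user = users_by_id[tweet.author_id]
        csv_writer.writerow({'user': '@' + user.username,
                             'text': tweet.text.replace('\n', ' ')})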
|
Using Tweepy and Twitter API v2 to retrieve replies from a single Tweet
|
I'm currently trying to generate the replies for a single tweet, but can't retrieve all of them. While it works to retrieve some, adding .flatten(limit=1000) breaks my code and will return an error.
I need to return all replies from a single tweet and am using paginator to do so, but for some reason am only seeing 6 replies of the multiple hundred.
import csv
from multiprocessing.connection import Client
import tweepy
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
# Oauth and client creation
bearer_token = "XXX"
client = tweepy.Client(bearer_token)
# update these for the tweet you want to process replies to 'name' = the account username and you can find the tweet id within the tweet URL
#name = 'XXX'
#tweet_id = 'XXX'
q = 'conversation_id:XXX'
for tweet_batch in tweepy.Paginator(client.search_recent_tweets, query=q,
tweet_fields=['context_annotations','created_at', 'public_metrics', 'author_id'],
user_fields=['name','username','location','verified','description'],
max_results=100, expansions='author_id'):
tweets = tweet_batch.data
users = tweet_batch.includes["users"]
users = {user["id"]: user for user in users}
print(len(tweets),len(users))
with open('replies_clean.csv', 'w') as f:
csv_writer = csv.DictWriter(f, fieldnames=('user', 'text'))
csv_writer.writeheader()
for tweet in tweets:
user = users[tweet.author_id]
row = {'user': '@' + user.username, 'text': tweet.text.replace('\n', ' ')}
csv_writer.writerow(row)
|
[
"It's a small issue with your code. In the for loop you are overwriting the tweets and users lists with each pass instead of appending to it. Corrected loop:\nq = 'conversation_id:XXX'\ntweets = []\nusers = []\n\nfor tweet_batch in tweepy.Paginator(client.search_recent_tweets, query=q,\n tweet_fields=['context_annotations','created_at', 'public_metrics', 'author_id'],\n user_fields=['name','username','location','verified','description'],\n max_results=100, expansions='author_id'):\n\n for tweet in tweet_batch.data:\n tweets.append(tweet)\n for user in tweet_batch.includes[\"users\"]:\n users.append(user)\n\n"
] |
[
0
] |
[] |
[] |
[
"paginator",
"python",
"tweepy",
"twitter"
] |
stackoverflow_0073085747_paginator_python_tweepy_twitter.txt
|
Q:
Splitting a txt file into individual words and writing them to a new txt file
I wanted to take a txt file that is formatted as such:
apple banana peach pear
(item then space then next item) and then print it so that it prints as:
apple
banana
peach
pear
At the same time, it has to write to the file called output.txt in the same manner (each word on a new line). My code so far is as follows, and I would appreciate it if changes were made to it rather than replacing it with entirely new code.
def task_2():
in_file = open("input.txt", "r")
out_file = open("output.txt", "w")
line = in_file.line()
words = line.strip()
for word in words:
print(f'word\n')
out_file.write(word)
in_file.close()
out_file.close()
A:
Your code has a lot of small mistakes:
def task_2():
in_file = open("input.txt", "r") # this is better with a context manager
out_file = open("output.txt", "w")
line = in_file.line() # .line() does not exist, you wanted .readline()
words = line.strip() # you want .split() here
for word in words:
print(f'word\n') # you use an f-string, but forgot the {}
out_file.write(word)
in_file.close() # with a context manager, these are not needed
out_file.close()
So:
def task_2():
with open("input.txt", "r") as in_file:
with open("output.txt", "w") as out_file:
line = in_file.readline()
words = line.split()
for word in words:
print(f'{word}') # also, print automatically writes a newline
out_file.write(f'{word}\n') # but .write() doesn't
task_2() # let's also call the function
However, you can combine those two context manager with lines into a single one and save a few needless variables as well:
def task_2():
with open("input.txt", "r") as in_file, open("output.txt", "w") as out_file:
for word in in_file.readline().split():
print(f'{word}')
out_file.write(f'{word}\n')
task_2()
|
Splitting a txt file into individual words and writing them to a new txt file
|
I wanted to take a txt file that is formatted as such:
apple banana peach pear
(item then space then next item) and then print it so that it prints as:
apple
banana
peach
pear
At the same time, it has to write to the file called output.txt in the same manner (each word on a new line). My code so far is as follows, and I would appreciate it if changes were made to it rather than replacing it with entirely new code.
def task_2():
in_file = open("input.txt", "r")
out_file = open("output.txt", "w")
line = in_file.line()
words = line.strip()
for word in words:
print(f'word\n')
out_file.write(word)
in_file.close()
out_file.close()
|
[
"Your code has a lot of small mistakes:\ndef task_2():\n in_file = open(\"input.txt\", \"r\") # this is better with a context manager\n out_file = open(\"output.txt\", \"w\")\n line = in_file.line() # .line() does not exist, you wanted .readline()\n words = line.strip() # you want .split() here\n for word in words:\n print(f'word\\n') # you use an f-string, but forgot the {}\n out_file.write(word)\n in_file.close() # with a context manager, these are not needed\n out_file.close()\n\nSo:\ndef task_2():\n with open(\"input.txt\", \"r\") as in_file:\n with open(\"output.txt\", \"w\") as out_file:\n line = in_file.readline()\n words = line.split()\n for word in words:\n print(f'{word}') # also, print automatically writes a newline\n out_file.write(f'{word}\\n') # but .write() doesn't\n\n\ntask_2() # let's also call the function\n\nHowever, you can combine those two context manager with lines into a single one and save a few needless variables as well:\ndef task_2():\n with open(\"input.txt\", \"r\") as in_file, open(\"output.txt\", \"w\") as out_file:\n for word in in_file.readline().split():\n print(f'{word}')\n out_file.write(f'{word}\\n')\n\n\ntask_2()\n\n"
] |
[
0
] |
[] |
[] |
[
"object",
"python"
] |
stackoverflow_0074538694_object_python.txt
|
Q:
How to annotate grouped bars with group count instead of bar height
To draw plot, I am using seaborn and below is my code
import seaborn as sns
sns.set_theme(style="whitegrid")
tips = sns.load_dataset("tips")
tips=tips.head()
ax = sns.barplot(x="day", y="total_bill",hue="sex", data=tips, palette="tab20_r")
I want to get and print the frequency of the plotted data, that is, the number of times each combination occurred; below is the expected image
To add labels to the bars, I have used the below code
for rect in ax.patches:
y_value = rect.get_height()
x_value = rect.get_x() + rect.get_width() / 2
space = 1
label = "{:.0f}".format(y_value)
ax.annotate(label, (x_value, y_value), xytext=(0, space), textcoords="offset points", ha='center', va='bottom')
plt.show()
So, with the above code I am able to display the height with respect to the x-axis, but I don't want the height. I want the frequency/count that satisfies the relationship. For the above example, there are 2 males and 3 females who gave a tip on Sunday, so it should display 2 and 3 and not the amount of the tip
Below is the code
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme(style="whitegrid")
df = sns.load_dataset("tips")
ax = sns.barplot(x='day', y='tip',hue="sex", data=df, palette="tab20_r")
for rect in ax.patches:
y_value = rect.get_height()
x_value = rect.get_x() + rect.get_width() / 2
space = 1
label = "{:.0f}".format(y_value)
ax.annotate(label, (x_value, y_value), xytext=(0, space), textcoords="offset points", ha='center', va='bottom')
plt.show()
A:
How to display custom values on a bar plot does not clearly show how to annotate grouped bars, nor does it show how to determine the frequency of each hue category for each day.
How to plot and annotate grouped bars in seaborn / matplotlib shows how to annotate grouped bars, but not with custom labels.
for rect in ax.patches is an obsolete way to annotate bars. Use matplotlib.pyplot.bar_label, as fully described in How to add value labels on a bar chart.
Use pandas.crosstab or pandas.DataFrame.groupby to calculate the count of each category by the hue group.
As tips.info() shows, several columns have a category Dtype, which ensures the plotting order and is why the tp.index and tp.columns order matches the x-axis and hue order of ax. Use pandas.Categorical to set a column to a category Dtype.
Tested in python 3.11, pandas 1.5.2, matplotlib 3.6.2, seaborn 0.12.1
import pandas as pd
import seaborn as sns
# load the data
tips = sns.load_dataset('tips')
# determine the number of each gender for each day
tp = pd.crosstab(tips.day, tips.sex)
# or use groupby
# tp = tips.groupby(['day', 'sex']).sex.count().unstack('sex')
# plot the data
ax = sns.barplot(x='day', y='total_bill', hue='sex', data=tips)
# move the legend if needed
sns.move_legend(ax, bbox_to_anchor=(1, 1.02), loc='upper left', frameon=False)
# iterate through each group of bars, zipped to the corresponding column name
for c, col in zip(ax.containers, tp):
# add bar labels with custom annotation values
ax.bar_label(c, labels=tp[col], padding=3, label_type='center')
DataFrame Views
tips
tips.head()
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
tips.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 244 entries, 0 to 243
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 total_bill 244 non-null float64
1 tip 244 non-null float64
2 sex 244 non-null category
3 smoker 244 non-null category
4 day 244 non-null category
5 time 244 non-null category
6 size 244 non-null int64
dtypes: category(4), float64(2), int64(1)
memory usage: 7.4 KB
tp
sex Male Female
day
Thur 30 32
Fri 10 9
Sat 59 28
Sun 58 18
|
How to annotate grouped bars with group count instead of bar height
|
To draw plot, I am using seaborn and below is my code
import seaborn as sns
sns.set_theme(style="whitegrid")
tips = sns.load_dataset("tips")
tips=tips.head()
ax = sns.barplot(x="day", y="total_bill",hue="sex", data=tips, palette="tab20_r")
I want to get and print the frequency of the plotted data, that is, the number of times each combination occurred; below is the expected image
To add labels to the bars, I have used the below code
for rect in ax.patches:
y_value = rect.get_height()
x_value = rect.get_x() + rect.get_width() / 2
space = 1
label = "{:.0f}".format(y_value)
ax.annotate(label, (x_value, y_value), xytext=(0, space), textcoords="offset points", ha='center', va='bottom')
plt.show()
So, with the above code I am able to display the height with respect to the x-axis, but I don't want the height. I want the frequency/count that satisfies the relationship. For the above example, there are 2 males and 3 females who gave a tip on Sunday, so it should display 2 and 3 and not the amount of the tip
Below is the code
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme(style="whitegrid")
df = sns.load_dataset("tips")
ax = sns.barplot(x='day', y='tip',hue="sex", data=df, palette="tab20_r")
for rect in ax.patches:
y_value = rect.get_height()
x_value = rect.get_x() + rect.get_width() / 2
space = 1
label = "{:.0f}".format(y_value)
ax.annotate(label, (x_value, y_value), xytext=(0, space), textcoords="offset points", ha='center', va='bottom')
plt.show()
|
[
"\nHow to display custom values on a bar plot does not clearly show how to annotate grouped bars, nor does it show how to determine the frequency of each hue category for each day.\nHow to plot and annotate grouped bars in seaborn / matplotlib shows how to annotate grouped bars, but not with custom labels.\nfor rect in ax.patches is an obsolete way to annotate bars. Use matplotlib.pyplot.bar_label, as fully described in How to add value labels on a bar chart.\nUse pandas.crosstab or pandas.DataFrame.groupby to calculate the count of each category by the hue group.\nAs tips.info() shows, several columns have a category Dtype, which insures the plotting order and why the tp.index and tp.column order matches the x-axis and hue order of ax. Use pandas.Categorical to set a column to a category Dtype.\nTested in python 3.11, pandas 1.5.2, matplotlib 3.6.2, seaborn 0.12.1\n\nimport pandas as pd\nimport seaborn as sns\n\n# load the data\ntips = sns.load_dataset('tips')\n\n# determine the number of each gender for each day\ntp = pd.crosstab(tips.day, tips.sex)\n\n# or use groupby\n# tp = tips.groupby(['day', 'sex']).sex.count().unstack('sex')\n\n# plot the data\nax = sns.barplot(x='day', y='total_bill', hue='sex', data=tips)\n\n# move the legend if needed\nsns.move_legend(ax, bbox_to_anchor=(1, 1.02), loc='upper left', frameon=False)\n\n# iterate through each group of bars, zipped to the corresponding column name\nfor c, col in zip(ax.containers, tp):\n \n # add bar labels with custom annotation values\n ax.bar_label(c, labels=tp[col], padding=3, label_type='center')\n\n\nDataFrame Views\ntips\ntips.head()\n\n total_bill tip sex smoker day time size\n0 16.99 1.01 Female No Sun Dinner 2\n1 10.34 1.66 Male No Sun Dinner 3\n2 21.01 3.50 Male No Sun Dinner 3\n3 23.68 3.31 Male No Sun Dinner 2\n4 24.59 3.61 Female No Sun Dinner 4\n\ntips.info()\n\n<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 244 entries, 0 to 243\nData columns (total 7 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 total_bill 244 non-null float64 \n 1 tip 244 non-null float64 \n 2 sex 244 non-null category\n 3 smoker 244 non-null category\n 4 day 244 non-null category\n 5 time 244 non-null category\n 6 size 244 non-null int64 \ndtypes: category(4), float64(2), int64(1)\nmemory usage: 7.4 KB\n\ntp\nsex Male Female\nday \nThur 30 32\nFri 10 9\nSat 59 28\nSun 58 18\n\n"
] |
[
1
] |
[] |
[] |
[
"grouped_bar_chart",
"matplotlib",
"pandas",
"python",
"seaborn"
] |
stackoverflow_0074524083_grouped_bar_chart_matplotlib_pandas_python_seaborn.txt
|
Q:
Check if inputted date is under 18
How can I check, given an inputted date of birth, whether the person is under 18?
year=int(input("Year born: "))
month = int(input("Month born: "))
day = int(input("Day born: "))
date = date(year,month,day)
What code can I use with date.today() in order to check if the user is under 18? Because if I just subtract 2022 - year, the person could still be 17 if they were born in December.
UPDATE:
def is_under_18(birth):
year=int(input("Year born: "))
month = int(input("Month born: "))
day = int(input("Day born: "))
date = date(year,month,day)
now = date.today()
return (
now.year - birth.year < 18
or now.year - birth.year == 18 and (
now.month < birth.month
or now.month == birth.month and now.day <= birth.day
)
)
Should it be like this? I didn't understand. I would also like to add: if they are over 18, then print("You are over 18").
A:
You can compare all date parts sequentially:
from datetime import date
def is_under_18(birth):
now = date.today()
return (
now.year - birth.year < 18
or now.year - birth.year == 18 and (
now.month < birth.month
or now.month == birth.month and now.day <= birth.day
)
)
In most countries people are considered to have age of n+1 at the next day after their birthday, so the last comparison uses <=.
You may also consider dateutil library that provides relativedelta class that gives you better difference approach (full years + full months + full days + ...).
The function above accepts one argument (birth), which should be a date inputted by your user.
year = int(input("Year born: "))
month = int(input("Month born: "))
day = int(input("Day born: "))
# Please, STOP using `date` as variable name here!
birth = date(year,month,day)
if is_under_18(birth):
print('Under 18')
else:
print('Adult')
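Since the answer mentions dateutil, here is a minimal sketch of the same check with relativedelta, assuming the python-dateutil package is installed:
from datetime import date
from dateutil.relativedelta import relativedelta

def is_under_18(birth):
    # .years is the number of full years between the two dates
    return relativedelta(date.today(), birth).years < 18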
|
Check if inputted date is under 18
|
How can I check, given an inputted date of birth, whether the person is under 18?
year=int(input("Year born: "))
month = int(input("Month born: "))
day = int(input("Day born: "))
date = date(year,month,day)
What code can I use with date.today() in order to check if the user is under 18? Because if I just subtract 2022 - year, the person could still be 17 if they were born in December.
UPDATE:
def is_under_18(birth):
year=int(input("Year born: "))
month = int(input("Month born: "))
day = int(input("Day born: "))
date = date(year,month,day)
now = date.today()
return (
now.year - birth.year < 18
or now.year - birth.year == 18 and (
now.month < birth.month
or now.month == birth.month and now.day <= birth.day
)
)
Should it be like this? I didn't understand. I would also like to add: if they are over 18, then print("You are over 18").
|
[
"You can compare all date parts sequentially:\nfrom datetime import date\n\ndef is_under_18(birth):\n now = date.today()\n return (\n now.year - birth.year < 18\n or now.year - birth.year == 18 and (\n now.month < birth.month \n or now.month == birth.month and now.day <= birth.day\n )\n )\n\n\nIn most countries people are considered to have age of n+1 at the next day after their birthday, so the last comparison uses <=.\nYou may also consider dateutil library that provides relativedelta class that gives you better difference approach (full years + full months + full days + ...).\nThe function above accepts one argument (birth), which should be a date inputted by your user.\nyear = int(input(\"Year born: \"))\nmonth = int(input(\"Month born: \"))\nday = int(input(\"Day born: \"))\n# Please, STOP using `date` as variable name here!\nbirth = date(year,month,day)\n\nif is_under_18(birth):\n print('Under 18')\nelse:\n print('Adult')\n\n"
] |
[
0
] |
[] |
[] |
[
"date",
"python"
] |
stackoverflow_0074538629_date_python.txt
|
Q:
How to assign array in a dataframe to a variable
I need to fetch the array field in my dataframe and assign it to a variable for further processing. I am using the collect() function, but it's not working properly.
Input dataframe:
Department    Language
[A, B, C]     English
[]            Spanish
How can I fetch and assign variables like below:
English = [A,B,C]
Spanish = []
A:
The simplest solution I came with is just extracting data with collect and explicitly assigning it to the predefined variables, like so:
from pyspark.sql.types import StringType, ArrayType, StructType, StructField
schema = StructType([
StructField("Department", ArrayType(StringType()), True),
StructField("Language", StringType(), True)
])
df = spark.createDataFrame([(["A", "B", "C"], "English"), ([], "Spanish")], schema)
English = df.collect()[0]["Department"]
Spanish = df.collect()[1]["Department"]
print(f"English: {English}, Spanish: {Spanish}")
# English: ['A', 'B', 'C'], Spanish: []
A:
EDIT: I completely brain-farted and missed that this was a PySpark question.
The below code might still be helpful if you convert your PySpark DataFrame to pandas, which for your situation might not be as ridiculous as it sounds. If the table is too big to fit in a pandas DataFrame then it's too big to store all arrays in a variable. You can probably use .filter() and .select() to shrink it first.
Old Answer:
The best way to approach this really depends on the complexity of your dataframe. Here are two ways:
# To recreate your dataframe
df = pd.DataFrame({
'Department': [['A','B', 'C']],
'Language': 'English'
})
df.loc[df.Language == 'English']
# Will return all rows where Language is English. If you only want Department then:
df.loc[df.Language == 'English'].Department
# This will return a list containing your list. If you are always expecting a single match add [0] as in:
df.loc[df.Language == 'English'].Department[0]
#Which will return only your list
# The alternate method below isn't great but might be preferable in some circumstances, also only if you expect a single match from any query.
department_lookup = df[['Language', 'Department']].set_index('Language').to_dict()['Department']
department_lookup['English']
#returns your list
# This will make a dictionary where 'Language' is the key and 'Department' is the value. It is more work to set up and only works for a two-column relationship but you might prefer working with dictionaries depending on the use-case
If you're having datatype issues it may deal with how the DataFrame is being loaded rather than how you're accessing it. Pandas loves to convert lists to strings.
# If I saved and reload the df as so:
df.to_csv("the_df.csv")
df = pd.read_csv("the_df.csv")
# Then we would see that the dtype has become a string, as in "[A, B, C]" rather than ["A", "B", "C"]
# We can typically correct this by giving pandas a method for converting the incoming string to a list. This is done with the 'converters' argument, which takes a dictionary where the keys are column names and the values are functions, as such:
df = pd.read_csv("the_df.csv", converters = {"Department": lambda x: x.strip("[]").split(", "))
# df['Department'] should have a dtype of list
It's important to note that the lambda function is only reliable if python has converted a python list into a string in order to store the dataframe. Converting a list string into a list has been addressed here
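One robust converter for strings that look like quoted Python lists (e.g. "['A', 'B', 'C']") is ast.literal_eval; a sketch, assuming that is how the lists were stringified:
import ast
import pandas as pd

df = pd.read_csv("the_df.csv", converters={"Department": ast.literal_eval})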
|
How to assign array in a dataframe to a variable
|
I need to fetch the array field in my dataframe and assign it to a variable for further processing. I am using the collect() function, but it's not working properly.
Input dataframe:
Department    Language
[A, B, C]     English
[]            Spanish
How can I fetch and assign variables like below:
English = [A,B,C]
Spanish = []
|
[
"The simplest solution I came with is just extracting data with collect and explicitly assigning it to the predefined variables, like so:\nfrom pyspark.sql.types import StringType, ArrayType, StructType, StructField\n\nschema = StructType([\n StructField(\"Department\", ArrayType(StringType()), True),\n StructField(\"Language\", StringType(), True)\n ])\n\ndf = spark.createDataFrame([([\"A\", \"B\", \"C\"], \"English\"), ([], \"Spanish\")], schema)\n\nEnglish = df.collect()[0][\"Department\"]\nSpanish = df.collect()[1][\"Department\"]\nprint(f\"English: {English}, Spanish: {Spanish}\")\n\n# English: ['A', 'B', 'C'], Spanish: []\n\n",
"EDIT: I completely brain-farted and missed that this was a PySpark question.\nThe below code might still be helpful if you convert your PySpark Dataframe to pandas, which for your situation might not be as ridiculous as it sounds. If the table is too big to fit in a pandas DataFrame then its too big to store all arrays in a variable. You can probably use .filter() and .select() to shrink it first.\nOld Answer:\n\nThe best way to approach this really depends on the complexity of your dataframe. Here are two ways:\n# To recreate your dataframe\n\ndf = pd.DataFrame({\n 'Department': [['A','B', 'C']],\n 'Language': 'English'\n})\n\ndf.loc[df.Language == 'English']\n# Will return all rows where Language is English. If you only want Department then:\n\ndf.loc[df.Language == 'English'].Department\n# This will return a list containing your list. If you are always expecting a single match add [0] as in:\n\ndf.loc[df.Language == 'English'].Department[0]\n#Which will return only your list\n# The alternate method below isn't great but might be preferable in some circumstances, also only if you expect a single match from any query.\n\ndepartment_lookup = df[['Language', 'Department']].set_index('Language').to_dict()['Department']\n\ndepartment_lookup['English']\n#returns your list\n\n# This will make a dictionary where 'Language' is the key and 'Department' is the value. It is more work to set up and only works for a two-column relationship but you might prefer working with dictionaries depending on the use-case\n\n\nIf you're having datatype issues it may deal with how the DataFrame is being loaded rather than how you're accessing it. Pandas loves to convert lists to strings.\n\n# If I saved and reload the df as so: \ndf.to_csv(\"the_df.csv\")\ndf = pd.read_csv(\"the_df.csv\")\n\n# Then we would see that the dtype has become a string, as in \"[A, B, C]\" rather than [\"A\", \"B\", \"C\"]\n\n# We can typically correct this by giving pandas a method for converting the incoming string to list. This is done with the 'converters' argument, which takes a dictionary where trhe keys are column names and the values are functions, as such:\n\ndf = pd.read_csv(\"the_df.csv\", converters = {\"Department\": lambda x: x.strip(\"[]\").split(\", \"))\n\n# df['Department'] should have a dtype of list\n\n\nIts important to note that the lambda function is only reliable if python has converted a python list into a string in order to store the dataframe. Converting a list string into a list has been addressed here\n"
] |
[
2,
1
] |
[] |
[] |
[
"function",
"pyspark",
"python"
] |
stackoverflow_0074538022_function_pyspark_python.txt
|
Q:
How to search through an array of integers and strings and return the name and score of the highest competitor - python 3.10
This will probably be comically simple, but I can't figure out the answer. Basically, I have to search through a 2D array, which is something I can't remember how to do.
What I have figured out doesn't help, so I was looking for some help.
the code I have is as follows:
competitors = [["John", 11], ["Jenny", 13], ["Matthew", 3], ["Bev", 22], ["Claire", 12]]
A:
The max() function can take an optional key argument that lets you specify what Python should look at to determine the maximum value. This is especially helpful when you have complex objects and need to decide whether the max should be, for example, the alphabetical sort of the name as a string, or the top value of the integer. In our case, we want the value at index 1 of each item (the integer) to determine the maximum.
competitors = [
["John", 11],
["Jenny", 13],
["Matthew", 3],
["Bev", 22],
["Claire", 12],
]
top_competitor = max(competitors, key=lambda x: x[1])
print(top_competitor) # ['Bev', 22]
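For reference, the same key can be written with operator.itemgetter, and sorted gives the full ranking rather than just the top entry:
from operator import itemgetter

top_competitor = max(competitors, key=itemgetter(1))
ranking = sorted(competitors, key=itemgetter(1), reverse=True)

print(ranking[0])  # ['Bev', 22]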
|
How to search through an array of integers and strings and return the name and score of the highest competitor - python 3.10
|
This will probably be comically simple, but I can't figure out the answer. Basically, I have to search through a 2D array, which is something I can't remember how to do.
What I have figured out doesn't help, so I was looking for some help.
the code I have is as follows:
competitors = [["John", 11], ["Jenny", 13], ["Matthew", 3], ["Bev", 22], ["Claire", 12]]
|
[
"The max() function can take an optional key argument that lets you specify what python should be looking specifically to determine the maximum value, especially helpful when you have complex objects and you need to determine whether the max is the alphabetical sort of the name as a string (for example), or the top value of the integer. In our case, we want the 1st index value from each item (the integer) to determine maximum values.\ncompetitors = [\n [\"John\", 11],\n [\"Jenny\", 13],\n [\"Matthew\", 3],\n [\"Bev\", 22],\n [\"Claire\", 12],\n]\n\ntop_competitor = max(competitors, key=lambda x: x[1])\n\nprint(top_competitor) # ['Bev', 22]\n\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074538873_python.txt
|
Q:
Python Error (TypeError: 'pygame.Surface' object is not callable)
I have a problem with Python when creating functions with images; here is the code:
This usually happens when creating classes with a function I want to run that displays an image to the screen.
import pygame
white = (255,255,255)
width,height = 800,500
win = pygame.display.set_mode((width,height))
pygame.display.set_caption("Test")
class Game:
def __init__(self):
self.player = pygame.image.load("graphics/player/player.png")
def player(self):
win.blit(self.player, (0,0))
pygame.display.update()
game = Game()
def functions():
game.player()
def loop():
flag = True
clock = pygame.time.Clock()
while flag:
clock.tick(60)
for event in pygame.event.get():
if event.type == pygame.QUIT:
flag = False
pygame.quit()
functions()
pygame.display.update()
loop()
The error message is:
Traceback (most recent call last):
File "C:", line 35, in <module>
loop()
File "C:", line 32, in loop
functions()
File "C:", line 19, in functions
game.player()
TypeError: 'pygame.Surface' object is not callable
A:
class Game:
def __init__(self):
self.player = pygame.image.load("graphics/player/player.png")
def player(self):
win.blit(self.player, (0,0))
pygame.display.update()
__init__() defines an attribute named self.player.
There is also a class instance method named player.
You can't do that. Pick a different name for one of them.
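A minimal sketch of one way to resolve it, renaming the method (draw_player is an arbitrary choice):
class Game:
    def __init__(self):
        # the attribute keeps the name 'player'
        self.player = pygame.image.load("graphics/player/player.png")

    def draw_player(self):  # renamed so it no longer shadows the attribute
        win.blit(self.player, (0, 0))
        pygame.display.update()

game = Game()
game.draw_player()  # and call the new name in functions()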
|
Python Error (TypeError: 'pygame.Surface' object is not callable)
|
I have a problem with Python when creating functions with images; here is the code:
This usually happens when creating classes with a function I want to run that displays an image to the screen.
import pygame
white = (255,255,255)
width,height = 800,500
win = pygame.display.set_mode((width,height))
pygame.display.set_caption("Test")
class Game:
def __init__(self):
self.player = pygame.image.load("graphics/player/player.png")
def player(self):
win.blit(self.player, (0,0))
pygame.display.update()
game = Game()
def functions():
game.player()
def loop():
flag = True
clock = pygame.time.Clock()
while flag:
clock.tick(60)
for event in pygame.event.get():
if event.type == pygame.QUIT:
flag = False
pygame.quit()
functions()
pygame.display.update()
loop()
The error message is:
Traceback (most recent call last):
File "C:", line 35, in <module>
loop()
File "C:", line 32, in loop
functions()
File "C:", line 19, in functions
game.player()
TypeError: 'pygame.Surface' object is not callable
|
[
"class Game:\n def __init__(self):\n self.player = pygame.image.load(\"graphics/player/player.png\")\n def player(self):\n win.blit(self.player, (0,0))\n pygame.display.update()\n\n__init__() defines an attribute named self.player.\nThere is also a class instance method named player.\nYou can't do that. Pick a different name for one of them.\n"
] |
[
0
] |
[] |
[] |
[
"error_handling",
"python"
] |
stackoverflow_0074538734_error_handling_python.txt
|
Q:
Stop canvas from leaving frame tkinter
I am trying to create pong and I have basic movement of the paddle, but how would you go about keeping the paddle on the screen? I have tried to read the coords and only allow movement if the coords are less than a certain amount, but the coords stay the same no matter the actual position of the paddle. Thoughts?
import tkinter as tk
ypong = 0
def keyup(e):
global pongMovement
global pongEdit
global Pong
pongMovement = pongMovement - 10
pongEdit.place(x=1, y=pongMovement)
pongEdit.update()
print('YAY')
def keydown(e):
global pongMovement
global pongEdit
global Pong
pongMovement = pongMovement + 10
pongEdit.place(x=1, y=pongMovement)
pongEdit.update()
print('YAY')
pongMovement = 0
window = tk.Tk()
window.geometry('600x600')
window.resizable(width=False, height=False)
window.bind('<Up>', keyup)
window.bind('<Down>', keydown)
pongEdit = tk.Canvas(window, width=10, height=100, bg="black")
Pong = pongEdit.create_rectangle(10, 10, 20, 20, fill="red")
pongEdit.pack()
pongEdit.place(x=1, y=pongMovement)
window.mainloop()
A:
Just adding conditionals that check pongMovement seems to do it:
import tkinter as tk
ypong = 0
def keyup(e):
global pongMovement
global pongEdit
global Pong
if pongMovement > 0:
pongMovement = pongMovement - 10
pongEdit.place(x=1, y=pongMovement)
pongEdit.update()
print('YAY')
def keydown(e):
global pongMovement
global pongEdit
global Pong
if pongMovement < 500:
pongMovement = pongMovement + 10
print(pongMovement)
pongEdit.place(x=1, y=pongMovement)
pongEdit.update()
print('YAY')
pongMovement = 0
window = tk.Tk()
window.geometry('600x600')
window.resizable(width=False, height=False)
window.bind('<Up>', keyup)
window.bind('<Down>', keydown)
pongEdit = tk.Canvas(window, width=10, height=100, bg="black")
Pong = pongEdit.create_rectangle(10, 10, 20, 20, fill="red")
pongEdit.pack()
pongEdit.place(x=1, y=pongMovement)
window.mainloop()
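An equivalent, slightly more compact variant clamps the value instead of guarding each direction; move is a hypothetical helper name, and 500 assumes the 600px window height minus the 100px paddle height:
def move(delta):
    global pongMovement
    pongMovement = max(0, min(500, pongMovement + delta))
    pongEdit.place(x=1, y=pongMovement)

window.bind('<Up>', lambda e: move(-10))
window.bind('<Down>', lambda e: move(10))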
|
Stop canvas from leaving frame tkinter
|
I am trying to create pong and I have basic movement of the paddle, but how would you go about keeping the paddle on the screen? I have tried to read the coords and only allow movement if the coords are less than a certain amount, but the coords stay the same no matter the actual position of the paddle. Thoughts?
import tkinter as tk
ypong = 0
def keyup(e):
global pongMovement
global pongEdit
global Pong
pongMovement = pongMovement - 10
pongEdit.place(x=1, y=pongMovement)
pongEdit.update()
print('YAY')
def keydown(e):
global pongMovement
global pongEdit
global Pong
pongMovement = pongMovement + 10
pongEdit.place(x=1, y=pongMovement)
pongEdit.update()
print('YAY')
pongMovement = 0
window = tk.Tk()
window.geometry('600x600')
window.resizable(width=False, height=False)
window.bind('<Up>', keyup)
window.bind('<Down>', keydown)
pongEdit = tk.Canvas(window, width=10, height=100, bg="black")
Pong = pongEdit.create_rectangle(10, 10, 20, 20, fill="red")
pongEdit.pack()
pongEdit.place(x=1, y=pongMovement)
window.mainloop()
|
[
"Just adding conditionals that check pongMovement seems to do it:\nimport tkinter as tk\n\nypong = 0\n\n\ndef keyup(e):\n global pongMovement\n global pongEdit\n global Pong\n if pongMovement > 0:\n pongMovement = pongMovement - 10\n pongEdit.place(x=1, y=pongMovement)\n pongEdit.update()\n print('YAY')\n\n\ndef keydown(e):\n global pongMovement\n global pongEdit\n global Pong\n if pongMovement < 500:\n pongMovement = pongMovement + 10\n print(pongMovement)\n pongEdit.place(x=1, y=pongMovement)\n pongEdit.update()\n print('YAY')\n\n\npongMovement = 0\n\nwindow = tk.Tk()\nwindow.geometry('600x600')\nwindow.resizable(width=False, height=False)\nwindow.bind('<Up>', keyup)\nwindow.bind('<Down>', keydown)\npongEdit = tk.Canvas(window, width=10, height=100, bg=\"black\")\nPong = pongEdit.create_rectangle(10, 10, 20, 20, fill=\"red\")\npongEdit.pack()\n\npongEdit.place(x=1, y=pongMovement)\n\nwindow.mainloop()\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"tkinter",
"tkinter_canvas"
] |
stackoverflow_0074538852_python_tkinter_tkinter_canvas.txt
|
Q:
How to use pandas.dataframe.corr with only a specific number of columns?
Let's say for example I have a dataset with 1000 rows, and 10 variables:
Now, let's say I want to calculate the correlation between the first 4 variables... How would I go about doing this?
import pandas as pd
df = pd.read_csv('random_data.csv')
df.corr()[0:4]
The code I have calculates the correlation between the first 4 variables and all 10 variables in the dataset. How would I adjust this to make it a 4x4 correlation matrix and not a 4x10 correlation matrix?
Any help is appreciated, thank you!
A:
To do this you want to use a subset of the dataframe that contains only the columns you want:
df[['col1', 'col2', 'col3', 'col4']].corr()
or, selecting the first 4 columns by position:
df.iloc[:, :4].corr()
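If the full matrix has already been computed, the same 4x4 block can also be sliced out of it afterwards:
import pandas as pd

df = pd.read_csv('random_data.csv')
corr_4x4 = df.corr().iloc[:4, :4]  # first 4 rows and first 4 columns of the matrix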
|
How to use pandas.dataframe.corr with only a specific number of columns?
|
Let's say for example I have a dataset with 1000 rows, and 10 variables:
Now, let's say I want to calculate the correlation between the first 4 variables... How would I go about doing this?
import pandas as pd
df = pd.read_csv('random_data.csv')
df.corr()[0:4]
The code I have calculates the correlation between the first 4 variables and all 10 variables in the dataset. How would I adjust this to make it a 4x4 correlation matrix and not a 4x10 correlation matrix?
Any help is appreciated, thank you!
|
[
"To do this you want to use a subset of the dataframe that contains only the columns you want.\ndf[['col1', 'col2', 'col3', 'col4']].corr()\nOR\ndf.iloc[:, :4].corr() to select first 4 columns\n"
] |
[
2
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074538936_dataframe_pandas_python.txt
|
Q:
Handling try except multiple times while web scraping BeautifulSoup
While web scraping using BeautifulSoup I have to write try/except multiple times. See the code below:
try:
addr1 = soup.find('span', {'class' : 'addr1'}).text
except:
addr1 = ''
try:
addr2 = soup.find('span', {'class' : 'addr2'}).text
except:
addr2 = ''
try:
city = soup.find('strong', {'class' : 'city'}).text
except:
city = ''
The problem is that I have to write try except multiple times and that is very annoying. I want to write a function to handle the exception.
I tried to use the following function but it is still showing an error:
def datascraping(var):
try:
return var
except:
return None
addr1 = datascraping(soup.find('span', {'class' : 'addr1'}).text)
addr2 = datascraping(soup.find('span', {'class' : 'addr2'}).text)
Can anyone help me to solve the issue?
A:
Use a for loop that iterates through a sequence containing your arguments. Then use a conditional statement that checks if the return value is None, prior to attempting to get the text attribute. Then store the results in a dictionary. This way there is no need to use try/except at all.
seq = [('span', 'addr1'), ('span', 'addr2'), ('strong', 'city')]
results = {}
for tag, value in seq:
var = soup.find(tag, {'class': value})
if var is not None:
results[value] = var.text
else:
results[value] = ''
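If you prefer keeping one variable per field like the original code: the reason the datascraping helper failed is that soup.find(...).text raises before the function is even called, because arguments are evaluated at the call site. A small helper that does the find itself avoids that; get_text is a hypothetical name:
def get_text(soup, tag, cls, default=''):
    node = soup.find(tag, {'class': cls})
    return node.text if node is not None else default

addr1 = get_text(soup, 'span', 'addr1')
addr2 = get_text(soup, 'span', 'addr2')
city = get_text(soup, 'strong', 'city')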
|
Handling try except multiple times while web scraping BeautifulSoup
|
While web scraping using BeautifulSoup I have to write try/except multiple times. See the code below:
try:
addr1 = soup.find('span', {'class' : 'addr1'}).text
except:
addr1 = ''
try:
addr2 = soup.find('span', {'class' : 'addr2'}).text
except:
addr2 = ''
try:
city = soup.find('strong', {'class' : 'city'}).text
except:
city = ''
The problem is that I have to write try except multiple times and that is very annoying. I want to write a function to handle the exception.
I tried to use the following function but it is still showing an error:
def datascraping(var):
try:
return var
except:
return None
addr1 = datascraping(soup.find('span', {'class' : 'addr1'}).text)
addr2 = datascraping(soup.find('span', {'class' : 'addr2'}).text)
Can anyone help me to solve the issue?
|
[
"Use a for loop that iterates through a sequence containing your arguments. Then use a conditional statement that checks if the return value is None, prior to attempting to get the text attribute. Then store the results in a dictionary. This way there is no need to use try/except at all.\nseq = [('span', 'addr1'), ('span', 'addr2'), ('strong', 'city')]\nresults = {}\nfor tag, value in seq:\n var = soup.find(tag, {'class': value})\n if var is not None:\n results[value] = var.text\n else:\n results[value] = ''\n\n"
] |
[
0
] |
[] |
[] |
[
"beautifulsoup",
"python"
] |
stackoverflow_0074538717_beautifulsoup_python.txt
|
Q:
How to send push notifications to iOS using a python api
I have created a webscraper that sends notifications to my phone whenever certain events are detected. So far I have achieved this by sending emails through the sendgrid api. It's a pretty nice service, and it is free, but it clutters up the mailbox quite a bit.
Instead, I'd like to send messages directly to the iOS notification bar. Does anyone here have experience with sending push notifications to iOS and can point me in the correct direction? I would be happy with a subscription service, but would of course prefer a solution that does not require a third party, if that is possible.
I have tested PushNotifier, but I found it a bit clunky, and the notifications are neither customisable nor beautiful. It's also not a free service; a free option would have been a great plus.
A:
Maybe you should check out pushover.net. They have a simple WebAPI to send customized notifications to iOS devices.
See https://support.pushover.net/i44-example-code-and-pushover-libraries#python for code samples.
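A minimal sketch of posting a Pushover message with requests; the token and user key values are placeholders you get from your pushover.net dashboard:
import requests

requests.post("https://api.pushover.net/1/messages.json", data={
    "token": "APP_TOKEN",  # placeholder: your application's API token
    "user": "USER_KEY",    # placeholder: your user key
    "message": "Scraper detected an event",
})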
|
How to send push notifications to iOS using a python api
|
I have created a webscraper that sends notifications to my phone whenever certain events are detected. So far I have achieved this by sending emails through the sendgrid api. It's a pretty nice service, and it is free, but it clutters up the mailbox quite a bit.
Instead, I'd like to send messages directly to the iOS notification bar. Does anyone here have experience with sending push notifications to iOS and can point me in the correct direction? I would be happy with a subscription service, but would of course prefer a solution that does not require a third party, if that is possible.
I have tested PushNotifier, but I found it a bit clunky, and the notifications are neither customisable nor beautiful. It's also not a free service; a free option would have been a great plus.
|
[
"Maybe you should check out pushover.net. They have a simple WebAPI to send customized notifications to iOS devices.\nSee https://support.pushover.net/i44-example-code-and-pushover-libraries#python for code samples.\n"
] |
[
1
] |
[] |
[] |
[
"iphone",
"push_notification",
"python"
] |
stackoverflow_0074538831_iphone_push_notification_python.txt
|
Q:
How to use python output as input for next step in argo workflow?
Based on the last line of the output of a python script (I cannot adapt the output format) I want to trigger multiple new steps in argo-wf. How can I ignore all output lines except the last one in the example below? I cannot adapt the python code, so I guess I have to include an additional step to exclude all lines except the last one.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: loops-param-result-
spec:
entrypoint: loop-param-result-example
templates:
- name: loop-param-result-example
steps:
- - name: generate
template: gen-number-list
- - name: sleep
template: sleep-n-sec
arguments:
parameters:
- name: seconds
value: "{{item}}"
withParam: "{{steps.generate.outputs.result}}"
- name: gen-number-list
script:
image: python:alpine3.6
command: [python]
source: |
import json
import sys
print("abc")
print("def")
print("1, 2, 3")
- name: sleep-n-sec
inputs:
parameters:
- name: seconds
container:
image: alpine:latest
command: [sh, -c]
args: ["echo sleeping for {{inputs.parameters.seconds}} seconds; sleep {{inputs.parameters.seconds}}; echo done"]
As I am not able to adapt the python-script I cannot use the standard-solution as shown below:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: loops-param-result-
spec:
entrypoint: loop-param-result-example
templates:
- name: loop-param-result-example
steps:
- - name: generate
template: gen-number-list
- - name: sleep
template: sleep-n-sec
arguments:
parameters:
- name: seconds
value: "{{item}}"
withParam: "{{steps.generate.outputs.result}}"
- name: gen-number-list
script:
image: python:alpine3.6
command: [python]
source: |
import json
import sys
json.dump([i for i in range(20, 31)], sys.stdout)
- name: sleep-n-sec
inputs:
parameters:
- name: seconds
container:
image: alpine:latest
command: [sh, -c]
args: ["echo sleeping for {{inputs.parameters.seconds}} seconds; sleep {{inputs.parameters.seconds}}; echo done"]
A:
I never got the
command: [ python ]
source: |
  import json
form to work; however, this works for me:
command: [ python, -c ]
args: &script
  - |
    import json
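On the original question of keeping only the last line: a hedged sketch (not verified against a live controller) of an intermediate script step whose stdout becomes its outputs.result. It takes the generator's result as a parameter and prints only the final line, so the sleep step's withParam could reference {{steps.last-line.outputs.result}} instead:
- name: last-line
  inputs:
    parameters:
    - name: raw
  script:
    image: python:alpine3.6
    command: [python]
    source: |
      # Argo substitutes the parameter into the script body;
      # whatever this prints becomes outputs.result
      lines = """{{inputs.parameters.raw}}""".strip().splitlines()
      print(lines[-1])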
|
How to use python output as input for next step in argo workflow?
|
Based on the last line of the output of a python script (I cannot adapt the output format) I want to trigger multiple new steps in argo-wf. How can I ignore all output lines except the last one in the example below? I cannot adapt the python code, so I guess I have to include an additional step to exclude all lines except the last one.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: loops-param-result-
spec:
entrypoint: loop-param-result-example
templates:
- name: loop-param-result-example
steps:
- - name: generate
template: gen-number-list
- - name: sleep
template: sleep-n-sec
arguments:
parameters:
- name: seconds
value: "{{item}}"
withParam: "{{steps.generate.outputs.result}}"
- name: gen-number-list
script:
image: python:alpine3.6
command: [python]
source: |
import json
import sys
print("abc")
print("def")
print("1, 2, 3")
- name: sleep-n-sec
inputs:
parameters:
- name: seconds
container:
image: alpine:latest
command: [sh, -c]
args: ["echo sleeping for {{inputs.parameters.seconds}} seconds; sleep {{inputs.parameters.seconds}}; echo done"]
As I am not able to adapt the python-script I cannot use the standard-solution as shown below:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: loops-param-result-
spec:
entrypoint: loop-param-result-example
templates:
- name: loop-param-result-example
steps:
- - name: generate
template: gen-number-list
- - name: sleep
template: sleep-n-sec
arguments:
parameters:
- name: seconds
value: "{{item}}"
withParam: "{{steps.generate.outputs.result}}"
- name: gen-number-list
script:
image: python:alpine3.6
command: [python]
source: |
import json
import sys
json.dump([i for i in range(20, 31)], sys.stdout)
- name: sleep-n-sec
inputs:
parameters:
- name: seconds
container:
image: alpine:latest
command: [sh, -c]
args: ["echo sleeping for {{inputs.parameters.seconds}} seconds; sleep {{inputs.parameters.seconds}}; echo done"]
|
[
"I never got the\ncommand: [ python ]\nsource |\n import json\n\nto work, however, this works for me:\n command: [ python, -c ]\n args: &script\n - |\n import json\n\n"
] |
[
0
] |
[] |
[] |
[
"argo",
"python"
] |
stackoverflow_0074275358_argo_python.txt
|
Q:
How to add python version as environment variable in poetry?
I have created a simple django project using poetry in my local machine , the pyproject.toml is the following
[tool.poetry]
name = "vending-machine-api"
version = "0.1.0"
description = ""
authors = ["mohamed ibrahim"]
readme = "README.md"
packages = [{include = "vending_machine_api"}]
[tool.poetry.dependencies]
python = "^3.10"
django = "^4.1.3"
django-rest-knox = "^4.2.0"
djangorestframework = "^3.14.0"
django-model-utils = "^4.3.1"
drf-access-policy = "^1.3.0"
pre-commit = "^2.20.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
It pins python 3.10 from the machine I created the project on.
I'm trying to run the project on another machine which has Python version 3.9, but I'm getting this error:
The currently activated Python version 3.9.6 is not supported by the project (^3.10).
Is there any way to change the Python version constraint of the created project so it is compatible with older versions?
A:
It's not using an environment variable like you asked, but this solved the issue of incompatibility:
python = ">=3.9,<3.11"
to add a range of compatible python versions
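In context, only the constraint line of the pyproject.toml from the question changes:
[tool.poetry.dependencies]
python = ">=3.9,<3.11"
django = "^4.1.3"
After editing, regenerating the lock file (poetry lock) is typically needed before poetry install will accept the new range.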
|
How to add python version as environment variable in poetry?
|
I have created a simple django project using poetry in my local machine , the pyproject.toml is the following
[tool.poetry]
name = "vending-machine-api"
version = "0.1.0"
description = ""
authors = ["mohamed ibrahim"]
readme = "README.md"
packages = [{include = "vending_machine_api"}]
[tool.poetry.dependencies]
python = "^3.10"
django = "^4.1.3"
django-rest-knox = "^4.2.0"
djangorestframework = "^3.14.0"
django-model-utils = "^4.3.1"
drf-access-policy = "^1.3.0"
pre-commit = "^2.20.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
It pins python 3.10 from the machine I created the project on.
I'm trying to run the project on another machine which has Python version 3.9, but I'm getting this error:
The currently activated Python version 3.9.6 is not supported by the project (^3.10).
Is there any way to change the Python version constraint of the created project so it is compatible with older versions?
|
[
"It's not using an environment variable like you asked, but this solved the issue of incompatibility:\npython = \">=3.9,<3.11\"\n\nto add a range of compatible python versions\n"
] |
[
1
] |
[] |
[] |
[
"django",
"pip",
"python",
"python_poetry"
] |
stackoverflow_0074539017_django_pip_python_python_poetry.txt
|
Q:
applying function to list of dataframes in python
beginner python question here that I've had struggles getting answered from related stack questions.
I've got a list
dfList = df0,df1,df2,...,df7
I've got a function that I've defined and takes a dataframe as its argument. I'm not sure the function itself matters, but to be safe it is basically
def rateCalc (outcomeDataFrame):
rateList = list()
upperRateList = list()
lowerRateList = list()
for i in range(len(outcomeDataFrame)):
lowlevel, highlevel = proportion_confint(count=outcomeDataFrame.iloc[i,4], nobs=outcomeDataFrame.iloc[i,3])
lowerRateList.append(lowlevel)
rateList.append(outcomeDataFrame.iloc[i,4]/outcomeDataFrame.iloc[i,3])
upperRateList.append(highlevel)
outcomeDataFrame = outcomeDataFrame.assign(lowerRate=lowerRateList)
outcomeDataFrame = outcomeDataFrame.assign(midrate=rateList)
outcomeDataFrame = outcomeDataFrame.assign(upperRate=upperRateList)
return outcomeDataFrame
What I'm trying to do is append the observed success ratio of two numbers as well as their 95% confidence interval. Goes fine when working with any individual df.
What I want to accomplish is turn each item of dfList into a version of itself with those lowerRate, midRate, and higherRate values appended as new columns.
When I try to apply across each dataframe with
for i in range(len(dfList)):
rateCalc(dfList[i])
though, it seems to only execute for df0. I can't make any sense of that; with a full error I'd assume I had some basic flaw in the code, but it seems to work for df0 and then not iterate to df1 and beyond.
I also thought there may be an issue of "df1 != dfList[1]" in some backend sense (that running the function on the item in a list dfList[1] would not have any affect on the original item df1) but, again, the fact it seems to work with df0 would imply that's not the issue.
I also tried throwing some mud at the wall with the "map" function but am not sure I understand how to use that in this context (or any other for that matter ha)
Thanks all
A:
I think it is because the assing function returns another Data Frame which only exists inside the function scope, here is an example
import pandas as pd
df_0 = pd.DataFrame(data = [{'column':'a'}])
df_1 = pd.DataFrame(data = [{'column':'c'}])
df_2 = pd.DataFrame(data = [{'column':'d'}])
df_altos = df_0,df_1,df_2
def mod_df(df):
test = list()
test.append('d')
#print('id before setting another column '+str(id(df)))
#df['b'] = test
print('id before assigning '+str(id(df)))
    df = df.assign(lowerRate = test)
    print('id after assigning '+str(id(df)))
return df
for i in range(len(df_altos)):
mod_df(df_altos[i])
The printed id of each dataframe is the following:
id before assigning 1833832455136
id after assigning 1833832523568
id before assigning 1833832456144
id after assigning 1833832525776
id before assigning 1833832454416
id after assigning 1833832521888
As you can see, the id changes.
You could try another assignment method, like the following:
def mod_df(df):
test = list()
test.append('d')
print('id before setting another column '+str(id(df)))
df['b'] = test
print('id after assigning '+str(id(df)))
return df
which outputs
id before setting another column 1833831955520
id after assigning 1833831955520
id before setting another column 1833791973888
id after assigning 1833791973888
id before setting another column 1833791973264
id after assigning 1833791973264
Now the ids are the same and the new column exists on all the dataframes.
Why the first dataframe of your code appeared to work, I don't know.
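Given the explanation above, a minimal fix for the original loop is to keep the frames that rateCalc returns instead of discarding them; a sketch:
# dfList was created as a tuple (df0, df1, ...), so build a new list
# from the returned frames rather than assigning back by index
dfList = [rateCalc(df) for df in dfList]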
|
applying function to list of dataframes in python
|
beginner python question here that I've had struggles getting answered from related stack questions.
I've got a list
dfList = df0,df1,df2,...,df7
I've got a function that I've defined and takes a dataframe as its argument. I'm not sure the function itself matters, but to be safe it is basically
def rateCalc (outcomeDataFrame):
rateList = list()
upperRateList = list()
lowerRateList = list()
for i in range(len(outcomeDataFrame)):
lowlevel, highlevel = proportion_confint(count=outcomeDataFrame.iloc[i,4], nobs=outcomeDataFrame.iloc[i,3])
lowerRateList.append(lowlevel)
rateList.append(outcomeDataFrame.iloc[i,4]/outcomeDataFrame.iloc[i,3])
upperRateList.append(highlevel)
outcomeDataFrame = outcomeDataFrame.assign(lowerRate=lowerRateList)
outcomeDataFrame = outcomeDataFrame.assign(midrate=rateList)
outcomeDataFrame = outcomeDataFrame.assign(upperRate=upperRateList)
return outcomeDataFrame
What I'm trying to do is append the observed success ratio of two numbers as well as their 95% confidence interval. Goes fine when working with any individual df.
What I want to accomplish is turn each item of dfList into a version of itself with those lowerRate, midRate, and higherRate values appended as new columns.
When I try to apply across each dataframe with
for i in range(len(dfList)):
rateCalc(dfList[i])
though, it seems to only execute for df0. I can't make any sense of that; with a full error I'd assume I had some basic flaw in the code, but it seems to work for df0 and then not iterate to df1 and beyond.
I also thought there may be an issue of "df1 != dfList[1]" in some backend sense (that running the function on the item in a list dfList[1] would not have any affect on the original item df1) but, again, the fact it seems to work with df0 would imply that's not the issue.
I also tried throwing some mud at the wall with the "map" function but am not sure I understand how to use that in this context (or any other for that matter ha)
Thanks all
|
[
"I think it is because the assing function returns another Data Frame which only exists inside the function scope, here is an example\nimport pandas as pd\ndf_0 = pd.DataFrame(data = [{'column':'a'}])\ndf_1 = pd.DataFrame(data = [{'column':'c'}])\ndf_2 = pd.DataFrame(data = [{'column':'d'}])\ndf_altos = df_0,df_1,df_2\n\ndef mod_df(df):\n test = list()\n test.append('d')\n #print('id before setting another column '+str(id(df)))\n #df['b'] = test\n print('id before assinging '+str(id(df)))\n df = df.assign(lowerRate = test)\n print('id after assinging '+str(id(df)))\n return df\n\nfor i in range(len(df_altos)):\n mod_df(df_altos[i])\n\nThe returning id of each dataframe is the following\nid before assinging 1833832455136\nid after assinging 1833832523568\nid before assinging 1833832456144\nid after assinging 1833832525776\nid before assinging 1833832454416\nid after assinging 1833832521888\n\nAs you can see, the id changes.\nYou could try another atribution method, as the following\ndef mod_df(df):\n test = list()\n test.append('d')\n print('id before setting another column '+str(id(df)))\n df['b'] = test\n print('id after assinging '+str(id(df)))\n return df\n\nwhich outputs\nid before setting another column 1833831955520\nid after assinging 1833831955520\nid before setting another column 1833791973888\nid after assinging 1833791973888\nid before setting another column 1833791973264\nid after assinging 1833791973264\n\nNow the ids are the same and the new column exists on all the dataframes.\nHow the first dataframe of you code was working i dont know.\n"
] |
[
1
] |
[] |
[] |
[
"apply",
"dataframe",
"list",
"pandas",
"python"
] |
stackoverflow_0074538258_apply_dataframe_list_pandas_python.txt
|
Q:
How to optimize reading and cleaning file?
I have a file, which contains strings separated by spaces, tabs and carriage return:
one two
three
four
I'm trying to remove all spaces, tabs and carriage return:
def txt_cleaning(fname):
with open(fname) as f:
new_txt = []
fname = f.readline().strip()
new_txt += [line.split() for line in f.readlines()]
return new_txt
Output:
[['one'], ['two'], [], ['three'], [], ['four']]
Expecting, without importing libraries:
['one', 'two', 'three', 'four']
A:
def txt_cleaning(fname):
new_text = []
with open(fname) as f:
for line in f.readlines():
new_text += [s.strip() for s in line.split() if s]
return new_text
Or
def txt_cleaning(fname):
with open(fname) as f:
return [word.strip() for word in f.read().split() if word]
A:
My method:
use read (not readline) to get the whole text in a single element
replace tabs and newlines with a space
split
def txt_cleaning(fname):
with open(fname) as f:
return f.read().replace( '\t', ' ').replace( '\n', ' ').split()
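For what it's worth, str.split() with no arguments already splits on any run of whitespace (spaces, tabs and newlines) and drops the empty strings, so both answers can be reduced to this minimal sketch:
def txt_cleaning(fname):
    with open(fname) as f:
        return f.read().split()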
|
How to optimize reading and cleaning file?
|
I have a file, which contains strings separated by spaces, tabs and carriage return:
one two
three
four
I'm trying to remove all spaces, tabs and carriage return:
def txt_cleaning(fname):
with open(fname) as f:
new_txt = []
fname = f.readline().strip()
new_txt += [line.split() for line in f.readlines()]
return new_txt
Output:
[['one'], ['two'], [], ['three'], [], ['four']]
Expecting, without importing libraries:
['one', 'two', 'three', 'four']
|
[
"def txt_cleaning(fname):\n new_text = []\n with open(fname) as f:\n for line in f.readlines():\n new_text += [s.strip() for s in line.split() if s]\n return new_text\n\nOr\ndef txt_cleaning(fname):\n with open(fname) as f:\n return [word.strip() for word in f.read().split() if word]\n\n",
"My method:\n\nuse read (not readline) to get the whole text in a single element\nreplace tabs and newlines with a space\nsplit\n\ndef txt_cleaning(fname):\n with open(fname) as f:\n return f.read().replace( '\\t', ' ').replace( '\\n', ' ').split()\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"file",
"python",
"txt"
] |
stackoverflow_0074537382_file_python_txt.txt
|
Q:
String-based enum in Python
To encapsulate a list of states I am using enum module:
from enum import Enum
class MyEnum(Enum):
state1='state1'
state2 = 'state2'
state = MyEnum.state1
MyEnum['state1'] == state # here it works
'state1' == state # here it does not throw but returns False (fail!)
However, the issue is that I need to seamlessly use the values as strings in many contexts in my script, like:
select_query1 = select(...).where(Process.status == str(MyEnum.state1)) # works but ugly
select_query2 = select(...).where(Process.status == MyEnum.state1) # throws exception
How to do it avoiding calling additional type conversion (str(state) above) or the underlying value (state.value)?
A:
It seems that it is enough to inherit from str class at the same time as Enum:
from enum import Enum
class MyEnum(str, Enum):
state1 = 'state1'
state2 = 'state2'
The tricky part is that the order of classes in the inheritance chain is important as this:
class MyEnum(Enum, str):
state1 = 'state1'
state2 = 'state2'
throws:
TypeError: new enumerations should be created as `EnumName([mixin_type, ...] [data_type,] enum_type)`
With the correct class the following operations on MyEnum are fine:
print('This is the state value: ' + state)
As a side note, it seems that the special inheritance trick is not needed for formatted strings which work even for Enum inheritance only:
msg = f'This is the state value: {state}' # works without inheriting from str
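A quick check of the comparison from the question, assuming the str-mixin class above:
state = MyEnum.state1
print('state1' == state) # True once str is mixed in
print(state.upper())     # 'STATE1' - plain str methods work too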
A:
By reading the documentation (i.e., I didn't try it because I use an older version of Python, but I trust the docs), since Python 3.11 you can do the following:
from enum import StrEnum
class Directions(StrEnum):
NORTH = 'north', # notice the trailing comma
SOUTH = 'south'
print(Directions.NORTH)
>>> north
Please refer to the docs and the design discussion for further understanding.
If you're running python 3.6+, execute pip install StrEnum, and then you can do the following (confirmed by me):
from strenum import StrEnum
class URLs(StrEnum):
GOOGLE = 'www.google.com'
STACKOVERFLOW = 'www.stackoverflow.com'
print(URLs.STACKOVERFLOW)
>>> www.stackoverflow.com
You can read more about it here.
Also, this was mentioned in the docs - how to create your own enums based on other classes:
While IntEnum is part of the enum module, it would be very simple to
implement independently:
class IntEnum(int, Enum):
    pass

This demonstrates how similar derived enumerations can be defined; for example a StrEnum that mixes in str instead of int.
Some rules:
When subclassing Enum, mix-in types must appear before Enum itself in
the sequence of bases, as in the IntEnum example above.
While Enum can have members of any type, once you mix in an additional
type, all the members must have values of that type, e.g. int above.
This restriction does not apply to mix-ins which only add methods and
don’t specify another type.
When another data type is mixed in, the value attribute is not the
same as the enum member itself, although it is equivalent and will
compare equal.
%-style formatting: %s and %r call the Enum class's __str__() and __repr__() respectively; other codes (such as %i or %h for IntEnum) treat the enum member as its mixed-in type.
Formatted string literals, str.format(), and format() will use the mixed-in type's __format__() unless __str__() or __format__() is overridden in the subclass, in which case the overridden methods or Enum methods will be used. Use the !s and !r format codes to force usage of the Enum class's __str__() and __repr__() methods.
Source: https://docs.python.org/3/library/enum.html#others
A:
While a mixin class between str and Enum can solve this problem, you should always also think about getting the right tool for the job.
And sometimes, the right tool could easily just be a MODULE_CONSTANT with a string value. For example, logging has a few constants like DEBUG, INFO, etc with meaningful values - even if they're ints in this case.
Enums are a good tool and I often use them. However, they're intended to be primarily compared against other members of the same Enum, which is why comparing them to, for example, strings requires you to jump through an additional hoop.
A:
If associated string values are valid Python names then you can get names of enum members using .name property like this:
from enum import Enum
class MyEnum(Enum):
state1=0
state2=1
print (MyEnum.state1.name) # 'state1'
a = MyEnum.state1
print(a.name) # 'state1'
If associated string values are arbitrary strings then you can do this:
class ModelNames(str, Enum):
gpt2 = 'gpt2'
distilgpt2 = 'distilgpt2'
gpt2_xl = 'gpt2-XL'
gpt2_large = 'gpt2-large'
print(ModelNames.gpt2) # 'ModelNames.gpt2'
print(ModelNames.gpt2 is str) # False
print(ModelNames.gpt2_xl.name) # 'gpt2_xl'
print(ModelNames.gpt2_xl.value) # 'gpt2-XL'
Try this online: https://repl.it/@sytelus/enumstrtest
A:
If you want to work with strings directly, you could consider using
MyEnum = collections.namedtuple(
"MyEnum", ["state1", "state2"]
)(
state1="state1",
state2="state2"
)
rather than enum at all. Iterating over this or doing MyEnum.state1 will give the string values directly. Creating the namedtuple within the same statement means there can only be one.
Obviously there are trade offs for not using Enum, so it depends on what you value more.
A:
what is wrong with using the value?
Imho, unless using Python version 3.11 with StrEnum I just override the __str__(self) method in the proper Enum class:
class MyStrEnum(str, Enum):
OK = 'OK'
FAILED = 'FAILED'
def __str__(self) -> str:
return self.value
Best
A:
With auto:
from enum import Enum, auto
class AutoStrEnum(str, Enum):
"""
StrEnum where auto() returns the field name.
See https://docs.python.org/3.9/library/enum.html#using-automatic-values
"""
@staticmethod
def _generate_next_value_(name: str, start: int, count: int, last_values: list) -> str:
return name
class MyEnum(AutoStrEnum):
STATE_1 = auto()
STATE_2 = auto()
Try it:
MyEnum.STATE_1 == "STATE_1" # True
|
String-based enum in Python
|
To encapsulate a list of states I am using enum module:
from enum import Enum
class MyEnum(Enum):
state1='state1'
state2 = 'state2'
state = MyEnum.state1
MyEnum['state1'] == state # here it works
'state1' == state # here it does not throw but returns False (fail!)
However, the issue is that I need to seamlessly use the values as strings in many contexts in my script, like:
select_query1 = select(...).where(Process.status == str(MyEnum.state1)) # works but ugly
select_query2 = select(...).where(Process.status == MyEnum.state1) # throws exception
How to do it avoiding calling additional type conversion (str(state) above) or the underlying value (state.value)?
|
[
"It seems that it is enough to inherit from str class at the same time as Enum:\nfrom enum import Enum\n\nclass MyEnum(str, Enum):\n state1 = 'state1'\n state2 = 'state2'\n\nThe tricky part is that the order of classes in the inheritance chain is important as this:\nclass MyEnum(Enum, str):\n state1 = 'state1'\n state2 = 'state2'\n\nthrows:\nTypeError: new enumerations should be created as `EnumName([mixin_type, ...] [data_type,] enum_type)`\n\nWith the correct class the following operations on MyEnum are fine:\nprint('This is the state value: ' + state)\n\n\nAs a side note, it seems that the special inheritance trick is not needed for formatted strings which work even for Enum inheritance only:\nmsg = f'This is the state value: {state}' # works without inheriting from str\n\n",
"By reading the documentation (i.e., I didn't try it because I use an older version of Python, but I trust the docs), since Python 3.11 you can do the following:\nfrom enum import StrEnum\n\nclass Directions(StrEnum):\n NORTH = 'north', # notice the trailing comma\n SOUTH = 'south'\n\nprint(Directions.NORTH)\n>>> north\n\nPlease refer to the docs and the design discussion for further understanding.\nIf you're running python 3.6+, execute pip install StrEnum, and then you can do the following (confirmed by me):\nfrom strenum import StrEnum\n\nclass URLs(StrEnum):\n GOOGLE = 'www.google.com'\n STACKOVERFLOW = 'www.stackoverflow.com'\n\nprint(URLs.STACKOVERFLOW)\n\n>>> www.stackoverflow.com\n\nYou can read more about it here.\n\nAlso, this was mentioned in the docs - how to create your own enums based on other classes:\n\nWhile IntEnum is part of the enum module, it would be very simple to\nimplement independently:\nclass IntEnum(int, Enum):\npass This demonstrates how similar derived enumerations can be defined; for example a StrEnum that mixes in str instead of int.\nSome rules:\nWhen subclassing Enum, mix-in types must appear before Enum itself in\nthe sequence of bases, as in the IntEnum example above.\nWhile Enum can have members of any type, once you mix in an additional\ntype, all the members must have values of that type, e.g. int above.\nThis restriction does not apply to mix-ins which only add methods and\ndon’t specify another type.\nWhen another data type is mixed in, the value attribute is not the\nsame as the enum member itself, although it is equivalent and will\ncompare equal.\n%-style formatting: %s and %r call the Enum class’s str() and\nrepr() respectively; other codes (such as %i or %h for IntEnum) treat the enum member as its mixed-in type.\nFormatted string literals, str.format(), and format() will use the\nmixed-in type’s format() unless str() or format() is\noverridden in the subclass, in which case the overridden methods or\nEnum methods will be used. Use the !s and !r format codes to force\nusage of the Enum class’s str() and repr() methods.\n\nSource: https://docs.python.org/3/library/enum.html#others\n",
"While a mixin class between str and Enum can solve this problem, you should always also think about getting the right tool for the job. \nAnd sometimes, the right tool could easily just be a MODULE_CONSTANT with a string value. For example, logging has a few constants like DEBUG, INFO, etc with meaningful values - even if they're ints in this case. \nEnums are a good tool and I often use them. However, they're intended to be primarily compared against other members of the same Enum, which is why comparing them to, for example, strings requires you to jump through an additional hoop.\n",
"If associated string values are valid Python names then you can get names of enum members using .name property like this:\nfrom enum import Enum\nclass MyEnum(Enum):\n state1=0\n state2=1\n\nprint (MyEnum.state1.name) # 'state1'\n\na = MyEnum.state1\nprint(a.name) # 'state1'\n\nIf associated string values are arbitrary strings then you can do this:\nclass ModelNames(str, Enum):\n gpt2 = 'gpt2'\n distilgpt2 = 'distilgpt2'\n gpt2_xl = 'gpt2-XL'\n gpt2_large = 'gpt2-large'\n\nprint(ModelNames.gpt2) # 'ModelNames.gpt2'\nprint(ModelNames.gpt2 is str) # False\nprint(ModelNames.gpt2_xl.name) # 'gpt2_xl'\nprint(ModelNames.gpt2_xl.value) # 'gpt2-XL'\n\nTry this online: https://repl.it/@sytelus/enumstrtest\n",
"If you want to work with strings directly, you could consider using\nMyEnum = collections.namedtuple(\n \"MyEnum\", [\"state1\", \"state2\"]\n)(\n state1=\"state1\", \n state2=\"state2\"\n)\n\nrather than enum at all. Iterating over this or doing MyEnum.state1 will give the string values directly. Creating the namedtuple within the same statement means there can only be one.\nObviously there are trade offs for not using Enum, so it depends on what you value more.\n",
"what is wrong with using the value?\nImho, unless using Python version 3.11 with StrEnum I just override the __str__(self) method in the proper Enum class:\nclass MyStrEnum(str, Enum):\n\n OK = 'OK'\n FAILED = 'FAILED'\n\n def __str__(self) -> str:\n return self.value\n\nBest\n",
"With auto:\nfrom enum import Enum, auto\n\nclass AutoStrEnum(str, Enum):\n \"\"\"\n StrEnum where auto() returns the field name.\n See https://docs.python.org/3.9/library/enum.html#using-automatic-values\n \"\"\"\n @staticmethod\n def _generate_next_value_(name: str, start: int, count: int, last_values: list) -> str:\n return name\n\nclass MyEnum(AutoStrEnum):\n STATE_1 = auto()\n STATE_2 = auto()\n\nTry it:\nMyEnum.STATE_1 == \"STATE_1\" # True\n\n"
] |
[
116,
60,
4,
4,
1,
1,
0
] |
[
"Simply use .value :\nMyEnum.state1.value == 'state1'\n# True\n\n"
] |
[
-1
] |
[
"enums",
"python",
"python_3.x",
"string"
] |
stackoverflow_0058608361_enums_python_python_3.x_string.txt
|
Q:
Populate dataframe by unnesting list of the first column
I have the following issue with a csv in pandas; the data looks as follows:
Column A :row1: [« a », « b »; « c »
Row2 : [« d »; « e », « f »
Etc …
Note the different delimiters.
I would like it to populate next column based on the cell keys in the list in it like this :
ColA row 1: [a] col b:[b] colc[c]
Row 2: [d] col b:[e] colc:[f]
And so on: for as many values as there are in a cell, I would like it to populate that many columns across its row.
I hope to get some insights from you and that my explanation is clear,
Thanks
Im struggling so far
I can’t share the data but basically every row in column A contains a csv-like list with separators, and I would like the n values within the list in this cell to populate the next n columns of that row. I think I would need to split the data based on the multiple delimiters and treat them as one (as you would do in Excel) and then, for each row, create a function appending each value of the first cell's list? But I'm not sure how to create this…
Each « key » of the separated list in the cell should go to the next column (horizontally) in the same row, and this for each row in the data set; I would like to un-nest these strings.
A:
I'm not sure I understand your I/O but you can try this :
import pandas as pd
df= (
pd.read_csv("test.txt", sep="[;,]", engine="python",
header=None, skiprows=1)
.astype(str).apply(lambda x: x.str.strip("« »"))
)
# convert the numeric index columns to alphabetic letters
df.columns= (
df.columns.astype(str)
.str.replace(r"(\d)",
lambda m: "Col" + chr(ord('@')+ int(float(m.group(0)))+1),
regex=True)
)
# Output:
print(df)
ColA ColB ColC
0 a b c
1 d e f
# .txt used:
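Since the original .txt can't be reproduced here, here is a self-contained toy version of the same idea, using StringIO as a stand-in for the file (the skiprows=1 mirrors the header row skipped above):
import pandas as pd
from io import StringIO

raw = "header\n« a », « b »; « c »\n« d »; « e », « f »"
df = (pd.read_csv(StringIO(raw), sep="[;,]", engine="python",
                  header=None, skiprows=1)
        .astype(str).apply(lambda x: x.str.strip("« »")))
print(df)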
|
Populate dataframe by unnesting list of the first column
|
I have the following issue with a csv in pandas; the data looks as follows:
Column A :row1: [« a », « b »; « c »
Row2 : [« d »; « e », « f »
Etc …
Note the different delimiters.
I would like it to populate next column based on the cell keys in the list in it like this :
ColA row 1: [a] col b:[b] colc[c]
Row 2: [d] col b:[e] colc:[f]
And so on: for as many values as there are in a cell, I would like it to populate that many columns across its row.
I hope to get some insights from you and that my explanation is clear,
Thanks
Im struggling so far
I can’t share the data but basically every row in column A contains a csv-like list with separators, and I would like the n values within the list in this cell to populate the next n columns of that row. I think I would need to split the data based on the multiple delimiters and treat them as one (as you would do in Excel) and then, for each row, create a function appending each value of the first cell's list? But I'm not sure how to create this…
Each « key » of the separated list in the cell should go to the next column (horizontally) in the same row, and this for each row in the data set; I would like to un-nest these strings.
|
[
"I'm not sure I understand your I/O but you can try this :\nimport pandas as pd\n\ndf= (\n pd.read_csv(\"test.txt\", sep=\"[;,]\", engine=\"python\",\n header=None, skiprows=1)\n .astype(str).apply(lambda x: x.str.strip(\"« »\"))\n )\n\n# convert the numeric index columns to alphabetic letters\ndf.columns= (\n df.columns.astype(str)\n .str.replace(r\"(\\d)\", \n lambda m: \"Col\" + chr(ord('@')+ int(float(m.group(0)))+1),\n regex=True)\n )\n\n# Output:\nprint(df)\n\n ColA ColB ColC\n0 a b c\n1 d e f\n\n# .txt used:\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"delimiter",
"list",
"pandas",
"python"
] |
stackoverflow_0074538750_dataframe_delimiter_list_pandas_python.txt
|
Q:
Get Pyrebase HTTPError information
Using the pyrebase wrapper for Firebase Authentication, when attempting to create a new user that already exists, pyrebase wraps the Google API response in an HTTPError message. But when I try to capture this exception it doesn't recognize HTTPError as an exception. I can access the exception by using except Exception as e, shown in greater detail below.
Code:
config = {
"apiKey": os.environ.get('WEB_API_KEY'),
"authDomain": "project.firebaseapp.com",
"databaseURL": "https://project.firebaseio.com",
"storageBucket": "project.appspot.com",
"serviceAccount": os.environ.get('FIREBASE_APPLICATION_CREDENTIALS')
}
firebase = pyrebase.initialize_app(config)
auth = firebase.auth()
# Attempt to register a user that already exists
try:
user = auth.create_user_with_email_and_password('myemail@email.com', 'mypassword')
except HTTPError as e:
print('Handling HTTPError:', e)
This will output:
Traceback (most recent call last):
File "<console>", line 3, in <module>
NameError: name 'HTTPError' is not defined
I can catch the error if i take a more general approach and use:
try:
user = auth.create_user_with_email_and_password('myemail@email.com', 'mypassword')
except Exception as e:
print(e.args)
This will then gracefully print the exception:
(HTTPError('400 Client Error: Bad Request for url: https://www.googleapis.com/identitytoolkit/v3/relyingparty/signupNewUser?key=<WEB_API_KEY>'), '{\n "error": {\n "code": 400,\n "message": "EMAIL_EXISTS",\n "errors": [\n {\n "message": "EMAIL_EXISTS",\n "domain": "global",\n "reason": "invalid"\n }\n ]\n }\n}\n')
This gives me the info, but it is a string.
How do I access the response JSON that is shown in the Exception message?
Thanks!
A:
json.loads(e.args[1])['error']['message']
this will give you as a result : EMAIL_EXISTS
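Putting it together with the question's code (json is the standard-library module):
import json

try:
    user = auth.create_user_with_email_and_password('myemail@email.com', 'mypassword')
except Exception as e:
    error_message = json.loads(e.args[1])['error']['message']
    print(error_message) # EMAIL_EXISTS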
A:
For your case, try it, with firebase-admin==6.0.0:
import firebase_admin
try:
user = auth.create_user_with_email_and_password('myemail@email.com', 'mypassword')
except firebase_admin._auth_utils.EmailAlreadyExistsError as e:
print('Handling HTTPError:', e)
|
Get Pyrebase HTTPError information
|
Using the pyrebase wrapper for Firebase Authentication, when attempting to create a new user that already exists, pyrebase wraps the Google API response in an HTTPError message. But when I try to capture this exception it doesn't recognize HTTPError as an exception. I can access the exception by using except Exception as e, shown in greater detail below.
Code:
config = {
"apiKey": os.environ.get('WEB_API_KEY'),
"authDomain": "project.firebaseapp.com",
"databaseURL": "https://project.firebaseio.com",
"storageBucket": "project.appspot.com",
"serviceAccount": os.environ.get('FIREBASE_APPLICATION_CREDENTIALS')
}
firebase = pyrebase.initialize_app(config)
auth = firebase.auth()
# Attempt to register a user that already exists
try:
user = auth.create_user_with_email_and_password('myemail@email.com', 'mypassword')
except HTTPError as e:
print('Handling HTTPError:', e)
This will output:
Traceback (most recent call last):
File "<console>", line 3, in <module>
NameError: name 'HTTPError' is not defined
I can catch the error if i take a more general approach and use:
try:
user = auth.create_user_with_email_and_password('myemail@email.com', 'mypassword')
except Exception as e:
print(e.args)
This will then gracefully print the exception:
(HTTPError('400 Client Error: Bad Request for url: https://www.googleapis.com/identitytoolkit/v3/relyingparty/signupNewUser?key=<WEB_API_KEY>'), '{\n "error": {\n "code": 400,\n "message": "EMAIL_EXISTS",\n "errors": [\n {\n "message": "EMAIL_EXISTS",\n "domain": "global",\n "reason": "invalid"\n }\n ]\n }\n}\n')
This gives me the info, but it is a string.
How do I access the response JSON that is shown in the Exception message?
Thanks!
|
[
"json.loads(e.args[1])['error']['message']\n\nthis will give you as a result : EMAIL_EXISTS\n",
"For your case, try it, with firebase-admin==6.0.0:\nimport firebase_admin\n\ntry:\n user = auth.create_user_with_email_and_password('myemail@email.com', 'mypassword')\nexcept firebase_admin._auth_utils.EmailAlreadyExistsError as e:\n print('Handling HTTPError:', e)\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"flask",
"pyrebase",
"python"
] |
stackoverflow_0061627506_flask_pyrebase_python.txt
|
Q:
pyinstaller command not found
I am using Ubuntu on VirtualBox. How do I add pyinstaller to the PATH?
The issue is when I say
pyinstaller file.py
it says pyinstaller command not found
It says it installed correctly, and according to other posts, I think it has, but I just can't get it to work. I ran:
pip install pyinstaller
and
pyinstaller file.py
but it won't work. I think I need to add it to the shell path so Linux knows where to find it.
pip show pyinstaller works.
A:
You can use the following command if you do not want to create additional python file.
python -m PyInstaller myscript.py
A:
There is another way to use pyinstaller using it as a Python script.
This is how I did it, go through pyinstaller's documentation
Create a Python script named setup.py or whatever you are comfortable with.
copy this code snippet to the setup.py:
import PyInstaller.__main__
import os
PyInstaller.__main__.run([
    '--name=%s' % 'name_of_your_executable_file',
'--onefile',
'--windowed',
    os.path.join('/path/to/your/script/', 'your script.py'),  # your script and path to the script
])
Make sure you have installed pyinstaller.
To test it:
open the terminal
type python3
type import PyInstaller
If no errors appear then you are good to go.
Put the setup.py in the folder of your script. Then run the setup.py
This was tested in Python3.
A:
Come across the same issue today. In my case, pyinstaller was sitting in ~/.local/bin and this path was not in my PATH environment variable.
A:
Just get root access first by running sudo -i
and then installing pyinstaller again:
pip3 install pyinstaller
A:
You could do a echo $PATH to see the content of it, an then create a symbolic link from one of the directories listed on $PATH to the current location of your pyinstaller:
sudo ln -s ~/.local/bin/pyinstaller /usr/local/sbin/pyinstaller
On the above case, usr/local/sbin/ is the path already listed on $PATH.
A:
python3 -m PyInstaller file.py worked in my case on Ubuntu 22.04 LTS.
|
pyinstaller command not found
|
I am using Ubuntu on VirtualBox. How do I add pyinstaller to the PATH?
The issue is when I say
pyinstaller file.py
it says pyinstaller command not found
It says it installed correctly, and according to other posts, I think it has, but I just can't get it to work. I ran:
pip install pyinstaller
and
pyinstaller file.py
but it won't work. I think I need to add it to the shell path so Linux knows where to find it.
pip show pyinstaller works.
|
[
"You can use the following command if you do not want to create additional python file. \npython -m PyInstaller myscript.py\n\n",
"There is another way to use pyinstaller using it as a Python script.\nThis is how I did it, go through pyinstaller's documentation\nCreate a Python script named setup.py or whatever you are comfortable with.\ncopy this code snippet to the setup.py:\nimport PyInstaller.__main__\nimport os\n \nPyInstaller.__main__.run([ \n 'name-%s%' % 'name_of_your_executable file',\n '--onefile',\n '--windowed',\n os.path.join('/path/to/your/script/', 'your script.py'), \"\"\"your script and path to the script\"\"\" \n])\n\nMake sure you have installed pyinstaller.\nTo test it:\n\nopen the terminal\ntype python3\ntype import PyInstaller\n\nIf no errors appear then you are good to go.\nPut the setup.py in the folder of your script. Then run the setup.py\nThis was tested in Python3.\n",
"Come across the same issue today. In my case, pyinstaller was sitting in ~/.local/bin and this path was not in my PATH environment variable.\n",
"Just get root access first by running sudo -i\nand then installing pyinstaller again:\npip3 install pyinstaller\n\n",
"You could do a echo $PATH to see the content of it, an then create a symbolic link from one of the directories listed on $PATH to the current location of your pyinstaller:\nsudo ln -s ~/.local/bin/pyinstaller /usr/local/sbin/pyinstaller\nOn the above case, usr/local/sbin/ is the path already listed on $PATH.\n",
"python3 -m PyInstaller file.py worked in my case on Ubuntu 22.04 LTS.\n"
] |
[
52,
3,
3,
3,
0,
0
] |
[] |
[] |
[
"executable",
"pyinstaller",
"python",
"python_3.x"
] |
stackoverflow_0053798660_executable_pyinstaller_python_python_3.x.txt
|
Q:
How to fix "NameError: name 'api' is not defined" Tweepy.py
The Error "
Traceback (most recent call last):
File "/home/dcaus/tweet-custom-label.py", line 16, in <module>
api.update_with_media(filename, status, in_reply_to_status_id = in_reply_to_status_id)
NameError: name 'api' is not defined
"
My code
import tweepy
# Authenticate to Twitter
auth = tweepy.OAuthHandler("Token Is Hidden For Privacy.", "Token Is Hidden For Privacy.")
auth.set_access_token("Token Is Hidden For Privacy.", "Token Is Hidden For Privacy.")
# Create API object
status = "This is cool"
in_reply_to_status_id = "1595131614425542656"
filename = "Property 1=Default.png"
api.update_with_media(filename, status, in_reply_to_status_id = in_reply_to_status_id)
A:
After you set up authentication, you need to create an API object. You're missing something like
api = tweepy.API(auth, wait_on_rate_limit=True)
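Applied to the script in the question, the missing line goes right after set_access_token and before any API call:
api = tweepy.API(auth, wait_on_rate_limit=True)
api.update_with_media(filename, status, in_reply_to_status_id=in_reply_to_status_id)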
|
How to fix "NameError: name 'api' is not defined" Tweepy.py
|
The Error "
Traceback (most recent call last):
File "/home/dcaus/tweet-custom-label.py", line 16, in <module>
api.update_with_media(filename, status, in_reply_to_status_id = in_reply_to_status_id)
NameError: name 'api' is not defined
"
My code
import tweepy
# Authenticate to Twitter
auth = tweepy.OAuthHandler("Token Is Hidden For Privacy.", "Token Is Hidden For Privacy.")
auth.set_access_token("Token Is Hidden For Privacy.", "Token Is Hidden For Privacy.")
# Create API object
status = "This is cool"
in_reply_to_status_id = "1595131614425542656"
filename = "Property 1=Default.png"
api.update_with_media(filename, status, in_reply_to_status_id = in_reply_to_status_id)
|
[
"After you set up authentication, you need to create an API object. You're missing something like\napi = tweepy.API(auth, wait_on_rate_limit=True)\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"tweepy"
] |
stackoverflow_0074539099_python_tweepy.txt
|
Q:
more efficient ways other than iterrows() on my code?
This code is taking forever to run given I have 1 million rows and 43 columns.
The idea is to try and find pairs which have the same values for a specific set of columns, but where the 'CA' column is opposite, and to remove this pair as they will be considered reversing rows.
i.e. I have a dataframe = df
Column A   Column B   Column C   Column D
'Brown'    'Bottle'   1234555    100
'yellow'   'Cup'      1234555    80
'Red'      'Bottle'   1234555    -100
'Red'      'Bottle'   1234555    -100
'Brown'    'Bottle'   1234533    100
If I decide to consider columns B and C, the program will remove the first and third rows since they have the same values in columns B and C and the value in column D is the opposite (one positive and one negative). They will be considered reversing rows and thus only that pair will be removed.
Desired output:
Column A   Column B   Column C   Column D
'yellow'   'Cup'      1234555    80
'Red'      'Bottle'   1234555    -100
'Brown'    'Bottle'   1234533    100
The code I currently have is this but is super inefficient:
df_dupes = data[data.duplicated(subset = criteria_, keep=False)]
df_dupes_list = np.array(df_dupes.to_numpy().tolist())
df_1 = df_dupes_list[:,[0,1,7,9,8,23,35]]
df_2 = df_1.tolist()
for i, row in df_dupes.iterrows():
if row.ConvertedAUD < 0 and [row.BA, row.OA, row.BN, row.DN, row.DT,row.D, -row.CA] in df_2:
try:
c = np.where((data['BA'] ==row.BA) & (data['OA'] ==row.OA) & (data['BN'] ==row.BN)& (data['DT'] ==row.DT)& (data['DN'] ==row.DN)& (data['D'] ==row.D)& (data['CA'] ==-row.CA))[0][0]
data.drop(labels=[i,data.index.values[c]], axis=0, inplace=True)
except:
pass
A:
Here is my solution. The idea is to have an additional structure to quickly find opposite pairs and create a boolean mask for filtering instead of calling drop() in a loop.
import pandas as pd
data = pd.DataFrame(
[
["Brown", "Bottle", 1234555, 100],
["yellow", "Cup", 1234555, 80],
["Red", "Bottle", 1234555, -100],
["Red", "Bottle", 1234555, -100],
["Brown", "Bottle", 1234533, 100],
],
columns=["A", "B", "C", "D"],
)
# "lookup table"
seen = {} # {(key1, key2): (index, value)}
# which rows to keep?
mask = pd.Series(True, index=data.index)
# itertuples is faster than iterrows
for row in data.itertuples():
# create a lookup key
key = (row.B, row.C)
if key not in seen:
# store Index and Value in the "lookup table"
# if we haven't seen this key before
seen[key] = (row.Index, row.D)
else:
prev_index, prev_value = seen[key]
# if the stored value is the opposite of the current one
if prev_value == -row.D:
# we don't want to keep both rows
mask.loc[prev_index] = False
mask.loc[row.Index] = False
# and remove the key from the lookup table
del seen[key]
# else:
# undefined case:
# the key exists, but the value is not
# the opposite of the previous one
# remove "collapsed" rows from the data
result = data[mask]
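With the sample data above, the two rows that cancel out (index 0 and 2) are dropped and the rest survive; note this sketch assumes a unique index, since mask.loc is keyed on it:
print(result)
#         A       B        C     D
# 1  yellow     Cup  1234555    80
# 3     Red  Bottle  1234555  -100
# 4   Brown  Bottle  1234533   100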
|
more efficient ways other than iterrows() on my code?
|
This code is taking forever to run given I have 1 million rows and 43 columns.
The idea is to try and find pairs which have the same values for a specific set of columns, but where the 'CA' column is opposite, and to remove this pair as they will be considered reversing rows.
i.e. I have a dataframe = df
Column A   Column B   Column C   Column D
'Brown'    'Bottle'   1234555    100
'yellow'   'Cup'      1234555    80
'Red'      'Bottle'   1234555    -100
'Red'      'Bottle'   1234555    -100
'Brown'    'Bottle'   1234533    100
If I decide to consider columns B and C, the program will remove the first and third rows since they have the same values in columns B and C and the value in column D is the opposite (one positive and one negative). They will be considered reversing rows and thus only that pair will be removed.
Desired output:
Column A   Column B   Column C   Column D
'yellow'   'Cup'      1234555    80
'Red'      'Bottle'   1234555    -100
'Brown'    'Bottle'   1234533    100
The code I currently have is this but is super inefficient:
df_dupes = data[data.duplicated(subset = criteria_, keep=False)]
df_dupes_list = np.array(df_dupes.to_numpy().tolist())
df_1 = df_dupes_list[:,[0,1,7,9,8,23,35]]
df_2 = df_1.tolist()
for i, row in df_dupes.iterrows():
if row.ConvertedAUD < 0 and [row.BA, row.OA, row.BN, row.DN, row.DT,row.D, -row.CA] in df_2:
try:
c = np.where((data['BA'] ==row.BA) & (data['OA'] ==row.OA) & (data['BN'] ==row.BN)& (data['DT'] ==row.DT)& (data['DN'] ==row.DN)& (data['D'] ==row.D)& (data['CA'] ==-row.CA))[0][0]
data.drop(labels=[i,data.index.values[c]], axis=0, inplace=True)
except:
pass
|
[
"Here is my solution. The idea is to have an additional structure to quickly find opposite pairs and create a boolean mask for filtering instead of calling drop() in a loop.\nimport pandas as pd\n\ndata = pd.DataFrame(\n [\n [\"Brown\", \"Bottle\", 1234555, 100],\n [\"yellow\", \"Cup\", 1234555, 80],\n [\"Red\", \"Bottle\", 1234555, -100],\n [\"Red\", \"Bottle\", 1234555, -100],\n [\"Brown\", \"Bottle\", 1234533, 100],\n ],\n columns=[\"A\", \"B\", \"C\", \"D\"],\n)\n\n# \"lookup table\"\nseen = {} # {(key1, key2): (index, value)}\n# which rows to keep?\nmask = pd.Series(True, index=data.index)\n\n# itertuples is faster than iterrows\nfor row in data.itertuples():\n # create a lookup key\n key = (row.B, row.C)\n if key not in seen:\n # store Index and Value in the \"lookup table\"\n # if we haven't seen this key before\n seen[key] = (row.Index, row.D)\n else:\n prev_index, prev_value = seen[key]\n # if the stored value is the opposite of the current one\n if prev_value == -row.D:\n # we don't want to keep both rows\n mask.loc[prev_index] = False\n mask.loc[row.Index] = False\n # and remove the key from the lookup table\n del seen[key]\n # else:\n # undefined case:\n # the key exists, but the value is not\n # the opposite of the previous one\n\n# remove \"collapsed\" rows from the data\nresult = data[mask]\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"performance",
"python"
] |
stackoverflow_0074526965_dataframe_pandas_performance_python.txt
|
Q:
What is wrong with this isPrime function?
I'm making an isPrime function. Any odd number that I put in (unless it's 1, 2 or 3, which break it) says that it is prime even when it clearly isn't.
from even import *
num = input("What number? ")
def isPrime(n):
n = int(n)
if isEven(n):
return False
i = 2
while i < n:
a = n / i
if isinstance(a, int):
return False
else:
d = n - 2
if i == d:
return True
else:
i += 1
if isPrime(num) is True:
print(num + " is a prime number!")
if isPrime(num) is False:
print(num + " is not a prime number!")
And the code for the isEven function is here:
def isEven(num):
if num == 0:
return True
elif num % 2 == 0:
return True
else:
return False
What am I doing wrong? Also, any general tips for improving my code?
A:
Here is what you meant to type. This is not the best way, but this is parallel to the approach you were taking:
def isEven(num):
return num % 2 == 0
def isPrime(n):
if isEven(n):
return False
for i in range(2,n//2):
        if n % i == 0:
return False
return True
num = int(input("What number? "))
if isPrime(num):
print(num, "is a prime number!")
else:
print(num, "is not a prime number!")
|
What is wrong with this isPrime function?
|
I'm making an isPrime function. Any odd number that I put in (unless it's 1, 2 or 3, which break it) says that it is prime even when it clearly isn't.
from even import *
num = input("What number? ")
def isPrime(n):
n = int(n)
if isEven(n):
return False
i = 2
while i < n:
a = n / i
if isinstance(a, int):
return False
else:
d = n - 2
if i == d:
return True
else:
i += 1
if isPrime(num) is True:
print(num + " is a prime number!")
if isPrime(num) is False:
print(num + " is not a prime number!")
And the code for the isEven function is here:
def isEven(num):
if num == 0:
return True
elif num % 2 == 0:
return True
else:
return False
What am I doing wrong? Also, any general tips for improving my code?
|
[
"Here is what you meant to type. This is not the best way, but this is parallel to the approach you were taking:\ndef isEven(num):\n return num % 2 == 0\n\ndef isPrime(n):\n if isEven(n):\n return False\n\n for i in range(2,n//2):\n if n % i != 0:\n return False\n return True\n\nnum = int(input(\"What number? \"))\n\nif isPrime(num):\n print(num, \"is a prime number!\")\nelse:\n print(num, \"is not a prime number!\")\n\n"
] |
[
1
] |
[] |
[] |
[
"primes",
"python"
] |
stackoverflow_0074539008_primes_python.txt
|
Q:
I am stuck trying to create a script for reading a txt file
I am trying to create a script for reading a txt file and calculating data
The purpose of creating this is to read a txt file with data such as
John Smith 11/18/2022 9:33 7.96
Chris Rock 11/19/2022 9:31 8.64
Jane Doe 11/12/2022 10:08 7.6
John Smith 11/9/2022 12:18 5.28
I am curious how I can create objects for each name and check if an object has already been created with that name; if so, update the object's total hours variable.
for ex creating a object called JohnSmith and update the hours to 7.96
the program goes to the next line and creates an object called ChrisRock and updates its hours to 8.64. When the program reaches the second JohnSmith it will not create a new object but instead update the original JohnSmith object's hours from 7.96 to 13.24 (7.96+5.28).
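A minimal sketch of what I mean, assuming the first two tokens of each line are the name and the last token is the hours ('times.txt' is a placeholder filename):
totals = {}
with open('times.txt') as f:
    for line in f:
        parts = line.split()
        if not parts: # skip blank lines
            continue
        name = ' '.join(parts[:2])
        totals[name] = totals.get(name, 0.0) + float(parts[-1])
print(totals) # totals['John Smith'] ends up at 7.96 + 5.28 (~13.24)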
|
I am stuck trying to create a script for reading a txt file
|
I am trying to create a script for reading a txt file and calculating data
The purpose of creating this is to read a txt file with data such as
John Smith 11/18/2022 9:33 7.96
Chris Rock 11/19/2022 9:31 8.64
Jane Doe 11/12/2022 10:08 7.6
John Smith 11/9/2022 12:18 5.28
I am curious how I can create objects for each name and check if an object has already been created with that name; if so, update the object's total hours variable.
for ex creating a object called JohnSmith and update the hours to 7.96
the program goes to the next line and creates an object called ChrisRock and updates its hours to 8.64. When the program reaches the second JohnSmith it will not create a new object but instead update the original JohnSmith object's hours from 7.96 to 13.24 (7.96+5.28).
|
[] |
[] |
[
"The simplest way would be to create a dictionary with names as keys, and hours as values.\nYou could do something like:\nhours_dict = {}\n\n\nwith open('myfile.txt', 'r') as myfile:\n for line in myfile:\n # parse the line to extract the name and hours\n name = ...\n hours = ...\n\n if hours_dict.get(name):\n hours_dict[name] += hours\n else:\n hours_dict[name] = hours\n\n\n"
] |
[
-1
] |
[
"python"
] |
stackoverflow_0074538969_python.txt
|
Q:
Python Regular Expression matching extra characters
So I might be misunderstanding how this works, but I can't figure it out.
I have a string in python that has some text info and then contains a bunch of ip addresses in parentheses, each followed by a newline. So
"(192.168.2.101)\n(192.168.2.102)\n(192.168.2.103)\n..."
What I want to do is re to get a list of all the different host ID's (i.e. 101, 102, 103).
This is just a simple example btw so non re methods probably won't work.
The issue I am having is that the list keeps giving me the ending parenthesis. That is, ['101)', '102)', ...]
The expression I am using is
re.findall("192.168.2.(.*?)\n", str)
I am sure the issue will be obvious to people who are more knowledgeable about regular expressions. They drive me up the wall though. I have tried doing things like [^)] to exclude the end parenthesis and even just pull 0-9 but those all wind up returning [].
A:
You can match a sequence of digits followed by )\n.
ips = "(192.168.2.101)\n(192.168.2.102)\n(192.168.2.103)\n..."
re.findall(r'\d+(?=\)\n)', ips)
(?=\)\n) is a lookahead that constrains the match to be followed by close paren and newline.
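An alternative that stays closer to the original pattern is a capture group with the dots escaped, assuming the 192.168.2 prefix is fixed as in the question:
re.findall(r"192\.168\.2\.(\d+)\)", ips)
# ['101', '102', '103']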
|
Python Regular Expression matching extra characters
|
So I might be misunderstanding how this works, but I can't figure it out.
I have a string in python that has some text info and then contains a bunch of ip addresses in parentheses, each followed by a newline. So
"(192.168.2.101)\n(192.168.2.102)\n(192.168.2.103)\n..."
What I want to do is re to get a list of all the different host ID's (i.e. 101, 102, 103).
This is just a simple example btw so non re methods probably won't work.
The issue I am having is that the list keeps giving me the ending parenthesis. That is, ['101)', '102)', ...]
The expression I am using is
re.findall("192.168.2.(.*?)\n", str)
I am sure the issue will be obvious to people who are more knowledgeable about regular expressions. They drive me up the wall though. I have tried doing things like [^)] to exclude the end parenthesis and even just pull 0-9 but those all wind up returning [].
|
[
"You can match a sequence of digits followed by )\\n.\nips = \"(192.168.2.101)\\n(192.168.2.102)\\n(192.168.2.103)\\n...\"\nre.findall(r'\\d+(?=\\)\\n)', ips)\n\n(?=\\)\\n) is a lookahead that constrains the match to be followed by close paren and newline.\n"
] |
[
2
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0074539201_python_regex.txt
|
Q:
cannot access global variable after trying to modify it inside a function
I encounter no problems running this code:
x = 1
def func():
print(x + 1)
func()
2
But when I run this:
x = 1
def func():
try:
x += 1
except:
pass
print(x + 1)
func()
This error pops up:
UnboundLocalError: cannot access local variable 'x' where it is not associated with a value
I am not asking if I can modify global variables in functions or not. I used try-except keywords to avoid that error. After python caught the exception, I thought nothing would change and print(x + 1) would work like the first code. So what changed in the scope during the try-except part that x is no longer accessible?
A:
The first example works because the function does not assign to the x variable; it only reads the variable.
The second example fails because if a function assigns to a variable then it is assumed to be a local variable, even if there is a global variable of the same name.
If you want to use the global variable in the function, put global x at the top of the function.
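Note that the bare except in the question does catch the first UnboundLocalError raised by x += 1, but print(x + 1) then fails for exactly the same reason: an assignment anywhere in the function body makes x local for the whole function. Declaring it global fixes both lines (a sketch based on the second snippet):
x = 1

def func():
    global x
    try:
        x += 1
    except Exception:
        pass
    print(x + 1)

func() # prints 3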
|
cannot access global variable after trying to modify it inside a function
|
I encounter no problems running this code:
x = 1
def func():
print(x + 1)
func()
2
But when I run this:
x = 1
def func():
try:
x += 1
except:
pass
print(x + 1)
func()
This error pops up:
UnboundLocalError: cannot access local variable 'x' where it is not associated with a value
I am not asking if I can modify global variables in functions or not. I used try-except keywords to avoid that error. After python caught the exception, I thought nothing would change and print(x + 1) would work like the first code. So what changed in the scope during the try-except part that x is no longer accessible?
|
[
"The first example works because the function does not assign to the x variable; it only reads the variable.\nThe second example fails because if a function assigns to a variable then it is assumed to be a local variable, even if there is a global variable of the same name.\nIf you want to use the global variable in the function, put global x at the top of the function.\n"
] |
[
0
] |
[] |
[] |
[
"function",
"local",
"python",
"scope",
"variables"
] |
stackoverflow_0074539222_function_local_python_scope_variables.txt
|
Q:
Spark: Count occurrence of each word for each column of a dataframe
I have a pyspark dataframe with some columns.
I want to count the occurrence of each word for each column of the dataframe.
I can count the word using the group by query, but I need to figure out how to get this detail for each column using only a single query.
I have attached a sample data frame for reference and expected output.
Following Query which I am using to get the count but it works only on a particular column:
DF.groupBy('ColumnName').count()
I appreciate your input on this.
Sample Input dataframe:
Expected Output:
A:
Data
df =spark.createDataFrame([
('1' , 'null' , ''),
('1' , '' , 'null'),
('1' ,'0' , '0'),
('1' , '1' , 'null'),
('1' , '1' , '0'),
('null' ,'1' , '0'),
('' , '1' , ''),
('0' , '1' , '1'),
('' , '1' , '1')],
('Ratings', 'Vote', 'PointsGiven'))
Approach
create an array of structs for each column by combining the column name and value
Use inline command to explode into individual columns
group by, pivot and count
Code
from pyspark.sql import functions as F
from pyspark.sql.functions import lit, col, count

df.withColumn('tab', F.array(*[F.struct(lit(x).alias('g'), col(x).alias('v')).alias(x) for x in df.columns])) \
  .selectExpr('inline(tab)') \
  .groupby('g').pivot('v').agg(count('v')).show()
Outcome
+-----------+---+---+---+----+
| g| | 0| 1|null|
+-----------+---+---+---+----+
| Vote| 1| 1| 6| 1|
|PointsGiven| 2| 3| 2| 2|
| Ratings| 2| 1| 5| 1|
+-----------+---+---+---+----+
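In case it helps, you can stop before the pivot to see the intermediate long format that inline(tab) produces: one row per original cell, with the column name in g and the cell value in v:
df.withColumn('tab', F.array(*[F.struct(lit(x).alias('g'), col(x).alias('v')).alias(x) for x in df.columns])) \
  .selectExpr('inline(tab)').show()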
|
Spark: Count occurrence of each word for each column of a dataframe
|
I have a pyspark dataframe with some columns.
I want to count the occurrence of each word for each column of the dataframe.
I can count the word using the group by query, but I need to figure out how to get this detail for each column using only a single query.
I have attached a sample data frame for reference and expected output.
Following Query which I am using to get the count but it works only on a particular column:
DF.groupBy('ColumnName').count()
I appreciate your input on this.
Sample Input dataframe:
Expected Output:
|
[
"Data\ndf =spark.createDataFrame([\n('1' , 'null' , ''),\n('1' , '' , 'null'),\n('1' ,'0' , '0'),\n('1' , '1' , 'null'),\n('1' , '1' , '0'),\n('null' ,'1' , '0'),\n('' , '1' , ''),\n('0' , '1' , '1'),\n('' , '1' , '1')],\n('Ratings', 'Vote', 'PointsGiven'))\n\nApproach\n\ncreate aan array of struct of each column by combining the column name and value\nUse inline command to explode into individual columns\ngroupby pivote and count\n\nCode\ndf.withColumn('tab', F.array(*[F.struct(lit(x).alias('g'), col(x).alias('v')).alias(x) for x in df.columns])).selectExpr('inline(tab)').groupby('g').pivot('v').agg(count('v')).show()\n\nOutcome\n+-----------+---+---+---+----+\n| g| | 0| 1|null|\n+-----------+---+---+---+----+\n| Vote| 1| 1| 6| 1|\n|PointsGiven| 2| 3| 2| 2|\n| Ratings| 2| 1| 5| 1|\n+-----------+---+---+---+----+\n\n"
] |
[
0
] |
[] |
[] |
[
"apache_spark",
"databricks",
"pyspark",
"python"
] |
stackoverflow_0074538581_apache_spark_databricks_pyspark_python.txt
|
Q:
Python: using join on a list , output has brackets
I feel like it's probably a simple solution but I can't seem to figure it out and my google-fu is failing me.
Currently, I'm consuming data from a CSV file; I then read each line and append it to a list. I then use join to combine them all, but the output is separated by brackets. What am I missing here?
Code:
data_file = csv.reader(open('data.csv','r'))
ip_addr=[]
for row in data_file:
    ip_addr.append(row)
combine_ips = ','.join(map(str, ip_addr))
Output
['1.1.1.1'],['1.1.1.2'],['1.1.1.3']
What I need: (I need it to be a string of course)
1.1.1.1,1.1.1.2,1.1.1.3
A:
row evaluates as a list even if it is only a list of 1 in your case, so the first thing you need to do is convert row to a string before appending it to ip_addr. Then, as pointed out by @wrbp you only need to join the (now) string contents of ip_addr:
data_file = csv.reader(open("data.csv","r"))
ip_addr=[]
for row in data_file:
ip_addr.append("".join(row))
combine_ips = ",".join(ip_addr)
|
Python: using join on a list , output has brackets
|
I feel like it's probably a simple solution but I can't seem to figure it out and my google-fu is failing me.
Currently, I'm consuming data from a CSV file; I then read each line and append it to a list. I then use join to combine them all, but the output is separated by brackets. What am I missing here?
Code:
data_file = csv.reader(open('data.csv','r'))
ip_addr=[]
for row in data_file:
    ip_addr.append(row)
combine_ips = ','.join(map(str, ip_addr))
Output
['1.1.1.1'],['1.1.1.2'],['1.1.1.3']
What I need: (I need it to be a string of course)
1.1.1.1,1.1.1.2,1.1.1.3
|
[
"row evaluates as a list even if it is only a list of 1 in your case, so the first thing you need to do is convert row to a string before appending it to ip_addr. Then, as pointed out by @wrbp you only need to join the (now) string contents of ip_addr:\ndata_file = csv.reader(open(\"data.csv\",\"r\"))\nip_addr=[]\n\nfor row in data_file:\n ip_addr.append(\"\".join(row))\n\ncombine_ips = \",\".join(ip_addr)\n\n"
] |
[
1
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0074539110_list_python.txt
|
Q:
Scrollable window Tkinter
We created a browser on tkinter with a first window asking the user to enter criteria for a search. But for the results window, we have a problem. We can't scroll down in the window, even if there are more results below. How can we add a scroll bar? We tried this:
result_window = Tk()
result_window.geometry("1080x600")
result_window.minsize(480,360)
my_canvas= Canvas(result_window)
my_canvas.pack(side=LEFT, fill=BOTH, expand=1)
swin = Scrollbar(result_window, orient=VERTICAL, command=my_canvas.yview)
swin.pack(side=RIGHT, fill=Y)
my_canvas.configure(yscrollcommand=swin)
my_canvas.bind('<Configure>', lambda e: my_canvas.configure(scrollregion =my_canvas.bbox("all")))
It is not working, do we actually need to use a canvas?
Thank you very much for your help
A:
You can try this:
from tkinter import ttk

result_window = Tk()
result_window.geometry("1080x600")
result_window.minsize(480,360)
my_canvas = Canvas(result_window)
my_canvas.pack(side=LEFT, fill=BOTH, expand=1)
swin = ttk.Scrollbar(result_window, orient=VERTICAL, command=my_canvas.yview)
swin.pack(side=RIGHT, fill=Y)
my_canvas.configure(yscrollcommand=swin.set)
my_canvas.bind('<Configure>', lambda e: my_canvas.configure(scrollregion=my_canvas.bbox("all")))
You forgot to set the scrollcommand
my_canvas.configure(yscrollcommand=swin.set)
And I always put ttk. here swin = ttk.Scrollbar(result_window, orient=VERTICAL, command=my_canvas.yview) because it avoids some problems with other functions
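One more gotcha worth flagging, as an assumption about the surrounding code since it isn't shown: widgets packed straight into the window won't scroll. The results have to live inside the canvas, typically via an inner frame and create_window:
inner = Frame(my_canvas)
my_canvas.create_window((0, 0), window=inner, anchor="nw")
# pack the result widgets into inner, e.g.:
for i in range(30):
    Label(inner, text=f"result {i}").pack()
inner.update_idletasks()
my_canvas.configure(scrollregion=my_canvas.bbox("all"))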
|
Scrollable window Tkinter
|
We created a browser on tkinter with a first window asking the user to enter criteria for a search. But for the results window, we have a problem. We can't scroll down in the window, even if there are more results below. How can we add a scroll bar? We tried this:
result_window = Tk()
result_window.geometry("1080x600")
result_window.minsize(480,360)
my_canvas= Canvas(result_window)
my_canvas.pack(side=LEFT, fill=BOTH, expand=1)
swin = Scrollbar(result_window, orient=VERTICAL, command=my_canvas.yview)
swin.pack(side=RIGHT, fill=Y)
my_canvas.configure(yscrollcommand=swin)
my_canvas.bind('<Configure>', lambda e: my_canvas.configure(scrollregion =my_canvas.bbox("all")))
It is not working, do we actually need to use a canvas?
Thank you very much for your help
|
[
"You can try this:\nresult_window = Tk()\n result_window.geometry(\"1080x600\")\n result_window.minsize(480,360)\n my_canvas= Canvas(result_window)\n my_canvas.pack(side=LEFT, fill=BOTH, expand=1)\n swin = ttk.Scrollbar(result_window, orient=VERTICAL, command=my_canvas.yview)\n swin.pack(side=RIGHT, fill=Y)\n my_canvas.configure(yscrollcommand=swin.set)\n my_canvas.bind('<Configure>', lambda e: my_canvas.configure(scrollregion =my_canvas.bbox(\"all\")))\n\nYou forgot to set the scrollcommand\nmy_canvas.configure(yscrollcommand=swin.set)\n\nAnd I always put ttk. here swin = ttk.Scrollbar(result_window, orient=VERTICAL, command=my_canvas.yview) because it avoids some problems with other functions\n"
] |
[
0
] |
[] |
[] |
[
"interface",
"python",
"scrollbar",
"tkinter",
"window"
] |
stackoverflow_0074534933_interface_python_scrollbar_tkinter_window.txt
|
Q:
Find all combinations that add up to given number python with list of lists
I've seen plenty of threads on how to find all combinations that add up to a number with one list, but wanted to know how to expand this such that you can only pick one number at a time, from a list of lists
Question:
You must select 1 number from each list, how do you find all combinations that sum to N?
Given:
3 lists of differing fixed lengths [e.g. l1 will always have 6 values, l2 will always have 10 values, etc]:
l1 = [0.013,0.014,0.015,0.016,0.017,0.018]
l2 = [0.0396,0.0408,0.042,0.0432,0.0444,0.045,0.0468,0.048,0.0492,0.0504]
l3 = [0.0396,0.0408]
Desired Output:
If N = .0954 then the output is [0.015, 0.0396, 0.0408], [0.015, 0.0408, 0.0396].
What I have tried:
output = sum(list(product(l1,l2,l3,l4,l5,l6,l7,l8)))
However this is too intensive as my largest bucket has 34 values, creating too many combinations.
Any help/tips on how to approach this in a more efficient manner would be greatly appreciated!
A:
My solution
So my attempt with Branch&Bound
def bb(target):
L=[l1,l2,l3,l4,l5,l6,l7,l8]
mn=[min(l) for l in L]
mx=[max(l) for l in L]
return bbrec([], target, L, mn, mx)
eps=1e-9
def bbrec(sofar, target, L, mn, mx):
if len(L)==0:
if target<eps and target>-eps: return [sofar]
return []
if sum(mn)>target+eps: return []
if sum(mx)<target-eps: return []
res=[]
for x in L[0]:
res += bbrec(sofar+[x], target-x, L[1:], mn[1:], mx[1:])
return res
Note that it is clearly not optimized. For example, it might be faster to avoid list appending, and to deal with 8-element lists from the start (for example, for sofar, filled with None slots at the beginning). Or to create an iterator (yielding results when we find some, rather than appending them).
But as is, it is already 40 times faster than the brute force method on my generated data (giving the exact same result). Which is something, considering that this is pure python, when brute force can use my beloved itertools (that is python also, of course, but iterations are done faster, since they are done in the implementation of itertools, not in python code).
And I must confess brute force was faster than expected. But, yet, still 40 times too slow.
Explanation
General principle of branch and bound is to enumerate all possible solutions recursively (reasoning being "there are len(l1) sorts of solutions: those containing l1[0], those containing l1[1], ...; and among the first category, there are len(l2) sorts of solutions, ..."). Which, so far, is just another implementation of brute force. Except that during recursion, you can cut whole branches (whole subsets of all candidates) if you know that finding a solution is impossible from where you are.
It is probably clearer with an example, so let's use yours.
bbrec is called with
a partial solution (starting with an empty list [], and ending with a list of 8 numbers)
a target for the sum of remaining numbers
a list of lists from which we must take numbers (so at the beginning, your 8 lists. Once we have chosen the 1st number, the 7 remaining lists. Etc)
a list of minimum values of those lists (8 numbers at first, being the 8 minimum values)
a list of maximum values
It is called at first with ([], target, [l1,...,l8], [min(l1),...,min(l8)], [max(l1),...,max(l8)])
And each call is supposed to choose a number from the first list, and call bbrec recursively to choose the remaining numbers.
The eighth recursive call will be done with sofar a list of 8 numbers (a solution, or candidate), target being what we have to find in the rest (and since there is no rest, it should be 0), and L, mn, and mx empty lists. So when we see that we are in this situation (that is, len(L)=len(mn)=len(mx)=0 or len(sofar)=8; any of those 4 criteria are equivalent), we just have to check if the remaining target is 0. If so, then sofar is a solution. If not, then sofar is not a solution.
If we are not in this situation, that is, if there are still numbers to choose for sofar, bbrec just chooses the first number, by iterating over all possibilities from the first list, and, for each of those, calls itself recursively to choose the remaining numbers.
But before doing so (and those are the 2 lines that make B&B useful; otherwise it is just a recursive implementation of the enumeration of all 8-tuples for 8 lists), we check if there is at least a chance to find a solution there.
For example, if you are calling
bbrec([1,2,3,4], 12, [[1,2,3],[1,2,3], [5,6,7], [8,9,10]], [1,1,5,8], [3,3,7,10])
(note that mn and mx are redundant information. They are just min and max of the lists. But no need to compute those min and max over and over again)
So, if you are calling bbrec like this, that means that you have already chosen 4 numbers, from the 4 first lists. And you need to choose 4 other numbers, from the 4 remaining lists that are passed as the 3rd argument.
And the total of the 4 numbers you still have to choose must be 12.
But, you also know that any combination of 4 numbers from the 4 remaining lists will sum to a total between 1+1+5+8=15 and 3+3+7+10=23.
So, no need to even bother enumerating all the solutions starting with [1,2,3,4] and continuing with 4 numbers chosen from [1,2,3],[1,2,3], [5,6,7], [8,9,10]. It is a lost cause: none of the remaining 4 numbers will result in a total of 12 anyway (they all will have a total of at least 15).
And that is what explains why this algorithm can beat, by a factor of 40, an itertools-based solution, using only naive manipulation of lists and for loops.
Brute force solution
If you want to compare yourself on your example, the brute force solution (already given in comments)
def brute(target):
return [k for k in itertools.product(l1,l2,l3,l4,l5,l6,l7,l8) if math.isclose(sum(k), target)]
Generator version
Not really faster. But at least, if the idea is not to build a list of all solutions but to iterate through them, this version allows you to do so (and it is very slightly faster). And since we talked about generators vs lists in comments...
eps=1e-9
def bb(target):
L=[l1,l2,l3,l4,l5,l6,l7,l8]
mn=[min(l) for l in L]
mx=[max(l) for l in L]
return list(bbit([], target, L, mn, mx))
def bbit(sofar, target, L, mn, mx):
if len(L)==0:
if target<eps and target>-eps:
print(sofar)
yield sofar
return
if sum(mn)>target+eps: return
if sum(mx)<target-eps: return
for x in L[0]:
yield from bbit(sofar+[x], target-x, L[1:], mn[1:], mx[1:])
Here, I use it just to build a list (so, no advantage over the first version).
But if you wanted to just print solutions, for example, you could
for sol in bbit([], target, L, mn, mx):
print(sol)
Which would print all solutions, without building any list of solutions.
Example lists
Just for btilly or those who would like to test their method against the same lists I've used, here are the ones I've chosen
l1=list(np.arange(0.013, 0.019, 0.001))
l2=list(np.arange(0.0396, 0.0516, 0.0012))
l3=[0.0396, 0.0498]
l4=list(np.arange(0.02, 0.8, 0.02))
l5=list(np.arange(0.001, 0.020, 0.001))
l6=list(np.arange(0.021, 0.035, 0.001))
l7=list(np.arange(0.058, 0.088, 0.002))
l8=list(np.arange(0.020, 0.040, 0.005))
A:
Non-recursive solution:
from itertools import accumulate, product
from sys import float_info
def test(lists, target):
# will return a list of 2-tuples, containing sum and elements making it
convolutions = [(0,())]
# lower_bounds[i] - what is the least gain we'll get from remaining lists
lower_bounds = list(accumulate(map(min, lists[::-1])))[::-1][1:] + [0]
# upper_bounds[i] - what is the max gain we'll get from remaining lists
upper_bounds = list(accumulate(map(max, lists[::-1])))[::-1][1:] + [0]
for l, lower_bound, upper_bound in zip(lists, lower_bounds, upper_bounds):
convolutions = [
# update sum and extend the list for viable candidates
(accumulated + new_element, elements + (new_element,))
for (accumulated, elements), new_element in product(convolutions, l)
if lower_bound - float_info.epsilon <= target - accumulated - new_element <= upper_bound + float_info.epsilon
]
return convolutions
Output of test(lists, target):
[(0.09540000000000001, (0.015, 0.0396, 0.0408)),
(0.09540000000000001, (0.015, 0.0408, 0.0396))]
This can be further optimized by sorting lists and slicing them based on upper/lower bound using bisect:
from bisect import bisect_left, bisect_right
# ...
convolutions = [
(partial_sum + new_element, partial_elements + (new_element,))
for partial_sum, partial_elements in convolutions
for new_element in l[bisect_left(l, target-upper_bound-partial_sum-float_info.epsilon):bisect_right(l, target-lower_bound-partial_sum+float_info.epsilon)]
]
A:
And here is a straightforward dynamic programming solution. I build a data structure which has the answer, and then generate the answer from that data structure.
from dataclasses import dataclass
from decimal import Decimal
from typing import Any
@dataclass
class SummationNode:
value: Decimal
solution_tail: Any = None
next_solution: Any = None
def solutions (self):
if self.value is None:
yield []
else:
for rest in self.solution_tail.solutions():
rest.append(self.value)
yield rest
if self.next_solution is not None:
yield from self.next_solution.solutions()
def all_combinations(target, *lists):
solution_by_total = {
Decimal(0): SummationNode(None)
}
for l in lists:
old_solution_by_total = solution_by_total
solution_by_total = {}
for x_raw in l:
x = Decimal(str(x_raw)) # Deal with rounding.
for prev_total, prev_solution in old_solution_by_total.items():
next_solution = solution_by_total.get(x + prev_total)
solution_by_total[x + prev_total] = SummationNode(
x, prev_solution, next_solution
)
return solution_by_total.get(Decimal(str(target)))
l1 = [0.013,0.014,0.015,0.016,0.017,0.018]
l2 = [0.0396,0.0408,0.042,0.0432,0.0444,0.045,0.0468,0.048,0.0492,0.0504]
l3 = [0.0396,0.0408]
for answer in all_combinations(0.0964, l1, l2, l3).solutions():
print(answer)
To check that the logic of this matches the others, when rounding errors are fixed, use the following test:
import numpy as np
def arange(start, stop, step):
return [round(x, 5) for x in list(np.arange(start, stop, step))]
l1=arange(0.013, 0.019, 0.001)
l2=arange(0.0396, 0.0516, 0.0012)
l3=[0.0396, 0.0498]
l4=arange(0.02, 0.8, 0.02)
l5=arange(0.001, 0.020, 0.001)
l6=arange(0.021, 0.035, 0.001)
l7=arange(0.058, 0.088, 0.002)
l8=arange(0.020, 0.040, 0.005)
for answer in all_combinations(0.2716, l1, l2, l3, l4, l5, l6, l7, l8).solutions():
print([float(x) for x in answer])
|
Find all combinations that add up to given number python with list of lists
|
I've seen plenty of threads on how to find all combinations that add up to a number with one list, but wanted to know how to expand this such that you can only pick one number at a time, from a list of lists
Question:
You must select 1 number from each list, how do you find all combinations that sum to N?
Given:
3 lists of differing fixed lengths [e.g. l1 will always have 6 values, l2 will always have 10 values, etc]:
l1 = [0.013,0.014,0.015,0.016,0.017,0.018]
l2 = [0.0396,0.0408,0.042,0.0432,0.0444,0.045,0.0468,0.048,0.0492,0.0504]
l3 = [0.0396,0.0408]
Desired Output:
If N = .0954 then the output is [0.015, 0.0396, 0.0408], [0.015, 0.0408, 0.0396].
What I have tried:
output = sum(list(product(l1,l2,l3,l4,l5,l6,l7,l8)))
However this is too intensive as my largest bucket has 34 values, creating too many combinations.
Any help/tips on how to approach this in a more efficient manner would be greatly appreciated!
|
[
"My solution\nSo my attempt with Branch&Bound\n\ndef bb(target):\n L=[l1,l2,l3,l4,l5,l6,l7,l8]\n mn=[min(l) for l in L]\n mx=[max(l) for l in L]\n return bbrec([], target, L, mn, mx)\n \neps=1e-9\n\ndef bbrec(sofar, target, L, mn, mx):\n if len(L)==0:\n if target<eps and target>-eps: return [sofar]\n return []\n if sum(mn)>target+eps: return []\n if sum(mx)<target-eps: return []\n res=[]\n for x in L[0]:\n res += bbrec(sofar+[x], target-x, L[1:], mn[1:], mx[1:])\n return res\n\nNote that it is clearly not optimized. For example, it might be faster, to avoid list appending, to deal with 8 elements list from the start (for example, for sofar, filled with None slots at the beginning). Or to create an iterator (yielding results when we find some, rather than appending them.\nBut as is, it is already 40 times faster than brute force method on my generated data (giving the exact same result). Which is something, considering that this is pure python, when brute force can use by beloved itertools (that is python also, of course, but iterations are done faster, since they are done in implementation of itertools, not in python code).\nAnd I must confess brute force was faster than expected. But, yet, still 40 times too slow.\nExplanation\nGeneral principle of branch and bound is to enumerate all possible solution recursively (reasoning being \"there are len(l1) sort of solutions: those containing l1[0], those containing l1[1], ...; and among the first category, there are len(l2) sort of solutions, ...\"). Which, so far, is just another implementation of brute force. Except that during recursion, you can't cut whole branches, (whole subset of all candidates) if you know that finding a solution is impossible from where you are.\nIt is probably clearer with an example, so let's use yours.\nbbrec is called with\n\na partial solution (starting with an empty list [], and ending with a list of 8 numbers)\na target for the sum of remaining numbers\na list of list from which we must take numbers (so at the beginning, your 8 lists. Once we have chosen the 1st number, the 7 remaining lists. Etc)\na list of minimum values of those lists (8 numbers at first, being the 8 minimum values)\na list of maximum values\n\nIt is called at first with ([], target, [l1,...,l8], [min(l1),...,min(l8)], [max(l1),...,max(l8)])\nAnd each call is supposed to choose a number from the first list, and call bbrec recursively to choose the remaining numbers.\nThe eigth recursive call with be done with sofar a list of 8 numbers (a solution, or candidate). target being what we have to find in the rest. And since there is no rest, it should be 0. L, mn, and mx an empty list. So When we see that we are in this situation (that is len(L)=len(mn)=len(mx)=0 or len(sofar)=8 — any of those 4 criteria are equivalents), we just have to check if the remaining target is 0. If so, then sofar is a solution. If not, then sofar is not a solution.\nIf we are not in this situation. That is, if there are still numbers to choose for sofar. bbrec just choose the first number, by iterating all possibilites from the first list. And, for each of those, call itself recursively to choose remaining numbers.\nBut before doing so (and those are the 2 lines that make B&B useful. 
Otherwise it is just a recursive implementation of the enumeration of all 8-uples for 8 lists), we check if there is at least a chance to find a solution there.\nFor example, if you are calling\nbbrec([1,2,3,4], 12, [[1,2,3],[1,2,3], [5,6,7], [8,9,10]], [1,1,5,8], [3,3,7,10])\n(note that mn and mx are redundant information. They are just min and max of the lists. But no need to compute those min and max over and over again)\nSo, if you are calling bbrec like this, that means that you have already chosen 4 numbers, from the 4 first lists. And you need to choose 4 other numbers, from the 4 remaining list that are passed as the 3rd argument.\nAnd the total of the 4 numbers you still have to choose must be 12.\nBut, you also know that any combination of 4 numbers from the 4 remaining list will sum to a total between 1+1+5+8=15 and 3+3+7+10=23.\nSo, no need to even bother enumerating all the solutions starting with [1,2,3,4] and continuing with 4 numbers chosen from [1,2,3],[1,2,3], [5,6,7], [8,9,10]. It is a lost cause: none of the remaining 4 numbers with result in a total of 12 anyway (they all will have a total of at least 15).\nAnd that is what explain why this algorithm can beat, with a factor 40, an itertools based solution, by using only naive manipulation of lists, and for loops.\nBrute force solution\nIf you want to compare yourself on your example, the brute force solution (already given in comments)\ndef brute(target):\n return [k for k in itertools.product(l1,l2,l3,l4,l5,l6,l7,l8) if math.isclose(sum(k), target)]\n\nGenerator version\nNot really faster. But at least, if the idea is not to build a list of all solutions, but to iterate through them, that version allows to do so (and it is very slightly faster). And since we talked about generator vs lists in comments...\neps=1e-9\ndef bb(target):\n L=[l1,l2,l3,l4,l5,l6,l7,l8]\n mn=[min(l) for l in L]\n mx=[max(l) for l in L]\n return list(bbit([], target, L, mn, mx))\ndef bbit(sofar, target, L, mn, mx):\n if len(L)==0:\n if target<eps and target>-eps:\n print(sofar)\n yield sofar\n return\n if sum(mn)>target+eps: return\n if sum(mx)<target-eps: return\n for x in L[0]:\n yield from bbrec(sofar+[x], target-x, L[1:], mn[1:], mx[1:])\n\nHere, I use it just to build a list (so, no advantage from the first version).\nBut if you wanted to just print solutions, for example, you could\nfor sol in bbit([], target, L, mn, mx):\n print(sol)\n\nWhich would print all solutions, without building any list of solutions.\nExample lists\nJust for btilly or those who would like to test their method against the same lists I've used, here are the ones I've chosen\nl1=list(np.arange(0.013, 0.019, 0.001))\nl2=list(np.arange(0.0396, 0.0516, 0.0012))\nl3=[0.0396, 0.0498]\nl4=list(np.arange(0.02, 0.8, 0.02))\nl5=list(np.arange(0.001, 0.020, 0.001))\nl6=list(np.arange(0.021, 0.035, 0.001))\nl7=list(np.arange(0.058, 0.088, 0.002))\nl8=list(np.arange(0.020, 0.040, 0.005))\n\n",
"Non-recursive solution:\nfrom itertools import accumulate, product\nfrom sys import float_info\n\ndef test(lists, target):\n # will return a list of 2-tuples, containing sum and elements making it\n convolutions = [(0,())]\n # lower_bounds[i] - what is the least gain we'll get from remaining lists\n lower_bounds = list(accumulate(map(min, lists[::-1])))[::-1][1:] + [0]\n # upper_bounds[i] - what is the max gain we'll get from remaining lists\n upper_bounds = list(accumulate(map(max, lists[::-1])))[::-1][1:] + [0]\n for l, lower_bound, upper_bound in zip(lists, lower_bounds, upper_bounds):\n convolutions = [\n # update sum and extend the list for viable candidates\n (accumulated + new_element, elements + (new_element,))\n for (accumulated, elements), new_element in product(convolutions, l)\n if lower_bound - float_info.epsilon <= target - accumulated - new_element <= upper_bound + float_info.epsilon\n ]\n\n return convolutions\n\nOutput of test(lists, target):\n[(0.09540000000000001, (0.015, 0.0396, 0.0408)),\n (0.09540000000000001, (0.015, 0.0408, 0.0396))]\n\nThis can be further optimized by sorting lists and slicing them based on upper/lower bound using bisect:\nfrom bisect import bisect_left, bisect_right\n# ...\n\nconvolutions = [\n (partial_sum + new_element, partial_elements + (new_element,))\n for partial_sum, partial_elements in convolutions\n for new_element in l[bisect_left(l, target-upper_bound-partial_sum-float_info.epsilon):bisect_right(l, target-lower_bound-partial_sum+float_info.epsilon)]\n]\n\n",
"And here is a straightforward dynamic programming solution. I build a data structure which has the answer, and then generate the answer from that data structure.\nfrom dataclasses import dataclass\nfrom decimal import Decimal\nfrom typing import Any\n\n@dataclass\nclass SummationNode:\n value: Decimal\n solution_tail: Any = None\n next_solution: Any = None\n\n def solutions (self):\n if self.value is None:\n yield []\n else:\n for rest in self.solution_tail.solutions():\n rest.append(self.value)\n yield rest\n\n if self.next_solution is not None:\n yield from self.next_solution.solutions()\n\n\ndef all_combinations(target, *lists):\n solution_by_total = {\n Decimal(0): SummationNode(None)\n }\n\n for l in lists:\n old_solution_by_total = solution_by_total\n solution_by_total = {}\n for x_raw in l:\n x = Decimal(str(x_raw)) # Deal with rounding.\n for prev_total, prev_solution in old_solution_by_total.items():\n next_solution = solution_by_total.get(x + prev_total)\n solution_by_total[x + prev_total] = SummationNode(\n x, prev_solution, next_solution\n )\n return solution_by_total.get(Decimal(str(target)))\n\nl1 = [0.013,0.014,0.015,0.016,0.017,0.018]\nl2 = [0.0396,0.0408,0.042,0.0432,0.0444,0.045,0.0468,0.048,0.0492,0.0504]\nl3 = [0.0396,0.0408]\nfor answer in all_combinations(0.0964, l1, l2, l3).solutions():\n print(answer)\n\n\nTo check that the logic of this matches the others, when rounding errors are fixed, use the following test:\nimport numpy as np\n\ndef arange(start, stop, step):\n return [round(x, 5) for x in list(np.arange(start, stop, step))]\n\nl1=arange(0.013, 0.019, 0.001)\nl2=arange(0.0396, 0.0516, 0.0012)\nl3=[0.0396, 0.0498]\nl4=arange(0.02, 0.8, 0.02)\nl5=arange(0.001, 0.020, 0.001)\nl6=arange(0.021, 0.035, 0.001)\nl7=arange(0.058, 0.088, 0.002)\nl8=arange(0.020, 0.040, 0.005)\n\nfor answer in all_combinations(0.2716, l1, l2, l3, l4, l5, l6, l7, l8).solutions():\n print([float(x) for x in answer])\n\n"
] |
[
2,
2,
2
] |
[] |
[] |
[
"algorithm",
"combinations",
"python",
"subset_sum",
"sum"
] |
stackoverflow_0074538180_algorithm_combinations_python_subset_sum_sum.txt
|
Q:
Numpy: Making overlapping vectorized modifications to an existing numpy array
I'm interested in knowing if it's possible to modify individual indices of a numpy array in a manner flexible enough to modify the same index multiple times:
import numpy as np
zeros = np.zeros(10)
indices = np.array([0,0])
adders = np.array([5,8])
indexing adders in this way can give you a sum of 10
In [17]: adders[indices]
Out[17]: array([5, 5])
but modifying the same index on zeros twice will only yield a single modification
zeros[indices] += adders[indices]
zeros
Out[19]: array([5., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
I understand this can be done via a simple for loop, but is there any numpy functionality for this?
A:
figured it out
np.add.at(zeros, indices, adders[indices])
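For reference, a minimal runnable demonstration of that one-liner. np.add.at performs unbuffered in-place addition, so repeated indices accumulate instead of overwriting each other:
import numpy as np

zeros = np.zeros(10)
indices = np.array([0, 0])
adders = np.array([5, 8])

# both occurrences of index 0 are applied: 5 + 5 = 10
np.add.at(zeros, indices, adders[indices])
print(zeros)  # [10.  0.  0.  0.  0.  0.  0.  0.  0.  0.]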
|
Numpy: Making overlapping vectorized modifications to an existing numpy array
|
I'm interested in knowing if it's possible to modify individual indices of a numpy array in a manner flexible enough to modify the same index multiple times:
import numpy as np
zeros = np.zeros(10)
indices = np.array([0,0])
adders = np.array([5,8])
indexing adders in this way can give you a sum of 10
In [17]: adders[indices]
Out[17]: array([5, 5])
but modifying the same index on zeros twice will only yield a single modification
zeros[indices] += adders[indices]
zeros
Out[19]: array([5., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
I understand this can be done via a simple for loop, but is there any numpy functionality for this?
|
[
"figured it out\nnp.add.at(zeros, indices, adders[indices])\n\n"
] |
[
0
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0074536692_numpy_python.txt
|
Q:
How to override methods decorated @overload?
I made a class that inherits QGraphicsItem (from PySide6) and wrote two overrides.
from PySide6.QtCore import QRectF, QLineF, QPointF
from PySide6.QtWidgets import QGraphicsItem
class myItem(QGraphicsItem):
def setPos(self, x: float, y: float):
# do something and the coordinates maybe changed
super().setPos(x, y)
def setPos(self, pos: QPointF):
# do something and the coordinates maybe changed
super().setPos(pos)
item = myItem()
item.setPos(10, 10)
print(item.pos())
item.setPos(QPointF(100, 100))
print(item.pos())
But I get an error when I run the script.
Traceback (most recent call last):
File "d:\MyProjects\GitRepo\projects\test2.py", line 15, in <module>
item.setPos(10, 10)
TypeError: setPos() takes 2 positional arguments but 3 were given
When I checked the definition of QGraphicsItem, the setPos methods had an @overload decorator.
@overload
def setPos(self, pos:Union[PySide6.QtCore.QPointF, PySide6.QtCore.QPoint, PySide6.QtGui.QPainterPath.Element]) -> None: ...
@overload
def setPos(self, x:float, y:float) -> None: ...
How could I make it right..?
A:
At the expense of installing another package you could achieve what you are trying to do. These libraries will re-direct to methods based on the type signature of the method definition.
e.g. plum and multidispatch.
This example is with plum.
from plum import dispatch
class MyClass:
pass
class MultiDispatch:
@dispatch
def func(self,
a: int) -> str:
print(f'ex1. Type {type(a)}')
return str(type(a))
@dispatch
def func(self,
a: str) -> str:
print(f'ex2. Type {type(a)}')
return str(type(a))
@dispatch
def func(self,
a: int,
b: str) -> str:
print(f'ex3. Types {type(a)} - {type(b)}')
return str(type(a))
@dispatch
def func(self,
a: str,
b: int) -> str:
print(f'ex4. Types {type(a)} - {type(b)}')
return str(type(a))
@dispatch
def func(self,
a: MyClass) -> str:
print(f'ex5. Type {type(a)}')
return str(type(a))
if __name__ == "__main__":
m = MultiDispatch()
print(m.func(123))
print(m.func('456'))
print(m.func(MyClass()))
print(m.func(123, '456'))
print(m.func('456', 123))
print(m.func(MyClass()))
This gives the result below, where you can see that calls to the same method name are redirected to the method with the matching type signature, including where one of the type signatures depends on a user-defined class.
ex1. Type <class 'int'>
<class 'int'>
ex2. Type <class 'str'>
<class 'str'>
ex5. Type <class '__main__.MyClass'>
<class '__main__.MyClass'>
ex3. Types <class 'int'> - <class 'str'>
<class 'int'>
ex4. Types <class 'str'> - <class 'int'>
<class 'str'>
ex5. Type <class '__main__.MyClass'>
<class '__main__.MyClass'>
I cannot test it, but I believe your code would work as you want if trivially modified to below.
from plum import dispatch
from PySide6.QtCore import QRectF, QLineF, QPointF
from PySide6.QtWidgets import QGraphicsItem
class myItem(QGraphicsItem):
@dispatch
def setPos(self, x: float, y: float):
# do something and the coordinates maybe changed
super().setPos(x, y)
@dispatch
def setPos(self, pos: QPointF):
# do something and the coordinates maybe changed
super().setPos(pos)
|
How to override methods decorated @overload?
|
I made a class that inherits QGraphicsItem (from PySide6) and wrote two overrides.
from PySide6.QtCore import QRectF, QLineF, QPointF
from PySide6.QtWidgets import QGraphicsItem
class myItem(QGraphicsItem):
def setPos(self, x: float, y: float):
# do something and the coordinates maybe changed
super().setPos(x, y)
def setPos(self, pos: QPointF):
# do something and the coordinates maybe changed
super().setPos(pos)
item = myItem()
item.setPos(10, 10)
print(item.pos())
item.setPos(QPointF(100, 100))
print(item.pos())
But I get an error when I run the script.
Traceback (most recent call last):
File "d:\MyProjects\GitRepo\projects\test2.py", line 15, in <module>
item.setPos(10, 10)
TypeError: setPos() takes 2 positional arguments but 3 were given
When I checked the definition of QGraphicsItem, the setPos methods had an @overload decorator.
@overload
def setPos(self, pos:Union[PySide6.QtCore.QPointF, PySide6.QtCore.QPoint, PySide6.QtGui.QPainterPath.Element]) -> None: ...
@overload
def setPos(self, x:float, y:float) -> None: ...
How could I make it right..?
|
[
"At the expense of installing another package you could achieve what you are trying to do. These libraries will re-direct to methods based on the type signature of the method definition.\ne.g. plum and multidispatch.\nThis example is with plum.\nfrom plum import dispatch\n\n\nclass MyClass:\n pass\n\n\nclass MultiDispatch:\n\n @dispatch\n def func(self,\n a: int) -> str:\n print(f'ex1. Type {type(a)}')\n return str(type(a))\n\n @dispatch\n def func(self,\n a: str) -> str:\n print(f'ex2. Type {type(a)}')\n return str(type(a))\n\n @dispatch\n def func(self,\n a: int,\n b: str) -> str:\n print(f'ex3. Types {type(a)} - {type(b)}')\n return str(type(a))\n\n @dispatch\n def func(self,\n a: str,\n b: int) -> str:\n print(f'ex4. Types {type(a)} - {type(b)}')\n return str(type(a))\n\n @dispatch\n def func(self,\n a: MyClass) -> str:\n print(f'ex5. Type {type(a)}')\n return str(type(a))\n\nif __name__ == \"__main__\":\n m = MultiDispatch()\n print(m.func(123))\n print(m.func('456'))\n print(m.func(MyClass()))\n print(m.func(123, '456'))\n print(m.func('456', 123))\n print(m.func(MyClass()))\n\nThis give the result below, where you can see the calls to the same method name are redirected to the method with the matching type signature. Including where one of the type signatures depends on a user defined class.\nex1. Type <class 'int'>\n<class 'int'>\nex2. Type <class 'str'>\n<class 'str'>\nex5. Type <class '__main__.MyClass'>\n<class '__main__.MyClass'>\nex3. Types <class 'int'> - <class 'str'>\n<class 'int'>\nex4. Types <class 'str'> - <class 'int'>\n<class 'str'>\nex5. Type <class '__main__.MyClass'>\n<class '__main__.MyClass'>\n\nI cannot test it, but I believe your code would work as you want if trivially modified to below.\nfrom plum import dispatch\nfrom PySide6.QtCore import QRectF, QLineF, QPointF\nfrom PySide6.QtWidgets import QGraphicsItem\n\n\nclass myItem(QGraphicsItem):\n @dispatch\n def setPos(self, x: float, y: float):\n # do something and the coordinates maybe changed\n super().setPos(x, y)\n\n @dispatch\n def setPos(self, pos: QPointF):\n # do something and the coordinates maybe changed\n super().setPos(pos)\n\n"
] |
[
0
] |
[] |
[] |
[
"overloading",
"overriding",
"python"
] |
stackoverflow_0071329090_overloading_overriding_python.txt
|
Q:
Python while not true loops
door = input("Do you want to open the door? Enter yes or no: ").lower()
while door != "yes" and door != "no":
print("Invalid answer.")
door = input("Do you want to open the door? Enter yes or no: ").lower()
if door == "yes":
print("You try to twist open the doorknob but it is locked.")
elif door == "no":
print("You decide not to open the door.")
Is there an easier way to use the while loop for invalid answers? So I won't need to add that line after every single question in the program.
I tried def() and while True, but I'm not quite sure how to use them correctly.
A:
One way to avoid the extra line:
while True:
door = input("Do you want to open the door? Enter yes or no: ").lower()
if door in ("yes", "no"):
break
print("Invalid answer.")
Or if you do this a lot make a helper function.
def get_input(prompt, error, choices):
while True:
answer = input(f"{prompt} Enter {', '.join(choices)}: ")
if answer in choices:
return answer
print(error)
Example usage:
door = get_input("Do you want to open the door?", "Invalid answer.", ("yes", "no"))
if door == "yes":
print("You try to twist open the doorknob but it is locked.")
else:
print("You decide not to open the door.")
|
Python while not true loops
|
door = input("Do you want to open the door? Enter yes or no: ").lower()
while door != "yes" and door != "no":
print("Invalid answer.")
door = input("Do you want to open the door? Enter yes or no: ").lower()
if door == "yes":
print("You try to twist open the doorknob but it is locked.")
elif door == "no":
print("You decide not to open the door.")
Is there an easier way to use the while loop for invalid answers? So I won't need to add that line after every single question in the program.
I tried def() and while True, but I'm not quite sure how to use them correctly.
|
[
"One way to avoid the extra line:\nwhile True\n door = input(\"Do you want to open the door? Enter yes or no: \").lower()\n if door in (\"yes\", \"no\"):\n break\n print(\"Invalid answer.\")\n\nOr if you do this a lot make a helper function.\ndef get_input(prompt, error, choices):\n while True:\n answer = input(f\"{prompt} Enter {', '.join(choices)}: \")\n if answer in choices:\n return answer\n print(error)\n\nExample usage:\ndoor = get_input(\"Do you want to open the door?\", \"Invalid answer.\", (\"yes\", \"no\"))\nif door == \"yes\":\n print(\"You try to twist open the doorknob but it is locked.\")\nelse:\n print(\"You decide not to open the door.\")\n\n"
] |
[
2
] |
[
"while True:\n answer = (\"Enter yes or no: \").lower()\n if answer in [\"yes\", \"no\"]:\n break\n print(\"Invalid answer.\")\n # loop will repeat again\n \n\n"
] |
[
-1
] |
[
"function",
"if_statement",
"python",
"while_loop"
] |
stackoverflow_0074539288_function_if_statement_python_while_loop.txt
|
Q:
Pydantic Model: Convert UUID to string when calling .dict()
Thank you for your time.
I'm trying to convert a UUID field into a string when calling .dict() to save to MongoDB using pymongo. I tried with .json() but it seems like MongoDB doesn't like it:
TypeError: document must be an instance of dict, bson.son.SON, bson.raw_bson.RawBSONDocument, or a type that inherits from collections.MutableMapping
Here is what I have done so far:
from uuid import uuid4
from datetime import datetime
from pydantic import BaseModel, Field, UUID4
class TestModel(BaseModel):
id: UUID4 = Field(default_factory=uuid4)
title: str = Field(default="")
ts: datetime = Field(default_factory=datetime.utcnow)
record = TestModel()
record.title = "Hello!"
print(record.json())
# {"id": "4d52517a-88a0-43f8-9d9a-df9d7b6ddf01", "title": "Hello!", "ts": "2021-08-18T03:00:54.913345"}
print(record.dict())
# {'id': UUID('4d52517a-88a0-43f8-9d9a-df9d7b6ddf01'), 'title': 'Hello!', 'ts': datetime.datetime(2021, 8, 18, 3, 0, 54, 913345)}
Any advice? Thank you.
The best I can do is make a new method called to_dict() inside that model and call it instead
class TestModel(BaseModel):
id: UUID4 = Field(default_factory=uuid4)
title: str = Field(default="")
def to_dict(self):
data = self.dict()
data["id"] = self.id.hex
return data
record = TestModel()
print(record.to_dict())
# {'id': '03c088da40e84ee7aa380fac82a839d6', 'title': ''}
A:
Following on Pydantic's docs for classes-with-get_validators
I created the following custom type NewUuid.
It accepts a string matching the UUID format and validates it by consuming the value with uuid.UUID(). If the value is invalid, uuid.UUID() throws an exception (see example output) and if it's valid, then NewUuid returns a string (see example output). The exception is any of uuid.UUID()'s exceptions, but it's wrapped with Pydantic's exception as well.
The script below can run as is.
import uuid
from pydantic import BaseModel
class NewUuid(str):
"""
A UUID string type, adapted from the custom-type example in the
pydantic docs. Note: this is just an example; it checks that the
string parses as a UUID and keeps the value as a plain str.
"""
@classmethod
def __get_validators__(cls):
# one or more validators may be yielded which will be called in the
# order to validate the input, each validator will receive as an input
# the value returned from the previous validator
yield cls.validate
@classmethod
def __modify_schema__(cls, field_schema):
# __modify_schema__ should mutate the dict it receives in place,
# the returned value will be ignored
field_schema.update(
# regex for the canonical 8-4-4-4-12 UUID format
pattern='^[A-F0-9a-f]{8}(-[A-F0-9a-f]{4}){3}-[A-F0-9a-f]{12}$',
# an example UUID
examples=['4a33135d-8aa3-47ba-bcfd-faa297b7fb5b'],
)
@classmethod
def validate(cls, v):
if not isinstance(v, str):
raise TypeError('string required')
uuid.UUID(v)  # raises ValueError if v is not a valid UUID
# you could also return a string here which would mean model.post_code
# would be a string, pydantic won't care but you could end up with some
# confusion since the value's type won't match the type annotation
# exactly
return cls(f'{v}')
def __repr__(self):
return f'NewUuid({super().__repr__()})'
class Resource(BaseModel):
id: NewUuid
name: str
print('-' * 20)
resource_correct_id: Resource = Resource(id='e8991fd8-b655-45ff-996f-8bc1f60f31e0', name='Server2')
print(resource_correct_id)
print(resource_correct_id.id)
print(resource_correct_id.dict())
print('-' * 20)
resource_malformed_id: Resource = Resource(id='X8991fd8-b655-45ff-996f-8bc1f60f31e0', name='Server3')
print(resource_malformed_id)
print(resource_malformed_id.id)
Example Output
--------------------
id=NewUuid('e8991fd8-b655-45ff-996f-8bc1f60f31e0') name='Server2'
e8991fd8-b655-45ff-996f-8bc1f60f31e0
{'id': NewUuid('e8991fd8-b655-45ff-996f-8bc1f60f31e0'), 'name': 'Server2'}
--------------------
Traceback (most recent call last):
File "/Users/smoshkovits/ws/fallback/playground/test_pydantic8_uuid.py", line 58, in <module>
resource_malformed_id: Resource = Resource(id='X8991fd8-b655-45ff-996f-8bc1f60f31e0', name='Server3')
File "pydantic/main.py", line 406, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Resource
id
invalid literal for int() with base 16: 'X8991fd8b65545ff996f8bc1f60f31e0' (type=value_error)
A:
Pydantic has a possibility to transform or validate fields after the validation or at the same time. In that case, you need to use validator.
First way (this way validates/transforms at the same time as the other fields):
from uuid import UUID, uuid4
from pydantic import BaseModel, validator, Field
class ExampleSerializer(BaseModel):
uuid: UUID = Field(default_factory=uuid4)
other_uuid: UUID = Field(default_factory=uuid4)
other_field: str
_transform_cloud = validator("uuid", "other_uuid", allow_reuse=True)(
lambda x: str(x) if x else x
)
req = ExampleSerializer(
uuid="a1fd6286-196c-4922-adeb-d48074f06d80",
other_uuid="a1fd6286-196c-4922-adeb-d48074f06d80",
other_field="123"
).dict()
print(req)
Second way (this way validates/transforms after the others):
from uuid import UUID, uuid4
from pydantic import BaseModel, validator, Field
class ExampleSerializer(BaseModel):
uuid: UUID = Field(default_factory=uuid4)
other_uuid: UUID = Field(default_factory=uuid4)
other_field: str
@validator("uuid", "other_uuid"):
def validate_uuids(cls, value):
if value:
return str(x)
return value
req = ExampleSerializer(
uuid="a1fd6286-196c-4922-adeb-d48074f06d80",
other_uuid="a1fd6286-196c-4922-adeb-d48074f06d80",
other_field="123"
).dict()
print(req)
Result:
{'uuid': 'a1fd6286-196c-4922-adeb-d48074f06d80', 'other_uuid': 'a1fd6286-196c-4922-adeb-d48074f06d80', 'other_field': '123'}
A:
I found an easy way to convert UUID to string, via .json() and json.loads():
import json
from uuid import UUID
from pydantic import BaseModel
class Person(BaseModel):
id: UUID
name: str
married: bool
person = Person(id='a746f0ec-3d4c-4e23-b6f6-f159a00ed792', name='John', married=True)
print(json.loads(person.json()))
Result:
{'id': 'a746f0ec-3d4c-4e23-b6f6-f159a00ed792', 'name': 'John', 'married': True}
|
Pydantic Model: Convert UUID to string when calling .dict()
|
Thank you for your time.
I'm trying to convert a UUID field into a string when calling .dict() to save to MongoDB using pymongo. I tried with .json() but it seems like MongoDB doesn't like it:
TypeError: document must be an instance of dict, bson.son.SON, bson.raw_bson.RawBSONDocument, or a type that inherits from collections.MutableMapping
Here is what I have done so far:
from uuid import uuid4
from datetime import datetime
from pydantic import BaseModel, Field, UUID4
class TestModel(BaseModel):
id: UUID4 = Field(default_factory=uuid4)
title: str = Field(default="")
ts: datetime = Field(default_factory=datetime.utcnow)
record = TestModel()
record.title = "Hello!"
print(record.json())
# {"id": "4d52517a-88a0-43f8-9d9a-df9d7b6ddf01", "title": "Hello!", "ts": "2021-08-18T03:00:54.913345"}
print(record.dict())
# {'id': UUID('4d52517a-88a0-43f8-9d9a-df9d7b6ddf01'), 'title': 'Hello!', 'ts': datetime.datetime(2021, 8, 18, 3, 0, 54, 913345)}
Any advice? Thank you.
The best I can do is make a new method called to_dict() inside that model and call it instead
class TestModel(BaseModel):
id: UUID4 = Field(default_factory=uuid4)
title: str = Field(default="")
def to_dict(self):
data = self.dict()
data["id"] = self.id.hex
return data
record = TestModel()
print(record.to_dict())
# {'id': '03c088da40e84ee7aa380fac82a839d6', 'title': ''}
|
[
"Following on Pydantic's docs for classes-with-get_validators\nI created the following custom type NewUuid.\nIt accepts a string matching the UUID format and validates it by consuming the value with uuid.UUID(). If the value is invalid, uuid.UUID() throws an exception (see example output) and if it's valid, then NewUuid returns a string (see example output). The exception is any of uuid.UUID()'s exceptions, but it's wrapped with Pydantic's exception as well.\nThe script below can run as is.\n\nimport uuid\n\nfrom pydantic import BaseModel\n\n\nclass NewUuid(str):\n \"\"\"\n Partial UK postcode validation. Note: this is just an example, and is not\n intended for use in production; in particular this does NOT guarantee\n a postcode exists, just that it has a valid format.\n \"\"\"\n\n @classmethod\n def __get_validators__(cls):\n # one or more validators may be yielded which will be called in the\n # order to validate the input, each validator will receive as an input\n # the value returned from the previous validator\n yield cls.validate\n\n @classmethod\n def __modify_schema__(cls, field_schema):\n # __modify_schema__ should mutate the dict it receives in place,\n # the returned value will be ignored\n field_schema.update(\n # simplified regex here for brevity, see the wikipedia link above\n pattern='^[A-F0-9a-f]{8}(-[A-F0-9a-f]{4}){3}-[A-F0-9a-f]{12}$',\n # some example postcodes\n examples=['4a33135d-8aa3-47ba-bcfd-faa297b7fb5b'],\n )\n\n @classmethod\n def validate(cls, v):\n if not isinstance(v, str):\n raise TypeError('string required')\n u = uuid.UUID(v)\n # you could also return a string here which would mean model.post_code\n # would be a string, pydantic won't care but you could end up with some\n # confusion since the value's type won't match the type annotation\n # exactly\n return cls(f'{v}')\n\n def __repr__(self):\n return f'NewUuid({super().__repr__()})'\n\n\nclass Resource(BaseModel):\n id: NewUuid\n name: str\n\n\nprint('-' * 20)\nresource_correct_id: Resource = Resource(id='e8991fd8-b655-45ff-996f-8bc1f60f31e0', name='Server2')\nprint(resource_correct_id)\nprint(resource_correct_id.id)\nprint(resource_correct_id.dict())\nprint('-' * 20)\n\nresource_malformed_id: Resource = Resource(id='X8991fd8-b655-45ff-996f-8bc1f60f31e0', name='Server3')\nprint(resource_malformed_id)\nprint(resource_malformed_id.id)\n\n\nExample Output\n--------------------\n\nid=NewUuid('e8991fd8-b655-45ff-996f-8bc1f60f31e0') name='Server2'\ne8991fd8-b655-45ff-996f-8bc1f60f31e0\n{'id': NewUuid('e8991fd8-b655-45ff-996f-8bc1f60f31e0'), 'name': 'Server2'}\n\n--------------------\n\nTraceback (most recent call last):\n File \"/Users/smoshkovits/ws/fallback/playground/test_pydantic8_uuid.py\", line 58, in <module>\n resource_malformed_id: Resource = Resource(id='X8991fd8-b655-45ff-996f-8bc1f60f31e0', name='Server3')\n File \"pydantic/main.py\", line 406, in pydantic.main.BaseModel.__init__\npydantic.error_wrappers.ValidationError: 1 validation error for Resource\nid\n invalid literal for int() with base 16: 'X8991fd8b65545ff996f8bc1f60f31e0' (type=value_error)\n\n",
"Pydantic has a possibility to transform or validate fields after the validation or at the same time. In that case, you need to use validator.\nFirst way (this way validates/transforms at the same time to other fields):\nfrom uuid import UUID, uuid4\nfrom pydantic import BaseModel, validator, Field\n\nclass ExampleSerializer(BaseModel):\n uuid: UUID = Field(default_factory=uuid4)\n other_uuid: UUID = Field(default_factory=uuid4)\n other_field: str\n \n _transform_cloud = validator(\"uuid\", \"other_uuid\", allow_reuse=True)(\n lambda x: str(x) if x else x\n )\n\nreq = ExampleSerializer(\n uuid=\"a1fd6286-196c-4922-adeb-d48074f06d80\",\n other_uuid=\"a1fd6286-196c-4922-adeb-d48074f06d80\",\n other_field=\"123\"\n).dict()\n\nprint(req)\n\nSecond way (this way validates/transforms after the others):\nfrom uuid import UUID, uuid4\nfrom pydantic import BaseModel, validator, Field\n\nclass ExampleSerializer(BaseModel):\n uuid: UUID = Field(default_factory=uuid4)\n other_uuid: UUID = Field(default_factory=uuid4)\n other_field: str\n \n @validator(\"uuid\", \"other_uuid\"):\n def validate_uuids(cls, value):\n if value:\n return str(x)\n return value\n\nreq = ExampleSerializer(\n uuid=\"a1fd6286-196c-4922-adeb-d48074f06d80\",\n other_uuid=\"a1fd6286-196c-4922-adeb-d48074f06d80\",\n other_field=\"123\"\n).dict()\n\nprint(req)\n\nResult:\n{'uuid': 'a1fd6286-196c-4922-adeb-d48074f06d80', 'other_uuid': 'a1fd6286-196c-4922-adeb-d48074f06d80', 'other_field': '123'}\n\n",
"I found a easy way, to convert UUID to string using .dict():\nfrom uuid import UUID\nfrom pydantic import BaseModel\n\n\nclass Person(BaseModel):\n id: UUID\n name: str\n married: bool\n\n\nperson = Person(id='a746f0ec-3d4c-4e23-b6f6-f159a00ed792', name='John', married=True)\n\nprint(json.loads(person.json()))\n\nResult:\n{'id': 'a746f0ec-3d4c-4e23-b6f6-f159a00ed792', 'name': 'John', 'married': True}\n\n"
] |
[
2,
0,
0
] |
[
"You don’t need to convert a UUID to a string for mongodb. You can just add the record to the DB as a UUID and it will save it as Binary.\nHere is an example creating a quick UUID and saving it directly to the DB:\n from pydantic import BaseModel\n from uuid import UUID, uuid4\n\n\n class Example(BaseModel):\n id: UUID\n note: str\n\n\n def add_uuid_to_db():\n #database = <get your mongo db from the client>\n collection = database.example_db\n new_id: UUID = uuid4()\n new_record = {\n 'id': new_id,\n 'note': \"Hello World\"\n }\n new_object = Example(**new_record)\n collection.update_one(\n filter={},\n update={\"$set\": new_object.dict()},\n upsert=True\n )\n\n\n if __name__ == '__main__':\n add_uuid_to_db()\n\nAnd here is the resulting record:\n {\n \"_id\": {\n \"$oid\": \"611d1d0d6e00f4849c14a792\"\n },\n \"id\": {\n \"$binary\": \"jyxxsFKaToupb55VUKm0kw==\",\n \"$type\": \"3\"\n },\n \"note\": \"Hello World\"\n }\n\n"
] |
[
-1
] |
[
"pydantic",
"python",
"python_3.x"
] |
stackoverflow_0068826089_pydantic_python_python_3.x.txt
|
Q:
How to sort a list of strings containing letters and numbers
I am trying to sort a list containing strings that are written in a certain format.
Here is an example of said list:
numberList = ['Task #59;', 'Task #40.5; additional', 'Task #40.9; test', 'Task #40; Task Description Difference; test', 'Task #11;', 'Task #12;', 'Task #1;', 'Task #30.1;']
I am currently use this function below that I found online and modified based on an older post.
def natural_sort(listnum):
convert = lambda text: int(text) if text.isdigit() else text.lower()
alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)]
listnum.sort(key=alphanum_key)
return listnum
It works as intended, except it always sorts Task #40; behind Task #40.5; and Task #40.9;.
['Task #1;', 'Task #11;', 'Task #12;', 'Task #30.1;', 'Task #40.5; additional', 'Task #40.9; test', 'Task #40; Task Description Difference; test', 'Task #59;']
However, if I make it Task #40.0; it would sort correctly.
['Task #1;', 'Task #11;', 'Task #12;', 'Task #30.1;', 'Task #40.0; Task Description Difference; test', 'Task #40.5; additional', 'Task #40.9; test', 'Task #59;']
Is there any way to sort Task #40; in front of Task #40.5; and Task #40.9; without having to make it Task #40.0;?
Here is a link to the post that I got the code form:
Is there a built in function for string natural sort?
A:
Use a regex that extracts the number part and converts it to a float to be used as the key
import re

numberList = ['Task #59;', 'Task #40.5; additional', 'Task #40.9; test',
'Task #40; Task Description Difference; test', 'Task #11;', 'Task #12;', 'Task #1;', 'Task #30.1;']
numberList = sorted(numberList, key=lambda v: float(re.search(r"Task #([\d.]+);", v).group(1)))
print(numberList)
['Task #1;', 'Task #11;', 'Task #12;', 'Task #30.1;', 'Task #40; Task Description Difference; test', 'Task #40.5; additional', 'Task #40.9; test', 'Task #59;']
|
How to sort a list of strings containing letters and numbers
|
I am trying to sort a list containing strings that are written in a certain format.
Here is an example of said list:
numberList = ['Task #59;', 'Task #40.5; additional', 'Task #40.9; test', 'Task #40; Task Description Difference; test', 'Task #11;', 'Task #12;', 'Task #1;', 'Task #30.1;']
I am currently use this function below that I found online and modified based on an older post.
def natural_sort(listnum):
convert = lambda text: int(text) if text.isdigit() else text.lower()
alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)]
listnum.sort(key=alphanum_key)
return listnum
It works as intended, except it always sorts Task #40; behind Task #40.5; and Task #40.9;.
['Task #1;', 'Task #11;', 'Task #12;', 'Task #30.1;', 'Task #40.5; additional', 'Task #40.9; test', 'Task #40; Task Description Difference; test', 'Task #59;']
However, if I make it Task #40.0; it would sort correctly.
['Task #1;', 'Task #11;', 'Task #12;', 'Task #30.1;', 'Task #40.0; Task Description Difference; test', 'Task #40.5; additional', 'Task #40.9; test', 'Task #59;']
Is there any way to sort Task #40; in front of Task #40.5; and Task #40.9; without having to make it Task #40.0;?
Here is a link to the post that I got the code form:
Is there a built in function for string natural sort?
|
[
"Use a regex that extracts the number part and converts it to a float to be used as the key\nnumberList = ['Task #59;', 'Task #40.5; additional', 'Task #40.9; test',\n 'Task #40; Task Description Difference; test', 'Task #11;', 'Task #12;', 'Task #1;', 'Task #30.1;']\n\nnumberList = sorted(numberList, key=lambda v: float(re.search(r\"Task #([\\d.]+);\", v).group(1)))\nprint(numberList)\n\n\n['Task #1;', 'Task #11;', 'Task #12;', 'Task #30.1;', 'Task #40; Task Description Difference; test', 'Task #40.5; additional', 'Task #40.9; test', 'Task #59;']\n\n"
] |
[
2
] |
[] |
[] |
[
"python",
"sorting"
] |
stackoverflow_0074539276_python_sorting.txt
|
Q:
Converting from local time to UTC time in Python Pandas dataframe?
How would I efficiently convert local times in a dataframe to UTC times? There are 3 columns with information: the date (string), the timezone code (string), and the hour of the day (integer).
date
timezone
hour
7/31/2010 0:00:00
EST
1
6/14/2010 0:00:00
PST
3
6/14/2010 0:00:00
PST
4
5/30/2010 0:00:00
EDT
23
5/30/2010 0:00:00
EDT
24
After the data is converted I will be aggregating it to monthly data.
A:
Gday.
Working with dates is described reasonably well in this answer here: converting utc to est time in python
In that case they have the timezone offsets as numbers e.g +11:00. You have the US short code. So you could convert that column to the numerical equivalent first and then use that function.
Personally I find the notation "Australia/Melbourne" way easier to deal with - especially because it thinks about daylight savings etc for you. Timezones are a nightmare. That's described here: Python: datetime tzinfo time zone names documentation
In terms of the hour column, you can just use a string function to join those two values together to form a date and time string.
So I'd suggest you convert that timezone column to that format (i.e. EST as America/New_York), etc, then feed all three columns into a datetime conversion per the first answer.
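For concreteness, a rough pandas sketch of that suggestion. The TZ_MAP abbreviation-to-IANA mapping is my assumption, not part of the answer; DST-aware zones like America/New_York cover both EST and EDT automatically, and hour values of 24 simply roll into the next day with this approach:
import pandas as pd

# hypothetical mapping from the question's codes to IANA zone names
TZ_MAP = {'EST': 'America/New_York', 'EDT': 'America/New_York',
          'PST': 'America/Los_Angeles', 'PDT': 'America/Los_Angeles'}

df = pd.DataFrame({
    'date': ['7/31/2010 0:00:00', '6/14/2010 0:00:00', '5/30/2010 0:00:00'],
    'timezone': ['EST', 'PST', 'EDT'],
    'hour': [1, 3, 23],
})

# build a naive local timestamp from the date string plus the hour offset
local = pd.to_datetime(df['date']) + pd.to_timedelta(df['hour'], unit='h')

# localize each row in its own zone, then convert to UTC
df['utc'] = [ts.tz_localize(TZ_MAP[tz]).tz_convert('UTC')
             for ts, tz in zip(local, df['timezone'])]
print(df)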
|
Converting from local time to UTC time in Python Pandas dataframe?
|
How would I efficiently convert local times in a dataframe to UTC times? There are 3 columns with information: the date (string), the timezone code (string), and the hour of the day (integer).
date
timezone
hour
7/31/2010 0:00:00
EST
1
6/14/2010 0:00:00
PST
3
6/14/2010 0:00:00
PST
4
5/30/2010 0:00:00
EDT
23
5/30/2010 0:00:00
EDT
24
After the data is converted I will be aggregating it to monthly data.
|
[
"Gday.\nWorking with dates is described reasonably well in this answer here: converting utc to est time in python\nIn that case they have the timezone offsets as numbers e.g +11:00. You have the US short code. So you could convert that column to the numerical equivalent first and then use that function.\nPersonally I find the notation \"Australia/Melbourne\" way easier to deal with - especially because it thinks about daylight savings etc for you. Timezones are a nightmare. Thats described here: Python: datetime tzinfo time zone names documentation\nIn terms of the hour column, you can just use a string function to join those two values together to form a date and time string.\nSo I'd suggest you convert that timezone column to that format (I.e EST as America/New York), etc, then feed all three columns into a datetime convert line per the first answer\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"datetime",
"pandas",
"python"
] |
stackoverflow_0074538986_dataframe_datetime_pandas_python.txt
|
Q:
Matplotlib Inset Axes modify the rectange connectors
I would like to change the inset zoom rectangle to not be a rectangle but just two lines. IE I obtain the image on the left but want the one one the right.
I've tried a few things around modifying the rectangular draw. But I think I can't get it to just draw only a portion. I would really prefer not to manually just set the rectangle to 0 and then draw the lines, but I'm thinking that is most likely what I will need to do. Is there a better way?
A:
Axes.indicate_inset_zoom returns the Rectangle object:
import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(np.arange(0, 10, 0.1)**(1/2))
axins = ax.inset_axes([0.6, 0.1, 0.2, 0.2])
axins.plot(np.arange(0, 10, 0.1)**(1/2))
axins.set_xlim([20, 60])
axins.set_ylim([1, 2.5])
rect, lines = ax.indicate_inset_zoom(axins)
rect.set_edgecolor('none')
For what it is worth, I'm not clear why you want this, but...
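If the default pair of connectors isn't the pair you want, the returned connector lines can be toggled individually; a small continuation of the snippet above (the 4-tuple corresponds to the corners of the rectangle — lower-left, upper-left, lower-right, upper-right — check the docs for your matplotlib version):
# continuing from rect, lines = ax.indicate_inset_zoom(axins)
for line, visible in zip(lines, (True, False, False, True)):
    line.set_visible(visible)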
|
Matplotlib Inset Axes modify the rectange connectors
|
I would like to change the inset zoom rectangle to not be a rectangle but just two lines. IE I obtain the image on the left but want the one one the right.
I've tried a few things around modifying the rectangular draw. But I think I can't get it to just draw only a portion. I would really prefer not to manually just set the rectangle to 0 and then draw the lines, but I'm thinking that is most likely what I will need to do. Is there a better way?
|
[
"Axes.indicate_axes_zoom returns the Rectangle object:\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\n\nax.plot(np.arange(0, 10, 0.1)**(1/2))\n\naxins = ax.inset_axes([0.6, 0.1, 0.2, 0.2])\naxins.plot(np.arange(0, 10, 0.1)**(1/2))\naxins.set_xlim([20, 60])\naxins.set_ylim([1, 2.5])\nrect, lines = ax.indicate_inset_zoom(axins)\nrect.set_edgecolor('none')\n\nFor what it is worth, I'm not clear why you want this, but...\n\n"
] |
[
1
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0074539093_matplotlib_python.txt
|
Q:
How to get all tweets (more than 100) and associated user fields in python using twitter search API v2 and Tweepy?
I am trying to get all tweets and their associated user fields (username, name, etc.) that match a certain query using search_recent_tweets. I tried to use pagination and flattening, but it only flattens the tweets (not the user fields). So I am trying to implement something like next_token in get_user_tweets, but search_recent_tweets doesn't have pagination_next. How can I do this?
This is the code I am trying to use
import pandas as pd
import tweepy
BEARER_TOKEN = ''
api = tweepy.Client(BEARER_TOKEN)
response = api.search_recent_tweets(query = 'myquery',start_time = '2022-09-19T00:00:00Z', end_time = '2022-09-19T23:59:59Z',
expansions = ['author_id'],
tweet_fields = ['created_at'],
user_fields = ['username','name'],
max_results = 100)
tweet_df = pd.DataFrame(response.data)
metadata = response.meta
users = pd.concat({k: pd.DataFrame(v) for k, v in response.includes.items()}, axis=0)
users = users.reset_index(drop=True)
users.rename(columns={'id':'author_id'}, inplace=True)
all_tweets = tweet_df.merge(users)
next_token = metadata.get('next_token')
while next_token is not None:
response = api.search_recent_tweets(query = 'myquery',start_time = '2022-09-19T00:00:00Z', end_time = '2022-09-19T23:59:59Z',
expansions = ['author_id'],
tweet_fields = ['created_at'],
user_fields = ['username','name'],
pagination_token=next_token,
max_results = 100)
tweet_df = pd.DataFrame(response.data)
metadata = response.meta
users = pd.concat({k: pd.DataFrame(v) for k, v in response.includes.items()}, axis=0)
users = users.reset_index(drop=True)
users.rename(columns={'id':'author_id'}, inplace=True)
tweets = tweet_df.merge(users)
all_tweets.append(tweets)
next_token = metadata.get('next_token')
all_tweets
A:
You can use GTdownloader for that:
from gtdownloader import TweetDownloader
# create downloader using Twitter API credentials
gtd = TweetDownloader(credentials='twitter_keys.yaml')
gtd.get_tweets('myquery',
lang='en',
max_tweets=100,
start_time='09/19/2022',
end_time='09/20/2022'
)
# accessing tweets data frame
gtd.tweets_df.head()
See docs at https://gtdownloader.readthedocs.io/
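If you'd rather stay within Tweepy itself, tweepy.Paginator can drive the next_token loop for you, and each page's response.includes still carries the expanded user objects. A sketch reusing the question's DataFrame/merge approach (untested against the live API):
import pandas as pd
import tweepy

client = tweepy.Client(BEARER_TOKEN)  # BEARER_TOKEN as in the question

frames = []
for response in tweepy.Paginator(
        client.search_recent_tweets,
        query='myquery',
        start_time='2022-09-19T00:00:00Z',
        end_time='2022-09-19T23:59:59Z',
        expansions=['author_id'],
        tweet_fields=['created_at'],
        user_fields=['username', 'name'],
        max_results=100):
    if not response.data:
        continue
    tweet_df = pd.DataFrame(response.data)
    users = pd.DataFrame(response.includes['users']).rename(columns={'id': 'author_id'})
    frames.append(tweet_df.merge(users, on='author_id'))

all_tweets = pd.concat(frames, ignore_index=True)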
|
How to get all tweets (more than 100) and associated user fields in python using twitter search API v2 and Tweepy?
|
I am trying to get all tweets and their associated user fields (username, name, etc.) that match a certain query using search_recent_tweets. I tried to use pagination and flattening, but it only flattens the tweets (not the user fields). So I am trying to implement something like next_token in get_user_tweets, but search_recent_tweets doesn't have pagination_next. How can I do this?
This is the code I am trying to use
import pandas as pd
import tweepy
BEARER_TOKEN = ''
api = tweepy.Client(BEARER_TOKEN)
response = api.search_recent_tweets(query = 'myquery',start_time = '2022-09-19T00:00:00Z', end_time = '2022-09-19T23:59:59Z',
expansions = ['author_id'],
tweet_fields = ['created_at'],
user_fields = ['username','name'],
max_results = 100)
tweet_df = pd.DataFrame(response.data)
metadata = response.meta
users = pd.concat({k: pd.DataFrame(v) for k, v in response.includes.items()}, axis=0)
users = users.reset_index(drop=True)
users.rename(columns={'id':'author_id'}, inplace=True)
all_tweets = tweet_df.merge(users)
next_token = metadata.get('next_token')
while next_token is not None:
response = api.search_recent_tweets(query = 'myquery',start_time = '2022-09-19T00:00:00Z', end_time = '2022-09-19T23:59:59Z',
expansions = ['author_id'],
tweet_fields = ['created_at'],
user_fields = ['username','name'],
pagination_token=next_token,
max_results = 100)
tweet_df = pd.DataFrame(response.data)
metadata = response.meta
users = pd.concat({k: pd.DataFrame(v) for k, v in response.includes.items()}, axis=0)
users = users.reset_index(drop=True)
users.rename(columns={'id':'author_id'}, inplace=True)
tweets = tweet_df.merge(users)
all_tweets.append(tweets)
next_token = metadata.get('next_token')
all_tweets
|
[
"You can use GTdownloader for that:\nfrom gtdownloader import TweetDownloader\n\n# create downloader using Twitter API credentials\ngtd = TweetDownloader(credentials='twitter_keys.yaml')\n\ngtd.get_tweets('myquery', \n lang='en', \n max_tweets=100,\n start_time='09/19/2022', \n end_time='09/20/2022'\n )\n\n# accessing tweets data frame\ngtd.tweets_df.head()\n\nSee docs at https://gtdownloader.readthedocs.io/\n"
] |
[
0
] |
[] |
[] |
[
"python",
"tweepy",
"twitter"
] |
stackoverflow_0073810522_python_tweepy_twitter.txt
|
Q:
What is the best way to initiate parameters on a Python library?
I have a homegrown Python library. As it is a library, it should be initialized with parameters every time it is used, depending on the project using it. For example, here is some sample pseudo code:
import myownlibrary
myownlibrary.init('path_to_config_file_containing_details_to_process_data')
Any idea how this can be achieved?
A sample reference code will be very helpful.
Thanks
A:
I added it to the __init__.py as below
global configpath
def setpath(self, pathpassedin):
self.configpath = pathpassedin
print("Value passed in: ", pathpassedin)
I thought self was required, but having self also requires that the method be called as below:
import myownlibrary as mylib
myownlibrary.setpath(mylib, 'path_to_config_file')
Not sure if this is the way to set this up. Would appreciate feedback
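A more conventional pattern (a sketch of one common approach, not a verdict) is a plain module-level function that stores the path in a module-private global, so no self argument is needed:
# myownlibrary/__init__.py (hypothetical layout)
_config_path = None

def init(config_path):
    """Store the config path for later use by the library."""
    global _config_path
    _config_path = config_path

def get_config_path():
    if _config_path is None:
        raise RuntimeError("myownlibrary.init() must be called first")
    return _config_path

Callers then just do:
import myownlibrary
myownlibrary.init('path_to_config_file')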
|
What is the best way to initiate parameters on a Python library?
|
I have a homegrown Python library. As it is a library, it should be initialized with parameters every time it is used, depending on the project using it. For example, here is some sample pseudo code:
import myownlibrary
myownlibrary.init('path_to_config_file_containing_details_to_process_data')
Any idea how this can be achieved?
A sample reference code will be very helpful.
Thanks
|
[
"I added it to the __init__.py as below\nglobal configpath\n\ndef setpath(self, pathpassedin):\n self.configpath = pathpassedin\n print(\"Value passed in: \", pathpassedin)\n\nI thought self was required, but having self also requires that the method be called as below:\nimport myownlibrary as mylib\n\nmyownlibrary.setpath(mylib, 'path_to_config_file')\n\nNot sure if this is the way to set this up. Would appreciate feedback\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_3.x",
"shared_libraries"
] |
stackoverflow_0074496771_python_python_3.x_shared_libraries.txt
|
Q:
Exception occurred processing WSGI script Flask Apche2 EC2
my wsgi file
#disco.wsgi
import sys
import os
sys.path.insert(0, '/var/www/html/disco')
from disco import app as application
application.debug = True
000-default.conf
<VirtualHost *:80>
ServerName 10.402.120.106
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
WSGIDaemonProcess disco threads=5
WSGIScriptAlias / /var/www/html/disco/disco.wsgi
<Directory /var/www/html/disco>
WSGIProcessGroup disco
WSGIApplicationGroup %{GLOBAL}
Require all granted
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
error.logs
mod_wsgi (pid=505): Failed to exec Python script file '/var/www/html/disco/disco.wsgi'.
mod_wsgi (pid=505): Exception occurred processing WSGI script '/var/www/html/disco/disco.wsgi'.
Traceback (most recent call last):
File "/var/www/html/disco/disco.wsgi", line 7, in <module>
from disco import app as application
File "/var/www/html/disco/disco/__init__.py", line 5, in <module>
from flask import Flask, abort, request, jsonify, g, url_for, make_response, Response, redirect
ModuleNotFoundError: No module named 'flask'
My Python scripts execute fine locally, but when I put them on the server with the WSGI and Apache2 configs above, I get errors like these:
mod_wsgi (pid=505): Failed to exec Python script file '/var/www/html/disco/disco.wsgi'.
mod_wsgi (pid=505): Exception occurred processing WSGI script '/var/www/html/disco/disco.wsgi'
Please help me; I'm badly stuck with this, even though it runs fine on my local system.
A:
You should declare your virtual environment path and run its activate script from the WSGI file. And don't forget any environment variables your app needs.
python_home = '/usr/local/envs/myapp1'
activate_this = python_home + '/bin/activate_this.py'
# execfile() is Python 2 only; on Python 3, read and exec the file instead:
exec(open(activate_this).read(), dict(__file__=activate_this))
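A fuller sketch of the WSGI file itself, assuming the virtualenv lives at /var/www/html/disco/venv and was created with the virtualenv package (which ships activate_this.py; the stdlib venv module does not):
#disco.wsgi
import sys

# hypothetical virtualenv location; point it at the env that actually has Flask installed
python_home = '/var/www/html/disco/venv'
activate_this = python_home + '/bin/activate_this.py'
with open(activate_this) as f:
    exec(f.read(), {'__file__': activate_this})

sys.path.insert(0, '/var/www/html/disco')
from disco import app as application
application.debug = True

Alternatively, mod_wsgi can point the daemon process at the environment directly via the python-home option of WSGIDaemonProcess, which avoids touching the WSGI file at all.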
|
Exception occurred processing WSGI script Flask Apache2 EC2
|
my wsgi file
#disco.wsgi
import sys
import os
sys.path.insert(0, '/var/www/html/disco')
from disco import app as application
application.debug = True
000-default.conf
<VirtualHost *:80>
ServerName 10.402.120.106
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
WSGIDaemonProcess disco threads=5
WSGIScriptAlias / /var/www/html/disco/disco.wsgi
<Directory /var/www/html/disco>
WSGIProcessGroup disco
WSGIApplicationGroup %{GLOBAL}
Require all granted
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
error.logs
mod_wsgi (pid=505): Failed to exec Python script file '/var/www/html/disco/disco.wsgi'.
mod_wsgi (pid=505): Exception occurred processing WSGI script '/var/www/html/disco/disco.wsgi'.
Traceback (most recent call last):
File "/var/www/html/disco/disco.wsgi", line 7, in <module>
from disco import app as application
File "/var/www/html/disco/disco/__init__.py", line 5, in <module>
from flask import Flask, abort, request, jsonify, g, url_for, make_response, Response, redirect
ModuleNotFoundError: No module named 'flask'
My Python scripts execute fine locally, but when I put them on the server with the WSGI and Apache2 configs above, I get errors like these:
mod_wsgi (pid=505): Failed to exec Python script file '/var/www/html/disco/disco.wsgi'.
mod_wsgi (pid=505): Exception occurred processing WSGI script '/var/www/html/disco/disco.wsgi'
Please help me; I'm badly stuck with this, even though it runs fine on my local system.
|
[
"You should declare your virtual environment path and run activate file on wsgi file. And if there are, don't forget environmental variables.\npython_home = '/usr/local/envs/myapp1'\n\nactivate_this = python_home + '/bin/activate_this.py'\nexecfile(activate_this, dict(__file__=activate_this))\n\n"
] |
[
0
] |
[] |
[] |
[
"apache2.4",
"flask",
"python"
] |
stackoverflow_0069077466_apache2.4_flask_python.txt
|
Q:
function that checks if a number is a float
I currently have this
How can I get it to check for a float, using a while loop?
def get_float():
number = input('Input a decimal number ')
while number != float:
print('bad input ')
number = input('Input a decimal number ')
else:
return number
get_float()
Right now, even if I enter a decimal number, it says bad input and asks for another input.
A:
Sometimes it's better to ask for forgiveness than permission.
def get_float():
while True:
number = input('Input a number ')
try:
return float(number)
except ValueError:
print('\n bad input\n ')
A:
number = input('Input a decimal number ')
while number != float:
This is wrong for a few reasons.
First, input() returns a string, so number is a string. It is not a floating point value.
Second, even if number were a proper floating point value, it is not equal to float, because float is a type object.
It seems like you really meant if type(number) != float. (But again, that would always be false here, because number is a string.)
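A quick interpreter sketch makes both points concrete (type 3.14 at the prompt):
number = input('Input a decimal number ')  # returns the string '3.14'
print(type(number))            # <class 'str'>
print(number != float)         # True: a string never equals the type object float
print(type(number) != float)   # also True here, because number is a str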
A:
figured it out:
def get_float():
while True:
number = input('Input a decimal number ')
try:
if "." in number:
floatnumber = float(number)
print(floatnumber)
break
else:
print("invalid number")
except ValueError as err:
print(err)
get_float()
|
function that checks if a number is a float
|
I currently have this
How can I get it to check for a float, using a while loop?
def get_float():
number = input('Input a decimal number ')
while number != float:
print('bad input ')
number = input('Input a decimal number ')
else:
return number
get_float()
Right now, even if I enter a decimal number, it says bad input and asks for another input.
|
[
"Sometimes it's better to ask for forgiveness than permission.\ndef get_float():\n while True:\n number = input('Input a number ')\n try:\n return float(number)\n except ValueError:\n print('\\n bad input\\n ')\n\n",
"number = input('Input a decimal number ')\nwhile number != float:\n\nThis is wrong for a few reasons.\nFirst, input() returns a string, so number is a string. It is not a floating point value.\nSecond, even if number were a proper floating point value, it is not equal to float, because float is a type object.\nIt seems like you really meant if type(number) != float. (But again, that would always be false here, because number is a string.)\n",
"figured it out:\ndef get_float():\n while True:\n number = input('Input a decimal number ')\n try:\n if \".\" in number:\n floatnumber = float(number)\n print(floatnumber)\n break\n else:\n print(\"invalid number\")\n except ValueError as err:\n print(err)\n \n \nget_float() \n\n\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074524638_python.txt
|
Q:
Flattening multi nested json into a pandas dataframe
I'm trying to flatten this json response into a pandas dataframe to export to csv.
It looks like this:
j = [
{
"id": 401281949,
"teams": [
{
"school": "Louisiana Tech",
"conference": "Conference USA",
"homeAway": "away",
"points": 34,
"stats": [
{"category": "rushingTDs", "stat": "1"},
{"category": "puntReturnYards", "stat": "24"},
{"category": "puntReturnTDs", "stat": "0"},
{"category": "puntReturns", "stat": "3"},
],
}
],
}
]
...Many more items in the stats area.
If I run this and flatten to the teams level:
multiple_level_data = pd.json_normalize(j, record_path =['teams'])
I get:
school conference homeAway points stats
0 Louisiana Tech Conference USA away 34 [{'category': 'rushingTDs', 'stat': '1'}, {'ca...
How do I flatten it twice so that all of the stats are on their own column in each row?
If I do this:
multiple_level_data = pd.json_normalize(j, record_path =['teams'])
multiple_level_data = multiple_level_data.explode('stats').reset_index(drop=True)
multiple_level_data=multiple_level_data.join(pd.json_normalize(multiple_level_data.pop('stats')))
I end up with multiple rows instead of more columns:
A:
can you try this:
multiple_level_data = pd.json_normalize(j, record_path =['teams'])
multiple_level_data = multiple_level_data.explode('stats').reset_index(drop=True)
multiple_level_data=multiple_level_data.join(pd.json_normalize(multiple_level_data.pop('stats')))
#convert rows to columns.
multiple_level_data=multiple_level_data.set_index(multiple_level_data.columns[0:4].to_list())
dfx=multiple_level_data.pivot_table(values='stat',columns='category',aggfunc=list).apply(pd.Series.explode).reset_index(drop=True)
multiple_level_data=multiple_level_data.reset_index().drop(['stat','category'],axis=1).drop_duplicates().reset_index(drop=True)
multiple_level_data=multiple_level_data.join(dfx)
Output:
   school          conference      homeAway  points  puntReturnTDs  puntReturnYards  puntReturns  rushingTDs
0  Louisiana Tech  Conference USA  away      34      0              24               3            1
A:
You can try:
df = pd.DataFrame(j).explode("teams")
df = pd.concat([df, df.pop("teams").apply(pd.Series)], axis=1)
df["stats"] = df["stats"].apply(lambda x: {d["category"]: d["stat"] for d in x})
df = pd.concat(
[
df,
df.pop("stats").apply(pd.Series),
],
axis=1,
)
print(df)
Prints:
id school conference homeAway points rushingTDs puntReturnYards puntReturnTDs puntReturns
0 401281949 Louisiana Tech Conference USA away 34 1 24 0 3
A:
Instead of calling explode() on an output of a json_normalize(), you can explicitly pass the paths to the meta data for each column in a single json_normalize() call. For example, ['teams', 'school'] would be one path, ['teams', 'conference'] is another path, etc. This will create a long dataframe similar to what you already have.
Then you can call pivot() to reshape this output into the correct shape.
# normalize json
df = pd.json_normalize(
j, record_path=['teams', 'stats'],
meta=['id', *(['teams', c] for c in ('school', 'conference', 'homeAway', 'points'))]
)
# column name contains 'teams' prefix; remove it
df.columns = [c.split('.')[1] if '.' in c else c for c in df]
# pivot the intermediate result
df = (
df.astype({'points': int, 'id': int})
    .pivot(index=['id', 'school', 'conference', 'homeAway', 'points'], columns='category', values='stat')
.reset_index()
)
# remove index name
df.columns.name = None
df
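Since the stated end goal is a CSV export, whichever approach you pick, the final frame can be written out with pandas (the filename here is arbitrary):
df.to_csv('flattened.csv', index=False)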
|
Flattening multi nested json into a pandas dataframe
|
I'm trying to flatten this json response into a pandas dataframe to export to csv.
It looks like this:
j = [
{
"id": 401281949,
"teams": [
{
"school": "Louisiana Tech",
"conference": "Conference USA",
"homeAway": "away",
"points": 34,
"stats": [
{"category": "rushingTDs", "stat": "1"},
{"category": "puntReturnYards", "stat": "24"},
{"category": "puntReturnTDs", "stat": "0"},
{"category": "puntReturns", "stat": "3"},
],
}
],
}
]
...Many more items in the stats area.
If I run this and flatten to the teams level:
multiple_level_data = pd.json_normalize(j, record_path =['teams'])
I get:
school conference homeAway points stats
0 Louisiana Tech Conference USA away 34 [{'category': 'rushingTDs', 'stat': '1'}, {'ca...
How do I flatten it twice so that all of the stats are on their own column in each row?
If I do this:
multiple_level_data = pd.json_normalize(j, record_path =['teams'])
multiple_level_data = multiple_level_data.explode('stats').reset_index(drop=True)
multiple_level_data=multiple_level_data.join(pd.json_normalize(multiple_level_data.pop('stats')))
I end up with multiple rows instead of more columns:
|
[
"can you try this:\nmultiple_level_data = pd.json_normalize(j, record_path =['teams'])\nmultiple_level_data = multiple_level_data.explode('stats').reset_index(drop=True)\nmultiple_level_data=multiple_level_data.join(pd.json_normalize(multiple_level_data.pop('stats')))\n\n#convert rows to columns.\nmultiple_level_data=multiple_level_data.set_index(multiple_level_data.columns[0:4].to_list())\ndfx=multiple_level_data.pivot_table(values='stat',columns='category',aggfunc=list).apply(pd.Series.explode).reset_index(drop=True)\nmultiple_level_data=multiple_level_data.reset_index().drop(['stat','category'],axis=1).drop_duplicates().reset_index(drop=True)\nmultiple_level_data=multiple_level_data.join(dfx)\n\n\nOutput:\n\n\n\n\n\nschool\nconference\nhomeAway\npoints\npuntReturnTDs\npuntReturnYards\npuntReturns\nrushingTDs\n\n\n\n\n0\nLouisiana Tech\nConference USA\naway\n34\n0\n24\n3\n1\n\n\n\n",
"You can try:\ndf = pd.DataFrame(j).explode(\"teams\")\ndf = pd.concat([df, df.pop(\"teams\").apply(pd.Series)], axis=1)\n\ndf[\"stats\"] = df[\"stats\"].apply(lambda x: {d[\"category\"]: d[\"stat\"] for d in x})\n\ndf = pd.concat(\n [\n df,\n df.pop(\"stats\").apply(pd.Series),\n ],\n axis=1,\n)\n\nprint(df)\n\nPrints:\n id school conference homeAway points rushingTDs puntReturnYards puntReturnTDs puntReturns\n0 401281949 Louisiana Tech Conference USA away 34 1 24 0 3\n\n",
"Instead of calling explode() on an output of a json_normalize(), you can explicitly pass the paths to the meta data for each column in a single json_normalize() call. For example, ['teams', 'school'] would be one path, ['teams', 'conference'] is another path, etc. This will create a long dataframe similar to what you already have.\nThen you can call pivot() to reshape this output into the correct shape.\n# normalize json\ndf = pd.json_normalize(\n j, record_path=['teams', 'stats'], \n meta=['id', *(['teams', c] for c in ('school', 'conference', 'homeAway', 'points'))]\n)\n# column name contains 'teams' prefix; remove it\ndf.columns = [c.split('.')[1] if '.' in c else c for c in df]\n\n# pivot the intermediate result\ndf = (\n df.astype({'points': int, 'id': int})\n .pivot(['id', 'school', 'conference', 'homeAway', 'points'], 'category', 'stat')\n .reset_index()\n)\n# remove index name\ndf.columns.name = None\ndf\n\n\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"dataframe",
"json",
"pandas",
"python"
] |
stackoverflow_0074538822_dataframe_json_pandas_python.txt
|
Q:
How to overlay plots with different dates?
I would like to overlay two plots that cover different dates or times. To do so, I have implemented the following code.
import random
import pandas as pd
import numpy as np
import plotly.express as px
df = pd.DataFrame({'DATE_TIME':pd.date_range('2022-11-01', '2022-11-06 23:00:00',freq='20min'),
'ID':[random.randrange(1, 20) for n in range(430)]})
df['VALUE1'] = [random.uniform(110, 160) for n in range(430)]
df['VALUE2'] = [random.uniform(50, 80) for n in range(430)]
df['INSPECTION'] = df['DATE_TIME'].dt.day
df['MODE'] = np.select([df['INSPECTION']==1, df['INSPECTION'].isin([2,3])], ['A', 'B'], 'C')
#print(df)
machine15 = df[df.ID==15]
machine15_inspection_1 = machine15[machine15.INSPECTION==1]
machine15_inspection_2 = machine15[machine15.INSPECTION==2]
fig1 = px.line(machine15_inspection_1, x="DATE_TIME", y=["VALUE1","VALUE2"], facet_col='INSPECTION',
facet_row_spacing=0.1,
facet_col_spacing=0.09,
markers=True)
fig2 = px.line(machine15_inspection_2, x="DATE_TIME", y=["VALUE1","VALUE2"],facet_col='INSPECTION',
facet_row_spacing=0.1,
facet_col_spacing=0.09,
markers=True)
fig2.update_traces(opacity=0.6)
fig1.add_traces(
list(fig2.select_traces())
)
fig1.update_xaxes(matches=None, showticklabels=True)
fig1.show()
And then, obtained the plot below:
How should I tweak my code so that I can overlay VALUE1 and VALUE2?
Or, in other words, how can I overlay fig1 and fig2?
A:
Would a secondary x-axis work for you? Like this?
In that case you can set up a figure with multiple axes like this:
fig=make_subplots(
specs=[[{"secondary_y": True}]])
fig.update_layout(xaxis2= {'anchor': 'y', 'overlaying': 'x', 'side': 'top'})
And then make a few tweaks with:
fig1.for_each_trace(lambda t: fig.add_trace(t, secondary_y=False))
fig2.for_each_trace(lambda t: fig.add_trace(t, secondary_y=False))
fig.data[2].update(xaxis='x2')
fig.data[3].update(xaxis='x2')
There are numerous ways you can put together that last part. Let me know if this is something you can use, or if anything is unclear.
Complete code:
import random
import pandas as pd
import numpy as np
import plotly.express as px
from plotly.subplots import make_subplots
df = pd.DataFrame({'DATE_TIME':pd.date_range('2022-11-01', '2022-11-06 23:00:00',freq='20min'),
'ID':[random.randrange(1, 20) for n in range(430)]})
df['VALUE1'] = [random.uniform(110, 160) for n in range(430)]
df['VALUE2'] = [random.uniform(50, 80) for n in range(430)]
df['INSPECTION'] = df['DATE_TIME'].dt.day
df['MODE'] = np.select([df['INSPECTION']==1, df['INSPECTION'].isin([2,3])], ['A', 'B'], 'C')
#print(df)
machine15 = df[df.ID==15]
machine15_inspection_1 = machine15[machine15.INSPECTION==1]
machine15_inspection_2 = machine15[machine15.INSPECTION==2]
fig1 = px.line(machine15_inspection_1, x="DATE_TIME", y=["VALUE1","VALUE2"], facet_col='INSPECTION',
facet_row_spacing=0.1,
facet_col_spacing=0.09,
markers=True)
fig2 = px.line(machine15_inspection_2, x="DATE_TIME", y=["VALUE1","VALUE2"],facet_col='INSPECTION',
facet_row_spacing=0.1,
facet_col_spacing=0.09,
markers=True)
fig2.update_traces(opacity=0.6)
fig=make_subplots(
specs=[[{"secondary_y": True}]])
fig.update_layout(xaxis2= {'anchor': 'y', 'overlaying': 'x', 'side': 'top'})
# fig.datafig = fig1.data + fig2.data
# fig.update_traces(list(fig1.select_traces()))
fig1.for_each_trace(lambda t: fig.add_trace(t, secondary_y=False))
fig2.for_each_trace(lambda t: fig.add_trace(t, secondary_y=False))
fig.data[2].update(xaxis='x2')
fig.data[3].update(xaxis='x2')
fig.show()
|
How to overlay plots with different dates?
|
I would like to overlay two plots that cover different dates or times. To do so, I have implemented the following code.
import random
import pandas as pd
import numpy as np
import plotly.express as px
df = pd.DataFrame({'DATE_TIME':pd.date_range('2022-11-01', '2022-11-06 23:00:00',freq='20min'),
'ID':[random.randrange(1, 20) for n in range(430)]})
df['VALUE1'] = [random.uniform(110, 160) for n in range(430)]
df['VALUE2'] = [random.uniform(50, 80) for n in range(430)]
df['INSPECTION'] = df['DATE_TIME'].dt.day
df['MODE'] = np.select([df['INSPECTION']==1, df['INSPECTION'].isin([2,3])], ['A', 'B'], 'C')
#print(df)
machine15 = df[df.ID==15]
machine15_inspection_1 = machine15[machine15.INSPECTION==1]
machine15_inspection_2 = machine15[machine15.INSPECTION==2]
fig1 = px.line(machine15_inspection_1, x="DATE_TIME", y=["VALUE1","VALUE2"], facet_col='INSPECTION',
facet_row_spacing=0.1,
facet_col_spacing=0.09,
markers=True)
fig2 = px.line(machine15_inspection_2, x="DATE_TIME", y=["VALUE1","VALUE2"],facet_col='INSPECTION',
facet_row_spacing=0.1,
facet_col_spacing=0.09,
markers=True)
fig2.update_traces(opacity=0.6)
fig1.add_traces(
list(fig2.select_traces())
)
fig1.update_xaxes(matches=None, showticklabels=True)
fig1.show()
And then, obtained the plot below:
How should I tweak my code so that I can overlay VALUE1 and VALUE2?
Or, in other words, how can I overlay fig1 and fig2?
|
[
"Would a secondary x-axis work for you? Like this?\n\nIn that case you can set up a figure with multiple axes like this:\nfig=make_subplots(\n specs=[[{\"secondary_y\": True}]])\nfig.update_layout(xaxis2= {'anchor': 'y', 'overlaying': 'x', 'side': 'top'})\n\nAnd then make a few tweaks with:\nfig1.for_each_trace(lambda t: fig.add_trace(t, secondary_y=False))\nfig2.for_each_trace(lambda t: fig.add_trace(t, secondary_y=False))\nfig.data[2].update(xaxis='x2')\nfig.data[3].update(xaxis='x2')\n\nThere are numerous ways you can put together that last part. Let me know if this is something you can use, or if anything is unclear.\nComplete code:\nimport random\nimport pandas as pd\nimport numpy as np\nimport plotly.express as px\nfrom plotly.subplots import make_subplots\n\ndf = pd.DataFrame({'DATE_TIME':pd.date_range('2022-11-01', '2022-11-06 23:00:00',freq='20min'),\n 'ID':[random.randrange(1, 20) for n in range(430)]})\ndf['VALUE1'] = [random.uniform(110, 160) for n in range(430)]\ndf['VALUE2'] = [random.uniform(50, 80) for n in range(430)]\ndf['INSPECTION'] = df['DATE_TIME'].dt.day\n\ndf['MODE'] = np.select([df['INSPECTION']==1, df['INSPECTION'].isin([2,3])], ['A', 'B'], 'C')\n#print(df)\nmachine15 = df[df.ID==15]\nmachine15_inspection_1 = machine15[machine15.INSPECTION==1]\nmachine15_inspection_2 = machine15[machine15.INSPECTION==2]\n\n\nfig1 = px.line(machine15_inspection_1, x=\"DATE_TIME\", y=[\"VALUE1\",\"VALUE2\"], facet_col='INSPECTION',\n facet_row_spacing=0.1,\n facet_col_spacing=0.09,\n markers=True)\n\nfig2 = px.line(machine15_inspection_2, x=\"DATE_TIME\", y=[\"VALUE1\",\"VALUE2\"],facet_col='INSPECTION',\n facet_row_spacing=0.1,\n facet_col_spacing=0.09,\n markers=True)\n\nfig2.update_traces(opacity=0.6)\n\nfig=make_subplots(\n specs=[[{\"secondary_y\": True}]])\nfig.update_layout(xaxis2= {'anchor': 'y', 'overlaying': 'x', 'side': 'top'})\n# fig.datafig = fig1.data + fig2.data\n# fig.update_traces(list(fig1.select_traces()))\nfig1.for_each_trace(lambda t: fig.add_trace(t, secondary_y=False))\nfig2.for_each_trace(lambda t: fig.add_trace(t, secondary_y=False))\nfig.data[2].update(xaxis='x2')\nfig.data[3].update(xaxis='x2')\n\nfig.show()\n\n"
] |
[
1
] |
[] |
[] |
[
"plotly",
"plotly_dash",
"python"
] |
stackoverflow_0074531824_plotly_plotly_dash_python.txt
|
Q:
Reversing bits of Python integer
Given a decimal integer (eg. 65), how does one reverse the underlying bits in Python? i.e.. the following operation:
65 → 01000001 → 10000010 → 130
It seems that this task can be broken down into three steps:
Convert the decimal integer to binary representation
Reverse the bits
Convert back to decimal
Steps #2 and 3 seem pretty straightforward (see this and this SO question related to step #2), but I'm stuck on step #1. The issue with step #1 is retrieving the full binary representation with leading zeros (i.e. 65 = 01000001, not 1000001).
I've searched around, but I can't seem to find anything.
A:
int('{:08b}'.format(n)[::-1], 2)
You can specify any filling length in place of the 8. If you want to get really fancy,
b = '{:0{width}b}'.format(n, width=width)
int(b[::-1], 2)
lets you specify the width programmatically.
A:
If you are after more speed, you can use the technique described in
http://leetcode.com/2011/08/reverse-bits.html
def reverse_mask(x):
x = ((x & 0x55555555) << 1) | ((x & 0xAAAAAAAA) >> 1)
x = ((x & 0x33333333) << 2) | ((x & 0xCCCCCCCC) >> 2)
x = ((x & 0x0F0F0F0F) << 4) | ((x & 0xF0F0F0F0) >> 4)
x = ((x & 0x00FF00FF) << 8) | ((x & 0xFF00FF00) >> 8)
x = ((x & 0x0000FFFF) << 16) | ((x & 0xFFFF0000) >> 16)
return x
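A quick sanity check; note that this variant always reverses within a full 32-bit word, so small inputs come out large, and applying it twice round-trips:
print(reverse_mask(65))          # 2181038080 == 0x82000000
print(reverse_mask(2181038080))  # 65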
A:
The best way to do this is to perform bit-by-bit shifting:
def reverse_Bits(n, no_of_bits):
result = 0
for i in range(no_of_bits):
result <<= 1
result |= n & 1
n >>= 1
return result
# for example we reverse 12 i.e 1100 which is 4 bits long
print(reverse_Bits(12,4))
A:
def reverse_bit(num):
result = 0
while num:
result = (result << 1) + (num & 1)
num >>= 1
return result
We don't really need to convert the integer into binary, since integers are actually binary in Python.
The reversing idea is like doing the in-place reversing of integers.
def reverse_int(x):
    result = 0
    pos_x = abs(x)
    while pos_x:
        result = result * 10 + pos_x % 10
        pos_x //= 10
    return result if x >= 0 else (-1) * result
On each loop, the original number drops its right-most bit (in binary). We take that right-most bit, and the result is multiplied by 2 (<<1) on the next loop as the new bit is added.
A:
There's no need, and no way, to "convert a decimal integer to binary representation". All Python integers are represented as binary; they're just converted to decimal when you print them for convenience.
If you want to follow this solution to the reversal problem, you only need to find appropriate numbits. You can either specify this by hand, or compute the number of bits needed to represent an integer n with n.bit_length() (new in Python 2.7 and 3.1).
However, for 65, that would give you 7, as there's no reason why 65 should require any more bits. (You might want to round up to the nearest multiple of 8...)
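Putting that together, a minimal sketch that defaults the width to bit_length():
def reverse_bits(n, width=None):
    width = width or n.bit_length()  # 7 for 65; pass width=8 to match the example
    return int(bin(n)[2:].zfill(width)[::-1], 2)

print(reverse_bits(65, 8))  # 130
print(reverse_bits(65))     # 65, since 1000001 is a palindrome in 7 bits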
A:
You can test the i'th bit of a number by using a shift and mask. For example, bit 6 of 65 is (65 >> 6) & 1. You can set a bit in a similar way by shifting 1 left the right number of times. These insights gives you code like this (which reverses x in a field of 'n' bits).
def reverse(x, n):
result = 0
    for i in range(n):
if (x >> i) & 1: result |= 1 << (n - 1 - i)
return result
print(bin(reverse(65, 8)))
A:
An inefficient but concise method that works in both Python 2.7 and Python 3:
def bit_reverse(i, n):
return int(format(i, '0%db' % n)[::-1], 2)
For your example:
>>> bit_reverse(65, 8)
130
A:
Often there is a need to apply this operation to an array of numbers rather than a single number.
To increase speed, it's probably better to use a NumPy array.
There are two solutions; the first is about 1.34x faster than the second:
import numpy as np
def reverse_bits_faster(x):
x = np.array(x)
bits_num = x.dtype.itemsize * 8
# because bitwise operations may change number of bits in numbers
one_array = np.array([1], x.dtype)
# switch bits in-place
for i in range(int(bits_num / 2)):
right_bit_mask = (one_array << i)[0]
left_bit = (x & right_bit_mask) << (bits_num - 1 - i * 2)
left_bit_mask = (one_array << (bits_num - 1 - i))[0]
right_bit = (x & left_bit_mask) >> (bits_num - 1 - i * 2)
moved_bits_mask = left_bit_mask | right_bit_mask
x = x & (~moved_bits_mask) | left_bit | right_bit
return x
Slower, but more easy to understand (based on solution proposed by Sudip Ghimire):
import numpy as np
def reverse_bits(x):
x = np.array(x)
bits_num = x.dtype.itemsize * 8
x_reversed = np.zeros_like(x)
for i in range(bits_num):
x_reversed = (x_reversed << 1) | x & 1
x >>= 1
return x_reversed
A:
You could also use a look-up table (which can be generated once using the methods above):
LUT = [0, 128, 64, 192, 32, 160, 96, 224, 16, 144, 80, 208, 48, 176, 112, 240,
8, 136, 72, 200, 40, 168, 104, 232, 24, 152, 88, 216, 56, 184, 120,
248, 4, 132, 68, 196, 36, 164, 100, 228, 20, 148, 84, 212, 52, 180,
116, 244, 12, 140, 76, 204, 44, 172, 108, 236, 28, 156, 92, 220, 60,
188, 124, 252, 2, 130, 66, 194, 34, 162, 98, 226, 18, 146, 82, 210, 50,
178, 114, 242, 10, 138, 74, 202, 42, 170, 106, 234, 26, 154, 90, 218,
58, 186, 122, 250, 6, 134, 70, 198, 38, 166, 102, 230, 22, 150, 86, 214,
54, 182, 118, 246, 14, 142, 78, 206, 46, 174, 110, 238, 30, 158, 94,
222, 62, 190, 126, 254, 1, 129, 65, 193, 33, 161, 97, 225, 17, 145, 81,
209, 49, 177, 113, 241, 9, 137, 73, 201, 41, 169, 105, 233, 25, 153, 89,
217, 57, 185, 121, 249, 5, 133, 69, 197, 37, 165, 101, 229, 21, 149, 85,
213, 53, 181, 117, 245, 13, 141, 77, 205, 45, 173, 109, 237, 29, 157,
93, 221, 61, 189, 125, 253, 3, 131, 67, 195, 35, 163, 99, 227, 19, 147,
83, 211, 51, 179, 115, 243, 11, 139, 75, 203, 43, 171, 107, 235, 27,
155, 91, 219, 59, 187, 123, 251, 7, 135, 71, 199, 39, 167, 103, 231, 23,
151, 87, 215, 55, 183, 119, 247, 15, 143, 79, 207, 47, 175, 111, 239,
31, 159, 95, 223, 63, 191, 127, 255]
def reverseBitOrder(uint8):
return LUT[uint8]
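The table doesn't have to be typed out by hand; it can be generated once with the format/reverse trick shown in an earlier answer:
LUT = [int('{:08b}'.format(i)[::-1], 2) for i in range(256)]
print(LUT[65])  # 130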
A:
All you need is NumPy:
import numpy as np
x = np.uint8(65)
print( np.packbits(np.unpackbits(x, bitorder='little')) )
performance:
py -3 -m timeit "import numpy as np; import timeit; x=np.uint8(65); timeit.timeit(lambda: np.packbits(np.unpackbits(x, bitorder='little')), number=100000)"
1 loop, best of 5: 326 msec per loop
A:
One more way to do it is to loop through the bits from both ends and swap them. I learned this from the EPI Python book.
i = 0; j = 7
num = 230
print(bin(num))
while i<j:
# Get the bits from both end iteratively
    if (num>>i)&1 != (num>>j)&1:
# if the bits don't match swap them by creating a bit mask
# and XOR it with the number
mask = (1<<i) | (1<<j)
num ^= mask
i += 1; j -= 1
print(bin(num))
A:
bin(x)[:1:-1]
one line, and it automatically goes for the top bit. (edit: use zfill or rjust to get a fixed width - see below)
>>> x = 0b1011000
>>> bin(x)[:1:-1]
'0001101'
>>> x = 0b100
>>> bin(x)[:1:-1]
'001'
the "0b" on the front of the text-conversion is stripped by the "1" in [:1:-1] which, after the inversion (by -1) has 1 automatically added to it (sigh, range is really weird) before being used as the start point not the end.
you'll need zero-padding on the front to get a fixed-width reversal, but even there [:1:-1] still does the auto-length detection
zfill does the job but you need to split off the "0b" from bin()
first, then zfill, then invert (then convert to int)
length=10
bin(x)[2:].zfill(length)[::-1]
int(bin(x)[2:].zfill(length)[::-1],2)
using ljust:
bin(x)[:1:-1].ljust(length, '0')
strangely although longer i find ljust clearer.
A:
The first and second steps have a very neat algorithm:
num = int(input())
while num > 0:
reminder = num % 2
print(f'{str(reminder)}', end = '')
num = int(num / 2)
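The same division loop also covers step 3, because the remainders come out LSB-first, which is exactly the reversed bit order. A sketch:
num = 12   # 0b1100
rev = 0
while num > 0:
    rev = rev * 2 + num % 2
    num //= 2
print(rev)  # 3 == 0b0011, i.e. 12 reversed in 4 bits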
|
Reversing bits of Python integer
|
Given a decimal integer (eg. 65), how does one reverse the underlying bits in Python? i.e.. the following operation:
65 → 01000001 → 10000010 → 130
It seems that this task can be broken down into three steps:
Convert the decimal integer to binary representation
Reverse the bits
Convert back to decimal
Steps #2 and 3 seem pretty straightforward (see this and this SO question related to step #2), but I'm stuck on step #1. The issue with step #1 is retrieving the full binary representation with leading zeros (i.e. 65 = 01000001, not 1000001).
I've searched around, but I can't seem to find anything.
|
[
"int('{:08b}'.format(n)[::-1], 2)\n\nYou can specify any filling length in place of the 8. If you want to get really fancy,\nb = '{:0{width}b}'.format(n, width=width)\nint(b[::-1], 2)\n\nlets you specify the width programmatically.\n",
"If you are after more speed, you can use the technique described in\nhttp://leetcode.com/2011/08/reverse-bits.html\ndef reverse_mask(x):\n x = ((x & 0x55555555) << 1) | ((x & 0xAAAAAAAA) >> 1)\n x = ((x & 0x33333333) << 2) | ((x & 0xCCCCCCCC) >> 2)\n x = ((x & 0x0F0F0F0F) << 4) | ((x & 0xF0F0F0F0) >> 4)\n x = ((x & 0x00FF00FF) << 8) | ((x & 0xFF00FF00) >> 8)\n x = ((x & 0x0000FFFF) << 16) | ((x & 0xFFFF0000) >> 16)\n return x\n\n",
"best way to do is perform bit by bit shifting\ndef reverse_Bits(n, no_of_bits):\n result = 0\n for i in range(no_of_bits):\n result <<= 1\n result |= n & 1\n n >>= 1\n return result\n# for example we reverse 12 i.e 1100 which is 4 bits long\nprint(reverse_Bits(12,4))\n\n",
"def reverse_bit(num):\n result = 0\n while num:\n result = (result << 1) + (num & 1)\n num >>= 1\n return result\n\nWe don't really need to convert the integer into binary, since integers are actually binary in Python. \nThe reversing idea is like doing the in-space reversing of integers. \ndef reverse_int(x):\n result = 0\n pos_x = abs(x)\n while pos_x:\n result = result * 10 + pos_x % 10\n pos_x /= 10\n return result if x >= 0 else (-1) * result\n\nFor each loop, the original number is dropping the right-most bit(in binary). We get that right-most bit and multiply 2 (<<1) in the next loop when the new bit is added. \n",
"There's no need, and no way, to \"convert a decimal integer to binary representation\". All Python integers are represented as binary; they're just converted to decimal when you print them for convenience.\nIf you want to follow this solution to the reversal problem, you only need to find appropriate numbits. You can either specify this by hand, or compute the number of bits needed to represent an integer n with n.bit_length() (new in Python 2.7 and 3.1).\nHowever, for 65, that would give you 7, as there's no reason why 65 should require any more bits. (You might want to round up to the nearest multiple of 8...)\n",
"You can test the i'th bit of a number by using a shift and mask. For example, bit 6 of 65 is (65 >> 6) & 1. You can set a bit in a similar way by shifting 1 left the right number of times. These insights gives you code like this (which reverses x in a field of 'n' bits).\ndef reverse(x, n):\n result = 0\n for i in xrange(n):\n if (x >> i) & 1: result |= 1 << (n - 1 - i)\n return result\n\nprint bin(reverse(65, 8))\n\n",
"An inefficient but concise method that works in both Python 2.7 and Python 3:\ndef bit_reverse(i, n):\n return int(format(i, '0%db' % n)[::-1], 2)\n\nFor your example:\n>>> bit_reverse(65, 8)\n130\n\n",
"Regularly there is the need to apply this operation on array of numbers and not for single number.\nTo increase speed, it's probably better to use NumPy array.\nThere are two solutions.\nx1.34 faster than second solution:\nimport numpy as np\ndef reverse_bits_faster(x):\n x = np.array(x)\n bits_num = x.dtype.itemsize * 8\n # because bitwise operations may change number of bits in numbers\n one_array = np.array([1], x.dtype)\n # switch bits in-place\n for i in range(int(bits_num / 2)):\n right_bit_mask = (one_array << i)[0]\n left_bit = (x & right_bit_mask) << (bits_num - 1 - i * 2)\n left_bit_mask = (one_array << (bits_num - 1 - i))[0]\n right_bit = (x & left_bit_mask) >> (bits_num - 1 - i * 2)\n moved_bits_mask = left_bit_mask | right_bit_mask\n x = x & (~moved_bits_mask) | left_bit | right_bit\n return x\n\nSlower, but more easy to understand (based on solution proposed by Sudip Ghimire):\nimport numpy as np\ndef reverse_bits(x):\n x = np.array(x)\n bits_num = x.dtype.itemsize * 8\n x_reversed = np.zeros_like(x)\n for i in range(bits_num):\n x_reversed = (x_reversed << 1) | x & 1\n x >>= 1\n return x_reversed\n\n",
"You could also use a Look up table (that can be generated once using methods above):\nLUT = [0, 128, 64, 192, 32, 160, 96, 224, 16, 144, 80, 208, 48, 176, 112, 240,\n 8, 136, 72, 200, 40, 168, 104, 232, 24, 152, 88, 216, 56, 184, 120,\n 248, 4, 132, 68, 196, 36, 164, 100, 228, 20, 148, 84, 212, 52, 180,\n 116, 244, 12, 140, 76, 204, 44, 172, 108, 236, 28, 156, 92, 220, 60,\n 188, 124, 252, 2, 130, 66, 194, 34, 162, 98, 226, 18, 146, 82, 210, 50,\n 178, 114, 242, 10, 138, 74, 202, 42, 170, 106, 234, 26, 154, 90, 218,\n 58, 186, 122, 250, 6, 134, 70, 198, 38, 166, 102, 230, 22, 150, 86, 214,\n 54, 182, 118, 246, 14, 142, 78, 206, 46, 174, 110, 238, 30, 158, 94,\n 222, 62, 190, 126, 254, 1, 129, 65, 193, 33, 161, 97, 225, 17, 145, 81,\n 209, 49, 177, 113, 241, 9, 137, 73, 201, 41, 169, 105, 233, 25, 153, 89,\n 217, 57, 185, 121, 249, 5, 133, 69, 197, 37, 165, 101, 229, 21, 149, 85,\n 213, 53, 181, 117, 245, 13, 141, 77, 205, 45, 173, 109, 237, 29, 157,\n 93, 221, 61, 189, 125, 253, 3, 131, 67, 195, 35, 163, 99, 227, 19, 147,\n 83, 211, 51, 179, 115, 243, 11, 139, 75, 203, 43, 171, 107, 235, 27,\n 155, 91, 219, 59, 187, 123, 251, 7, 135, 71, 199, 39, 167, 103, 231, 23,\n 151, 87, 215, 55, 183, 119, 247, 15, 143, 79, 207, 47, 175, 111, 239,\n 31, 159, 95, 223, 63, 191, 127, 255]\n\ndef reverseBitOrder(uint8):\n return LUT[uint8]\n\n",
"All what you need is numpy\nimport numpy as np\nx = np.uint8(65)\nprint( np.packbits(np.unpackbits(x, bitorder='little')) )\n\nperformance:\npy -3 -m timeit \"import numpy as np; import timeit; x=np.uint8(65); timeit.timeit(lambda: np.packbits(np.unpackbits(x, bitorder='little')), number=100000)\"\n1 loop, best of 5: 326 msec per loop\n\n",
"One more way to do it is to loop through the bits from both end and swap each other. This i learned from EPI python book. \ni = 0; j = 7\nnum = 230\nprint(bin(num))\nwhile i<j:\n # Get the bits from both end iteratively\n if (x>>i)&1 != (x>>j)&1:\n # if the bits don't match swap them by creating a bit mask\n # and XOR it with the number \n mask = (1<<i) | (1<<j)\n num ^= mask\n i += 1; j -= 1\nprint(bin(num))\n\n",
"bin(x)[:1:-1]\n\none line, and it automatically goes for the top bit. (edit: use zfill or rjust to get a fixed width - see below)\n>>> x = 0b1011000\n>>> bin(x)[:1:-1]\n'0001101'\n>>> x = 0b100\n>>> bin(x)[:1:-1]\n'001'\n\nthe \"0b\" on the front of the text-conversion is stripped by the \"1\" in [:1:-1] which, after the inversion (by -1) has 1 automatically added to it (sigh, range is really weird) before being used as the start point not the end.\nyou'll need zero-padding on the front to get it a fixed-width reversing but even there [:1:-1] will still do the auto-length-detection\nzfill does the job but you need to split off the \"0b\" from bin()\nfirst, then zfill, then invert (then convert to int)\nlength=10\nbin(x)[2:].zfill(length)[::-1]\nint(bin(x)[2:].zfill(length)[::-1],2)\n\nusing ljust:\nbin(x)[:1:-1].ljust(length, '0')\n\nstrangely although longer i find ljust clearer.\n",
"The first and second steps have a very neat algorthom:\nnum = int(input())\n\nwhile num > 0:\n reminder = num % 2\n print(f'{str(reminder)}', end = '')\n num = int(num / 2)\n\n"
] |
[
60,
11,
10,
8,
3,
3,
3,
2,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"bit_manipulation",
"integer",
"python"
] |
stackoverflow_0012681945_bit_manipulation_integer_python.txt
|
Q:
Problem with web scraping
import requests
import pandas as pd
from urllib.request import urlopen
from bs4 import BeautifulSoup
df = []
for x in range(1,31):
url_allocine= 'https://www.allocine.fr/film/meilleurs/?page='
page = requests.get(url_allocine + str(x))
soup = BeautifulSoup(page.content, 'html.parser')
films_all = soup.findAll('div',{'class':'card entity-card entity-card-list cf'})
#print(len(films_all))
film = films_all[0]
#print(film)
titre = film.find("div",{'class':'meta'}).find('a').text
#print(titre)
note = film.findAll("div",{'class':'rating-item'})[0]
note_presse = note.find('span',{'class':'stareval-note'}).text
#print(note_presse)
note_1 = film.findAll("div",{'class':'rating-item'})[1]
note_spectateur = note_1.find('span',{'class':'stareval-note'}).text
#print(note_spectateur)
for film in films_all:
titre = film.find("div",{'class':'meta'}).find('a').text
note_presse= (note.find('span',{'class':'stareval-note'}).text)
note_spectateur = (note_1.find('span',{'class':'stareval-note'}).text)
property_info = {
'titre': titre,
'note_presse': note_presse,
'note_spectateur': note_spectateur,
}
df.append(property_info)
#print(len(df))
df_allocine = pd.DataFrame(df)
print(df_allocine[0:20])
In the above code, for the note selection, I could not find a way to extract note_presse and note_spectateur for each film, since they share the same tags. So I tried to use indexing, hoping to solve the problem. But after creating the DataFrame I found that the first 10 rows all have the same notes, which only change for the next 10 rows (due to pagination; within each page they stay the same, and so on).
I hope to find a solution using urllib or requests, not another method like Selenium. Thanks in advance for your efforts.
A:
To get "Note Presse" and "Note Spectateurs" you can use next example:
import requests
import pandas as pd
from bs4 import BeautifulSoup
data = []
for page in range(1, 3): # <-- increase number of pages here
url = f"https://www.allocine.fr/film/meilleurs/?page={page}"
soup = BeautifulSoup(requests.get(url).content, "html.parser")
for movie in soup.select("li.mdl"):
data.append(
{
"Title": movie.h2.text.strip(),
"Note Presse": movie.select_one(
".rating-item:-soup-contains(Presse) .stareval-note"
).text.strip(),
"Note Spectateurs": movie.select_one(
".rating-item:-soup-contains(Spectateurs) .stareval-note"
).text.strip(),
}
)
df = pd.DataFrame(data)
print(df)
Prints:
Title Note Presse Note Spectateurs
0 Forrest Gump 2,6 4,6
1 La Liste de Schindler 4,2 4,6
2 La Ligne verte 2,8 4,6
3 12 hommes en colère 5,0 4,6
4 Le Parrain 4,6 4,5
5 Les Evadés 3,2 4,5
6 Le Seigneur des anneaux : le retour du roi 3,8 4,5
7 Le Roi Lion 3,4 4,5
8 Vol au-dessus d'un nid de coucou 5,0 4,5
9 The Dark Knight, Le Chevalier Noir 4,0 4,5
10 Pulp Fiction 4,4 4,5
11 Il était une fois dans l'Ouest 4,0 4,5
12 Le Bon, la brute et le truand 4,1 4,5
13 Il était une fois en Amérique 4,9 4,5
14 Django Unchained 4,6 4,5
15 Le Seigneur des anneaux : la communauté de l'anneau 3,7 4,5
16 Gladiator 4,3 4,5
17 Gran Torino 4,7 4,5
18 Le Seigneur des anneaux : les deux tours 4,0 4,5
19 Interstellar 3,8 4,5
A:
Andrej Kesely, this is the code I did; I know it works, but it's heavy:
import requests
import pandas as pd
from bs4 import BeautifulSoup

def remove_word(string):
    return string.replace("Presse", "").replace("Spectateurs", "")

df = []
for x in range(1, 31):
    url_allocine = 'https://www.allocine.fr/film/meilleurs/?page='
    page = requests.get(url_allocine + str(x))
    soup = BeautifulSoup(page.content, 'html.parser')

    films_all = soup.find_all('div', {'class': 'card entity-card entity-card-list cf'})
    for film in films_all:
        title = film.find('h2').get_text(strip=True)
        rates = film.find_all('div', class_='rating-holder rating-holder-3')
        for rate in rates:
            note_presse = remove_word(rate.find_all("div", {'class': 'rating-item'})[0].get_text(strip=True))
            note_spectateur = remove_word(rate.find_all("div", {'class': 'rating-item'})[1].get_text(strip=True))

            property_info = {
                'title': title,
                'note_presse': note_presse,
                'note_spectateur': note_spectateur,
            }
            df.append(property_info)
# print(len(df))

df_allocine = pd.DataFrame(df)
print(df_allocine[0:10])
|
Problem with web scraping
|
import requests
import pandas as pd
from urllib.request import urlopen
from bs4 import BeautifulSoup
df = []
for x in range(1,31):
url_allocine= 'https://www.allocine.fr/film/meilleurs/?page='
page = requests.get(url_allocine + str(x))
soup = BeautifulSoup(page.content, 'html.parser')
films_all = soup.findAll('div',{'class':'card entity-card entity-card-list cf'})
#print(len(films_all))
film = films_all[0]
#print(film)
titre = film.find("div",{'class':'meta'}).find('a').text
#print(titre)
note = film.findAll("div",{'class':'rating-item'})[0]
note_presse = note.find('span',{'class':'stareval-note'}).text
#print(note_presse)
note_1 = film.findAll("div",{'class':'rating-item'})[1]
note_spectateur = note_1.find('span',{'class':'stareval-note'}).text
#print(note_spectateur)
for film in films_all:
titre = film.find("div",{'class':'meta'}).find('a').text
note_presse= (note.find('span',{'class':'stareval-note'}).text)
note_spectateur = (note_1.find('span',{'class':'stareval-note'}).text)
property_info = {
'titre': titre,
'note_presse': note_presse,
'note_spectateur': note_spectateur,
}
df.append(property_info)
#print(len(df))
df_allocine = pd.DataFrame(df)
print(df_allocine[0:20])
In the above code, for the note selection, I could not find a way to extract note_presse and note_spectateur for each film, since they share the same tags. So I tried to use indexing, hoping to solve the problem. But after creating the DataFrame I found that the first 10 rows all have the same notes, which only change for the next 10 rows (due to pagination; within each page they stay the same, and so on).
I hope to find a solution using urllib or requests, not another method like Selenium. Thanks in advance for your efforts.
|
[
"To get \"Note Presse\" and \"Note Spectateurs\" you can use next example:\nimport requests\nimport pandas as pd\nfrom bs4 import BeautifulSoup\n\ndata = []\nfor page in range(1, 3): # <-- increase number of pages here\n url = f\"https://www.allocine.fr/film/meilleurs/?page={page}\"\n soup = BeautifulSoup(requests.get(url).content, \"html.parser\")\n\n for movie in soup.select(\"li.mdl\"):\n data.append(\n {\n \"Title\": movie.h2.text.strip(),\n \"Note Presse\": movie.select_one(\n \".rating-item:-soup-contains(Presse) .stareval-note\"\n ).text.strip(),\n \"Note Spectateurs\": movie.select_one(\n \".rating-item:-soup-contains(Spectateurs) .stareval-note\"\n ).text.strip(),\n }\n )\n\ndf = pd.DataFrame(data)\nprint(df)\n\nPrints:\n Title Note Presse Note Spectateurs\n0 Forrest Gump 2,6 4,6\n1 La Liste de Schindler 4,2 4,6\n2 La Ligne verte 2,8 4,6\n3 12 hommes en colère 5,0 4,6\n4 Le Parrain 4,6 4,5\n5 Les Evadés 3,2 4,5\n6 Le Seigneur des anneaux : le retour du roi 3,8 4,5\n7 Le Roi Lion 3,4 4,5\n8 Vol au-dessus d'un nid de coucou 5,0 4,5\n9 The Dark Knight, Le Chevalier Noir 4,0 4,5\n10 Pulp Fiction 4,4 4,5\n11 Il était une fois dans l'Ouest 4,0 4,5\n12 Le Bon, la brute et le truand 4,1 4,5\n13 Il était une fois en Amérique 4,9 4,5\n14 Django Unchained 4,6 4,5\n15 Le Seigneur des anneaux : la communauté de l'anneau 3,7 4,5\n16 Gladiator 4,3 4,5\n17 Gran Torino 4,7 4,5\n18 Le Seigneur des anneaux : les deux tours 4,0 4,5\n19 Interstellar 3,8 4,5\n\n",
"Andrej Kesely, this is the code I did: I know it works but it's so heavy:\nimport requests\nimport pandas as pd\nfrom bs4 import BeautifulSoup\ndf = []\nfor x in range(1,31):\nurl_allocine= 'https://www.allocine.fr/film/meilleurs/?page='\npage = requests.get(url_allocine + str(x))\nsoup = BeautifulSoup(page.content, 'html.parser')\n\n\nfilms_all = soup.find_all('div',{'class':'card entity-card entity-card-list cf'})\ndef remove_word(string):\n return string.replace(\"Presse\",\"\").replace(\"Spectateurs\",\"\")\n\nfor film in films_all:\n title = film.find('h2').get_text(strip=True)\n rates = film.find_all('div', class_='rating-holder rating-holder-3')\n for rate in rates:\n note_presse = remove_word(rate.find_all(\"div\",{'class':'rating-item'})[0].get_text(strip=True))\n note_spectateur = remove_word(rate.find_all(\"div\",{'class':'rating-item'})[1].get_text(strip=True))\n\n property_info = {\n 'title': title,\n 'note_presse': note_presse,\n 'note_spectateur': note_spectateur,\n }\n df.append(property_info)\n# print(len(df))\n\ndf_allocine = pd.DataFrame(df)\nprint(df_allocine[0:10])\n"
] |
[
1,
0
] |
[] |
[] |
[
"pandas",
"python",
"urllib",
"web_scraping"
] |
stackoverflow_0074532249_pandas_python_urllib_web_scraping.txt
|
Q:
Changing NaN cells in pandas dataframe with different type of columns
How can I fill all the NaN values in a pandas DataFrame with the empty value of each column's type? For example, I have 2 columns: "Name" (str) and "Age" (int). I want to fill all the NaN cells in "Name" with "" and all the NaN in "Age" with 0. Does pandas have a method for this? I can do it separately for "Name" and "Age", but I want to let pandas determine the type of each column by itself and, depending on that type, change NaN to either "" or 0. Thank you in advance.
A:
The value parameter of pandas.DataFrame.fillna accepts dictionaries. So, assuming your dataframe is df, you can fill NaN values with different values in different columns by using:
df.fillna({"Name": "", "Age": 0}, inplace=True)
Furthermore, if you need to fill NaN values based on the type of the columns, use this :
import numpy as np  # np.number needs numpy in scope
df = pd.concat([df.select_dtypes(include=np.number).fillna(0),
               df.select_dtypes(include='object').fillna("")], axis=1)
NB: The code above will work properly only if your dataframe holds string and/or numeric columns.
|
Changing NaN cells in pandas dataframe with different type of columns
|
How can I fill all the NaN values in a pandas DataFrame with the empty value of each column's type? For example, I have 2 columns: "Name" (str) and "Age" (int). I want to fill all the NaN cells in "Name" with "" and all the NaN in "Age" with 0. Does pandas have a method for this? I can do it separately for "Name" and "Age", but I want to let pandas determine the type of each column by itself and, depending on that type, change NaN to either "" or 0. Thank you in advance.
|
[
"The parameter value of pandas.DataFrame.fillna accept dictionnaries. So, assuming your dataframe is df, you can fill NaN values with multiple values in multiple columns by using :\ndf.fillna({\"Name\": \"\", \"Age\": 0}, inplace=True)\n\nFurthermore, if you need to fill NaN values based on the type of the columns, use this :\ndf= pd.concat([df.select_dtypes(include=np.number).fillna(0),\n df.select_dtypes(include='object').fillna(\"\")], axis=1)\n\nNB: The code above will work properly only if your dataframe holds string and/or numeric columns.\n"
] |
[
2
] |
[] |
[] |
[
"dataframe",
"jupyter_notebook",
"pandas",
"python",
"python_3.x"
] |
stackoverflow_0074539559_dataframe_jupyter_notebook_pandas_python_python_3.x.txt
|
Q:
Platform does not define a GLUT font retrieval function
I am trying to install a cozmo SDK on my ubuntu (v 20.4). I followed the instructions from the http://cozmosdk.anki.com/docs/install-linux.html and at the end I always get the same error. "Platform does not define a GLUT font retrieval function".
I did try installing it on my Host PC however I ended up with the same error message.
A:
It turned out to be a problem with the python version. Easy solution: use ubuntu version 20.04.
|
Platform does not define a GLUT font retrieval function
|
I am trying to install a cozmo SDK on my ubuntu (v 20.4). I followed the instructions from the http://cozmosdk.anki.com/docs/install-linux.html and at the end I always get the same error. "Platform does not define a GLUT font retrieval function".
I did try installing it on my Host PC however I ended up with the same error message.
|
[
"It turned out to be a problem with the python version. Easy solution: use ubuntu version 20.04.\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074041603_python.txt
|
Q:
fetch value from two fields (Date and Monetary) and convert it in text in Odoo
I have two fields in the sale order; I need to get the value from each one, convert it to text, and display it in another field in a custom model.
Two fields in the sale.order model:
amount_total = fields.Monetary(string="Total", store=True, compute='_compute_amounts', tracking=4)
date_order = fields.Datetime()
and this is my code so far:
from odoo import fields, models, api
from odoo.exceptions import ValidationError
import random
readonly_fields_states = {
state: [('readonly', True)]
for state in {'sale', 'done', 'cancel'}
}
class SaleOrder(models.Model):
_inherit = "sale.order"
test = fields.Many2one(string="Test",
comodel_name='sale.order',
default=lambda x: random.randint(1, 10),
states=readonly_fields_states,
)
@api.constrains('test')
def check_test_length(self):
for rec in self:
if rec.test:
if len(rec.test) > 50:
raise ValidationError('Длина текста строки "test" должна быть меньше 50 символов!')
else:
pass
The goal is: get data from the fields, turn it into text, and display that data in the field test when the two fields from sale.order change. For now I get only names S00001, S00002, etc.
I have no working solution. I have tried various functions but none of them seems to work, or I am doing something wrong. I realise my case is a bit unclear; that's because I can't wrap my head around it. So ask me whatever is needed if you want to help.
A:
You can use Odoo's related field attribute.
In your custom model add a relational field with sale.order and then create related fields.
sale_order_id = fields.Many2one(comodel_name="sale.order")
sale_order_amount_total = fields.Monetary(related="sale_order_id.amount_total")
sale_order_date_order = fields.Datetime(related="sale_order_id.date_order")
Now in custom model views you'll be able to use created fields.
<field name="sale_order_amount_total" />
<field name="sale_order_date_order" />
Note that these fields will be directly related to your Many2one model instance (sale.order record)
Good practice is to set them to readonly=True
In case you are going to execute queries in your custom models using your related fields, you should consider using store=True as attribute.
For example
sale_order_amount_total = fields.Monetary(
related="sale_order_id.amount_total",
store=True,
readonly=True
)
|
fetch value from two fields (Date and Monetary) and convert it in text in Odoo
|
I have two fields in the sale order; I need to get the value from each one, convert it to text, and display it in another field in a custom model.
Two fields in the sale.order model:
amount_total = fields.Monetary(string="Total", store=True, compute='_compute_amounts', tracking=4)
date_order = fields.Datetime()
and this is my code so far:
from odoo import fields, models, api
from odoo.exceptions import ValidationError
import random
readonly_fields_states = {
state: [('readonly', True)]
for state in {'sale', 'done', 'cancel'}
}
class SaleOrder(models.Model):
_inherit = "sale.order"
test = fields.Many2one(string="Test",
comodel_name='sale.order',
default=lambda x: random.randint(1, 10),
states=readonly_fields_states,
)
@api.constrains('test')
def check_test_length(self):
for rec in self:
if rec.test:
if len(rec.test) > 50:
raise ValidationError('Длина текста строки "test" должна быть меньше 50 символов!')
else:
pass
The goal is: get data from the fields, turn it into text, and display that data in the field test when the two fields from sale.order change. For now I get only names S00001, S00002, etc.
I have no working solution. I have tried various functions but none of them seems to work, or I am doing something wrong. I realise my case is a bit unclear; that's because I can't wrap my head around it. So ask me whatever is needed if you want to help.
|
[
"You can use odoo related field attribute.\nIn your custom model add a relational field with sale.order and then create related fields.\nsale_order_id = fields.Many2one(comodel_name=\"sale.order\")\nsale_order_amount_total = fields.Monetary(related=\"sale_order_id.amount_total\")\nsale_order_date_order = fields.Datetime(related=\"sale_order_id.date_order\")\n\nNow in custom model views you'll be able to use created fields.\n<field name=\"sale_order_amount_total\" />\n<field name=\"sale_order_date_order\" />\n\nNote that these fields will be directly related to your Many2one model instance (sale.order record)\nGood practice is to set them to readonly=True\nIn case you are going to execute queries in your custom models using your related fields, you should consider using store=True as attribute.\nFor example\nsale_order_amount_total = fields.Monetary(\n related=\"sale_order_id.amount_total\",\n store=True,\n readonly=True\n)\n\n"
] |
[
0
] |
[] |
[] |
[
"field",
"odoo",
"odoo_15",
"python"
] |
stackoverflow_0074474619_field_odoo_odoo_15_python.txt
|
Q:
Fourier transform of 2D Gaussian kernel is not matching up with its counterpart in the spatial domain
We know the Fourier transform of a Gaussian filter is again a Gaussian in the frequency domain. I have written the following method to build a Gaussian kernel:
def get_gaussian(size, sigma):
g_kernel = np.zeros((size,size))
x_center = size // 2
y_center = size // 2
for i in range(size):
for j in range(size):
g_kernel[i,j] = np.exp(-((i - x_center)**2 + (j - y_center)**2) / (2*sigma**2))
return 1/(2*np.pi * sigma**2)* g_kernel
plt.figure(figsize=(8, 10))
sigma = 20
g_spatial = get_gaussian(img2.shape[0],sigma)
plt.imshow(g_spatial)
plt.figure(figsize=(8, 10))
g_frequnecy = fft.fftshift(fft.fft2(g_spatial))
plt.imshow(np.abs(g_frequnecy))
plt.figure(figsize=(8, 10))
g_spatial = get_gaussian(img2.shape[0],sigma / (np.sqrt(2) * np.pi))
plt.imshow(g_spatial)
The second and third Gaussians don't seem to match up at all. I will attach the pictures of all three Gaussian kernels below. I'm pretty sure I have calculated the variance of each Gaussian kernel such that they would match based on the 2D Fourier transform of a Gaussian kernel. Where have I made a mistake?
A:
In the frequency domain, the Gaussian has a sigma of size / (2 * pi * sigma), with size the size of the image, and sigma the spatial-domain sigma. Yes, for a non-square image, an isotropic Gaussian in the spatial domain is not isotropic in the frequency domain.
Your computation sigma / (np.sqrt(2) * np.pi) is wrong. One quick way to tell it's not correct is that a larger sigma in the spatial domain also produces a larger sigma in the frequency domain using your computation. But we know that scaling up (==making larger) in the spatial domain scales the frequency domain down, as larger wavelengths have smaller frequencies.
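A quick numerical check of that relation, assuming a square image of side size:
import numpy as np

size, sigma = 512, 20
sigma_freq = size / (2 * np.pi * sigma)
print(sigma_freq)  # ~4.07: a wide spatial Gaussian becomes a narrow frequency-domain one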
|
Fourier transform of 2D Gaussian kernel is not matching up with its counterpart in the spatial domain
|
We know the Fourier transform of a Gaussian filter is again a Gaussian in the frequency domain. I have written the following method to build a Gaussian kernel:
def get_gaussian(size, sigma):
g_kernel = np.zeros((size,size))
x_center = size // 2
y_center = size // 2
for i in range(size):
for j in range(size):
g_kernel[i,j] = np.exp(-((i - x_center)**2 + (j - y_center)**2) / (2*sigma**2))
return 1/(2*np.pi * sigma**2)* g_kernel
plt.figure(figsize=(8, 10))
sigma = 20
g_spatial = get_gaussian(img2.shape[0],sigma)
plt.imshow(g_spatial)
plt.figure(figsize=(8, 10))
g_frequnecy = fft.fftshift(fft.fft2(g_spatial))
plt.imshow(np.abs(g_frequnecy))
plt.figure(figsize=(8, 10))
g_spatial = get_gaussian(img2.shape[0],sigma / (np.sqrt(2) * np.pi))
plt.imshow(g_spatial)
The second and third Gaussians don't seem to match up at all. I will attach the pictures of all three Gaussian kernels below. I'm pretty sure I have calculated the variance of each Gaussian kernel such that they would match based on the 2D Fourier transform of a Gaussian kernel. Where have I made a mistake?
|
[
"In the frequency domain, the Gaussian has a sigma of size / (2 * pi * sigma), with size the size of the image, and sigma the spatial-domain sigma. Yes, for a non-square image, an isotropic Gaussian in the spatial domain is not isotropic in the frequency domain.\nYour computation sigma / (np.sqrt(2) * np.pi) is wrong. One quick way to tell it's not correct is that a larger sigma in the spatial domain also produces a larger sigma in the frequency domain using your computation. But we know that scaling up (==making larger) in the spatial domain scales the frequency domain down, as larger wavelengths have smaller frequencies.\n"
] |
[
1
] |
[] |
[] |
[
"convolution",
"fft",
"image_processing",
"python",
"signal_processing"
] |
stackoverflow_0074539485_convolution_fft_image_processing_python_signal_processing.txt
|
Q:
Openpyxl Python Copy From One Excel Sheet&Paste in Existing Individual Workbooks in Subfolders
I have code that copies data, based on the cell value in column A, from one .xlsx workbook (call it my source file) and pastes it into the most recently modified .xlsm file in a subfolder. The problem is that I have 50 subfolders but my code only works for one, so I am repeating the script 50 times, which is not productive. My source file has 2 sheets; my data is in "Sheet 1", which has 50k+ rows and columns A:Q. My destination files have multiple sheets, but I am pasting into a specific sheet; the naming convention for the sheet is the same across all 50 files, so say the sheet name is "Sheet 5". My destination files have a header row, so I paste starting from row 2 into columns A:Q; they have formula columns R:T. Column A of my source file has multiple cities [DENVER, COLUMBUS, PORTLAND etc]; the cities correspond to my destination files, and my destination folders look like this: 'c/windows/users/me/documents/mainfolder/DEN' for DENVER, 'c/windows/users/me/documents/mainfolder/COL' for COLUMBUS, 'c/windows/users/me/documents/mainfolder/PRT' for PORTLAND.
Naming convention for the files in the DEN subfolder is
DEN 2022 random string r1.xlsm,
DEN 2022 random string r2.xlsm,
Naming convention for the files in PRT folder is
PRT 2022 random string r2.xlsm,
PRT 2022 random string r3.xlsm, etc. My code copies rows from the source file where Column A is e.g. DENVER and pastes them starting from row 2 into DEN 2022 random string r2.xlsm (the most recently modified file). PS: I found the code here on Stack Overflow and repurposed it for my project, but I need help getting it to work for all 50 subfolders. See my code below:
Sourcefile (screenshot omitted)
Destinationfile (screenshot omitted)
from copy import copy
import os
import openpyxl
from openpyxl import Workbook, load_workbook
#my source file
wb_sf= load_workbook(r'C:\Users\me\Documents\Consol\Sourcefile.xlsx')
ws = wb_sf["Sheet 1"]
# my destination file
#find most recently modified file
dest_path = r'c\windows\users\me\documents\mainfolder\DEN' #DENVER subfolder
latest_file = max(glob.glob(f"{dest_path}/*.xlsm"), key=os.path.getmtime)
#load destination file
wb_df= load_workbook(latest_file, keep_vba = True)
ws2 = wb_df["Sheet 5"]
copyfrom_max_columns = ws.max_column
paste_start_min_row = 1
city_list = ['DENVER'] #search for DENVER
for city_number, city in enumerate(city_list, 1):
search_min_row = paste_start_min_row
for row in ws.iter_rows(max_col=1, min_row=search_min_row):
for cell in row:
if cell.value == city:
paste_start_min_row += 1
for i in range(copyfrom_max_columns):
# Set the copy and paste Cells
copy_cell = cell.offset(column=i)
paste_cell = ws2.cell(row=paste_start_min_row, column=i + 1)
paste_cell.value = copy_cell.value
paste_cell.number_format = copy_cell.number_format
wb_pst.save(latest_file)
A:
I figured it out. I am pasting the code below to help anybody else who may be doing a similar project. I listed my cities in one list and my abbreviations (which also double as my folder names) in another, paired the lists using zip, and used the pairs to build the subfolder paths.
from copy import copy
import glob
import os
from openpyxl import load_workbook

wb_sf = load_workbook(r'C:\Users\me\Documents\Consol\Sourcefile.xlsx')
ws = wb_sf["Sheet 1"]

copyfrom_max_columns = ws.max_column

city_list = ['DENVER', 'COLUMBUS', 'PORTLAND']
ABBR_list = ['DEN', 'COL', 'PRT']

for city, abbr in zip(city_list, ABBR_list):
    dst_dir_path = 'c/windows/users/me/documents/mainfolder/{f}'.format(f=abbr)
    latest_file = max(glob.glob(f"{dst_dir_path}/*.xlsm"), key=os.path.getmtime)
    wb_pst = load_workbook(latest_file, keep_vba=True)
    ws2 = wb_pst['Sheet 5']
    paste_start_min_row = 1  # reset per destination file; data is pasted from row 2, below the header
    for row in ws.iter_rows(max_col=1, min_row=1):  # always scan the full source sheet
        for cell in row:
            if cell.value == city:
                paste_start_min_row += 1
                for i in range(copyfrom_max_columns):
                    # Set the copy and paste Cells
                    copy_cell = cell.offset(column=i)
                    paste_cell = ws2.cell(row=paste_start_min_row, column=i + 1)
                    paste_cell.value = copy_cell.value
                    paste_cell.number_format = copy_cell.number_format
                    # Copy other Cell formatting
                    paste_cell.font = copy(copy_cell.font)
                    paste_cell.alignment = copy(copy_cell.alignment)
                    paste_cell.border = copy(copy_cell.border)
                    paste_cell.fill = copy(copy_cell.fill)
    wb_pst.save(latest_file)
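If the list of cities keeps growing, one option is to discover the subfolders instead of hardcoding the two lists. A small sketch of that idea, assuming each subfolder name is the abbreviation and you maintain an abbreviation-to-city mapping (the mapping below is illustrative):
import os

main_folder = 'c/windows/users/me/documents/mainfolder'
city_by_abbr = {'DEN': 'DENVER', 'COL': 'COLUMBUS', 'PRT': 'PORTLAND'}  # extend as needed

for entry in os.scandir(main_folder):
    if entry.is_dir() and entry.name in city_by_abbr:
        city = city_by_abbr[entry.name]
        # run the copy/paste logic above for this (city, entry.path) pair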
|
Openpyxl Python Copy From One Excel Sheet&Paste in Existing Individual Workbooks in Subfolders
|
I have code that copies data based on the Column 'A' cell value from one .xlsx workbook (call it my source file) and pastes it into the most recently modified .xlsm file in a subfolder. The problem is that I have 50 subfolders but my code only works for one, so I am repeating the script 50 times, which is not productive. My source file has 2 sheets; my data is in "Sheet 1", which has 50k+ rows and columns A:Q. My destination files have multiple sheets, but I am pasting into a specific sheet; the naming convention for that sheet is the same across all 50 files, so say the sheet name is "Sheet 5". My destination files have a header row, so I am pasting starting from row 2; I paste into columns A:Q, and they have formula columns R:T. Column A of my source file has multiple cities [DENVER, COLUMBUS, PORTLAND etc.]; the cities correspond to my destination files, and my destination folders look like 'c/windows/users/me/documents/mainfolder/DEN' for DENVER, 'c/windows/users/me/documents/mainfolder/COL' for COLUMBUS, 'c/windows/users/me/documents/mainfolder/PRT' for PORTLAND.
Naming convention for the files in the DEN subfolder is
DEN 2022 random string r1.xlsm,
DEN 2022 random string r2.xlsm,
Naming convention for the files in PRT folder is
PRT 2022 random string r2.xlsm,
PRT 2022 random string r3.xlsm, etc. My code copies rows from the source file where Column A is e.g. DENVER and pastes them starting from row 2 into DEN 2022 random string r2.xlsm (the most recently modified file). PS: I found the code here on Stack Overflow and repurposed it for my project, but I need help getting it to work for all 50 subfolders. See my code below:
Sourcefile (screenshot omitted)
Destinationfile (screenshot omitted)
from copy import copy
import os
import openpyxl
from openpyxl import Workbook, load_workbook
#my source file
wb_sf= load_workbook(r'C:\Users\me\Documents\Consol\Sourcefile.xlsx')
ws = wb_sf["Sheet 1"]
# my destination file
#find most recently modified file
dest_path = r'c\windows\users\me\documents\mainfolder\DEN' #DENVER subfolder
latest_file = max(glob.glob(f"{dest_path}/*.xlsm"), key=os.path.getmtime)
#load destination file
wb_df= load_workbook(latest_file, keep_vba = True)
ws2 = wb_df["Sheet 5"]
copyfrom_max_columns = ws.max_column
paste_start_min_row = 1
city_list = ['DENVER'] #search for DENVER
for city_number, city in enumerate(city_list, 1):
search_min_row = paste_start_min_row
for row in ws.iter_rows(max_col=1, min_row=search_min_row):
for cell in row:
if cell.value == city:
paste_start_min_row += 1
for i in range(copyfrom_max_columns):
# Set the copy and paste Cells
copy_cell = cell.offset(column=i)
paste_cell = ws2.cell(row=paste_start_min_row, column=i + 1)
paste_cell.value = copy_cell.value
paste_cell.number_format = copy_cell.number_format
wb_pst.save(latest_file)
|
[
"I figured it out, I am pasting the code below to help anybodyelse who may be doing similar project. I listed my cities in a list, listed my abbreviations(which also double as my folder name) in a list, I paired the list using zip and used the zip to enumerate and set my subfolder names.\nwb_sf= load_workbook(r'C:\\Users\\me\\Documents\\Consol\\Sourcefile.xlsx')\nws = wb_sf[\"Sheet 1\"]\n\n\ncopyfrom_max_columns = ws.max_column\n\n\npaste_start_min_row = 1\n\ncity_list = ['DENVER','COLUMBUS','PORTLAND' ]\nABBR_list =['DEN','COL','PRT']\nfor city, abbr in zip(city_list, ABBR_list):\n dst_dir_path = 'c/windows/users/me/documents/mainfolder'/{f}/\".format(f=(abbr))\nlatest_file= max(glob.glob(f\"{dst_dir_path}/*.xlsm\"), key=os.path.getmtime)\n wb_pst = load_workbook(latest_file, keep_vba = True)\n ws2 = wb_pst['Sheet 5']\nsearch_min_row = paste_start_min_row \nfor row in ws.iter_rows(max_col=1, min_row=search_min_row): \n for cell in row:\n if cell.value == city: \n paste_start_min_row += 1 \n for i in range(copyfrom_max_columns): \n # Set the copy and paste Cells\n copy_cell = cell.offset(column=i)\n paste_cell = ws2.cell(row=paste_start_min_row, column=i + 1)\n paste_cell.value = copy_cell.value\n paste_cell.number_format = copy_cell.number_format\n\n # Copy other Cell formatting\n paste_cell.font = copy(copy_cell.font)\n paste_cell.alignment = copy(copy_cell.alignment)\n paste_cell.border = copy(copy_cell.border)\n paste_cell.fill = copy(copy_cell.fill)\n\nwb_pst.save(latest_file)\n\n"
] |
[
0
] |
[] |
[] |
[
"openpyxl",
"python"
] |
stackoverflow_0074523924_openpyxl_python.txt
|
Q:
Formatting the scientific notation phone number in python
I have my phone_number column listed below.
phone_number
--------------
001 1234567890
380 1234567890
27 1234567890
001 +11234567890
2.56898E+11
1 1234567890
123-456-7890
+1 (123) 456-7890
(123) 456-7890
NaN
The following step worked fine
character = '[^0-9]+'
df.phone_number.str.replace(character, '')
The result I got is
phone_number
--------------
11234567890
3.80123E+12
2.71234E+11
11234567890
2.56898E+11
11234567890
1234567890
11234567890
1234567890
NaN
Is there any elegant way to deal with the scientific notation format? I want them to be 11234567890 or longer because of the country code. From there I think I can figure out how to get both international and the US phone number formats. Thanks in advance.
A:
You can use conversion to Int64/string dtypes:
s1 = (pd.to_numeric(df['phone_number'], errors='coerce')
.astype('Int64').astype('string')
)
s2 = df['phone_number'].str.replace(r'\D+', '', regex=True)
df['phone_number_clean'] = s1.fillna(s2)
print(df)
Output:
phone_number phone_number_clean
0 001 1234567890 0011234567890
1 380 1234567890 3801234567890
2 27 1234567890 271234567890
3 001 +11234567890 00111234567890
4 2.56898E+11 256898000000
5 1 1234567890 11234567890
6 123-456-7890 1234567890
7 +1 (123) 456-7890 11234567890
8 (123) 456-7890 1234567890
Note that depending on the float precision and the way the number was converted to scientific notation, you might lose important digits.
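Also note that the scientific notation usually appears because pandas inferred a float dtype when the data was loaded. If the column comes from a CSV, a sketch that sidesteps the problem entirely by forcing a string dtype at read time (the file name is hypothetical):
import pandas as pd

# Reading the column as str keeps 256898000000 from ever becoming 2.56898E+11
df = pd.read_csv('phones.csv', dtype={'phone_number': str})
df['phone_number_clean'] = df['phone_number'].str.replace(r'\D+', '', regex=True)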
|
Formatting the scientific notation phone number in python
|
I have my phone_number column listed below.
phone_number
--------------
001 1234567890
380 1234567890
27 1234567890
001 +11234567890
2.56898E+11
1 1234567890
123-456-7890
+1 (123) 456-7890
(123) 456-7890
NaN
The following step worked fine
character = '[^0-9]+'
df.phone_number.str.replace(character, '')
The result I got is
phone_number
--------------
11234567890
3.80123E+12
2.71234E+11
11234567890
2.56898E+11
11234567890
1234567890
11234567890
1234567890
NaN
Is there any elegant way to deal with the scientific notation format? I want them to be 11234567890 or longer because of the country code. From there I think I can figure out how to get both international and the US phone number formats. Thanks in advance.
|
[
"You can use conversion to Int64/string dtypes:\ns1 = (pd.to_numeric(df['phone_number'], errors='coerce')\n .astype('Int64').astype('string')\n )\n\ns2 = df['phone_number'].str.replace(r'\\D+', '', regex=True)\n\ndf['phone_number_clean'] = s1.fillna(s2)\n\nprint(df)\n\nOutput:\n phone_number phone_number_clean\n0 001 1234567890 0011234567890\n1 380 1234567890 3801234567890\n2 27 1234567890 271234567890\n3 001 +11234567890 00111234567890\n4 2.56898E+11 256898000000\n5 1 1234567890 11234567890\n6 123-456-7890 1234567890\n7 +1 (123) 456-7890 11234567890\n8 (123) 456-7890 1234567890\n\nNote that depending on the float precision and the way the number was converted to scientific notation, you might lose important digits.\n"
] |
[
2
] |
[] |
[] |
[
"data_cleaning",
"formatting",
"pandas",
"phone_number",
"python"
] |
stackoverflow_0074539685_data_cleaning_formatting_pandas_phone_number_python.txt
|
Q:
"database or disk is full" error when joining two tables
With California Traffic Collision Data from Kaggle I want to join two tables based on case id but selecting only rows that have a collision date of > 2020:
con = sqlite3.connect(".../switrs.sqlite")
df_sqllite = pd.read_sql_query('SELECT * FROM parties JOIN collisions USING (case_id) WHERE collision_date >= "2020-01-01"', con)
I get this error:
DatabaseError: Execution failed on sql 'SELECT * FROM parties JOIN collisions USING (case_id) WHERE collision_date >= "2020-01-01"': database or disk is full
How to solve this?
A:
SELECT * FROM parties AS p JOIN collisions AS c USING (case_id) WHERE c.collision_date >= '2020-01-01'
|
"database or disk is full" error when joining two tables
|
With California Traffic Collision Data from Kaggle I want to join two tables based on case id but selecting only rows that have a collision date of > 2020:
con = sqlite3.connect(".../switrs.sqlite")
df_sqllite = pd.read_sql_query('SELECT * FROM parties JOIN collisions USING (case_id) WHERE collision_date >= "2020-01-01"', con)
I get this error:
DatabaseError: Execution failed on sql 'SELECT * FROM parties JOIN collisions USING (case_id) WHERE collision_date >= "2020-01-01"': database or disk is full
How to solve this?
|
[
"SELECT * FROM parties as p JOIN collisions as c USING c.case_id WHERE c.collision_date >= \"2020-01-01\n"
] |
[
0
] |
[] |
[] |
[
"python",
"sqlite"
] |
stackoverflow_0074539729_python_sqlite.txt
|
Q:
How keep latest records in dataframe according to last version using pandas
I have a dataframe like this
Id b_num b_type b_ver price
100 55 A 0 100
101 55 A 0 50
102 55 A 1 100
103 55 A 1 60
104 30 C 2 100
105 30 C 2 50
106 30 C 2 100
107 30 C 2 60
108 30 C 4 200
109 30 C 4 55
110 30 C 4 80
111 30 C 4 120
112 30 C 4 20
I would like to keep only the rows with the latest version for each b_num and b_type combination; b_ver is the version number
The output expected:
Id b_num b_type b_ver price
102 55 A 1 100
103 55 A 1 60
108 30 C 4 200
109 30 C 4 55
110 30 C 4 80
111 30 C 4 120
112 30 C 4 20
thanks
A:
Consider trying with:
df.merge(df.groupby(['b_num','b_type'],as_index=False)['b_ver'].last(),
on=['b_num','b_type','b_ver'])
Outputting:
Id b_num b_type b_ver price
0 102 55 A 1 100
1 103 55 A 1 60
2 108 30 C 4 200
3 109 30 C 4 55
4 110 30 C 4 80
5 111 30 C 4 120
6 112 30 C 4 20
A:
Using groupby.rank()
df.loc[df.groupby(['b_num','b_type'])['b_ver'].rank(ascending=False,method = 'dense').eq(1)]
using groupby.nlargest()
df.loc[df.groupby(['b_num','b_type'])['b_ver'].nlargest(1,keep = 'all').droplevel([0,1]).index]
using groupby.transform()
df.loc[df.groupby(['b_num','b_type'])['b_ver'].transform('max').eq(df['b_ver'])]
Output:
Id b_num b_type b_ver price
2 102 55 A 1 100
3 103 55 A 1 60
8 108 30 C 4 200
9 109 30 C 4 55
10 110 30 C 4 80
11 111 30 C 4 120
12 112 30 C 4 20
|
How keep latest records in dataframe according to last version using pandas
|
I have a dataframe like this
Id b_num b_type b_ver price
100 55 A 0 100
101 55 A 0 50
102 55 A 1 100
103 55 A 1 60
104 30 C 2 100
105 30 C 2 50
106 30 C 2 100
107 30 C 2 60
108 30 C 4 200
109 30 C 4 55
110 30 C 4 80
111 30 C 4 120
112 30 C 4 20
I would like to keep only the rows with the latest version for each b_num and b_type combination; b_ver is the version number
The output expected:
Id b_num b_type b_ver price
102 55 A 1 100
103 55 A 1 60
108 30 C 4 200
109 30 C 4 55
110 30 C 4 80
111 30 C 4 120
112 30 C 4 20
thanks
|
[
"Consider trying with:\ndf.merge(df.groupby(['b_num','b_type'],as_index=False)['b_ver'].last(),\n on=['b_num','b_type','b_ver'])\n\nOutputting:\n Id b_num b_type b_ver price\n0 102 55 A 1 100\n1 103 55 A 1 60\n2 108 30 C 4 200\n3 109 30 C 4 55\n4 110 30 C 4 80\n5 111 30 C 4 120\n6 112 30 C 4 20\n\n",
"Using groupby.rank()\ndf.loc[df.groupby(['b_num','b_type'])['b_ver'].rank(ascending=False,method = 'dense').eq(1)]\n\nusing groupby.nlargest()\ndf.loc[df.groupby(['b_num','b_type'])['b_ver'].nlargest(1,keep = 'all').droplevel([0,1]).index]\n\nusing groupby.transform()\ndf.loc[df.groupby(['b_num','b_type'])['b_ver'].transform('max').eq(df['b_ver'])]\n\nOutput:\n Id b_num b_type b_ver price\n2 102 55 A 1 100\n3 103 55 A 1 60\n8 108 30 C 4 200\n9 109 30 C 4 55\n10 110 30 C 4 80\n11 111 30 C 4 120\n12 112 30 C 4 20\n\n"
] |
[
3,
1
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074539711_dataframe_pandas_python.txt
|
Q:
how to check if file exists outside of current working directory in python
I am trying to find if a file exists that is not in the current directory. The file is here:
~/Documents/project/data.csv
I am trying to locate it by absolute path like this:
os.path.isfile(f'~/Documents/project/data.csv')
I always get false because I am running this code from outside of ~/Documents/project/. I understand os.path.isfile only works from the current directory. How do I modify my code above to return a bool if the file exists?
A:
What version of Python are you using? As of 3.4 you can use the pathlib library functions. Note that ~ is not expanded automatically; you need expanduser() for that:
from pathlib import Path

p = Path("~/Documents/project/data.csv").expanduser()
p.exists()
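The same idea works with os.path if you prefer it; the key point either way is that ~ has to be expanded explicitly. A quick sketch:
import os

path = os.path.expanduser('~/Documents/project/data.csv')
print(os.path.isfile(path))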
|
how to check if file exists outside of current working directory in python
|
I am trying to find if a file exists that is not in the current directory. The file is here:
~/Documents/project/data.csv
I am trying to locate it by absolute path like this:
os.path.isfile(f'~/Documents/project/data.csv')
I always get false because I am running this code from outside of ~/Documents/project/. I understand os.path.isfile only works from the current directory. How do I modify my code above to return a bool if the file exists?
|
[
"What version of Python are you using? As of 3.4 you can use the pathlib library functions:\nfrom pathlib import Path\n\np = Path(\"~/Documents/project/data.csv\")\np.exists()\n\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074539755_python.txt
|
Q:
Unresponsive tkinter SimpleDialog box
Below is an outline of a tkinter GUI in which I want the same dialog box to be opened in various ways. The response selected by the user from choices in the dialog then needs to be returned to the mainloop.
The SimpleDialog class looks to be ideal for this and here I have just used the example provided in the dialog code. It is accessed by both the button and popup menu in the View class, along with their bindings in the Controller class.
It works just fine when called from the button, but when called from the popup menu (from a right click) the dialog appears and then freezes the entire app.
from tkinter import simpledialog as s
import tkinter as tk
class View(tk.Frame):
def __init__(self, root):
tk.Frame.__init__(self)
self.grid(row=0, column=0, sticky='nsew')
self.configure(bg = 'blue')
self.popup = tk.Menu(self, tearoff=0)
self.bind("<Button-2>", self.make_popup) #right-click to show popup
self.button = tk.Button(self, text='Test')
self.button.grid()
def make_popup(self, event):
try:
self.popup.tk_popup(event.x_root + 15, event.y_root, 0)
finally:
self.popup.grab_release()
class Controller():
def __init__(self, view):
view.popup.add_command(label ='do test', command = lambda : self.do_test(None, view))
view.popup.add_command(label ='dummy test', command = print('This one works OK'))
view.button.bind("<Button-1>", lambda e, : self.do_test(e, view))
def do_test(self, event, view):
d = s.SimpleDialog(view,
text="This is a test dialog. "
"Would this have been an actual dialog, "
"the buttons below would have been glowing "
"in soft pink light.\n"
"Do you believe this?",
buttons=["Yes", "No", "Cancel"],
default=0,
cancel=2,
title="Test Dialog")
print(d.go())
class App(tk.Tk):
def __init__(self):
super().__init__()
self.geometry('200x100')
self.columnconfigure(0, weight=1)
self.rowconfigure(0, weight=1)
view = View(self)
controller = Controller(view)
if __name__ == "__main__":
app = App()
app.mainloop()
It seems to me that the dialog should either work or not work, and not care how it is called. So I would be very grateful for an explanation as to why it responds in one case but not the other, and of course equally grateful for a fix.
A:
The problem appears to lie in the print statement in the do_test callback, as splitting this into two lines fixes it
#print(d.go())
answer = d.go()
print(answer)
As reported in a comment, this may be an issue only on macOS (I am using macOS 11.1 and Python 3.10.8).
|
Unresponsive tkinter SimpleDialog box
|
Below is an outline of a tkinter GUI in which I want the same dialog box to be opened in various ways. The response selected by the user from choices in the dialog then needs to be returned to the mainloop.
The SimpleDialog class looks to be ideal for this and here I have just used the example provided in the dialog code. It is accessed by both the button and popup menu in the View class, along with their bindings in the Controller class.
It works just fine when called from the button, but when called from the popup menu (from a right click) the dialog appears and then freezes the entire app.
from tkinter import simpledialog as s
import tkinter as tk
class View(tk.Frame):
def __init__(self, root):
tk.Frame.__init__(self)
self.grid(row=0, column=0, sticky='nsew')
self.configure(bg = 'blue')
self.popup = tk.Menu(self, tearoff=0)
self.bind("<Button-2>", self.make_popup) #right-click to show popup
self.button = tk.Button(self, text='Test')
self.button.grid()
def make_popup(self, event):
try:
self.popup.tk_popup(event.x_root + 15, event.y_root, 0)
finally:
self.popup.grab_release()
class Controller():
def __init__(self, view):
view.popup.add_command(label ='do test', command = lambda : self.do_test(None, view))
view.popup.add_command(label ='dummy test', command = print('This one works OK'))
view.button.bind("<Button-1>", lambda e, : self.do_test(e, view))
def do_test(self, event, view):
d = s.SimpleDialog(view,
text="This is a test dialog. "
"Would this have been an actual dialog, "
"the buttons below would have been glowing "
"in soft pink light.\n"
"Do you believe this?",
buttons=["Yes", "No", "Cancel"],
default=0,
cancel=2,
title="Test Dialog")
print(d.go())
class App(tk.Tk):
def __init__(self):
super().__init__()
self.geometry('200x100')
self.columnconfigure(0, weight=1)
self.rowconfigure(0, weight=1)
view = View(self)
controller = Controller(view)
if __name__ == "__main__":
app = App()
app.mainloop()
It seems to me that the dialog should either work or not work, and not care how it is called. So I would be very grateful for an explanation as to why it responds in one case but not the other, and of course equally grateful for a fix.
|
[
"The problem appears to lie in the print statement in the do_test callback, as splitting this into two lines fixes it\n #print(d.go()) \n answer = d.go()\n print(answer)\n\nAs reported in a comment this may be only an issue for MacOS (I am using MacOS 11.1 and Python 3.10.8 ).\n"
] |
[
0
] |
[] |
[] |
[
"macos",
"python",
"simpledialog",
"tkinter"
] |
stackoverflow_0074398183_macos_python_simpledialog_tkinter.txt
|
Q:
import jwt ImportError: No module named jwt
I have been trying to run this project
https://github.com/udacity/FSND-Deploy-Flask-App-to-Kubernetes-Using-EKS
I installed all the dependencies.
I still did not make any adjustments. I need to run it first
but I get this error when I type the command
python main.py
this is the error i get:
Traceback (most recent call last):
File "main.py", line 8, in <module>
import jwt
ImportError: No module named jwt
I worked with similar errors before and managed to solve them but not with this one I could not figure out the source of the problem
A:
Check if PyJWT is in the requirements file or if it is installed on your system, using: pip3 install PyJWT
You could also face this error if you have two versions of Python running on your machine. In that case the correct command would be python3 main.py
A:
I have hit the same issue with pyjwt 2.1.0 which was clearly installed in my venv as well as globally. What helped was to downgrade it to version 1.7.1
pip install "PyJWT==1.7.1"
run the app and then to reinstall newest version 2.1.0
pip install "PyJWT==2.1.0"
And the issue disappeared.
A:
This project has requirements that need to be installed for it to work. These can be installed via pip, pip install -r requirements.txt (I've linked to the requirements file in the project), which you can read more about here.
A:
What worked for me was using import jwt instead of import PyJWT. I am using version PyJWT-2.3.0.
jwt image on vscode
As you can see no errors in the above screenshot. The app runs without import errors.
Image of terminal
A:
I faced the same issue. I am using a guest VM running Ubuntu 16.04.
I have multiple versions of python installed - both 3.5 and 3.7.
After repeated tries with and without using virtualenv what worked finally is:
Create a fresh virtual environment using :
priya:~$ virtualenv -p /usr/bin/python3.7 fenv
Activate the virutal environment :
priya:~$ source ./fenv/bin/activate
Note : You can find the path for python3.7 by using whereis python:
priya:~$ whereis python
python: /usr/bin/python /usr/bin/python3.5m /usr/bin/python3.5 /usr/bin/python3.7 /usr/bin/python3.5m-config /usr/bin/python3.5-config /usr/bin/python2.7 /usr/bin/python3.7m /usr/bin/python2.7-config /usr/lib/python3.5 /usr/lib/python3.7 /usr/lib/python2.7 /etc/python /etc/python3.5 /etc/python3.7 /etc/python2.7 /usr/local/lib/python3.5 /usr/local/lib/python3.7 /usr/local/lib/python2.7 /usr/include/python3.5m /usr/include/python3.5 /usr/include/python2.7 /usr/share/python /usr/share/man/man1/python.1.gz
Referenced link is : https://stackoverflow.com/questions/1534210/use-different-python-version-with-virtualenv#:~:text=By%20default%2C%20that%20will%20be,%2Flocal%2Fbin%2Fpython3
For your project - FSWD Nanodegree -
After you have activated your virtualenv, run pip install -r requirements.txt
You can test by :
(fenv) priya:FSND-Deploy-Flask-App-to-Kubernetes-Using-EKS :~$ python
Python 3.7.9 (default, Aug 18 2020, 06:24:24)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import jwt
>>> exit()
A:
pip3 install flask_jwt_ex.. I was doing this without sudo. And then I was working on the program as sudo.
A:
You have to have only PyJWT installed and not JWT. Make sure you uninstall JWT (pip uninstall JWT) and install PyJWT (pip install PyJWT)
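Whichever of the above you try, a quick sanity check that the right package is the one actually being imported (run it in the same interpreter or virtualenv your app uses); a sketch:
import jwt

# PyJWT exposes a version string and an encode function;
# the conflicting 'jwt' package does not behave the same way.
print(getattr(jwt, '__version__', 'no __version__ attribute'))
print(hasattr(jwt, 'encode'))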
|
import jwt ImportError: No module named jwt
|
I have been trying to run this project
https://github.com/udacity/FSND-Deploy-Flask-App-to-Kubernetes-Using-EKS
I installed all the dependencies.
I still did not make any adjustments. I need to run it first
but I get this error when I type the command
python main.py
this is the error i get:
Traceback (most recent call last):
File "main.py", line 8, in <module>
import jwt
ImportError: No module named jwt
I worked with similar errors before and managed to solve them but not with this one I could not figure out the source of the problem
|
[
"\nCheck if PyJWTY is in the requirements file or if is installed in you system, using: pip3 install PyJWT\nYou could also face this error if you have running on your machine two versions of python. So the correct command will be python3 main.py\n\n",
"I have hit the same issue with pyjwt 2.1.0 which was clearly installed in my venv as well as globally. What helped was to downgrade it to version 1.7.1\npip install \"PyJWT==1.7.1\"\nrun the app and then to reinstall newest version 2.1.0\npip install \"PyJWT==2.1.0\"\nAnd the issue disappeared.\n",
"This project has requirements that need to be installed for it to work. These can be installed via pip, pip install -r requirements.txt (I've linked to the requirements file in the project), which you can read more about here.\n",
"What worked for me was using import jwt instead of import PyJWT. I am using version PyJWT-2.3.0.\njwt image on vscode\nAs you can see no errors in the above screenshot. The app runs without import errors.\nImage of terminal\n",
"Faced the same issue. Am using a guest VM running ubuntu 16.04.\nI have multiple versions of python installed - both 3.5 and 3.7.\nAfter repeated tries with and without using virtualenv what worked finally is:\n\nCreate a fresh virtual environment using :\npriya:~$ virtualenv -p /usr/bin/python3.7 fenv\n\nActivate the virutal environment :\npriya:~$ source ./fenv/bin/activate\n\n\nNote : You can find the path for python3.7 by using whereis python:\npriya:~$ whereis python\npython: /usr/bin/python /usr/bin/python3.5m /usr/bin/python3.5 /usr/bin/python3.7 /usr/bin/python3.5m-config /usr/bin/python3.5-config /usr/bin/python2.7 /usr/bin/python3.7m /usr/bin/python2.7-config /usr/lib/python3.5 /usr/lib/python3.7 /usr/lib/python2.7 /etc/python /etc/python3.5 /etc/python3.7 /etc/python2.7 /usr/local/lib/python3.5 /usr/local/lib/python3.7 /usr/local/lib/python2.7 /usr/include/python3.5m /usr/include/python3.5 /usr/include/python2.7 /usr/share/python /usr/share/man/man1/python.1.gz\nReferenced link is : https://stackoverflow.com/questions/1534210/use-different-python-version-with-virtualenv#:~:text=By%20default%2C%20that%20will%20be,%2Flocal%2Fbin%2Fpython3\nFor your project - FSWD Nanodegree -\nAfter you have activated your virtualenv, run pip install -r requirements.txt\nYou can test by :\n(fenv) priya:FSND-Deploy-Flask-App-to-Kubernetes-Using-EKS :~$ python\nPython 3.7.9 (default, Aug 18 2020, 06:24:24)\n[GCC 5.4.0 20160609] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n\n\n\nimport jwt\nexit()\n\n\n\n",
"pip3 install flask_jwt_ex.. I was doing this without sudo. And then I was working on the program as sudo.\n",
"You have to have only PyJWT installed and not JWT. Make sure you uninstall JWT (pip uninstall JWT) and install PyJWT (pip install PyJWT)\n"
] |
[
15,
3,
2,
2,
0,
0,
0
] |
[] |
[] |
[
"jwt",
"python"
] |
stackoverflow_0063309591_jwt_python.txt
|
Q:
"key error" when using an enum as a dictionary key in Python3
I want to use an enum as the key for a dictionary, but get a KeyError.
#!/usr/bin/python3
from enum import Enum, unique
from typing import List
@unique
class Color(Enum):
RED = "cherry"
GREEN = "cucumber"
BLUE = "blueberry"
allColors = {}
def countColors(colors: List[Color]):
for c in colors:
allColors[c] += 1
countColors([Color.RED, Color.RED, Color.BLUE, Color.GREEN])
for c in allColors:
print(f"""{allColors[c]} {c.value} {c.name} pipes""")
When I run this, I get
Traceback (most recent call last):
File "mvce.py", line 18, in <module>
countColors([Color.RED, Color.RED, Color.BLUE, Color.GREEN])
File "mvce.py", line 16, in countColors
allColors[c] += 1
KeyError: <Color.RED: 'cherry'>
The documentation on dictionaries says that I can use any immutable value as a key, and I'd assume a enum value is immutable.
How can I use an enum as key in a dictionary?
A:
This is failing because allColors[c] += 1 has to read allColors[c] before it can add 1, and on the first occurrence of a color that key does not exist in the dictionary yet, so the read raises KeyError.
Enum members themselves are perfectly fine dictionary keys: they are hashable and immutable, so nothing about the enum needs to change.
The fix is therefore just to check whether the dictionary already has the key, and insert it with an initial count if it doesn't.
This is what I have at the end:
from enum import Enum, unique
from typing import List
@unique
class Color(Enum):
RED = "cherry"
GREEN = "cucumber"
BLUE = "blueberry"
allColors = dict()
def countColors(colors: List[Color]):
for c in colors:
if c in allColors.keys():
allColors[c] += 1
else:
allColors.update({c:1})
countColors([Color.RED, Color.RED, Color.BLUE, Color.GREEN])
for c in allColors:
print(f"""{allColors[c]} {c.value} {c.name} pipes""")
|
"key error" when using an enum as a dictionary key in Python3
|
I want to use an enum as the key for a dictionary, but get a KeyError.
#!/usr/bin/python3
from enum import Enum, unique
from typing import List
@unique
class Color(Enum):
RED = "cherry"
GREEN = "cucumber"
BLUE = "blueberry"
allColors = {}
def countColors(colors: List[Color]):
for c in colors:
allColors[c] += 1
countColors([Color.RED, Color.RED, Color.BLUE, Color.GREEN])
for c in allColors:
print(f"""{allColors[c]} {c.value} {c.name} pipes""")
When I run this, I get
Traceback (most recent call last):
File "mvce.py", line 18, in <module>
countColors([Color.RED, Color.RED, Color.BLUE, Color.GREEN])
File "mvce.py", line 16, in countColors
allColors[c] += 1
KeyError: <Color.RED: 'cherry'>
The documentation on dictionaries says that I can use any immutable value as a key, and I'd assume a enum value is immutable.
How can I use an enum as key in a dictionary?
|
[
"I think this is failing because of two things:\n\n+= update expects to have the key already in the dictionary.\n\nFor this you will have to check if the dictionary has the key and if it doesn't then update the item\n\nthe access to the enum item like allColors[c] uses the __get_item__ method for enums.\n\nhttps://docs.python.org/3/library/enum.html#enum.EnumType.__getitem__\nWhen you are trying to access, the variable c is an instance of Color but the method expects to match the name of the enum's member.\nChanging this line\n print(f\"\"\"{allColors[c]} {c.value} {c.name} pipes\"\"\")\n\nto\n print(f\"\"\"{allColors[c.name]} {c.value} {c.name} pipes\"\"\")\n\nThis is what I have at the end:\nfrom enum import Enum, unique\nfrom typing import List\n\n@unique\nclass Color(Enum):\n RED = \"cherry\"\n GREEN = \"cucumber\"\n BLUE = \"blueberry\"\n\nallColors = dict()\n\ndef countColors(colors: List[Color]):\n for c in colors:\n if c in allColors.keys():\n allColors[c] += 1\n else:\n allColors.update({c:1})\n\n\ncountColors([Color.RED, Color.RED, Color.BLUE, Color.GREEN])\nfor c in allColors:\n print(f\"\"\"{allColors[c]} {c.value} {c.name} pipes\"\"\")\n\n"
] |
[
0
] |
[
"Could it be that you have set 3 colors but in line 18 you have 4\ncountColors([Color.RED, Color.RED, Color.BLUE, Color.GREEN])\n\ntry\ncountColors ([Color.RED, Color.Blue, Color.Green])\n\n"
] |
[
-1
] |
[
"dictionary",
"enums",
"python",
"python_3.x"
] |
stackoverflow_0058054345_dictionary_enums_python_python_3.x.txt
|
Q:
Replace Value on dict comprehension based on condition
a = ('A','B','C')
b = (45.43453453, 'Bad Val', 76.45645657 )
I want to create a dict, very simple:
{ k:v for k,v in zip(a,b) }
My problem is, now I want to apply float (if possible) or replace it with None
so, I want to apply a round of 2 and therefore my output should be:
{'A': 45.43, 'B': None, 'C': 76.46}
A:
Since round raises a TypeError whenever something doesn't implement __round__, this isn't possible directly with dictionary comprehensions, but you can write your own function to use inside of it.
def safe_round(val, decimals):
try:
return round(val, decimals)
except TypeError:
return None
a = ('A','B','C')
b = (45.43453453, 'Bad Val', 76.45645657 )
d = { k:safe_round(v, 2) for k,v in zip(a,b) }
{'A': 45.43, 'B': None, 'C': 76.46}
A:
define a function like
def try_round(n, d):
try:
return round(n, d)
except TypeError:
return None
and then
result = {k: try_round(v, 2) for k, v in zip(a, b)}
|
Replace Value on dict comprehension based on condition
|
a = ('A','B','C')
b = (45.43453453, 'Bad Val', 76.45645657 )
I want to create a dict, very simple:
{ k:v for k,v in zip(a,b) }
My problem is: now I want to round each value (where it is a number) or replace it with None otherwise.
So, applying a round to 2 decimals, my output should be:
{'A': 45.43, 'B': None, 'C': 76.46}
|
[
"Since round raises a TypeError whenever something doesn't implement __round__, this isn't possible directly with dictionary comprehensions, but you can write your own function to use inside of it.\ndef safe_round(val, decimals):\n try:\n return round(val, decimals)\n except TypeError:\n return None\n\na = ('A','B','C')\nb = (45.43453453, 'Bad Val', 76.45645657 )\n\nd = { k:safe_round(v, 2) for k,v in zip(a,b) }\n\n{'A': 45.43, 'B': None, 'C': 76.46}\n\n",
"define a function like\ndef try_round(n, d):\n try:\n return round(n, d)\n except TypeError:\n return None\n\nand then\nresult = {k: try_round(v, 2) for k, v in zip(a, b)}\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"dictionary",
"python",
"python_zip",
"rounding"
] |
stackoverflow_0074539840_dictionary_python_python_zip_rounding.txt
|
Q:
IndexError is being generated when deleting first and last terms in a Numpy array
I have a simple program that I'm writing for a math class that generates an array of numbers from 1 to 10. After removing the 10s I would like to count and remove pairs of numbers that add up to at least 10, therefore I order the array in ascending order and check if the sum of the first and last numbers is greater than or equal to 10. If the sum >= 10, I want to delete those numbers using numpy.delete.
What I don't understand is that sometimes the code works and other times I get an error: IndexError: index 0 is out of bounds for axis 0 with size 0. The error usually refers to the lines that have to do with sum or the numpy.delete operations.
Here is the code:
import numpy as np
no_of_dice = 10
dice = []
dice = np.random.randint(1, high=11, size=no_of_dice)
print(dice)
no_of_tens = np.count_nonzero(dice == 10)
print("Number of 10s:", no_of_tens)
dice = np.delete(dice, np.where(dice == 10))
dice = np.sort(dice)
print(dice.shape, dice)
no_of_pairs = 0
sum = dice[0] + dice[-1]
while sum >= 10:
dice = np.delete(dice, -1)
dice = np.delete(dice, 0)
sum = dice[0] + dice[-1]
no_of_pairs += 1
print(dice)
print("Number of pairs:", no_of_pairs)
A:
I checked and tested your code and found two problems.
First, let's look at both errors you mentioned.
a) index 0 is out of bounds for axis 0 with size 0 on sum = dice[0]
In that case, I printed dice to check whether it was empty (that error means you tried to access an element at an index outside the bounds of the array)
print(dice) #The command i used
>>>>[] #What the program returned
As you can see, the array was empty. Why was it empty? Let's trace the program: I added a print(dice) instruction on every iteration of the while sum >= 10 loop
#--------------Your code------------------------------
while sum >= 10:
dice = np.delete(dice, -1)
dice = np.delete(dice, 0)
sum = dice[0] + dice[-1]
no_of_pairs += 1
#------------------Prints i added--------------------
while sum >= 10:
dice = np.delete(dice, -1)
dice = np.delete(dice, 0)
sum = dice[0] + dice[-1]
no_of_pairs += 1
print(dice) #-----------print i added
print("Denso") #----------Print i added
This was the result
[5 6 2 7 2 8 5 9 5 7]
Number of 10s: 0
(10,) [2 2 5 5 5 6 7 7 8 9]
[2 5 5 5 6 7 7 8]
Denso
[5 5 5 6 7 7]
Denso
[5 5 6 7]
Denso
[5 6]
Denso
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Input In [55], in <cell line: 17>()
18 dice = np.delete(dice, -1)
19 dice = np.delete(dice, 0)
---> 20 sum = dice[0] + dice[-1]
21 no_of_pairs += 1
22 print(dice)
IndexError: index 0 is out of bounds for axis 0 with size 0
So what happened? Before the crash, dice had only two elements, 5 and 6, which sum to 11, so sum was greater than 10 and the while loop ran again. The loop body then executes two np.delete calls, removing the first and last elements; in this case that deleted 5 and 6, the only two elements remaining in the array. After that, you try to execute this:
sum = dice[0] + dice[-1]

And that is where it crashes: 5 and 6 were the last remaining elements, so dice is now empty, and dice[0] on an empty array raises the IndexError. To solve it, check right after the delete calls whether the array is empty and, if so, break out of the while loop.
Now for the second case
b) index 0 is out of bounds for axis 0 with size 0 on np.delete(dice, 0)
I was able to reproduce this error too. Keeping the print statements from before, this was the result
[10 9 6 7 6 2 3 2 7 9]
Number of 10s: 1
(9,) [2 2 3 6 6 7 7 9 9]
[2 3 6 6 7 7 9]
Denso
[3 6 6 7 7]
Denso
[6 6 7]
Denso
[6]
Denso
IndexError Traceback (most recent call last)
Input In [44], in <cell line: 17>()
17 while sum >= 10:
18 dice = np.delete(dice, -1)
---> 19 dice = np.delete(dice, 0)
20 sum = dice[0] + dice[-1]
21 no_of_pairs += 1
[.....]
IndexError: index 0 is out of bounds for axis 0 with size 0
The problem here is the same, but notice that now there was only one element remaining in dice before the crash, instead of two.
I executed the print instruction again
print(dice) #The command i used
>>>>[] #What the program returned
Again, dice was empty. How so? The problem is similar to the first case. Notice that 6 was the only element left when the sum >= 10 iteration restarted (it restarted because dice[0] and dice[-1] refer to the same number, 6, so the sum is 12). The loop body then executes two deletes on dice. The first delete erases the last remaining element, and the second delete then runs on an empty array and crashes, because there is nothing left to delete. A suggestion: also check in the while condition whether dice has one element or fewer; that way, once a single element remains, the loop ends.
Hopefully it helps you
The second problem (beyond the error you asked about) is that the method itself doesn't fully solve the task of removing all pairs that sum to 10 or more. Take a look at this example result from an iteration where the program ran without crashing:
[10 3 6 2 3 4 10 3 5 8]
Number of 10s: 2
(8,) [2 3 3 3 4 5 6 8]
[3 3 3 4 5 6]
Number of pairs: 1
You can see that it removed the 2-8 pair, which is correct (they sum to 10). However, it didn't remove the 4-6 pair. That is because once your while loop found that the first and last elements (3 and 6) sum to 9, it stopped, and never checked the 4-6 pair, for example.
My suggestion: instead of only summing the first and last elements, compare the first element against every element in the list, and if you find a sum of 10 or more, delete those two. Then check the second element of the list, and so on. There are different ways to solve this; that is one of them, and you can come up with your own too.
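For completeness, here is a minimal two-pointer sketch of that kind of pairing (one possible variant, not the only one): sort the dice, then try to pair the smallest remaining die with the largest; if they reach 10 they form a pair, otherwise the smallest die cannot pair with anything and is skipped.
import numpy as np

dice = np.sort(np.array([2, 3, 3, 3, 4, 5, 6, 8]))
i, j, pairs = 0, len(dice) - 1, 0
while i < j:
    if dice[i] + dice[j] >= 10:
        pairs += 1   # dice[i] and dice[j] form a pair
        i += 1
        j -= 1
    else:
        i += 1       # dice[i] is too small to pair with any remaining die
print(pairs)  # 2, for the pairs (2, 8) and (4, 6)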
|
IndexError is being generated when deleting first and last terms in a Numpy array
|
I have a simple program that I'm writing for a math class that generates an array of numbers from 1 to 10. After removing the 10s I would like to count and remove pairs of numbers that add up to at least 10, therefore I order the array in ascending order and check if the sum of the first and last numbers is greater than or equal to 10. If the sum >= 10, I want to delete those numbers using numpy.delete.
What I don't understand is that sometimes the code works and other times I get an error: IndexError: index 0 is out of bounds for axis 0 with size 0. The error usually refers to the lines that have to do with sum or the numpy.delete operations.
Here is the code:
import numpy as np
no_of_dice = 10
dice = []
dice = np.random.randint(1, high=11, size=no_of_dice)
print(dice)
no_of_tens = np.count_nonzero(dice == 10)
print("Number of 10s:", no_of_tens)
dice = np.delete(dice, np.where(dice == 10))
dice = np.sort(dice)
print(dice.shape, dice)
no_of_pairs = 0
sum = dice[0] + dice[-1]
while sum >= 10:
dice = np.delete(dice, -1)
dice = np.delete(dice, 0)
sum = dice[0] + dice[-1]
no_of_pairs += 1
print(dice)
print("Number of pairs:", no_of_pairs)
|
[
"I checked and tested your code and I found two problems.\nFirst, let's see about the both errors that you mentioned\na) index 0 is out of bounds for axis 0 with size 0 on sum = dice[0]\nOn that case, I tried to print dice to check if dice was empty (since that error refers that you tried to get an element from that list with an index bigger than the length of the list)\nprint(dice) #The command i used\n>>>>[] #What the program returned\n\nAs you see, the list was empty. Why was it empty? Let's check the procedure of your program. For that, I added and print(dice) instruction every time it iterates on the while sum>=10\n#--------------Your code------------------------------\nwhile sum >= 10:\n dice = np.delete(dice, -1)\n dice = np.delete(dice, 0)\n sum = dice[0] + dice[-1]\n no_of_pairs += 1\n\n#------------------Prints i added--------------------\nwhile sum >= 10:\n dice = np.delete(dice, -1)\n dice = np.delete(dice, 0)\n sum = dice[0] + dice[-1]\n no_of_pairs += 1\n print(dice) #-----------print i added\n print(\"Denso\") #----------Print i added\n\nThis was the result\n[5 6 2 7 2 8 5 9 5 7]\nNumber of 10s: 0\n(10,) [2 2 5 5 5 6 7 7 8 9]\n[2 5 5 5 6 7 7 8]\nDenso\n[5 5 5 6 7 7]\nDenso\n[5 5 6 7]\nDenso\n[5 6]\nDenso\n\n---------------------------------------------------------------------------\nIndexError Traceback (most recent call last)\nInput In [55], in <cell line: 17>()\n 18 dice = np.delete(dice, -1)\n 19 dice = np.delete(dice, 0)\n---> 20 sum = dice[0] + dice[-1]\n 21 no_of_pairs += 1\n 22 print(dice)\nIndexError: index 0 is out of bounds for axis 0 with size 0\n\nSo what happened? before the crash, you see that dice only had two elements: 5 and 6, and both of them sum 11, so sum was greather than 10. Then, the while executed again. However, if you notice your code, you execute two np.delete on the first and last elements. So, that means that you deleted, in this case, 5 and 6, which where the only two elements remaining on the list. After that, you try to execute this:\nsum = dice[0] + dice[-1]\n\nAnd that gives you the crash, why? because you deleted 5 and 6, which where the remaining elements of the list. With that, dice becomes empty, and trying to do dice[0] when the list is empty crashes the program. To solve that, a suggestion is to check after the delete instruction whereas the list is empty or not to forcebreak the while loop and avoid that crash\nNow for the second case\nb) index 0 is out of bounds for axis 0 with size 0 on np.delete(dice, 0)\nI was able to reproduce the error. Considering the previous change i did, this was the result\n[10 9 6 7 6 2 3 2 7 9]\nNumber of 10s: 1\n(9,) [2 2 3 6 6 7 7 9 9]\n[2 3 6 6 7 7 9]\nDenso\n[3 6 6 7 7]\nDenso\n[6 6 7]\nDenso\n[6]\nDenso\n\nIndexError Traceback (most recent call last)\nInput In [44], in <cell line: 17>()\n 17 while sum >= 10:\n 18 dice = np.delete(dice, -1)\n---> 19 dice = np.delete(dice, 0)\n 20 sum = dice[0] + dice[-1]\n 21 no_of_pairs += 1\n[.....]\nIndexError: index 0 is out of bounds for axis 0 with size 0\n\nThe problem here is the same, but if you notice, now there was only one element remaining on dice before the crash instead of two.\nI executed the print instruction again\nprint(dice) #The command i used\n>>>>[] #What the program returned\n\nAgain, dice was empty, How so? The problem was similars to the first case. If you notice, 6 was the last element before the iteration on sum>=10 restarts (and it restarts because the sum is 12, why? 
because dice[0] and dice [-1] refers to the same number, 6 (is the only element on the list)). So, you execute, again, two deletes on the dice list. However, you delete the last and first element of the list. The problem is that, on the first delete, you erase the last remaining element on the list. Then, you try to execute the delete command again, but since the list is empty, it crashes. It doesn't have anything to delete. A suggestion i make you is to check whereas (on the while conditional) dice has 1 element or less. With that, once you have 1 element remaining, it will break the while loop\nHopefully it helps you\nThe second problem (besides your question) is that your method itself doesn't solve the issue (delete the pairs that sum 10 or more). Take a look on this example result of your program on a iteration it worked\n[10 3 6 2 3 4 10 3 5 8]\nNumber of 10s: 2\n(8,) [2 3 3 3 4 5 6 8]\n[3 3 3 4 5 6]\nNumber of pairs: 1\n\nYou can see that it removed the 2-8 pair, which is correct (both sum 10). However, it didn't remove the 4-6 pair. That is because on your for-while loop, once it checked that the first and last element (3 and 6) sum 9, it broke the while loop, and didn't checked for 4-6 pair for example.\nMy suggestion for that is, instead of suming the first and last element, you compare the first element to every element on the list and if you find a sum which goes to 10 or more, delete those. Then check the second element of the list, and so on. There are different ways to solve that issue, mine is one of them, probably you can make your own too.\n"
] |
[
0
] |
[] |
[] |
[
"arrays",
"numpy",
"python"
] |
stackoverflow_0074539450_arrays_numpy_python.txt
|
Q:
How to append output values of a function to a new empty list?
My aim is to create a function, list_powers, and use a for loop to take a list and return a new list (power_list) where each element is exponentiated to a specified power.
I have moved the return statement appropriately in order to collect all topowers in power_list and return them but it still returns None. How do I change it so that it returns the list of powers?
def list_powers(collection = [], power = 2):
power_list = []
for elem in collection:
topower = elem ** power
power_list.append(topower)
return power_list.append(topower)
test:
list_powers([2, 4, 1, 5, 12], power = 2)
output:
None
A:
return leaves the function, so you have to return after the loop, not unconditionally inside it. Also note that list.append returns None, which is why return power_list.append(topower) gives you None; return the list itself instead.
def list_powers(iterable, power = 2):
power_list = []
for elem in iterable:
topower = elem ** power
power_list.append(topower)
return power_list
print(list_powers([2, 4, 1, 5, 12]))
The shorter version uses a list comprehension
def list_powers(iterable, power = 2):
return [elem ** power for elem in iterable]
A:
A return inside the loop stops the function on the first iteration, so at most one value is processed. On top of that, power_list.append(topower) returns None, so None is what gets returned. If you want the list, do return power_list after (outside) the loop.
|
How to append output values of a function to a new empty list?
|
My aim is to create a function, list_powers, and use a for loop to take a list and return a new list (power_list) where each element is exponentiated to a specified power.
I have moved the return statement appropriately in order to collect all topowers in power_list and return them but it still returns None. How do I change it so that it returns the list of powers?
def list_powers(collection = [], power = 2):
power_list = []
for elem in collection:
topower = elem ** power
power_list.append(topower)
return power_list.append(topower)
test:
list_powers([2, 4, 1, 5, 12], power = 2)
output:
None
|
[
"return leaves the function. You have to return after the loop, not unconditional in the loop. And of course you don't want to return the last value for topower but the full list.\ndef list_powers(iterable, power = 2):\n power_list = []\n\n for elem in iterable:\n topower = elem ** power\n power_list.append(topower)\n\n return power_list\n\nprint(list_powers([2, 4, 1, 5, 12]))\n\nThe shorter version uses a list comprehension\ndef list_powers(iterable, power = 2):\n return [elem ** power for elem in iterable]\n\n",
"When you do return topower it stop the function therefore it will only return one value. If you want to return a list you should do return power_list and outside of the local scope.\n"
] |
[
2,
1
] |
[] |
[] |
[
"append",
"function",
"loops",
"python"
] |
stackoverflow_0074539884_append_function_loops_python.txt
|
Q:
Trying to run a container on docker but can not access the website of the application we created
We've been using Python 3 and Docker as our framework. Our main issue is that when we try to run the Docker container, it redirects us to the browser but the website cannot be reached. It works, however, when we run python manage.py runserver manually from the VS Code terminal.
here is the docker-compose.yml file
version: "2.12.2"
services:
web:
tty: true
build:
dockerfile: Dockerfile
context: .
command: bash -c "cd happy_traveller && python manage.py runserver 0.0.0.0:8000 "
ports:
- 8000:8000
restart: always
the docker file
FROM python:3.10
EXPOSE 8000
WORKDIR /
COPY happy_traveller .
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
and the app structure
|_App_Folder
|_happy_traveller
|_API
|_paycache
|_core
|_settings
|_templates
|_folder
|_folder
|_folder
|_manage.py
|_dockerfile
|_docker-compose.yml
|_requirements.txt
|_readmme.md
|_get-pip.py
We would really appreciate the help. Thank you for your time.
A:
Since you copied the source folder (happy_traveller) into the image in your Dockerfile, you don't need to cd into it again, so the docker-compose file would look like this:
version: "2.12.2"
services:
web:
tty: true
build:
dockerfile: Dockerfile
context: .
command: bash -c "python manage.py runserver 0.0.0.0:8000 "
ports:
- 8000:8000
restart: always
|
Trying to run a container on docker but can not access the website of the application we created
|
We've been using Python 3 and Docker as our framework. Our main issue is that when we try to run the Docker container, it redirects us to the browser but the website cannot be reached. It works, however, when we run python manage.py runserver manually from the VS Code terminal.
here is the docker-compose.yml file
version: "2.12.2"
services:
web:
tty: true
build:
dockerfile: Dockerfile
context: .
command: bash -c "cd happy_traveller && python manage.py runserver 0.0.0.0:8000 "
ports:
- 8000:8000
restart: always
the docker file
FROM python:3.10
EXPOSE 8000
WORKDIR /
COPY happy_traveller .
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
and the app structure
|_App_Folder
|_happy_traveller
|_API
|_paycache
|_core
|_settings
|_templates
|_folder
|_folder
|_folder
|_manage.py
|_dockerfile
|_docker-compose.yml
|_requirements.txt
|_readmme.md
|_get-pip.py
We would really appreciate the help. Thank you for your time.
|
[
"As you copied the source folder(happy_traveller) in your docker file, you don't need to run the cd command again, so the docker-compose file would look like this:\nversion: \"2.12.2\"\n\nservices:\n web:\n tty: true\n build:\n dockerfile: Dockerfile\n context: .\n command: bash -c \"python manage.py runserver 0.0.0.0:8000 \"\n ports:\n - 8000:8000\n restart: always\n\n"
] |
[
1
] |
[] |
[] |
[
"django",
"docker",
"docker_compose",
"dockerfile",
"python"
] |
stackoverflow_0074539893_django_docker_docker_compose_dockerfile_python.txt
|
Q:
Update column value if another column has a certain substring
I'm facing a lot of trouble on what seems to be a simple matter:
I have a column with some beverages names, but they are poluted with "12oz" and "Boxes". I want to get only the name of the beverages. Unfortunally, they are not typed in the same particular form, so i cant just [0:5] them. I know all the beverages names on the column, if that helps
Ex:
Column name: WHISKY BALLANTINES12YO 12X1000 ML RESTAGE
Column Created based on the previous one: BALLANTINES
Thanks in advance,
EDIT
Some other examples:
CHIVAS REGAL 12 ANOS 12X1L should be CHIVAS
VODKA ABSOLUT 12X1000ML should be ABSOLUT
A:
Just use replace statements, set regex to true, and replace with an empty string, like this:
df.replace('12oz', '', regex=True)
This is assuming you know what text you will have to replace.
A:
If you have the list of all the beverages, you can use pandas.Series.extract :
import re
list_of_bvr= ["ballantines", "chivas", "absolut"]
df["Col1"]= df["Col1"].str.extract(f"({'|'.join(list_of_bvr)})", flags=re.I, expand=False)
# Output :
print(df)
Col1
0 BALLANTINES
1 CHIVAS
2 ABSOLUT
|
Update column value if another column has a certain substring
|
I'm facing a lot of trouble on what seems to be a simple matter:
I have a column with some beverages names, but they are poluted with "12oz" and "Boxes". I want to get only the name of the beverages. Unfortunally, they are not typed in the same particular form, so i cant just [0:5] them. I know all the beverages names on the column, if that helps
Ex:
Column name: WHISKY BALLANTINES12YO 12X1000 ML RESTAGE
Column Created based on the previous one: BALLANTINES
Thanks in advance,
EDIT
Some other examples:
CHIVAS REGAL 12 ANOS 12X1L should be CHIVAS
VODKA ABSOLUT 12X1000ML should be ABSOLUT
|
[
"Just use replace statements, set regex to true, and replace with an empty string, like this:\ndf.replace('12oz', '', regex=True)\n\nThis is assuming you know what text you will have to replace.\n",
"If you have the list of all the beverages, you can use pandas.Series.extract :\nimport re\n\nlist_of_bvr= [\"ballantines\", \"chivas\", \"absolut\"]\n\ndf[\"Col1\"]= df[\"Col1\"].str.extract(f\"({'|'.join(list_of_bvr)})\", flags=re.I, expand=False)\n\n# Output :\nprint(df)\n\n Col1\n0 BALLANTINES\n1 CHIVAS\n2 ABSOLUT\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"pandas",
"python",
"substring"
] |
stackoverflow_0074539589_pandas_python_substring.txt
|
Q:
PyJWT won't import jwt.algorithms (ModuleNotFoundError: No module named 'jwt.algorithms')
For some reason, PyJWT doesn't seem to work in my virtualenv on Ubuntu 16.04, but it worked fine on my local Windows machine (inside a venv too). I'm clueless; I've tried different versions, copied the exact same versions as I had on my Windows machine, and yet I still couldn't get this to work.
Installed packages:
Package Version
-------------------------- ---------
aiohttp 3.6.2
async-timeout 3.0.1
attrs 20.2.0
cachetools 4.1.1
certifi 2020.6.20
cffi 1.14.3
chardet 3.0.4
click 7.1.2
cryptography 2.9.2
DateTime 4.3
discord.py 1.5.0
Flask 1.1.2
Flask-Discord 0.1.61
flask-oidc 1.4.0
flask-oidc2 1.5.0
httplib2 0.18.1
idna 2.10
itsdangerous 1.1.0
Jinja2 2.11.2
jwt 1.0.0
MarkupSafe 1.1.1
multidict 4.7.6
mysql-connector-python 8.0.21
mysql-connector-repackaged 0.3.1
oauth2client 4.1.3
oauthlib 3.1.0
pip 20.2.3
protobuf 3.13.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.20
PyJWT 1.7.1
pytz 2020.1
requests 2.24.0
requests-oauthlib 1.3.0
rsa 4.6
schedule 0.6.0
setuptools 50.3.0
six 1.15.0
typing-extensions 3.7.4.3
urllib3 1.25.10
Werkzeug 1.0.1
wheel 0.35.1
yarl 1.6.0
zope.interface 5.1.0
The error:
[2020-09-29 21:58:44 +0000] [2036] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spaw n_worker
worker.init_process()
File "/usr/lib/python3.7/site-packages/gunicorn/workers/base.py", line 119, in init_process
self.load_wsgi()
File "/usr/lib/python3.7/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in l oad
return self.load_wsgiapp()
File "/usr/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 39, in l oad_wsgiapp
return util.import_app(self.app_uri)
File "/usr/lib/python3.7/site-packages/gunicorn/util.py", line 358, in import_ app
mod = importlib.import_module(module)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/soro/soros-dashboard/wsgi.py", line 1, in <module>
from app import app
File "/home/soro/soros-dashboard/app.py", line 9, in <module>
import keycloak
File "/home/soro/soros-dashboard/keycloak.py", line 4, in <module>
from jwt.algorithms import RSAAlgorithm
ModuleNotFoundError: No module named 'jwt.algorithms'
I'm running Python 3.7.7.
A:
I had the same issue. The error seems to be a conflict between the pyjwt and jwt modules (as mentioned by @vimalloc above). What worked for me was to run the following command (NOTE: I am using Python 3.6.10).
pip3 install -U pyjwt
A:
You don't have to register the 'RSAAlgorithm' manually.
You have to install 'cryptography' too:
PyJWT==2.3.0
cryptography==36.0.1
Then you can just use the "RS256" algorithm without registration:
import jwt
jwt.encode(payload, private_key, algorithm="RS256")
A:
You shouldn’t have both the jwt and PyJWT packages installed; they have some namespace collisions. Try removing the jwt package and see if it works.
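A minimal way to apply and verify that fix (assuming pip manages the virtualenv; run the commented commands in a shell first, and note that RSAAlgorithm also needs the cryptography package):
# pip uninstall jwt
# pip install -U pyjwt cryptography

# Then the import that was failing should work:
from jwt.algorithms import RSAAlgorithm
print(RSAAlgorithm)  # no ModuleNotFoundError means the conflict is gone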
A:
It turned out to be an issue with my Python environment. I recreated it a few times and eventually it worked. I suspect that multiple jwt packages were somehow installed.
A:
Try uninstalling jwt (pip uninstall jwt) and install python-jwt instead (pip install python-jwt).
I had another issue before where these 2 packages were incompatible with each other.
|
PyJWT won't import jwt.algorithms (ModuleNotFoundError: No module named 'jwt.algorithms')
|
For some reason, PyJWT doesn't seem to work in my virtualenv on Ubuntu 16.04, but it worked fine on my local Windows machine (inside a venv too). I'm clueless; I've tried different versions, copied the exact same versions as I had on my Windows machine, and yet I still couldn't get this to work.
Installed packages:
Package Version
-------------------------- ---------
aiohttp 3.6.2
async-timeout 3.0.1
attrs 20.2.0
cachetools 4.1.1
certifi 2020.6.20
cffi 1.14.3
chardet 3.0.4
click 7.1.2
cryptography 2.9.2
DateTime 4.3
discord.py 1.5.0
Flask 1.1.2
Flask-Discord 0.1.61
flask-oidc 1.4.0
flask-oidc2 1.5.0
httplib2 0.18.1
idna 2.10
itsdangerous 1.1.0
Jinja2 2.11.2
jwt 1.0.0
MarkupSafe 1.1.1
multidict 4.7.6
mysql-connector-python 8.0.21
mysql-connector-repackaged 0.3.1
oauth2client 4.1.3
oauthlib 3.1.0
pip 20.2.3
protobuf 3.13.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.20
PyJWT 1.7.1
pytz 2020.1
requests 2.24.0
requests-oauthlib 1.3.0
rsa 4.6
schedule 0.6.0
setuptools 50.3.0
six 1.15.0
typing-extensions 3.7.4.3
urllib3 1.25.10
Werkzeug 1.0.1
wheel 0.35.1
yarl 1.6.0
zope.interface 5.1.0
The error:
[2020-09-29 21:58:44 +0000] [2036] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spaw n_worker
worker.init_process()
File "/usr/lib/python3.7/site-packages/gunicorn/workers/base.py", line 119, in init_process
self.load_wsgi()
File "/usr/lib/python3.7/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in l oad
return self.load_wsgiapp()
File "/usr/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 39, in l oad_wsgiapp
return util.import_app(self.app_uri)
File "/usr/lib/python3.7/site-packages/gunicorn/util.py", line 358, in import_ app
mod = importlib.import_module(module)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/soro/soros-dashboard/wsgi.py", line 1, in <module>
from app import app
File "/home/soro/soros-dashboard/app.py", line 9, in <module>
import keycloak
File "/home/soro/soros-dashboard/keycloak.py", line 4, in <module>
from jwt.algorithms import RSAAlgorithm
ModuleNotFoundError: No module named 'jwt.algorithms'
I'm running Python 3.7.7.
|
[
"I had the same issue. The error seems to be a conflict between the pyjwt and jwt modules (as mentioned by @vimalloc above). What worked for me was to run the following command (NOTE: I am using Python 3.6.10).\npip3 install -U pyjwt\n\n",
"You don't have to register with the 'RSAAlgorithm'.\nYou have install the 'cryptography' too:\nPyJWT==2.3.0\ncryptography==36.0.1\n\nThen you can just use the \"RS256\" algorithm without registration:\nimport jwt\njwt.encode(payload, private_key, algorithm=\"RS256\")\n\n",
"You shouldn’t have both the jwt and PyJWT packages installed, they have some namespace collisions. Try removing the jwt package and see if it works.\n",
"It turned out to be an issue with my Python environment. I recreated it a few times, and eventually, it would work. I suspect that there are multiple jwt's that were somehow installed.\n",
"Try uninstalling jwt (pip uninstall jwt) and install python-jwt instead (pip install python-jwt). \nI had another issue before where these 2 packages were incompatible with each other.\n"
] |
[
3,
3,
1,
0,
0
] |
[] |
[] |
[
"oauth_2.0",
"python",
"python_3.x"
] |
stackoverflow_0064128255_oauth_2.0_python_python_3.x.txt
|
Q:
Include Value from Dictionary in New Dictionary Only if Number
How can I create a new dictionary from an existing dictionary that includes only the keys whose values are numeric?
Example dictionary:
simple_dict = {
'a': 1,
'b': 2,
'c': 3,
'd': 'John',
'e': 4,
'f': 'Sandra'
}
What I have so far:
new_dict = {key: value for key, value in simple_dict.items() if }
I've tried using something like isdigit() or isnumeric() but keep receiving errors.
A:
import numbers
...
new_dict = {
key:value for key, value in simple_dict.items()
if isinstance(value, numbers.Number)
}
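Applied to the example dictionary above, this keeps only the numeric entries:
print(new_dict)
# {'a': 1, 'b': 2, 'c': 3, 'e': 4}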
|
Include Value from Dictionary in New Dictionary Only if Number
|
How can I create a new dictionary from an existing dictionary that includes only the keys whose values are numeric?
Example dictionary:
simple_dict = {
'a': 1,
'b': 2,
'c': 3,
'd': 'John',
'e': 4,
'f': 'Sandra'
}
What I have so far:
new_dict = {key: value for key, value in simple_dict.items() if }
I've tried using something like isdigit() or isnumeric() but keep receiving errors.
|
[
"import numbers\n...\nnew_dict = {\n key:value for key, value in simple_dict.items()\n if isinstance(value, numbers.Number)\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"dictionary",
"dictionary_comprehension",
"python"
] |
stackoverflow_0074539976_dictionary_dictionary_comprehension_python.txt
|
Q:
Use class methods to list the names and positions of teachers and faculty at a mock university
I'm trying to list the names and positions of teachers and faculty at a university. When I try to run the program it does not work, and I get an error message that says 'str' object has no attribute 'dean_print'
class TSC:
def __init__(self, President, Dean1, Dean2, Dean3, Dean4, Dean5, Chair1, Chair2, Chair3, Chair4, Chair5, Chair6, Chair7, Chair8, Chair9, Chair10):
self.president = President
self.dean_one = Dean1
self.dean_two = Dean2
self.dean_three = Dean3
self.dean_four = Dean4
self.dean_five = Dean5
self.chair_one = Chair1
self.chair_two = Chair2
self.chair_three = Chair3
self.chair_four = Chair4
self.chair_five = Chair5
self.chair_six = Chair6
self.chair_seven = Chair7
self.chair_eight = Chair8
self.chair_nine = Chair9
self.chair_ten = Chair10
def pres(self):
print(f"The President of TSC is ", end="")
print(self.president)
def deans(self):
print(f"The dean of the school of Arts & Sciences is ", end = "")
self.dean_one.dean_print()
print(f"The dean of the school of Education is ", end = "")
print(self.dean_two.dean_print1())
print(f"The dean of the school of Nursing and Wellness is ", end = "")
print(self.dean_three.dean_print1())
print(f"The dean of the school of Buisness ans Digital Media is ", end = "")
print(self.dean_four.dean_print1())
print(f"The dean of the school of Wizrady is ", end = "")
print(self.dean_five.dean_print1())
class Arts_and_Sciences:
def __init__(self, dean):
self.dean_one = dean
def dean_print(self):
print(f"{self.dean_one}")
def heads_AnS(self):
print(f"The head of of Art is ", end = "")
print(self.chair_one.heads1_print())
print(f"The head of Science is ", end = "")
print(self.chair_two.heads2_print())
class Education:
def __init__(self, dean):
self.dean_two_b = dean
def dean_print1(self):
print(f"{self.dean_two_b}")
def heads_Edu(self):
print(f"The head of Education is ", end = "")
print(self.chair_three.heads3_print())
print(f"The head of Teaching Skills is ", end = "")
print(self.chair_four.heads4_print())
class Nursing:
def __init__(self, dean):
self.dean_one_b = dean
def dean_print1(self):
print(f"{self.dean_three_b}")
def heads_Nrs(self):
print(f"The head of Nursing is ", end = "")
print(self.chair_five.heads5_print())
print(f"The head of Wellness is ", end = "")
print(self.chair_six.heads6_print())
class Business:
def __init__(self, dean):
self.dean_one_b = dean
def dean_print1(self):
print(f"{self.dean_four_b}")
def heads_Buss(self):
print(f"The head of Buisness is ", end = "")
print(self.chair_seven.heads7_print())
print(f"The head of Digital Media is ", end = "")
print(self.chair_eight.heads8_print())
class Wizardy:
def __init__(self, dean):
self.dean_one_b = dean
def dean_print1(self):
print(f"{self.dean_five_b}")
def heads_Wiz(self):
print(f"The head of Spells is ", end = "")
print(self.chair_nine.heads9_print())
print(f"The head of Potions is ", end = "")
print(self.chair_ten.heads10_print())
class Heads:
def __init__(self, head1, head2, head3, head4, head5, head6, head7, head8, head9, head10):
self.head_one = head1
self.head_two = head2
self.head_three = head3
self.head_four = head4
self.head_five = head5
self.head_six = head6
self.head_seven = head7
self.head_eight = head8
self.head_nine = head9
self.head_ten = head10
def heads1_print(self):
print(f"{self.head_one}")
def heads2_print(self):
print(f"{self.head_two}")
def heads3_print(self):
print(f"{self.head_three}")
def heads4_print(self):
print(f"{self.head_four}")
def heads5_print(self):
print(f"{self.head_five}")
def heads6_print(self):
print(f"{self.head_six}")
def heads7_print(self):
print(f"{self.head_seven}")
def heads8_print(self):
print(f"{self.head_eight}")
def heads9_print(self):
print(f"{self.head_nine}")
def heads10_print(self):
print(f"{self.head_ten}")
faculity = TSC("Pres Idint", "Scin Ark", "Eden Cathrine", "Nurswell Wells", "Bunson Meda", "Albus Dumbledore", "John Smith", "Paul Adam", "Betsy Zoe", "Simon Peter", "James Maddison", "Page Turner", "Max Scott", "Daniel Goodman", "Harry Potter", "Severus Snape")
faculity.pres()
faculity.deans()
When I ran the code, I expected it to print out a list of people and their positions at the university. Instead, I keep receiving the error message indicated above.
A:
There are many errors in the code. For example, in your Wizardy class you have the method dean_print1, which tries to reference self.dean_five_b, but you have only defined the instance attribute dean_one_b for this class. I'm not going to point out all of these errors, as I'm sure you can apply that concept to the rest of the mistakes. However, I recommend reviewing OOP concepts to improve your understanding and implementation of classes.
For a relatively close comparison and generalization of what you did above, you can break this problem down to build a better class structure. Here you have a list of faculty. Each faculty member has a job title and belongs to a college within the university (or maybe only a subset belong to a specific college). You can reduce the verbose writing above to a single class, faculty, where each member has the attributes name, job_title, and college. From there, you could define a single print statement to display this information: print(f"{self.name} is the {self.job_title} for the college of {self.college}"), or make that output conditional if only a subset of faculty belong to a college.
To do this, you define an instance of your faculty class for each faculty member rather than defining one class that holds all faculty. You could also use a basic dictionary for this case, but I'm assuming this is an attempt to understand classes. If you need methods specific to a job title or college, you could create a subclass of faculty to store those specific methods. The original example is shown below.
class faculty:
def __init__(self, name, job_title, college=None):
self.name = name
self.job_title = job_title
self.college = college
def display_info(self):
if self.college is not None:
print(f"{self.name} is the {self.job_title} for the college of {self.college}")
else:
print(f"{self.name} is the {self.job_title}")
f1 = faculty("john doe", "Dean", "Education")
f2 = faculty("harry potter", "rebel", "Wizardy")
f1.display_info()
f2.display_info()
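As a rough sketch of the subclass idea mentioned above (the Dean subclass and its announce_college method are hypothetical additions, just to show the shape):
class Dean(faculty):

    def __init__(self, name, college):
        # Every dean shares the same job title, so fix it here
        super().__init__(name, "Dean", college)

    def announce_college(self):
        print(f"The dean of the school of {self.college} is {self.name}")


d1 = Dean("Albus Dumbledore", "Wizardy")
d1.announce_college()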
There's a lot I'm not covering, so review some OOP literature on your own to learn more.
|
Use class methods to list the names and positions of teachers and faculty at a mock university
|
I'm trying to list the names and positions of teachers and faculty at a university. When I try to run the program it does not work, and I get an error message that says 'str' object has no attribute 'dean_print'
class TSC:
def __init__(self, President, Dean1, Dean2, Dean3, Dean4, Dean5, Chair1, Chair2, Chair3, Chair4, Chair5, Chair6, Chair7, Chair8, Chair9, Chair10):
self.president = President
self.dean_one = Dean1
self.dean_two = Dean2
self.dean_three = Dean3
self.dean_four = Dean4
self.dean_five = Dean5
self.chair_one = Chair1
self.chair_two = Chair2
self.chair_three = Chair3
self.chair_four = Chair4
self.chair_five = Chair5
self.chair_six = Chair6
self.chair_seven = Chair7
self.chair_eight = Chair8
self.chair_nine = Chair9
self.chair_ten = Chair10
def pres(self):
print(f"The President of TSC is ", end="")
print(self.president)
def deans(self):
print(f"The dean of the school of Arts & Sciences is ", end = "")
self.dean_one.dean_print()
print(f"The dean of the school of Education is ", end = "")
print(self.dean_two.dean_print1())
print(f"The dean of the school of Nursing and Wellness is ", end = "")
print(self.dean_three.dean_print1())
print(f"The dean of the school of Buisness ans Digital Media is ", end = "")
print(self.dean_four.dean_print1())
print(f"The dean of the school of Wizrady is ", end = "")
print(self.dean_five.dean_print1())
class Arts_and_Sciences:
def __init__(self, dean):
self.dean_one = dean
def dean_print(self):
print(f"{self.dean_one}")
def heads_AnS(self):
print(f"The head of of Art is ", end = "")
print(self.chair_one.heads1_print())
print(f"The head of Science is ", end = "")
print(self.chair_two.heads2_print())
class Education:
def __init__(self, dean):
self.dean_two_b = dean
def dean_print1(self):
print(f"{self.dean_two_b}")
def heads_Edu(self):
print(f"The head of Education is ", end = "")
print(self.chair_three.heads3_print())
print(f"The head of Teaching Skills is ", end = "")
print(self.chair_four.heads4_print())
class Nursing:
def __init__(self, dean):
self.dean_one_b = dean
def dean_print1(self):
print(f"{self.dean_three_b}")
def heads_Nrs(self):
print(f"The head of Nursing is ", end = "")
print(self.chair_five.heads5_print())
print(f"The head of Wellness is ", end = "")
print(self.chair_six.heads6_print())
class Business:
def __init__(self, dean):
self.dean_one_b = dean
def dean_print1(self):
print(f"{self.dean_four_b}")
def heads_Buss(self):
print(f"The head of Buisness is ", end = "")
print(self.chair_seven.heads7_print())
print(f"The head of Digital Media is ", end = "")
print(self.chair_eight.heads8_print())
class Wizardy:
def __init__(self, dean):
self.dean_one_b = dean
def dean_print1(self):
print(f"{self.dean_five_b}")
def heads_Wiz(self):
print(f"The head of Spells is ", end = "")
print(self.chair_nine.heads9_print())
print(f"The head of Potions is ", end = "")
print(self.chair_ten.heads10_print())
class Heads:
def __init__(self, head1, head2, head3, head4, head5, head6, head7, head8, head9, head10):
self.head_one = head1
self.head_two = head2
self.head_three = head3
self.head_four = head4
self.head_five = head5
self.head_six = head6
self.head_seven = head7
self.head_eight = head8
self.head_nine = head9
self.head_ten = head10
def heads1_print(self):
print(f"{self.head_one}")
def heads2_print(self):
print(f"{self.head_two}")
def heads3_print(self):
print(f"{self.head_three}")
def heads4_print(self):
print(f"{self.head_four}")
def heads5_print(self):
print(f"{self.head_five}")
def heads6_print(self):
print(f"{self.head_six}")
def heads7_print(self):
print(f"{self.head_seven}")
def heads8_print(self):
print(f"{self.head_eight}")
def heads9_print(self):
print(f"{self.head_nine}")
def heads10_print(self):
print(f"{self.head_ten}")
faculity = TSC("Pres Idint", "Scin Ark", "Eden Cathrine", "Nurswell Wells", "Bunson Meda", "Albus Dumbledore", "John Smith", "Paul Adam", "Betsy Zoe", "Simon Peter", "James Maddison", "Page Turner", "Max Scott", "Daniel Goodman", "Harry Potter", "Severus Snape")
faculity.pres()
faculity.deans()
When I ran the code, I expected it to print out a list of people and their positions at the university. Instead, I keep receiving the error message indicated above.
|
[
"There are many errors in the code. For example, in your Wizardy class, you have the class method dean_print1 which tries to reference \"self.dean_five_b\". You have not defined the instance attribute dean_five_b for this class, only dean_one_b. I'm not going to point out all these errors to you as I'm sure you can apply that concept to the rest of those mistakes. However, I recommend reviewing OOP concepts to improve your understanding and implementation of classes. For a relatively close comparison and generalization to what you did above, you can break this problem down to build a better class structure/understanding. Here you have a list of \"faculty\". Each faculty member has a \"job title\" and belongs to a \"college\" within the university (or maybe only a subset belong to a specific college). You can reduce the verbose writing above with a single class \"faculty\" where each member has attributes name, job_title, and college. From there, you could define a single print statement to display this information: print(f\"{self.name} is the {self.job_title} for the college of {self.college}\" or similar if you only a subset of faculty belong to a college and you need to make that output conditional. To do this, you will define an instance of your faculty class for each faculty member rather than defining a class with all faculty. You could also use a basic dictionary for this case but I'm assuming this is an attempt to understand classes. If you need methods specific to the job title or college, then you could create a subclass of faculty to store those specific methods. The original example is shown below.\nclass faculty:\n\n def __init__(self, name, job_title, college=None):\n self.name = name\n self.job_title = job_title\n self.college = college\n\n def display_info(self):\n if self.college is not None:\n print(f\"{self.name} is the {self.job_title} for the college of {self.college}\")\n else:\n print(f\"{self.name} is the {self.job_title}\")\n\n\nf1 = faculty(\"john doe\", \"Dean\", \"Education\")\nf2 = faculty(\"harry potter\", \"rebel\", \"Wizardy\")\n\nf1.display_info()\nf2.display_info()\n\nThere's a lot I'm not covering so review some OOB literature on your own to learn more.\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074539750_python.txt
|
Q:
How to use two APIs to get the response of an endpoint once it processed?
I have two APIs: triggerAPI and triggerAPIResult. When I hit the first one, it would trigger a process which could take a few minutes to return the response. The second API is used to check if the process is successfully finished or not.
Therefore, when the second API returns true, the response from the first API is the desired output. The second API's response is important because, while the first API is still processing, the first API returns meaningless data. The triggerAPIResult API should also be called every minute, for up to 10 minutes, to check the result. How could I implement this in Python?
A:
You need to implement long-running operations. This is an implementation strategy used by, for example, Google (in their GCP APIs), IBM, and other big companies.
The principle is quite simple.
Do a request to the triggerAPI and immediately return a unique operation ID.
Store this ID somewhere and have an is_done value tied to it which is set to False.
Have whatever logic was triggered do its thing. Once it's done, update the operation and set the is_done value to True. Store the result of the operation somewhere.
Call the triggerAPIResult and have logic to check the state of the operation. Return the actual processed data when done, otherwise return something like still working.
Note that the actual implementation can be a bit complicated depending on the tech used. I would go for microservices in this scenario to avoid having to use threads or async.
External API with the two endpoints you mentioned.
Internal API that does the actual data processing.
NoSQL backend that stores the operation statuses and the result of the data processing.
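A minimal client-side sketch of the polling part in Python (the endpoint URLs, the JSON field names, and the use of the requests library are all assumptions; adapt them to your actual API):
import time

import requests

TRIGGER_URL = "https://example.com/triggerAPI"       # hypothetical endpoint
RESULT_URL = "https://example.com/triggerAPIResult"  # hypothetical endpoint

# Kick off the long-running process and grab its operation ID
op_id = requests.post(TRIGGER_URL).json()["operation_id"]

result = None
for _ in range(10):  # poll once a minute, for at most 10 minutes
    status = requests.get(RESULT_URL, params={"id": op_id}).json()
    if status.get("is_done"):  # the second API reports the work finished
        result = status.get("result")
        break
    time.sleep(60)

print(result if result is not None else "timed out after 10 minutes")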
|
How to use two APIs to get the response of an endpoint once it processed?
|
I have two APIs: triggerAPI and triggerAPIResult. When I hit the first one, it would trigger a process which could take a few minutes to return the response. The second API is used to check if the process is successfully finished or not.
Therefore, when the second API returns true, the response from the first API is the desired output. The second API's response is important because, while the first API is still processing, the first API returns meaningless data. The triggerAPIResult API should also be called every minute, for up to 10 minutes, to check the result. How could I implement this in Python?
|
[
"You need to implement long running operations. This is an implementation strategy used by fe. Google (in their GCP APIs), IBM, and other big companies.\nThe principle is quite simple.\n\nDo a request to the triggerAPI and immediately return a unique operation ID.\nStore this ID somewhere and have an is_done value tied to it which is set to False.\nHave whatever logic was triggered do its thing. Once it's done, update the operation and set the is_done value to True. Store the result of the operation somewhere.\nCall the triggerAPIResult and have logic to check the state of the operation. Return the actual processed data when done, otherwise return something like still working.\n\n\nNote that the actual implementation can be a bit complicated depending on the tech used. I would go for microservices in this scenario to avoid having to use threads or async.\n\nExternal API with the two endpoints you mentioned.\nInternal API that does the actual data processing.\nNoSQL backend that stores the operation status' and the result of the data processing.\n\n"
] |
[
1
] |
[] |
[] |
[
"api",
"python",
"request",
"url"
] |
stackoverflow_0074539254_api_python_request_url.txt
|