Q:
I cannot install Django, and it gives "Installation Failed"
I want to install Django, but when I run pipenv install django it creates a virtual environment and then gives Installation Failed.
Before trying the commands below I ran pip install django to install Django. There is no issue with the internet connection.
How do I fix this?
I tried pip install django and pipenv install django.
A:
Navigate to your desired folder, then:
# Creates the virtual environment
python -m venv venv
# Activate your venv
Mac/Linux: source venv/bin/activate
Windows: .\venv\Scripts\activate
# Update pip and install django
pip install --upgrade pip
pip install Django
Remember that a virtual environment is an isolated space, so you are going to need to activate it every time you work on that specific project, or the libraries will not be found.
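If you want to stay with pipenv rather than plain venv, a sketch of my own (not part of the answer above): recreating the environment and re-running with verbose output usually surfaces the underlying pip error.
pipenv --rm                      # remove the broken virtual environment
pipenv install django --verbose  # reinstall, printing the underlying pip output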
Q:
can you explain this logic
explain this logic
input:ABDEF
output:GFECB
input:ZZZ
output:AAA
write a program for this logic
A:
A B D E F
1 2 4 5 6
G F E C B
7 6 5 3 2
Z Z Z
26 26 26
A A A
1 1 1
Basically it's just reversing the string and adding 1 to each letter, i.e. shifting every letter right by 1 (with Z wrapping around to A, as the ZZZ example shows).
You can use ord for implementing this.
Sample code:
x='A'
b=ord(x)+1
print(chr(b))
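A complete sketch of that logic (my own illustration, building on the answer's ord/chr hint): reverse the string, then shift each letter forward by one with modular arithmetic so Z wraps to A.
def transform(s):
    # reverse, then shift each uppercase letter forward by 1, wrapping Z -> A
    out = []
    for ch in reversed(s):
        out.append(chr((ord(ch) - ord('A') + 1) % 26 + ord('A')))
    return ''.join(out)

print(transform('ABDEF'))  # GFECB
print(transform('ZZZ'))    # AAA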
Q:
Print a dictionary into a table
I have a dictionary:
dic={'Tim':3, 'Kate':2}
I would like to output it as:
Name Age
Tim 3
Kate 2
Is it a good way to first convert them into a list of dictionaries,
lst = [{'Name':'Tim', 'Age':3}, {'Name':'Kate', 'Age':2}]
and then write them into a table, by the method in https://stackoverflow.com/a/10373268/156458?
Or is there a way better in some sense?
A:
Rather than convert to a list of dictionaries, directly use the .items of the dict to get the values to display on each line:
print('Name Age')
for name, age in dic.items():
    print(f'{name} {age}')
In versions before 3.6 (lacking f-string support), we can do:
print('Name Age')
for name, age in dic.items():
    print('{} {}'.format(name, age))
A:
You could use pandas.
In [15]: import pandas as pd
In [16]: df = pd.DataFrame({'Tim':3, 'Kate':2}.items(), columns=["name", "age"])
In [17]: df
Out[17]:
name age
0 Tim 3
1 Kate 2
A:
You can do it directly as in
>>> print("Name\tAge")
Name Age
>>> for i in dic:
... print("{}\t{}".format(i,dic[i]))
...
Tim 3
Kate 2
>>>
It displays even better if executed as a script
Name Age
Tim 3
Kate 2
And for the other representation
lst = [{'Name':'Tim', 'Age':3}, {'Name':'Kate', 'Age':2}]
print("Name\tAge")
for i in lst:
    print("{}\t{}".format(i['Name'], i['Age']))
And for your final question - "Is it a good way to first convert them into a list of dictionaries?" - the answer is no; a dictionary is hashed and provides faster access than a list.
A:
You can do it this way (Python 2 syntax; note iteritems and the print statement):
fmt = "{:<10}{:<10}"
print fmt.format("Name", "Age")
for name, age in dic.iteritems():
    print fmt.format(name, age)
I have written a simple library to pretty print dictionary as a table
https://github.com/varadchoudhari/Neat-Dictionary
which uses a similar implementation
A:
Iterate over the dictionary and print each item.
Demo:
>>> dic = {'Tim':3, 'Kate':2}
>>> print "Name\tAge"
Name Age
>>> for i in dic.items():
... print "%s\t%s"%(i[0], i[1])
...
Tim 3
Kate 2
>>>
By CSV module
>>> import csv
>>> dic = {'Tim':3, 'Kate':2}
>>> with open("output.csv", 'wb') as fp:
... root = csv.writer(fp, delimiter='\t')
... root.writerow(["Name", "Age"])
... for i,j in dic.items():
... root.writerow([i, j])
...
>>>
Output: output.csv file content
Name Age
Tim 3
Kate 2
We can also use root.writerows(dic.items()), as sketched below.
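For example, the loop in the snippet above collapses to:
root.writerow(["Name", "Age"])
root.writerows(dic.items())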
A:
If you go for higher numbers, then having the number in the first column is usually a safer bet, as you never know how long a name can be.
Given python3:
dic={'Foo':1234, 'Bar':5, 'Baz':123467}
This:
print("Count".rjust(9), "Name")
print("\n".join(f'{v:9,} {k}' for k, v in dic.items()))
Prints
    Count Name
    1,234 Foo
        5 Bar
  123,467 Baz
A:
This assumes a column-oriented dict, e.g. c = {'Name': ['Tim', 'Kate'], 'Age': [3, 2]}:
for each in zip(*([k] + list(v) for k, v in c.items())):
    print(*each)
Q:
How to add element from last loop iteration to list in python?
I have this list:
t=['a','b','c']
and want to create the output such as:
['a']
['a','b']
['a','b','c']
I am not sure how to adjust this loop:
for i in t:
print(i)
I am not sure how to write the loop to create this appending effect between the previous iteration and the current one. I hope someone can assist.
Thanks.
A:
For beginners, I think this is the simplest approach!
Code:-
lis=['a','b','c']
res=[]
for i in range(len(lis)):
    temp = []
    for j in range(i+1):
        temp.append(lis[j])
    res.append(temp)
print(res)
Output:-
[['a'], ['a', 'b'], ['a', 'b', 'c']]
A:
Solution without nested loop:
l = ['a','b','c']
res = []
temp = []
for item in l:
    temp.append(item)
    res.append(temp.copy())
print(res)
A:
This should be the easiest solution:
t=['a','b','c']
x=[]
for i in range(len(t)):
    x.append(t[i])
    print(x)
A:
You can use a list comprehension, unpacking the slices into print and using '\n' as the separator:
t = ['a','b','c']
print(*[t[:t.index(i)+1] for i in t], sep='\n')
or:
for i in t:
    print(t[:t.index(i)+1])
Output:
['a']
['a', 'b']
['a', 'b', 'c']
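Note that t.index(i) returns the index of the first occurrence, so the two variants above misbehave when the list contains duplicates. A small sketch of my own using enumerate avoids that:
t = ['a', 'b', 'c']
for i, _ in enumerate(t):
    print(t[:i+1])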
Q:
Why am I getting a "TypeError: object of type 'NoneType' has no len()" for this using the len function on a set?
I'm getting an error for a piece of Python code I wrote that shouldn't produce one.
This is the function I wrote and the input I gave it.
#turn list of ints into set, remove val from set, and return the length of the set without val.
def foo(nums,val):
    sett = set(nums)
    sett_without_val = sett.remove(val)
    return len(sett_without_val)
print(foo([3,2,2,3],3))
sett should be {3,2}
sett_without_val should be {2}
and len(sett_without_val) should be 1. I'm not supposed to get this error:
TypeError: object of type 'NoneType' has no len()
I thought it had something to do with the remove method I used, so I used discard instead and still got the exact same error message.
A:
The remove() method of a set changes it in place, so
sett_without_val = sett.remove(val)
returns None and assigns it to the variable.
Working code is:
def foo(nums,val):
    sett = set(nums)
    sett.remove(val)
    return len(sett)

print(foo([3,2,2,3],3))
output:
1
Explanation:
1. [3, 2, 2, 3]
2. after set(): {3, 2}
3. after .remove(3): {2}
4. after len(): 1
A:
sett.remove(val) does not return anything.
Try this
def foo(nums,val):
    sett = set(nums)
    sett.remove(val)
    return len(sett)
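As an aside, a one-expression sketch of my own that avoids mutation entirely, using set difference:
def foo(nums, val):
    return len(set(nums) - {val})

print(foo([3, 2, 2, 3], 3))  # 1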
Q:
Access custom module in Azure ML subprocess / Edit PYTHONPATH inside of Azure ML pipeline
I have the following project structure:
.
βββ my_custom_module
β βββ __init__.py
β βββ ...
βββ scripts
β βββ start_script.py
β βββ example.py
I am running start_script.py inside of Azure ML studio pipeline.
Inside of start_script.py I need to run example.py by using:
subprocess.run(['python3.9', "scripts/example.py"], check=True).
example.py on the other hand needs access to my_custom_module (from my_custom_module import some_class).
I keep getting ModuleNotFoundError: No module named my_custom_module errors, because the module is not added to the PYTHONPATH.
How do I add a custom module to the PYTHONPATH inside of Azure ML, such that it is visible by a subprocess?
Here are some debug information (I shortened some hash codes for better readability):
#os.getcwd() inside start_script.py and example.py returns:
/mnt/azureml/cr/j/e9e/exe/wd
# printing sys.path inside start_script.py and example.py returns:
/mnt/azureml/cr/j/21d/exe/wd/scripts
/azureml-envs/azureml_e9e/lib/python39.zip
/azureml-envs/azureml_e9e/lib/python3.9
/azureml-envs/azureml_e9e/lib/python3.9/lib-dynload
/azureml-envs/azureml_e9e/lib/python3.9/site-packages
/mnt/azureml/cr/j/21d/exe/wd
/azureml-envs/azureml_e9e/lib/python3.9/site-packages/azureml/_project/vendor
# os.system(which python) inside start_script.py:
/azureml-envs/azureml_e9e/bin/python
# os.system(which python) inside example.py returns nothing
So far I have tried to add my_custom_module to the PYTHONPATH inside of start_script.py, so example.py can import it, by using:
os.system(f"export PYTHONPATH=$PYTHONPATH:{os.getcwd()}") # tested also without "$PYTHONPATH:"
os.system(f"export PYTHONPATH=$PYTHONPATH:{os.getcwd() + '/my_custom_module'}") # tested also without "$PYTHONPATH:"
sys.path.append(os.getcwd() + "/my_custom_module")
So far, nothing has worked.
Appending it to sys.path made it show up inside start_script.py, but NOT inside example.py.
Does anyone have any idea how to solve my problem?
A:
We cannot edit the PYTHONPATH inside the default pipeline. Instead, we can create a Data Science VM using an ARM template and make the custom modifications inside the current working directory.
Ubuntu Based:
# create an Ubuntu Data Science VM in your resource group
az vm create --resource-group YOUR-RESOURCE-GROUP-NAME --name YOUR-VM-NAME --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username YOUR-USERNAME --admin-password YOUR-PASSWORD --generate-ssh-keys --authentication-type password
Windows Based:
# create a Windows Server 2016 DSVM in your resource group
az vm create --resource-group YOUR-RESOURCE-GROUP-NAME --name YOUR-VM-NAME --image microsoft-dsvm:dsvm-windows:server-2016:latest --admin-username YOUR-USERNAME --admin-password YOUR-PASSWORD --authentication-type password
Create a conda environment for Azure Machine Learning:
conda create -n py310 python=3.10
Activate it and install the SDK:
conda activate py310
pip install azure-ai-ml
To deploy the template, follow the procedure described at the link below:
https://learn.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-tutorial-resource-manager
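As an aside: the os.system("export ...") attempts in the question cannot work, because each os.system call runs in its own child shell whose environment dies with it. A common alternative, sketched here under the assumption that my_custom_module sits directly under the working directory, is to pass an explicit environment to the subprocess:
import os
import subprocess
import sys

env = os.environ.copy()
# prepend the working directory, which contains my_custom_module
env["PYTHONPATH"] = os.getcwd() + os.pathsep + env.get("PYTHONPATH", "")
subprocess.run([sys.executable, "scripts/example.py"], check=True, env=env)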
Q:
'WebDriver' object has no attribute 'find_element_by_name'
Whenever I run the code it brings up the Instagram page for about two seconds until it closes, and then it gives me this error: 'WebDriver' object has no attribute 'find_element_by_name'
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time, random
#Username and password of our instagram account
my_username = 'a'
my_password = 'm'
#Instagram username list for DM:
usernames = ['user1', 'user2', 'user3',]
#Messages:
messages = ['Hey! Please follow my page', 'Hey, how are you doing?', 'Hey']
#Delay time between messages in sec:
between_messages = 2000
browser = webdriver.Chrome('chromedriver')
# Authorization:
def auth(username, password):
    try:
        browser.get('https://instagram.com')
        time.sleep(random.randrange(2,4))
        input_username = browser.find_element_by_name('username')
        input_password = browser.find_element_by_name('username')
        input_username.send_keys(username)
        time.sleep(random.randrange(1,2))
        input_password.send_keys(password)
        time.sleep(random.randrange(1,2))
        input_password.send_keys(Keys.ENTER)
    except Exception as err:
        print(err)
        browser.quit()
auth(my_username, my_password)
A:
find_element_by_name, and other methods starting with find_element_by, were deprecated in Selenium 4.0.0 with this commit and have been removed from Selenium as of version 4.3 with this pull request. The aforementioned PR has a comment saying how to update your code to use the find_element() method instead of find_element_by_name, find_element_by_id, etc.
Try changing your find_element_by_name calls to the below:
input_username = browser.find_element(By.NAME, 'username')
input_password = browser.find_element(By.NAME, 'password')
Remember to add the following to your imports, or else the above two lines will not work:
from selenium.webdriver.common.by import By
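Putting the pieces together, a minimal self-contained sketch of the Selenium 4 style lookup (note the question's code also fetched 'username' for the password field, which is corrected here):
from selenium import webdriver
from selenium.webdriver.common.by import By

browser = webdriver.Chrome()
browser.get('https://instagram.com')
input_username = browser.find_element(By.NAME, 'username')
input_password = browser.find_element(By.NAME, 'password')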
Q:
I activate venv in vscode but it doesnt appear in terminal
I have no idea what's going on, but I activated the venv using Scripts/activate and it isn't working: the (venv) prefix isn't appearing.
Please, could someone help me? I tried everything I could find lol
A:
It looks like you are using Git Bash or a similar bash-like shell on Windows. In that environment you need to run source Scripts/activate.
For PowerShell, use Scripts/Activate.ps1, and for cmd use Scripts/activate.bat.
A:
What terminal are you using? Please create a new PowerShell or cmd terminal. Also, you should not open the folder of the virtual environment itself as a workspace. Below are the correct steps.
Create a new folder as a workspace, then open this folder in vscode
Create a new terminal using the following command to create a virtual environment
python -m venv .venv
Use the following command to activate the environment after creation
.venv\scripts\activate
Another way is to select the interpreter of the virtual environment in the Select Interpreter panel after creating the environment
And then the new terminal will automatically activate the environment
You can read creating-environments and venv.
Q:
Removing a node from a linked list
I would like to create a delete_node function that deletes the node at the location in the list as a count from the first node. So far this is the code I have:
class node:
def __init__(self):
self.data = None # contains the data
self.next = None # contains the reference to the next node
class linked_list:
def __init__(self):
self.cur_node = None
def add_node(self, data):
new_node = node() # create a new node
new_node.data = data
new_node.next = self.cur_node # link the new node to the 'previous' node.
self.cur_node = new_node # set the current node to the new one.
def list_print(self):
node = ll.cur_node
while node:
print node.data
node = node.next
def delete_node(self,location):
node = ll.cur_node
count = 0
while count != location:
node = node.next
count+=1
delete node
ll = linked_list()
ll.add_node(1)
ll.add_node(2)
ll.add_node(3)
ll.list_print()
A:
You shouldn't literally delete a node in Python. If nothing points to the node (or, more precisely in Python, nothing references it), it will eventually be destroyed by the virtual machine anyway.
If n is a node and it has a .next field, then:
n.next = n.next.next
This effectively discards n.next, making the .next field of n point to n.next.next instead. If n is the node before the one you want to delete, this amounts to deleting that node in Python.
[P.S. the last paragraph may be a bit confusing until you sketch it on paper - it should then become very clear]
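To make that concrete, a tiny self-contained sketch (my own illustration, not from the answer):
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

n1 = Node(1, Node(2, Node(3)))  # n1 -> 2 -> 3
n1.next = n1.next.next          # unlink the node holding 2
print(n1.next.data)             # 3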
A:
Here's one way to do it.
def delete_node(self,location):
    if location == 0:
        try:
            self.cur_node = self.cur_node.next
        except AttributeError:
            # The list is empty
            self.cur_node = None
        finally:
            return

    node = self.cur_node
    try:
        # walk to the node just before `location`
        for _ in xrange(location - 1):
            node = node.next
    except AttributeError:
        # the list isn't long enough
        raise ValueError("List does not have index {0}".format(location))

    try:
        node.next = node.next.next # Taken from Eli Bendersky's answer.
    except AttributeError:
        # The desired node is the last one.
        node.next = None
The reason that you don't really use del (and this tripped me up here until I came back and looked at it again) is that all it does is delete that particular reference that it's called on. It doesn't delete the object. In CPython, an object is deleted as soon as there are no more references to it. What happens here that when
del node
runs, there are (at least) two references to the node: the one named node that we are deleting and the next attribute of the previous node. Because the previous node is still referencing it, the actual object is not deleted and no change occurs to the list at all.
A:
# Creating a Node class where the value and pointer are stored;
# id and name are passed as Node(id, name)
class Node:
    def __init__(self, id, name):
        self.id = id
        self.name = name
        self.next = None

# Create a LinkedList class to store the values in nodes
class LinkedList:
    # Constructor for the LinkedList class
    def __init__(self):
        self.first = None

    # Insert the values passed by the user; id and name are sent to
    # Node(id, name) to store them in a node called new_node
    def insertStudent(self, id, name):
        new_node = Node(id, name)
        new_node.next = self.first
        self.first = new_node

    # Remove the first node in the list
    def removeFirst(self):
        if self.first is not None:
            self.first = self.first.next

    # Return the length of the linked list
    def length(self):
        current = self.first
        count = 0
        while current is not None:
            count += 1
            current = current.next
        return count

    # Print the data in the list
    def printStudents(self):
        current = self.first
        while current is not None:
            print(current.id, current.name)
            current = current.next

    # Update a stored student. As originally posted this method was broken
    # (current.id is an int and has no .next attribute); this is a minimal
    # working variant that renames a student by id.
    def update(self, id, name):
        current = self.first
        while current is not None:
            if current.id == id:
                current.name = name
            current = current.next

    # Search for a student and return True if it exists
    def searchStudent(self, x, y):
        current = self.first
        while current is not None:
            if current.id == x and current.name == y:
                return True
            current = current.next
        return False

    # Delete a student from the list by id
    def delStudent(self, key):
        cur = self.first
        prev = None
        # iterate through the linked list
        while cur is not None:
            if cur.id == key:
                if prev is None:
                    # the match is the first node
                    self.first = cur.next
                else:
                    prev.next = cur.next
                return
            prev = cur
            cur = cur.next

# Initialize the linked list
my_list = LinkedList()
# Adding the ID and Student name to the linked list
my_list.insertStudent(101, "David")
my_list.insertStudent(999, "Rosa")
my_list.insertStudent(321, "Max")
my_list.insertStudent(555, "Jenny")
my_list.insertStudent(369, "Jack")
# Print the list of students
my_list.printStudents()
# Print the length of the linked list
print(my_list.length(), " is the size of linked list ")
# Search for a data in linked list
if my_list.searchStudent(369, "Jack"):
    print("True")
else:
    print("False")
# Delete a value in the linked list
my_list.delStudent(101)
# Print the linked list after the value is deleted in the linked list
my_list.printStudents()
A:
I implemented the pop() function with a recursive helper, because the iterative way with references is not as clean. The code is below. I hope this helps you! ;)
class Node:
    def __init__(self, data=0, next=None):
        self.data = data
        self.next = next

    def __str__(self):
        return str(self.data)

class LinkedList:
    def __init__(self):
        self.__head = None
        self.__tail = None
        self.__size = 0

    def addFront(self, data):
        newNode = Node(data, self.__head)
        if self.empty():
            self.__tail = newNode
        self.__head = newNode
        self.__size += 1

    def __str__(self):
        # the return value must be a string
        s = "["
        node = self.__head
        while node:
            s += str(node.data) + ' ' if node.next != None else str(node.data)
            node = node.next
        return s + "]"

    def __recPop(self, no, i, index):
        if i == index - 1:
            if index == self.size() - 1:
                self.__tail = no
            try:
                no.next = no.next.next
            except AttributeError:
                no.next = None
        else:
            self.__recPop(no.next, i + 1, index)

    def pop(self, index=0):
        if index < 0 or index >= self.__size or self.__size == 0:
            return
        if index == 0:
            try:
                self.__head = self.__head.next
            except AttributeError:
                self.__head = None
                self.__tail = None
        else:
            self.__recPop(self.__head, 0, index)
        self.__size -= 1

    def front(self):
        return self.__head.data

    def back(self):
        return self.__tail.data

    def addBack(self, data):
        newNode = Node(data)
        if not self.empty():
            self.__tail.next = newNode
        else:
            self.__head = newNode
        self.__tail = newNode
        self.__size += 1

    def empty(self):
        return self.__size == 0

    def size(self):
        return self.__size

    def __recursiveReverse(self, No):
        if No == None: return
        self.__recursiveReverse(No.next)
        print(No, end=' ') if self.__head != No else print(No, end='')

    def reverse(self):
        print('[', end='')
        self.__recursiveReverse(self.__head)
        print(']')
A:
def remove(self, data):
    current = self.head
    previous = None
    while current is not None:
        if current.data == data:
            if previous is not None:
                previous.nextNode = current.nextNode
            else:
                # this is the first node (head)
                self.head = current.nextNode
            return  # remove only the first match
        previous = current
        current = current.nextNode
A:
Python's built-in lists are not linked lists (they are dynamic arrays), but if positional deletion is all you need, they support it directly:
thelist = [1, 2, 3]
# delete the element at index 2 (the third one)
del thelist[2]
A:
Assume the linked list has more than one node, such as n1->n2->n3, and you want to delete n2.
n1.next = n1.next.next
n2.next = None
If you want to delete n1, which is the head:
head = n1.next
n1.next = None
A:
This is how I did it.
def delete_at_index(self, index):
    length = self.get_length()
    # Perform deletion only if the index to delete is within the length of the list.
    if index < length:
        itr = self.head
        count = 0
        # If the index to delete is the zeroth.
        if count == index:
            self.head = itr.next
            return
        while itr.next:
            if count == index - 1:
                try:
                    # If the index to delete is any index other than the first and last.
                    itr.next = itr.next.next
                except AttributeError:
                    # If the index to delete is the last index.
                    itr.next = None
                return
            count += 1
            itr = itr.next

def get_length(self):
    itr = self.head
    count = 0
    while itr:
        count += 1
        itr = itr.next
    return count
Q:
how to reinstall a python environment offline?
I have a server without an Internet connection. I would like to copy the current Python environment from another machine (set up with pip and conda) to this server by disk. That is, I need to know which packages are installed, download these packages, and reinstall them on the server. Is there any way to manage the whole process automatically?
Thanks in advance.
A:
Use pip freeze to create a list of installed packages and pip download to download them. Move them to your offline location and install them all with pip install:
$ pip freeze > requirements.txt
$ pip download -r ../requirements.txt -d packages
$ # move packages/* to offline host
offline_host$ pip install packages/*
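The question also mentions conda. A rough conda-side sketch (my own suggestion; it assumes the downloaded packages are placed into the offline machine's conda package cache) could be:
$ conda list --explicit > spec-file.txt
$ # move spec-file.txt and the cached packages to the offline host
offline_host$ conda create --name myenv --file spec-file.txt --offline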
Q:
How to calculate peaks and valleys with first and last element using Numpy in performant way?
I want to find peaks and valleys in a single array, and I have achieved this using the link. However, pv does not include the first element or the last element. How can I include them?
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# example data with peaks:
x = np.linspace(-1,3,1000)
data = -0.1*np.cos(12*x)+ np.exp(-(1-x)**2)
# ___ detection of local minimums and maximums ___
pv = np.diff(np.sign(np.diff(data))).nonzero()[0] + 1
A:
Here is how you can do this:
import numpy as np
import matplotlib.pyplot as plt
# example data with peaks:
x = np.linspace(-1, 3, 1000)
data = -0.1 * np.cos(12 * x) + np.exp(-((1 - x) ** 2))
# ___ detection of local minimums and maximums ___
pv = np.diff(np.sign(np.diff(data))).nonzero()[0] + 1
pv = np.concatenate(([0], pv, [len(x) - 1]))
plt.plot(x, data, c="black", zorder=1)
plt.scatter(x[pv][::2], data[pv][::2], c="red", zorder=2)
plt.scatter(x[pv][1::2], data[pv][1::2], c="blue", zorder=3)
plt.show()
Result: (plot of the data with the peaks and valleys marked by the red and blue scatter points)
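As a side note (my own addition): if you need the interior maxima and minima separately instead of relying on alternation, the sign of the second difference distinguishes them:
d2 = np.diff(np.sign(np.diff(data)))
peaks = (d2 < 0).nonzero()[0] + 1    # slope flips from + to - at a local maximum
valleys = (d2 > 0).nonzero()[0] + 1  # slope flips from - to + at a local minimum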
Q:
regex to add space b/w alphabet and number
I have a string that contains letters and numbers.
The string is like
iPhone8s
and I want the output to be iPhone 8s.
I have written this code:
s = 'iPhone8s'
re.sub('(\d+(\.\d+)?)', r' \1 ', s).strip()
but it outputs iPhone 8 s. How can I make the output iPhone 8s?
I only want this for strings that contain iPhone in them; I don't want to transform random8 into random 8.
A:
i only want it for string that contains iPhone
then you have to put the iPhone into the regex
import re
s = 'iphone8s'
s = re.sub('iPhone(\d+)', r'iPhone \1', s, 0, re.IGNORECASE)
print(s) # iPhone 8s
s = 'random8s'
s = re.sub('iPhone(\d+)', r'iPhone \1', s, 0, re.IGNORECASE)
print(s) # random8s
A:
Easy to understand, and works if you are looking specifically for the word iphone:
result = 'iphone8s'.lower().replace('iphone', 'iPhone ')
# iPhone 8s  (note: lower() also lowercases the rest of the string)
Q:
What if we put zero at the place of negative index in a string?
name = "Deepesh"
print(name[0:0]) # this does not print anything
Q1. Why is the above statement not printing anything?
print(name[-5:0]) # this also does not print anything
Q2. Why is the given statement not printing the values present from index -5 to -1?
Thanks for reading!
I have tried all the patterns similar to these kinds of statements. I know what the output will be, but why does this output occur? Actually, I want to know how the interpreter reads these kinds of statements.
For example:
name[a : b : c]
Which of the parameters will the interpreter read first: a, b, or c?
A:
The first statement prints nothing because a slice is half-open: the right index is excluded, so [0:0] starts at index 0 and stops before index 0, selecting nothing.
The second statement prints nothing because the default step is positive: [-5:0] means the same as [-5:0:1], i.e. start at position -5 and stop at position 0, moving left to right. Since the stop position lies to the left of the start, the slice is empty. To traverse in the opposite direction, make the step negative and consistent with the start and stop, i.e. [-5:0:-1].
The specific code is as follows:
name = "Deepesh"
print(name[-5:0:-1])
outinfo: ee
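If the goal was instead the values from index -5 through the end, in order, simply omit the stop (a small addition of mine):
name = "Deepesh"
print(name[-5:])  # epesh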
Q:
How can we "associate" a Python context manager to the variables appearing in its block?
As I understand it, context managers are used in Python for defining initializing and finalizing pieces of code (__enter__ and __exit__) for an object.
However, in the tutorial for PyMC3 they show the following context manager example:
basic_model = pm.Model()
with basic_model:
# Priors for unknown model parameters
alpha = pm.Normal('alpha', mu=0, sd=10)
beta = pm.Normal('beta', mu=0, sd=10, shape=2)
sigma = pm.HalfNormal('sigma', sd=1)
# Expected value of outcome
mu = alpha + beta[0]*X1 + beta[1]*X2
# Likelihood (sampling distribution) of observations
Y_obs = pm.Normal('Y_obs', mu=mu, sd=sigma, observed=Y)
and mention that this has the purpose of associating the variables alpha, beta, sigma, mu and Y_obs to the model basic_model.
I would like to understand how such a mechanism works. In the explanations of context managers I have found, I did not see anything suggesting how variables or objects defined within the context's block get somehow "associated" with the context manager. It would seem that the library (PyMC3) somehow has access to the "current" context manager so it can associate each newly created object with it behind the scenes. But how can the library get access to the context manager?
A:
PyMC3 does this by maintaining a thread local variable as a class variable inside the Context class. Models inherit from Context.
Each time you call with on a model, the current model gets pushed onto the thread-specific context stack. The top of the stack thus always refers to the innermost (most recent) model used as a context manager.
Contexts (and thus Models) have a .get_context() class method to obtain the top of the context stack.
Distributions call Model.get_context() when they are created to associate themselves with the innermost model.
So in short:
with model pushes model onto the context stack. This means that inside of the with block, type(model).contexts or Model.contexts, or Context.contexts now contain model as its last (top-most) element.
Distribution.__init__() calls Model.get_context() (note capital M), which returns the top of the context stack. In our case this is model. The context stack is thread-local (there is one per thread), but it is not instance-specific. If there is only a single thread, there also is only a single context stack, regardless of the number of models.
When exiting the context manager, model gets popped from the context stack.
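To illustrate the mechanism described above, here is a stripped-down sketch of such a thread-local context stack (an illustration of the pattern, not PyMC3's actual code):
import threading

class Context:
    _local = threading.local()  # class-level attribute, but storage is per-thread

    @classmethod
    def get_contexts(cls):
        # lazily create this thread's stack
        if not hasattr(cls._local, "stack"):
            cls._local.stack = []
        return cls._local.stack

    @classmethod
    def get_context(cls):
        return cls.get_contexts()[-1]  # innermost active context

    def __enter__(self):
        self.get_contexts().append(self)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.get_contexts().pop()

class Model(Context):
    pass

class Distribution:
    def __init__(self, name):
        self.name = name
        self.model = Model.get_context()  # attach to the innermost model

model = Model()
with model:
    alpha = Distribution('alpha')
print(alpha.model is model)  # True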
A:
I don't know how it works in this specific case, but in general you will use some 'behind the scenes magic':
class Parent:
    def __init__(self):
        self.active_child = None

    def ContextManager(self):
        return Child(self)

    def Attribute(self):
        return self.active_child.Attribute()

class Child:
    def __init__(self, parent):
        self.parent = parent

    def __enter__(self):
        self.parent.active_child = self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.parent.active_child = None

    def Attribute(self):
        print("Called Attribute of child")
Using this code:
p = Parent()
with p.ContextManager():
    attr = p.Attribute()
will yield the following output:
Called Attribute of child
A:
One can also inspect the stack for locals() variables when entering and exiting the context manager block and identify which one have changed.
class VariablePostProcessor(object):
"""Context manager that applies a function to all newly defined variables in the context manager.
with VariablePostProcessor(print):
a = 1
b = 3
It uses the (name, id(obj)) of the variable & object to detect if a variable has been added.
If a name is already bound before the block to an object, it will detect the assignment to this name
in the context manager block only if the id of the object has changed.
a = 1
b = 2
with VariablePostProcessor(print):
a = 1
b = 3
# will only detect 'b' has newly defined variable/object. 'a' will not be detected as it points to the
# same object 1
"""
@staticmethod
def variables():
# get the locals 2 stack above
# (0 is this function, 1 is the __init__/__exit__ level, 2 is the context manager level)
return {(k, id(v)): v for k, v in inspect.stack()[2].frame.f_locals.items()}
def __init__(self, post_process):
self.post_process = post_process
# save the current stack
self.dct = self.variables()
def __enter__(self):
return
def __exit__(self, type, value, traceback):
# compare variables defined at __exit__ with variables defined at __enter__
dct_exit, dct_enter = self.variables(), self.dct
for (name, id_) in set(dct_exit).difference(dct_enter):
self.post_process(name, dct_exit[(name, id_)])
Typical use can be:
# let us define a Variable object that has a 'name' attribute that can be defined at initialisation time or later
class Variable:
def __init__(self, name=None):
self.name = name
# the following code
x = Variable('x')
y = Variable('y')
print(x.name, y.name)
# can be replaced by
with VariablePostProcessor(lambda name, obj: setattr(obj, "name", name)):
x = Variable()
y = Variable()
print(x.name, y.name)
# in such case, you can also define as a convenience
import functools
AutoRenamer = functools.partial(VariablePostProcessor, post_process=lambda name, obj: setattr(obj, "name", name))
# and rewrite the above code as
with AutoRenamer():
x = Variable()
y = Variable()
print(x.name, y.name) # => x y
|
How can we "associate" a Python context manager to the variables appearing in its block?
|
As I understand it, context managers are used in Python for defining initializing and finalizing pieces of code (__enter__ and __exit__) for an object.
However, in the tutorial for PyMC3 they show the following context manager example:
basic_model = pm.Model()
with basic_model:
# Priors for unknown model parameters
alpha = pm.Normal('alpha', mu=0, sd=10)
beta = pm.Normal('beta', mu=0, sd=10, shape=2)
sigma = pm.HalfNormal('sigma', sd=1)
# Expected value of outcome
mu = alpha + beta[0]*X1 + beta[1]*X2
# Likelihood (sampling distribution) of observations
Y_obs = pm.Normal('Y_obs', mu=mu, sd=sigma, observed=Y)
and mention that this has the purpose of associating the variables alpha, beta, sigma, mu and Y_obs to the model basic_model.
I would like to understand how such a mechanism works. In the explanations of context managers I have found, I did not see anything suggesting how variables or objects defined within the context's block get somehow "associated" to the context manager. It would seem that the library (PyMC3) somehow has access to the "current" context manager so it can associate each newly created statement to it behind the scenes. But how can the library get access to the context manager?
|
[
"PyMC3 does this by maintaining a thread local variable as a class variable inside the Context class. Models inherit from Context.\nEach time you call with on a model, the current model gets pushed onto the thread-specific context stack. The top of the stack thus always refers to the innermost (most recent) model used as a context manager.\nContexts (and thus Models) have a .get_context() class method to obtain the top of the context stack.\nDistributions call Model.get_context() when they are created to associate themselves with the innermost model.\nSo in short:\n\nwith model pushes model onto the context stack. This means that inside of the with block, type(model).contexts or Model.contexts, or Context.contexts now contain model as its last (top-most) element.\nDistribution.__init__() calls Model.get_context() (note capital M), which returns the top of the context stack. In our case this is model. The context stack is thread-local (there is one per thread), but it is not instance-specific. If there is only a single thread, there also is only a single context stack, regardless of the number of models.\nWhen exiting the context manager. model gets popped from the context stack.\n\n",
"I don't know how it works in this specific case, but in general you will use some 'behind the scenes magic':\nclass Parent:\n def __init__(self):\n self.active_child = None\n\n def ContextManager(self):\n return Child(self)\n\n def Attribute(self):\n return self.active_child.Attribute()\n\nclass Child:\n def __init__(self,parent):\n self.parent = parent\n\n def __enter__(self):\n self.parent.active_child = self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n self.parent.active_child = None\n\n def Attribute(self):\n print(\"Called Attribute of child\")\n\nUsing this code:\np = Parent()\nwith p.ContextManager():\n attr = p.Attribute()\n\nwill yield to following output:\nCalled Attribute of child\n\n",
"One can also inspect the stack for locals() variables when entering and exiting the context manager block and identify which one have changed.\nclass VariablePostProcessor(object):\n \"\"\"Context manager that applies a function to all newly defined variables in the context manager.\n\n with VariablePostProcessor(print):\n a = 1\n b = 3\n\n It uses the (name, id(obj)) of the variable & object to detect if a variable has been added.\n If a name is already binded before the block to an object, it will detect the assignment to this name\n in the context manager block only if the id of the object has changed.\n\n a = 1\n b = 2\n with VariablePostProcessor(print):\n a = 1\n b = 3\n # will only detect 'b' has newly defined variable/object. 'a' will not be detected as it points to the\n # same object 1\n \"\"\"\n\n @staticmethod\n def variables():\n # get the locals 2 stack above\n # (0 is this function, 1 is the __init__/__exit__ level, 2 is the context manager level)\n return {(k, id(v)): v for k, v in inspect.stack()[2].frame.f_locals.items()}\n\n def __init__(self, post_process):\n self.post_process = post_process\n # save the current stack\n self.dct = self.variables()\n\n def __enter__(self):\n return\n\n def __exit__(self, type, value, traceback):\n # compare variables defined at __exist__ with variables defined at __enter__\n dct_exit, dct_enter = self.variables(), self.dct\n for (name, id_) in set(dct_exit).difference(dct_enter):\n self.post_process(name, dct_exit[(name, id_)])\n\nTypical use can be:\n# let us define a Variable object that has a 'name' attribute that can be defined at initialisation time or later\nclass Variable:\n def __init__(self, name=None):\n self.name = name\n\n# the following code\nx = Variable('x')\ny = Variable('y')\nprint(x.name, y.name)\n\n# can be replaced by\nwith VariablePostProcessor(lambda name, obj: setattr(obj, \"name\", name)):\n x = Variable()\n y = Variable()\nprint(x.name, y.name)\n\n# in such case, you can also define as a convenience\nimport functools\nAutoRenamer = functools.partial(VariablePostProcessor, post_process=lambda name, obj: setattr(obj, \"name\", name))\n\n# and rewrite the above code as\nwith AutoRenamer():\n x = Variable()\n y = Variable()\nprint(x.name, y.name) # => x y\n\n"
] |
[
7,
4,
1
] |
[] |
[] |
[
"contextmanager",
"pymc3",
"python"
] |
stackoverflow_0051849395_contextmanager_pymc3_python.txt
|
Q:
Get input from the terminal, and send it into a channel in discord.py
I have spent a decent amount of time trying to find a solution to this; as far as I am aware, no one has asked or done this online. What I want is to get user input FROM THE TERMINAL and then send it into a channel I specify (I don't need to change channels). As for getting the chat, I don't know how to print what the other users are saying. I don't really need to build commands into it. I am not as experienced with Discord.py as I am with Discord.js, but I don't want to do this in discord.js. Can anyone help me?
A:
This can be achieved by creating a function to handle messages from the console:
async def sendFromConsole():
run = True
while run:
message = input('Enter message: ')
channel = client.get_channel(CHANNEL_ID)
await channel.send(message)
This repeatedly asks for your input in the console and will send the string to the desired channel. (Be sure to call this function from your on_ready function beforehand)
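One caveat: a bare input() call blocks the event loop while it waits, so the bot may stop responding in the meantime. A sketch of a non-blocking variant using run_in_executor (CHANNEL_ID, the token, and the client setup are placeholders/assumptions, not part of the original answer):
import asyncio
import discord

client = discord.Client(intents=discord.Intents.default())
CHANNEL_ID = 123456789012345678  # placeholder: your channel's ID

async def send_from_console():
    loop = asyncio.get_running_loop()
    while True:
        # Run the blocking input() in a worker thread so the loop keeps running
        message = await loop.run_in_executor(None, input, 'Enter message: ')
        channel = client.get_channel(CHANNEL_ID)
        await channel.send(message)

@client.event
async def on_ready():
    asyncio.create_task(send_from_console())

client.run('YOUR_TOKEN')  # placeholder token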
|
Get input from the terminal, and send it into a channel in discord.py
|
I have spent a decent amount of time trying to find a solution to this; as far as I am aware, no one has asked or done this online. What I want is to get user input FROM THE TERMINAL and then send it into a channel I specify (I don't need to change channels). As for getting the chat, I don't know how to print what the other users are saying. I don't really need to build commands into it. I am not as experienced with Discord.py as I am with Discord.js, but I don't want to do this in discord.js. Can anyone help me?
|
[
"This can be achieved by creating a function to handle messages from the console:\n@client.event\nasync def sendFromConsole():\n run = True\n while run:\n message = input('Enter message: ')\n channel = client.get_channel(CHANNEL_ID)\n await channel.send(message)\n\nThis repeatedly asks for your input in the console and will send the string to the desired channel. (Be sure to call this function from your on_ready function beforehand)\n"
] |
[
1
] |
[] |
[] |
[
"discord.py",
"input",
"python"
] |
stackoverflow_0074478106_discord.py_input_python.txt
|
Q:
python Gtk3 - Set label of button to default value of None
I am trying to reset a label of a button to its initial (default) value of None, which does not work as expected. Here's the minimal example:
from gi import require_version
require_version('Gtk', '3.0')
from gi.repository import Gtk
class GUI(Gtk.Window):
def __init__(self):
super().__init__()
self.connect('destroy', Gtk.main_quit)
self.set_position(Gtk.WindowPosition.CENTER)
self.grid = Gtk.Grid(hexpand=True, vexpand=True)
self.add(self.grid)
button = Gtk.Button(hexpand=True, vexpand=True)
self.grid.attach(button, 0, 0, 1, 1)
button.connect('clicked', self.on_button_clicked)
def on_button_clicked(self, button: Gtk.Button) -> None:
print(label := button.get_label(), type(label))
button.set_label(label)
def main() -> None:
win = GUI()
win.show_all()
Gtk.main()
if __name__ == '__main__':
main()
Result:
$ python example.py
None <class 'NoneType'>
Traceback (most recent call last):
File "/home/neumann/example.py", line 21, in on_button_clicked
button.set_label(label)
TypeError: Argument 1 does not allow None as a value
What's the correct way to do this?
Note: Before somebody suggests it: I do not want to set the label to an empty string, since that will change the size of the button, which is noticeable on a larger grid of buttons:
from gi import require_version
require_version('Gtk', '3.0')
from gi.repository import Gtk
class GUI(Gtk.Window):
def __init__(self):
super().__init__()
self.connect('destroy', Gtk.main_quit)
self.set_position(Gtk.WindowPosition.CENTER)
self.grid = Gtk.Grid(hexpand=True, vexpand=True)
self.add(self.grid)
for x in range(3):
button = Gtk.Button(hexpand=True, vexpand=True)
button.connect('clicked', self.on_button_clicked)
self.grid.attach(button, x, 0, 1, 1)
def on_button_clicked(self, button: Gtk.Button) -> None:
print(label := button.get_label(), type(label))
button.set_label('')
def main() -> None:
win = GUI()
win.show_all()
Gtk.main()
if __name__ == '__main__':
main()
A:
I'm not sure what your use case is, but you can try adding a GtkLabel child and set the string there:
from gi import require_version
require_version('Gtk', '3.0')
from gi.repository import Gtk
class GUI(Gtk.Window):
def __init__(self):
super().__init__()
self.connect('destroy', Gtk.main_quit)
self.set_position(Gtk.WindowPosition.CENTER)
self.grid = Gtk.Grid(hexpand=True, vexpand=True)
self.add(self.grid)
for x in range(3):
button = Gtk.Button(hexpand=True, vexpand=True)
label = Gtk.Label()
button.add(label)
button.connect('clicked', self.on_button_clicked)
self.grid.attach(button, x, 0, 1, 1)
def on_button_clicked(self, button: Gtk.Button) -> None:
label = button.get_child()
print(text := label.get_label(), type(text))
label.set_label('')
# or hide it if you want
# label.hide()
def main() -> None:
win = GUI()
win.show_all()
Gtk.main()
if __name__ == '__main__':
main()
GtkButton may be creating the internal GtkLabel child only when a label is set (which should be a valid string). And since the hexpand and vexpand for the GtkButton are set to True, they may be getting propagated to the internal GtkLabel.
If you simply want all the buttons to have same width and height, you may only need grid.set_row_homogeneous() and grid.set_column_homogeneous()
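For example (a short sketch; grid is the Gtk.Grid from the code above):
grid = Gtk.Grid(hexpand=True, vexpand=True)
grid.set_row_homogeneous(True)     # every row gets the same height
grid.set_column_homogeneous(True)  # every column gets the same width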
|
python Gtk3 - Set label of button to default value of None
|
I am trying to reset a label of a button to its initial (default) value of None, which does not work as expected. Here's the minimal example:
from gi import require_version
require_version('Gtk', '3.0')
from gi.repository import Gtk
class GUI(Gtk.Window):
def __init__(self):
super().__init__()
self.connect('destroy', Gtk.main_quit)
self.set_position(Gtk.WindowPosition.CENTER)
self.grid = Gtk.Grid(hexpand=True, vexpand=True)
self.add(self.grid)
button = Gtk.Button(hexpand=True, vexpand=True)
self.grid.attach(button, 0, 0, 1, 1)
button.connect('clicked', self.on_button_clicked)
def on_button_clicked(self, button: Gtk.Button) -> None:
print(label := button.get_label(), type(label))
button.set_label(label)
def main() -> None:
win = GUI()
win.show_all()
Gtk.main()
if __name__ == '__main__':
main()
Result:
$ python example.py
None <class 'NoneType'>
Traceback (most recent call last):
File "/home/neumann/example.py", line 21, in on_button_clicked
button.set_label(label)
TypeError: Argument 1 does not allow None as a value
What's the correct way to do this?
Note: Before somebody suggests it: I do not want to set the label to an empty string, since that will change the size of the button, which is noticeable on a larger grid of buttons:
from gi import require_version
require_version('Gtk', '3.0')
from gi.repository import Gtk
class GUI(Gtk.Window):
def __init__(self):
super().__init__()
self.connect('destroy', Gtk.main_quit)
self.set_position(Gtk.WindowPosition.CENTER)
self.grid = Gtk.Grid(hexpand=True, vexpand=True)
self.add(self.grid)
for x in range(3):
button = Gtk.Button(hexpand=True, vexpand=True)
button.connect('clicked', self.on_button_clicked)
self.grid.attach(button, x, 0, 1, 1)
def on_button_clicked(self, button: Gtk.Button) -> None:
print(label := button.get_label(), type(label))
button.set_label('')
def main() -> None:
win = GUI()
win.show_all()
Gtk.main()
if __name__ == '__main__':
main()
|
[
"I'm not sure what your use case is, but you can try adding a GtkLabel child and set the string there:\nfrom gi import require_version\nrequire_version('Gtk', '3.0')\nfrom gi.repository import Gtk\n\n\nclass GUI(Gtk.Window):\n\n def __init__(self):\n super().__init__()\n self.connect('destroy', Gtk.main_quit)\n self.set_position(Gtk.WindowPosition.CENTER)\n\n self.grid = Gtk.Grid(hexpand=True, vexpand=True)\n self.add(self.grid)\n\n for x in range(3):\n button = Gtk.Button(hexpand=True, vexpand=True)\n label = Gtk.Label()\n button.add(label)\n button.connect('clicked', self.on_button_clicked)\n self.grid.attach(button, x, 0, 1, 1)\n\n def on_button_clicked(self, button: Gtk.Button) -> None:\n label = button.get_child()\n print(text := label.get_label(), type(text))\n label.set_label('')\n # or hide it if you want\n # label.hide()\n\n\ndef main() -> None:\n\n win = GUI()\n win.show_all()\n Gtk.main()\n\n\nif __name__ == '__main__':\n main()\n\nGtkButton may be creating the internal GtkLabel child only when a label is set (which should be a valid string). And since the hexpand and vexpand for the GtkButton are set to True, they may be getting propagated to the internal GtkLabel.\nIf you simply want all the buttons to have same width and height, you may only need grid.set_row_homogeneous() and grid.set_column_homogeneous()\n"
] |
[
1
] |
[] |
[] |
[
"gtk",
"gtk3",
"pygtk",
"python",
"python_3.x"
] |
stackoverflow_0074547290_gtk_gtk3_pygtk_python_python_3.x.txt
|
Q:
Building basic terminal with Python and change directory function isn't changing directory
cd function isn't changing the directory for some reason! Whenever I use it in my terminal, it temporarily changes the directory; when I move to the next command, the change is undone.
import os
import pathlib
from os.path import join
path = os.getcwd()
# DONE
def ls():
os.listdir(path)
print(os.listdir(path))
def pwd():
print(os.getcwd())
def touch(file_name):
fp = open(join(path, file_name), 'a')
fp.close()
def rm(file_name):
file = pathlib.Path(join(path, file_name))
file.unlink()
def cd(file_name):
os.chdir(join(path, file_name))
while True < 100:
dirName = input()
cmd = dirName.split(" ")[0]
if cmd == "ls": # DONE
ls()
elif cmd == "pwd": # DONE
pwd()
elif cmd == "cd": # DONE
file_name = dirName.split(" ")[1]
cd(file_name)
print(os.getcwd())
elif cmd == "touch": # DONE
file_name = dirName.split(" ")[1]
touch(file_name)
elif cmd == "rm": # DONE
file_name = dirName.split(" ")[1]
rm(file_name)
elif cmd == 'cd': # DONE
file_name = dirName.split(" ")[1]
cd(file_name)
print(pwd(file_name))
else:
print("Command not found!")
The problem is with the cd function, it's not working!
def cd(file_name):
os.chdir(join(path, file_name))
It is expected that the cd function change directory.
A:
Notice how you set the initial value of path to os.getcwd and then you use it in the cd function.
This won't work like the cd command for every input because you can only access files and folders inside path.
What inputs have you tried?
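For reference, a sketch of a fix: resolve paths against the current working directory at call time instead of the path cached at startup:
import os

def cd(dir_name):
    # join against the *current* directory, not the one saved when the script started
    os.chdir(os.path.join(os.getcwd(), dir_name))

def ls():
    print(os.listdir(os.getcwd()))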
|
Building basic terminal with Python and change directory function isn't changing directory
|
cd function isn't changing the directory for some reason! Whenever I use it in my terminal, it temporarily changes the directory; when I move to the next command, the change is undone.
import os
import pathlib
from os.path import join
path = os.getcwd()
# DONE
def ls():
os.listdir(path)
print(os.listdir(path))
def pwd():
print(os.getcwd())
def touch(file_name):
fp = open(join(path, file_name), 'a')
fp.close()
def rm(file_name):
file = pathlib.Path(join(path, file_name))
file.unlink()
def cd(file_name):
os.chdir(join(path, file_name))
while True < 100:
dirName = input()
cmd = dirName.split(" ")[0]
if cmd == "ls": # DONE
ls()
elif cmd == "pwd": # DONE
pwd()
elif cmd == "cd": # DONE
file_name = dirName.split(" ")[1]
cd(file_name)
print(os.getcwd())
elif cmd == "touch": # DONE
file_name = dirName.split(" ")[1]
touch(file_name)
elif cmd == "rm": # DONE
file_name = dirName.split(" ")[1]
rm(file_name)
elif cmd == 'cd': # DONE
file_name = dirName.split(" ")[1]
cd(file_name)
print(pwd(file_name))
else:
print("Command not found!")
The problem is with the cd function, it's not working!
def cd(file_name):
os.chdir(join(path, file_name))
It is expected that the cd function change directory.
|
[
"Notice how you set the initial value of path to os.getcwd and then you use it in the cd function.\nThis won't work like the cd command for every input because you can only access files and folders inside path.\nWhat inputs have you tried?\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074555861_python.txt
|
Q:
Why do I get "RuntimeError: This event loop is already running"?
I am running the following code that tries to get some information from https://cdn.ime.co.ir but it gives me this error:
RuntimeError: This event loop is already running
My code is:
import requests
import json
import asyncio
import websockets
import urllib
import random
from threading import Thread
connectionData = [{"name":"marketshub"}]
r = requests.get("https://cdn.ime.co.ir/realTimeServer/negotiate", params = {
"clientProtocol": "2.1",
"connectionData": json.dumps(connectionData),
})
response = r.json()
#print(f'got connection token : {response["ConnectionToken"]}')
wsParams = {
"transport": "webSockets",
"clientProtocol": "2.1",
"connectionToken": response["ConnectionToken"],
"connectionData": json.dumps(connectionData),
"tid": random.randint(0,9)
}
websocketUri = f"wss://cdn.ime.co.ir/realTimeServer/connect?{urllib.parse.urlencode(wsParams)}"
def startReceiving(arg):
r = requests.get("https://cdn.ime.co.ir/realTimeServer/start", params = wsParams)
print(f'started receiving data : {r.json()}')
result = []
async def websocketConnect():
async with websockets.connect(websocketUri) as websocket:
print(f'started websocket')
thread = Thread(target = startReceiving, args = (0, ))
thread.start()
for i in range(0,10):
print("receiving")
data = await websocket.recv()
jsonData = json.loads(data)
if ("M" in jsonData and len(jsonData["M"]) > 0 and "A" in jsonData["M"][0] and len(jsonData["M"][0]["A"]) > 0):
items = jsonData["M"][0]["A"][0]
if type(items) == list and len(items) > 0:
result = items
break
thread.join()
print(json.dumps(result, indent=4, sort_keys=True))
asyncio.get_event_loop().run_until_complete(websocketConnect())
#print([i for i in result if i["ContractCode"] == "SAFOR99"])
Why do I get this error message and how can I resolve the problem?
EDIT: This is the full error message in the Spyder IDE:
Python 3.7.3 (default, Apr 24 2019, 15:29:51) [MSC v.1915 64 bit (AMD64)]
Type "copyright", "credits" or "license" for more information.
IPython 7.7.0 -- An enhanced Interactive Python.
runfile('C:/Users/m/Desktop/python/selen/auto_new.py', wdir='C:/Users/m/Desktop/python/selen')
Traceback (most recent call last):
File "<ipython-input-1-6b5319e4825b>", line 1, in <module>
runfile('C:/Users/m/Desktop/python/selen/auto_new.py', wdir='C:/Users/m/Desktop/python/selen')
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/m/Desktop/python/selen/auto_new.py", line 52, in <module>
asyncio.get_event_loop().run_until_complete(websocketConnect())
File "C:\ProgramData\Anaconda3\lib\asyncio\base_events.py", line 571, in run_until_complete
self.run_forever()
File "C:\ProgramData\Anaconda3\lib\asyncio\base_events.py", line 526, in run_forever
raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
A:
You're in an IPython kernel. Don't run this in an IPython kernel. They changed their async handling a while back in a way that breaks this kind of thing; the kernel itself is running in the event loop already. (Terminal IPython works a bit differently, so this might still work in IPython in a terminal, but I'd stick with running it through Python directly.)
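If you do need to run it inside IPython/Jupyter anyway, a common workaround (my suggestion, not part of the original answer) is the nest_asyncio package, which patches the already-running loop to allow re-entry:
import nest_asyncio
nest_asyncio.apply()

# run_until_complete now works even though the kernel's loop is running
asyncio.get_event_loop().run_until_complete(websocketConnect())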
A:
Install twint using:
!pip3 install --user --upgrade git+https://github.com/twintproject/twint.git@origin/master#egg=twint
if you are using Colab. This resolved the issue for me.
|
Why do I get "RuntimeError: This event loop is already running"?
|
I am running the following code that tries to get some information from https://cdn.ime.co.ir but it gives me this error:
RuntimeError: This event loop is already running
My code is:
import requests
import json
import asyncio
import websockets
import urllib
import random
from threading import Thread
connectionData = [{"name":"marketshub"}]
r = requests.get("https://cdn.ime.co.ir/realTimeServer/negotiate", params = {
"clientProtocol": "2.1",
"connectionData": json.dumps(connectionData),
})
response = r.json()
#print(f'got connection token : {response["ConnectionToken"]}')
wsParams = {
"transport": "webSockets",
"clientProtocol": "2.1",
"connectionToken": response["ConnectionToken"],
"connectionData": json.dumps(connectionData),
"tid": random.randint(0,9)
}
websocketUri = f"wss://cdn.ime.co.ir/realTimeServer/connect?{urllib.parse.urlencode(wsParams)}"
def startReceiving(arg):
r = requests.get("https://cdn.ime.co.ir/realTimeServer/start", params = wsParams)
print(f'started receiving data : {r.json()}')
result = []
async def websocketConnect():
async with websockets.connect(websocketUri) as websocket:
print(f'started websocket')
thread = Thread(target = startReceiving, args = (0, ))
thread.start()
for i in range(0,10):
print("receiving")
data = await websocket.recv()
jsonData = json.loads(data)
if ("M" in jsonData and len(jsonData["M"]) > 0 and "A" in jsonData["M"][0] and len(jsonData["M"][0]["A"]) > 0):
items = jsonData["M"][0]["A"][0]
if type(items) == list and len(items) > 0:
result = items
break
thread.join()
print(json.dumps(result, indent=4, sort_keys=True))
asyncio.get_event_loop().run_until_complete(websocketConnect())
#print([i for i in result if i["ContractCode"] == "SAFOR99"])
Why do I get this error message and how can I resolve the problem?
EDIT: This is the full error message in the Spyder IDE:
Python 3.7.3 (default, Apr 24 2019, 15:29:51) [MSC v.1915 64 bit (AMD64)]
Type "copyright", "credits" or "license" for more information.
IPython 7.7.0 -- An enhanced Interactive Python.
runfile('C:/Users/m/Desktop/python/selen/auto_new.py', wdir='C:/Users/m/Desktop/python/selen')
Traceback (most recent call last):
File "<ipython-input-1-6b5319e4825b>", line 1, in <module>
runfile('C:/Users/m/Desktop/python/selen/auto_new.py', wdir='C:/Users/m/Desktop/python/selen')
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/m/Desktop/python/selen/auto_new.py", line 52, in <module>
asyncio.get_event_loop().run_until_complete(websocketConnect())
File "C:\ProgramData\Anaconda3\lib\asyncio\base_events.py", line 571, in run_until_complete
self.run_forever()
File "C:\ProgramData\Anaconda3\lib\asyncio\base_events.py", line 526, in run_forever
raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
|
[
"You're in an IPython kernel. Don't run this in an IPython kernel. They changed their async handling a while back in a way that breaks this kind of thing; the kernel itself is running in the event loop already. (Terminal IPython works a bit differently, so this might still work in IPython in a terminal, but I'd stick with running it through Python directly.)\n",
"Install twint using:\n!pip3 install --user --upgrade git+https://github.com/twintproject/twint.git@origin/master#egg=twint\nif you are using Colab. This resolved the issue or me.\n"
] |
[
2,
0
] |
[] |
[] |
[
"asynchronous",
"event_loop",
"python",
"runtime_error"
] |
stackoverflow_0060926440_asynchronous_event_loop_python_runtime_error.txt
|
Q:
Unable to get kafka connect redshift connector working
Following the question and suggestions here: Kafka JDBCSinkConnector Schema exception: JsonConverter with schemas.enable requires "schema" and "payload", I am trying to sink records into redshift using redshift connector and a producer written in python. Here is the connector config:
connector.class=io.confluent.connect.aws.redshift.RedshiftSinkConnector
aws.redshift.port=5439
confluent.topic.bootstrap.servers=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
tasks.max=2
topics=test_topic_3
aws.redshift.password=xxxxxxxxxxxxxxxxxxxxxxxxxxxx
aws.redshift.domain=xxxxxxxxxxxxxxxxxxxxxxxx.redshift.amazonaws.com
aws.redshift.database=xxxxxxxxxxxxxxxxxxx
confluent.topic.replication.factor=1
aws.redshift.user=xxxxxxxxxxxxxxxxxxxxxxxx
auto.create=true
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
pk.mode=kafka
The content in the schema file is as under:
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"url"}],"optional":false,"name":"test_data"},"payload":{"id":12,"url":"some_url"}}
and the python code is:
from kafka import KafkaProducer
import json
producer = KafkaProducer(bootstrap_servers="xxxxxxxxxxxxx",value_serializer=lambda v: json.dumps(v).encode('utf-8'))
with open("connector_test_schema.json", 'r') as file:
read = file.read()
for i in range(1):
producer.send("test_topic_3", key='abc'.encode('utf-8'), value=read)
producer.close()
I still get the following error:
[Worker-0fbc0b18922b147e0] org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
Is there something that's wrong with what I'm doing?
A:
As suggested by @OneCricketeer and confirmed, encoding was done twice, causing it to fail.
Solution: Only encode the string read from the JSON file
value_serializer=lambda v: v.encode('utf-8')
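A minimal corrected producer sketch (server, topic, and file names follow the question):
from kafka import KafkaProducer

# The file already contains a JSON string; encoding it directly avoids the
# double serialization that json.dumps() introduced.
producer = KafkaProducer(
    bootstrap_servers="xxxxxxxxxxxxx",
    value_serializer=lambda v: v.encode('utf-8'),
)

with open("connector_test_schema.json", 'r') as file:
    read = file.read()

producer.send("test_topic_3", key='abc'.encode('utf-8'), value=read)
producer.flush()
producer.close()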
|
Unable to get kafka connect redshift connector working
|
Following the question and suggestions here: Kafka JDBCSinkConnector Schema exception: JsonConverter with schemas.enable requires "schema" and "payload", I am trying to sink records into redshift using redshift connector and a producer written in python. Here is the connector config:
connector.class=io.confluent.connect.aws.redshift.RedshiftSinkConnector
aws.redshift.port=5439
confluent.topic.bootstrap.servers=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
tasks.max=2
topics=test_topic_3
aws.redshift.password=xxxxxxxxxxxxxxxxxxxxxxxxxxxx
aws.redshift.domain=xxxxxxxxxxxxxxxxxxxxxxxx.redshift.amazonaws.com
aws.redshift.database=xxxxxxxxxxxxxxxxxxx
confluent.topic.replication.factor=1
aws.redshift.user=xxxxxxxxxxxxxxxxxxxxxxxx
auto.create=true
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
pk.mode=kafka
The content in the schema file is as under:
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"url"}],"optional":false,"name":"test_data"},"payload":{"id":12,"url":"some_url"}}
and the python code is:
from kafka import KafkaProducer
import json
producer = KafkaProducer(bootstrap_servers="xxxxxxxxxxxxx",value_serializer=lambda v: json.dumps(v).encode('utf-8'))
with open("connector_test_schema.json", 'r') as file:
read = file.read()
for i in range(1):
producer.send("test_topic_3", key='abc'.encode('utf-8'), value=read)
producer.close()
I still get the following error:
[Worker-0fbc0b18922b147e0] org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
Is there something that's wrong with what I'm doing?
|
[
"As suggested by @OneCricketeer and confirmed, encoding was done twice, causing it to fail.\nSolution: Only encode the string read from the JSON file\nvalue_serializer=lambda v: v.encode('utf-8')\n\n"
] |
[
2
] |
[] |
[] |
[
"apache_kafka",
"apache_kafka_connect",
"confluent_kafka_python",
"python"
] |
stackoverflow_0074552956_apache_kafka_apache_kafka_connect_confluent_kafka_python_python.txt
|
Q:
tensorflow.linalg.eig throwing error UnboundLocalError: local variable 'out_dtype' referenced before assignment
I have the following code
import tensorflow as tf
X_tf = tf.Variable([[25, 2, 9], [5, 26, -5], [3, 7, -1]])
lambdas_X_tf, V_X_tf = tf.linalg.eig(X_tf)
when I execute it I get the error below
File "C:\Users\u1.conda\envs\py39\lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\u1.conda\envs\py39\lib\site-packages\tensorflow\python\ops\linalg_ops.py", line 406, in eig
e, v = gen_linalg_ops.eig(tensor, Tout=out_dtype, compute_v=True, name=name)
UnboundLocalError: local variable 'out_dtype' referenced before assignment
What can be the reason?
A:
You need to set dtype as float32:
X_tf = tf.Variable([[25, 2, 9], [5, 26, -5], [3, 7, -1]], dtype=tf.float32)
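Put together, a runnable version looks like this (the UnboundLocalError arises because tf.linalg.eig only has dtype branches for floating-point/complex inputs, so an int32 tensor leaves out_dtype unassigned):
import tensorflow as tf

# eig requires a float or complex dtype; the default int32 triggers the error
X_tf = tf.Variable([[25, 2, 9], [5, 26, -5], [3, 7, -1]], dtype=tf.float32)
lambdas_X_tf, V_X_tf = tf.linalg.eig(X_tf)
print(lambdas_X_tf)  # eigenvalues come back as complex64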
|
tensorflow.linalg.eig throwing error UnboundLocalError: local variable 'out_dtype' referenced before assignment
|
I have the following code
import tensorflow as tf
X_tf = tf.Variable([[25, 2, 9], [5, 26, -5], [3, 7, -1]])
lambdas_X_tf, V_X_tf = tf.linalg.eig(X_tf)
when I execute it I get the error below
File "C:\Users\u1.conda\envs\py39\lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\u1.conda\envs\py39\lib\site-packages\tensorflow\python\ops\linalg_ops.py", line 406, in eig
e, v = gen_linalg_ops.eig(tensor, Tout=out_dtype, compute_v=True, name=name)
UnboundLocalError: local variable 'out_dtype' referenced before assignment
What can be the reason?
|
[
"You need to set dtype as float32:\nX_tf = tf.Variable([[25, 2, 9], [5, 26, -5], [3, 7, -1]], dtype=tf.float32) \n\n"
] |
[
0
] |
[] |
[] |
[
"eigenvalue",
"eigenvector",
"linear_algebra",
"python",
"tensorflow"
] |
stackoverflow_0074556086_eigenvalue_eigenvector_linear_algebra_python_tensorflow.txt
|
Q:
Matplotlib bbox_inches issue
I have this very simple code:
import math
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 2*math.pi)
fig, ax = plt.subplots()
ax.plot(x,np.sin(x))
plt.savefig('sinwave.png', bbox_inches='tight')
plt.show()
So what I would expect is that there is white background inside my plotting area, but no background around it, e.g. transparent background (only) behind the ticks and tick labels.
However, something is going wrong and it isn't working, the whole background is white. When I send it to a friend it works exactly as expected.
So I thought creating a new environment might fix it but the issue persisted. I tried it in ipython, jupyter lab and jupyter notebook, all with the same result (as to be expected but I just thought I would try). I'm using python 3.10.8 and matplotlib 3.6.2 on a Mac (2019 MacBook Pro) but it also doesn't work with other versions of python or matplotlib.
This is just a simple example but I have some 3-4 month old code which did exactly what I want back then and is now not doing it any more. Very odd.
Any ideas what might be going wrong? I really thought the new environment would fix the issue but no luck.
I'm fairly new to the community so let me know if you need any further details about my system (as it seems to be an issue with my computer/python install maybe??).
Cheers
Markus
Below are both images, you can't see the difference here as it's on white background but it's straight forward if opened e.g. in illustrator or photoshop.
A:
import matplotlib.pyplot as plt

fig = plt.figure()
fig.patch.set_facecolor('white')
fig.patch.set_alpha(0.6)  # play with the alpha value

ax = fig.add_subplot(111)
ax.patch.set_facecolor('white')
ax.patch.set_alpha(0.0)  # play with the alpha value
Then simply plot:
plt.scatter(x, y)
plt.show()
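Applied to the original sine-wave example, a sketch (assuming a recent matplotlib, where savefig uses the figure's facecolor by default):
import math
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * math.pi)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x))

fig.patch.set_alpha(0.0)   # transparent margin around the axes
ax.set_facecolor('white')  # keep the plotting area white

plt.savefig('sinwave.png', bbox_inches='tight')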
|
Matplotlib bbox_inches issue
|
I have this very simple code:
import math
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 2*math.pi)
fig, ax = plt.subplots()
ax.plot(x,np.sin(x))
plt.savefig('sinwave.png', bbox_inches='tight')
plt.show()
So what I would expect is that there is white background inside my plotting area, but no background around it, e.g. transparent background (only) behind the ticks and tick labels.
However, something is going wrong and it isn't working, the whole background is white. When I send it to a friend it works exactly as expected.
So I thought creating a new environment might fix it but the issue persisted. I tried it in ipython, jupyter lab and jupyter notebook, all with the same result (as to be expected but I just thought I would try). I'm using python 3.10.8 and matplotlib 3.6.2 on a Mac (2019 MacBook Pro) but it also doesn't work with other versions of python or matplotlib.
This is just a simple example but I have some 3-4 month old code which did exactly what I want back then and is now not doing it any more. Very odd.
Any ideas what might be going wrong? I really thought the new environment would fix the issue but no luck.
I'm fairly new to the community so let me know if you need any further details about my system (as it seems to be an issue with my computer/python install maybe??).
Cheers
Markus
Below are both images, you can't see the difference here as it's on white background but it's straight forward if opened e.g. in illustrator or photoshop.
|
[
"fig = plt.figure()\nfig.patch.set_facecolor('white')\nfig.patch.set_alpha(0.6) #play with the alpha value\n\nax = fig.add_subplot(111)\nax.patch.set_facecolor('white')\nax.patch.set_alpha(0.0) #play with the alpha value\n\nThen simply plot here\nPlt.scatter(x,y)\nPlt.show()\n\n"
] |
[
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0074556178_matplotlib_python.txt
|
Q:
Is there a simple way to remove multiple spaces in a string?
Suppose this string:
The   fox jumped  over    the log.
Turning into:
The fox jumped over the log.
What is the simplest (1-2 lines) to achieve this, without splitting and going into lists?
A:
>>> import re
>>> re.sub(' +', ' ', 'The     quick brown    fox')
'The quick brown fox'
A:
foo is your string:
" ".join(foo.split())
Be warned though this removes "all whitespace characters (space, tab, newline, return, formfeed)" (thanks to hhsaffar, see comments). I.e., "this is \t a test\n" will effectively end up as "this is a test".
A:
import re
s = "The fox jumped over the log."
re.sub("\s\s+" , " ", s)
or
re.sub("\s\s+", " ", s)
since the space before the comma is listed as a pet peeve in PEP 8, as mentioned by user Martin Thoma in the comments.
A:
Using regexes with "\s" and doing simple string.split()'s will also remove other whitespace - like newlines, carriage returns, tabs. Unless this is desired, to only do multiple spaces, I present these examples.
I used 11 paragraphs, 1000 words, 6665 bytes of Lorem Ipsum to get realistic time tests and used random-length extra spaces throughout:
original_string = ''.join(word + (' ' * random.randint(1, 10)) for word in lorem_ipsum.split(' '))
The one-liner will essentially do a strip of any leading/trailing spaces, and it preserves a leading/trailing space (but only ONE ;-).
# setup = '''
import re
def while_replace(string):
    while '  ' in string:
        string = string.replace('  ', ' ')
return string
def re_replace(string):
return re.sub(r' {2,}' , ' ', string)
def proper_join(string):
split_string = string.split(' ')
# To account for leading/trailing spaces that would simply be removed
beg = ' ' if not split_string[ 0] else ''
end = ' ' if not split_string[-1] else ''
# versus simply ' '.join(item for item in string.split(' ') if item)
return beg + ' '.join(item for item in split_string if item) + end
original_string = """Lorem ipsum ... no, really, it kept going... malesuada enim feugiat. Integer imperdiet erat."""
assert while_replace(original_string) == re_replace(original_string) == proper_join(original_string)
#'''
# while_replace_test
new_string = original_string[:]
new_string = while_replace(new_string)
assert new_string != original_string
# re_replace_test
new_string = original_string[:]
new_string = re_replace(new_string)
assert new_string != original_string
# proper_join_test
new_string = original_string[:]
new_string = proper_join(new_string)
assert new_string != original_string
NOTE: The "while version" made a copy of the original_string, as I believe once modified on the first run, successive runs would be faster (if only by a bit). As this adds time, I added this string copy to the other two so that the times showed the difference only in the logic. Keep in mind that the main stmt on timeit instances will only be executed once; the original way I did this, the while loop worked on the same label, original_string, thus the second run, there would be nothing to do. The way it's set up now, calling a function, using two different labels, that isn't a problem. I've added assert statements to all the workers to verify we change something every iteration (for those who may be dubious). E.g., change to this and it breaks:
# while_replace_test
new_string = original_string[:]
new_string = while_replace(new_string)
assert new_string != original_string # will break the 2nd iteration
while '  ' in original_string:
    original_string = original_string.replace('  ', ' ')
Tests run on a laptop with an i5 processor running Windows 7 (64-bit).
timeit.Timer(stmt = test, setup = setup).repeat(7, 1000)
test_string = 'The fox jumped over\n\t the log.' # trivial
Python 2.7.3, 32-bit, Windows
        test         |  minimum   |  maximum   |  average   |  median
---------------------+------------+------------+------------+-----------
while_replace_test | 0.001066 | 0.001260 | 0.001128 | 0.001092
re_replace_test | 0.003074 | 0.003941 | 0.003357 | 0.003349
proper_join_test | 0.002783 | 0.004829 | 0.003554 | 0.003035
Python 2.7.3, 64-bit, Windows
        test         |  minimum   |  maximum   |  average   |  median
---------------------+------------+------------+------------+-----------
while_replace_test | 0.001025 | 0.001079 | 0.001052 | 0.001051
re_replace_test | 0.003213 | 0.004512 | 0.003656 | 0.003504
proper_join_test | 0.002760 | 0.006361 | 0.004626 | 0.004600
Python 3.2.3, 32-bit, Windows
        test         |  minimum   |  maximum   |  average   |  median
---------------------+------------+------------+------------+-----------
while_replace_test | 0.001350 | 0.002302 | 0.001639 | 0.001357
re_replace_test | 0.006797 | 0.008107 | 0.007319 | 0.007440
proper_join_test | 0.002863 | 0.003356 | 0.003026 | 0.002975
Python 3.3.3, 64-bit, Windows
        test         |  minimum   |  maximum   |  average   |  median
---------------------+------------+------------+------------+-----------
while_replace_test | 0.001444 | 0.001490 | 0.001460 | 0.001459
re_replace_test | 0.011771 | 0.012598 | 0.012082 | 0.011910
proper_join_test | 0.003741 | 0.005933 | 0.004341 | 0.004009
test_string = lorem_ipsum
# Thanks to http://www.lipsum.com/
# "Generated 11 paragraphs, 1000 words, 6665 bytes of Lorem Ipsum"
Python 2.7.3, 32-bit
        test         |  minimum   |  maximum   |  average   |  median
---------------------+------------+------------+------------+-----------
while_replace_test | 0.342602 | 0.387803 | 0.359319 | 0.356284
re_replace_test | 0.337571 | 0.359821 | 0.348876 | 0.348006
proper_join_test | 0.381654 | 0.395349 | 0.388304 | 0.388193
Python 2.7.3, 64-bit
        test         |  minimum   |  maximum   |  average   |  median
---------------------+------------+------------+------------+-----------
while_replace_test | 0.227471 | 0.268340 | 0.240884 | 0.236776
re_replace_test | 0.301516 | 0.325730 | 0.308626 | 0.307852
proper_join_test | 0.358766 | 0.383736 | 0.370958 | 0.371866
Python 3.2.3, 32-bit
        test         |  minimum   |  maximum   |  average   |  median
---------------------+------------+------------+------------+-----------
while_replace_test | 0.438480 | 0.463380 | 0.447953 | 0.446646
re_replace_test | 0.463729 | 0.490947 | 0.472496 | 0.468778
proper_join_test | 0.397022 | 0.427817 | 0.406612 | 0.402053
Python 3.3.3, 64-bit
        test         |  minimum   |  maximum   |  average   |  median
---------------------+------------+------------+------------+-----------
while_replace_test | 0.284495 | 0.294025 | 0.288735 | 0.289153
re_replace_test | 0.501351 | 0.525673 | 0.511347 | 0.508467
proper_join_test | 0.422011 | 0.448736 | 0.436196 | 0.440318
For the trivial string, it would seem that a while-loop is the fastest, followed by the Pythonic string-split/join, and regex pulling up the rear.
For non-trivial strings, seems there's a bit more to consider. 32-bit 2.7? It's regex to the rescue! 2.7 64-bit? A while loop is best, by a decent margin. 32-bit 3.2, go with the "proper" join. 64-bit 3.3, go for a while loop. Again.
In the end, one can improve performance if/where/when needed, but it's always best to remember the mantra:
Make It Work
Make It Right
Make It Fast
IANAL, YMMV, Caveat Emptor!
A:
I have to agree with Paul McGuire's comment. To me,
' '.join(the_string.split())
is vastly preferable to whipping out a regex.
My measurements (Linux and Python 2.5) show the split-then-join to be almost five times faster than doing the "re.sub(...)", and still three times faster if you precompile the regex once and do the operation multiple times. And it is by any measure easier to understand -- much more Pythonic.
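A quick sketch to reproduce such a comparison with timeit (numbers will vary by machine and Python version):
import re
import timeit

s = 'The   fox jumped  over    the log.'
print(timeit.timeit(lambda: ' '.join(s.split()), number=100000))
print(timeit.timeit(lambda: re.sub(' +', ' ', s), number=100000))

# precompiled pattern, as mentioned above
pat = re.compile(' +')
print(timeit.timeit(lambda: pat.sub(' ', s), number=100000))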
A:
Similar to the previous solutions, but more specific: replace two or more spaces with one:
>>> import re
>>> s = "The fox jumped over the log."
>>> re.sub('\s{2,}', ' ', s)
'The fox jumped over the log.'
A:
I have tried the following method and it even works with extreme cases like:
str1 = ' I  live  on    earth '
' '.join(str1.split())
But if you prefer a regular expression it can be done as:
re.sub('\s+', ' ', str1)
Although some preprocessing has to be done in order to remove the leading and trailing spaces.
A:
A simple solution
>>> import re
>>> s="The fox jumped over the log."
>>> print re.sub('\s+',' ', s)
The fox jumped over the log.
A:
import re
Text = " You can select below trims for removing white space!! BR Aliakbar "
# trims all white spaces
print('Remove all space:',re.sub(r"\s+", "", Text), sep='')
# trims left space
print('Remove leading space:', re.sub(r"^\s+", "", Text), sep='')
# trims right space
print('Remove trailing spaces:', re.sub(r"\s+$", "", Text), sep='')
# trims both
print('Remove leading and trailing spaces:', re.sub(r"^\s+|\s+$", "", Text), sep='')
# replace more than one white space in the string with one white space
print('Remove more than one space:',re.sub(' +', ' ',Text), sep='')
Results:
"Remove all space:Youcanselectbelowtrimsforremovingwhitespace!!BRAliakbar"
"Remove leading space:You can select below trims for removing white space!! BR Aliakbar"
"Remove trailing spaces: You can select below trims for removing white space!! BR Aliakbar"
"Remove leading and trailing spaces:You can select below trims for removing white space!! BR Aliakbar"
"Remove more than one space: You can select below trims for removing white space!! BR Aliakbar"
A:
You can also use the string splitting technique in a Pandas DataFrame without needing to use .apply(..), which is useful if you need to perform the operation quickly on a large number of strings. Here it is on one line:
df['message'] = (df['message'].str.split()).str.join(' ')
A:
Solution for Python developers:
import re
text1 = 'Python    Exercises Are   Challenging  Exercises'
print("Original string: ", text1)
print("Without extra spaces: ", re.sub(' +', ' ', text1))
Output:
Original string:  Python    Exercises Are   Challenging  Exercises
Without extra spaces:  Python Exercises Are Challenging Exercises
A:
import re
string = re.sub('[ \t\n]+', ' ', 'The quick brown \n\n \t fox')
This will remove all the tabs, new lines and multiple white spaces with single white space.
A:
One line of code to remove all extra spaces before, after, and within a sentence:
sentence = " The fox jumped over the log. "
sentence = ' '.join(filter(None,sentence.split(' ')))
Explanation:
Split the entire string into a list.
Filter empty elements from the list.
Rejoin the remaining elements* with a single space
*The remaining elements should be words or words with punctuations, etc. I did not test this extensively, but this should be a good starting point. All the best!
A:
The fastest you can get for user-generated strings is:
if '  ' in text:
    while '  ' in text:
        text = text.replace('  ', ' ')
The short circuiting makes it slightly faster than pythonlarry's comprehensive answer. Go for this if you're after efficiency and are strictly looking to weed out extra whitespaces of the single space variety.
A:
" ".join(foo.split()) is not quite correct with respect to the question asked because it also entirely removes single leading and/or trailing white spaces. So, if they shall also be replaced by 1 blank, you should do something like the following:
" ".join(('*' + foo + '*').split()) [1:-1]
Of course, it's less elegant.
A:
Another alternative:
>>> import re
>>> str = 'this   is  a string    with\tmultiple   spaces and  tabs'
>>> str = re.sub('[ \t]+' , ' ', str)
>>> print str
this is a string with multiple spaces and tabs
A:
In some cases it's desirable to replace consecutive occurrences of every whitespace character with a single instance of that character. You'd use a regular expression with backreferences to do that.
(\s)\1{1,} matches any whitespace character, followed by one or more occurrences of that character. Now, all you need to do is specify the first group (\1) as the replacement for the match.
Wrapping this in a function:
import re
def normalize_whitespace(string):
return re.sub(r'(\s)\1{1,}', r'\1', string)
>>> normalize_whitespace('The   fox jumped over     the log.')
'The fox jumped over the log.'
>>> normalize_whitespace('First line\t\t\t \n\n\nSecond line')
'First line\t \nSecond line'
A:
Quite surprising - no one posted a simple function which is much faster than ALL the other posted solutions. Here it goes:
def compactSpaces(s):
os = ""
for c in s:
if c != " " or (os and os[-1] != " "):
os += c
return os
A:
Because @pythonlarry asked, here are the missing generator-based versions.
The groupby join is easy. groupby groups consecutive elements that share the same key and returns pairs of the key and that group's elements. So when the key is a space, a single space is returned; otherwise the entire group is kept.
from itertools import groupby
def group_join(string):
return ''.join(' ' if chr==' ' else ''.join(times) for chr,times in groupby(string))
The groupby variant is simple but very slow. So now for the generator variant: here we consume an iterator (the string) and yield all chars except a space that immediately follows another space.
def generator_join_generator(string):
last=False
for c in string:
if c==' ':
if not last:
last=True
yield ' '
else:
last=False
yield c
def generator_join(string):
return ''.join(generator_join_generator(string))
So I measured the timings with some other lorem ipsum text.
while_replace 0.015868543065153062
re_replace 0.22579886706080288
proper_join 0.40058281796518713
group_join 5.53206754301209
generator_join 1.6673167790286243
With Hello and World separated by 64KB of spaces
while_replace 2.991308711003512
re_replace 0.08232860406860709
proper_join 6.294375243945979
group_join 2.4320066600339487
generator_join 6.329648651066236
Not forgetting the original sentence:
while_replace 0.002160938922315836
re_replace 0.008620491018518806
proper_join 0.005650000995956361
group_join 0.028368217987008393
generator_join 0.009435956948436797
Interestingly, for nearly-space-only strings the group join is not that much worse.
Timings always show the median of seven runs of a thousand iterations each.
A:
This one does exactly what you want
old_string = 'The   fox jumped  over    the log. '
new_string = " ".join(old_string.split())
print(new_string)
Will result in
The fox jumped over the log.
A:
def unPretty(S):
# Given a dictionary, JSON, list, float, int, or even a string...
# return a string stripped of CR, LF replaced by space, with multiple spaces reduced to one.
return ' '.join(str(S).replace('\n', ' ').replace('\r', '').split())
A:
string = 'This  is a   string full   of  spaces and  tabs'
string = string.split(' ')
while '' in string:
string.remove('')
string = ' '.join(string)
print(string)
Results:
This is a string full of spaces and tabs
A:
To remove white space, considering leading, trailing and extra white space in between words, use:
(?<=\s) +|^ +(?=\s)| (?= +[\n\0])
The first or deals with leading white space, the second or deals with start of string leading white space, and the last one deals with trailing white space.
For proof of use, this link will provide you with a test.
https://regex101.com/r/meBYli/4
This is to be used with the re.split function.
A:
I haven't read a lot into the other examples, but I have just created this method for consolidating multiple consecutive space characters.
It does not use any libraries, and whilst it is relatively long in terms of script length, it is not a complex implementation:
def spaceMatcher(command):
    """
    Function defined to consolidate multiple whitespace characters in
    strings to a single space
    """
    # Track whether we are inside a run of consecutive spaces
    space_match = 0
    new_command = ""
    for char in command:
        if char == " ":
            space_match += 1
            if space_match == 1:  # keep only the first space of a run
                new_command += " "
        else:
            space_match = 0
            new_command += char
    return new_command
command = None
command = str(input("Please enter a command ->"))
print(spaceMatcher(command))
print(list(spaceMatcher(command)))
A:
This also does the job: :)
# python... 3.x
import operator
...
# line: line of text
return " ".join(filter(lambda a: operator.is_not(a, ""), line.strip().split(" ")))
A:
Easiest solution ever!
a = 'The   fox jumped  over    the log.'
while '  ' in a: a = a.replace('  ', ' ')
print(a)
Output:
The fox jumped over the log.
|
Is there a simple way to remove multiple spaces in a string?
|
Suppose this string:
The   fox jumped  over    the log.
Turning into:
The fox jumped over the log.
What is the simplest (1-2 lines) to achieve this, without splitting and going into lists?
|
[
">>> import re\n>>> re.sub(' +', ' ', 'The quick brown fox')\n'The quick brown fox'\n\n",
"foo is your string:\n\" \".join(foo.split())\n\nBe warned though this removes \"all whitespace characters (space, tab, newline, return, formfeed)\" (thanks to hhsaffar, see comments). I.e., \"this is \\t a test\\n\" will effectively end up as \"this is a test\".\n",
"import re\ns = \"The fox jumped over the log.\"\nre.sub(\"\\s\\s+\" , \" \", s)\n\nor\nre.sub(\"\\s\\s+\", \" \", s)\n\nsince the space before comma is listed as a pet peeve in PEPΒ 8, as mentioned by user Martin Thoma in the comments.\n",
"Using regexes with \"\\s\" and doing simple string.split()'s will also remove other whitespace - like newlines, carriage returns, tabs. Unless this is desired, to only do multiple spaces, I present these examples.\nI used 11 paragraphs, 1000 words, 6665 bytes of Lorem Ipsum to get realistic time tests and used random-length extra spaces throughout:\noriginal_string = ''.join(word + (' ' * random.randint(1, 10)) for word in lorem_ipsum.split(' '))\n\nThe one-liner will essentially do a strip of any leading/trailing spaces, and it preserves a leading/trailing space (but only ONE ;-).\n# setup = '''\n\nimport re\n\ndef while_replace(string):\n while ' ' in string:\n string = string.replace(' ', ' ')\n\n return string\n\ndef re_replace(string):\n return re.sub(r' {2,}' , ' ', string)\n\ndef proper_join(string):\n split_string = string.split(' ')\n\n # To account for leading/trailing spaces that would simply be removed\n beg = ' ' if not split_string[ 0] else ''\n end = ' ' if not split_string[-1] else ''\n\n # versus simply ' '.join(item for item in string.split(' ') if item)\n return beg + ' '.join(item for item in split_string if item) + end\n\noriginal_string = \"\"\"Lorem ipsum ... no, really, it kept going... malesuada enim feugiat. Integer imperdiet erat.\"\"\"\n\nassert while_replace(original_string) == re_replace(original_string) == proper_join(original_string)\n\n#'''\n\n\n# while_replace_test\nnew_string = original_string[:]\n\nnew_string = while_replace(new_string)\n\nassert new_string != original_string\n\n\n# re_replace_test\nnew_string = original_string[:]\n\nnew_string = re_replace(new_string)\n\nassert new_string != original_string\n\n\n# proper_join_test\nnew_string = original_string[:]\n\nnew_string = proper_join(new_string)\n\nassert new_string != original_string\n\nNOTE: The \"while version\" made a copy of the original_string, as I believe once modified on the first run, successive runs would be faster (if only by a bit). As this adds time, I added this string copy to the other two so that the times showed the difference only in the logic. Keep in mind that the main stmt on timeit instances will only be executed once; the original way I did this, the while loop worked on the same label, original_string, thus the second run, there would be nothing to do. The way it's set up now, calling a function, using two different labels, that isn't a problem. I've added assert statements to all the workers to verify we change something every iteration (for those who may be dubious). E.g., change to this and it breaks:\n# while_replace_test\nnew_string = original_string[:]\n\nnew_string = while_replace(new_string)\n\nassert new_string != original_string # will break the 2nd iteration\n\nwhile ' ' in original_string:\n original_string = original_string.replace(' ', ' ')\n\n\nTests run on a laptop with an i5 processor running Windows 7 (64-bit).\n\ntimeit.Timer(stmt = test, setup = setup).repeat(7, 1000)\n\ntest_string = 'The fox jumped over\\n\\t the log.' 
# trivial\n\nPython 2.7.3, 32-bit, Windows\n test | minum | maximum | average | median\n---------------------+------------+------------+------------+-----------\n while_replace_test | 0.001066 | 0.001260 | 0.001128 | 0.001092\n re_replace_test | 0.003074 | 0.003941 | 0.003357 | 0.003349\n proper_join_test | 0.002783 | 0.004829 | 0.003554 | 0.003035\n\nPython 2.7.3, 64-bit, Windows\n test | minum | maximum | average | median\n---------------------+------------+------------+------------+-----------\n while_replace_test | 0.001025 | 0.001079 | 0.001052 | 0.001051\n re_replace_test | 0.003213 | 0.004512 | 0.003656 | 0.003504\n proper_join_test | 0.002760 | 0.006361 | 0.004626 | 0.004600\n\nPython 3.2.3, 32-bit, Windows\n test | minum | maximum | average | median\n---------------------+------------+------------+------------+-----------\n while_replace_test | 0.001350 | 0.002302 | 0.001639 | 0.001357\n re_replace_test | 0.006797 | 0.008107 | 0.007319 | 0.007440\n proper_join_test | 0.002863 | 0.003356 | 0.003026 | 0.002975\n\nPython 3.3.3, 64-bit, Windows\n test | minum | maximum | average | median\n---------------------+------------+------------+------------+-----------\n while_replace_test | 0.001444 | 0.001490 | 0.001460 | 0.001459\n re_replace_test | 0.011771 | 0.012598 | 0.012082 | 0.011910\n proper_join_test | 0.003741 | 0.005933 | 0.004341 | 0.004009\n\n\ntest_string = lorem_ipsum\n# Thanks to http://www.lipsum.com/\n# \"Generated 11 paragraphs, 1000 words, 6665 bytes of Lorem Ipsum\"\n\nPython 2.7.3, 32-bit\n test | minum | maximum | average | median\n---------------------+------------+------------+------------+-----------\n while_replace_test | 0.342602 | 0.387803 | 0.359319 | 0.356284\n re_replace_test | 0.337571 | 0.359821 | 0.348876 | 0.348006\n proper_join_test | 0.381654 | 0.395349 | 0.388304 | 0.388193 \n\nPython 2.7.3, 64-bit\n test | minum | maximum | average | median\n---------------------+------------+------------+------------+-----------\n while_replace_test | 0.227471 | 0.268340 | 0.240884 | 0.236776\n re_replace_test | 0.301516 | 0.325730 | 0.308626 | 0.307852\n proper_join_test | 0.358766 | 0.383736 | 0.370958 | 0.371866 \n\nPython 3.2.3, 32-bit\n test | minum | maximum | average | median\n---------------------+------------+------------+------------+-----------\n while_replace_test | 0.438480 | 0.463380 | 0.447953 | 0.446646\n re_replace_test | 0.463729 | 0.490947 | 0.472496 | 0.468778\n proper_join_test | 0.397022 | 0.427817 | 0.406612 | 0.402053 \n\nPython 3.3.3, 64-bit\n test | minum | maximum | average | median\n---------------------+------------+------------+------------+-----------\n while_replace_test | 0.284495 | 0.294025 | 0.288735 | 0.289153\n re_replace_test | 0.501351 | 0.525673 | 0.511347 | 0.508467\n proper_join_test | 0.422011 | 0.448736 | 0.436196 | 0.440318\n\nFor the trivial string, it would seem that a while-loop is the fastest, followed by the Pythonic string-split/join, and regex pulling up the rear.\nFor non-trivial strings, seems there's a bit more to consider. 32-bit 2.7? It's regex to the rescue! 2.7 64-bit? A while loop is best, by a decent margin. 32-bit 3.2, go with the \"proper\" join. 64-bit 3.3, go for a while loop. Again.\nIn the end, one can improve performance if/where/when needed, but it's always best to remember the mantra:\n\nMake It Work\nMake It Right\nMake It Fast\n\nIANAL, YMMV, Caveat Emptor!\n",
"I have to agree with Paul McGuire's comment. To me,\n' '.join(the_string.split())\n\nis vastly preferable to whipping out a regex.\nMy measurements (Linux and Python 2.5) show the split-then-join to be almost five times faster than doing the \"re.sub(...)\", and still three times faster if you precompile the regex once and do the operation multiple times. And it is by any measure easier to understand -- much more Pythonic.\n",
"Similar to the previous solutions, but more specific: replace two or more spaces with one:\n>>> import re\n>>> s = \"The fox jumped over the log.\"\n>>> re.sub('\\s{2,}', ' ', s)\n'The fox jumped over the log.'\n\n",
"I have tried the following method and it even works with the extreme case like:\nstr1=' I live on earth '\n\n' '.join(str1.split())\n\nBut if you prefer a regular expression it can be done as:\nre.sub('\\s+', ' ', str1)\n\nAlthough some preprocessing has to be done in order to remove the trailing and ending space.\n",
"A simple soultion\n>>> import re\n>>> s=\"The fox jumped over the log.\"\n>>> print re.sub('\\s+',' ', s)\nThe fox jumped over the log.\n\n",
"import re\n\nText = \" You can select below trims for removing white space!! BR Aliakbar \"\n # trims all white spaces\nprint('Remove all space:',re.sub(r\"\\s+\", \"\", Text), sep='') \n# trims left space\nprint('Remove leading space:', re.sub(r\"^\\s+\", \"\", Text), sep='') \n# trims right space\nprint('Remove trailing spaces:', re.sub(r\"\\s+$\", \"\", Text), sep='') \n# trims both\nprint('Remove leading and trailing spaces:', re.sub(r\"^\\s+|\\s+$\", \"\", Text), sep='')\n# replace more than one white space in the string with one white space\nprint('Remove more than one space:',re.sub(' +', ' ',Text), sep='') \n\nResult: as code\n\"Remove all space:Youcanselectbelowtrimsforremovingwhitespace!!BRAliakbar\"\n\"Remove leading space:You can select below trims for removing white space!! BR Aliakbar\" \n\"Remove trailing spaces: You can select below trims for removing white space!! BR Aliakbar\"\n\"Remove leading and trailing spaces:You can select below trims for removing white space!! BR Aliakbar\"\n\"Remove more than one space: You can select below trims for removing white space!! BR Aliakbar\" \n\n",
"You can also use the string splitting technique in a Pandas DataFrame without needing to use .apply(..), which is useful if you need to perform the operation quickly on a large number of strings. Here it is on one line:\ndf['message'] = (df['message'].str.split()).str.join(' ')\n\n",
"Solution for Python developers:\nimport re\n\ntext1 = 'Python Exercises Are Challenging Exercises'\nprint(\"Original string: \", text1)\nprint(\"Without extra spaces: \", re.sub(' +', ' ', text1))\n\nOutput: \nOriginal string: Python Exercises Are Challenging Exercises\n Without extra spaces: Python Exercises Are Challenging Exercises\n",
"import re\nstring = re.sub('[ \\t\\n]+', ' ', 'The quick brown \\n\\n \\t fox')\n\nThis will remove all the tabs, new lines and multiple white spaces with single white space.\n",
"One line of code to remove all extra spaces before, after, and within a sentence: \nsentence = \" The fox jumped over the log. \"\nsentence = ' '.join(filter(None,sentence.split(' ')))\n\nExplanation:\n\nSplit the entire string into a list.\nFilter empty elements from the list.\nRejoin the remaining elements* with a single space \n\n*The remaining elements should be words or words with punctuations, etc. I did not test this extensively, but this should be a good starting point. All the best!\n",
"The fastest you can get for user-generated strings is:\nif ' ' in text:\n while ' ' in text:\n text = text.replace(' ', ' ')\n\nThe short circuiting makes it slightly faster than pythonlarry's comprehensive answer. Go for this if you're after efficiency and are strictly looking to weed out extra whitespaces of the single space variety.\n",
"\" \".join(foo.split()) is not quite correct with respect to the question asked because it also entirely removes single leading and/or trailing white spaces. So, if they shall also be replaced by 1 blank, you should do something like the following:\n\" \".join(('*' + foo + '*').split()) [1:-1]\n\nOf course, it's less elegant.\n",
"Another alternative:\n>>> import re\n>>> str = 'this is a string with multiple spaces and tabs'\n>>> str = re.sub('[ \\t]+' , ' ', str)\n>>> print str\nthis is a string with multiple spaces and tabs\n\n",
"In some cases it's desirable to replace consecutive occurrences of every whitespace character with a single instance of that character. You'd use a regular expression with backreferences to do that.\n(\\s)\\1{1,} matches any whitespace character, followed by one or more occurrences of that character. Now, all you need to do is specify the first group (\\1) as the replacement for the match.\nWrapping this in a function:\nimport re\n\ndef normalize_whitespace(string):\n return re.sub(r'(\\s)\\1{1,}', r'\\1', string)\n\n>>> normalize_whitespace('The fox jumped over the log.')\n'The fox jumped over the log.'\n>>> normalize_whitespace('First line\\t\\t\\t \\n\\n\\nSecond line')\n'First line\\t \\nSecond line'\n\n",
"Quite surprising - no one posted simple function which will be much faster than ALL other posted solutions. Here it goes:\ndef compactSpaces(s):\n os = \"\"\n for c in s:\n if c != \" \" or (os and os[-1] != \" \"):\n os += c \n return os\n\n",
"Because @pythonlarry asked here are the missing generator based versions\nThe groupby join is easy. Groupby will group elements consecutive with same key. And return pairs of keys and list of elements for each group. So when the key is an space an space is returne else the entire group.\nfrom itertools import groupby\ndef group_join(string):\n return ''.join(' ' if chr==' ' else ''.join(times) for chr,times in groupby(string))\n\nThe group by variant is simple but very slow. So now for the generator variant. Here we consume an iterator, the string, and yield all chars except chars that follow an char.\ndef generator_join_generator(string):\n last=False\n for c in string:\n if c==' ':\n if not last:\n last=True\n yield ' '\n else:\n last=False\n yield c\n\ndef generator_join(string):\n return ''.join(generator_join_generator(string))\n\nSo i meassured the timings with some other lorem ipsum.\n\nwhile_replace 0.015868543065153062\nre_replace 0.22579886706080288\nproper_join 0.40058281796518713\ngroup_join 5.53206754301209\ngenerator_join 1.6673167790286243\n\nWith Hello and World separated by 64KB of spaces\n\nwhile_replace 2.991308711003512\nre_replace 0.08232860406860709\nproper_join 6.294375243945979\ngroup_join 2.4320066600339487\ngenerator_join 6.329648651066236\n\nNot forget the original sentence\n\nwhile_replace 0.002160938922315836\nre_replace 0.008620491018518806\nproper_join 0.005650000995956361\ngroup_join 0.028368217987008393\ngenerator_join 0.009435956948436797\n\nInteresting here for nearly space only strings group join is not that worse\nTiming showing always median from seven runs of a thousand times each.\n",
"This one does exactly what you want\nold_string = 'The fox jumped over the log '\nnew_string = \" \".join(old_string.split())\nprint(new_string)\n\nWill results to\nThe fox jumped over the log.\n\n",
"def unPretty(S):\n # Given a dictionary, JSON, list, float, int, or even a string...\n # return a string stripped of CR, LF replaced by space, with multiple spaces reduced to one.\n return ' '.join(str(S).replace('\\n', ' ').replace('\\r', '').split())\n\n",
"string = 'This is a string full of spaces and taps'\nstring = string.split(' ')\nwhile '' in string:\n string.remove('')\nstring = ' '.join(string)\nprint(string)\n\nResults:\n\nThis is a string full of spaces and taps\n\n",
"To remove white space, considering leading, trailing and extra white space in between words, use:\n(?<=\\s) +|^ +(?=\\s)| (?= +[\\n\\0])\n\nThe first or deals with leading white space, the second or deals with start of string leading white space, and the last one deals with trailing white space.\nFor proof of use, this link will provide you with a test.\nhttps://regex101.com/r/meBYli/4\nThis is to be used with the re.split function.\n",
"I haven't read a lot into the other examples, but I have just created this method for consolidating multiple consecutive space characters.\nIt does not use any libraries, and whilst it is relatively long in terms of script length, it is not a complex implementation:\ndef spaceMatcher(command):\n \"\"\"\n Function defined to consolidate multiple whitespace characters in\n strings to a single space\n \"\"\"\n # Initiate index to flag if more than one consecutive character\n iteration\n space_match = 0\n space_char = \"\"\n for char in command:\n if char == \" \":\n space_match += 1\n space_char += \" \"\n elif (char != \" \") & (space_match > 1):\n new_command = command.replace(space_char, \" \")\n space_match = 0\n space_char = \"\"\n elif char != \" \":\n space_match = 0\n space_char = \"\"\n return new_command\n\ncommand = None\ncommand = str(input(\"Please enter a command ->\"))\nprint(spaceMatcher(command))\nprint(list(spaceMatcher(command)))\n\n",
"This does and will do: :)\n# python... 3.x\nimport operator\n...\n# line: line of text\nreturn \" \".join(filter(lambda a: operator.is_not(a, \"\"), line.strip().split(\" \")))\n\n",
"Easiest solution ever!\na = 'The fox jumped over the log.'\nwhile ' ' in a: a = a.replace(' ', ' ')\nprint(a)\n\nOutput:\nThe fox jumped over the log.\n\n"
] |
[
826,
772,
127,
65,
57,
21,
19,
16,
15,
11,
10,
8,
4,
4,
4,
3,
3,
3,
3,
3,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"python",
"regex",
"string"
] |
stackoverflow_0001546226_python_regex_string.txt
|
Q:
List comprehension in nested ifs
I am a newbie trying to understand list comprehensions in Python. My question is different from other posts.
I was asked to write list comprehension code to get the following output:
All odd numbers from 1 to 30 (both inclusive). Those that are multiples of 5 will be marked with an 'x'.
[1, 3, '5x', 7, 9, 11, 13, '15x', 17, 19, 21, 23, '25x', 27, 29]
For this, I tried to get it with normal for and if ways. This is my solution and it worked:
odds = []
for i in list(range(1,30+1)):
if i%2 !=0:
odds.append(i)
if i%5 == 0:
odds.append(f'{i}x')
odds.remove(i)
print(odds)
In the image you can find my failed list comprehension attempt. I need some pointers on how to place the rest of the pieces correctly.
Thank you!
A:
You cannot solve this problem in one line using list comprehension alone. You need the ternary operator (enclosed in the parentheses).
[(n if n%5 else f'{n}x') for n in range(1,31) if n%2]
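For readers new to the syntax, the comprehension above unrolls to this equivalent loop (a quick sketch for illustration):
out = []
for n in range(1, 31):
    if n % 2:                                # the trailing filter clause
        out.append(n if n % 5 else f'{n}x')  # the leading conditional expression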
A:
If you don't care about the order of the items, you can avoid needing the ternary operator (conditional expression) by concatenating the lists resulting from two list comprehensions, e.g.
[n for n in range(1,31,2) if n%5 != 0] + [f'{n}x' for n in range(1,31,2) if n%5 == 0]
... which produces:
[1, 3, 7, 9, 11, 13, 17, 19, 21, 23, 27, 29, '5x', '15x', '25x']
Alternatively, at the risk of making this look like code golf:
[[f'{n}x',n][min(n%5,1)] for n in range(1,31,2)]
... produces:
[1, 3, '5x', 7, 9, 11, 13, '15x', 17, 19, 21, 23, '25x', 27, 29]
I've used the expression min(n%5,1) to index the list [f'{n}x',n] and thus select either item 0 or item 1 in the list, depending on whether n is divisible by 5 or not -- also without using the conditional expression.
A:
Alternate workaround:
numlist = [(i,f'{i}x')[not i%5] for i in range(31) if i%2]
print(numlist)
# [1, 3, '5x', 7, 9, 11, 13, '15x', 17, 19, 21, 23, '25x', 27, 29]
|
List comprehension in nested ifs
|
I am a newbie trying to understand list comprehensions in Python. My question is different from other posts.
I was asked to write list comprehension code to get the following output:
All odd numbers from 1 to 30 (both inclusive). Those that are multiples of 5 will be marked with an 'x'.
[1, 3, '5x', 7, 9, 11, 13, '15x', 17, 19, 21, 23, '25x', 27, 29]
For this, I tried to get it with normal for and if ways. This is my solution and it worked:
odds = []
for i in list(range(1,30+1)):
if i%2 !=0:
odds.append(i)
if i%5 == 0:
odds.append(f'{i}x')
odds.remove(i)
print(odds)
In the image you can find my failed list comprehension attempt. I need some pointers on how to place the rest of the pieces correctly.
Thank you!
|
[
"You cannot solve this problem in one line using list comprehension alone. You need the ternary operator (enclosed in the parentheses).\n[(n if n%5 else f'{n}x') for n in range(1,31) if n%2]\n\n",
"If you don't care about the order of the items, you can avoid needing the ternary operator (conditional expression) by concatenating the lists resulting from two list comprehensions, e.g.\n[n for n in range(1,31,2) if n%5 != 0] + [f'{n}x' for n in range(1,31,2) if n%5 == 0]\n\n... which produces:\n\n[1, 3, 7, 9, 11, 13, 17, 19, 21, 23, 27, 29, '5x', '15x', '25x']\n\nAlternatively, at the risk of making this look like code golf:\n[[f'{n}x',n][min(n%5,1)] for n in range(1,31,2)]\n\n... produces:\n\n[1, 3, '5x', 7, 9, 11, 13, '15x', 17, 19, 21, 23, '25x', 27, 29]\n\nI've used the expression min(n%5,1) to index the list [f'{n}x',n] and thus select either item 0 or item 1 in the list, depending on whether n is divisible by 5 or not -- also without using the conditional expression.\n",
"Alternate workaround:\nnumlist = [(i,f'{i}x')[not i%5] for i in range(31) if i%2]\nprint(numlist)\n# [1, 3, '5x', 7, 9, 11, 13, '15x', 17, 19, 21, 23, '25x', 27, 29]\n\n"
] |
[
3,
3,
2
] |
[] |
[] |
[
"list_comprehension",
"python"
] |
stackoverflow_0074555777_list_comprehension_python.txt
|
Q:
How to pass a custom equality_check function into perfplot
I'm working with the perfplot library to compare the performance of three functions f1, f2 and f3. The functions are supposed to return the same values, so I want to do equality checks. However, all other examples of perfplot I can find on the internet use pd.DataFrame.equals or np.allclose as an equality checker but these don't work for my specific case.
For example, np.allclose wouldn't work if the functions return a list of numpy arrays of different lengths.
import perfplot
import numpy as np
def f1(rng):
return [np.array(range(i)) for i in rng]
def f2(rng):
return [np.array(list(rng)[:i]) for i in range(len(rng))]
def f3(rng):
return [np.array([*range(i)]) for i in rng]
perfplot.show(
kernels=[f1, f2, f3],
n_range=[10**k for k in range(4)],
setup=lambda n: range(n),
equality_check=np.allclose # <-- doesn't work; neither does pd.DataFrame.equals
)
How do I pass a function that is different from the aforementioned functions?
A:
If we inspect the source code, the way the equality check works is that it takes the output of the first function passed to kernels as reference and compares it to the output of the subsequent functions passed to kernels in a loop.
For some reason, the equality check is different depending on whether the first function in kernels returns a tuple or not.
If the first function in kernels doesn't return a tuple, it simply calls the function passed to the equality_check argument to perform the check. The equality check function takes two arguments and can do whatever you like. For example, it can check only that the lengths are equal and call it a day (i.e. pass the following lambda to equality_check: lambda x,y: len(x) == len(y)).
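For instance, that length-only check could be wired in like this (a minimal sketch reusing the question's setup):
perfplot.show(
    kernels=[f1, f2, f3],
    n_range=[10**k for k in range(4)],
    setup=lambda n: range(n),
    equality_check=lambda x, y: len(x) == len(y),  # only compares lengths
)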
For the example in the question, a function that loops over pairs of elements and checks for inequality works.
def equality_check(x, y):
for i, j in zip(x, y):
if not np.allclose(i, j):
return False
return True
perfplot.show(
kernels=[f1, f2, f3],
n_range=[10**k for k in range(4)],
setup=lambda n: range(n),
equality_check=equality_check
)
In fact, all() also works.
perfplot.show(
kernels=[f1, f2, f3],
n_range=[10**k for k in range(4)],
setup=lambda n: range(1, n+1),
equality_check=lambda x,y: all([x,y])
)
The following code copied from the source code is the snippet that implements the equality check:
for k, kernel in enumerate(self.kernels):
val = kernel(*data)
if self.equality_check:
if k == 0:
reference = val
else:
try:
if isinstance(reference, tuple):
assert isinstance(val, tuple)
assert len(reference) == len(val)
is_equal = True
for r, v in zip(reference, val):
if not self.equality_check(r, v):
is_equal = False
break
else:
is_equal = self.equality_check(reference, val)
except TypeError:
raise PerfplotError(
"Error in equality_check. "
+ "Try setting equality_check=None."
)
else:
if not is_equal:
raise PerfplotError(
"Equality check failure.\n"
+ f"{self.labels[0]}:\n"
+ f"{reference}:\n\n"
+ f"{self.labels[k]}:\n"
+ f"{val}:\n"
)
|
How to pass a custom equality_check function into perfplot
|
I'm working with the perfplot library to compare the performance of three functions f1, f2 and f3. The functions are supposed to return the same values, so I want to do equality checks. However, all other examples of perfplot I can find on the internet use pd.DataFrame.equals or np.allclose as an equality checker but these don't work for my specific case.
For example, np.allclose wouldn't work if the functions return a list of numpy arrays of different lengths.
import perfplot
import numpy as np
def f1(rng):
return [np.array(range(i)) for i in rng]
def f2(rng):
return [np.array(list(rng)[:i]) for i in range(len(rng))]
def f3(rng):
return [np.array([*range(i)]) for i in rng]
perfplot.show(
kernels=[f1, f2, f3],
n_range=[10**k for k in range(4)],
setup=lambda n: range(n),
equality_check=np.allclose # <-- doesn't work; neither does pd.DataFrame.equals
)
How do I pass a function that is different from the aforementioned functions?
|
[
"If we inspect the source code, the way the equality check works is that it takes the output of the first function passed to kernels as reference and compares it to the output of the subsequent functions passed to kernels in a loop.\nFor some reason, the equality check is different depending on if the first function in kernels returns a tuple or not.\nIf the first function in kernels doesn't returns a tuple, it simply calls the function passed to equality_check argument to perform the check. The equality check function takes two arguments and can do whatever. For example, it can only check if the lengths are equal and call it a day (i.e. pass the following lambda to equality_check: lambda x,y: len(x) == len(y)).\nFor the example in the question, a function that loops over pairs of elements and checks for inequality works.\ndef equality_check(x, y):\n for i, j in zip(x, y):\n if not np.allclose(i, j):\n return False\n return True\n\nperfplot.show(\n kernels=[f1, f2, f3],\n n_range=[10**k for k in range(4)],\n setup=lambda n: range(n),\n equality_check=equality_check\n)\n\nIn fact, all() also works.\nperfplot.show(\n kernels=[f1, f2, f3],\n n_range=[10**k for k in range(4)],\n setup=lambda n: range(1, n+1),\n equality_check=lambda x,y: all([x,y])\n)\n\n\nThe following code copied from the source code is the snippet that implements the equality check:\nfor k, kernel in enumerate(self.kernels):\n\n val = kernel(*data)\n\n if self.equality_check:\n if k == 0:\n reference = val\n else:\n try:\n if isinstance(reference, tuple):\n assert isinstance(val, tuple)\n assert len(reference) == len(val)\n is_equal = True\n for r, v in zip(reference, val):\n if not self.equality_check(r, v):\n is_equal = False\n break\n else:\n is_equal = self.equality_check(reference, val)\n except TypeError:\n raise PerfplotError(\n \"Error in equality_check. \"\n + \"Try setting equality_check=None.\"\n )\n else:\n if not is_equal:\n raise PerfplotError(\n \"Equality check failure.\\n\"\n + f\"{self.labels[0]}:\\n\"\n + f\"{reference}:\\n\\n\"\n + f\"{self.labels[k]}:\\n\"\n + f\"{val}:\\n\"\n )\n\n"
] |
[
0
] |
[] |
[] |
[
"equality",
"performance",
"perfplot",
"python"
] |
stackoverflow_0074556381_equality_performance_perfplot_python.txt
|
Q:
Unzip file in blob storage with blob storage trigger
I have a task where I need to take a zipped file from an Azure Storage Container and spit back out the unzipped contents into said container... I've created a blob trigger with python to try and accomplish this task.
From what I can tell, usually people who use python unzip files using this method
import zipfile
with zipfile.ZipFile(path_to_zip_file, 'r') as zip_ref:
zip_ref.extractall(directory_to_extract_to)
However, I can't seem to mix that solution with my cloud programming.
Here is what I have so far:
import logging
import azure.functions as func
import zipfile
from azure.storage.blob import ContainerClient
from io import BytesIO
def main(myblob: func.InputStream):
logging.info(f"Python blob trigger function processed blob \n"
f"Name: {myblob.name}\n"
f"Blob Size: {myblob.length} bytes")
if myblob.name.endswith('.zip'):
blob_name = myblob.name.split('/')[1]
container_str_url = 'my_url'
container_client = ContainerClient.from_container_url(container_str_url)
#blob client accessing specific blob
blob_client = container_client.get_blob_client(blob= blob_name)
#download blob into memory
stream_downloader = blob_client.download_blob()
stream = BytesIO()
stream_downloader.readinto(stream)
with zipfile.ZipFile(stream, 'r') as zip_ref:
zip_ref.extractall()
I'm downloading the zipped file into memory and then I'm trying to use the traditional method to unzip the contents back into the container.
When doing so, the trigger doesn't return an error, but I can see when the program reaches zip_ref.extractall()
part of the code, it makes a GET request that just returns information about the file instead of actually (as far as I can tell) extracting the contents anywhere.
I'm stuck here, my overall goal is just to unzip the file found in the storage container and re-upload the contents back into the said container. Any help would be appreciated.
A:
After reproducing this on my end, I was able to achieve it using the below code.
import logging
import azure.functions as func
from azure.storage.blob import BlobServiceClient
import zipfile
import os
blob_service_client = BlobServiceClient.from_connection_string("<YOUR_CONNECTION_STRING>")
dir_path = r'<PATH_OF_EXTRACTED_FILES>'
def main(myblob: func.InputStream):
logging.info(f"Python blob trigger function processed blob \n"
f"Name: {myblob.name}\n"
f"Blob Size: {myblob.length} bytes")
container_client = blob_service_client.get_container_client("<INPUT_BLOB_CONTAINER>")
blob_client = container_client.get_blob_client("<ZIP_FILE_NAME>")
# Downloading Zip to local system
with open("sample1.zip", "wb") as my_blob:
download_stream = blob_client.download_blob()
my_blob.write(download_stream.readall())
# Extracting Zip Folder to path
with zipfile.ZipFile("sample1.zip", 'r') as zip_ref:
zip_ref.extractall(dir_path)
# Reading and uploading Files to Storage account
fileList = os.listdir(dir_path)
for filename in fileList:
container_client_upload = blob_service_client.get_container_client("<OUTPUT_BLOB_CONTAINER>")
blob_client_upload = container_client_upload.get_blob_client(filename)
f = open(dir_path+'\\'+filename, 'r')
byt = f.read()
blob_client_upload.upload_blob(byt, blob_type="BlockBlob")
First I downloaded the Zip file using download_blob() then extracted the zip file using extractall(dir_path) and then uploaded the extracted files using upload_blob().
RESULTS:
Files Inside Zip file
Files after extraction in Storage account
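As an aside, if writing to the local filesystem is undesirable (the question originally tried to stay in memory), a fully in-memory variant might look like the following — an untested sketch that reuses the question's blob_client and container_client:
import zipfile
from io import BytesIO

stream = BytesIO()
blob_client.download_blob().readinto(stream)
stream.seek(0)

with zipfile.ZipFile(stream, 'r') as zip_ref:
    for member in zip_ref.namelist():
        if member.endswith('/'):   # skip directory entries
            continue
        # upload each extracted member straight back to the container
        out_blob = container_client.get_blob_client(member)
        out_blob.upload_blob(zip_ref.read(member), overwrite=True)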
|
Unzip file in blob storage with blob storage trigger
|
I have a task where I need to take a zipped file from an Azure Storage Container and spit back out the unzipped contents into said container... I've created a blob trigger with python to try and accomplish this task.
From what I can tell, usually people who use python unzip files using this method
import zipfile
with zipfile.ZipFile(path_to_zip_file, 'r') as zip_ref:
zip_ref.extractall(directory_to_extract_to)
However, I can't seem to mix that solution with my cloud programming.
Here is what I have so far:
import logging
import azure.functions as func
import zipfile
from azure.storage.blob import ContainerClient
from io import BytesIO
def main(myblob: func.InputStream):
logging.info(f"Python blob trigger function processed blob \n"
f"Name: {myblob.name}\n"
f"Blob Size: {myblob.length} bytes")
if myblob.name.endswith('.zip'):
blob_name = myblob.name.split('/')[1]
container_str_url = 'my_url'
container_client = ContainerClient.from_container_url(container_str_url)
#blob client accessing specific blob
blob_client = container_client.get_blob_client(blob= blob_name)
#download blob into memory
stream_downloader = blob_client.download_blob()
stream = BytesIO()
stream_downloader.readinto(stream)
with zipfile.ZipFile(stream, 'r') as zip_ref:
zip_ref.extractall()
I'm downloading the zipped file into memory and then I'm trying to use the traditional method to unzip the contents back into the container.
When doing so, the trigger doesn't return an error, but I can see when the program reaches zip_ref.extractall()
part of the code, it makes a GET request that just returns information about the file instead of actually (as far as I can tell) extracting the contents anywhere.
I'm stuck here, my overall goal is just to unzip the file found in the storage container and re-upload the contents back into the said container. Any help would be appreciated.
|
[
"After reproducing from my end, I could able to achieve using the below code.\nimport logging\nimport azure.functions as func\nfrom azure.storage.blob import BlobServiceClient\nimport zipfile\nimport os\n\nblob_service_client = BlobServiceClient.from_connection_string(\"<YOUR_CONNECTION_STRING>\")\ndir_path = r'<PATH_OF_EXTRACTED_FILES>'\n\ndef main(myblob: func.InputStream):\n logging.info(f\"Python blob trigger function processed blob \\n\"\n f\"Name: {myblob.name}\\n\"\n f\"Blob Size: {myblob.length} bytes\")\n\n container_client = blob_service_client.get_container_client(\"<INPUT_BLOB_CONTAINER>\")\n blob_client = container_client.get_blob_client(\"<ZIP_FILE_NAME>\")\n\n // Downloading Zip to local system\n with open(\"sample1.zip\", \"wb\") as my_blob:\n download_stream = blob_client.download_blob()\n my_blob.write(download_stream.readall())\n \n // Extracting Zip Folder to path\n with zipfile.ZipFile(\"sample1.zip\", 'r') as zip_ref:\n zip_ref.extractall(dir_path)\n \n // Reading and uploading Files to Storage account\n fileList = os.listdir(dir_path)\n for filename in fileList:\n container_client_upload = blob_service_client.get_container_client(\"<OUTPUT_BLOB_CONTAINER>\")\n blob_client_upload = container_client_upload.get_blob_client(filename)\n\n f = open(dir_path+'\\\\'+filename, 'r')\n byt = f.read()\n blob_client_upload.upload_blob(byt, blob_type=\"BlockBlob\")\n\nFirst I downloaded the Zip file using download_blob() then extracted the zip file using extractall(dir_path) and then uploaded the extracted files using upload_blob().\nRESULTS:\nFiles Inside Zip file\n\nFiles after extraction in Storage account\n\n"
] |
[
1
] |
[] |
[] |
[
"azure",
"azure_blob_trigger",
"azure_functions",
"python",
"unzip"
] |
stackoverflow_0074395710_azure_azure_blob_trigger_azure_functions_python_unzip.txt
|
Q:
Using parameters to return table column values?
I want to be able to use a parameter to determine which column's value to return. But, since 'owner' is a model, the 'assetType' in 'owner.assetType' is treated as an attribute rather than as the parameter.
This is just an example of the code I'm working on.
# Owner, by default, owns 1 home, 2 boats, and 3 cars
class Owner(BaseTable):
homes = IntegerField(null=False, default = 1)
boats = IntegerField(null=False, default = 2)
cars = IntegerField(null=False, default = 3)
owner = Owner.get_by_id(1)
allAssets = {
"House1": {
"assetType": "homes"
},
"Boat1": {
"assetType": "boats"
}
}
# returns 'homes'
House1Type = allAssets["House1"].get("assetType")
# returns 1
print(owner.homes)
# AttributeError: 'Owner' object has no attribute 'assetType'
def findValue(assetType):
print(owner.assetType)
# GOAL: return '1'
findValue(House1Type)
This code does the job and gives me the values I'm looking for, but is turning into a giant if else statement and I'm wondering if there is a more concise way to dynamically get these values.
# if house 1 is type 'boats', show owner.boats value
# if house 1 is type 'homes', show owner.homes value
if House1Type == "boats":
print("Owned Boats: ",owner.boats)
elif House1Type == "homes":
print("Owned Homes: ",owner.homes)
else:
pass
A:
We can optimize the solution by directly calling getattr
try:
print(f"Owned {House1Type.title()}: {getattr(owner, House1Type)}")
except:pass
We can achieve the GOAL by using the same getattr:
def findValue(assetType):
return getattr(owner, str(assetType), None)
# GOAL: return '1'
val = findValue(House1Type)
print(f"Owned {House1Type}: {val}")
|
Using parameters to return table column values?
|
I want to be able to use a parameter to determine which column's value to return. But, since 'owner' is a model, the 'assetType' in 'owner.assetType' is treated as an attribute rather than as the parameter.
This is just an example of the code I'm working on.
# Owner, by default, owns 1 home, 2 boats, and 3 cars
class Owner(BaseTable):
homes = IntegerField(null=False, default = 1)
boats = IntegerField(null=False, default = 2)
cars = IntegerField(null=False, default = 3)
owner = Owner.get_by_id(1)
allAssets = {
"House1": {
"assetType": "homes"
},
"Boat1": {
"assetType": "boats"
}
}
# returns 'homes'
House1Type = allAssets["House1"].get("assetType")
# returns 1
print(owner.homes)
# AttributeError: 'Owner' object has no attribute 'assetType'
def findValue(assetType):
print(owner.assetType)
# GOAL: return '1'
findValue(House1Type)
This code does the job and gives me the values I'm looking for, but is turning into a giant if else statement and I'm wondering if there is a more concise way to dynamically get these values.
# if house 1 is type 'boats', show owner.boats value
# if house 1 is type 'homes', show owner.homes value
if House1Type == "boats":
print("Owned Boats: ",owner.boats)
elif House1Type == "homes":
print("Owned Homes: ",owner.homes)
else:
pass
|
[
"We can optimize the solution by directly calling getattr\ntry:\n print(f\"Owned {House1Type.title()}: {getattr(owner, House1Type)}\")\nexcept:pass\n\nWe can achieve GOAL by using same getattr\ndef findValue(assetType):\n return getattr(owner, str(assetType), None)\n\n\n# GOAL: return '1'\nval = findValue(House1Type)\nprint(f\"Owned {House1Type}: {val}\")\n\n"
] |
[
0
] |
[] |
[] |
[
"peewee",
"postgresql",
"python"
] |
stackoverflow_0074556386_peewee_postgresql_python.txt
|
Q:
Calculate the maximum number of consecutive digits in a string of discontinuous digits
My dataframe has a column whose data is [1,1,2,3,4,7,8,8,15,19,20,21]. I want to get the longest contiguous segment in this column: [1,2,3,4]. How can I calculate it?
A:
You can create groups by consecutive values by compare difference with cumulative sum, get counts by GroupBy.transform and last filter maximal counts of original column col - output are all consecutive values with maximal counts:
s = df['col'].groupby(df['col'].diff().ne(1).cumsum()).transform('size')
out = df.loc[s.eq(s.max()), 'col']
If need first maximum consecutives values use Series.value_counts with Series.idxmax:
s = df['col'].diff().ne(1).cumsum()
out = df.loc[s.eq(s.value_counts().idxmax()), 'col']
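Applied to the sample data from the question, the first variant returns the longest run (a quick check):
import pandas as pd

df = pd.DataFrame({'col': [1,1,2,3,4,7,8,8,15,19,20,21]})
s = df['col'].groupby(df['col'].diff().ne(1).cumsum()).transform('size')
print(df.loc[s.eq(s.max()), 'col'].tolist())   # [1, 2, 3, 4]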
|
Calculate the maximum number of consecutive digits in a string of discontinuous digits
|
My dataframe has a column whose data is [1,1,2,3,4,7,8,8,15,19,20,21]. I want to get the longest contiguous segment in this column: [1,2,3,4]. How can I calculate it?
|
[
"You can create groups by consecutive values by compare difference with cumulative sum, get counts by GroupBy.transform and last filter maximal counts of original column col - output are all consecutive values with maximal counts:\ns = df['col'].groupby(df['col'].diff().ne(1).cumsum()).transform('size')\n\nout = df.loc[s.eq(s.max()), 'col']\n\nIf need first maximum consecutives values use Series.value_counts with Series.idxmax:\ns = df['col'].diff().ne(1).cumsum()\n\nout = df.loc[s.eq(s.value_counts().idxmax()), 'col']\n\n"
] |
[
0
] |
[] |
[] |
[
"numpy",
"pandas",
"python"
] |
stackoverflow_0074556466_numpy_pandas_python.txt
|
Q:
When transforming a list of integers (that contains NaNs) into a DataFrame, they convert to floats
If I have a list and I convert it to a DataFrame, I have
lista=[1,2,3]
print(pd.DataFrame(lista))
#Got a dataframe of ints
but if I have
listb=[1,2,3,np.nan]
print(pd.DataFrame(listb))
#Got a dataframe of floats
This does not change if I specify dtype='int64'
Is there a way that I can get a dataframe with ints?
A:
Use Int64 for integers with missing values:
listb=[1,2,3,np.nan]
print(pd.DataFrame(listb, dtype='Int64'))
0
0 1
1 2
2 3
3 <NA>
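If the DataFrame was already created as floats, converting it afterwards should work too (a small sketch):
import numpy as np
import pandas as pd

df = pd.DataFrame([1, 2, 3, np.nan])   # float64 by default
df = df.astype('Int64')                # nullable integer dtype; NaN becomes <NA>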
|
When transforming a list of integers (that contains NaNs) into a DataFrame, they convert to floats
|
If I have a list and I convert it to a DataFrame, I have
lista=[1,2,3]
print(pd.DataFrame(lista))
#Got a dataframe of ints
but if I have
listb=[1,2,3,np.nan]
print(pd.DataFrame(listb))
#Got a dataframe of floats
This does not change if I specify dtype='int64'
Is there a way that I can get a dataframe with ints?
|
[
"Use Int64 for integers with missing values:\nlistb=[1,2,3,np.nan]\nprint(pd.DataFrame(listb, dtype='Int64'))\n \n 0\n0 1\n1 2\n2 3\n3 <NA>\n\n"
] |
[
1
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074556525_pandas_python.txt
|
Q:
matplotlib event doesn't work when I use button clicked connect in pyqt5
I have 2 class, one (Plot) is for plot matplotlib figure, another (Widget) is for pyqt5.
When I create a button in PyQt5 and connect its clicked signal to the class Plot to create the figure,
the button_press_event in Plot doesn't work.
import pandas as pd
import numpy as np
from PyQt5.QtWidgets import *
import matplotlib.pyplot as plt
import sys
# x, y data
x = np.random.randint(50, size=10)
y = np.random.randint(50, size=10)
class Plot:
def __init__(self):
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(x, y)
fig.canvas.mpl_connect('button_press_event', self.on_press)
plt.show()
def on_press(self, event):
print(event.ydata)
class Widget(QWidget):
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
self.btn = QPushButton('button', self)
self.btn.clicked.connect(Plot)
if __name__ == '__main__':
app = QApplication(sys.argv)
w = Widget()
w.show()
sys.exit(app.exec_())
If I want to use the event in Plot via the button's clicked connection in PyQt5, how can I do it?
I don't want to use FigureCanvas to plot the figure in a PyQt5 window,
because I need a full-screen figure to do something.
A:
For plotting a graph with PyQt you need an element to render it. This is the FigureCanvas. Using layout options and properties you can customize it to be full screen. I created an example where you can click on the button to get a plot in fullscreen. Whether you need the navigation toolbar is up to you; I always find it very handy.
import numpy as np
import PyQt5.QtWidgets as qtw
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.backends.backend_qt5agg import NavigationToolbar2QT as NavigationToolbar
import matplotlib.pyplot as plt
import sys
class PlotEx(qtw.QWidget):
def __init__(self):
super().__init__()
self.figure = plt.figure()
self.canvas = FigureCanvas(self.figure)
self.toolbar = NavigationToolbar(self.canvas, self)
self.button = qtw.QPushButton('Plot')
layout = qtw.QVBoxLayout()
layout.addWidget(self.toolbar)
layout.addWidget(self.canvas)
layout.addWidget(self.button)
self.setLayout(layout)
self.showMaximized()
self.button.clicked.connect(self.plot)
def plot(self):
x = np.random.randint(50, size=10)
y = np.random.randint(50, size=10)
self.figure.clear()
ax = self.figure.add_subplot(111)
ax.scatter(x,y)
self.canvas.draw()
if __name__ == '__main__':
app = qtw.QApplication(sys.argv)
main = PlotEx()
main.show()
sys.exit(app.exec_())
If you need the plot in a second screen, then you have to create another class, which you then call by clicking the button.
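A minimal sketch of that idea follows (class names here are hypothetical, and note the kept reference: a plausible reason the original Plot "did nothing" is that the instance was garbage-collected right after the click, since matplotlib only holds weak references to bound-method callbacks):
import sys
import numpy as np
import PyQt5.QtWidgets as qtw
import matplotlib.pyplot as plt
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas


class FullScreenPlot(qtw.QWidget):
    """Separate full-screen window owning its own figure and canvas."""
    def __init__(self):
        super().__init__()
        self.figure = plt.figure()
        self.canvas = FigureCanvas(self.figure)
        layout = qtw.QVBoxLayout()
        layout.addWidget(self.canvas)
        self.setLayout(layout)
        ax = self.figure.add_subplot(111)
        ax.scatter(np.random.randint(50, size=10),
                   np.random.randint(50, size=10))
        # mpl events fire because the canvas lives inside a Qt widget
        self.canvas.mpl_connect('button_press_event', self.on_press)
        self.showFullScreen()

    def on_press(self, event):
        print(event.ydata)


class Launcher(qtw.QWidget):
    def __init__(self):
        super().__init__()
        self.btn = qtw.QPushButton('button', self)
        self.btn.clicked.connect(self.open_plot)

    def open_plot(self):
        # keep a reference so the window is not garbage-collected
        self.plot_window = FullScreenPlot()


if __name__ == '__main__':
    app = qtw.QApplication(sys.argv)
    w = Launcher()
    w.show()
    sys.exit(app.exec_())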
|
matplotlib event doesn't work when I use button clicked connect in pyqt5
|
I have 2 class, one (Plot) is for plot matplotlib figure, another (Widget) is for pyqt5.
When I create a button in PyQt5 and connect its clicked signal to the class Plot to create the figure,
the button_press_event in Plot doesn't work.
import pandas as pd
import numpy as np
from PyQt5.QtWidgets import *
import matplotlib.pyplot as plt
import sys
# x, y data
x = np.random.randint(50, size=10)
y = np.random.randint(50, size=10)
class Plot:
def __init__(self):
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(x, y)
fig.canvas.mpl_connect('button_press_event', self.on_press)
plt.show()
def on_press(self, event):
print(event.ydata)
class Widget(QWidget):
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
self.btn = QPushButton('button', self)
self.btn.clicked.connect(Plot)
if __name__ == '__main__':
app = QApplication(sys.argv)
w = Widget()
w.show()
sys.exit(app.exec_())
If I want to use the event in Plot via the button's clicked connection in PyQt5, how can I do it?
I don't want to use FigureCanvas to plot the figure in a PyQt5 window,
because I need a full-screen figure to do something.
|
[
"for plotting a graph with PyQt you need an element to render it. This is the Figure Canvas. Using layout options and properties you can customize it to be full screen. I created an example where you can click on the button to get a plot in fullscreen. If you need the navigation toolbar you have to decide yourself. I find this always very handy.\nimport numpy as np \nimport PyQt5.QtWidgets as qtw \nfrom matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas\nfrom matplotlib.backends.backend_qt5agg import NavigationToolbar2QT as NavigationToolbar\nimport matplotlib.pyplot as plt \nimport sys \n\nclass PlotEx(qtw.QWidget):\n def __init__(self):\n super().__init__()\n \n self.figure = plt.figure()\n self.canvas = FigureCanvas(self.figure)\n self.toolbar = NavigationToolbar(self.canvas, self)\n self.button = qtw.QPushButton('Plot')\n \n layout = qtw.QVBoxLayout()\n layout.addWidget(self.toolbar)\n layout.addWidget(self.canvas)\n layout.addWidget(self.button)\n self.setLayout(layout)\n self.showMaximized()\n \n self.button.clicked.connect(self.plot)\n\n def plot(self):\n x = np.random.randint(50, size=10)\n y = np.random.randint(50, size=10)\n self.figure.clear()\n ax = self.figure.add_subplot(111)\n ax.scatter(x,y)\n self.canvas.draw()\n\nif __name__ == '__main__':\n app = qtw.QApplication(sys.argv)\n main = PlotEx()\n main.show()\n sys.exit(app.exec_())\n\nIf you need the plot in a second screen, then you have to create another class, which you then call by clicking the button.\n"
] |
[
0
] |
[] |
[] |
[
"matplotlib",
"pyqt5",
"python"
] |
stackoverflow_0074555885_matplotlib_pyqt5_python.txt
|
Q:
How do I extract an int value from a string value and put it in an int column?
I have a dataframe with over 100,000 rows and 200 columns
There are some NaN values in the FALLDOWN_FLOOR column, so I would like to extract the int value out of FALLDOWN_LOCATION and fill the NaN values in FALLDOWN_FLOOR.
Here is an example of the FALLDOWN_LOCATION column:
9 λλ‘μ μΉλ²½/10μΈ΅
106 5μΈ΅ λΉλΌμμ 1μΈ΅μΌλ‘ ν¬μ ν¨
109 λ³μ 7μΈ΅ μ₯μ ν¬μ
112 15μΈ΅ μννΈ λ² λλ€μμ ν¬μ
113 10μΈ΅
129 14μΈ΅
133 μννΈ 13μΈ΅κ³Ό 14μΈ΅ μ¬μ΄ κ³λ¨ μ°½λ¬ΈμΌλ‘ λ°μ΄λ΄λ¦Ό
136 μννΈ 13μΈ΅ λμ΄μμ ν¬μ νμ¬ 2μΈ΅ λκ°μΌλ‘ λ¨μ΄μ§.
I want to take the first int value that comes with 'μΈ΅', so the desired output of the FALLDOWN_FLOOR column would be like this:
9 10
106 5
109 7
112 15
113 10
129 14
133 13
136 13
so when I input df.loc[con, ['FALLDOWN_FLOOR','FALLDOWN_LOCATION']] the output should be like this:
FALLDOWN_FLOOR FALLDOWN_LOCATION
9 10 λλ‘μ μΉλ²½/10μΈ΅
106 5 5μΈ΅ λΉλΌμμ 1μΈ΅μΌλ‘ ν¬μ ν¨
109 7 λ³μ 7μΈ΅ μ₯μ ν¬μ
112 15 15μΈ΅ μννΈ λ² λλ€μμ ν¬μ
113 10 10μΈ΅
129 14 14μΈ΅
133 13 μννΈ 13μΈ΅κ³Ό 14μΈ΅ μ¬μ΄ κ³λ¨ μ°½λ¬ΈμΌλ‘ λ°μ΄λ΄λ¦Ό
136 13 μννΈ 13μΈ΅ λμ΄μμ ν¬μ νμ¬ 2μΈ΅ λκ°μΌλ‘ λ¨μ΄μ§.
A:
Use Series.str.extract with digits before μΈ΅:
df['FALLDOWN_FLOOR'] = df['FALLDOWN_LOCATION'].str.extract(r'(\d+)μΈ΅', expand=False)
print (df[['FALLDOWN_FLOOR','FALLDOWN_LOCATION']])
FALLDOWN_FLOOR FALLDOWN_LOCATION
i
9 10 λλ‘μ μΉλ²½/10μΈ΅
106 5 5μΈ΅ λΉλΌμμ 1μΈ΅μΌλ‘ ν¬μ ν¨
109 7 λ³μ 7μΈ΅ μ₯μ ν¬μ
112 15 15μΈ΅ μννΈ λ² λλ€μμ ν¬μ
113 10 10μΈ΅
129 14 14μΈ΅
133 13 μννΈ 13μΈ΅κ³Ό 14μΈ΅ μ¬μ΄ κ³λ¨ μ°½λ¬ΈμΌλ‘ λ°μ΄λ΄λ¦Ό
136 13 μννΈ 13μΈ΅ λμ΄μμ ν¬μ νμ¬ 2μΈ΅ λκ°μΌλ‘ λ¨μ΄μ§.
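If existing non-NaN FALLDOWN_FLOOR values should be kept and only the gaps filled, as the question describes, combining the extraction with fillna is one option (a sketch):
extracted = df['FALLDOWN_LOCATION'].str.extract(r'(\d+)μΈ΅', expand=False)
df['FALLDOWN_FLOOR'] = df['FALLDOWN_FLOOR'].fillna(extracted)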
|
How do I extract an int value from a string value and put it in an int column?
|
I have a dataframe with over 100,000 rows and 200 columns
There are some NaN values in the FALLDOWN_FLOOR column, so I would like to extract the int value out of FALLDOWN_LOCATION and fill the NaN values in FALLDOWN_FLOOR.
Here is an example of the FALLDOWN_LOCATION column:
9 λλ‘μ μΉλ²½/10μΈ΅
106 5μΈ΅ λΉλΌμμ 1μΈ΅μΌλ‘ ν¬μ ν¨
109 λ³μ 7μΈ΅ μ₯μ ν¬μ
112 15μΈ΅ μννΈ λ² λλ€μμ ν¬μ
113 10μΈ΅
129 14μΈ΅
133 μννΈ 13μΈ΅κ³Ό 14μΈ΅ μ¬μ΄ κ³λ¨ μ°½λ¬ΈμΌλ‘ λ°μ΄λ΄λ¦Ό
136 μννΈ 13μΈ΅ λμ΄μμ ν¬μ νμ¬ 2μΈ΅ λκ°μΌλ‘ λ¨μ΄μ§.
I want to take the first int value that comes with 'μΈ΅', so the desired output of the FALLDOWN_FLOOR column would be like this:
9 10
106 5
109 7
112 15
113 10
129 14
133 13
136 13
so when I input df.loc[con, ['FALLDOWN_FLOOR','FALLDOWN_LOCATION']] the output should be like this:
FALLDOWN_FLOOR FALLDOWN_LOCATION
9 10 λλ‘μ μΉλ²½/10μΈ΅
106 5 5μΈ΅ λΉλΌμμ 1μΈ΅μΌλ‘ ν¬μ ν¨
109 7 λ³μ 7μΈ΅ μ₯μ ν¬μ
112 15 15μΈ΅ μννΈ λ² λλ€μμ ν¬μ
113 10 10μΈ΅
129 14 14μΈ΅
133 13 μννΈ 13μΈ΅κ³Ό 14μΈ΅ μ¬μ΄ κ³λ¨ μ°½λ¬ΈμΌλ‘ λ°μ΄λ΄λ¦Ό
136 13 μννΈ 13μΈ΅ λμ΄μμ ν¬μ νμ¬ 2μΈ΅ λκ°μΌλ‘ λ¨μ΄μ§.
|
[
"Use Series.str.extract with digits before μΈ΅:\ndf['FALLDOWN_FLOOR'] = df['FALLDOWN_LOCATION'].str.extract(r'(\\d+)μΈ΅', expand=False)\nprint (df[['FALLDOWN_FLOOR','FALLDOWN_LOCATION']])\n FALLDOWN_FLOOR FALLDOWN_LOCATION\ni \n9 10 λλ‘μ μΉλ²½/10μΈ΅\n106 5 5μΈ΅ λΉλΌμμ 1μΈ΅μΌλ‘ ν¬μ ν¨\n109 7 λ³μ 7μΈ΅ μ₯μ ν¬μ \n112 15 15μΈ΅ μννΈ λ² λλ€μμ ν¬μ \n113 10 10μΈ΅\n129 14 14μΈ΅\n133 13 μννΈ 13μΈ΅κ³Ό 14μΈ΅ μ¬μ΄ κ³λ¨ μ°½λ¬ΈμΌλ‘ λ°μ΄λ΄λ¦Ό\n136 13 μννΈ 13μΈ΅ λμ΄μμ ν¬μ νμ¬ 2μΈ΅ λκ°μΌλ‘ λ¨μ΄μ§.\n\n"
] |
[
1
] |
[] |
[] |
[
"numpy",
"pandas",
"python"
] |
stackoverflow_0074556563_numpy_pandas_python.txt
|
Q:
Python/Selenium - Clear the cache and cookies in my chrome webdriver?
I'm trying to clear the cache and cookies in my chrome browser (webdriver from selenium) but I can't find any solutions for specifically the chrome driver. How do I clear the cache and cookies in Python? Thanks!
A:
Taken from this post:
For cookies, you can use the delete_all_cookies function:
driver.delete_all_cookies()
For cache, there isn't a direct way to do this through Selenium. If you are trying to make sure everything is cleared at the beginning of starting a Chrome driver, or when you are done, then you don't need to do anything. Every time you initialize a webdriver, it is a brand new instance with no cache, cookies, or history. Every time you terminate the driver, all these are cleared.
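Putting both points together, a minimal sketch (assuming the usual from selenium import webdriver):
driver.delete_all_cookies()    # wipe cookies in the running session
driver.quit()                  # discard the temporary profile (cache included)
driver = webdriver.Chrome()    # brand-new instance: empty cache, cookies, history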
A:
Cache clearing for Chromedriver with Selenium in November 2020:
Use this function, which opens a new tab, chooses to delete everything, confirms, and goes back to the previously active tab.
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
import time
driver = webdriver.Chrome("path/to/chromedriver.exe")
def delete_cache():
driver.execute_script("window.open('');")
time.sleep(2)
driver.switch_to.window(driver.window_handles[-1])
time.sleep(2)
driver.get('chrome://settings/clearBrowserData') # for old chromedriver versions use cleardriverData
time.sleep(2)
actions = ActionChains(driver)
actions.send_keys(Keys.TAB * 3 + Keys.DOWN * 3) # send right combination
actions.perform()
time.sleep(2)
actions = ActionChains(driver)
actions.send_keys(Keys.TAB * 4 + Keys.ENTER) # confirm
actions.perform()
time.sleep(5) # wait some time to finish
driver.close() # close this tab
driver.switch_to.window(driver.window_handles[0]) # switch back
delete_cache()
UPDATE 01/2021: Apparently the settings section in chromedriver is subject to change. The old version was chrome://settings/cleardriverData. In any doubt, go to chrome://settings/, click on the browser data/cache clearing section and copy the new term.
A:
2022 Method That Works
I used a similar method to @do-me's answer but made it a bit more functional. Also, his Tabs weren't mapping to the right places for me so I made some edits for it to work in 2022 (on mine at least).
import time
from pathlib import Path
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
CHROMEDRIVER = Path('chromedriver.exe')
def start_driver():
driver = webdriver.Chrome(executable_path=str(CHROMEDRIVER))
delete_cache(driver)
return driver
def delete_cache(driver):
driver.execute_script("window.open('')") # Create a separate tab than the main one
driver.switch_to.window(driver.window_handles[-1]) # Switch window to the second tab
driver.get('chrome://settings/clearBrowserData') # Open your chrome settings.
perform_actions(driver, Keys.TAB * 2 + Keys.DOWN * 4 + Keys.TAB * 5 + Keys.ENTER) # Tab to the time select and key down to say "All Time" then go to the Confirm button and press Enter
driver.close() # Close that window
driver.switch_to.window(driver.window_handles[0]) # Switch Selenium controls to the original tab to continue normal functionality.
def perform_actions(driver, keys):
actions = ActionChains(driver)
actions.send_keys(keys)
time.sleep(2)
print('Performing Actions!')
actions.perform()
if __name__ == '__main__':
driver = start_driver()
A:
Step 1:
pip install keyboard

Step 2: use it in your code:
import keyboard
from time import sleep

self.driver.get('chrome://settings/clearBrowserData')
sleep(10)
keyboard.send("Enter")
A:
self.driver.execute_cdp_cmd('Storage.clearDataForOrigin', {
"origin": '*',
"storageTypes": 'all',
})
Here is a solution from me; it uses the Chrome DevTools Protocol to simulate the "Clear all data" button from the Application tab in DevTools. I hope it is helpful.
|
Python/Selenium - Clear the cache and cookies in my chrome webdriver?
|
I'm trying to clear the cache and cookies in my chrome browser (webdriver from selenium) but I can't find any solutions for specifically the chrome driver. How do I clear the cache and cookies in Python? Thanks!
|
[
"Taken from this post:\nFor cookies, you can use the delete_all_cookies function: \ndriver.delete_all_cookies()\n\nFor cache, there isn't a direct way to do this through Selenium. If you are trying to make sure everything is cleared at the beginning of starting a Chrome driver, or when you are done, then you don't need to do anything. Every time you initialize a webdriver, it is a brand new instance with no cache, cookies, or history. Every time you terminate the driver, all these are cleared.\n",
"Cache clearing for Chromedriver with Selenium in November 2020:\nUse this function which opens a new tab, choses to delete everything, confirms and goes back to previously active tab.\nfrom selenium.webdriver.common.action_chains import ActionChains\nfrom selenium.webdriver.common.keys import Keys\nimport time\ndriver = webdriver.Chrome(\"path/to/chromedriver.exe\")\n\ndef delete_cache():\n driver.execute_script(\"window.open('');\")\n time.sleep(2)\n driver.switch_to.window(driver.window_handles[-1])\n time.sleep(2)\n driver.get('chrome://settings/clearBrowserData') # for old chromedriver versions use cleardriverData\n time.sleep(2)\n actions = ActionChains(driver) \n actions.send_keys(Keys.TAB * 3 + Keys.DOWN * 3) # send right combination\n actions.perform()\n time.sleep(2)\n actions = ActionChains(driver) \n actions.send_keys(Keys.TAB * 4 + Keys.ENTER) # confirm\n actions.perform()\n time.sleep(5) # wait some time to finish\n driver.close() # close this tab\n driver.switch_to.window(driver.window_handles[0]) # switch back\ndelete_cache()\n\nUPDATE 01/2021: Apparently the settings section in chromedriver is subject to change. The old version was chrome://settings/cleardriverData. In any doubt, go to chrome://settings/, click on the browser data/cache clearing section and copy the new term.\n",
"2022 Method That Works\nI used a similar method to @do-me's answer but made it a bit more functional. Also, his Tabs weren't mapping to the right places for me so I made some edits for it to work in 2022 (on mine at least).\nimport time\nfrom pathlib import Path\n\nfrom selenium import webdriver\nfrom selenium.webdriver.common.action_chains import ActionChains\nfrom selenium.webdriver.common.keys import Keys\n\nCHROMEDRIVER = Path('chromedriver.exe')\n\n\ndef start_driver():\n driver = webdriver.Chrome(executable_path=str(CHROMEDRIVER))\n delete_cache(driver)\n return driver\n\n\ndef delete_cache(driver):\n driver.execute_script(\"window.open('')\") # Create a separate tab than the main one\n driver.switch_to.window(driver.window_handles[-1]) # Switch window to the second tab\n driver.get('chrome://settings/clearBrowserData') # Open your chrome settings.\n perform_actions(driver, Keys.TAB * 2 + Keys.DOWN * 4 + Keys.TAB * 5 + Keys.ENTER) # Tab to the time select and key down to say \"All Time\" then go to the Confirm button and press Enter\n driver.close() # Close that window\n driver.switch_to.window(driver.window_handles[0]) # Switch Selenium controls to the original tab to continue normal functionality.\n\n\ndef perform_actions(driver, keys):\n actions = ActionChains(driver)\n actions.send_keys(keys)\n time.sleep(2)\n print('Performing Actions!')\n actions.perform()\n\n\nif __name__ == '__main__':\n driver = start_driver()\n\n",
"in step one =>\npip install keyboard\n\nstep2 : use it in your code =>\nfrom time import sleep\nself.driver.get('chrome://settings/clearBrowserData')\nsleep(10)\nkeyboard.send(\"Enter\")\n\n",
"self.driver.execute_cdp_command('Storage.clearDataForOrigin', {\n \"origin\": '*',\n \"storageTypes\": 'all',\n})\n\nHere is a solution from me, it uses the chrome devtools protocol to simulate the \"Clear all data\" button from the application Tab in devtools. I hope it was helpful\n"
] |
[
30,
8,
2,
0,
0
] |
[] |
[] |
[
"python",
"selenium",
"selenium_chromedriver",
"selenium_webdriver",
"webdriver"
] |
stackoverflow_0050456783_python_selenium_selenium_chromedriver_selenium_webdriver_webdriver.txt
|
Q:
How to find the count of same values in a row in a dataframe?
The dataframe is as follows:
a | b | c | d
-------------------------------
TRUE FALSE TRUE TRUE
FALSE FALSE FALSE TRUE
TRUE TRUE TRUE TRUE
TRUE FALSE TRUE FALSE
I need to find the count of the TRUEs in each row.
The last column should contain the count, as follows:
a | b | c | d | count
---------------------------------------
TRUE FALSE TRUE TRUE 3
FALSE FALSE FALSE TRUE 1
TRUE TRUE TRUE TRUE 4
TRUE FALSE TRUE FALSE 2
The logic I tried is:
df.groupby(df.columns.tolist(),as_index=False).size()
But it doesn't work as expected.
Could anyone please help me out here?
Thank you.
A:
Because True values are treated as 1, you can use sum:
df['count'] = df.sum(axis=1)
If TRUEs are strings:
df['count'] = df.eq('TRUE').sum(axis=1)
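A quick end-to-end check with the sample data (a sketch using real booleans):
import pandas as pd

df = pd.DataFrame({'a': [True, False, True, True],
                   'b': [False, False, True, False],
                   'c': [True, False, True, True],
                   'd': [True, True, True, False]})
df['count'] = df.sum(axis=1)   # gives 3, 1, 4, 2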
|
How to find the count of same values in a row in a dataframe?
|
The dataframe is as follows:
a | b | c | d
-------------------------------
TRUE FALSE TRUE TRUE
FALSE FALSE FALSE TRUE
TRUE TRUE TRUE TRUE
TRUE FALSE TRUE FALSE
I need to find the count of the TRUEs in each row.
The last column should contain the count, as follows:
a | b | c | d | count
---------------------------------------
TRUE FALSE TRUE TRUE 3
FALSE FALSE FALSE TRUE 1
TRUE TRUE TRUE TRUE 4
TRUE FALSE TRUE FALSE 2
The logic I tried is:
df.groupby(df.columns.tolist(),as_index=False).size()
But it doesn't work as expected.
Could anyone please help me out here?
Thank you.
|
[
"Because Trues are processing like 1 you can use sum:\ndf['count'] = df.sum(axis=1)\n\nIf TRUEs are strings:\ndf['count'] = df.eq('TRUE').sum(axis=1)\n\n"
] |
[
1
] |
[] |
[] |
[
"count",
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074556629_count_dataframe_pandas_python.txt
|
Q:
How to merge two dataframes with different variations of a column's values?
I have two data frames that I want to merge on the same column name, but the values can have different variations of a value.
Example variations of a value:
Variations
USA
US
United States
United States of America
The United States of America
And let's suppose the data frames are as below:
df1 =
country          column B
India            Cell 2
China            Cell 4
United States    Cell 2
UK               Cell 4
df2 =
Country    clm
USA        val1
CH         val2
IN         val3
Now how do I merge such that the United States is merged with USA?
I have tried DataFrame merge but it merges only on the matched values of the column name.
Is there a way to match the variations and merge the dataframes?
A:
You simply create a reftable then merge
Your data:
df = pd.DataFrame({'name':['USA', 'US', 'United States', 'FR', 'France'],
'val':[1,2,3,4,5]})
df
name val
0 USA 1
1 US 2
2 United States 3
3 FR 4
4 France 5
Your reftable:
reftable = pd.DataFrame({'name':['United States', 'US', 'USA', 'United States of America', 'The United States of America', 'France', 'FR', 'Frank'],
'uniqname':['us']*5+['fr']*3})
reftable
name uniqname
0 United States us
1 US us
2 USA us
3 United States of America us
4 The United States of America us
5 France fr
6 FR fr
7 Frank fr
Now merge:
new = pd.merge(df, reftable, on='name', how='left')
new
name val uniqname
0 USA 1 us
1 US 2 us
2 United States 3 us
3 FR 4 fr
4 France 5 fr
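To finish the original two-DataFrame scenario, the same reftable can normalize both frames before the final merge. A sketch with the question's column names, assuming reftable has been extended to cover every country that appears:
df1n = df1.merge(reftable, left_on='country', right_on='name', how='left')
df2n = df2.merge(reftable, left_on='Country', right_on='name', how='left')
merged = df1n.merge(df2n, on='uniqname', suffixes=('_1', '_2'))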
A:
Use .count to count how many times United States is stated in the list, and then make an if statement to see if United States is listed more than once in the list. Do it for all of the other options and make a final if statement to check if any of them are in the list to output the value that you want.
|
How to merge two dataframes with different variations of a column values?
|
I have two data frames that I want to merge on the same column, but the values can appear in different variations.
Example variations of a value:
Variations
USA
US
United States
United States of America
The United States of America
And let's suppose the data frames as below:
df1 =
country         column B
India           Cell 2
China           Cell 4
United States   Cell 2
UK              Cell 4
df2 =
Country   clm
USA       val1
CH        val2
IN        val3
Now how do I merge such that the United States is merged with USA?
I have tried DataFrame merge but it merges only on the matched values of the column name.
Is there a way to match the variations and merge the dataframes?
|
[
"You simply create a reftable then merge\nYour data:\ndf = pd.DataFrame({'name':['USA', 'US', 'United States', 'FR', 'France'],\n 'val':[1,2,3,4,5]})\ndf\n\n name val\n0 USA 1\n1 US 2\n2 United States 3\n3 FR 4\n4 France 5\n\nYour reftable:\nreftable = pd.DataFrame({'name':['United States', 'US', 'USA', 'United States of America', 'The United States of America', 'France', 'FR', 'Frank'],\n 'uniqname':['us']*5+['fr']*3})\nreftable\n name uniqname\n0 United States us\n1 US us\n2 USA us\n3 United States of America us\n4 The United States of America us\n5 France fr\n6 FR fr\n7 Frank fr\n\nNow merge:\nnew = pd.merge(df, reftable, on='name', how='left')\nnew\n\n name val uniqname\n0 USA 1 us\n1 US 2 us\n2 United States 3 us\n3 FR 4 fr\n4 France 5 fr\n\n",
"Use .count to count how many times United States is stated in the list and then make an if command to see if united stated is listed more than once in the list. Do it to all of the other options and make a final if command to check if either any of them are in the list to output the value that you want.\n"
] |
[
1,
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074555443_dataframe_pandas_python.txt
|
Q:
Market Basket Association Analysis python or SQL
I am using a dataset as below. Rows show invoice numbers, columns show products. I want to show the co-occurrence of products on the same invoice as a matrix (i.e. there will be products in both rows and columns, and the intersection of a row and column will show how many times those 2 products are on the same invoice). How can I do that? Thanks.
Note: '1' indicates that the product is included in that invoice, and '0' indicates that it is not.
Finally, I want to get a matrix like the one in the picture; intersecting cells show the number of sales of the relevant product pair.
import pandas as pd
ids = ['invoice_1','invoice_2','invoice_3','invoice_4','invoice_5','invoice_6']
A= [0,0,1,0,1,1]
B= [0,1,1,0,1,1]
C= [1,1,1,0,1,0]
D= [1,0,0,1,1,0]
df=pd.DataFrame.from_dict({'A':A, 'B':B, 'C':C, 'D':D})
df.index=ids
Actually I want to get Table 2 from Table 1. AA=3 because product A is included in 3 invoices (rows) in total. AB=4 because A and B are included in 4 invoices (rows) together.
Note: It does not matter if the AA, BB, CC, DD cells are not filled. The pairs of distinct products (like AB, DC, etc.) are what matter to me.
Table 1
A B C D
invoice_1 0 0 1 1
invoice_2 0 1 1 0
invoice_3 1 1 1 0
invoice_4 0 0 0 1
invoice_5 1 1 1 1
invoice_6 1 1 0 0
invoice_7 1 1 0 0
Table 2
A B C D
A 4 4 2 1
B 4 4 3 1
C 2 3 4 2
D 1 1 2 3
A:
The OP made two mistakes here. The first one is the input to generate the intended Table 1 should be:
import pandas as pd
ids = ['invoice_1', 'invoice_2', 'invoice_3', 'invoice_4', 'invoice_5', 'invoice_6', 'invoice_7']
A = [0, 0, 1, 0, 1, 1, 1]
B = [0, 1, 1, 0, 1, 1, 1]
C = [1, 1, 1, 0, 1, 0, 0]
D = [1, 0, 0, 1, 1, 0, 0]
df = pd.DataFrame(data={'A': A, 'B': B, 'C': C, 'D': D}, index=ids)
Table 1
A B C D
invoice_1 0 0 1 1
invoice_2 0 1 1 0
invoice_3 1 1 1 0
invoice_4 0 0 0 1
invoice_5 1 1 1 1
invoice_6 1 1 0 0
invoice_7 1 1 0 0
Table 2 is actually the dot product of 2 matrices, i.e. matrix df and its transpose. One can use numpy.dot to produce it.
import numpy as np
pd.DataFrame(data=np.dot(df.T, df), index=df.columns, columns=df.columns)
Table 2
A B C D
A 4 4 2 1
B 4 5 3 1
C 2 3 4 2
D 1 1 2 3
The second mistake I pointed out is BB where it should be 5 instead of 4.
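As a side note, the same co-occurrence matrix can be computed without leaving pandas; DataFrame.dot keeps the product labels attached automatically:
table2 = df.T.dot(df)  # same values as np.dot(df.T, df), returned as a labeled DataFrame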
|
Market Basket Association Analysis python or SQL
|
I am using a dataset as below. Rows show invoice numbers, columns show products. I want to show the co-occurrence of products on the same invoice as a matrix (i.e. there will be products in both rows and columns, and the intersection of a row and column will show how many times those 2 products are on the same invoice). How can I do that? Thanks.
Note: '1' indicates that the product is included in that invoice, and '0' indicates that it is not.
Finally, I want to get a matrix like the one in the picture; intersecting cells show the number of sales of the relevant product pair.
import pandas as pd
ids = ['invoice_1','invoice_2','invoice_3','invoice_4','invoice_5','invoice_6']
A= [0,0,1,0,1,1]
B= [0,1,1,0,1,1]
C= [1,1,1,0,1,0]
D= [1,0,0,1,1,0]
df=pd.DataFrame.from_dict({'A':A, 'B':B, 'C':C, 'D':D})
df.index=ids
Actually I want to get Table 2 from Table 1. AA=3 because product A is included in 3 invoices (rows) in total. AB=4 because A and B are included in 4 invoices (rows) together.
Note: It does not matter if the AA, BB, CC, DD cells are not filled. The pairs of distinct products (like AB, DC, etc.) are what matter to me.
Table 1
A B C D
invoice_1 0 0 1 1
invoice_2 0 1 1 0
invoice_3 1 1 1 0
invoice_4 0 0 0 1
invoice_5 1 1 1 1
invoice_6 1 1 0 0
invoice_7 1 1 0 0
Table 2
A B C D
A 4 4 2 1
B 4 4 3 1
C 2 3 4 2
D 1 1 2 3
|
[
"The OP made two mistakes here. The first one is the input to generate the intended Table 1 should be:\nimport pandas as pd\n\nids = ['invoice_1', 'invoice_2', 'invoice_3', 'invoice_4', 'invoice_5', 'invoice_6', 'invoice_7']\nA = [0, 0, 1, 0, 1, 1, 1]\nB = [0, 1, 1, 0, 1, 1, 1]\nC = [1, 1, 1, 0, 1, 0, 0]\nD = [1, 0, 0, 1, 1, 0, 0]\ndf = pd.DataFrame(data={'A': A, 'B': B, 'C': C, 'D': D}, index=ids)\n\nTable 1\n A B C D\n invoice_1 0 0 1 1\n invoice_2 0 1 1 0\n invoice_3 1 1 1 0\n invoice_4 0 0 0 1\n invoice_5 1 1 1 1\n invoice_6 1 1 0 0\n invoice_7 1 1 0 0\n\nTable 2 is actually the dot product of 2 matrices, i.e. matrix df and its transpose. One can use numpy.dot to produce it.\nimport numpy as np\n\npd.DataFrame(data=np.dot(df.T, df), index=df.columns, columns=df.columns)\n\nTable 2 \n A B C D\n A 4 4 2 1\n B 4 5 3 1\n C 2 3 4 2\n D 1 1 2 3\n\nThe second mistake I pointed out is BB where it should be 5 instead of 4.\n"
] |
[
1
] |
[] |
[] |
[
"market_basket_analysis",
"pandas",
"python"
] |
stackoverflow_0074551290_market_basket_analysis_pandas_python.txt
|
Q:
When transforming a list of tuples to dataframes is there a way to keep the integers integers?
If I have a list like this
lista=[(0.11838, 0.1926, 0.12071, 0.27438, -0.0253, -0.18799, 0.01544, 0.24514, 0.19905, 0.18563, 0.19999, 0.25336, 783, 783, 783, 783), (nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan), (nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan), (0.11838, 0.1926, 0.12071, 0.27438, -0.0253, -0.18799, 0.01544, 0.24514, 0.19905, 0.18563, 0.19999, 0.25336, 783, 783, 783, 783), (nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan), (nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan), (0.11838, 0.1926, 0.12071, 0.27438, -0.0253, -0.18799, 0.01544, 0.24514, 0.19905, 0.18563, 0.19999, 0.25336, 783, 783, 783, 783), (nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan)]
is there a way to keep the integers (783) from being transformed into floats when converting to a DataFrame?
Now I get this
pd.DataFrame(lista)
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
0 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514 0.19905 0.18563 0.19999 0.25336 783.0 783.0 783.0 783.0
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514 0.19905 0.18563 0.19999 0.25336 783.0 783.0 783.0 783.0
4 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
5 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
6 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514 0.19905 0.18563 0.19999 0.25336 783.0 783.0 783.0 783.0
7 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
A:
Not during the DataFrame creation. Since np.nan is a float Dtype, the entire column is transformed into floats. The individual Dtypes will have to be transformed after the DataFrame is created.
Use DataFrame.convert_dtypes:
df = pd.DataFrame(lista).convert_dtypes()
print (df)
0 1 2 3 4 5 6 7 \
0 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514
1 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA>
2 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA>
3 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514
4 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA>
5 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA>
6 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514
7 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA>
8 9 10 11 12 13 14 15
0 0.19905 0.18563 0.19999 0.25336 783 783 783 783
1 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA>
2 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA>
3 0.19905 0.18563 0.19999 0.25336 783 783 783 783
4 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA>
5 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA>
6 0.19905 0.18563 0.19999 0.25336 783 783 783 783
7 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA>
Another idea if always only integers and missing values per columns:
df = pd.DataFrame(lista)
m = df.apply(lambda x: x.dropna().astype(int).eq(x)).any()
df.loc[:, m] = df.loc[:, m].astype('Int64')
print (df)
0 1 2 3 4 5 6 7 \
0 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514
1 NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN
3 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514
4 NaN NaN NaN NaN NaN NaN NaN NaN
5 NaN NaN NaN NaN NaN NaN NaN NaN
6 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514
7 NaN NaN NaN NaN NaN NaN NaN NaN
8 9 10 11 12 13 14 15
0 0.19905 0.18563 0.19999 0.25336 783 783 783 783
1 NaN NaN NaN NaN <NA> <NA> <NA> <NA>
2 NaN NaN NaN NaN <NA> <NA> <NA> <NA>
3 0.19905 0.18563 0.19999 0.25336 783 783 783 783
4 NaN NaN NaN NaN <NA> <NA> <NA> <NA>
5 NaN NaN NaN NaN <NA> <NA> <NA> <NA>
6 0.19905 0.18563 0.19999 0.25336 783 783 783 783
7 NaN NaN NaN NaN <NA> <NA> <NA> <NA>
A:
A pandas column has a data type that fits all values in that column. If you want to manually set column data types, use this (note that a column containing NaN cannot hold plain int; use the nullable 'Int64' dtype for such columns):
convert_dict = {'c1': int,
'c2': float
} # c1 and c2 are example columns here
df = df.astype(convert_dict)
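For the data in the question, where the integer columns also contain NaN, plain int raises an error; a minimal sketch using the nullable Int64 dtype instead (12-15 are the integer column positions from the example):
df = pd.DataFrame(lista)
df[[12, 13, 14, 15]] = df[[12, 13, 14, 15]].astype('Int64')  # NaN becomes <NA>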
|
When transforming a list of tuples to dataframes is there a way to keep the integers integers?
|
If I have a list like this
lista=[(0.11838, 0.1926, 0.12071, 0.27438, -0.0253, -0.18799, 0.01544, 0.24514, 0.19905, 0.18563, 0.19999, 0.25336, 783, 783, 783, 783), (nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan), (nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan), (0.11838, 0.1926, 0.12071, 0.27438, -0.0253, -0.18799, 0.01544, 0.24514, 0.19905, 0.18563, 0.19999, 0.25336, 783, 783, 783, 783), (nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan), (nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan), (0.11838, 0.1926, 0.12071, 0.27438, -0.0253, -0.18799, 0.01544, 0.24514, 0.19905, 0.18563, 0.19999, 0.25336, 783, 783, 783, 783), (nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan)]
is there a way to keep the integers (783) from being transformed into floats when converting to a DataFrame?
Now I get this
pd.DataFrame(lista)
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
0 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514 0.19905 0.18563 0.19999 0.25336 783.0 783.0 783.0 783.0
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514 0.19905 0.18563 0.19999 0.25336 783.0 783.0 783.0 783.0
4 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
5 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
6 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514 0.19905 0.18563 0.19999 0.25336 783.0 783.0 783.0 783.0
7 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
|
[
"Not during the DataFrame creation. Since np.nan is a float Dtype, the entire column is transformed into floats. The individual Dtypes will have to be transformed after the DataFrame is created.\nUse DataFrame.convert_dtypes:\ndf = pd.DataFrame(lista).convert_dtypes()\nprint (df)\n 0 1 2 3 4 5 6 7 \\\n0 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514 \n1 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> \n2 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> \n3 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514 \n4 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> \n5 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> \n6 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514 \n7 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> \n\n 8 9 10 11 12 13 14 15 \n0 0.19905 0.18563 0.19999 0.25336 783 783 783 783 \n1 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> \n2 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> \n3 0.19905 0.18563 0.19999 0.25336 783 783 783 783 \n4 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> \n5 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> \n6 0.19905 0.18563 0.19999 0.25336 783 783 783 783 \n7 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> \n\nAnother idea if always only integers and missing values per columns:\ndf = pd.DataFrame(lista)\n\nm = df.apply(lambda x: x.dropna().astype(int).eq(x)).any()\n\ndf.loc[:, m] = df.loc[:, m].astype('Int64')\nprint (df)\n 0 1 2 3 4 5 6 7 \\\n0 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514 \n1 NaN NaN NaN NaN NaN NaN NaN NaN \n2 NaN NaN NaN NaN NaN NaN NaN NaN \n3 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514 \n4 NaN NaN NaN NaN NaN NaN NaN NaN \n5 NaN NaN NaN NaN NaN NaN NaN NaN \n6 0.11838 0.1926 0.12071 0.27438 -0.0253 -0.18799 0.01544 0.24514 \n7 NaN NaN NaN NaN NaN NaN NaN NaN \n\n 8 9 10 11 12 13 14 15 \n0 0.19905 0.18563 0.19999 0.25336 783 783 783 783 \n1 NaN NaN NaN NaN <NA> <NA> <NA> <NA> \n2 NaN NaN NaN NaN <NA> <NA> <NA> <NA> \n3 0.19905 0.18563 0.19999 0.25336 783 783 783 783 \n4 NaN NaN NaN NaN <NA> <NA> <NA> <NA> \n5 NaN NaN NaN NaN <NA> <NA> <NA> <NA> \n6 0.19905 0.18563 0.19999 0.25336 783 783 783 783 \n7 NaN NaN NaN NaN <NA> <NA> <NA> <NA> \n\n",
"A pandas column has a data type that fits all values in that column. If you want to manually set columns data types, use this\nconvert_dict = {'c1': int,\n 'c2': float\n } # c1 and c2 are example columns here\n\ndf = df.astype(convert_dict)\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"dataframe",
"nan",
"pandas",
"python"
] |
stackoverflow_0074556638_dataframe_nan_pandas_python.txt
|
Q:
how to check every dictionary is perfect in list , python
I have a data set as below
tmp_dict = {
'a': ?,
'b': ?,
'c': ?,
}
and I have a data is a list of dictionaries like
tmp_list = [tmp_dict1, tmp_dict2, tmp_dict3....]
and I found that some of the dictionaries do not have all of the keys 'a', 'b', 'c'.
How do I check for and fill in the missing keys?
A:
You could try something like this:
# List of keys to look for in each dictionary
dict_keys = ['a','b','c']
# Generate the dictionaries for demonstration purposes only
tmp_dict1 = {'a':[1,2,3], 'b':[4,5,6]}
tmp_dict2 = {'a':[7,8,9], 'b':[10,11,12], 'c':[13,14,15]}
tmp_dict3 = {'a':[16,17,18], 'c':[19,20,21]}
# Add the dictionaries to a list as per OP instructions
tmp_list = [tmp_dict1, tmp_dict2, tmp_dict3]
#--------------------------------------------------------
# Check for missing keys in each dict.
# Print the dict name and keys missing.
# -------------------------------------------------------
for i, dct in enumerate(tmp_list, start=1):
for k in dict_keys:
if dct.get(k) == None:
print(f"tmp_dict{i} is missing key:", k)
OUTPUT:
tmp_dict1 is missing key: c
tmp_dict3 is missing key: b
A:
You can compare the keys in the dictionary with a set containing all the expected keys.
for d in tmp_list:
if set(d) != {'a', 'b', 'c'}:
print(d)
A:
I think you want this.
tmp_dict = {'a':1, 'b': 2, 'c':3}
default_keys = tmp_dict.keys()
tmp_list = [{'a': 1}, {'b': 2,}, {'c': 3}]
for t in tmp_list:
current_dict = t.keys()
if default_keys - current_dict:
t.update({diff: None for diff in list(default_keys-current_dict)})
print(tmp_list)
Output:
[{'a': 1, 'c': None, 'b': None}, {'b': 2, 'a': None, 'c': None}, {'c': 3, 'a': None, 'b': None}]
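If you only need the fill step, dict.setdefault does it in place without computing key differences. A minimal sketch, assuming the expected keys are 'a', 'b', 'c' and missing values should default to None:
expected = ['a', 'b', 'c']
for d in tmp_list:
    for k in expected:
        d.setdefault(k, None)  # adds the key only if it is missing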
|
how to check every dictionary is perfect in list , python
|
I have a data set as below
tmp_dict = {
'a': ?,
'b': ?,
'c': ?,
}
and I have a data is a list of dictionaries like
tmp_list = [tmp_dict1, tmp_dict2, tmp_dict3....]
and I found that some of the dictionaries do not have all of the keys 'a', 'b', 'c'.
How do I check for and fill in the missing keys?
|
[
"You could try something like this:\n# List of keys to look for in each dictionary\ndict_keys = ['a','b','c']\n\n# Generate the dictionaries for demonstration purposes only\ntmp_dict1 = {'a':[1,2,3], 'b':[4,5,6]}\ntmp_dict2 = {'a':[7,8,9], 'b':[10,11,12], 'c':[13,14,15]}\ntmp_dict3 = {'a':[16,17,18], 'c':[19,20,21]}\n\n# Add the dictionaries to a list as per OP instructions\ntmp_list = [tmp_dict1, tmp_dict2, tmp_dict3]\n\n#--------------------------------------------------------\n# Check for missing keys in each dict. \n# Print the dict name and keys missing.\n# -------------------------------------------------------\nfor i, dct in enumerate(tmp_list, start=1):\n for k in dict_keys:\n if dct.get(k) == None:\n print(f\"tmp_dict{i} is missing key:\", k)\n\nOUTPUT:\ntmp_dict1 is missing key: c\ntmp_dict3 is missing key: b\n\n",
"You can compare the keys in the dictionary with a set containing all the expected keys.\nfor d in tmp_list:\n if set(d) != {'a', 'b', 'c'}:\n print(d)\n\n",
"I think you want this.\ntmp_dict = {'a':1, 'b': 2, 'c':3}\ndefault_keys = tmp_dict.keys()\ntmp_list = [{'a': 1}, {'b': 2,}, {'c': 3}]\n\nfor t in tmp_list:\n current_dict = t.keys()\n if default_keys - current_dict:\n t.update({diff: None for diff in list(default_keys-current_dict)})\nprint(tmp_list)\n\nOutput:\n[{'a': 1, 'c': None, 'b': None}, {'b': 2, 'a': None, 'c': None}, {'c': 3, 'a': None, 'b': None}]\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"dictionary",
"fill",
"list",
"python"
] |
stackoverflow_0074556605_dictionary_fill_list_python.txt
|
Q:
backend_youtube_dl.py", line 54, in _fetch_basic self._dislikes = self._ydl_info['dislike_count'] KeyError: 'dislike_count'
I have the below code that has been used to download YouTube videos. I automatically detect whether it's a playlist or a single video. However, all of a sudden it started giving the above error. What can be the problem?
import pafy
from log import *
import tkinter.filedialog
import pytube
url = input("Enter url :")
directory = tkinter.filedialog.askdirectory()
def single_url(url,directory):
print("==================================================================================================================")
video = pafy.new(url)
print(url)
print(video.title)
#logs(video.title,url)
file_object = open(directory+"/links.log", "a")
file_object.write(video.title +' '+ url + '\n')
file_object.close()
print('Rating :',video.rating,', Duration :',video.duration,', Likes :',video.likes, ', Dislikes : ', video.dislikes)
#print(video.description)
best = video.getbest()
print(best.resolution, best.extension)
best.download(quiet=False, filepath=directory+'/'+video.title+"." + best.extension)
print("saved at :", directory, " directory")
print("==================================================================================================================")
def playlist_func(url,directory):
try:
playlist = pytube.Playlist(url)
file_object = open(directory+"/links.log", "a")
file_object.write('Playlist Url :'+ url + '\n')
file_object.close()
print('There are {0}'.format(len(playlist.video_urls)))
for url in playlist.video_urls:
single_url(url,directory)
except:
single_url(url,directory)
playlist_func(url,directory)
A:
Your issue doesn't have anything to do with your code.
YouTube no longer has a dislike count; they simply removed it.
You just have to wait for the pafy package to be updated accordingly, or patch the package locally and remove that part by yourself.
Keep in mind there are at least 5 different pull requests open trying to fix it.
A:
MANUAL FIX:
You can just set the attribute _dislikes to 0 in file backend_youtube_dl.py
Line 54:
self._dislikes = 0 # self._ydl_info['dislike_count']
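A slightly safer variant of the same patch uses dict.get, so the line keeps working even if the key ever comes back:
self._dislikes = self._ydl_info.get('dislike_count', 0)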
A:
While it is true that the dislike_count was removed,
there is a pafy fork that has already applied the fix, so you don't have to wait for a new release (which I doubt will happen anytime soon).
I've been using this one with no issues so far.
Install using:
pip install git+https://github.com/Cupcakus/pafy
You don't have to make any changes at all; just remove (pip uninstall) the initial pafy with the dislike-count issue before installing this one, to avoid conflicts.
A:
I faced a similar issue; it is due to YouTube's recent removal of the Dislike button, so there is nothing wrong with the code. And if any operating-system error regarding youtube-dl occurs, you need to install it from the prompt:
#conda install -c conda-forge youtube-dl
#pip3 install youtube-dl
A:
We can manually fix it by going to
C:\Users\admin\AppData\Local\Programs\Python\Python310\lib\site-packages\pafy\backend_youtube_dl.py
and open python file in editor and comment out
self._likes = self._ydl_info['like_count']
self._dislikes = self._ydl_info['dislike_count']
these two lines at line 53 and 54 and save the file.
PS:
the location of python file may differ according to your system
A:
To get rid of this problem, the best way (currently) is to patch it up locally, because the Pafy and youtube-dl packages were not updated alongside YouTube's change (removal of the dislike feature).
First, check the error message. I hope you will find it like this:
Traceback (most recent call last):
File "D:\Random Work\youtube scraping\yt_vc.py", line 5, in <module>
result = pafy.new(url)
File "C:\Users\username\Anaconda3\envs\wsWork\lib\site-packages\pafy\pafy.py", line 124, in new
return Pafy(url, basic, gdata, size, callback, ydl_opts=ydl_opts)
File "C:\Users\username\Anaconda3\envs\wsWork\lib\site-packages\pafy\backend_youtube_dl.py", line 31, in __init__
super(YtdlPafy, self).__init__(*args, **kwargs)
File "C:\Users\username\Anaconda3\envs\wsWork\lib\site-packages\pafy\backend_shared.py", line 97, in __init__
self._fetch_basic()
File "C:\Users\username\Anaconda3\envs\wsWork\lib\site-packages\pafy\backend_youtube_dl.py", line 54, in _fetch_basic
self._dislikes = self._ydl_info['dislike_count']
KeyError: 'dislike_count'
Ignore all of them except the KeyError part above. In my case, it is:
File "C:\Users\username\Anaconda3\envs\wsWork\lib\site-packages\pafy\backend_youtube_dl.py", line 54, in _fetch_basic
self._dislikes = self._ydl_info['dislike_count']
Give a closer look to the file address
File "C:\Users\username\Anaconda3\envs\wsWork\lib\site-packages\pafy\backend_youtube_dl.py
and to the line number of that file, line 54, which is written after the file address/path. It means that line 54 of backend_youtube_dl.py is causing the problem. If we comment out or remove that line, the problem will be solved.
My file address and line number and yours can be different; don't worry about that. Just follow the file address/path shown in your terminal, open that file, find the line, comment it out or remove it, then save.
After that, I hope this error won't show again and program will run without any issue.
A:
Simply you can comment out the self._dislikes = self._ydl_info['dislike_count'] line in file backend_youtube_dl.py
|
backend_youtube_dl.py", line 54, in _fetch_basic self._dislikes = self._ydl_info['dislike_count'] KeyError: 'dislike_count'
|
I have the below code that has been used to download YouTube videos. I automatically detect whether it's a playlist or a single video. However, all of a sudden it started giving the above error. What can be the problem?
import pafy
from log import *
import tkinter.filedialog
import pytube
url = input("Enter url :")
directory = tkinter.filedialog.askdirectory()
def single_url(url,directory):
print("==================================================================================================================")
video = pafy.new(url)
print(url)
print(video.title)
#logs(video.title,url)
file_object = open(directory+"/links.log", "a")
file_object.write(video.title +' '+ url + '\n')
file_object.close()
print('Rating :',video.rating,', Duration :',video.duration,', Likes :',video.likes, ', Dislikes : ', video.dislikes)
#print(video.description)
best = video.getbest()
print(best.resolution, best.extension)
best.download(quiet=False, filepath=directory+'/'+video.title+"." + best.extension)
print("saved at :", directory, " directory")
print("==================================================================================================================")
def playlist_func(url,directory):
try:
playlist = pytube.Playlist(url)
file_object = open(directory+"/links.log", "a")
file_object.write('Playlist Url :'+ url + '\n')
file_object.close()
print('There are {0}'.format(len(playlist.video_urls)))
for url in playlist.video_urls:
single_url(url,directory)
except:
single_url(url,directory)
playlist_func(url,directory)
|
[
"Your issue doesn't have anything to do with your code.\nYoutube does no longer have a dislike count, they simply removed it.\nYou just have to wait for the pafy package to be updated accordingly, or patch the package locally and remove that part by yourself.\nKeep in mind there are at least 5 different pull requests open trying to fix it.\n",
"MANUAL FIX:\nYou can just set the attribute _dislikes to 0 in file backend_youtube_dl.py\nLine 54:\nself._dislikes = 0 # self._ydl_info['dislike_count']\n\n",
"While it is true the dislike_count is removed\nThere's the pafy cloned repo that already adjusted the changes already instead of waiting for a new release which I doubt would happen anytime soon.\nI've been using this one and no issues fn.\nInstall using:\npip install git+https://github.com/Cupcakus/pafy\n\nYou don't have to do any changes at all, just remove (pip uninstall) the initial pafy with the dislike count issues before installing this one yo avoid conflicts.\n",
"I had faced the similar issue but it is due to YouTube recent update of Dislike button. So there is nothing wrong with code. And If there is any Operating System error regarding youtube-dl occur than you need to install this in prompt\n#conda install -c forge youtube-dl\n#pip3 install youtube-dl\n",
"We can manually fix it by going to\nC:\\Users\\admin\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pafy\\backend_youtube_dl.py\nand open python file in editor and comment out\nself._likes = self._ydl_info['like_count']\nself._dislikes = self._ydl_info['dislike_count']\nthese two lines at line 53 and 54 and save the file.\nPS:\nthe location of python file may differ according to your system\n",
"To get rid of this problem, the best way(currently) is to patch it up locally because Pafy and youtube-dl packages are not updated alongside YouTube's update(remove dislike feature).\nFirst, check the error message. I hope you will find it like this:\nTraceback (most recent call last):\n File \"D:\\Random Work\\youtube scraping\\yt_vc.py\", line 5, in <module>\n result = pafy.new(url)\n File \"C:\\Users\\username\\Anaconda3\\envs\\wsWork\\lib\\site-packages\\pafy\\pafy.py\", line 124, in new\n return Pafy(url, basic, gdata, size, callback, ydl_opts=ydl_opts)\n File \"C:\\Users\\username\\Anaconda3\\envs\\wsWork\\lib\\site-packages\\pafy\\backend_youtube_dl.py\", line 31, in __init__\n super(YtdlPafy, self).__init__(*args, **kwargs)\n File \"C:\\Users\\username\\Anaconda3\\envs\\wsWork\\lib\\site-packages\\pafy\\backend_shared.py\", line 97, in __init__\n self._fetch_basic()\n File \"C:\\Users\\username\\Anaconda3\\envs\\wsWork\\lib\\site-packages\\pafy\\backend_youtube_dl.py\", line 54, in _fetch_basic\n self._dislikes = self._ydl_info['dislike_count']\nKeyError: 'dislike_count'\n\nIgnore all of them except the above one of KeyError. In my case, which is:\nFile \"C:\\Users\\username\\Anaconda3\\envs\\wsWork\\lib\\site-packages\\pafy\\backend_youtube_dl.py\", line 54, in _fetch_basic\n self._dislikes = self._ydl_info['dislike_count']\n\nGive a closer look to the file address\nFile \"C:\\Users\\username\\Anaconda3\\envs\\wsWork\\lib\\site-packages\\pafy\\backend_youtube_dl.py\n\nand to the line number of that file line 54 which is written after the file address/path. It means, that backend_youtube_dl.py file's line 54 is occurring problem. If we can comment out or remove that line, that problem will be solved.\nMy file address, line number and your file address, line number can be different. Don't worry about that. Just follow the file address/path which is showing in your terminal. And open that file. Find the line and comment out or remove. Then save.\nAfter that, I hope this error won't show again and program will run without any issue.\n",
"Simply you can comment out the self._dislikes = self._ydl_info['dislike_count'] line in file backend_youtube_dl.py\n"
] |
[
5,
4,
4,
1,
1,
1,
0
] |
[] |
[] |
[
"pafy",
"python"
] |
stackoverflow_0070344739_pafy_python.txt
|
Q:
Load facenet model
I have tried almost all the answers on stackoverflow but nothing worked. Here is my code.
from keras.models import load_model
load_model('facenet_keras.h5')
It is giving me this error
ValueError Traceback (most recent call
last) ~\AppData\Local\Temp\ipykernel_5776\2622147163.py in
----> 1 load_model('facenet_keras.h5')
~\AppData\Roaming\Python\Python39\site-packages\keras\utils\traceback_utils.py
in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # tf.debugging.disable_traceback_filtering()
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
~\AppData\Roaming\Python\Python39\site-packages\keras\utils\generic_utils.py
in func_load(code, defaults, closure, globs)
101 except (UnicodeEncodeError, binascii.Error):
102 raw_code = code.encode("raw_unicode_escape")
--> 103 code = marshal.loads(raw_code)
104 if globs is None:
105 globs = globals()
ValueError: bad marshal data (unknown type code)
To solve the above error I did this
from keras_facenet import FaceNet
embedder = FaceNet()
But I don't want to use the above method; I want to load the facenet model directly. How can I solve this error? Any help is appreciated.
Python version : 3.9.3
tensorflow : 2.11.0
keras : 2.11.0
EDIT
According to V.M's answer, this worked.
model = InceptionResNetV1(
input_shape=(None, None, 3),
classes=512,
)
model.load_weights('20180402-114759.h5')
A:
If you can recreate the architecture, in this case from keras_facenet/inception_resnet_v1, then you can do:
model = InceptionResNetV1(
input_shape=(None, None, 3),
classes=512,
)
model.load_weights('model.h5')
|
Load facenet model
|
I have tried almost all the answers on stackoverflow but nothing worked. Here is my code.
from keras.models import load_model
load_model('facenet_keras.h5')
It is giving me this error
ValueError Traceback (most recent call
last) ~\AppData\Local\Temp\ipykernel_5776\2622147163.py in
----> 1 load_model('facenet_keras.h5')
~\AppData\Roaming\Python\Python39\site-packages\keras\utils\traceback_utils.py
in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # tf.debugging.disable_traceback_filtering()
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
~\AppData\Roaming\Python\Python39\site-packages\keras\utils\generic_utils.py
in func_load(code, defaults, closure, globs)
101 except (UnicodeEncodeError, binascii.Error):
102 raw_code = code.encode("raw_unicode_escape")
--> 103 code = marshal.loads(raw_code)
104 if globs is None:
105 globs = globals()
ValueError: bad marshal data (unknown type code)
To solve the above error I did this
from keras_facenet import FaceNet
embedder = FaceNet()
But I don't want to use the above method; I want to load the facenet model directly. How can I solve this error? Any help is appreciated.
Python version : 3.9.3
tensorflow : 2.11.0
keras : 2.11.0
EDIT
According to V.M's answer, this worked.
model = InceptionResNetV1(
input_shape=(None, None, 3),
classes=512,
)
model.load_weights('20180402-114759.h5')
|
[
"If you can recreate the architecture, in this case from [keras_facenet/inception_resnet_v1][1], then you can do:\nmodel = InceptionResNetV1(\n input_shape=(None, None, 3),\n classes=512,\n )\nmodel.load_weights('model.h5')\n\n"
] |
[
0
] |
[] |
[] |
[
"facenet",
"keras",
"python",
"tensorflow"
] |
stackoverflow_0074556149_facenet_keras_python_tensorflow.txt
|
Q:
I want to save the output text as it is
code
import csv
import pandas as pd
data = []
with open("book1.csv", "r") as f:
reader =csv.reader(f)
next(reader)
for row in reader:
data.append(row[0])
print(data)
df = pd.DataFrame(data)
df.to_csv('save.csv', mode='a', header=True, index=False)
output
(https://i.stack.imgur.com/8JSZL.png)
['If goods sold on credit double entry is:\nAnil Account Dr.\nRevenue Account Cr.\n\nIf good sold on cash means payment received on the spot by cash, cheque, debit/credit card.\nCash/Bank Account Dr.\n\nRevenue Account. Cr.']
['If goods sold on credit double entry is:\nAnil Account Dr.\nRevenue Account Cr.\n\nIf good sold on cash means payment received on the spot by cash, cheque, debit/credit card.\nCash/Bank Account Dr.\n\nRevenue Account. Cr.', 'Let us assume that machine of Rs 5000 is puchased from ram and partial payment of Rs 3000 is done. So the journal enry will be\nMachine A/c Dr 5000\nTo Bank 3000\nTo Ram 2000']
When the text is saved to save.csv, the \n is rendered as a real new line. I want to save the literal \n, i.e. save the output data as-is.
Saved file format (screenshot):
Saved file; I didn't need this format.
A:
I ran into something similar and could not solve it directly, so I came up with a workaround.
#replace '\n' with '\\n' (regex=True is needed so substrings are replaced, not only exact cell matches)
df.replace('\n', '\\n', regex=True, inplace=True)
# export normally
df.to_csv('path')
Whenever you want to open it back, just read the file and replace any '\\n' with '\n' again.
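Worth knowing: to_csv already quotes fields that contain real newlines, so a pandas round trip preserves them even without the workaround; the apparent loss happens in viewers that render the quoted newline as a line break. A quick check:
df = pd.DataFrame({'text': ['line1\nline2']})
df.to_csv('check.csv', index=False)
back = pd.read_csv('check.csv')
print(repr(back.loc[0, 'text']))  # 'line1\nline2'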
|
I want to save the output text as it is
|
code
import csv
import pandas as pd
data = []
with open("book1.csv", "r") as f:
reader =csv.reader(f)
next(reader)
for row in reader:
data.append(row[0])
print(data)
df = pd.DataFrame(data)
df.to_csv('save.csv', mode='a', header=True, index=False)
output
(https://i.stack.imgur.com/8JSZL.png)
['If goods sold on credit double entry is:\nAnil Account Dr.\nRevenue Account Cr.\n\nIf good sold on cash means payment received on the spot by cash, cheque, debit/credit card.\nCash/Bank Account Dr.\n\nRevenue Account. Cr.']
['If goods sold on credit double entry is:\nAnil Account Dr.\nRevenue Account Cr.\n\nIf good sold on cash means payment received on the spot by cash, cheque, debit/credit card.\nCash/Bank Account Dr.\n\nRevenue Account. Cr.', 'Let us assume that machine of Rs 5000 is puchased from ram and partial payment of Rs 3000 is done. So the journal enry will be\nMachine A/c Dr 5000\nTo Bank 3000\nTo Ram 2000']
When the text is saved to save.csv, the \n is rendered as a real new line. I want to save the literal \n, i.e. save the output data as-is.
Saved file format (screenshot):
Saved file; I didn't need this format.
|
[
"I was running through something similar to this and I do not remember that I could figure it out so I came up with a workaround.\n#replace '\\n' with '\\\\n'\ndf.replace('\\n', '\\\\n', inplace= True)\n# export normally\ndf.to_csv('path')\n\nwhenever you want to open it back, just read the file and again replace any '\\\\n' with '\\n'\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074556687_python.txt
|
Q:
What is the difference between torch.nn.functional.grid_sample and torch.nn.functional.interpolate?
Let's say I have an image I want to downsample to half its resolution via either grid_sample or interpolate from the torch.nn.functional library. I select mode ='bilinear' for both cases.
For grid_sample, I'd do the following:
dh = torch.linspace(-1, 1, h // 2)
dw = torch.linspace(-1, 1, w // 2)
meshx, meshy = torch.meshgrid((dh, dw))
grid = torch.stack((meshy, meshx), 2)
grid = grid.unsqueeze(0)  # add batch dim
x_downsampled = torch.nn.functional.grid_sample(img, grid, mode='bilinear')
For interpolate, I'd do:
x_downsampled = torch.nn.functional.interpolate(img, size=(h // 2, w // 2), mode='bilinear')
What do the two methods do differently? Which one is better for my example?
A:
Use the second (interpolate) here, for brevity. grid_sample is more suitable for non-uniform interpolation, where the sampling locations form an arbitrary grid.
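One practical caveat when comparing the two: a grid built with torch.linspace(-1, 1, ...) matches align_corners=True sampling, while interpolate defaults to align_corners=False, so pin the flag explicitly if you want the outputs to line up. A minimal sketch:
import torch.nn.functional as F
x1 = F.interpolate(img, size=(h // 2, w // 2), mode='bilinear', align_corners=True)
x2 = F.grid_sample(img, grid, mode='bilinear', align_corners=True)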
|
What is the difference between torch.nn.functional.grid_sample and torch.nn.functional.interpolate?
|
Let's say I have an image I want to downsample to half its resolution via either grid_sample or interpolate from the torch.nn.functional library. I select mode ='bilinear' for both cases.
For grid_sample, I'd do the following:
dh = torch.linspace(-1, 1, h // 2)
dw = torch.linspace(-1, 1, w // 2)
meshx, meshy = torch.meshgrid((dh, dw))
grid = torch.stack((meshy, meshx), 2)
grid = grid.unsqueeze(0)  # add batch dim
x_downsampled = torch.nn.functional.grid_sample(img, grid, mode='bilinear')
For interpolate, I'd do:
x_downsampled = torch.nn.functional.interpolate(img, size=(h // 2, w // 2), mode='bilinear')
What do the two methods do differently? Which one is better for my example?
|
[
"Second for brevity. Grid_sample is more suitable for non-uniform interpolation.\n"
] |
[
0
] |
[] |
[] |
[
"bilinear_interpolation",
"interpolation",
"python",
"python_3.x",
"torch"
] |
stackoverflow_0072373545_bilinear_interpolation_interpolation_python_python_3.x_torch.txt
|
Q:
.txt file opened in python won't iterate properly
The following contains an abridged version of the code for a text card game I am trying to run. It should get a random string for a card from a random line in "cards.txt", and add it to a user's collection at "user.txt" (user would be the name of the user). A sample line from "users.txt" should look like:
X* NameOfCard
If "user.txt" already contains an entry for a card, it changes the number before the name by 1. If "user.txt" had:
1* Hyper Dragon
then got another Hyper Dragon, the line would look like:
2* Hyper Dragon
If there is no version already in there, it should append a newline that says:
1* NameOfCard
The code however, is flawed. No matter what, it will always change the contents of "users.txt" to:
1* NameOfCard
(followed by 3 blank lines)
I believe the issue to lie in the marked for loop in the following code:
from random import choice
def check(e, c):
if (c in e):
return True
else:
return False
username = input("What is the username?: ")
collectionPath = f"collections\\{username}.txt"
while True:
with open("cards.txt", "r") as cards:
card_drew = f"{choice(cards.readlines())}\n"
print("Card drawn: "+card_drew)
with open(collectionPath, "w+") as file:
copyowned = False
print("Looking for card")
currentline = 0
for line in file:
# this is the marked for loop.
print("test")
print("checking "+line)
currentline += 1
if (check(card_drew, line)):
print("Found card!")
copyowned = True
strnumof = ""
for i in line:
if (i.isdigit()):
strnumof = strnumof+i
numof = int(strnumof)+1
line = (f"{numof}* {card_drew}")
print("Card added, 2nd+ copy")
if (not copyowned):
with open(collectionPath, "a") as file:
file.write(f"1* {card_drew}\n")
print("Card Added, 1st copy")
input(f"{username} drew a(n) {card_drew}")
When I run it, the for loop acts as if it's not there. It won't even run a print function, though an error message never appears. After adding try and except statements, the loop still doesn't raise an error. I have no clue why it's doing this.
Some help would be greatly appreciated.
A:
open(collectionPath, "w+") Opens a file in read and write mode. It creates a new file if it does not exist, if it exists, it erases the contents of the file and the file pointer starts from the beginning.
So essentially you are erasing the contents of your file, and thus cannot read anything from it. You probably wish to open it in read mode instead: open(collectionPath, "r")
Maybe this diagram can be useful for you: Difference between modes a, a+, w, w+, and r+ in built-in open function?
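A minimal sketch of the read-then-rewrite pattern the loop needs (error handling omitted; the update/append logic from the question goes where the comment is):
with open(collectionPath, "r") as f:  # read without truncating
    lines = f.readlines()
# ... find and update, or append, the matching card line in `lines` ...
with open(collectionPath, "w") as f:  # now rewrite the whole file
    f.writelines(lines)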
|
.txt file opened in python won't iterate properly
|
The following contains an abridged version of the code for a text card game I am trying to run. It should get a random string for a card from a random line in "cards.txt", and add it to a user's collection at "user.txt" (user would be the name of the user). A sample line from "users.txt" should look like:
X* NameOfCard
If "user.txt" already contains an entry for a card, it changes the number before the name by 1. If "user.txt" had:
1* Hyper Dragon
then got another Hyper Dragon, the line would look like:
2* Hyper Dragon
If there is no version already in there, it should append a newline that says:
1* NameOfCard
The code however, is flawed. No matter what, it will always change the contents of "users.txt" to:
1* NameOfCard
(followed by 3 blank lines)
I believe the issue to lie in the marked for loop in the following code:
from random import choice
def check(e, c):
if (c in e):
return True
else:
return False
username = input("What is the username?: ")
collectionPath = f"collections\\{username}.txt"
while True:
with open("cards.txt", "r") as cards:
card_drew = f"{choice(cards.readlines())}\n"
print("Card drawn: "+card_drew)
with open(collectionPath, "w+") as file:
copyowned = False
print("Looking for card")
currentline = 0
for line in file:
# this is the marked for loop.
print("test")
print("checking "+line)
currentline += 1
if (check(card_drew, line)):
print("Found card!")
copyowned = True
strnumof = ""
for i in line:
if (i.isdigit()):
strnumof = strnumof+i
numof = int(strnumof)+1
line = (f"{numof}* {card_drew}")
print("Card added, 2nd+ copy")
if (not copyowned):
with open(collectionPath, "a") as file:
file.write(f"1* {card_drew}\n")
print("Card Added, 1st copy")
input(f"{username} drew a(n) {card_drew}")
When I run it, the for loop acts as if it's not there. It won't even run a print function, though an error message never appears. After adding try and except statements, the loop still doesn't raise an error. I have no clue why it's doing this.
Some help would be greatly appreciated.
|
[
"open(collectionPath, \"w+\") Opens a file in read and write mode. It creates a new file if it does not exist, if it exists, it erases the contents of the file and the file pointer starts from the beginning.\nSo essentially you are erasing the contents of your file, and thus cannot read anything from it. You probably wish to open it in read mode instead: open(collectionPath, \"r\")\nMaybe this diagram can be useful for you: Difference between modes a, a+, w, w+, and r+ in built-in open function?\n"
] |
[
1
] |
[] |
[] |
[
"file",
"for_loop",
"python",
"txt"
] |
stackoverflow_0074556826_file_for_loop_python_txt.txt
|
Q:
How to convert number to words
I'm a beginner; I have homework that requires the user to input a number and convert it to words. For example:
15342
to
one five three four two
This is my code, but it only works with a single digit:
def convert_text():
arr = ['zero','one','two','three','four','five','six','seven','eight','nine']
word = arr[n]
return word
n =int(input())
print(convert_text())
I am not allowed to use the num2word library and dictionary.
A:
Keep the value you pass as a string; then you can iterate over its chars
arr = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine']
def convert_text(value):
result = []
for char in value:
result.append(arr[int(char)])
return " ".join(result)
print(convert_text("1625")) # one six two five
print(convert_text("98322")) # nine eight three two two
n = input()
print(convert_text(n))
Or just with generator syntax
def convert_text(value):
return " ".join(arr[int(char)] for char in value)
A:
You can use this code; hope it helps:
def convert_text(n):
word = ""
arr = ['zero','one','two','three','four','five','six','seven','eight','nine']
''' Convert to string n from input for iterate over each single unit'''
for value in str(n):
''' Convert back unit of string into number '''
num = int(value);
''' Add new word to previous words '''
word = word + " " + arr[num]
return word
n = int(input())
print(convert_text(n))
|
How to convert number to words
|
I'm a beginner; I have homework that requires the user to input a number and convert it to words. For example:
15342
to
one five three four two
This is my code, but it only works with a single digit:
def convert_text():
arr = ['zero','one','two','three','four','five','six','seven','eight','nine']
word = arr[n]
return word
n =int(input())
print(convert_text())
I am not allowed to use the num2word library and dictionary.
|
[
"Keep the value you pas being a string, then you can iterate over its chars\narr = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine']\n\ndef convert_text(value):\n result = []\n for char in value:\n result.append(arr[int(char)])\n return \" \".join(result)\n\nprint(convert_text(\"1625\")) # one six two five\nprint(convert_text(\"98322\")) # nine eight three two two\n\nn = input()\nprint(convert_text(n))\n\nOr just with generator syntax\ndef convert_text(value):\n return \" \".join(arr[int(char)] for char in value)\n\n",
"You can use this code hope this can help you\ndef convert_text(n):\n word = \"\"\n arr = ['zero','one','two','three','four','five','six','seven','eight','nine'] \n ''' Convert to string n from input for iterate over each single unit'''\n for value in str(n):\n ''' Convert back unit of string into number '''\n num = int(value);\n ''' Add new word to previous words '''\n word = word + \" \" + arr[num]\n return word\nn = int(input())\nprint(convert_text(n))\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074556778_python.txt
|
Q:
Elasticsearch - How to create buckets by using information from two fields at the same time?
My documents are like this:
{'start': 0, 'stop': 3, 'val': 3}
{'start': 2, 'stop': 4, 'val': 1}
{'start': 5, 'stop': 6, 'val': 4}
We can imagine that each document occupies the x-coordinates from 'start' to 'stop',
and has a certain value 'val' ('start' < 'stop' is guaranteed).
The goal is to plot a line showing the sum of these values 'val' from all the
documents which occupy an x-coordinate:
this graph online
In reality there are many documents with many different 'start' and 'stop' coordinates. Speed is important, so:
Is this possible to do with at most a couple of Elasticsearch requests? How?
What I've tried:
With one elastic search request we can get the min_start, and max_stop coordinates. These will be the boundaries of x.
Then we divide the x-coordinates into N intervals, and in a loop for each interval we make an elastic search request: to filter out all the documents which lie completely outside of this interval, and do a sum aggregation of 'val'.
This approach takes too much time because there are N+1 requests, and if we want to have a line with higher precision, the time will increase linearly.
Code:
N = 300 # number of intervals along x
x = []
y = []
data = es.search(index='index_name',
body={
'aggs': {
'min_start': {'min': {'field': 'start'}},
'max_stop': {'max': {'field': 'stop'}}
}
})
min_x = data['aggregations']['min_start']['value']
max_x = data['aggregations']['max_stop']['value']
x_from = min_x
x_step = (max_x - min_x) / N
for _ in range(N):
x_to = x_from + x_step
data = es.search(
index='index_name',
body= {
'size': 0, # to not return any actual documents
'query': {
'bool': {
'should': [
# start is in the current x-interval:
{'bool': {'must': [
{'range': {'start': {'gte': x_from}}},
{'range': {'start': {'lte': x_to}}}
]}},
# stop is in the current x-interval:
{'bool': {'must': [
{'range': {'stop': {'gte': x_from}}},
{'range': {'stop': {'lte': x_to}}}
]}},
# current x-interval is inside start--stop
{'bool': {'must': [
{'range': {'start': {'lte': x_from}}},
{'range': {'stop': {'gte': x_to}}}
]}}
],
'minimum_should_match': 1 # at least 1 of these 3 conditions should match
}
},
'aggs': {
'vals_sum': {'sum': {'field': 'val'}}
}
}
)
# Append info to the lists:
x.append(x_from)
y.append(data['aggregations']['vals_sum']['value'])
# Next x-interval:
x_from = x_to
from matplotlib import pyplot as plt
plt.plot(x, y)
A:
The right way to do this in one single query is to use the range field type (available since 5.2) instead of using two fields start and stop and reimplementing the same logic. Like this:
PUT test
{
"mappings": {
"properties": {
"range": {
"type": "integer_range"
},
"val": {
"type":"integer"
}
}
}
}
Your documents would look like this:
{
"range" : {
"gte" : 0,
"lt" : 3
},
"val" : 3
}
And then the query would simply leverage an histogram aggregation like this:
POST test/_search
{
"size": 0,
"aggs": {
"histo": {
"histogram": {
"field": "range",
"interval": 1
},
"aggs": {
"total": {
"sum": {
"field": "val"
}
}
}
}
}
}
And the results are as expected: 3, 3, 4, 1, 0, 4
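Translated back to the Python client used in the question, the whole computation becomes a single request (index and field names follow the mapping above):
data = es.search(index='test', body={
    'size': 0,
    'aggs': {
        'histo': {
            'histogram': {'field': 'range', 'interval': 1},
            'aggs': {'vals_sum': {'sum': {'field': 'val'}}}
        }
    }
})
buckets = data['aggregations']['histo']['buckets']
x = [b['key'] for b in buckets]
y = [b['vals_sum']['value'] for b in buckets]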
|
Elasticsearch - How to create buckets by using information from two fields at the same time?
|
My documents are like this:
{'start': 0, 'stop': 3, 'val': 3}
{'start': 2, 'stop': 4, 'val': 1}
{'start': 5, 'stop': 6, 'val': 4}
We can imagine that each document occupies the x-coordinates from 'start' to 'stop',
and has a certain value 'val' ('start' < 'stop' is guaranteed).
The goal is to plot a line showing the sum of these values 'val' from all the
documents which occupy an x-coordinate:
this graph online
In reality there are many documents with many different 'start' and 'stop' coordinates. Speed is important, so:
Is this possible to do with at most a couple of Elasticsearch requests? How?
What I've tried:
With one elastic search request we can get the min_start, and max_stop coordinates. These will be the boundaries of x.
Then we divide the x-coordinates into N intervals, and in a loop for each interval we make an elastic search request: to filter out all the documents which lie completely outside of this interval, and do a sum aggregation of 'val'.
This approach takes too much time because there are N+1 requests, and if we want to have a line with higher precision, the time will increase linearly.
Code:
N = 300 # number of intervals along x
x = []
y = []
data = es.search(index='index_name',
body={
'aggs': {
'min_start': {'min': {'field': 'start'}},
'max_stop': {'max': {'field': 'stop'}}
}
})
min_x = data['aggregations']['min_start']['value']
max_x = data['aggregations']['max_stop']['value']
x_from = min_x
x_step = (max_x - min_x) / N
for _ in range(N):
x_to = x_from + x_step
data = es.search(
index='index_name',
body= {
'size': 0, # to not return any actual documents
'query': {
'bool': {
'should': [
# start is in the current x-interval:
{'bool': {'must': [
{'range': {'start': {'gte': x_from}}},
{'range': {'start': {'lte': x_to}}}
]}},
# stop is in the current x-interval:
{'bool': {'must': [
{'range': {'stop': {'gte': x_from}}},
{'range': {'stop': {'lte': x_to}}}
]}},
# current x-interval is inside start--stop
{'bool': {'must': [
{'range': {'start': {'lte': x_from}}},
{'range': {'stop': {'gte': x_to}}}
]}}
],
'minimum_should_match': 1 # at least 1 of these 3 conditions should match
}
},
'aggs': {
'vals_sum': {'sum': {'field': 'val'}}
}
}
)
# Append info to the lists:
x.append(x_from)
y.append(data['aggregations']['vals_sum']['value'])
# Next x-interval:
x_from = x_to
from matplotlib import pyplot as plt
plt.plot(x, y)
|
[
"The right way to do this in one single query is to use the range field type (available since 5.2) instead of using two fields start and stop and reimplementing the same logic. Like this:\nPUT test \n{\n \"mappings\": {\n \"properties\": {\n \"range\": {\n \"type\": \"integer_range\"\n },\n \"val\": {\n \"type\":\"integer\"\n }\n }\n }\n}\n\nYour documents would look like this:\n {\n \"range\" : {\n \"gte\" : 0,\n \"lt\" : 3\n },\n \"val\" : 3\n }\n\nAnd then the query would simply leverage an histogram aggregation like this:\nPOST test/_search \n{\n \"size\": 0,\n \"aggs\": {\n \"histo\": {\n \"histogram\": {\n \"field\": \"range\",\n \"interval\": 1\n },\n \"aggs\": {\n \"total\": {\n \"sum\": {\n \"field\": \"val\"\n }\n }\n }\n }\n }\n}\n\nAnd the results are as expected: 3, 3, 4, 1, 0, 4\n"
] |
[
1
] |
[] |
[] |
[
"elasticsearch",
"python"
] |
stackoverflow_0074554278_elasticsearch_python.txt
|
Q:
Forwarding FastAPI requests to another server
I have a FastAPI application for testing/development purposes. What I want is that any request that gets to my app will automatically be sent, as is, to another app on another server, with exactly the same parameters and same endpoint. This is not a redirect, because I still want the app to process the request and return values as usual. I just want to initiate a similar request to a different version of the app on a different server, without waiting for the answer from the other server, so that the other app gets the request as if the original request was sent to it.
How can I achieve that? (below is a sample code that i use for handling the request)
@app.post("/my_endpoint/some_parameters")
def process_request(
params: MyParamsClass,
pwd: str = Depends(authenticate),
):
# send the same request to http://my_other_url/my_endpoint/
return_value = process_the_request(params)
return return_value.as_json()
A:
You can use the AsyncClient() from the httpx library, as described in this answer, as well as this and this answer (have a look at those for more details on the approach). You can spawn a Client in the startup event handler, store it on the app instanceβas described here, as well as here and hereβand reuse it every time you need it. You can explicitly close the Client once you are done with it, using the shutdown event handler.
Working Example:
Main Server
When building the request that is about to be forwarded to the other server, the example below uses content=request.stream(), which provides an async iterator, so that if the client sends a request with large body (for instance, large files), the main server would not have to load the entire body in its memory before forwarding the request, as it would happen if you used content=await request.body().
You can add multiple routes in the same way the /upload one has been defined below, specifying the path, as well as the HTTP method for the endpoint. Note that the /upload route below uses Starlette's path convertor to capture arbitrary paths, as described here and here. You could also specify the exact path parameters if you wish, but this is a more convenient way if there are too many of them. Regardless, the path will be evaluated against the endpoint in the other server below, where you can explicitly specify the path parameters.
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from starlette.background import BackgroundTask
import httpx
app = FastAPI()
@app.on_event('startup')
async def startup_event():
client = httpx.AsyncClient(base_url='http://127.0.0.1:8001/') # this is the other server
app.state.client = client
@app.on_event('shutdown')
async def shutdown_event():
client = app.state.client
await client.aclose()
async def _reverse_proxy(request: Request):
client = request.app.state.client
url = httpx.URL(path=request.url.path, query=request.url.query.encode('utf-8'))
req = client.build_request(
request.method, url, headers=request.headers.raw, content=request.stream()
)
r = await client.send(req, stream=True)
return StreamingResponse(
r.aiter_raw(),
status_code=r.status_code,
headers=r.headers,
background=BackgroundTask(r.aclose)
)
app.add_route('/upload/{path:path}', _reverse_proxy, ['POST'])
if __name__ == '__main__':
import uvicorn
uvicorn.run(app, host='0.0.0.0', port=8000)
Other server
Again, for simplicity, the Request object is used to read the body, but you can instead define UploadFile, Form and other parameters as usual. The below is listening on port 8001.
from fastapi import FastAPI, Request
app = FastAPI()
@app.post('/upload/{p1}/{p2}')
async def upload(p1: str, p2: str, q1: str, request: Request):
return {'p1': p1, 'p2': p2, 'q1': q1, 'body': await request.body()}
if __name__ == '__main__':
import uvicorn
uvicorn.run(app, host='0.0.0.0', port=8001)
Test
import httpx
url = 'http://127.0.0.1:8000/upload/hello/world'
files = {'file': open('file.txt', 'rb')}
params = {'q1': 'This is a query param'}
r = httpx.post(url, params=params, files=files)
print(r.content)
|
Forwarding FastAPI requests to another server
|
I have a FastAPI application for testing/development purposes. What I want is that any request that gets to my app will automatically be sent, as is, to another app on another server, with exactly the same parameters and same endpoint. This is not a redirect, because I still want the app to process the request and return values as usual. I just want to initiate a similar request to a different version of the app on a different server, without waiting for the answer from the other server, so that the other app gets the request as if the original request was sent to it.
How can I achieve that? (Below is sample code that I use for handling the request.)
@app.post("/my_endpoint/some_parameters")
def process_request(
params: MyParamsClass,
pwd: str = Depends(authenticate),
):
# send the same request to http://my_other_url/my_endpoint/
return_value = process_the_request(params)
return return_value.as_json()
|
[
"You can use the AsyncClient() from the httpx library, as described in this answer, as well as this and this answer (have a look at those for more details on the approach). You can spawn a Client in the startup event handler, store it on the app instanceβas described here, as well as here and hereβand reuse it every time you need it. You can explicitly close the Client once you are done with it, using the shutdown event handler.\nWorking Example:\nMain Server\nWhen building the request that is about to be forwarded to the other server, the example below uses content=request.stream(), which provides an async iterator, so that if the client sends a request with large body (for instance, large files), the main server would not have to load the entire body in its memory before forwarding the request, as it would happen if you used content=await request.body().\nYou can add multiple routes in the same way the /upload one has been defined below, specifying the path, as well as the HTTP method for the endpoint. Note that the /upload route below uses Starlette's path convertor to capture arbitrary paths, as described here and here. You could also specify the exact path parameters if you wish, but this is a more convenient way if there are too many of them. Regardless, the path will be evaluated against the endpoint in the other server below, where you can explicitly specify the path parameters.\nfrom fastapi import FastAPI, Request\nfrom fastapi.responses import StreamingResponse\nfrom starlette.background import BackgroundTask\nimport httpx\n\napp = FastAPI()\n\n@app.on_event('startup')\nasync def startup_event():\n client = httpx.AsyncClient(base_url='http://127.0.0.1:8001/') # this is the other server\n app.state.client = client\n\n\n@app.on_event('shutdown')\nasync def shutdown_event():\n client = app.state.client\n await client.aclose()\n\n\nasync def _reverse_proxy(request: Request):\n client = request.app.state.client\n url = httpx.URL(path=request.url.path, query=request.url.query.encode('utf-8'))\n req = client.build_request(\n request.method, url, headers=request.headers.raw, content=request.stream()\n )\n r = await client.send(req, stream=True)\n return StreamingResponse(\n r.aiter_raw(),\n status_code=r.status_code,\n headers=r.headers,\n background=BackgroundTask(r.aclose)\n )\n\n\napp.add_route('/upload/{path:path}', _reverse_proxy, ['POST'])\n\n\nif __name__ == '__main__':\n import uvicorn\n uvicorn.run(app, host='0.0.0.0', port=8000)\n\nOther server\nAgain, for simplicity, the Request object is used to read the body, but you can isntead define UploadFile, Form and other parameters as usual. The below is listenning on port 8001.\nfrom fastapi import FastAPI, Request\n\napp = FastAPI()\n\n@app.post('/upload/{p1}/{p2}')\nasync def upload(p1: str, p2: str, q1: str, request: Request):\n return {'p1': p1, 'p2': p2, 'q1': q1, 'body': await request.body()}\n \n \nif __name__ == '__main__':\n import uvicorn\n uvicorn.run(app, host='0.0.0.0', port=8001)\n\nTest\nimport httpx\n\nurl = 'http://127.0.0.1:8000/upload/hello/world'\nfiles = {'file': open('file.txt', 'rb')}\nparams = {'q1': 'This is a query param'}\nr = httpx.post(url, params=params, files=files)\nprint(r.content)\n\n"
] |
[
0
] |
[] |
[] |
[
"fastapi",
"forward",
"python",
"request",
"rest"
] |
stackoverflow_0074555102_fastapi_forward_python_request_rest.txt
|
Q:
Kernel keeps dying
What function do you use in a Jupyter notebook in place of quit()? The quit() function keeps killing my kernel, but it works perfectly in PyCharm and VS Code.
I wrote quit() under an if statement, and instead of ending the program when the condition is met, it kills my kernel.
|
Kernel keeps dying
|
What function do you use in a Jupyter notebook in place of quit()? The quit() function keeps killing my kernel, but it works perfectly in PyCharm and VS Code.
I wrote quit() under an if statement, and instead of ending the program when the condition is met, it kills my kernel.
|
[] |
[] |
[
"I think raising an error when your if statement is verified can be a good solution.\nFor example :\nraise KeyboardInterrupt\n\nHaving a look at https://docs.python.org/3/tutorial/errors.html may help you.\n"
] |
[
-1
] |
[
"function",
"if_statement",
"jupyter",
"python"
] |
stackoverflow_0074556816_function_if_statement_jupyter_python.txt
|
Q:
How do I combine one input to a list of another input in order?
Say I had an input of:
john bob alex liam # names
15 17 16 19 # age
70 92 70 100 # iq
How do I make it so that john is assigned to age 15 and iq of 70, bob is assigned to age 17 and iq of 92, alex is assigned to age 16 and iq of 70, and liam is assigned to age 19 and iq of 100?
Right now I have:
names = input().split()
From there, I know I have to make two more variables for age and iq and assign them to inputs as well, but how do I assign those numbers to the names in the same order?
A:
We can form 3 lists and then zip them together:
names = "john bob alex liam"
ages = "15 17 16 19"
iq = "70 92 70 100"
list_a = names.split()
list_b = ages.split()
list_c = iq.split()
zipped = zip(list_a, list_b, list_c)
zipped_list = list(zipped)
print(zipped_list)
This prints:
[('john', '15', '70'), ('bob', '17', '92'), ('alex', '16', '70'), ('liam', '19', '100')]
A:
What is the expected output?
Assuming a dictionary, that is the canonical way to link key:values, you can use a dictionary comprehension:
text = '''john bob alex liam # names
15 17 16 19 # age
70 92 70 100 # iq'''
d = {key: [age, iq] for key, age, iq in
zip(*(s.split() for s in
(s.split(' #', 1)[0] for s in
text.split('\n'))))}
Output:
{'john': ['15', '70'],
'bob': ['17', '92'],
'alex': ['16', '70'],
'liam': ['19', '100']}
NB. if you want integers, use key: [int(age), int(iq)]
If you want tuples:
out = list(zip(*(s.split() for s in
(s.split(' #', 1)[0] for s in text.split('\n')))))
Output:
[('john', '15', '70'),
('bob', '17', '92'),
('alex', '16', '70'),
('liam', '19', '100')]
A:
Only if you have the same amount of each type (e.g. 4 names, 4 ages and 4 iqs) can you store all the names in a list, all ages in another, and all iqs in another.
Then, assign the first element of the names list with the first element of the ages list, and with the first element of the iqs list and so on
names = ["John", "Bob", "Alex", "Liam"]
ages = [15, 17, 16, 19]
iqs = [70, 92, 70, 100]
Then, in order to, for example, print this, you could
print(names[0], ages[0], iqs[0])
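If you want to stay close to the question's input() flow, here is a minimal sketch, assuming the names, ages and iqs arrive on three separate whitespace-separated input lines:
names = input().split()
ages = [int(x) for x in input().split()]
iqs = [int(x) for x in input().split()]

people = {name: {'age': age, 'iq': iq}
          for name, age, iq in zip(names, ages, iqs)}
print(people)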
|
How do I combine one input to a list of another input in order?
|
Say I had an input of:
john bob alex liam # names
15 17 16 19 # age
70 92 70 100 # iq
How do I make it so that john is assigned to age 15 and iq of 70, bob is assigned to age 17 and iq of 92, alex is assigned to age 16 and iq of 70, and liam is assigned to age 19 and iq of 100?
Right now I have:
names = input().split()
From there, I know I have to make two more variables for age and iq and assign them to inputs as well, but how do I assign those numbers to the names in the same order?
|
[
"We can form 3 lists and then zip them together:\nnames = \"john bob alex liam\"\nages = \"15 17 16 19\"\niq = \"70 92 70 100\"\nlist_a = names.split()\nlist_b = ages.split()\nlist_c = iq.split()\nzipped = zip(list_a, list_b, list_c)\nzipped_list = list(zipped) \n\nprint(zipped_list)\n\nThis prints:\n[('john', '15', '70'), ('bob', '17', '92'), ('alex', '16', '70'), ('liam', '19', '100')]\n\n",
"What is the expected output?\nAssuming a dictionary, that is the canonical way to link key:values, you can use a dictionary comprehension:\ntext = '''john bob alex liam # names\n15 17 16 19 # age\n70 92 70 100 # iq'''\n\nd = {key: [age, iq] for key, age, iq in \n zip(*(s.split() for s in \n (s.split(' #', 1)[0] for s in \n text.split('\\n'))))}\n\nOutput:\n{'john': ['15', '70'],\n 'bob': ['17', '92'],\n 'alex': ['16', '70'],\n 'liam': ['19', '100']}\n\nNB. if you want integers, use key: [int(age), int(iq)]\nIf you want tuples:\nout = list(zip(*(s.split() for s in \n (s.split(' #', 1)[0] for s in text.split('\\n')))))\n\nOutput:\n[('john', '15', '70'),\n ('bob', '17', '92'),\n ('alex', '16', '70'),\n ('liam', '19', '100')]\n\n",
"Only if you have the same amount of each type (5 names, 5 ages and 5 iqs) you could store all the names in a list, all ages in another, and all iqs in another.\nThen, assign the first element of the names list with the first element of the ages list, and with the first element of the iqs list and so on\nnames = [\"John\", \"Bob\", \"Alex\", \"Liam\"]\nages = [15, 17, 16, 19]\niqs = [70, 92, 70, 100]\n\nThen, in order to, for example, print this, you could\nprint(names[0], ages[0], iqs[0])\n\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074556969_python_python_3.x.txt
|
Q:
A certain regular expression that should match does not match in Python
I am working with determining if certain regular expressions apply to some specified text, and for this I wrote a short Python script. I am having trouble with a certain regular expression because I tested it in an app on my iPhone designed to test regular expressions on specified text, and the regular expression matches the text in the app. But when I try the expression on the text in a Python script, there is no match. I am pasting below a short Python script that tests the regular expression on the desired text and a photo of the regular expression app that shows that the regular expression does match the text. What I would like, if possible, is to get an explanation as to why the regular expression does not match the text in Python. Any help would be greatly appreciated. Thanks so much.
# -*- coding:utf-8 -*-
import regex as re
expression = r'((?<=(^|\n)[\w [[:punct:]]]{1,100})(?<!Chapter[ \t]{1,100}[0-9]{1,100})(?<!\w{2}β[\w [[:punct:]]]{1,100})β(?![a-z]))'
text = r'Section 1βFrom Strength to Weakness'
replacedText, numMatches = re.subn(r'(' + expression + r')', r'<mark>\1</mark>', text)
print('Number of matches: ' + str(numMatches) + '\n' + replacedText)
A:
What I would like, if possible, is to get an explanation as to why the regular expression does not match the text in Python
The problem is that the [[:punct:]] character class appears inside a character class. You need to stick to the brackets that [[:punct:]] already has, and add the other characters inside that notation. In other words, regard that [[:punct:]] notation as a character class notation, with [:punct:] appearing in it. There should not be another pair of brackets.
So write [\w [:punct:]] instead of [\w [[:punct:]]], ...etc.
Here is the correction:
expression = r'((?<=(^|\n)[\w [:punct:]]{1,100})(?<!Chapter[ \t]{1,100}[0-9]{1,100})(?<!\w{2}β[\w [:punct:]]{1,100})β(?![a-z]))'
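As a quick sanity check of the bracket rule (this assumes the third-party regex module from the question, which supports POSIX classes, unlike the standard re module):
import regex as re

# [:punct:] sits inside a single pair of class brackets, optionally with other members
print(re.findall(r'[[:punct:]]', 'a,b.c!'))     # [',', '.', '!']
print(re.findall(r'[\w [:punct:]]+', 'a b,c'))  # ['a b,c']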
|
A certain regular expression that should match does not match in Python
|
I am working with determining if certain regular expressions apply to some specified text, and for this I wrote a short Python script. I am having trouble with a certain regular expression because I tested it in an app on my iPhone designed to test regular expressions on specified text, and the regular expression matches the text in the app. But when I try the expression on the text in a Python script, there is no match. I am pasting below a short Python script that tests the regular expression on the desired text and a photo of the regular expression app that shows that the regular expression does match the text. What I would like, if possible, is to get an explanation as to why the regular expression does not match the text in Python. Any help would be greatly appreciated. Thanks so much.
# -*- coding:utf-8 -*-
import regex as re
expression = r'((?<=(^|\n)[\w [[:punct:]]]{1,100})(?<!Chapter[ \t]{1,100}[0-9]{1,100})(?<!\w{2}β[\w [[:punct:]]]{1,100})β(?![a-z]))'
text = r'Section 1βFrom Strength to Weakness'
replacedText, numMatches = re.subn(r'(' + expression + r')', r'<mark>\1</mark>', text)
print('Number of matches: ' + str(numMatches) + '\n' + replacedText)
|
[
"\nWhat I would like, if possible, is to get an explanation as to why the regular expression does not match the text in Python\n\nThe problem is that the [[:punct:]] character class appears inside a character class. You need to stick to the brackets that [[:punct:]] already has, and add the other characters inside that notation. In other words, regard that [[:punct:]] notation as a character class notation, with [:punct:] appearing in it. There should not be another pair of brackets.\nSo write [\\w [:punct:]] instead of [\\w [[:punct:]]], ...etc.\nHere is the correction:\nexpression = r'((?<=(^|\\n)[\\w [:punct:]]{1,100})(?<!Chapter[ \\t]{1,100}[0-9]{1,100})(?<!\\w{2}β[\\w [:punct:]]{1,100})β(?![a-z]))'\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0074554306_python_regex.txt
|
Q:
Memory Error when parsing a large number of files
I am parsing 6k csv files to merge them into one. I need this for their joint analysis and training of the ML model. There are too many files and my computer ran out of memory by simply concatenating them.
s = ""
for f in csv_files:
# read the csv file
#df = df.append(pd.read_csv(f))
s = s + open(f, mode ='r').read()[32:]
print(f)
file = open('bigdata.csv', mode = 'w')
file.write(s)
file.close()
I need a way to create a single dataset from all the files (60 GB) for training my ML model.
A:
I believe this may help:
file = open('bigdata.csv', mode = 'w')
for f in csv_files:
s = open(f, mode='r').read()[32:]
file.write(s)
file.close()
In contrast, your original code needs at least as much memory as the size of the output file, which is 60 GB and may well be larger than the memory of your computer.
However, if a single input file is itself too large for your memory, then this method will also fail, in which case you may need to read each of the CSV files line by line and write the lines into the output file. I didn't write that method because I'm not sure about your magic number 32.
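For reference, a line-by-line sketch, under the assumption that the 32 in the question is a fixed per-file header that should be skipped:
with open('bigdata.csv', mode='w') as out:
    for f in csv_files:
        with open(f, mode='r') as src:
            src.read(32)       # skip the 32-character header, as in the question
            for line in src:   # stream one line at a time, so memory use stays small
                out.write(line)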
|
Memory Error when parsing a large number of files
|
I am parsing 6k csv files to merge them into one. I need this for their joint analysis and training of the ML model. There are too many files and my computer ran out of memory by simply concatenating them.
s = ""
for f in csv_files:
# read the csv file
#df = df.append(pd.read_csv(f))
s = s + open(f, mode ='r').read()[32:]
print(f)
file = open('bigdata.csv', mode = 'w')
file.write(s)
file.close()
I need a way to create a single dataset from all the files (60 GB) for training my ML model.
|
[
"I believe this may help:\nfile = open('bigdata.csv', mode = 'w')\n\nfor f in csv_files:\n s = open(f, mode='r').read()[32:]\n file.write(s)\n\nfile.close()\n\nIn contrast, your origin code need at least the same memory as the size of the output file, which is 60gb and maybe larger than the memory of your computer.\nHowever, if there is a single input file that can run out of your memory, then this method will also fail, in which case you may need to read each of csv files line by line and write them into the output file. I didn't write that method because I'm not sure about your magic number 32.\n"
] |
[
0
] |
[] |
[] |
[
"bigdata",
"data_processing",
"machine_learning",
"python"
] |
stackoverflow_0074556994_bigdata_data_processing_machine_learning_python.txt
|
Q:
Why can't I "deactivate" pyenv / virtualenv? How to "fix" installation
I am on a freshly installed Ubuntu 16.04 and in view of developing with recent versions of pandas I installed Python 3.6.0 using a virtual environment.
A reason for choosing 3.6.0 was that I read somewhere that this version of Python could deal with virtual environments natively, i.e. without installing anything else [anyway, to install 3.6.0 itself without replacing the system-wide Python, which would have been almost surely wrong, I actually had to provide a virtual environment first].
I did it optimistically thinking that everything would go in the right direction (including my knowledge) and so, without caring too much about the differences between: pyenv, pyenv-virtualenv, pyvenv, etc...
So I don't remember well what I installed, anyway I used only apt and pip/pip3, trying to confine changes within the virtualenv as soon as it became available.
I loosely followed this tutorial except (maybe) that I didn't create a directory for the virtualenvs (the $ mkdir ~/.virtualenvs command).
Now my user is stuck within the (general) environment and I can't get out.
Situation
Right from the start after the login, without activating any environment, Bash gives me a modified prompt, and it seems I can't get the usual prompt by deactivate, source deactivate, etc...
(general) $ deactivate
pyenv-virtualenv: deactivate must be sourced. Run 'source deactivate' instead of 'deactivate'
(general) $ source deactivate
pyenv-virtualenv: deactivate 3.6.0/envs/general
(general) $ pyvenv deactivate
pyenv: pyvenv: command not found
The `pyvenv` command exists in these Python versions: 3.6.0
(general) $
You see that the (general) prefix remains in the prompt.
I have also had symptoms that this pyenv/virtualenv setup is affecting system activities (e.g. while trying to install hplip from the command line, the installer got confused when trying to recognize my OS, and ultimately failed - I had to do it from another user, and then it worked), so I need to revert this to a clean state.
NB. I am not that sure that my installation is really that wrong, maybe it's just me issuing the wrong commands or some common pitfall I have incurred in.
The questions
How can I deactivate the (general) environment?
How can I tell if my installation is wrong, and how can I fix it?
How can I safely revert from this installation and get to a more proper one?
I have already read this question but it wasn't so tied to my case
This one seems more related, in that it highlights that
python venv should be preferred;
it is available on Python >=3.3;
Ubuntu Xenial doesn't have it already installed by default;
it gives package names to install it.
But still I am unsure of what to uninstall before installing them in case.
More info
Here are the outputs of TAB completions, commands, and a directory listing, to show a bit of which environment I am in:
(general) $ cat .py <TAB>
.pyenv/ .python_history
(general) $ cat .pyenv/ <TAB>
.agignore completions/ LICENSE shims/ versions/
bin/ CONDUCT.md Makefile src/ .vimrc
cache/ .git/ plugins/ test/
CHANGELOG.md .gitignore pyenv.d/ .travis.yml
COMMANDS.md libexec/ README.md version
(general) $ cat .pyenv/version
general
(general) $ ls -l ~/.pyenv/versions
totale 12
drwxrwxr-x 3 myuser myuser 4096 apr 20 13:50 ./
drwxrwxr-x 13 myuser myuser 4096 apr 20 13:50 ../
drwxr-xr-x 7 myuser myuser 4096 apr 20 13:50 3.6.0/
lrwxrwxrwx 1 myuser myuser 48 apr 20 13:50 general -> /home/myuser/.pyenv/versions/3.6.0/envs/general/
I tried listing what I installed, but I'm afraid that with pip3 list the answer I get is for the env where I am stuck, and that this is masking anything that I installed before getting to it.
May it just be that I mistakenly installed pyenv from my home directory? Would it be enough to delete/move the .pyenv directory? I am not confident enough to do it without asking.
A:
It was deactivated when I used this command: pyenv shell .
A:
EDIT-[22/11/22]---> BELOW ANSWER IS FROM 2018 - maybe i never got to DEACTIVATE and did only manage to UN-INSTALL
The way to DEACTIVATE the default PyEnv General is --pyenv uninstall 3.6.0/envs/general
pyenv-virtualenv: remove /home/dhankar/.pyenv/versions/3.6.0/envs/general? y
dhankar@dhankar-VPCEB44EN:~/.pyenv$
to doubly ensure that the PyENV has been removed --
dhankar@dhankar-VPCEB44EN:~/.pyenv$ pyenv versions
pyenv: version `general' is not installed (set by /home/dhankar/.pyenv/version)
system
3.6.0
3.6.5
dhankar@dhankar-VPCEB44EN:~/.pyenv$
Also so that its documented - am sharing the terminal output of the same command , earlier before the Un-Install.
(general) dhankar@dhankar-VPCEB44EN:~/.pyenv$ pyenv versions
system
3.6.0
3.6.0/envs/general
3.6.5
* general (set by /home/dhankar/.pyenv/version)
(general) dhankar@dhankar-VPCEB44EN:~/.pyenv$
A:
Your mental model of what these modules do is wrong.
pyenv doesn't really have a "deactivate" feature. It lets you choose between a number of independent Python installations, but you would basically always be using one of them - unless you get rid of pyenv entirely. pyenv shell . effectively deactivates pyenv for the time being in the current shell instance, and if you want to remove it from your configuration entirely, you want to remove the pyenv commands from your shell's login files: they would typically look something like
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
where the latter is an optional add-on which you might not have installed.
You can mix pyenv with regular virtual environments; for example, you could run pyenv shell 3.8.9 and then with that active run python -m venv thisenv which will create thisenv in the current directory with a link to the selected Python version; then running source thisenv/bin/activate will activate the virtual environment, so that python or python3 runs the Python version you had active in pyenv when you created the environment, and you can subsequently deactivate that to return to whatever you had before.
As a quick hack, you can also run $(pyenv root)/versions/3.8.9/bin/python -m venv thisenv to create the environment without touching the pyenv shell setting.
The pyenv-virtualenv add-on plugin has its own set of features, with a slightly different set of use cases. You create those with pyenv virtualenv, activate with pyenv activate, and deactivate with pyenv deactivate. These environments are stored alongside the pyenv Python versions in a central place in your home directory, not in the current directory.
|
Why can't I "deactivate" pyenv / virtualenv? How to "fix" installation
|
I am on a freshly installed Ubuntu 16.04 and in view of developing with recent versions of pandas I installed Python 3.6.0 using a virtual environment.
A reason for choosing 3.6.0 was that I read somewhere that this version of Python could deal with virtual environments natively, i.e. without installing anything else [anyway, to install 3.6.0 itself without replacing the system-wide Python, which would have been almost surely wrong, I actually had to provide a virtual environment first].
I did it optimistically thinking that everything would go in the right direction (including my knowledge) and so, without caring too much about the differences between: pyenv, pyenv-virtualenv, pyvenv, etc...
So I don't remember well what I installed, anyway I used only apt and pip/pip3, trying to confine changes within the virtualenv as soon as it became available.
I loosely followed this tutorial except (maybe) that I didn't create a directory for the virtualenvs (the $ mkdir ~/.virtualenvs command).
Now my user is stuck within the (general) environment and I can't get out.
Situation
Right from the start after the login, without activating any environment, Bash gives me a modified prompt, and it seems I can't get the usual prompt by deactivate, source deactivate, etc...
(general) $ deactivate
pyenv-virtualenv: deactivate must be sourced. Run 'source deactivate' instead of 'deactivate'
(general) $ source deactivate
pyenv-virtualenv: deactivate 3.6.0/envs/general
(general) $ pyvenv deactivate
pyenv: pyvenv: command not found
The `pyvenv` command exists in these Python versions: 3.6.0
(general) $
You see that the (general) prefix remains in the prompt.
I have also had symptoms that this pyenv/virtualenv setup is affecting system activities (e.g. while trying to install hplip from the command line, the installer got confused when trying to recognize my OS, and ultimately failed - I had to do it from another user, and then it worked), so I need to revert this to a clean state.
NB. I am not that sure that my installation is really that wrong, maybe it's just me issuing the wrong commands or some common pitfall I have incurred in.
The questions
How can I deactivate the (general) environment?
How can I tell if my installation is wrong, and how can I fix it?
How can I safely revert from this installation and get to a more proper one?
I have already read this question but it wasn't so tied to my case
This one seems more related, in that it highlights that
python venv should be preferred;
it is available on Python >=3.3;
Ubuntu Xenial doesn't have it already installed by default;
it gives package names to install it.
But still I am unsure of what to uninstall before installing them in case.
More info
Here are the outputs of TAB completions, commands, and a directory listing, to show a bit of which environment I am in:
(general) $ cat .py <TAB>
.pyenv/ .python_history
(general) $ cat .pyenv/ <TAB>
.agignore completions/ LICENSE shims/ versions/
bin/ CONDUCT.md Makefile src/ .vimrc
cache/ .git/ plugins/ test/
CHANGELOG.md .gitignore pyenv.d/ .travis.yml
COMMANDS.md libexec/ README.md version
(general) $ cat .pyenv/version
general
(general) $ ls -l ~/.pyenv/versions
totale 12
drwxrwxr-x 3 myuser myuser 4096 apr 20 13:50 ./
drwxrwxr-x 13 myuser myuser 4096 apr 20 13:50 ../
drwxr-xr-x 7 myuser myuser 4096 apr 20 13:50 3.6.0/
lrwxrwxrwx 1 myuser myuser 48 apr 20 13:50 general -> /home/myuser/.pyenv/versions/3.6.0/envs/general/
I tried listing what I installed, but I'm afraid that with pip3 list the answer I get is for the env where I am stuck, and that this is masking anything that I installed before getting to it.
May it just be that I mistakenly installed pyenv from my home directory? Would it be enough to delete/move the .pyenv directory? I am not confident enough to do it without asking.
|
[
"It was deactivated when I used this command: pyenv shell .\n",
"EDIT-[22/11/22]---> BELOW ANSWER IS FROM 2018 - maybe i never got to DEACTIVATE and did only manage to UN-INSTALL\nThe way to DEACTIVATE the default PyEnv General is --pyenv uninstall 3.6.0/envs/general\npyenv-virtualenv: remove /home/dhankar/.pyenv/versions/3.6.0/envs/general? y\ndhankar@dhankar-VPCEB44EN:~/.pyenv$\n\nto doubly ensure that the PyENV has been removed --\ndhankar@dhankar-VPCEB44EN:~/.pyenv$ pyenv versions\npyenv: version `general' is not installed (set by /home/dhankar/.pyenv/version)\n system\n 3.6.0\n 3.6.5\ndhankar@dhankar-VPCEB44EN:~/.pyenv$\n\nAlso so that its documented - am sharing the terminal output of the same command , earlier before the Un-Install.\n(general) dhankar@dhankar-VPCEB44EN:~/.pyenv$ pyenv versions\n system\n 3.6.0\n 3.6.0/envs/general\n 3.6.5\n* general (set by /home/dhankar/.pyenv/version)\n(general) dhankar@dhankar-VPCEB44EN:~/.pyenv$ \n\n",
"Your mental model of what these modules do is wrong.\npyenv doesn't really have a \"deactivate\" feature. It lets you choose between a number of independent Python installations, but you would basically always be using one of them - unless you get rid of pyenv entirely. pyenv shell . effectively deactivates pyenv for the time being in the current shell instance, and if you want to remove it from your configuration entirely, you want to remove the pyenv commands from your shell's login files: they would typically look something like\neval \"$(pyenv init -)\"\neval \"$(pyenv virtualenv-init -)\"\n\nwhere the latter is an optional add-on which you might not have installed.\nYou can mix pyenv with regular virtual environments; for example, you could run pyenv shell 3.8.9 and then with that active run python -m venv thisenv which will create thisenv in the current directory with a link to the selected Python version; then running source thisenv/bin/activate will activate the virtual environment, so that python or python3 runs the Python version you had active in pyenv when you created the environment, and you can subsequently deactivate that to return to whatever you had before.\nAs a quick hack, you can also run $(pyenv root)/versions/3.8.9/bin/python -m venv thisenv to create the environment without touching the pyenv shell setting.\nThe pyenv-virtualenv add-on plugin has its own set of features, with a slightly different set of use cases. You create those with pyenv virtualenv, activate with pyenv activate, and deactivate with pyenv deactivate. These environments are stored alongside the pyenv Python versions in a central place in your home directory, not in the current directory.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"pyenv",
"python",
"ubuntu",
"virtualenv"
] |
stackoverflow_0043935610_pyenv_python_ubuntu_virtualenv.txt
|
Q:
ValueError: Length of values does not match length of index | Pandas DataFrame.unique()
I am trying to get a new dataset, or change the value of the current dataset columns to their unique values.
Here is an example of what I am trying to get :
A B
-----
0| 1 1
1| 2 5
2| 1 5
3| 7 9
4| 7 9
5| 8 9
Wanted Result Not Wanted Result
A B A B
----- -----
0| 1 1 0| 1 1
1| 2 5 1| 2 5
2| 7 9 2|
3| 8 3| 7 9
4|
5| 8
I don't really care about the index but it seems to be the problem.
My code so far is pretty simple, I tried 2 approaches, 1 with a new dataFrame and one without.
#With New DataFrame
def UniqueResults(dataframe):
df = pd.DataFrame()
for col in dataframe:
S=pd.Series(dataframe[col].unique())
df[col]=S.values
return df
#Without new DataFrame
def UniqueResults(dataframe):
for col in dataframe:
dataframe[col]=dataframe[col].unique()
return dataframe
Both times, I get the error:
Length of Values does not match length of index
A:
The error comes up when you are trying to assign a list of numpy array of different length to a data frame, and it can be reproduced as follows:
A data frame of four rows:
df = pd.DataFrame({'A': [1,2,3,4]})
Now trying to assign a list/array of two elements to it:
df['B'] = [3,4] # or df['B'] = np.array([3,4])
Both errors out:
ValueError: Length of values does not match length of index
Because the data frame has four rows but the list and array has only two elements.
Work around Solution (use with caution): convert the list/array to a pandas Series, and then when you do assignment, missing index in the Series will be filled with NaN:
df['B'] = pd.Series([3,4])
df
# A B
#0 1 3.0
#1 2 4.0
#2 3 NaN # NaN because the value at index 2 and 3 doesn't exist in the Series
#3 4 NaN
For your specific problem, if you don't care about the index or the correspondence of values between columns, you can reset index for each column after dropping the duplicates:
df.apply(lambda col: col.drop_duplicates().reset_index(drop=True))
# A B
#0 1 1.0
#1 2 5.0
#2 7 9.0
#3 8 NaN
A:
One way to get around this issue is to keep the unique values in a list and use itertools.zip_longest to transpose the data and pass it into the DataFrame constructor:
from itertools import zip_longest
def UniqueResults(dataframe):
tmp = [dataframe[col].unique() for col in dataframe]
return pd.DataFrame(zip_longest(*tmp), columns=dataframe.columns)
out = UniqueResults(df)
Output:
A B
0 1 1.0
1 2 5.0
2 7 9.0
3 8 NaN
At least for small DataFrames, this seems to be faster (for example on OP's sample):
%timeit out = df.apply(lambda col: col.drop_duplicates().reset_index(drop=True))
1.27 ms Β± 50.9 Β΅s per loop (mean Β± std. dev. of 7 runs, 1000 loops each)
%timeit x = UniqueResults(df)
426 Β΅s Β± 24.9 Β΅s per loop (mean Β± std. dev. of 7 runs, 1000 loops each)
A:
Another simple solution is to make the solution suggested by OP into a working one. We just need to cast the unique values of each column into a pandas Series:
df1 = df.apply(lambda col: pd.Series(col.unique()))
df1
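For the sample frame in the question this should give the following (shorter columns are padded with NaN, which is why B becomes float):
   A    B
0  1  1.0
1  2  5.0
2  7  9.0
3  8  NaN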
|
ValueError: Length of values does not match length of index | Pandas DataFrame.unique()
|
I am trying to get a new dataset, or change the value of the current dataset columns to their unique values.
Here is an example of what I am trying to get :
A B
-----
0| 1 1
1| 2 5
2| 1 5
3| 7 9
4| 7 9
5| 8 9
Wanted Result Not Wanted Result
A B A B
----- -----
0| 1 1 0| 1 1
1| 2 5 1| 2 5
2| 7 9 2|
3| 8 3| 7 9
4|
5| 8
I don't really care about the index but it seems to be the problem.
My code so far is pretty simple, I tried 2 approaches, 1 with a new dataFrame and one without.
#With New DataFrame
def UniqueResults(dataframe):
df = pd.DataFrame()
for col in dataframe:
S=pd.Series(dataframe[col].unique())
df[col]=S.values
return df
#Without new DataFrame
def UniqueResults(dataframe):
for col in dataframe:
dataframe[col]=dataframe[col].unique()
return dataframe
Both times, I get the error:
Length of Values does not match length of index
|
[
"The error comes up when you are trying to assign a list of numpy array of different length to a data frame, and it can be reproduced as follows:\nA data frame of four rows:\ndf = pd.DataFrame({'A': [1,2,3,4]})\n\nNow trying to assign a list/array of two elements to it:\ndf['B'] = [3,4] # or df['B'] = np.array([3,4])\n\nBoth errors out:\n\nValueError: Length of values does not match length of index\n\nBecause the data frame has four rows but the list and array has only two elements.\nWork around Solution (use with caution): convert the list/array to a pandas Series, and then when you do assignment, missing index in the Series will be filled with NaN:\ndf['B'] = pd.Series([3,4])\n\ndf\n# A B\n#0 1 3.0\n#1 2 4.0\n#2 3 NaN # NaN because the value at index 2 and 3 doesn't exist in the Series\n#3 4 NaN\n\n\nFor your specific problem, if you don't care about the index or the correspondence of values between columns, you can reset index for each column after dropping the duplicates:\ndf.apply(lambda col: col.drop_duplicates().reset_index(drop=True))\n\n# A B\n#0 1 1.0\n#1 2 5.0\n#2 7 9.0\n#3 8 NaN\n\n",
"One way to get around this issue is to keep the unique values in a list and use itertools.zip_longest to transpose the data and pass it into the DataFrame constructor:\nfrom itertools import zip_longest\ndef UniqueResults(dataframe):\n tmp = [dataframe[col].unique() for col in dataframe]\n return pd.DataFrame(zip_longest(*tmp), columns=dataframe.columns)\n\nout = UniqueResults(df)\n\nOutput:\n A B\n0 1 1.0\n1 2 5.0\n2 7 9.0\n3 8 NaN\n\nAt least for small DataFrames, this seems to be faster (for example on OP's sample):\n%timeit out = df.apply(lambda col: col.drop_duplicates().reset_index(drop=True))\n1.27 ms Β± 50.9 Β΅s per loop (mean Β± std. dev. of 7 runs, 1000 loops each)\n\n%timeit x = UniqueResults(df)\n426 Β΅s Β± 24.9 Β΅s per loop (mean Β± std. dev. of 7 runs, 1000 loops each)\n\n",
"Another simple solution is to make the solution suggested by OP into a working one. We just need to cast the unique values of each column into a pandas Series:\ndf1 = df.apply(lambda col: pd.Series(col.unique()))\ndf1\n\n\n"
] |
[
125,
2,
0
] |
[] |
[] |
[
"dataframe",
"duplicates",
"pandas",
"python"
] |
stackoverflow_0042382263_dataframe_duplicates_pandas_python.txt
|
Q:
Different result when i switch positions of my two different functions
Note - I am using VSCode
Sample 1: In this example my function nextSquare() is executed, but aFunc() is not; that is, I get no output for my 2nd function.
def nextSquare():
i = 1
while True:
yield i*i
i += 1
for num in nextSquare():
if num<100:
print(num)
def aFunc():
print("This is inside our function")
print("This is outside of our funcion as a seperate entity")
aFunc()
Output:
Code/RoughWork.py
1
4
9
16
25
36
49
64
81
Sample 2: In this example my function nextSquare() is executed and aFunc() gives me output as well; all I did was shift aFunc() before nextSquare().
def aFunc():
print("This is inside our function")
print("This is outside of our funcion as a seperate entity")
aFunc()
def nextSquare():
i = 1
while True:
yield i*i
i += 1
for num in nextSquare():
if num<100:
print(num)
Output:
Code/RoughWork.py
This is outside of our funcion as a seperate entity
This is inside our function
1
4
9
16
25
36
49
64
81
So when I used Sample 1 in the above code block, I expected that both functions would be executed, but they weren't; by rearranging the functions' positions I got output. Why did the position of the definitions matter in this scenario? And when I try the same in a Jupyter notebook, the cell doesn't run at all for Sample 2.
A:
The for loop never ends, so nothing after it is executed. When num is more than 100 it stops printing, but it keeps looping. You need to stop the loop.
for num in nextSquare():
if num<100:
print(num)
else:
break
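Equivalently, if you prefer not to break by hand, itertools.takewhile stops consuming the infinite generator as soon as the condition fails (a minimal sketch):
from itertools import takewhile

for num in takewhile(lambda n: n < 100, nextSquare()):
    print(num)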
|
Different result when i switch positions of my two different functions
|
Note - I am using VSCode
Sample 1: In this example my function nextSquare() is executed, but aFunc() is not; that is, I get no output for my 2nd function.
def nextSquare():
i = 1
while True:
yield i*i
i += 1
for num in nextSquare():
if num<100:
print(num)
def aFunc():
print("This is inside our function")
print("This is outside of our funcion as a seperate entity")
aFunc()
Output:
Code/RoughWork.py
1
4
9
16
25
36
49
64
81
Sample 2: In this example my function nextSquare() is executed and aFunc() gives me output as well; all I did was shift aFunc() before nextSquare().
def aFunc():
print("This is inside our function")
print("This is outside of our funcion as a seperate entity")
aFunc()
def nextSquare():
i = 1
while True:
yield i*i
i += 1
for num in nextSquare():
if num<100:
print(num)
Output:
Code/RoughWork.py
This is outside of our funcion as a seperate entity
This is inside our function
1
4
9
16
25
36
49
64
81
So when I used Sample 1 in the above code block, I expected that both functions would be executed, but they weren't; by rearranging the functions' positions I got output. Why did the position of the definitions matter in this scenario? And when I try the same in a Jupyter notebook, the cell doesn't run at all for Sample 2.
|
[
"The for loop never ends, so nothing after it is executed. When num is more than 100 it stops printing, but it keeps looping. You need to stop the loop.\nfor num in nextSquare():\n if num<100:\n print(num)\n else:\n break\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"python_3.11"
] |
stackoverflow_0074556876_python_python_3.11.txt
|
Q:
How to generate the csv file with the Number field but None value
I got a requirement to get data from a database and write it into a file in CSV format.
A further requirement is that the fields need to be separated by the comma character, string values need to be enclosed in double quote characters, and the other fields must not be.
But when writing them into the CSV, a number field with a null value is enclosed in double quote characters.
below is my test code.
import csv
results = [("aaa", 123.1,323.1),("bbb",None,2345)]
with open('test.csv','w', newline='', encoding='utf-8') as csvfile:
csvwriter = csv.writer(csvfile, quotechar='"',quoting=csv.QUOTE_NONNUMERIC)
csvwriter.writerows(results)
the result after exporting is below.
"aaa",123.1,323.1
"bbb","",2345
My question is how to get the result below.
"aaa",123.1,323.1
"bbb",,2345
A:
Setting quoting to csv.QUOTE_NONNUMERIC means to quote all non-numeric values, and since None (or '') is not a numeric type, its output gets quoted.
A workaround is to create a subclass of a numeric type and force its string conversion to be '', so that it passes csv's numeric type check and gets a non-quoted empty output:
import csv
class empty_number(int):
def __str__(self):
return ''
EmptyNumber = empty_number()
results = [("aaa", 123.1, 323.1), ("bbb", EmptyNumber, 2345)]
with open('test.csv','w', newline='', encoding='utf-8') as csvfile:
csvwriter = csv.writer(csvfile, quotechar='"',quoting=csv.QUOTE_NONNUMERIC)
csvwriter.writerows(results)
Demo: https://replit.com/@blhsing/BurlyForthrightDatawarehouse
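An alternative sketch is to format the fields yourself and turn off csv's own quoting entirely; this assumes no field ever contains the delimiter:
import csv

results = [("aaa", 123.1, 323.1), ("bbb", None, 2345)]
with open('test.csv', 'w', newline='', encoding='utf-8') as f:
    w = csv.writer(f, quoting=csv.QUOTE_NONE, quotechar=None)
    for row in results:
        # quote strings by hand, map None to an empty field, pass numbers through
        w.writerow([f'"{v}"' if isinstance(v, str)
                    else ('' if v is None else v)
                    for v in row])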
|
How to generate the csv file with the Number field but None value
|
I got a requirement to get data from a database and write it into a file in CSV format.
A further requirement is that the fields need to be separated by the comma character, string values need to be enclosed in double quote characters, and the other fields must not be.
But when writing them into the CSV, a number field with a null value is enclosed in double quote characters.
below is my test code.
import csv
results = [("aaa", 123.1,323.1),("bbb",None,2345)]
with open('test.csv','w', newline='', encoding='utf-8') as csvfile:
csvwriter = csv.writer(csvfile, quotechar='"',quoting=csv.QUOTE_NONNUMERIC)
csvwriter.writerows(results)
the result after exporting is below.
"aaa",123.1,323.1
"bbb","",2345
My question is how to get the result below.
"aaa",123.1,323.1
"bbb",,2345
|
[
"Setting quoting to csv.QUOTE_NONNUMERIC means to quote all non-numeric values, and since None (or '') is not a numeric type, its output gets quoted.\nA workaround is to create a subclass of a numeric type and force its string conversion to be '', so that it passes csv's numeric type check and gets a non-quoted empty output:\nimport csv\n\nclass empty_number(int):\n def __str__(self):\n return ''\n\nEmptyNumber = empty_number()\n\nresults = [(\"aaa\", 123.1, 323.1), (\"bbb\", EmptyNumber, 2345)]\nwith open('test.csv','w', newline='', encoding='utf-8') as csvfile:\n csvwriter = csv.writer(csvfile, quotechar='\"',quoting=csv.QUOTE_NONNUMERIC)\n csvwriter.writerows(results)\n\nDemo: https://replit.com/@blhsing/BurlyForthrightDatawarehouse\n"
] |
[
1
] |
[] |
[] |
[
"csv",
"format",
"python"
] |
stackoverflow_0074556908_csv_format_python.txt
|
Q:
Reshaping a Dataframe with a column having numeric and non-numeric value stored as Object Datatype
I want to reshape the input dataframe to output dataframe shape as mentioned below.
Input Dataframe
ID Parameter Value
0 1001 Name Peter
1 1001 Name Pete
2 1001 Name J. Pete
3 1001 ShoeSize A
4 1001 ShoeSize A
5 1001 BrainSize 32
6 1001 BrainSize 41
7 1002 Name Frank
8 1002 ShoeSize C
9 1002 ShoeSize A
10 1002 BrainSize 52
11 1002 BrainSize 41
Output Dataframe
ID BrainSize Name ShoeSize
1001 36.5 Peter A
1002 46.5 Frank C,A
I did
import pandas as pd
import numpy as np
df=pd.read_csv('input.csv')
df
ID Parameter Value
0 1001 Name Peter
1 1001 Name Pete
2 1001 Name J. Pete
3 1001 ShoeSize A
4 1001 ShoeSize A
5 1001 BrainSize 32
6 1001 BrainSize 41
7 1002 Name Frank
8 1002 ShoeSize C
9 1002 ShoeSize A
10 1002 BrainSize 52
11 1002 BrainSize 41
df1=pd.pivot_table(df,index='ID',columns='Parameter',values='Value',aggfunc=sum)
df1
Parameter BrainSize Name ShoeSize
ID
1001 3241 PeterPeteJ. Pete AA
1002 5241 Frank CA
for i in range (len(df1)):
x=pd.to_numeric(df1.iloc[i,df1.columns.get_loc("BrainSize")][:2])+pd.to_numeric(df1.iloc[i,df1.columns.get_loc("BrainSize")][2:])
df1.iloc[i,0]=x/2
y=df1.iloc[i,df1.columns.get_loc("Name")][:5]
df1.iloc[i,1]=y
for j in range(len(df1)):
if(df1.iloc[j,df1.columns.get_loc('ShoeSize')][0]==df1.iloc[j,df1.columns.get_loc('ShoeSize')][1]):
df1.iloc[j,2]=df1.iloc[j,df1.columns.get_loc('ShoeSize')][0]
else:
df1.iloc[j,2]=df1.iloc[j,df1.columns.get_loc('ShoeSize')][0]+','+df1.iloc[j,df1.columns.get_loc('ShoeSize')][1]
df1
Parameter BrainSize Name ShoeSize
ID
1001 36.5 Peter A
1002 46.5 Frank C,A
I am sure there are better ways to do it. Can you please help !!!
A:
Use:
join = lambda x: ','.join(x.dropna())
(df.assign(idx2=lambda d: d.groupby(['ID', 'Parameter']).cumcount())
.pivot(index=['ID', 'idx2'], columns='Parameter', values='Value')
.astype({'BrainSize': float})
.groupby(level=0).agg({'BrainSize': 'mean', 'Name': join, 'ShoeSize': join})
)
output:
Parameter BrainSize Name ShoeSize
ID
1001 36.5 Peter,Pete,J. Pete A,A
1002 46.5 Frank C,A
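For comparison, a sketch that reaches the same shape with a plain groupby and no pivot (same ID/Parameter/Value column names assumed):
import pandas as pd

def summarize(g):
    # aggregate one ID's rows: mean of the numeric parameter, join the text ones
    return pd.Series({
        'BrainSize': g.loc[g.Parameter == 'BrainSize', 'Value'].astype(float).mean(),
        'Name': ','.join(g.loc[g.Parameter == 'Name', 'Value']),
        'ShoeSize': ','.join(g.loc[g.Parameter == 'ShoeSize', 'Value']),
    })

out = df.groupby('ID').apply(summarize)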
|
Reshaping a Dataframe with a column having numeric and non-numeric value stored as Object Datatype
|
I want to reshape the input dataframe to output dataframe shape as mentioned below.
Input Dataframe
ID Parameter Value
0 1001 Name Peter
1 1001 Name Pete
2 1001 Name J. Pete
3 1001 ShoeSize A
4 1001 ShoeSize A
5 1001 BrainSize 32
6 1001 BrainSize 41
7 1002 Name Frank
8 1002 ShoeSize C
9 1002 ShoeSize A
10 1002 BrainSize 52
11 1002 BrainSize 41
Output Dataframe
ID BrainSize Name ShoeSize
1001 36.5 Peter A
1002 46.5 Frank C,A
I did
import pandas as pd
import numpy as np
df=pd.read_csv('input.csv')
df
ID Parameter Value
0 1001 Name Peter
1 1001 Name Pete
2 1001 Name J. Pete
3 1001 ShoeSize A
4 1001 ShoeSize A
5 1001 BrainSize 32
6 1001 BrainSize 41
7 1002 Name Frank
8 1002 ShoeSize C
9 1002 ShoeSize A
10 1002 BrainSize 52
11 1002 BrainSize 41
df1=pd.pivot_table(df,index='ID',columns='Parameter',values='Value',aggfunc=sum)
df1
Parameter BrainSize Name ShoeSize
ID
1001 3241 PeterPeteJ. Pete AA
1002 5241 Frank CA
for i in range (len(df1)):
x=pd.to_numeric(df1.iloc[i,df1.columns.get_loc("BrainSize")][:2])+pd.to_numeric(df1.iloc[i,df1.columns.get_loc("BrainSize")][2:])
df1.iloc[i,0]=x/2
y=df1.iloc[i,df1.columns.get_loc("Name")][:5]
df1.iloc[i,1]=y
for j in range(len(df1)):
if(df1.iloc[j,df1.columns.get_loc('ShoeSize')][0]==df1.iloc[j,df1.columns.get_loc('ShoeSize')][1]):
df1.iloc[j,2]=df1.iloc[j,df1.columns.get_loc('ShoeSize')][0]
else:
df1.iloc[j,2]=df1.iloc[j,df1.columns.get_loc('ShoeSize')][0]+','+df1.iloc[j,df1.columns.get_loc('ShoeSize')][1]
df1
Parameter BrainSize Name ShoeSize
ID
1001 36.5 Peter A
1002 46.5 Frank C,A
I am sure there are better ways to do it. Can you please help !!!
|
[
"Use:\njoin = lambda x: ','.join(x.dropna())\n(df.assign(idx2=lambda d: d.groupby(['ID', 'Parameter']).cumcount())\n .pivot(index=['ID', 'idx2'], columns='Parameter', values='Value')\n .astype({'BrainSize': float})\n .groupby(level=0).agg({'BrainSize': 'mean', 'Name': join, 'ShoeSize': join})\n)\n\noutput:\nParameter BrainSize Name ShoeSize\nID \n1001 36.5 Peter,Pete,J. Pete A,A\n1002 46.5 Frank C,A\n\n"
] |
[
1
] |
[] |
[] |
[
"data_transform",
"pivot_table",
"python"
] |
stackoverflow_0074557126_data_transform_pivot_table_python.txt
|
Q:
Python - How to sort a 2d array by different order for each element?
I just want to clear out that I am new to coding.
I am trying to solve a problem set that counts the occurrence of characters in a string and prints out the 3 most reoccurring characters
Here's the code I wrote:
from operator import itemgetter

s = input().lower()
b = []
for i in s:
templst = []
templst.append(i)
templst.append(s.count(i))
if templst not in b:
b.append(templst)
final = sorted(b, key=itemgetter(1),reverse=True)
print (final)
for i in final[:3]:
print(*i, sep=" ")
now if I gave it an input of
szrmtbttyyaymadobvwniwmozojggfbtswdiocewnqsjrkimhovimghixqryqgzhgbakpncwupcadwvglmupbexijimonxdowqsjinqzytkooacwkchatuwpsoxwvgrrejkukcvyzbkfnzfvrthmtfvmbppkdebswfpspxnelhqnjlgntqzsprmhcnuomrvuyolvzlni
the output of final would be
[['o', 12], ['m', 11], ['w', 11], ['n', 11], ['t', 9], ['v', 9], ['i', 9], ['p', 9], ['s', 8], ['z', 8], ['r', 8], ['b', 8], ['g', 8], ['k', 8], ['y', 7], ['c', 7], ['q', 7], ['h', 7], ['a', 6], ['j', 6], ['u', 6], ['d', 5], ['f', 5], ['e', 5], ['x', 5], ['l', 5]]
so, the most occurring characters are
['o', 12], ['m', 11], ['w', 11], ['n', 11]
instead of
['o', 12], ['m', 11], ['n', 11], ['w', 11]
and since "m", "w" and "n" occurred equal times how do I sort the first element alphabetically while having the second element reversely sorted
A:
you need to specify multiple conditions for the sort
final = sorted(b, key=lambda e: (-e[1], e[0]))
The negative sign here makes larger numbers first (as if we are sorting in reverse order)
A:
Since pythons sort is stable you could do two sort passes:
b.sort(key=lambda x: x[0])
b.sort(key=lambda x: x[1], reverse=True)
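For the underlying task (the three most frequent characters), collections.Counter with the same two-level key is a compact sketch:
from collections import Counter

s = input().lower()
# sort by count descending, then character ascending, and keep the top 3
counts = sorted(Counter(s).items(), key=lambda kv: (-kv[1], kv[0]))
for ch, n in counts[:3]:
    print(ch, n)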
|
Python - How to sort a 2d array by different order for each element?
|
I just want to clear out that I am new to coding.
I am trying to solve a problem set that counts the occurrence of characters in a string and prints out the 3 most reoccurring characters
Here's the code I wrote:
from operator import itemgetter

s = input().lower()
b = []
for i in s:
templst = []
templst.append(i)
templst.append(s.count(i))
if templst not in b:
b.append(templst)
final = sorted(b, key=itemgetter(1),reverse=True)
print (final)
for i in final[:3]:
print(*i, sep=" ")
now if I gave it an input of
szrmtbttyyaymadobvwniwmozojggfbtswdiocewnqsjrkimhovimghixqryqgzhgbakpncwupcadwvglmupbexijimonxdowqsjinqzytkooacwkchatuwpsoxwvgrrejkukcvyzbkfnzfvrthmtfvmbppkdebswfpspxnelhqnjlgntqzsprmhcnuomrvuyolvzlni
the output of final would be
[['o', 12], ['m', 11], ['w', 11], ['n', 11], ['t', 9], ['v', 9], ['i', 9], ['p', 9], ['s', 8], ['z', 8], ['r', 8], ['b', 8], ['g', 8], ['k', 8], ['y', 7], ['c', 7], ['q', 7], ['h', 7], ['a', 6], ['j', 6], ['u', 6], ['d', 5], ['f', 5], ['e', 5], ['x', 5], ['l', 5]]
so, the most occurring characters are
['o', 12], ['m', 11], ['w', 11], ['n', 11]
instead of
['o', 12], ['m', 11], ['n', 11], ['w', 11]
and since "m", "w" and "n" occurred equal times how do I sort the first element alphabetically while having the second element reversely sorted
|
[
"you need to specify multiple conditions for the sort\nfinal= Sorted(b, key = lambda e: (-e[1], e[0]))\n\nThe negative sign here makes larger numbers first (as if we are sorting in reverse order)\n",
"Since pythons sort is stable you could do two sort passes:\nb.sort(key=lambda x: x[0])\nb.sort(key=lambda x: x[1], reverse=True)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"arrays",
"list",
"python",
"sorting",
"string"
] |
stackoverflow_0074556864_arrays_list_python_sorting_string.txt
|
Q:
sum in list of dictionaries python with exception
how do I get the sum of money and spent from a list of dictionaries where
sum of money = (sum of money of shirt color blue and red) and (sum of money of shirt color yellow and green)
sum of spent = (sum of spent of shirt color blue and red) and (sum of spent of shirt color yellow and green)
should i make new dictionary for shirtcolor blue and red and another one for yellow and green?
people = [{'name': 'A', 'shirtcolor':'blue', 'money':'100', spent:'50'}, {'name': 'B', 'shirtcolor':'red', 'money':'70', spent:'50'}, {'name': 'C', 'shirtcolor':'yellow', 'money':'100', spent:'70'}, {'name': 'D', 'shirtcolor':'blue', 'money':'200', spent:'110'},{'name': 'E', 'shirtcolor':'red', 'money':'130', spent:'50'}, {'name': 'F', 'shirtcolor':'yellow', 'money':'200', spent:'70'},{'name': 'G', 'shirtcolor':'green', 'money':'100', spent:'50'}]
expected output:
Total Money: 500 and 400
Total spent: 260 and 190
A:
The data is
people = [{'name': 'A', 'shirtcolor': 'blue', 'money': '100', 'spent': '50'},
{'name': 'B', 'shirtcolor': 'red', 'money': '70', 'spent': '50'},
{'name': 'C', 'shirtcolor': 'yellow', 'money': '100', 'spent': '70'},
{'name': 'D', 'shirtcolor': 'blue', 'money': '200', 'spent': '110'},
{'name': 'E', 'shirtcolor': 'red', 'money': '130', 'spent': '50'},
{'name': 'F', 'shirtcolor': 'yellow', 'money': '200', 'spent': '70'},
{'name': 'G', 'shirtcolor': 'green', 'money': '100', 'spent': '50'}]
You need only one dictionary where the color is the key and the value is a dictionary with the keys "money" and "spent". Then you can add up all entries there.
color_sum = dict()
for entry in people:
if entry['shirtcolor'] not in color_sum:
color_sum[entry['shirtcolor']] = {'money':0, 'spent':0}
color_sum[entry['shirtcolor']]['money'] += int(entry['money'])
color_sum[entry['shirtcolor']]['spent'] += int(entry['spent'])
Using a defaultdict does make this easier.
from collections import defaultdict
color_sum = defaultdict(lambda: {'money':0, 'spent':0})
for entry in people:
color_sum[entry['shirtcolor']]['money'] += int(entry['money'])
color_sum[entry['shirtcolor']]['spent'] += int(entry['spent'])
The resulting data in color_sum will be this:
{'blue': {'money': 300, 'spent': 160},
'red': {'money': 200, 'spent': 100},
'yellow': {'money': 300, 'spent': 140},
'green': {'money': 100, 'spent': 50}}
Now you can get the information you need.
money_red_blue = color_sum["red"]["money"] + color_sum["blue"]["money"]
money_yellow_green = color_sum["yellow"]["money"]+ color_sum["green"]["money"]
print(f'Total money: {money_red_blue} and {money_yellow_green}')
This will output Total money: 500 and 400
In the comment was the question how to get all the money from shirts that don't have one of the colors green and yellow. In this case we will have to loop over the aggregated data in the dictionary and exclude the items with the keys "green" and "yellow".
money = 0
for k, v in color_sum.items():
if k not in {'green', 'yellow'}:
money += v['money']
print(money)
Or as a one-liner with sum and a generator:
money = sum(v['money'] for k, v in color_sum.items() if k not in {'green', 'yellow'})
print(money)
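To complete the expected output from the question, the spent totals follow the same pattern (a sketch reusing color_sum from above):
spent_red_blue = color_sum['red']['spent'] + color_sum['blue']['spent']
spent_yellow_green = color_sum['yellow']['spent'] + color_sum['green']['spent']
print(f'Total spent: {spent_red_blue} and {spent_yellow_green}')
# Total spent: 260 and 190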
A:
First you need to check the spent attribute. It should have this syntax: 'spent': 40.
So people will be like this:
people = [{'name': 'A', 'shirtcolor':'blue', 'money':'100', 'spent':'50'}, {'name': 'B', 'shirtcolor':'red', 'money':'70', 'spent':'50'}, {'name': 'C', 'shirtcolor':'yellow', 'money':'100', 'spent':'70'}, {'name': 'D', 'shirtcolor':'blue', 'money':'200', 'spent':'110'},{'name': 'E', 'shirtcolor':'red', 'money':'130', 'spent':'50'}, {'name': 'F', 'shirtcolor':'yellow', 'money':'200', 'spent':'70'},{'name': 'G', 'shirtcolor':'green', 'money':'100', 'spent':'50'}]
Then you need to loop over the list and get all the values.
You can do this by extending this code:
money_blue = 0
for i in range(len(people)):
if people[i]['shirtcolor'] == "blue":
        money_blue += int(people[i]['money'])
print(money_blue)
A:
You can try like below:
product_list=[
{"name": "A", "shirtcolor":"blue", "money":"100", "spent":"50"},
{"name": "B", "shirtcolor":"red", "money":"70", "spent":"50"},
{"name": "C", "shirtcolor":"yellow", "money":"100", "spent":"70"},
{"name": "D", "shirtcolor":"blue", "money":"200", "spent":"110"},
{"name": "E", "shirtcolor":"red", "money":"130", "spent":"50"},
{"name": "F", "shirtcolor":"yellow", "money":"200", "spent":"70"},
{"name": "G", "shirtcolor":"green", "money":"100", "spent":"50"}
]
print(product_list)
#sum of spent for blue and red
blueSpent = sum([int(x["spent"]) for x in product_list if x["shirtcolor"]=="blue" or x["shirtcolor"]=="red"])
print(blueSpent)
#sum of spent for green and yellow
greenSpent = sum([int(x["spent"]) for x in product_list if x["shirtcolor"]=="green" or x["shirtcolor"]=="yellow"])
print(greenSpent)
#sum of spent for blue and red
blueMoney = sum([int(x["money"]) for x in product_list if x["shirtcolor"]=="blue" or x["shirtcolor"]=="red"])
print(blueMoney)
#sum of money for green and yellow
greenMoney = sum([int(x["money"]) for x in product_list if x["shirtcolor"]=="green" or x["shirtcolor"]=="yellow"])
print(greenMoney)
|
sum in list of dictionaries python with exception
|
how do I get the sum of money and spent from a list of dictionaries where
sum of money = (sum of money of shirt color blue and red) and (sum of money of shirt color yellow and green)
sum of spent = (sum of spent of shirt color blue and red) and (sum of spent of shirt color yellow and green)
should i make new dictionary for shirtcolor blue and red and another one for yellow and green?
people = [{'name': 'A', 'shirtcolor':'blue', 'money':'100', spent:'50'}, {'name': 'B', 'shirtcolor':'red', 'money':'70', spent:'50'}, {'name': 'C', 'shirtcolor':'yellow', 'money':'100', spent:'70'}, {'name': 'D', 'shirtcolor':'blue', 'money':'200', spent:'110'},{'name': 'E', 'shirtcolor':'red', 'money':'130', spent:'50'}, {'name': 'F', 'shirtcolor':'yellow', 'money':'200', spent:'70'},{'name': 'G', 'shirtcolor':'green', 'money':'100', spent:'50'}]
expected output:
Total Money: 500 and 400
Total spent: 260 and 190
|
[
"The data is\npeople = [{'name': 'A', 'shirtcolor': 'blue', 'money': '100', 'spent': '50'},\n {'name': 'B', 'shirtcolor': 'red', 'money': '70', 'spent': '50'},\n {'name': 'C', 'shirtcolor': 'yellow', 'money': '100', 'spent': '70'},\n {'name': 'D', 'shirtcolor': 'blue', 'money': '200', 'spent': '110'},\n {'name': 'E', 'shirtcolor': 'red', 'money': '130', 'spent': '50'},\n {'name': 'F', 'shirtcolor': 'yellow', 'money': '200', 'spent': '70'},\n {'name': 'G', 'shirtcolor': 'green', 'money': '100', 'spent': '50'}]\n\nYou need only one dictionary where the color is the key and the value is a dictionary with the keys \"money\" and \"spent\". Then you can add up all entries there.\ncolor_sum = dict()\nfor entry in people:\n if entry['shirtcolor'] not in color_sum:\n color_sum[entry['shirtcolor']] = {'money':0, 'spent':0}\n color_sum[entry['shirtcolor']]['money'] += int(entry['money'])\n color_sum[entry['shirtcolor']]['spent'] += int(entry['spent'])\n\nUsing a defaultdict does make this easier.\nfrom collections import defaultdict\n\ncolor_sum = defaultdict(lambda: {'money':0, 'spent':0})\nfor entry in people:\n color_sum[entry['shirtcolor']]['money'] += int(entry['money'])\n color_sum[entry['shirtcolor']]['spent'] += int(entry['spent'])\n\nThe resulting data in color_sum will be this:\n{'blue': {'money': 300, 'spent': 160}, \n 'red': {'money': 200, 'spent': 100}, \n 'yellow': {'money': 300, 'spent': 140}, \n 'green': {'money': 100, 'spent': 50}}\n\nNow you can get the information you need.\nmoney_red_blue = color_sum[\"red\"][\"money\"] + color_sum[\"blue\"][\"money\"]\nmoney_yellow_green = color_sum[\"yellow\"][\"money\"]+ color_sum[\"green\"][\"money\"]\nprint(f'Total money: {money_red_blue} and {money_yellow_green}')\n\nThis will output Total money: 500 and 400\n\nIn the comment was the question how to get all the money from shirts that don't have one of the colors green and yellow. In this case we will have to loop over the aggregated data in the dictionary and exclude the items with the keys \"green\" and \"yellow\".\nmoney = 0\nfor k, v in color_sum.items():\n if k not in {'green', 'yellow'}:\n money += v['money']\nprint(money)\n\nOr as a one-liner with sum and a generator:\nmoney = sum(v['money'] for k, v in color_sum.items() if k not in {'green', 'yellow'})\nprint(money)\n\n",
"First you need to check the spent attribute. It should have this syntax: 'spent': 40.\nSo people will be like this:\npeople = [{'name': 'A', 'shirtcolor':'blue', 'money':'100', 'spent':'50'}, {'name': 'B', 'shirtcolor':'red', 'money':'70', 'spent':'50'}, {'name': 'C', 'shirtcolor':'yellow', 'money':'100', 'spent':'70'}, {'name': 'D', 'shirtcolor':'blue', 'money':'200', 'spent':'110'},{'name': 'E', 'shirtcolor':'red', 'money':'130', 'spent':'50'}, {'name': 'F', 'shirtcolor':'yellow', 'money':'200', 'spent':'70'},{'name': 'G', 'shirtcolor':'green', 'money':'100', 'spent':'50'}]\nThen you need to loop over the list and get all the values.\nYou can do this by extending this code:\nmoney_blue = 0\nfor i in range(len(people)):\n if people[i]['shirtcolor'] == \"blue\":\n money += int(people[i]['money'])\n \n\nprint(money_blue)\n\n",
"You can try like below:\nproduct_list=[\n {\"name\": \"A\", \"shirtcolor\":\"blue\", \"money\":\"100\", \"spent\":\"50\"},\n {\"name\": \"B\", \"shirtcolor\":\"red\", \"money\":\"70\", \"spent\":\"50\"}, \n {\"name\": \"C\", \"shirtcolor\":\"yellow\", \"money\":\"100\", \"spent\":\"70\"},\n {\"name\": \"D\", \"shirtcolor\":\"blue\", \"money\":\"200\", \"spent\":\"110\"},\n {\"name\": \"E\", \"shirtcolor\":\"red\", \"money\":\"130\", \"spent\":\"50\"},\n {\"name\": \"F\", \"shirtcolor\":\"yellow\", \"money\":\"200\", \"spent\":\"70\"},\n {\"name\": \"G\", \"shirtcolor\":\"green\", \"money\":\"100\", \"spent\":\"50\"}\n]\n\nprint(product_list)\n\n#sum of spent for blue and red\nblueSpent = sum([int(x[\"spent\"]) for x in product_list if x[\"shirtcolor\"]==\"blue\" or x[\"shirtcolor\"]==\"red\"])\nprint(blueSpent)\n\n#sum of spent for green and yellow\ngreenSpent = sum([int(x[\"spent\"]) for x in product_list if x[\"shirtcolor\"]==\"green\" or x[\"shirtcolor\"]==\"yellow\"])\nprint(greenSpent)\n\n#sum of spent for blue and red\nblueMoney = sum([int(x[\"money\"]) for x in product_list if x[\"shirtcolor\"]==\"blue\" or x[\"shirtcolor\"]==\"red\"])\nprint(blueMoney)\n\n#sum of money for green and yellow\ngreenMoney = sum([int(x[\"money\"]) for x in product_list if x[\"shirtcolor\"]==\"green\" or x[\"shirtcolor\"]==\"yellow\"])\nprint(greenMoney)\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"dictionary",
"list",
"python",
"python_3.x"
] |
stackoverflow_0074556828_dictionary_list_python_python_3.x.txt
|
Q:
How to convert a three digit integer (xxx) to a 1 decimal place float (xx.x)?
Currently I'm getting data from some sensors with voltage (V) and current (C) values, which is decoded into text such as V040038038039C125067 to be stored in a MySQL DB table. The voltage part combines 4 different voltage values and the current part combines 2 different current values, where each value is represented by 3 digits in the format xx.x. For example, the current part C125067 actually means 12.5 and 06.7 A respectively. I tried to use Python slicing and some simple math to achieve this by dividing the values by 10, e.g. C125067 = 125/10 = 12.5. While this works for integers with a non-zero first digit (e.g. 125), when I tried the same for values such as 040 or 067, I get the SyntaxError: leading zeros in decimal integer literals are not permitted error. Are there any better ways to achieve the desired decoding output of xx.x, or to insert a decimal point before the last digit etc.? Thanks.
v1 = voltage[1:4]
v2 = voltage[4:7]
v3 = voltage[7:10]
v4 = voltage[10:13]
c1 = current[1:4]
c2 = current[4:7]
volt_1 = int(v1)/10
volt_2 = int(v2)/10
volt_3 = int(v3)/10
volt_4 = int(v4)/10
curr_1 = int(c1)/10
curr_2 = int(c2)/10
A:
Which version of Python are you using? int should convert strings such as '040' just fine.
Python 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:56:21)
Type 'copyright', 'credits' or 'license' for more information
IPython 8.4.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: int('040')
Out[1]: 40
In [2]:
Are you by any chance typing int(040) instead of int('040')? The former is a decimal integer literal while the latter is a string.
Leading zeros are not allowed in Python?
Using python 3.9.13, your code works without problems.
voltage = "V040038038039C125067"
v1 = voltage[1:4]
v2 = voltage[4:7]
v3 = voltage[7:10]
v4 = voltage[10:13]
volt_1 = int(v1)/10
volt_2 = int(v2)/10
volt_3 = int(v3)/10
volt_4 = int(v4)/10
print(v1, v2, v3, v4, volt_1, volt_2, volt_3, volt_4)
# 040 038 038 039 4.0 3.8 3.8 3.9
A:
Use a regex to get a list of 6 string values from your sql data (grouped by 3 digits).
The most efficient way to use the regex is to compile it at the beginning then use the compiled regex on your sql rows.
Use a list-comprehension to obtain a list of floats (converted from strings, also stripping the leading zeros).
Use sequence unpacking to separate into a voltage list and a current list.
import re
pattern = re.compile(r"(\d{3})")
data = "V040038038039C125067"
values = [int(x.lstrip("0")) / 10.0 for x in pattern.findall(data)]
voltage, current = values[:4], values[4:]
print(voltage, current) # [4.0, 3.8, 3.8, 3.9] [12.5, 6.7]
You can make a function of that, to easily apply to your sql rows.
def parse(data):
values = [int(x.lstrip("0")) / 10.0 for x in pattern.findall(data)]
return values[:4], values[4:]
voltage, current = parse("V040038038039C125067")
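One caveat with the lstrip variant above: int(x.lstrip("0")) raises a ValueError when a field is all zeros (e.g. "000"), because lstrip leaves an empty string. Since int() already accepts leading zeros in strings, a slightly safer sketch of the same parser is:
import re

pattern = re.compile(r"(\d{3})")

def parse(data):
    # int() handles "040" fine, and "000" maps to 0.0
    values = [int(x) / 10.0 for x in pattern.findall(data)]
    return values[:4], values[4:]

voltage, current = parse("V040038038039C125067")
print(voltage, current)  # [4.0, 3.8, 3.8, 3.9] [12.5, 6.7]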
|
How to convert a three digit integer (xxx) to a 1 decimal place float (xx.x)?
|
Currently I'm getting data from some sensors with voltage (V) and current (C) values, which is decoded into text such as V040038038039C125067 to be stored in a MySQL DB table. The voltage part combines 4 different voltage values and the current part combines 2 different current values, where each value is represented by 3 digits in the format xx.x. For example, the current part C125067 actually means 12.5 and 06.7 A respectively. I tried to use Python slicing and some simple math to achieve this by dividing the values by 10, e.g. C125067 = 125/10 = 12.5. While this works for integers with a non-zero first digit (e.g. 125), when I tried the same for values such as 040 or 067, I get the SyntaxError: leading zeros in decimal integer literals are not permitted error. Are there any better ways to achieve the desired decoding output of xx.x, or to insert a decimal point before the last digit etc.? Thanks.
v1 = voltage[1:4]
v2 = voltage[4:7]
v3 = voltage[7:10]
v4 = voltage[10:13]
c1 = current[1:4]
c2 = current[4:7]
volt_1 = int(v1)/10
volt_2 = int(v2)/10
volt_3 = int(v3)/10
volt_4 = int(v4)/10
curr_1 = int(c1)/10
curr_2 = int(c2)/10
|
[
"Which version of Python are you using? int should convert strings such as '040' just fine.\nPython 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:56:21) \nType 'copyright', 'credits' or 'license' for more information\nIPython 8.4.0 -- An enhanced Interactive Python. Type '?' for help.\n\nIn [1]: int('040')\nOut[1]: 40\n\nIn [2]: \n\nAre you by any chance typing int(040) instead of int('040')? One is a decimal integer literal while the latter is a string.\nLeading zeros are not allowed in Python?\nUsing python 3.9.13, your code works without problems.\nvoltage = \"V040038038039C125067\"\n\nv1 = voltage[1:4]\nv2 = voltage[4:7]\nv3 = voltage[7:10]\nv4 = voltage[10:13]\n\nvolt_1 = int(v1)/10\nvolt_2 = int(v2)/10\nvolt_3 = int(v3)/10\nvolt_4 = int(v4)/10\n\nprint(v1, v2, v3, v4, volt_1, volt_2, volt_3, volt_4)\n# 040 038 038 039 4.0 3.8 3.8 3.9\n\n",
"Use a regex to get a list of 6 string values from your sql data (grouped by 3 digits).\nThe most efficient way to use the regex is to compile it at the beginning then use the compiled regex on your sql rows.\nUse a list-comprehension to obtain a list of floats (converted from strings, also stripping the leading zeros).\nUse sequence unpacking to separate into a voltage list and a current list.\nimport re\n\npattern = re.compile(r\"(\\d{3})\")\ndata = \"V040038038039C125067\"\nvalues = [int(x.lstrip(\"0\")) / 10.0 for x in pattern.findall(data)]\nvoltage, current = values[:4], values[4:]\nprint(voltage, current) # [4.0, 3.8, 3.8, 3.9] [12.5, 6.7]\n\nYou can make a function of that, to easily apply to your sql rows.\ndef parse(data):\n values = [int(x.lstrip(\"0\")) / 10.0 for x in pattern.findall(data)]\n return values[:4], values[4:]\n\nvoltage, current = parse(\"V040038038039C125067\")\n\n"
] |
[
0,
0
] |
[
"This is a Very Simple Problem\nWhat you have to do is just divide the number by 10 and convert it into float with float inbuilt function in python.\na = int(input(\"Enter a random number: \"))\nprint(float(a/10))\n\nnow apply it in your problem.\nvolt_1 = float(int(v1)/10)\nvolt_2 = float(int(v2)/10)\nvolt_3 = float(int(v3)/10)\nvolt_4 = float(int(v4)/10)\n\ncurr_1 = float(int(c1)/10)\ncurr_2 = float(int(c2)/10)\n\n"
] |
[
-2
] |
[
"mysql",
"python",
"syntax_error"
] |
stackoverflow_0074555970_mysql_python_syntax_error.txt
|
Q:
Speed up groupby rolling apply utilising multiple columns
I'm trying to create a Brier Score for a grouped rolling window. As the function that calculates the Brier Score utilises multiple columns in the grouped rolling window I've had to use the answer here as the basis for a rather hacky solution:
import pandas as pd
import numpy as np
from pandas._libs.tslibs.timestamps import Timestamp
import random
ROWS = 20
# create dataframe
def create_random_dates(start: Timestamp, end: Timestamp, n: int):
divide_by = 24*60*60*10**9
start_u = start.value // divide_by
end_u = end.value // divide_by
return pd.to_datetime([random.randint(start_u, end_u) for p in range(n)], unit="D")
random.seed(1)
start = pd.to_datetime('2015-01-01')
end = pd.to_datetime('2018-01-01')
random_dates = create_random_dates(start, end, ROWS)
df = pd.DataFrame(
{
"id_": list(range(ROWS)),
"date": random_dates,
"group": [random.randint(1, 2) for p in range(ROWS)],
"y_true": [random.randint(0, 1) for p in range(ROWS)],
"y_prob": [random.random() for p in range(ROWS)],
}
)
df.sort_values(["group", "date"], inplace=True)
df.reset_index(drop=True, inplace=True)
df.reset_index(inplace=True)
# calculate brier score
def calc_brier(series: pd.Series, df: pd.DataFrame) -> float:
df_group = df.loc[series.values]
return np.average((df_group["y_true"].values - df_group["y_prob"].values) ** 2)
df_date_idx = df.set_index("date")
df_date_idx.drop(["id_", "y_true", "y_prob"], axis=1, inplace=True)
brier: pd.DataFrame = (
df_date_idx
.groupby("group", as_index=False)
.rolling("1000d", min_periods=3, closed="left")
.apply(calc_brier, args=(df, ))
)
df.drop("index", axis=1, inplace=True)
df["brier"] = brier["index"].values
df
This works fine with a small number of rows but takes an age once I start scaling ROWS. In my actual use case the dataframe is 1m+ rows and I've given up after a few minutes.
Would anyone have a faster solution?
A:
You can easily achieve fast execution with parallel-pandas.
In your example, I increased the number of groups from 2 to 100. Initialize parallel-pandas and use p_apply, the parallel analog of the apply method:
import time
from pandas._libs.tslibs.timestamps import Timestamp
import random
import pandas as pd
import numpy as np
from parallel_pandas import ParallelPandas
ROWS = 10_000
# create dataframe
def create_random_dates(start: Timestamp, end: Timestamp, n: int):
divide_by = 24 * 60 * 60 * 10 ** 9
start_u = start.value // divide_by
end_u = end.value // divide_by
return pd.to_datetime([random.randint(start_u, end_u) for p in range(n)], unit="D")
def calc_brier(series: pd.Series, df: pd.DataFrame) -> float:
df_group = df.loc[series.values]
return np.average((df_group["y_true"].values - df_group["y_prob"].values) ** 2)
if __name__ == '__main__':
ParallelPandas.initialize(n_cpu=16, disable_pr_bar=0, split_factor=1)
random.seed(1)
start = pd.to_datetime('2015-01-01')
end = pd.to_datetime('2018-01-01')
random_dates = create_random_dates(start, end, ROWS)
df = pd.DataFrame(
{
"id_": list(range(ROWS)),
"date": random_dates,
"group": [random.randint(1, 100) for p in range(ROWS)],
"y_true": [random.randint(0, 1) for p in range(ROWS)],
"y_prob": [random.random() for p in range(ROWS)],
}
)
df.sort_values(["group", "date"], inplace=True)
df.reset_index(drop=True, inplace=True)
df.reset_index(inplace=True)
# calculate brier score
df_date_idx = df.set_index("date")
df_date_idx.drop(["id_", "y_true", "y_prob"], axis=1, inplace=True)
start = time.monotonic()
brier: pd.DataFrame = (
df_date_idx
.groupby("group")
.rolling("1000d", min_periods=3, closed="left")
.p_apply(calc_brier, args=(df, ))
)
print(f'parallel time took: {time.monotonic() -start:.1f}')
df.drop("index", axis=1, inplace=True)
df["brier"] = brier["index"].values
Output: parallel time took: 0.8 s.
For 10,000 lines it took less than a second versus 7 seconds using the non-parallel apply method on my PC.
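As a sanity check (a minimal sketch reusing df_date_idx, df and calc_brier from above), it can be worth comparing the parallel result to the plain apply on a small slice before scaling up:
small = df_date_idx.head(1_000)
serial = (small.groupby("group")
               .rolling("1000d", min_periods=3, closed="left")
               .apply(calc_brier, args=(df,)))
parallel = (small.groupby("group")
                 .rolling("1000d", min_periods=3, closed="left")
                 .p_apply(calc_brier, args=(df,)))
assert serial.equals(parallel)  # same values and ordering expected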
|
Speed up groupby rolling apply utilising multiple columns
|
I'm trying to create a Brier Score for a grouped rolling window. As the function that calculates the Brier Score utilises multiple columns in the grouped rolling window I've had to use the answer here as the basis for a rather hacky solution:
import pandas as pd
import numpy as np
from pandas._libs.tslibs.timestamps import Timestamp
import random
ROWS = 20
# create dataframe
def create_random_dates(start: Timestamp, end: Timestamp, n: int):
divide_by = 24*60*60*10**9
start_u = start.value // divide_by
end_u = end.value // divide_by
return pd.to_datetime([random.randint(start_u, end_u) for p in range(n)], unit="D")
random.seed(1)
start = pd.to_datetime('2015-01-01')
end = pd.to_datetime('2018-01-01')
random_dates = create_random_dates(start, end, ROWS)
df = pd.DataFrame(
{
"id_": list(range(ROWS)),
"date": random_dates,
"group": [random.randint(1, 2) for p in range(ROWS)],
"y_true": [random.randint(0, 1) for p in range(ROWS)],
"y_prob": [random.random() for p in range(ROWS)],
}
)
df.sort_values(["group", "date"], inplace=True)
df.reset_index(drop=True, inplace=True)
df.reset_index(inplace=True)
# calculate brier score
def calc_brier(series: pd.Series, df: pd.DataFrame) -> float:
df_group = df.loc[series.values]
return np.average((df_group["y_true"].values - df_group["y_prob"].values) ** 2)
df_date_idx = df.set_index("date")
df_date_idx.drop(["id_", "y_true", "y_prob"], axis=1, inplace=True)
brier: pd.DataFrame = (
df_date_idx
.groupby("group", as_index=False)
.rolling("1000d", min_periods=3, closed="left")
.apply(calc_brier, args=(df, ))
)
df.drop("index", axis=1, inplace=True)
df["brier"] = brier["index"].values
df
This works fine with a small number of rows but takes an age once I start scaling ROWS. In my actual use case the dataframe is 1m+ rows and I've given up after a few minutes.
Would anyone have a faster solution?
|
[
"You can easily achieve fast execution with parallel-pandas.\nIn your example, I increased the number of groups from 2 to 100. Initialize parallel-pandas and use p_apply the parallel analog of the apply method\nimport time\n\nfrom pandas._libs.tslibs.timestamps import Timestamp\nimport random\nimport pandas as pd\nimport numpy as np\nfrom parallel_pandas import ParallelPandas\n\nROWS = 10_000\n\n\n# create dataframe\n\ndef create_random_dates(start: Timestamp, end: Timestamp, n: int):\n divide_by = 24 * 60 * 60 * 10 ** 9\n start_u = start.value // divide_by\n end_u = end.value // divide_by\n return pd.to_datetime([random.randint(start_u, end_u) for p in range(n)], unit=\"D\")\n\n\ndef calc_brier(series: pd.Series, df: pd.DataFrame) -> float:\n df_group = df.loc[series.values]\n return np.average((df_group[\"y_true\"].values - df_group[\"y_prob\"].values) ** 2)\n\n\nif __name__ == '__main__':\n ParallelPandas.initialize(n_cpu=16, disable_pr_bar=0, split_factor=1)\n random.seed(1)\n start = pd.to_datetime('2015-01-01')\n end = pd.to_datetime('2018-01-01')\n random_dates = create_random_dates(start, end, ROWS)\n df = pd.DataFrame(\n {\n \"id_\": list(range(ROWS)),\n \"date\": random_dates,\n \"group\": [random.randint(1, 100) for p in range(ROWS)],\n \"y_true\": [random.randint(0, 1) for p in range(ROWS)],\n \"y_prob\": [random.random() for p in range(ROWS)],\n }\n )\n df.sort_values([\"group\", \"date\"], inplace=True)\n df.reset_index(drop=True, inplace=True)\n df.reset_index(inplace=True)\n\n # calculate brier score\n\n\n\n df_date_idx = df.set_index(\"date\")\n df_date_idx.drop([\"id_\", \"y_true\", \"y_prob\"], axis=1, inplace=True)\n start = time.monotonic()\n brier: pd.DataFrame = (\n df_date_idx\n .groupby(\"group\")\n .rolling(\"1000d\", min_periods=3, closed=\"left\")\n .p_apply(calc_brier, args=(df, ))\n )\n print(f'parallel time took: {time.monotonic() -start:.1f}')\n df.drop(\"index\", axis=1, inplace=True)\n df[\"brier\"] = brier[\"index\"].values\n\n\nOutput: parallel time took: 0.8 s.\n\nFor 10,000 lines it took less than a second versus 7 seconds using the non-parallel apply method on my PC.\n"
] |
[
1
] |
[] |
[] |
[
"group_by",
"pandas",
"pandas_apply",
"pandas_rolling",
"python"
] |
stackoverflow_0074083218_group_by_pandas_pandas_apply_pandas_rolling_python.txt
|
Q:
f-string with percent and fixed decimals?
I know that I can do the following. But is it possible to combine them to give me a percent with a fixed number of decimals?
>>> print(f'{0.123:%}')
12.300000%
>>> print(f'{0.123:.2f}')
0.12
But what I want is this output:
12.30%
A:
You can specify the number of decimal places before %:
>>> f'{0.123:.2%}'
'12.30%'
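The precision and the % type are part of the same format mini-language, so other combinations work too; a couple of quick illustrations (not from the original answer):
>>> f'{0.123:.1%}'
'12.3%'
>>> f'{0.123:8.2%}'  # pad to a minimum field width of 8
'  12.30%'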
|
f-string with percent and fixed decimals?
|
I know that I can do the following. But is it possible to combine them to give me a percent with a fixed number of decimals?
>>> print(f'{0.123:%}')
12.300000%
>>> print(f'{0.123:.2f}')
0.12
But what I want is this output:
12.30%
|
[
"You can specify the number of decimal places before %:\n>>> f'{0.123:.2%}' \n'12.30%'\n\n"
] |
[
2
] |
[] |
[] |
[
"f_string",
"python"
] |
stackoverflow_0074557297_f_string_python.txt
|
Q:
Hashing the content of a method in Python
I am looking for a robust way to hash/serialize the content of a method in Python.
Use-case: We are file-caching the result of a transformation function, and it would be great if it were possible to automatically refresh the cache when the transformation function has changed:
cached_file = get_cached_filename(
data_version=data.version,
transform_version=1, # increment this when making updates
)
# Return the cached file if present
if cached_file.exists() and not overwrite:
logger.debug("Reading dataset from local cache")
return joblib.load(cached_file)
logger.debug("Downloading dataset and storing to local cache")
# Do the expensive data download and transform
df = download_and_apply_transforms(data)
# Store dataframe to local data cache
joblib.dump(df, cached_file)
return df
I am looking for something that could potentially replace my hardcoding of a version in transform_version=1, but rather read a hash from the method itself.
I tried using the built-in hash method of a callable. I.e. download_and_apply_transforms.__hash__(), but it seems to update on each re-run, so it likely bases itself on memory location somehow.
A:
Good question. Not sure if it is the best approach, but you can get the function source code using inspect module, like this:
inspect.getsource(foo)
That will return the source as a string, so you can then get the hash to get a cache key by running some hashing function.
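A minimal sketch of how the two pieces could fit together for the caching use case (transform_version is a hypothetical helper name, and the 12-character truncation is just a readability choice):
import hashlib
import inspect

def transform_version(func) -> str:
    # hash the function's own source code to get a stable version string
    source = inspect.getsource(func)
    return hashlib.sha256(source.encode("utf-8")).hexdigest()[:12]

# e.g. replacing the hardcoded value:
# transform_version=transform_version(download_and_apply_transforms)
One caveat: inspect.getsource only covers the function's own body, so edits to helper functions it calls won't invalidate the cache.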
|
Hashing the content of a method in Python
|
I am looking for a robust way to hash/serialize the content of a method in Python.
Use-case: We are file-caching the result of a transformation function, and it would be great if it were possible to automatically refresh the cache when the transformation function has changed:
cached_file = get_cached_filename(
data_version=data.version,
transform_version=1, # increment this when making updates
)
# Return the cached file if present
if cached_file.exists() and not overwrite:
logger.debug("Reading dataset from local cache")
return joblib.load(cached_file)
logger.debug("Downloading dataset and storing to local cache")
# Do the expensive data download and transform
df = download_and_apply_transforms(data)
# Store dataframe to local data cache
joblib.dump(df, cached_file)
return df
I am looking for something that could potentially replace my hardcoding of a version in transform_version=1, but rather read a hash from the method itself.
I tried using the built-in hash method of a callable. I.e. download_and_apply_transforms.__hash__(), but it seems to update on each re-run, so it likely bases itself on memory location somehow.
|
[
"Good question. Not sure if it is the best approach, but you can get the function source code using inspect module, like this:\ninspect.getsource(foo)\n\nThat will return the source as a string, so you can then get the hash to get a cache key by running some hashing function.\n"
] |
[
2
] |
[] |
[] |
[
"hash",
"python"
] |
stackoverflow_0074557272_hash_python.txt
|
Q:
Add multiple recipients while sending mail throws error - python
I have code that should send mail to multiple recipients, but it throws an error when I use multiple recipients.
The error I am getting:
{'error': {'code': 'RequestBodyRead', 'message': "The property 'Email Address' does not exist on type 'microsoft.graph.recipient'. Make sure to only use property names that are defined by the type or mark the type as open type."}}
def send_email(token, subject, recipients=None, body= None, content_type='HTML', attachments=None):
try:
print("inside send mail")
userId = str(EmailID)
graph_url = f'https://graph.microsoft.com/v1.0/users/{userId}/sendMail'
# Verify that required arguments have been passed.
if not all([token, subject, recipients]):
raise ValueError('sendmail(): required arguments missing')
# Create recipients list in required format.
recipient_list = [{'Email Address': {'Address': address}} for address in recipients]
print(recipient_list)
# Create list of attachments in required format.
attached_files = []
if attachments:
for filename in attachments:
b64_content = base64.b64encode(open(filename, 'rb').read())
mine_type = mimetypes.guess_type(filename)[0]
mime_type = mime_type or ''
attached_files.append(
{'@odata. type': '#microsoft. graph. fi LeAttachment',
'ContentBytes': b64_content.decode('utf-8'),
'ContentType': mime_type,
'Name': filename})
email_msg = {'Message': {'Subject': subject,
'Body': {'ContentType': content_type, 'Content': body},
'ToRecipients': recipient_list,
'Attachments': attached_files},
'SaveToSentItems': 'true'}
res = requests.post(
graph_url,
headers={
'Authorization': 'Bearer {0}'.format(token),
'Content-Type': 'application/json'
},
data=json.dumps(email_msg))
print(res.json())
except Exception as e:
print(e)
So when printing recipient_list I am getting
[{'Email Address': {'Address': 'Heer@company.com'}}, {'Email Address': {'Address': 'harry @company.com'}}]
Can anyone please help me solve this issue
A:
The name of the property inside toRecipients is emailAddress not Email Address.
Simply remove the space.
recipient_list = [{'emailAddress': {'Address': address}} for address in recipients]
Also in attachments you have some spaces: '@odata. type': '#microsoft. graph. fi LeAttachment'.
Remove those spaces
attached_files.append(
{'@odata.type': '#microsoft.graph.fileAttachment',
'ContentBytes': b64_content.decode('utf-8'),
'ContentType': mime_type,
'Name': filename})
There is no reason to call res.json() because this request doesn't return any response body.
See the user: sendMail response documentation.
|
Add multiple recipients while sending mail throws error - python
|
I have code that should send mail to multiple recipients, but it throws an error when I use multiple recipients.
The error I am getting:
{'error': {'code': 'RequestBodyRead', 'message': "The property 'Email Address' does not exist on type 'microsoft.graph.recipient'. Make sure to only use property names that are defined by the type or mark the type as open type."}}
def send_email(token, subject, recipients=None, body= None, content_type='HTML', attachments=None):
try:
print("inside send mail")
userId = str(EmailID)
graph_url = f'https://graph.microsoft.com/v1.0/users/{userId}/sendMail'
# Verify that required arguments have been passed.
if not all([token, subject, recipients]):
raise ValueError('sendmail(): required arguments missing')
# Create recipients list in required format.
recipient_list = [{'Email Address': {'Address': address}} for address in recipients]
print(recipient_list)
# Create list of attachments in required format.
attached_files = []
if attachments:
for filename in attachments:
b64_content = base64.b64encode(open(filename, 'rb').read())
mine_type = mimetypes.guess_type(filename)[0]
mime_type = mime_type or ''
attached_files.append(
{'@odata. type': '#microsoft. graph. fi LeAttachment',
'ContentBytes': b64_content.decode('utf-8'),
'ContentType': mime_type,
'Name': filename})
email_msg = {'Message': {'Subject': subject,
'Body': {'ContentType': content_type, 'Content': body},
'ToRecipients': recipient_list,
'Attachments': attached_files},
'SaveToSentItems': 'true'}
res = requests.post(
graph_url,
headers={
'Authorization': 'Bearer {0}'.format(token),
'Content-Type': 'application/json'
},
data=json.dumps(email_msg))
print(res.json())
except Exception as e:
print(e)
So when printing recipient_list I am getting
[{'Email Address': {'Address': 'Heer@company.com'}}, {'Email Address': {'Address': 'harry @company.com'}}]
Can anyone please help me solve this issue
|
[
"The name of the property inside toRecipients is emailAddress not Email Address.\nSimply remove the space.\nrecipient_list = [{'emailAddress': {'Address': address}} for address in recipients]\n\nAlso in attachments you have some spaces: '@odata. type': '#microsoft. graph. fi LeAttachment'.\nRemove those spaces\nattached_files.append(\n {'@odata.type': '#microsoft.graph.fileAttachment',\n 'ContentBytes': b64_content.decode('utf-8'),\n 'ContentType': mime_type,\n 'Name': filename})\n\nThere is no reason to call res.json() because this request doesn't return any response body.\nUser sendMail response\n"
] |
[
0
] |
[] |
[] |
[
"email",
"microsoft_graph_api",
"microsoft_graph_mail",
"office365",
"python"
] |
stackoverflow_0074557328_email_microsoft_graph_api_microsoft_graph_mail_office365_python.txt
|
Q:
Multiprocessing starmap creates duplicate objects
I am trying to use multiprocessing to (i) read in data, (ii) do some analyses, and (iii) save the output/results of those analyses as an instance of a custom class. My function that does the analyses takes multiple arguments, indicating that starmap from the multiprocessing module should do the trick.
However, even though I input unique (non repeating) arguments into my function, the results (i.e., the class instances) are sometimes duplicated and/or missing.
Here is an example of some boiled down code that illustrates my question/issue:
import numpy as np
from multiprocessing import Pool
# create simple class
class EgClass:
pass
# define function to do analysis
def fun(eg_class, val):
eg_class.val = val
return eg_class
if __name__ == "__main__":
# create a list of unique inputs
unique_vals = list(np.arange(12))
# instantiate the example class
inst = EgClass()
# create a list of inputs
inputs = list(zip(np.repeat(inst, len(unique_vals)), unique_vals))
# apply the inputs to the function via the pool
p = Pool(processes=2)
results = p.starmap(fun, inputs)
p.close()
result_vals = [i.val for i in results]
# notice the result values repeat (not unique) and do not match the unique_vals
print(result_vals)
print(unique_vals)
Interestingly, if you decrease the number of unique values (in this case changing unique_vals to unique_vals = list(np.arange(7))) the code works as I would expect, i.e., duplicate values only crop up when the input arguments increase above a certain length.
I've looked here: Python multiprocessing pool creating duplicate lists
But I believe that post was about wanting to create duplicate lists by sharing information across processes, which is like the opposite of what I am trying to do :)
Finally, forgive my naïveté. I am new to multiprocessing and there is a good chance I am missing something obvious.
A:
let's first run this code serially without multiprocessing.
from itertools import starmap
results = list(starmap(fun,inputs))
# [11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11]
notice you only created one EgClass instance in your entire code (you only called EgClass() once). np.repeat simply repeats the reference to the same object; it doesn't create new objects, so all the calls are modifying this same object. See Facts and myths about Python names and values
now for the multiprocessing part, let's modify the code to print the id of the returned object instead, let's also allow the chunksize parameter of starmap to be modified.
results = p.starmap(fun, inputs,chunksize=2)
result_id = [id(i) for i in results]
# [3066122542240, 3066122542240, 3066122541184, 3066122541184, 3066122540896, 3066122540896, 3066122541232, 3066122541232, 3066122541376, 3066122541376, 3066122483072, 3066122483072]
this means that the other process knows that the two objects sent over were the same object, because pickle can resolve multiple references to the same object. Both objects in a chunk of chunksize=2 were sent to the other process together and came back as one object instead of 2, but since only 2 objects were pickled at a time, only 2 results could share the same id. We can change that by changing the chunksize parameter; for example, we'd get the behavior you expect by setting chunksize to 1, but that would be relying on pickle-specific behavior. Instead, the proper way to ensure all objects have different ids is to actually create different objects to begin with, or make deep copies of them.
inputs = list(zip([EgClass() for x in range(len(unique_vals))], unique_vals))
# result_vals = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
this actually creates different objects, and each object will have its own id and internal properties, so we won't be depending on pickle's implementation details.
keep in mind that the arguments and return values of multiprocessing are pickled copies of the original objects, so any change to them doesn't translate to changes in the originals; you'd have to create Manager objects to propagate such changes.
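if you actually need worker-side mutations to be visible in the parent process, a Manager proxy is one option; a minimal sketch (using a Namespace proxy as a stand-in for EgClass, since plain instances aren't shared):
from multiprocessing import Manager, Pool

def fun_shared(ns, val):
    ns.val = val  # the write goes through the manager, so the parent sees it
    return val

if __name__ == "__main__":
    with Manager() as manager:
        ns = manager.Namespace()
        with Pool(processes=2) as p:
            results = p.starmap(fun_shared, [(ns, v) for v in range(5)])
        print(results)  # [0, 1, 2, 3, 4]
        print(ns.val)   # set by whichever task wrote last; the mutation survived
note that every attribute access on the proxy is an IPC round-trip, so this trades away much of the parallel speed-up.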
|
Multiprocessing starmap creates duplicate objects
|
I am trying to use multiprocessing to (i) read in data, (ii) do some analyses, and (iii) save the output/results of those analyses as an instance of a custom class. My function that does the analyses takes multiple arguments, indicating that starmap from the multiprocessing module should do the trick.
However, even though I input unique (non repeating) arguments into my function, the results (i.e., the class instances) are sometimes duplicated and/or missing.
Here is an example of some boiled down code that illustrates my question/issue:
import numpy as np
from multiprocessing import Pool
# create simple class
class EgClass:
pass
# define function to do analysis
def fun(eg_class, val):
eg_class.val = val
return eg_class
if __name__ == "__main__":
# create a list of unique inputs
unique_vals = list(np.arange(12))
# instantiate the example class
inst = EgClass()
# create a list of inputs
inputs = list(zip(np.repeat(inst, len(unique_vals)), unique_vals))
# apply the inputs to the function via the pool
p = Pool(processes=2)
results = p.starmap(fun, inputs)
p.close()
result_vals = [i.val for i in results]
# notice the result values repeat (not unique) and do not match the unique_vals
print(result_vals)
print(unique_vals)
Interestingly, if you decrease the number of unique values (in this case changing unique_vals to unique_vals = list(np.arange(7))) the code works as I would expect, i.e., duplicate values only crop up when the input arguments increase above a certain length.
I've looked here: Python multiprocessing pool creating duplicate lists
But I believe that post was about wanting to create duplicate lists by sharing information across processes, which is like the opposite of what I am trying to do :)
Finally, forgive my naïveté. I am new to multiprocessing and there is a good chance I am missing something obvious.
|
[
"let's first run this code serially without multiprocessing.\nfrom itertools import starmap\nresults = list(starmap(fun,inputs))\n# [11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11]\n\nnotice you only created one EgClass instance in your entire code, (you only called EgClass() once), np.repeat simply repeats the pointer to the same class, it doesn't create new objects, so all the trials are modifying this same object, Facts and myths about Python names and values\nnow for the multiprocessing part, let's modify the code to print the id of the returned object instead, let's also allow the chunksize parameter of starmap to be modified.\nresults = p.starmap(fun, inputs,chunksize=2)\nresult_id = [id(i) for i in results]\n# [3066122542240, 3066122542240, 3066122541184, 3066122541184, 3066122540896, 3066122540896, 3066122541232, 3066122541232, 3066122541376, 3066122541376, 3066122483072, 3066122483072]\n\nthis means that the other process knows that the two objects that were sent over were the same object, because pickle can resolve multiple references to the same object, which means both objects with chunksize=2 were correctly sent to the other process and returned as one object instead of 2, but as only 2 objects were pickled at one time, only 2 objects could keep the same id, we can change that by changing the chunksize parameter, for example we'd get the behavior you expect if we set chunksize to 1, but that would be relying on pickle specific behavior, instead the proper way to ensure all objects have different id is to actually create different objects to begin with, or make deep copies of it.\ninputs = list(zip([EgClass() for x in range(len(unique_vals))], unique_vals))\n# result_vals = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]\n\nthis actually creates different objects and each object will have its own id, and internal properties, so we won't be depending on pickle's specific implementation detail.\nkeep in mind that arguments and returns of multiprocessing are pickled copy versions of the original objects, and so any change to them doesn't translate to changes in the original objects, you'll have to create Manager objects to propagate the change.\n"
] |
[
1
] |
[] |
[] |
[
"multiprocessing",
"pool",
"python"
] |
stackoverflow_0074552552_multiprocessing_pool_python.txt
|
Q:
Indexed manageable attributes in Python
I need to add custom attributes to a class and make these attributes 'indexed'. This code fragment illustrates the issue:
import numpy as np
class Test:
def __init__(self):
self.arr = np.array([[100, 200, 300],
[100, 155, 120],
[300, 110, 333],
[500, 180, 120]
], dtype='object')
@property
def SecondRow(self):
return self.arr[:, 1]
# Workaround which always works
@SecondRow.setter
def SecondRow(self, data):
idx, value = data
self.arr[idx][1] = value
def func(test):
print(test.SecondRow)
# It works sometimes (depending if python makes a copy of self.arr[:, 1] or not).
# In the case of this example, it will work. In another context, it may not work.
test.SecondRow[2] = 500
print(test.arr)
# It works always but the code is not neat. I would like it to look as in the previous example.
test.SecondRow = (2, 600)
print(test.arr)
test = Test()
func(test)
The output is:
[200 155 110 180]
[[100 200 300]
[100 155 120]
[300 500 333]
[500 180 120]]
[[100 200 300]
[100 155 120]
[300 600 333]
[500 180 120]]
Here the output is OK. However, in the real project the arrays are huge and there are many intermediate calls between creating a Test instance and calling func(); in some cases the Python interpreter makes a copy of self.arr[:, 1] (unit tests are particularly problematic), and changing that copy will not affect the actual numpy array self.arr, only the copy of the column.
How can I address the [index] in a regular way (like test.SecondRow[2]) and still handle the copying issue?
I appreciate any help!
A:
There is a slightly hacky way to do what you want by piggybacking off __setitem__.
import numpy as np
class Test:
def __init__(self):
self.arr = np.array([[100, 200, 300],
[100, 155, 120],
[300, 110, 333],
[500, 180, 120]
], dtype='object')
# a control mechanism if there are multiple properties
self.signal = None
def __setitem__(self, idx, value):
"""Performs action according to specified signal."""
if self.signal == "R1":
self.arr[idx, 1] = value
elif self.signal == "C0":
self.arr[0, idx] = value
self.signal = None # reset signal
@property
def SecondRow(self):
self.signal = "R1"
return self
@property
def FirstColumn(self):
self.signal = "C0"
return self
This should allow you to index into your custom property and modify your array without issues.
def func(test):
test.SecondRow[2] = 500
print("Arr:\n", test.arr)
test.FirstColumn[0] = -17
print("Arr:\n", test.arr)
test = Test()
func(test)
Output:
Arr:
[[100 200 300]
[100 155 120]
[300 500 333]
[500 180 120]]
Arr:
[[-17 200 300]
[100 155 120]
[300 500 333]
[500 180 120]]
Drawback
While test.SecondRow[2] = 500 should work without any issues now, we've lost the ability to do print(test.SecondRow), as we directly route to self and __setitem__.
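If keeping read access matters, an alternative sketch (one possible design, not the only one) is to return a tiny view object instead of routing through the parent, so both reads and writes hit the underlying array:
class _ColumnView:
    """Thin read/write view over one column of a 2-D array."""
    def __init__(self, arr, col):
        self._arr, self._col = arr, col

    def __getitem__(self, idx):
        return self._arr[idx, self._col]

    def __setitem__(self, idx, value):
        self._arr[idx, self._col] = value

    def __repr__(self):
        return repr(self._arr[:, self._col])

# inside Test:
#     @property
#     def SecondRow(self):
#         return _ColumnView(self.arr, 1)
With this, test.SecondRow[2] = 500 writes through to self.arr and print(test.SecondRow) still shows the column.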
|
Indexed manageable attributes in Python
|
I need to add custom attributes to a class and make these attributes 'indexed'. This code fragment illustrates the issue:
import numpy as np
class Test:
def __init__(self):
self.arr = np.array([[100, 200, 300],
[100, 155, 120],
[300, 110, 333],
[500, 180, 120]
], dtype='object')
@property
def SecondRow(self):
return self.arr[:, 1]
# Workaround which always works
@SecondRow.setter
def SecondRow(self, data):
idx, value = data
self.arr[idx][1] = value
def func(test):
print(test.SecondRow)
# It works sometimes (depending if python makes a copy of self.arr[:, 1] or not).
# In the case of this example, it will work. In another context, it may not work.
test.SecondRow[2] = 500
print(test.arr)
# It works always but the code is not neat. I would like it to look as in the previous example.
test.SecondRow = (2, 600)
print(test.arr)
test = Test()
func(test)
The output is:
[200 155 110 180]
[[100 200 300]
[100 155 120]
[300 500 333]
[500 180 120]]
[[100 200 300]
[100 155 120]
[300 600 333]
[500 180 120]]
Here the output is OK. However, in the real project the arrays are huge and there are many intermediate calls between creating a Test instance and calling func(); in some cases the Python interpreter makes a copy of self.arr[:, 1] (unit tests are particularly problematic), and changing that copy will not affect the actual numpy array self.arr, only the copy of the column.
How can I address the [index] in a regular way (like test.SecondRow[2]) and still handle the copying issue?
I appreciate any help!
|
[
"There is a slightly hacky way to do what you want by piggybacking off __setitem__.\nimport numpy as np\n\nclass Test:\n def __init__(self):\n self.arr = np.array([[100, 200, 300],\n [100, 155, 120],\n [300, 110, 333],\n [500, 180, 120]\n ], dtype='object')\n \n # a control mechanism if there are multiple properties\n self.signal = None\n\n def __setitem__(self, idx, value):\n \"\"\"Performs action according to specified signal.\"\"\"\n \n if self.signal == \"R1\":\n self.arr[idx, 1] = value\n elif self.signal == \"C0\":\n self.arr[0, idx] = value\n \n self.signal = None # reset signal\n\n @property\n def SecondRow(self):\n self.signal = \"R1\"\n return self\n\n @property\n def FirstColumn(self):\n self.signal = \"C0\"\n return self\n\nThis should allow you to index into your custom property and modify your array without issues.\ndef func(test):\n test.SecondRow[2] = 500\n print(\"Arr:\\n\", test.arr)\n\n test.FirstColumn[0] = -17\n print(\"Arr:\\n\", test.arr)\n \ntest = Test()\nfunc(test)\n\nOutput:\nArr:\n [[100 200 300]\n [100 155 120]\n [300 500 333]\n [500 180 120]]\nArr:\n [[-17 200 300]\n [100 155 120]\n [300 500 333]\n [500 180 120]]\n\nDrawback\nWhile test.SecondRow[2] = 500 should work without any issues now, we've lost the ability to do print(test.SecondRow), as we directly rout to self and setitem.\n"
] |
[
0
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0074533621_numpy_python.txt
|
Q:
Python Zybooks LAB 9.6 - Contact List
Yet again, I do not understand an error I keep encountering. Here is my code:
s = input()
name = input()
splits = s.split(" ")
i = 0
for i in range(len(splits)):
if(splits[i] == name):
break
print(splits[i+1])
Here is the error:
Traceback (most recent call last):
File "main.py", line 15, in <module>
print(splits[i+1])
IndexError: list index out of range
I am not sure why [i+1] returns as out of range. What did I screw up this time? I appreciate the help in advance as I don't get much guidance from my instructor or TA. You folks rock here!
Edit: I apologize I did not include a desired outcome.
The input is:
Joe,123-5432 Linda,983-4123 Frank,867-5309
Frank
The output is supposed to be:
867-5309
A:
s = 'Hello'
name = 'Goodbye'
splits = s.split() # splits on whitespace by default -> ['Hello'] - notice the single item
for i in range(1): # because splits has a single item in the list
if 'Hello' == 'Goodbye':
break
print(splits[i+1]) # this will not work because splits has a single index = 0
# same as
s = [0,1]
print(s[2])
I advise coming up with a desired output when asking a question on Stack Overflow so others will have a better understanding of your problem.
Also, as mentioned by Hossein in the comment, try using print() in each line of your code to see if you get what you expect.
-- Update --
s_input = 'Joe,123-5432 Linda,983-4123 Frank,867-5309'
to_search = 'Frank'
# split the input
splitted_input = s_input.split()
for item_pair in splitted_input:
# split the pair by a comma
pair_split = item_pair.split(',')
name = pair_split[0]
number = pair_split[1]
if name == to_search:
print(name, number)
Frank 867-5309
The problem you had is that after the split you have 3 elements, but I assume you thought you would get 6 (3 pairs of a name and a number); therefore, when you try to reach the number you go out of range. You need to split by a comma to separate the people from one another, and then split again to separate the number from the name.
But I would suggest to use:
def find_number(string_to_search, name_to_find):
splits = string_to_search.split()
for item in splits:
if name_to_find.lower() in item.lower():
return item.split(',')[1]
find_number('Joe,123-5432 Linda,983-4123 Frank,867-5309', 'Frank')
# '867-5309'
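A variant worth knowing (a sketch on the same sample input): since every pair has the form name,number, you can build a dict once and look names up directly, which avoids index errors entirely.
s = 'Joe,123-5432 Linda,983-4123 Frank,867-5309'
contacts = dict(pair.split(',') for pair in s.split())
print(contacts.get('Frank'))  # 867-5309
print(contacts.get('Alice'))  # None instead of an IndexError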
|
Python Zybooks LAB 9.6 - Contact List
|
Yet again, I do not understand an error I keep encountering. Here is my code:
s = input()
name = input()
splits = s.split(" ")
i = 0
for i in range(len(splits)):
if(splits[i] == name):
break
print(splits[i+1])
Here is the error:
Traceback (most recent call last):
File "main.py", line 15, in <module>
print(splits[i+1])
IndexError: list index out of range
I am not sure why [i+1] returns as out of range. What did I screw up this time? I appreciate the help in advance as I don't get much guidance from my instructor or TA. You folks rock here!
Edit: I apologize I did not include a desired outcome.
The input is:
Joe,123-5432 Linda,983-4123 Frank,867-5309
Frank
The output is supposed to be:
867-5309
|
[
"s = 'Hello'\nname = 'Goodbye'\nsplits = s.split() # default value is a single space ['Hello'] - notice the single value\n\nfor i in range(1): # because splits has a single item in the list\n if 'Hello' == 'Goodbye':\n break\n \nprint(splits[i+1]) # this will not work because splits has a single index = 0\n\n\n# same as\n\ns = [0,1]\nprint(s[2])\n\nI advise to come up with a desired output when you are asking a question on StackOverflow so others will have a better understanding of your problem.\nAlso, as mentioned by Hossein in the comment, try using print() in each line of your code to see if you get what you expect.\n-- Update --\ns_input = 'Joe,123-5432 Linda,983-4123 Frank,867-5309'\nto_search = 'Frank'\n\n# split the input\nsplitted_input = s_input.split()\n\n\nfor item_pair in splitted_input:\n # split the pair by a comma\n pair_split = item_pair.split(',')\n name = pair_split[0]\n number = pair_split[1]\n \n if name == to_search:\n print(name, number)\n \nFrank 867-5309\n\nThe problem you had is that after you split you have 3 elements but I assume you have thought you get 6, 3 pairs for a name and a number, therefore when you try to reach the number you get out of index, you need to split by a comma to separate the people one from another and then split again to separate the number from the name.\nBut I would suggest to use:\ndef find_number(string_to_search, name_to_find):\n splits = string_to_search.split()\n for item in splits:\n if name_to_find.lower() in item.lower():\n return item.split(',')[1]\n \n \nfind_number('Joe,123-5432 Linda,983-4123 Frank,867-5309', 'Frank')\n\n# '867-5309'\n\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074557361_python.txt
|
Q:
Selenium Webdriver Xpath id random
I have a problem:
when I look up the id used in an XPath, it changes every time I load the page.
How can I use Selenium WebDriver in Python, e.g.
browser.find_element(By.ID,)
if the id changes every time I consult it?
first
<span data-dojo-attach-point="containerNode,focusNode"
class="tabLabel" role="tab" tabindex="0"
id="icm_widget_SelectorTabContainer_0_tablist_dcf42e75-1d03-4acd-878c-722cbc8e74ec"
name="icm_widget_SelectorTabContainer_0_tablist_dcf42e75-1d03-4acd-878c-722cbc8e74ec"
aria-disabled="false"
title=""
style="user-select: none;"
aria-selected="true">Search</span>
second
<span data-dojo-attach-point="containerNode,focusNode"
class="tabLabel"
role="tab"
tabindex="0"
id="icm_widget_SelectorTabContainer_0_tablist_c9ba5042-90d2-4932-8c2d-762a1dd39982"
name="icm_widget_SelectorTabContainer_0_tablist_c9ba5042-90d2-4932-8c2d-762a1dd39982"
aria-disabled="false"
title=""
style="user-select: none;"
aria-selected="true">Search</span>
try with
browser.find_element(By.XPATH
browser.find_element(By.ID
browser.find_element(By.NAME
same problem, the id changes
A:
Try to use the below XPath:
browser.find_element(By.XPATH, "//span[contains(@id, 'icm_widget_SelectorTabContainer') and text()='Search']")
A:
In case the first part of the id is unique and stable as it seems to be, you can use XPath or CSS Selector to locate this element.
XPath:
browser.find_element(By.XPATH, "//span[contains(@id,'icm_widget_SelectorTabContainer_0_tablist')]")
CSS Selector:
browser.find_element(By.CSS_SELECTOR, "span[id*='icm_widget_SelectorTabContainer_0_tablist']")
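Since widgets with generated ids like these are often rendered asynchronously, it can also help to pair the partial-id locator with an explicit wait; a short sketch using the standard Selenium wait API with the CSS Selector above:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

element = WebDriverWait(browser, 10).until(
    EC.element_to_be_clickable(
        (By.CSS_SELECTOR, "span[id*='icm_widget_SelectorTabContainer_0_tablist']")
    )
)
element.click()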
|
Selenium Webdriver Xpath id random
|
I have a problem:
when I look up the id used in an XPath, it changes every time I load the page.
How can I use Selenium WebDriver in Python, e.g.
browser.find_element(By.ID,)
if the id changes every time I consult it?
first
<span data-dojo-attach-point="containerNode,focusNode"
class="tabLabel" role="tab" tabindex="0"
id="icm_widget_SelectorTabContainer_0_tablist_dcf42e75-1d03-4acd-878c-722cbc8e74ec"
name="icm_widget_SelectorTabContainer_0_tablist_dcf42e75-1d03-4acd-878c-722cbc8e74ec"
aria-disabled="false"
title=""
style="user-select: none;"
aria-selected="true">Search</span>
second
<span data-dojo-attach-point="containerNode,focusNode"
class="tabLabel"
role="tab"
tabindex="0"
id="icm_widget_SelectorTabContainer_0_tablist_c9ba5042-90d2-4932-8c2d-762a1dd39982"
name="icm_widget_SelectorTabContainer_0_tablist_c9ba5042-90d2-4932-8c2d-762a1dd39982"
aria-disabled="false"
title=""
style="user-select: none;"
aria-selected="true">Search</span>
try with
browser.find_element(By.XPATH
browser.find_element(By.ID
browser.find_element(By.NAME
same problem, the id changes
|
[
"Try to use below xpath\nbrowser.find_element(By.XPATH(//span[contains(@id, 'icm_widget_SelectorTabContainer') and text()='Search']);\n\n",
"In case the first part of the id is unique and stable as it seems to be, you can use XPath or CSS Selector to locate this element.\nXPath:\nbrowser.find_element(By.XPATH, \"//span[contains(@id,'icm_widget_SelectorTabContainer_0_tablist')]\")\n\nCSS Selector:\nbrowser.find_element(By.CSS_SELECTOR, \"span[id*='icm_widget_SelectorTabContainer_0_tablist']\")\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"google_chrome",
"python",
"selenium",
"webdriver"
] |
stackoverflow_0074555529_google_chrome_python_selenium_webdriver.txt
|
Q:
Create a new row depending on 2 columns
After each Year row I need a new row called Year period: if Column1 is Year and Column3 < 2010, the Year period value for that row is Below2010, and likewise for the other ranges.
Column1 Column2 ColumnX Column3
0 Year 1 A 2009
1 Date 1 A 12
2 Year 2 A 2021
3 Year 3 A 2011
Column1 Column2 ColumnX Column3
0 Year 1 A 2009
1 Year period 1 A Below2010
2 Date 1 A 12
3 Year 2 A 2021
4 Year period 2 A Above2020
5 Year 3 A 2011
6 Year period 3 A Range in 2010/2020
A:
Filter the Year rows first with boolean indexing, replace Column3 via numpy.select, append ' period' to Column1, then join with the original by concat and sort the indices by DataFrame.sort_index:
#necessary default RangeIndex
df = df.reset_index(drop=True)
df2 = df[df['Column1'].eq('Year')].copy()
df2['Column3'] = pd.to_numeric(df2['Column3'], errors='coerce')
df1 = (df2.assign(Column3 = lambda x: np.select([x['Column3']<2010, x['Column3']>2020],
['Below2010','Above2020'],
default='Range in 2010/2020'),
Column1 = lambda x: x['Column1'] + ' period'))
df = pd.concat([df, df1]).sort_index(kind='mergesort', ignore_index=True)
print (df)
Column1 Column2 ColumnX Column3
0 Year 1 A 2009
1 Year period 1 A Below2010
2 Date 1 A 12
3 Year 2 A 2021
4 Year period 2 A Above2020
5 Year 3 A 2011
6 Year period 3 A Range in 2010/2020
|
Create a new row depending on 2 columns
|
After each Year row I need a new row called Year period: if Column1 is Year and Column3 < 2010, the Year period value for that row is Below2010, and likewise for the other ranges.
Column1 Column2 ColumnX Column3
0 Year 1 A 2009
1 Date 1 A 12
2 Year 2 A 2021
3 Year 3 A 2011
Column1 Column2 ColumnX Column3
0 Year 1 A 2009
1 Year period 1 A Below2010
2 Date 1 A 12
3 Year 2 A 2021
4 Year period 2 A Above2020
5 Year 3 A 2011
6 Year period 3 A Range in 2010/2020
|
[
"Filter rows first in boolean indexing for Year columns, replace Column3 in numpy.select and add substring to Column1, last join with original by concat and sort indices by DataFrame.sort_index:\n#necessary default RangeIndex\ndf = df.reset_index(drop=True)\n\ndf2 = df[df['Column1'].eq('Year')].copy()\ndf2['Column3'] = pd.to_numeric(df2['Column3'], errors='coerce')\n\ndf1 = (df2.assign(Column3 = lambda x: np.select([x['Column3']<2010, x['Column3']>2020], \n ['Below2010','Above2020'], \n default='Range in 2010/2020'),\n Column1 = lambda x: x['Column1'] + ' period'))\n\ndf = pd.concat([df, df1]).sort_index(kind='mergesort', ignore_index=True)\nprint (df)\n Column1 Column2 ColumnX Column3\n0 Year 1 A 2009\n1 Year period 1 A Below2010\n2 Date 1 A 12\n3 Year 2 A 2021\n4 Year period 2 A Above2020\n5 Year 3 A 2011\n6 Year period 3 A Range in 2010/2020\n\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074557375_pandas_python.txt
|
Q:
How to associate subplots with a particular figure number?
Coming from MATLAB, I am trying to reproduce the way subplots are associated with a figure number:
figure(3)
subplot(3,1,1)
How would I do this in Python? Below is where I am stuck.
plt.figure(3)
fig, axs = plt.subplots(3)
A:
You have to add subplots to the figure you created
fig = plt.figure(3)
axs = fig.add_subplot(3, 1, 1)
or you can create both using subplots, so the previous call of figure is not needed
fig, axs = plt.subplots(3, 1)
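A quick sketch of the MATLAB-style flow, where the figure number keeps addressing the same window across calls:
import matplotlib.pyplot as plt

fig = plt.figure(3)            # creates (or re-activates) figure 3
ax = fig.add_subplot(3, 1, 1)  # the equivalent of subplot(3,1,1)
ax.plot([0, 1], [0, 1])

plt.figure(3)                  # later calls with the same number return figure 3
plt.show()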
|
How to associate subplots with a particular figure number?
|
Coming from MATLAB, I am trying to reproduce the way subplots are associated with a figure number:
figure(3)
subplot(3,1,1)
How would I do this in Python? Below is where I am stuck.
plt.figure(3)
fig, axs = plt.subplots(3)
|
[
"You have to add subplots to the figure you created\nfig = plt.figure(3)\naxs = fig.add_subplot(3, 1, 1)\n\nor you can create both using subplots, so the previous call of figure is not needed\nfig, axs = plt.subplots(3, 1)\n\n"
] |
[
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0074556278_matplotlib_python.txt
|
Q:
login flow api server python
I'm kind of new to this, so any help will be highly appreciated.
I'm trying to write an API server that will handle a simple user login request.
This is what I got so far:
from flask import Flask, request, jsonify, Response
import jwt
app = Flask(__name__)
@app.route('/login', methods=['POST'])
def login():
if "user" not in request.json \
or "password" not in request.json:
return Response(
"missing credentials",
status=400,)
user = request.json["user"]
password = request.json["password"]
encoded_jwt = jwt.encode({"sub": user}, "secret", algorithm="HS256")
return jsonify({"token": encoded_jwt})
app.run()
Assuming I have a dict of users and passwords, for example:
USER_DICT = {"user":"ben", "password":"12345"}.
I want to write code that receives the user name and password, hashes the password, and then verifies that the user name and password are correct (by using the dict).
Should I save the hashed password in the dict? If so, how?
I hope I was able to explain myself well.
Thanks!
A:
I normally use passlib to do this when I am building a REST API with FastAPI, but I am not sure if it works with Flask (passlib itself is framework-agnostic, so it should).
To Install
pip install "passlib[bcrypt]"
Or
pip3 install "passlib[bcrypt]"
Usage
from passlib.context import CryptContext
pwd_context = CryptContext(schemes='bcrypt')
"""
Call this function when creating a new
user.
Call this function to hash the
password received from the user
"""
def hashed_pwd(password: str) -> str:
return pwd_context.hash(password)
"""
Call this function on login attempt.
Call this function to check if the
password enter by the user is equivalent to the
hashed password in the database
"""
def verify_pwd(user_pass: str, hashed_pass: str) -> bool:
return pwd_context.verify(user_pass, hashed_pass)
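To tie this back to the question, here is a minimal sketch of the two helpers inside the Flask route, with a hypothetical in-memory store USERS that keeps only the hash (never the plain password); it reuses the imports from the question:
# hypothetical user store: the value is a passlib hash, not the raw password
USERS = {"ben": hashed_pwd("12345")}

@app.route('/login', methods=['POST'])
def login():
    user = request.json.get("user")
    password = request.json.get("password")
    if user is None or password is None:
        return Response("missing credentials", status=400)
    stored_hash = USERS.get(user)
    if stored_hash is None or not verify_pwd(password, stored_hash):
        return Response("invalid credentials", status=401)
    token = jwt.encode({"sub": user}, "secret", algorithm="HS256")
    return jsonify({"token": token})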
|
login flow api server python
|
I'm kind of new to this, so any help will be highly appreciated.
I'm trying to write an API server that will handle a simple user login request.
This is what I got so far:
from flask import Flask, request, jsonify, Response
import jwt
app = Flask(__name__)
@app.route('/login', methods=['POST'])
def login():
if "user" not in request.json \
or "password" not in request.json:
return Response(
"missing credentials",
status=400,)
user = request.json["user"]
password = request.json["password"]
encoded_jwt = jwt.encode({"sub": user}, "secret", algorithm="HS256")
return jsonify({"token": encoded_jwt})
app.run()
Assuming I have a dict of users and passwords, for example:
USER_DICT = {"user":"ben", "password":"12345"}.
I want to write code that receives the user name and password, hashes the password, and then verifies that the user name and password are correct (by using the dict).
Should I save the hashed password in the dict? If so, how?
I hope I was able to explain myself well.
Thanks!
|
[
"I normally use passlib to do this when I am building rest API using fastapi but I am not if it works with flask.\nTo Install\npip install \"passlib[bcrypt]\"\nOr\npip3 install \"passlib[bcrypt]\"\n\nUsage\nfrom passlib.context import CryptContext\n\npwd_context = CryptContext(schemes='bcrypt')\n\n\"\"\"\nCall this function when creating a new \nuser.\n\nCall this function to hash the\npassword received from the user\n\"\"\"\ndef hashed_pwd(password: str) -> str:\n return pwd_context.hash(password)\n\n\"\"\"\nCall this function on login attempt.\n\nCall this function to check if the\npassword enter by the user is equivalent to the \nhashed password in the database\n\"\"\"\ndef verify_pwd(user_pass: str, hashed_pass: str) -> bool:\n return pwd_context.verify(user_pass, hashed_pass)\n\n"
] |
[
0
] |
[] |
[] |
[
"api",
"python"
] |
stackoverflow_0074557365_api_python.txt
|
Q:
How to convert index of a pandas dataframe into a column
This seems rather obvious, but I can't seem to figure out how to convert the index of a data frame into a column.
For example:
df=
gi ptt_loc
0 384444683 593
1 384444684 594
2 384444686 596
To,
df=
index1 gi ptt_loc
0 0 384444683 593
1 1 384444684 594
2 2 384444686 596
A:
either:
df['index1'] = df.index
or, .reset_index:
df = df.reset_index(level=0)
so, if you have a multi-index frame with 3 levels of index, like:
>>> df
val
tick tag obs
2016-02-26 C 2 0.0139
2016-02-27 A 2 0.5577
2016-02-28 C 6 0.0303
and you want to convert the 1st (tick) and 3rd (obs) levels in the index into columns, you would do:
>>> df.reset_index(level=['tick', 'obs'])
tick obs val
tag
C 2016-02-26 2 0.0139
A 2016-02-27 2 0.5577
C 2016-02-28 6 0.0303
A:
rename_axis + reset_index
You can first rename your index to a desired label, then elevate to a series:
df = df.rename_axis('index1').reset_index()
print(df)
index1 gi ptt_loc
0 0 384444683 593
1 1 384444684 594
2 2 384444686 596
This works also for MultiIndex dataframes:
print(df)
# val
# tick tag obs
# 2016-02-26 C 2 0.0139
# 2016-02-27 A 2 0.5577
# 2016-02-28 C 6 0.0303
df = df.rename_axis(['index1', 'index2', 'index3']).reset_index()
print(df)
index1 index2 index3 val
0 2016-02-26 C 2 0.0139
1 2016-02-27 A 2 0.5577
2 2016-02-28 C 6 0.0303
A:
To provide a bit more clarity, let's look at a DataFrame with two levels in its index (a MultiIndex).
index = pd.MultiIndex.from_product([['TX', 'FL', 'CA'],
['North', 'South']],
names=['State', 'Direction'])
df = pd.DataFrame(index=index,
data=np.random.randint(0, 10, (6,4)),
columns=list('abcd'))
The reset_index method, called with the default parameters, converts all index levels to columns and uses a simple RangeIndex as new index.
df.reset_index()
Use the level parameter to control which index levels are converted into columns. If possible, use the level name, which is more explicit. If there are no level names, you can refer to each level by its integer location, which begin at 0 from the outside. You can use a scalar value here or a list of all the indexes you would like to reset.
df.reset_index(level='State') # same as df.reset_index(level=0)
In the rare event that you want to preserve the index and turn the index into a column, you can do the following:
# for a single level
df.assign(State=df.index.get_level_values('State'))
# for all levels
df.assign(**df.index.to_frame())
A:
For MultiIndex you can extract its subindex using
df['si_name'] = R.index.get_level_values('si_name')
where si_name is the name of the subindex.
A:
If you want to use the reset_index method and also preserve your existing index you should use:
df.reset_index().set_index('index', drop=False)
or to change it in place:
df.reset_index(inplace=True)
df.set_index('index', drop=False, inplace=True)
For example:
print(df)
gi ptt_loc
0 384444683 593
4 384444684 594
9 384444686 596
print(df.reset_index())
index gi ptt_loc
0 0 384444683 593
1 4 384444684 594
2 9 384444686 596
print(df.reset_index().set_index('index', drop=False))
index gi ptt_loc
index
0 0 384444683 593
4 4 384444684 594
9 9 384444686 596
And if you want to get rid of the index label you can do:
df2 = df.reset_index().set_index('index', drop=False)
df2.index.name = None
print(df2)
index gi ptt_loc
0 0 384444683 593
4 4 384444684 594
9 9 384444686 596
A:
This should do the trick (if not multilevel indexing) -
df.reset_index().rename({'index':'index1'}, axis = 'columns')
And of course, you can always set inplace = True, if you do not want to assign this to a new variable in the function parameter of rename.
A:
df1 = pd.DataFrame({"gi":[232,66,34,43],"ptt":[342,56,662,123]})
p = df1.index.values
df1.insert( 0, column="new",value = p)
df1
new gi ptt
0 0 232 342
1 1 66 56
2 2 34 662
3 3 43 123
A:
In the newest version of pandas 1.5.0, you could use the function reset_index with the new argument names to specify a list of names you want to give the index columns. Here is a reproducible example with one index column:
import pandas as pd
df = pd.DataFrame({"gi":[232,66,34,43],"ptt":[342,56,662,123]})
gi ptt
0 232 342
1 66 56
2 34 662
3 43 123
df.reset_index(names=['new'])
Output:
new gi ptt
0 0 232 342
1 1 66 56
2 2 34 662
3 3 43 123
This can also easily be applied with MultiIndex. Just create a list of the names you want.
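For instance, a minimal sketch with a two-level MultiIndex (requires pandas >= 1.5.0; the level names here are assumptions):
import pandas as pd

idx = pd.MultiIndex.from_tuples([('TX', 'North'), ('TX', 'South')])
df = pd.DataFrame({'val': [1, 2]}, index=idx)

# One name per index level, outermost first.
print(df.reset_index(names=['state', 'direction']))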
A:
I usually do it this way:
df = df.assign(index1=df.index)
|
How to convert index of a pandas dataframe into a column
|
This seems rather obvious, but I can't seem to figure out how to convert the index of a data frame to a column?
For example:
df=
gi ptt_loc
0 384444683 593
1 384444684 594
2 384444686 596
To,
df=
index1 gi ptt_loc
0 0 384444683 593
1 1 384444684 594
2 2 384444686 596
|
[
"either:\ndf['index1'] = df.index\n\nor, .reset_index:\ndf = df.reset_index(level=0)\n\n\nso, if you have a multi-index frame with 3 levels of index, like:\n>>> df\n val\ntick tag obs \n2016-02-26 C 2 0.0139\n2016-02-27 A 2 0.5577\n2016-02-28 C 6 0.0303\n\nand you want to convert the 1st (tick) and 3rd (obs) levels in the index into columns, you would do:\n>>> df.reset_index(level=['tick', 'obs'])\n tick obs val\ntag \nC 2016-02-26 2 0.0139\nA 2016-02-27 2 0.5577\nC 2016-02-28 6 0.0303\n\n",
"rename_axis + reset_index\nYou can first rename your index to a desired label, then elevate to a series:\ndf = df.rename_axis('index1').reset_index()\n\nprint(df)\n\n index1 gi ptt_loc\n0 0 384444683 593\n1 1 384444684 594\n2 2 384444686 596\n\nThis works also for MultiIndex dataframes:\nprint(df)\n# val\n# tick tag obs \n# 2016-02-26 C 2 0.0139\n# 2016-02-27 A 2 0.5577\n# 2016-02-28 C 6 0.0303\n\ndf = df.rename_axis(['index1', 'index2', 'index3']).reset_index()\n\nprint(df)\n\n index1 index2 index3 val\n0 2016-02-26 C 2 0.0139\n1 2016-02-27 A 2 0.5577\n2 2016-02-28 C 6 0.0303\n\n",
"To provide a bit more clarity, let's look at a DataFrame with two levels in its index (a MultiIndex).\nindex = pd.MultiIndex.from_product([['TX', 'FL', 'CA'], \n ['North', 'South']], \n names=['State', 'Direction'])\n\ndf = pd.DataFrame(index=index, \n data=np.random.randint(0, 10, (6,4)), \n columns=list('abcd'))\n\n\nThe reset_index method, called with the default parameters, converts all index levels to columns and uses a simple RangeIndex as new index.\ndf.reset_index()\n\n\nUse the level parameter to control which index levels are converted into columns. If possible, use the level name, which is more explicit. If there are no level names, you can refer to each level by its integer location, which begin at 0 from the outside. You can use a scalar value here or a list of all the indexes you would like to reset.\ndf.reset_index(level='State') # same as df.reset_index(level=0)\n\n\nIn the rare event that you want to preserve the index and turn the index into a column, you can do the following:\n# for a single level\ndf.assign(State=df.index.get_level_values('State'))\n\n# for all levels\ndf.assign(**df.index.to_frame())\n\n",
"For MultiIndex you can extract its subindex using \ndf['si_name'] = R.index.get_level_values('si_name') \n\nwhere si_name is the name of the subindex.\n",
"If you want to use the reset_index method and also preserve your existing index you should use:\ndf.reset_index().set_index('index', drop=False)\n\nor to change it in place:\ndf.reset_index(inplace=True)\ndf.set_index('index', drop=False, inplace=True)\n\nFor example:\nprint(df)\n gi ptt_loc\n0 384444683 593\n4 384444684 594\n9 384444686 596\n\nprint(df.reset_index())\n index gi ptt_loc\n0 0 384444683 593\n1 4 384444684 594\n2 9 384444686 596\n\nprint(df.reset_index().set_index('index', drop=False))\n index gi ptt_loc\nindex\n0 0 384444683 593\n4 4 384444684 594\n9 9 384444686 596\n\nAnd if you want to get rid of the index label you can do:\ndf2 = df.reset_index().set_index('index', drop=False)\ndf2.index.name = None\nprint(df2)\n index gi ptt_loc\n0 0 384444683 593\n4 4 384444684 594\n9 9 384444686 596\n\n",
"This should do the trick (if not multilevel indexing) -\ndf.reset_index().rename({'index':'index1'}, axis = 'columns')\n\n\nAnd of course, you can always set inplace = True, if you do not want to assign this to a new variable in the function parameter of rename.\n",
"df1 = pd.DataFrame({\"gi\":[232,66,34,43],\"ptt\":[342,56,662,123]})\np = df1.index.values\ndf1.insert( 0, column=\"new\",value = p)\ndf1\n\n new gi ptt\n0 0 232 342\n1 1 66 56 \n2 2 34 662\n3 3 43 123\n\n",
"In the newest version of pandas 1.5.0, you could use the function reset_index with the new argument names to specify a list of names you want to give the index columns. Here is a reproducible example with one index column:\nimport pandas as pd\n\ndf = pd.DataFrame({\"gi\":[232,66,34,43],\"ptt\":[342,56,662,123]})\n\n gi ptt\n0 232 342\n1 66 56\n2 34 662\n3 43 123\n\ndf.reset_index(names=['new'])\n\nOutput:\n new gi ptt\n0 0 232 342\n1 1 66 56\n2 2 34 662\n3 3 43 123\n\nThis can also easily be applied with MultiIndex. Just create a list of the names you want.\n",
"I usually do it this way:\ndf = df.assign(index1=df.index)\n\n"
] |
[
1252,
56,
51,
42,
11,
11,
5,
2,
0
] |
[] |
[] |
[
"dataframe",
"indexing",
"pandas",
"python",
"series"
] |
stackoverflow_0020461165_dataframe_indexing_pandas_python_series.txt
|
Q:
Update row values where certain condition is met in pandas
Say I have the following dataframe:
What is the most efficient way to update the values of the columns feat and another_feat where the stream is number 2?
Is this it?
for index, row in df.iterrows():
if df1.loc[index,'stream'] == 2:
# do something
How do I do it if there are more than 100 columns? I don't want to explicitly name the columns that I want to update. I want to divide the value of each column by 2 (except for the stream column).
So to be clear, my goal is:
Dividing all values by 2 of all rows that have stream 2, but not changing the stream column.
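The dataframe itself is not reproduced above; a minimal reconstruction of the setup, assuming the column names and values used in the answers below:
import pandas as pd

df1 = pd.DataFrame({'stream': [1, 2, 2, 3],
                    'feat': [4, 4, 2, 1],
                    'another_feat': [5, 5, 9, 7]},
                   index=list('abcd'))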
A:
I think you can use loc if you need to update two columns to the same value:
df1.loc[df1['stream'] == 2, ['feat','another_feat']] = 'aaaa'
print df1
stream feat another_feat
a 1 some_value some_value
b 2 aaaa aaaa
c 2 aaaa aaaa
d 3 some_value some_value
If you need to update them separately, one option is to use:
df1.loc[df1['stream'] == 2, 'feat'] = 10
print df1
stream feat another_feat
a 1 some_value some_value
b 2 10 some_value
c 2 10 some_value
d 3 some_value some_value
Another common option is to use numpy.where:
df1['feat'] = np.where(df1['stream'] == 2, 10,20)
print df1
stream feat another_feat
a 1 20 some_value
b 2 10 some_value
c 2 10 some_value
d 3 20 some_value
EDIT: If you need to divide all columns except stream where the condition is True, use:
print df1
stream feat another_feat
a 1 4 5
b 2 4 5
c 2 2 9
d 3 1 7
# filter all columns except stream
cols = [col for col in df1.columns if col != 'stream']
print cols
['feat', 'another_feat']
df1.loc[df1['stream'] == 2, cols ] = df1 / 2
print df1
stream feat another_feat
a 1 4.0 5.0
b 2 2.0 2.5
c 2 1.0 4.5
d 3 1.0 7.0
If working with multiple conditions, it is possible to use multiple numpy.where
or numpy.select:
df0 = pd.DataFrame({'Col':[5,0,-6]})
df0['New Col1'] = np.where((df0['Col'] > 0), 'Increasing',
np.where((df0['Col'] < 0), 'Decreasing', 'No Change'))
df0['New Col2'] = np.select([df0['Col'] > 0, df0['Col'] < 0],
['Increasing', 'Decreasing'],
default='No Change')
print (df0)
Col New Col1 New Col2
0 5 Increasing Increasing
1 0 No Change No Change
2 -6 Decreasing Decreasing
A:
You can do the same with .ix, like this:
In [1]: df = pd.DataFrame(np.random.randn(5,4), columns=list('abcd'))
In [2]: df
Out[2]:
a b c d
0 -0.323772 0.839542 0.173414 -1.341793
1 -1.001287 0.676910 0.465536 0.229544
2 0.963484 -0.905302 -0.435821 1.934512
3 0.266113 -0.034305 -0.110272 -0.720599
4 -0.522134 -0.913792 1.862832 0.314315
In [3]: df.ix[df.a>0, ['b','c']] = 0
In [4]: df
Out[4]:
a b c d
0 -0.323772 0.839542 0.173414 -1.341793
1 -1.001287 0.676910 0.465536 0.229544
2 0.963484 0.000000 0.000000 1.934512
3 0.266113 0.000000 0.000000 -0.720599
4 -0.522134 -0.913792 1.862832 0.314315
EDIT
After the extra information, the following will return all columns - where some condition is met - with halved values:
>> condition = df.a > 0
>> df[condition][[i for i in df.columns.values if i not in ['a']]].apply(lambda x: x/2)
A:
Another vectorized solution is to use the mask() method to halve the rows corresponding to stream=2 and join() these columns to a dataframe that consists only of the stream column:
cols = ['feat', 'another_feat']
df[['stream']].join(df[cols].mask(df['stream'] == 2, lambda x: x/2))
or you can also update() the original dataframe:
df.update(df[cols].mask(df['stream'] == 2, lambda x: x/2))
Both of the above approaches halve the feat and another_feat values only in the rows where stream equals 2, leaving the stream column untouched.
mask() is even simpler to use if the value to replace is a constant (not derived using a function); e.g. the following code replaces all feat values corresponding to stream equal to 1 or 3 by 100.¹
df[['stream']].join(df.filter(like='feat').mask(df['stream'].isin([1,3]), 100))
¹ feat columns can be selected using filter() method as well.
|
Update row values where certain condition is met in pandas
|
Say I have the following dataframe:
What is the most efficient way to update the values of the columns feat and another_feat where the stream is number 2?
Is this it?
for index, row in df.iterrows():
if df1.loc[index,'stream'] == 2:
# do something
How do I do it if there are more than 100 columns? I don't want to explicitly name the columns that I want to update. I want to divide the value of each column by 2 (except for the stream column).
So to be clear, my goal is:
Dividing all values by 2 of all rows that have stream 2, but not changing the stream column.
|
[
"I think you can use loc if you need update two columns to same value:\ndf1.loc[df1['stream'] == 2, ['feat','another_feat']] = 'aaaa'\nprint df1\n stream feat another_feat\na 1 some_value some_value\nb 2 aaaa aaaa\nc 2 aaaa aaaa\nd 3 some_value some_value\n\nIf you need update separate, one option is use:\ndf1.loc[df1['stream'] == 2, 'feat'] = 10\nprint df1\n stream feat another_feat\na 1 some_value some_value\nb 2 10 some_value\nc 2 10 some_value\nd 3 some_value some_value\n\nAnother common option is use numpy.where:\ndf1['feat'] = np.where(df1['stream'] == 2, 10,20)\nprint df1\n stream feat another_feat\na 1 20 some_value\nb 2 10 some_value\nc 2 10 some_value\nd 3 20 some_value\n\nEDIT: If you need divide all columns without stream where condition is True, use:\nprint df1\n stream feat another_feat\na 1 4 5\nb 2 4 5\nc 2 2 9\nd 3 1 7\n\n#filter columns all without stream\ncols = [col for col in df1.columns if col != 'stream']\nprint cols\n['feat', 'another_feat']\n\ndf1.loc[df1['stream'] == 2, cols ] = df1 / 2\nprint df1\n stream feat another_feat\na 1 4.0 5.0\nb 2 2.0 2.5\nc 2 1.0 4.5\nd 3 1.0 7.0\n\nIf working with multiple conditions is possible use multiple numpy.where\nor numpy.select:\ndf0 = pd.DataFrame({'Col':[5,0,-6]})\n\ndf0['New Col1'] = np.where((df0['Col'] > 0), 'Increasing', \n np.where((df0['Col'] < 0), 'Decreasing', 'No Change'))\n\ndf0['New Col2'] = np.select([df0['Col'] > 0, df0['Col'] < 0],\n ['Increasing', 'Decreasing'], \n default='No Change')\n\nprint (df0)\n Col New Col1 New Col2\n0 5 Increasing Increasing\n1 0 No Change No Change\n2 -6 Decreasing Decreasing\n\n",
"You can do the same with .ix, like this:\nIn [1]: df = pd.DataFrame(np.random.randn(5,4), columns=list('abcd'))\n\nIn [2]: df\nOut[2]: \n a b c d\n0 -0.323772 0.839542 0.173414 -1.341793\n1 -1.001287 0.676910 0.465536 0.229544\n2 0.963484 -0.905302 -0.435821 1.934512\n3 0.266113 -0.034305 -0.110272 -0.720599\n4 -0.522134 -0.913792 1.862832 0.314315\n\nIn [3]: df.ix[df.a>0, ['b','c']] = 0\n\nIn [4]: df\nOut[4]: \n a b c d\n0 -0.323772 0.839542 0.173414 -1.341793\n1 -1.001287 0.676910 0.465536 0.229544\n2 0.963484 0.000000 0.000000 1.934512\n3 0.266113 0.000000 0.000000 -0.720599\n4 -0.522134 -0.913792 1.862832 0.314315\n\nEDIT\nAfter the extra information, the following will return all columns - where some condition is met - with halved values:\n>> condition = df.a > 0\n>> df[condition][[i for i in df.columns.values if i not in ['a']]].apply(lambda x: x/2)\n\n",
"Another vectorized solution is to use the mask() method to halve the rows corresponding to stream=2 and join() these columns to a dataframe that consists only of the stream column:\ncols = ['feat', 'another_feat']\ndf[['stream']].join(df[cols].mask(df['stream'] == 2, lambda x: x/2))\n\nor you can also update() the original dataframe:\ndf.update(df[cols].mask(df['stream'] == 2, lambda x: x/2))\n\nBoth of the above codes do the following:\n\n\nmask() is even simpler to use if the value to replace is a constant (not derived using a function); e.g. the following code replaces all feat values corresponding to stream equal to 1 or 3 by 100.1\ndf[['stream']].join(df.filter(like='feat').mask(df['stream'].isin([1,3]), 100))\n\n\n1: feat columns can be selected using filter() method as well.\n"
] |
[
305,
4,
0
] |
[] |
[] |
[
"indexing",
"iterator",
"mask",
"pandas",
"python"
] |
stackoverflow_0036909977_indexing_iterator_mask_pandas_python.txt
|
Q:
Streamlit app not searching files in the good directory
I am trying to run a Streamlit app importing pickle files and a DataFrame. The pathfile for my script is :
/Users/myname/Documents/Master2/Python/Final_Project/streamlit_app.py
And the one for my DataFrame is:
/Users/myname/Documents/Master2/Python/Final_Project/data/metabolic_syndrome.csv
One could reasonably argue that I only need to specify df = pd.read_csv('data/df.csv') yet it does not work as the Streamlit app is unexpectedly not searching in its directory:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/myname/data/metabolic_syndrome.csv'
How can I manage to make the app look for the files in the good directory (the one where it is saved) without having to use absolute pathfiles ?
A:
you can use os.getcwd() to get the Current Working Directory. Read up on what this means exactly, for example here
See the sample script below on how to use it. I'm using os.sep for OS-agnostic filepath separators.
import os
print(os.getcwd())
relative_path = "data"
full_path = f"{os.getcwd()}{os.sep}{relative_path}"
print(full_path)
filename = "somefile.csv"
full_file_path = f"{full_path}{os.sep}{filename}"
print(full_file_path)
with open(full_file_path) as infile:
for line in infile.read().splitlines():
print(line)
A:
In which directory are you standing when you are running your code?
From your error message I would assume that you are standing in /Users/myname/ which makes python look for data as a subdirectory of /Users/myname/.
But if you first change directory to /Users/myname/Documents/Master2/Python/Final_Project and then run your code from there I think it would work.
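A more robust alternative is to build paths relative to the script file itself, so the app finds its data no matter which directory it is launched from (a minimal sketch; the file names are the ones from the question):
from pathlib import Path
import pandas as pd

# Directory containing streamlit_app.py
BASE_DIR = Path(__file__).resolve().parent

df = pd.read_csv(BASE_DIR / 'data' / 'metabolic_syndrome.csv')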
|
Streamlit app not searching files in the good directory
|
I am trying to run a Streamlit app importing pickle files and a DataFrame. The pathfile for my script is :
/Users/myname/Documents/Master2/Python/Final_Project/streamlit_app.py
And the one for my DataFrame is:
/Users/myname/Documents/Master2/Python/Final_Project/data/metabolic_syndrome.csv
One could reasonably argue that I only need to specify df = pd.read_csv('data/df.csv') yet it does not work as the Streamlit app is unexpectedly not searching in its directory:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/myname/data/metabolic_syndrome.csv'
How can I manage to make the app look for the files in the good directory (the one where it is saved) without having to use absolute pathfiles ?
|
[
"you can use os.getcwd() to get the Current Working Directory. Read up on what this means exactly, fe here\nSee the sample script below on how to use it. I'm using os.sep for OS agnostic filepath separators.\nimport os\n\n\nprint(os.getcwd())\n\nrelative_path = \"data\"\nfull_path = f\"{os.getcwd()}{os.sep}{relative_path}\"\nprint(full_path)\n\nfilename = \"somefile.csv\"\nfull_file_path = f\"{full_path}{os.sep}{filename}\"\nprint(full_file_path)\n\nwith open(full_file_path) as infile:\n for line in infile.read().splitlines():\n print(line)\n\n",
"In which directory are you standing when you are running your code?\nFrom your error message I would assume that you are standing in /Users/myname/ which makes python look for data as a subdirectory of /Users/myname/.\nBut if you first change directory to /Users/myname/Documents/Master2/Python/Final_Project and then run your code from there I think it would work.\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"streamlit"
] |
stackoverflow_0074530787_python_streamlit.txt
|
Q:
requests.Session with client certificates and own CA
Here is my code
os.environ['REQUESTS_CA_BUNDLE'] = os.path.join('/path/to/','ca-own.crt')
s = requests.Session()
s.cert = ('some.crt', 'some.key')
s.get('https://some.site.com')
Last instruction returns:
requests.exceptions.SSLError: HTTPSConnectionPool(host='some.site.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')))
With curl:
curl --cacert ca-own.crt --key some.key --cert some.crt https://some.site.com
returns normal html code.
How can I make python requests.Session send the correct certificates to the endpoint?
P.S. The same situation occurs if I add the following
s.verify = 'some.crt'
or
cat some.crt ca-own.crt > res.crt
s.verify = 'res.crt'
P.P.S.
cat some.crt some.key > res.pem
s.cert = "res.pem"
requests.exceptions.SSLError: HTTPSConnectionPool(host='some.site.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')))
cat ca-own.crt some.crt some.key > res.pem
s.cert = "res.pem"
requests.exceptions.SSLError: HTTPSConnectionPool(host='some.site.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(116, '[X509: KEY_VALUES_MISMATCH] key values mismatch (_ssl.c:4067)')))
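A note on the P.P.S. attempts: when bundling a single PEM for s.cert, OpenSSL expects the client (leaf) certificate first in the file, immediately matched by its key; prepending the CA certificate is what triggers the KEY_VALUES_MISMATCH error. A sketch of a layout that should work, keeping the CA in verify rather than cert:
# cat some.crt some.key > res.pem   (client cert first, then its key)
import requests

s = requests.Session()
s.cert = 'res.pem'          # client certificate + key
s.verify = 'ca-own.crt'     # CA certificate, separate from the client chain
s.get('https://some.site.com')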
A:
The above code will work if you put verify=False in the GET request, but that is not ideal security-wise (man-in-the-middle attacks), so you need to point the verify parameter at the CA (issuer's) certificate file instead. More info here
session = requests.Session()
session.verify = "/path/to/issuers_certificate"  # CA certificate

session.get('https://some.site.com')
A:
you can try this -
session = requests.Session()
session.verify = "your CA cert"
response = session.get(url, cert=('path of client cert','path of client key'))
session.close()
|
requests.Session with client certificates and own CA
|
Here is my code
os.environ['REQUESTS_CA_BUNDLE'] = os.path.join('/path/to/','ca-own.crt')
s = requests.Session()
s.cert = ('some.crt', 'some.key')
s.get('https://some.site.com')
Last instruction returns:
requests.exceptions.SSLError: HTTPSConnectionPool(host='some.site.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')))
With curl:
curl --cacert ca-own.crt --key some.key --cert some.crt https://some.site.com
returns normal html code.
How can I make python requests.Session send the correct certificates to the endpoint?
P.S. The same situation occurs if I add the following
s.verify = 'some.crt'
or
cat some.crt ca-own.crt > res.crt
s.verify = 'res.crt'
P.P.S.
cat some.crt some.key > res.pem
s.cert = "res.pem"
requests.exceptions.SSLError: HTTPSConnectionPool(host='some.site.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')))
cat ca-own.crt some.crt some.key > res.pem
s.cert = "res.pem"
requests.exceptions.SSLError: HTTPSConnectionPool(host='some.site.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(116, '[X509: KEY_VALUES_MISMATCH] key values mismatch (_ssl.c:4067)')))
|
[
"Above code will work if you put verify=False in the GET request, but it's not ideal security wise(Man in the middle attacks) thus you need to add the CA certificate(issuer's certificate) file to the verify parameter. More info here\nsession = requests.Session()\nsession.verify = \"/path/to/issuer's certificate\"(CA certificate)\n\nsession.get('https://some.site.com')\n\n",
"you can try this -\nsession = requests.Session()\nsession.verify = \"your CA cert\"\nresponse = session.get(url, cert=('path of client cert','path of client key'))\nsession.close()\n"
] |
[
0,
0
] |
[] |
[] |
[
"ca",
"client_certificates",
"python",
"python_3.x",
"session"
] |
stackoverflow_0071955825_ca_client_certificates_python_python_3.x_session.txt
|
Q:
Computing the distance matrix from an adjacency matrix in python
Write code that produces the distance matrix from a graph (graph theory); the code should use the adjacency matrix and cannot use any functions from the NetworkX module, apart from networkx.adjacency_matrix().
I understand the process of how the distance matrix works. My theory of how the adjacency matrix is involved is that it takes an element that connects two nodes and adds the distance up. For example, let's say I have nodes A, B and C. A is connected to B, and B is connected to C. The distance between two connected nodes is 1. So the distance from A to C would be 2.
My only problem is how I can implement this in code so that it creates a distance matrix for any given graph.
Thank you for any help; sorry if my explanation is unclear, please let me know if you would like me to clarify anything.
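One way to implement the idea sketched above is a breadth-first search from every node over the adjacency matrix, counting hops; a minimal sketch (assuming an unweighted graph, and using only networkx.adjacency_matrix() from NetworkX):
from collections import deque
import networkx as nx

def distance_matrix(G):
    A = nx.adjacency_matrix(G).toarray()  # the one allowed NetworkX call
    n = len(A)
    dist = [[float('inf')] * n for _ in range(n)]
    for src in range(n):
        dist[src][src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in range(n):
                # each traversed edge adds 1 hop; unreachable nodes stay at inf
                if A[u][v] and dist[src][v] == float('inf'):
                    dist[src][v] = dist[src][u] + 1
                    queue.append(v)
    return dist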
|
Computing the distance matrix from an adjacency matrix in python
|
Write code that produces the distance matrix from a graph (graph theory); the code should use the adjacency matrix and cannot use any functions from the NetworkX module, apart from networkx.adjacency_matrix().
I understand the process of how the distance matrix works. My theory of how the adjacency matrix is involved is that it takes an element that connects two nodes and adds the distance up. For example, let's say I have nodes A, B and C. A is connected to B, and B is connected to C. The distance between two connected nodes is 1. So the distance from A to C would be 2.
My only problem is how I can implement this in code so that it creates a distance matrix for any given graph.
Thank you for any help; sorry if my explanation is unclear, please let me know if you would like me to clarify anything.
|
[] |
[] |
[
"Check if the below code helps.\n#G is a networkX graph.\ndef get_actual_distance_between_two_nodes(G, i, j):\n pos=nx.spring_layout(G, seed=random_seed)\n sp = nx.shortest_path(G, i, j)\n edges_set = [[sp[i], sp[i+1]] for i in range(len(sp)-1)]\n\n distance_list = []\n for edge in edges_set:\n start_node = edge[0]\n end_node = edge[1]\n\n x1 = pos[start_node][0]\n y1 = pos[start_node][1]\n x2 = pos[end_node][0]\n y2 = pos[end_node][1]\n\n distance = math.dist([x1,y1], [x2,y2])\n distance_list.append(distance)\n\n return (sum(distance_list)) \n \ndef nodes_connected(G, u, v):\n return u in G.neighbors(v)\n\ndef create_distance_matrix(G, nodes_list):\n distance_matrix_custom = []\n for i in range(len(nodes_list)):\n current_node = nodes_list[i]\n #print(\"Current Node: --->\", current_node)\n list_of_distance = []\n for j in range(len(nodes_list)):\n target_node = nodes_list[j]\n #print(\"Target Node: --->\", target_node)\n \n if(current_node == target_node):\n actual_distance = 0\n elif(G.has_edge(current_node, target_node)):\n actual_distance = get_actual_distance_between_two_nodes(G, current_node, target_node) \n else:\n #actual_distance = float('inf')\n actual_distance = 10000000000\n list_of_distance.append(actual_distance)\n distance_matrix_custom.append(list_of_distance)\n\n return distance_matrix_custom\n \ndistance_matrix_custom = create_distance_matrix(G, nodes_list)\n\n"
] |
[
-1
] |
[
"adjacency_matrix",
"distance_matrix",
"python"
] |
stackoverflow_0055328620_adjacency_matrix_distance_matrix_python.txt
|
Q:
Python type hints: How to use Literal with strings to conform with mypy?
I want to restrict the possible input arguments by using typing.Literal.
The following code works just fine, however, mypy is complaining.
from typing import Literal
def literal_func(string_input: Literal["best", "worst"]) -> int:
if string_input == "best":
return 1
elif string_input == "worst":
return 0
literal_func(string_input="best") # works just fine with mypy
# The following call leads to an error with mypy:
# error: Argument "string_input" to "literal_func" has incompatible type "str";
# expected "Literal['best', 'worst']" [arg-type]
input_string = "best"
literal_func(string_input=input_string)
A:
Unfortunately, mypy does not narrow the type of input_string to Literal["best"]. You can help it with a proper type annotation:
input_string: Literal["best"] = "best"
literal_func(string_input=input_string)
Perhaps worth mentioning that pyright works just fine with your example.
Alternatively, the same can be achieved by annotating the input_string as Final:
from typing import Final, Literal
...
input_string: Final = "best"
literal_func(string_input=input_string)
|
Python type hints: How to use Literal with strings to conform with mypy?
|
I want to restrict the possible input arguments by using typing.Literal.
The following code works just fine, however, mypy is complaining.
from typing import Literal
def literal_func(string_input: Literal["best", "worst"]) -> int:
if string_input == "best":
return 1
elif string_input == "worst":
return 0
literal_func(string_input="best") # works just fine with mypy
# The following call leads to an error with mypy:
# error: Argument "string_input" to "literal_func" has incompatible type "str";
# expected "Literal['best', 'worst']" [arg-type]
input_string = "best"
literal_func(string_input=input_string)
|
[
"Unfortunately, mypy does not narrow the type of input_string to Literal[\"best\"]. You can help it with a proper type annotation:\ninput_string: Literal[\"best\"] = \"best\"\nliteral_func(string_input=input_string)\n\nPerhaps worth mentioning that pyright works just fine with your example.\n\nAlternatively, the same can be achieved by annotating the input_string as Final:\nfrom typing import Final, Literal\n\n...\n\ninput_string: Final = \"best\"\nliteral_func(string_input=input_string)\n\n"
] |
[
1
] |
[] |
[] |
[
"literals",
"mypy",
"python"
] |
stackoverflow_0074557655_literals_mypy_python.txt
|
Q:
pyautogui screenshot command is not working
import pyautogui
myScreenshot = pyautogui.screenshot()
myScreenshot.save(r'C:\Users\"my user name"\PycharmProjects\"my project"\ name.png')
I don't know what I did wrong but any similar command is not working (I have installed pyautogui).
A:
If you already have PIL (Pillow) installed, you'll need to upgrade it via the command prompt command
pip install Pillow --upgrade
A:
Just install Pillow package using pip:
pip install Pillow
or
pip3 install Pillow
A:
Try to upgrade your PyAutoGui module using the following command:
pip install pyautogui --upgrade
I hope it helps you!
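As an aside, the save path in the question is itself problematic: a Windows path should not contain literal double quotes, and there is a stray space before name.png. A corrected sketch (the user and project names are placeholders):
import pyautogui

shot = pyautogui.screenshot()
shot.save(r'C:\Users\my user name\PycharmProjects\my project\name.png')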
|
pyautogui screenshot command is not working
|
import pyautogui
myScreenshot = pyautogui.screenshot()
myScreenshot.save(r'C:\Users\"my user name"\PycharmProjects\"my project"\ name.png')
I don't know what I did wrong but any similar command is not working (I have installed pyautogui).
|
[
"If you already have PIL (Pillow) installed, you'll need to upgrade it via the command prompt command\npip install Pillow --upgrade\n\n",
"Just install Pillow package using pip:\npip install Pillow\n\nor\npip3 install Pillow\n\n",
"Try to upgrade your PyAutoGui module using the following command:\npip install pyautogui --upgrade\n\nI hope it helps you!\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"pyautogui",
"python",
"screenshot"
] |
stackoverflow_0069526177_pyautogui_python_screenshot.txt
|
Q:
Can't connect to local MySQL server through socket '/tmp/mysql.sock'
When I attempted to connect to a local MySQL server during my test suite, it
fails with the error:
OperationalError: (2002, "Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)")
However, I'm able to at all times, connect to MySQL by running the command line
mysql program. A ps aux | grep mysql shows the server is running, and
stat /tmp/mysql.sock confirm that the socket exists. Further, if I open a
debugger in except clause of that exception, I'm able to reliably connect
with the exact same parameters.
This issue reproduces fairly reliably, however it doesn't appear to be 100%,
because every once in a blue moon, my test suite does in fact run without
hitting this error. When I attempted to run with sudo dtruss it did not reproduce.
All the client code is in Python, though I can't figure how that'd be relevant.
Switching to use host 127.0.0.1 produces the error:
DatabaseError: Can't connect to MySQL server on '127.0.0.1' (61)
A:
sudo /usr/local/mysql/support-files/mysql.server start
This worked for me. However, if this doesn't work, make sure that mysqld is running and try connecting.
A:
The relevant section of the MySQL manual is here. I'd start by going through the debugging steps listed there.
Also, remember that localhost and 127.0.0.1 are not the same thing in this context:
If host is set to localhost, then a socket or pipe is used.
If host is set to 127.0.0.1, then the client is forced to use TCP/IP.
So, for example, you can check if your database is listening for TCP connections vi netstat -nlp. It seems likely that it IS listening for TCP connections because you say that mysql -h 127.0.0.1 works just fine. To check if you can connect to your database via sockets, use mysql -h localhost.
If none of this helps, then you probably need to post more details about your MySQL config, exactly how you're instantiating the connection, etc.
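Translated to the Python client side, a minimal sketch of forcing each transport explicitly (shown here with pymysql; credentials and paths are placeholders):
import pymysql

# TCP/IP: an IP host forces a network connection
conn_tcp = pymysql.connect(host='127.0.0.1', port=3306,
                           user='user', password='pwd', db='db')

# Unix socket: bypasses TCP entirely
conn_sock = pymysql.connect(unix_socket='/tmp/mysql.sock',
                            user='user', password='pwd', db='db')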
A:
For me the problem was I wasn't running MySQL Server.
Run server first and then execute mysql.
$ mysql.server start
$ mysql -h localhost -u root -p
A:
I just changed the HOST from localhost to 127.0.0.1 and it works fine:
# settings.py of Django project
...
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'db_name',
'USER': 'username',
'PASSWORD': 'password',
'HOST': '127.0.0.1',
'PORT': '',
},
...
A:
I've seen this happen at my shop when my devs have a stack manager like MAMP installed that comes preconfigured with MySQL installed in a non-standard place.
At your terminal run
mysql_config --socket
That will give you the path to the sock file. Take that path and use it in your DATABASES HOST parameter.
What you need to do is point your
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'test',
'USER': 'test',
'PASSWORD': 'test',
'HOST': '/Applications/MAMP/tmp/mysql/mysql.sock',
'PORT': '',
},
}
NOTE
also run which mysql_config; if you somehow have multiple instances of mysql server installed on the machine, you may be connecting to the wrong one.
A:
For me, the mysql server was not running. So, I started the mysql server through
mysql.server start
then
mysql_secure_installation
to secure the server, and now I can connect to
the MySQL server through
mysql -u root -p
or
sudo mysql -u root -p
depending on your installation.
A:
If you lose your mysql daemon on macOS but it is present in another path, for example in /private/var, do the following:
1)
ln -s /private/var/mysql/mysql.sock /tmp/mysql.sock
2) restart your connection to mysql with:
mysql -u username -p -h host databasename
This works also for mariadb.
A:
Run the below command in a terminal:
/usr/local/mysql/bin/mysqld_safe
Then restart the machine for it to take effect. It works!!
A:
After attempting a few of these solutions and not having any success, this is what worked for me:
Restart system
mysql.server start
Success!
A:
To those who upgraded from 5.7 to 8.0 via homebrew, this error is likely caused by the upgrade not being complete. In my case, mysql.server start got me the following error:
ERROR! The server quit without updating PID file
I then checked the log file via cat /usr/local/var/mysql/YOURS.err | tail -n 50, and found the following:
InnoDB: Upgrade after a crash is not supported.
If you are on the same boat, first install mysql@5.7 via homebrew, stop the server, and then start the 8.0 system again.
brew install mysql@5.7
/usr/local/opt/mysql@5.7/bin/mysql.server start
/usr/local/opt/mysql@5.7/bin/mysql.server stop
Then,
mysql.server start
This would get your MySQL (8.0) working again.
A:
Check the number of open files for the mysql process using the lsof command.
Increase the open-files limit and run again.
A:
This may be one of the following problems.
1. Incorrect mysql socket.
solution: You have to find out the correct mysql socket by running
mysqladmin -p variables | grep socket
and then put it in your db connection code:
pymysql.connect(db='db', user='user', passwd='pwd', unix_socket="/tmp/mysql.sock")
/tmp/mysql.sock is the socket returned by the grep
2. Incorrect mysql port
solution: You have to find out the correct mysql port:
mysqladmin -p variables | grep port
and then in your code:
pymysql.connect(db='db', user='user', passwd='pwd', host='localhost', port=3306)
3306 is the port returned from the grep
I think first option will resolve your problem.
A:
I have two sneaky conjectures on this one
CONJECTURE #1
Look into the possibility of not being able to access the /tmp/mysql.sock file. When I set up MySQL databases, I normally let the socket file sit in /var/lib/mysql. If you log in to mysql as root@localhost, your OS session needs access to the /tmp folder. Make sure /tmp has the correct access rights in the OS. Also, make sure the sudo user can always read files in /tmp.
CONJECTURE #2
Accessing mysql via 127.0.0.1 can cause some confusion if you are not paying attention. How?
From the command line, if you connect to MySQL with 127.0.0.1, you may need to specify the TCP/IP protocol.
mysql -uroot -p -h127.0.0.1 --protocol=tcp
or try the DNS name
mysql -uroot -p -hDNSNAME
This will bypass logging in as root@localhost, but make sure you have root@'127.0.0.1' defined.
Next time you connect to MySQL, run this:
SELECT USER(),CURRENT_USER();
What does this give you?
USER() reports how you attempted to authenticate in MySQL
CURRENT_USER() reports how you were allowed to authenticate in MySQL
If these functions return with the same values, then you are connecting and authenticating as expected. If the values are different, you may need to create the corresponding user root@127.0.0.1.
A:
I think I saw this same behavior some time ago, but can't remember the details.
In our case, the problem was the moment the test runner initialises database connections, relative to the first database interaction required, for instance by the import of a module in settings.py or some __init__.py.
I'll try to dig up some more info, but this might already ring a bell for your case.
A:
Make sure your /etc/hosts has 127.0.0.1 localhost in it and it should work fine
A:
If you get an error like the one below:
django.db.utils.OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
then just find your mysqld.sock file location and add it to "HOST".
For example, I am using XAMPP on Linux, so my mysqld.sock file is in another location and '/var/run/mysqld/mysqld.sock' does not work:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'asd',
'USER' : 'root',
'PASSWORD' : '',
'HOST' : '/opt/lampp/var/mysql/mysql.sock',
'PORT' : ''
}
}
A:
Had this same problem. Turned out mysqld had stopped running (I'm on Mac OSX). I restarted it and the error went away.
I figured out that mysqld was not running largely because of this link:
http://dev.mysql.com/doc/refman/5.6/en/can-not-connect-to-server.html
Notice the first tip!
A:
I had to kill off all instances of mysql by first finding all the process IDs:
ps aux | grep mysql
And then killing them off:
kill -9 {pid}
Then:
mysql.server start
Worked for me.
A:
Check that your mysql has not reached maximum connections, or is not in some sort of booting loop as happens quite often if the settings are incorrect in my.cnf.
Use ps aux | grep mysql to check if the PID is changing.
A:
Looked around online too long not to contribute. After trying to type in the mysql prompt from the command line, I was continuing to receive this message:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
This was due to the fact that my local mysql server was no longer running. In order to restart the server, I navigated to
shell> cd /usr/local/bin
where my mysql.server was located. From here, simply type:
shell> mysql.server start
This will relaunch the local mysql server.
From there you can reset the root password if need be:
mysql> UPDATE mysql.user SET Password=PASSWORD('MyNewPass')
-> WHERE User='root';
mysql> FLUSH PRIVILEGES;
A:
The socket is located in /tmp. On Unix systems, due to modes & ownerships on /tmp, this could cause some problems. But, as long as you tell us that you CAN use your mysql connection normally, I guess it is not a problem on your system. A first check should be to relocate mysql.sock to a more neutral directory.
The fact that the problem occurs "randomly" (or not every time) makes me think that it could be a server problem.
Is your /tmp located on a standard disk, or on an exotic mount (like in the RAM)?
Is your /tmp empty?
Does iotop show you something wrong when you encounter the problem?
A:
# shell script; ignore the leading $
$ $(dirname `which mysql`)/mysql.server start
May be helpful.
A:
If you installed through Homebrew, try to run
brew services start mysql
A:
Configure your DB connection in the 'Manage DB Connections dialog. Select 'Standard (TCP/IP)' as connection method.
See this page for more details
http://dev.mysql.com/doc/workbench/en/wb-manage-db-connections.html
According to this other page a socket file is used even if you specify localhost.
A Unix socket file is used if you do not specify a host name or if you
specify the special host name localhost.
It also shows how to check on your server by running these commands:
If a mysqld process is running, you can check it by trying the
following commands. The port number or Unix socket file name might be
different in your setup. host_ip represents the IP address of the
machine where the server is running.
shell> mysqladmin version
shell> mysqladmin variables
shell> mysqladmin -h `hostname` version variables
shell> mysqladmin -h `hostname` --port=3306 version
shell> mysqladmin -h host_ip version
shell> mysqladmin --protocol=SOCKET --socket=/tmp/mysql.sock version
A:
In ubuntu 14.04 you can do this to solve the problem:
zack@zack:~/pycodes/python-scraping/chapter5$ mysqladmin -p variables | grep socket
Enter password:
| socket | /var/run/mysqld/mysqld.sock |
zack@zack:~/pycodes/python-scraping/chapter5$ ln -s /var/run/mysqld/mysqld.sock /tmp/mysql.sock
zack@zack:~/pycodes/python-scraping/chapter5$ ll /tmp/mysql.sock
lrwxrwxrwx 1 zack zack 27 11月 29 13:08 /tmp/mysql.sock -> /var/run/mysqld/mysqld.sock
A:
For me, I was sure mysqld was started and the command line mysql worked properly, but the httpd server showed the issue (can't connect to mysql through socket).
I had started the service with mysqld_safe &.
Finally, I found that when I started the mysqld service with service mysqld start there were issues (an SELinux permission issue); when I fixed the SELinux issue and started mysqld with "service mysqld start", the httpd connection issue disappeared. But when I started mysqld with mysqld_safe &, mysqld worked (the mysql client worked properly), yet there were still issues when connecting from httpd.
A:
If it's socket related read this file
/etc/mysql/my.cnf
and see what is the standard socket location. It's a line like:
socket = /var/run/mysqld/mysqld.sock
now create an alias for your shell like:
alias mysql="mysql --socket=/var/run/mysqld/mysqld.sock"
This way you don't need root privileges.
A:
Simply try to run mysqld.
This was what was not working for me on Mac.
If it doesn't work, try going to /usr/local/var/mysql/<your_name>.err to see detailed error logs.
A:
Using MacOS Mojave 10.14.6 for MySQL 8.0.19 installed via Homebrew
Ran sudo find / -name my.cnf
File found at /usr/local/etc/my.cnf
Worked for a time then eventually the error returned. Uninstalled the Homebrew version of MySQL and installed the .dmg file directly from here
Happily connecting since then.
A:
In my case what helped was to edit the file /etc/mysql/mysql.conf.d/mysqld.cnfand replace the line:
socket = /var/run/mysqld/mysqld.sock
with
socket = /tmp/mysql.sock
Then I restarted the server and it worked fine. The funny thing is that if I put the line back as it was before and restarted, it still worked.
A:
I had faced a similar problem recently. Went through many answers. I got it working with the following steps.
change the socket path in /etc/my.cnf (as I was repeatedly getting errors with /tmp/mysql.sock) reference to change the socket path
run mysqld_safe to restart the server, as it is the recommended way to restart in case of errors. reference to mysqld_safe
A:
The following steps would help:
mysql.server start
For more details go to the ref on Medium.
A:
I'm using macOS Monterey. I fixed it by changing the file located at /opt/homebrew/etc/my.cnf from "bind-address = 127.0.0.1" to "bind-address = localhost".
A:
You can try
$ mysql.server start
then
$ mysql -h localhost -u root -p
At the password prompt just press enter and it will start;
then you can change the password.
A:
This worked for me :
mysql.server start
mysql -u root -p
|
Can't connect to local MySQL server through socket '/tmp/mysql.sock'
|
When I attempted to connect to a local MySQL server during my test suite, it
fails with the error:
OperationalError: (2002, "Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)")
However, I'm able to at all times, connect to MySQL by running the command line
mysql program. A ps aux | grep mysql shows the server is running, and
stat /tmp/mysql.sock confirm that the socket exists. Further, if I open a
debugger in except clause of that exception, I'm able to reliably connect
with the exact same parameters.
This issue reproduces fairly reliably, however it doesn't appear to be 100%,
because every once in a blue moon, my test suite does in fact run without
hitting this error. When I attempted to run with sudo dtruss it did not reproduce.
All the client code is in Python, though I can't figure how that'd be relevant.
Switching to use host 127.0.0.1 produces the error:
DatabaseError: Can't connect to MySQL server on '127.0.0.1' (61)
|
[
"sudo /usr/local/mysql/support-files/mysql.server start \n\nThis worked for me. However, if this doesnt work then make sure that mysqld is running and try connecting.\n",
"The relevant section of the MySQL manual is here. I'd start by going through the debugging steps listed there.\nAlso, remember that localhost and 127.0.0.1 are not the same thing in this context:\n\nIf host is set to localhost, then a socket or pipe is used.\nIf host is set to 127.0.0.1, then the client is forced to use TCP/IP.\n\nSo, for example, you can check if your database is listening for TCP connections vi netstat -nlp. It seems likely that it IS listening for TCP connections because you say that mysql -h 127.0.0.1 works just fine. To check if you can connect to your database via sockets, use mysql -h localhost. \nIf none of this helps, then you probably need to post more details about your MySQL config, exactly how you're instantiating the connection, etc.\n",
"For me the problem was I wasn't running MySQL Server.\nRun server first and then execute mysql.\n$ mysql.server start\n$ mysql -h localhost -u root -p\n\n",
"I just changed the HOST from localhost to 127.0.0.1 and it works fine:\n# settings.py of Django project\n...\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'NAME': 'db_name',\n 'USER': 'username',\n 'PASSWORD': 'password',\n 'HOST': '127.0.0.1',\n 'PORT': '',\n},\n...\n\n",
"I've seen this happen at my shop when my devs have a stack manager like MAMP installed that comes preconfigured with MySQL installed in a non standard place.\nat your terminal run \nmysql_config --socket\n\nthat will give you your path to the sock file. take that path and use it in your DATABASES HOST paramater.\nWhat you need to do is point your \nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'NAME': 'test',\n 'USER': 'test',\n 'PASSWORD': 'test',\n 'HOST': '/Applications/MAMP/tmp/mysql/mysql.sock',\n 'PORT': '',\n },\n}\n\nNOTE\nalso run which mysql_config if you somehow have multiple instances of mysql server installed on the machine you may be connecting to the wrong one.\n",
"For me, the mysql server was not running. So, i started the mysql server through\nmysql.server start\n\nthen\nmysql_secure_installation\nto secure the server and now I can visit\nthe MySQL server through\nmysql -u root -p\nor\nsudo mysql -u root -p\ndepending on your installation.\n",
"When, if you lose your daemon mysql in mac OSx but is present in other path for exemple in private/var do the following command\n1)\nln -s /private/var/mysql/mysql.sock /tmp/mysql.sock\n\n2) restart your connexion to mysql with :\nmysql -u username -p -h host databasename\n\nworks also for mariadb\n",
"Run the below cmd in terminal \n\n/usr/local/mysql/bin/mysqld_safe\n\n\nThen restart the machine to take effect. It works!!\n",
"After attempting a few of these solutions and not having any success, this is what worked for me:\n\nRestart system\nmysql.server start\nSuccess!\n\n",
"To those who upgraded from 5.7 to 8.0 via homebrew, this error is likely caused by the upgrade not being complete. In my case, mysql.server start got me the following error:\n\nERROR! The server quit without updating PID file\n\nI then checked the log file via cat /usr/local/var/mysql/YOURS.err | tail -n 50, and found the following:\n\nInnoDB: Upgrade after a crash is not supported. \n\nIf you are on the same boat, first install mysql@5.7 via homebrew, stop the server, and then start the 8.0 system again.\nbrew install mysql@5.7\n\n/usr/local/opt/mysql@5.7/bin/mysql.server start\n/usr/local/opt/mysql@5.7/bin/mysql.server stop\n\nThen,\nmysql.server start\n\nThis would get your MySQL (8.0) working again.\n",
"Check number of open files for the mysql process using lsof command. \nIncrease the open files limit and run again.\n",
"This may be one of following problems.\n\nIncorrect mysql lock.\nsolution: You have to find out the correct mysql socket by,\n\n\nmysqladmin -p variables | grep socket\n\nand then put it in your db connection code: \npymysql.connect(db='db', user='user', passwd='pwd', unix_socket=\"/tmp/mysql.sock\")\n\n/tmp/mysql.sock is the returned from grep\n2.Incorrect mysql port\n solution: You have to find out the correct mysql port:\nmysqladmin -p variables | grep port\n\nand then in your code:\npymysql.connect(db='db', user='user', passwd='pwd', host='localhost', port=3306)\n\n3306 is the port returned from the grep\nI think first option will resolve your problem. \n",
"I have two sneaky conjectures on this one\nCONJECTURE #1\nLook into the possibility of not being able to access the /tmp/mysql.sock file. When I setup MySQL databases, I normally let the socket file site in /var/lib/mysql. If you login to mysql as root@localhost, your OS session needs access to the /tmp folder. Make sure /tmp has the correct access rights in the OS. Also, make sure the sudo user can always read file in /tmp.\nCONJECTURE #2\nAccessing mysql via 127.0.0.1 can cause some confusion if you are not paying attention. How?\nFrom the command line, if you connect to MySQL with 127.0.0.1, you may need to specify the TCP/IP protocol.\nmysql -uroot -p -h127.0.0.1 --protocol=tcp\n\nor try the DNS name\nmysql -uroot -p -hDNSNAME\n\nThis will bypass logging in as root@localhost, but make sure you have root@'127.0.0.1' defined.\nNext time you connect to MySQL, run this:\nSELECT USER(),CURRENT_USER();\n\nWhat does this give you?\n\nUSER() reports how you attempted to authenticate in MySQL\nCURRENT_USER() reports how you were allowed to authenticate in MySQL\n\nIf these functions return with the same values, then you are connecting and authenticating as expected. If the values are different, you may need to create the corresponding user root@127.0.0.1.\n",
"I think i saw this same behavior some time ago, but can't remember the details.\nIn our case, the problem was the moment the testrunner initialises database connections relative to first database interaction required, for instance, by import of a module in settings.py or some __init__.py.\nI'll try to digg up some more info, but this might already ring a bell for your case.\n",
"Make sure your /etc/hosts has 127.0.0.1 localhost in it and it should work fine\n",
"if you get an error like below :\ndjango.db.utils.OperationalError: (2002, \"Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\")\n\nThen just find your mysqld.sock file location and add it to \"HOST\".\nLike i am using xampp on linux so my mysqld.sock file is in another location. so it is not working for '/var/run/mysqld/mysqld.sock'\nDATABASES = {\n\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'NAME': 'asd',\n 'USER' : 'root',\n 'PASSWORD' : '',\n 'HOST' : '/opt/lampp/var/mysql/mysql.sock',\n 'PORT' : ''\n }\n}\n\n",
"Had this same problem. Turned out mysqld had stopped running (I'm on Mac OSX). I restarted it and the error went away.\nI figured out that mysqld was not running largely because of this link:\nhttp://dev.mysql.com/doc/refman/5.6/en/can-not-connect-to-server.html\nNotice the first tip!\n",
"I had to kill off all instances of mysql by first finding all the process IDs:\n\nps aux | grep mysql\n\nAnd then killing them off:\n\nkill -9 {pid}\n\nThen:\n\nmysql.server start\n\nWorked for me.\n",
"Check that your mysql has not reached maximum connections, or is not in some sort of booting loop as happens quite often if the settings are incorrect in my.cnf.\nUse ps aux | grep mysql to check if the PID is changing.\n",
"Looked around online too long not to contribute. After trying to type in the mysql prompt from the command line, I was continuing to receive this message:\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)\nThis was due to the fact that my local mysql server was no longer running. In order to restart the server, I navigated to\nshell> cd /user/local/bin\n\nwhere my mysql.server was located. From here, simply type: \nshell> mysql.server start\n\nThis will relaunch the local mysql server.\nFrom there you can reset the root password if need be..\nmysql> UPDATE mysql.user SET Password=PASSWORD('MyNewPass')\n-> WHERE User='root';\nmysql> FLUSH PRIVILEGES;\n\n",
"The socket is located in /tmp. On Unix system, due to modes & ownerships on /tmp, this could cause some problem. But, as long as you tell us that you CAN use your mysql connexion normally, I guess it is not a problem on your system. A primal check should be to relocate mysql.sock in a more neutral directory.\nThe fact that the problem occurs \"randomly\" (or not every time) let me think that it could be a server problem. \n\nIs your /tmp located on a standard disk, or on an exotic mount (like in the RAM) ?\nIs your /tmp empty ? \nDoes iotopshow you something wrong when you encounter the problem ?\n\n",
"# shell script ,ignore the first \n$ $(dirname `which mysql`)\\/mysql.server start\n\nMay be helpful.\n",
"If you installed through Homebrew, try to run\nbrew services start mysql\n\n",
"Configure your DB connection in the 'Manage DB Connections dialog. Select 'Standard (TCP/IP)' as connection method.\nSee this page for more details \nhttp://dev.mysql.com/doc/workbench/en/wb-manage-db-connections.html\nAccording to this other page a socket file is used even if you specify localhost.\n\nA Unix socket file is used if you do not specify a host name or if you\n specify the special host name localhost.\n\nIt also shows how to check on your server by running these commands:\n\nIf a mysqld process is running, you can check it by trying the\n following commands. The port number or Unix socket file name might be\n different in your setup. host_ip represents the IP address of the\n machine where the server is running.\n\nshell> mysqladmin version \nshell> mysqladmin variables \nshell> mysqladmin -h `hostname` version variables \nshell> mysqladmin -h `hostname` --port=3306 version \nshell> mysqladmin -h host_ip version \nshell> mysqladmin --protocol=SOCKET --socket=/tmp/mysql.sock version\n\n",
"in ubuntu14.04 you can do this to slove this problem.\nzack@zack:~/pycodes/python-scraping/chapter5$ **mysqladmin -p variables|grep socket**\nEnter password: \n| socket | ***/var/run/mysqld/mysqld.sock*** |\nzack@zack:~/pycodes/python-scraping/chapter5$***ln -s /var/run/mysqld/mysqld.sock /tmp/mysql.sock***\nzack@zack:~/pycodes/python-scraping/chapter5$ ll /tmp/mysql.sock \nlrwxrwxrwx 1 zack zack 27 11ζ 29 13:08 /tmp/mysql.sock -> /var/run/mysqld/mysqld.sock=\n\n",
"For me, I'm sure mysqld is started, and command line mysql can work properly. But the httpd server show the issue(can't connect to mysql through socket).\nI started the service with mysqld_safe&.\nfinally, I found when I start the mysqld service with service mysqld start, there are issues(selinux permission issue), and when I fix the selinux issue, and start the mysqld with \"service mysqld start\", the httpd connection issue disappear. But when I start the mysqld with mysqld_safe&, mysqld can be worked. (mysql client can work properly). But there are still issue when connect with httpd.\n",
"If it's socket related read this file\n/etc/mysql/my.cnf\n\nand see what is the standard socket location. It's a line like:\nsocket = /var/run/mysqld/mysqld.sock\n\nnow create an alias for your shell like:\nalias mysql=\"mysql --socket=/var/run/mysqld/mysqld.sock\"\n\nThis way you don't need root privileges.\n",
"Simply try to run mysqld. \nThis was what was not working for me on mac.\nIf it doesn't work try go to /usr/local/var/mysql/<your_name>.err to see detailed error logs.\n",
"Using MacOS Mojave 10.14.6 for MySQL 8.0.19 installed via Homebrew\n\nRan sudo find / -name my.cnf\nFile found at /usr/local/etc/my.cnf\n\nWorked for a time then eventually the error returned. Uninstalled the Homebrew version of MySQL and installed the .dmg file directly from here\nHappily connecting since then.\n",
"In my case what helped was to edit the file /etc/mysql/mysql.conf.d/mysqld.cnfand replace the line:\nsocket = /var/run/mysqld/mysqld.sock\n\nwith\nsocket = /tmp/mysql.sock\n\nThen I restarted the server and it worked fine. The funny thing is that if I put back the line as it was before and restarted it still worked..\n",
"I had faced similar problem recently. Went through many answers. I got it working by following steps.\n\nchange the socket path in /etc/my.cnf (as i was repeatedly getting error with /tmp/mysql.sock ) reference to change the socket path\nrun mysqld_safe to restart the server as it is the recommended way to restart in case of errors. reference to mysqld_safe\n\n",
"Following steps would help:\n\nmysql.server start\nList item\n\nfor more details go to ref on medium\n",
"I'm using macOS Monterey, I fixed it by changing the file content which locate in /opt/homebrew/etc/my.cnf from \"bind-address = 127.0.0.1\" to \"bind-address = localhost\"\n",
"You can try\n$ mysql.server start\nthen\n$ mysql -h localhost -u root -p\nin password just press enter and it will start,\nthen u can change password\n",
"This worked for me :\nmysql.server start\nmysql -u root -p\n\n"
] |
[
175,
147,
145,
30,
29,
22,
15,
9,
9,
9,
7,
7,
7,
4,
4,
4,
3,
3,
2,
2,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] |
[
"Mac user here running mac os mojave 10.14. In my case what helped me was to uninstall MySQL from within the system preferences pane and then reinstalling and selecting the legacy password MySQL system during the installation wizard.\nI did this by going to apple menu > system preferences >> MySQL >> and then hitting uninstall.\nOnce I successfully uninstalled I went back to MySQL's website and into the archive downloads and downloaded mysql Ver 8.0.18 for macos10.14 on x86_64 (MySQL Community Server - GPL).\nNow, when going through the install wizard you will come to a point where the installer asks you to choose the password type that you'd like to use and here you MUST SELECT THE LEGACY SYSTEM that allows you to sign into MySQL using the root user and a password that you will set at that moment.\nAfter I finished going through the install wizard I restarted my terminal (in my case I was using vs code's terminal) and then ran \"mysql -u root -p\" and was able to enter the password I created during installation and got into MySQL with no errors.\n"
] |
[
-2
] |
[
"django",
"mysql",
"python"
] |
stackoverflow_0016325607_django_mysql_python.txt
|
Q:
Is there an easy way to convert ISO 8601 duration to timedelta?
Given an ISO 8601 duration string, how do I convert it into a datetime.timedelta?
This didn't work:
timedelta("PT1H5M26S", "T%H%M%S")
A timedelta object represents a duration, the difference between two dates or times (https://docs.python.org/3/library/datetime.html#datetime.timedelta). ISO 8601 durations are described here: https://en.wikipedia.org/wiki/ISO_8601#Durations
A:
I found isodate library to do exactly what I want
isodate.parse_duration('PT1H5M26S')
You can read the source code for the function here
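A quick sketch of the round trip (note that isodate returns a plain datetime.timedelta here, while durations containing years or months come back as an isodate.Duration instead):
import isodate

td = isodate.parse_duration('PT1H5M26S')
print(td)                  # 1:05:26
print(td.total_seconds())  # 3926.0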
A:
If you're using Pandas, you could use pandas.Timedelta. The constructor accepts an ISO 8601 string, and pandas.Timedelta.isoformat you can format the instance back to a string:
>>> import pandas as pd
>>> dt = pd.Timedelta("PT1H5M26S")
>>> dt
Timedelta('0 days 01:05:26')
>>> dt.isoformat()
'P0DT1H5M26S'
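If you need a plain datetime.timedelta rather than the pandas type, to_pytimedelta() converts it:
>>> dt.to_pytimedelta()
datetime.timedelta(seconds=3926)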
A:
Here's a solution without a new package, but only works if you're dealing with a max duration expressed in days. That limitation makes sense though, because as others have pointed out (1):
Given that the timedelta has more than "a month's" worth of days, how
would you describe it using the ISO8601 duration notation without
referencing a specific point in time? Conversely, given your example,
"P3Y6M4DT12H30M5S", how would you convert that into a timedelta
without knowing which exact years and months this duration refers to?
Timedelta objects are very precise beasts, which is almost certainly
why they don't support "years" and "months" args in their
constructors.
import datetime
def get_isosplit(s, split):
if split in s:
n, s = s.split(split)
else:
n = 0
return n, s
def parse_isoduration(s):
# Remove prefix
s = s.split('P')[-1]
# Step through letter dividers
days, s = get_isosplit(s, 'D')
_, s = get_isosplit(s, 'T')
hours, s = get_isosplit(s, 'H')
minutes, s = get_isosplit(s, 'M')
seconds, s = get_isosplit(s, 'S')
# Convert all to seconds
dt = datetime.timedelta(days=int(days), hours=int(hours), minutes=int(minutes), seconds=int(seconds))
return int(dt.total_seconds())
> parse_isoduration("PT1H5M26S")
3926
A:
Great question, obviously the "right" solution depends on your expectations for the input (a more reliable data source doesn't need as much input validation).
My approach to parse an ISO8601 duration timestamp only checks that the "PT" prefix is present and will not assume integer values for any of the units:
from datetime import timedelta
def parse_isoduration(isostring, as_dict=False):
"""
Parse the ISO8601 duration string as hours, minutes, seconds
"""
separators = {
"PT": None,
"W": "weeks",
"D": "days",
"H": "hours",
"M": "minutes",
"S": "seconds",
}
duration_vals = {}
for sep, unit in separators.items():
partitioned = isostring.partition(sep)
if partitioned[1] == sep:
# Matched this unit
isostring = partitioned[2]
if sep == "PT":
continue # Successful prefix match
dur_str = partitioned[0]
dur_val = float(dur_str) if "." in dur_str else int(dur_str)
duration_vals.update({unit: dur_val})
else:
if sep == "PT":
raise ValueError("Missing PT prefix")
else:
# No match for this unit: it's absent
duration_vals.update({unit: 0})
if as_dict:
return duration_vals
else:
return tuple(duration_vals.values())
dur_isostr = "PT3H2M59.989333S"
dur_tuple = parse_isoduration(dur_isostr)
dur_dict = parse_isoduration(dur_isostr, as_dict=True)
td = timedelta(**dur_dict)
s = td.total_seconds()
>>> dur_tuple
(0, 0, 3, 2, 59.989333)
>>> dur_dict
{'weeks': 0, 'days': 0, 'hours': 3, 'minutes': 2, 'seconds': 59.989333}
>>> td
datetime.timedelta(seconds=10979, microseconds=989333)
>>> s
10979.989333
A:
Based on @r3robertson's answer, a more complete, yet not perfect, version:
import datetime


def parse_isoduration(s):
    """ Parse a str ISO-8601 Duration: https://en.wikipedia.org/wiki/ISO_8601#Durations
    Originally copied from:
    https://stackoverflow.com/questions/36976138/is-there-an-easy-way-to-convert-iso-8601-duration-to-timedelta
    :param s:
    :return:
    """

    # ToDo [40]: Can't handle the legal ISO 8601 duration "PT1M"

    def get_isosplit(s, split):
        if split in s:
            n, s = s.split(split, 1)
        else:
            n = '0'
        return n.replace(',', '.'), s  # to handle like "P0,5Y"

    s = s.split('P', 1)[-1]  # Remove prefix
    s_yr, s = get_isosplit(s, 'Y')  # Step through letter dividers
    s_mo, s = get_isosplit(s, 'M')
    s_dy, s = get_isosplit(s, 'D')
    _, s = get_isosplit(s, 'T')
    s_hr, s = get_isosplit(s, 'H')
    s_mi, s = get_isosplit(s, 'M')
    s_sc, s = get_isosplit(s, 'S')
    n_yr = float(s_yr) * 365   # These are approximations that I can live with
    n_mo = float(s_mo) * 30.4  # But they are not correct!
    dt = datetime.timedelta(days=n_yr+n_mo+float(s_dy), hours=float(s_hr), minutes=float(s_mi), seconds=float(s_sc))
    return dt  # int(dt.total_seconds())  # original code wanted to return as seconds, we don't.
A:
This is my modification (of the Martin and rer answers) to support the weeks attribute and return milliseconds. Some durations may use fractions such as PT15.460S.
import datetime


def parse_isoduration(str):
## https://stackoverflow.com/questions/36976138/is-there-an-easy-way-to-convert-iso-8601-duration-to-timedelta
## Parse the ISO8601 duration as years,months,weeks,days, hours,minutes,seconds
## Returns: milliseconds
## Examples: "PT1H30M15.460S", "P5DT4M", "P2WT3H"
def get_isosplit(str, split):
if split in str:
n, str = str.split(split, 1)
else:
n = '0'
return n.replace(',', '.'), str # to handle like "P0,5Y"
str = str.split('P', 1)[-1] # Remove prefix
s_yr, str = get_isosplit(str, 'Y') # Step through letter dividers
s_mo, str = get_isosplit(str, 'M')
s_wk, str = get_isosplit(str, 'W')
s_dy, str = get_isosplit(str, 'D')
_, str = get_isosplit(str, 'T')
s_hr, str = get_isosplit(str, 'H')
s_mi, str = get_isosplit(str, 'M')
s_sc, str = get_isosplit(str, 'S')
n_yr = float(s_yr) * 365 # approx days for year, month, week
n_mo = float(s_mo) * 30.4
n_wk = float(s_wk) * 7
dt = datetime.timedelta(days=n_yr+n_mo+n_wk+float(s_dy), hours=float(s_hr), minutes=float(s_mi), seconds=float(s_sc))
return int(dt.total_seconds()*1000) ## int(dt.total_seconds()) | dt
|
Is there an easy way to convert ISO 8601 duration to timedelta?
|
Given an ISO 8601 duration string, how do I convert it into a datetime.timedelta?
This didn't work:
timedelta("PT1H5M26S", "T%H%M%S")
A timedelta object represents a duration, the difference between two dates or times (https://docs.python.org/3/library/datetime.html#datetime.timedelta). ISO 8601 durations are described here: https://en.wikipedia.org/wiki/ISO_8601#Durations
|
[
"I found isodate library to do exactly what I want\nisodate.parse_duration('PT1H5M26S')\n\n\nYou can read the source code for the function here\n\n",
"If you're using Pandas, you could use pandas.Timedelta. The constructor accepts an ISO 8601 string, and pandas.Timedelta.isoformat you can format the instance back to a string:\n>>> import pandas as pd\n>>> dt = pd.Timedelta(\"PT1H5M26S\")\n>>> dt\nTimedelta('0 days 01:05:26')\n>>> dt.isoformat()\n'P0DT1H5M26S'\n\n",
"Here's a solution without a new package, but only works if you're dealing with a max duration expressed in days. That limitation makes sense though, because as others have pointed out (1):\n\nGiven that the timedelta has more than \"a month's\" worth of days, how\nwould you describe it using the ISO8601 duration notation without\nreferencing a specific point in time? Conversely, given your example,\n\"P3Y6M4DT12H30M5S\", how would you convert that into a timedelta\nwithout knowing which exact years and months this duration refers to?\nTimedelta objects are very precise beasts, which is almost certainly\nwhy they don't support \"years\" and \"months\" args in their\nconstructors.\n\nimport datetime\n\n\ndef get_isosplit(s, split):\n if split in s:\n n, s = s.split(split)\n else:\n n = 0\n return n, s\n\n\ndef parse_isoduration(s):\n \n # Remove prefix\n s = s.split('P')[-1]\n \n # Step through letter dividers\n days, s = get_isosplit(s, 'D')\n _, s = get_isosplit(s, 'T')\n hours, s = get_isosplit(s, 'H')\n minutes, s = get_isosplit(s, 'M')\n seconds, s = get_isosplit(s, 'S')\n\n # Convert all to seconds\n dt = datetime.timedelta(days=int(days), hours=int(hours), minutes=int(minutes), seconds=int(seconds))\n return int(dt.total_seconds())\n\n> parse_isoduration(\"PT1H5M26S\")\n3926\n\n",
"Great question, obviously the \"right\" solution depends on your expectations for the input (a more reliable data source doesn't need as much input validation).\nMy approach to parse an ISO8601 duration timestamp only checks that the \"PT\" prefix is present and will not assume integer values for any of the units:\nfrom datetime import timedelta\n\ndef parse_isoduration(isostring, as_dict=False):\n \"\"\"\n Parse the ISO8601 duration string as hours, minutes, seconds\n \"\"\"\n separators = {\n \"PT\": None,\n \"W\": \"weeks\",\n \"D\": \"days\",\n \"H\": \"hours\",\n \"M\": \"minutes\",\n \"S\": \"seconds\",\n }\n duration_vals = {}\n for sep, unit in separators.items():\n partitioned = isostring.partition(sep)\n if partitioned[1] == sep:\n # Matched this unit\n isostring = partitioned[2]\n if sep == \"PT\":\n continue # Successful prefix match\n dur_str = partitioned[0]\n dur_val = float(dur_str) if \".\" in dur_str else int(dur_str)\n duration_vals.update({unit: dur_val})\n else:\n if sep == \"PT\":\n raise ValueError(\"Missing PT prefix\")\n else:\n # No match for this unit: it's absent\n duration_vals.update({unit: 0})\n if as_dict:\n return duration_vals\n else:\n return tuple(duration_vals.values())\n\ndur_isostr = \"PT3H2M59.989333S\"\ndur_tuple = parse_isoduration(dur_isostr)\ndur_dict = parse_isoduration(dur_isostr, as_dict=True)\ntd = timedelta(**dur_dict)\ns = td.total_seconds()\n\nβ£\n>>> dur_tuple\n(0, 0, 3, 2, 59.989333)\n>>> dur_dict\n{'weeks': 0, 'days': 0, 'hours': 3, 'minutes': 2, 'seconds': 59.989333}\n>>> td\ndatetime.timedelta(seconds=10979, microseconds=989333)\n>>> s\n10979.989333\n\n",
"Based on @r3robertson a more complete, yet not perfect, version\ndef parse_isoduration(s):\n\"\"\" Parse a str ISO-8601 Duration: https://en.wikipedia.org/wiki/ISO_8601#Durations\nOriginally copied from:\nhttps://stackoverflow.com/questions/36976138/is-there-an-easy-way-to-convert-iso-8601-duration-to-timedelta\n:param s:\n:return:\n\"\"\"\n\n# ToDo [40]: Can't handle legal ISO3106 \"\"PT1M\"\"\n\ndef get_isosplit(s, split):\n if split in s:\n n, s = s.split(split, 1)\n else:\n n = '0'\n return n.replace(',', '.'), s # to handle like \"P0,5Y\"\n\ns = s.split('P', 1)[-1] # Remove prefix\ns_yr, s = get_isosplit(s, 'Y') # Step through letter dividers\ns_mo, s = get_isosplit(s, 'M')\ns_dy, s = get_isosplit(s, 'D')\n_, s = get_isosplit(s, 'T')\ns_hr, s = get_isosplit(s, 'H')\ns_mi, s = get_isosplit(s, 'M')\ns_sc, s = get_isosplit(s, 'S')\nn_yr = float(s_yr) * 365 # These are approximations that I can live with\nn_mo = float(s_mo) * 30.4 # But they are not correct!\ndt = datetime.timedelta(days=n_yr+n_mo+float(s_dy), hours=float(s_hr), minutes=float(s_mi), seconds=float(s_sc))\nreturn dt # int(dt.total_seconds()) # original code wanted to return as seconds, we don't.\n\n",
"This is my modification(Martin, rer answers) to support weeks attribute and return milliseconds. Some durations may use PT15.460S fractions.\ndef parse_isoduration(str):\n## https://stackoverflow.com/questions/36976138/is-there-an-easy-way-to-convert-iso-8601-duration-to-timedelta\n## Parse the ISO8601 duration as years,months,weeks,days, hours,minutes,seconds\n## Returns: milliseconds\n## Examples: \"PT1H30M15.460S\", \"P5DT4M\", \"P2WT3H\"\n def get_isosplit(str, split):\n if split in str:\n n, str = str.split(split, 1)\n else:\n n = '0'\n return n.replace(',', '.'), str # to handle like \"P0,5Y\"\n\n str = str.split('P', 1)[-1] # Remove prefix\n s_yr, str = get_isosplit(str, 'Y') # Step through letter dividers\n s_mo, str = get_isosplit(str, 'M')\n s_wk, str = get_isosplit(str, 'W')\n s_dy, str = get_isosplit(str, 'D')\n _, str = get_isosplit(str, 'T')\n s_hr, str = get_isosplit(str, 'H')\n s_mi, str = get_isosplit(str, 'M')\n s_sc, str = get_isosplit(str, 'S')\n n_yr = float(s_yr) * 365 # approx days for year, month, week\n n_mo = float(s_mo) * 30.4\n n_wk = float(s_wk) * 7\n dt = datetime.timedelta(days=n_yr+n_mo+n_wk+float(s_dy), hours=float(s_hr), minutes=float(s_mi), seconds=float(s_sc))\n return int(dt.total_seconds()*1000) ## int(dt.total_seconds()) | dt\n\n"
] |
[
59,
5,
4,
1,
1,
0
] |
[] |
[] |
[
"datetime",
"python",
"python_datetime",
"timedelta"
] |
stackoverflow_0036976138_datetime_python_python_datetime_timedelta.txt
|
Q:
How to print whole number without zeros after decimal point?
I'm trying to print a whole number (such as 39 for example) in the following format: 39.
It must not be a str type object like '39.' for example, but a number
e. g. n = 39.0 should be printed like 39.
n = 39.0
#magic stuff with output
39.
I tried using :.nf methods (:.0f apparently -- didn't work), print(float(39.)) or just print(39.)
In the first case, it looks like 39, in the second and third 39.0
I also tried float(str(39) + '.') and obviously it didn't work
Sorry, if it's a stupid question, I've been trying to solve it for several hours already, still can't find any information.
A:
From Format Specification Mini-Language (emphasis mine):
The '#' option causes the βalternate formβ to be used for the conversion. The alternate form is defined differently for different types. This option is only valid for integer, float and complex types. For integers, when binary, octal, or hexadecimal output is used, this option adds the respective prefix '0b', '0o', '0x', or '0X' to the output value. For float and complex the alternate form causes the result of the conversion to always contain a decimal-point character, even if no digits follow it. Normally, a decimal-point character appears in the result of these conversions only if a digit follows it. In addition, for 'g' and 'G' conversions, trailing zeros are not removed from the result.
>>> n=39.0
>>> print(f'{n:#.0f}')
39.
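The same '#' flag works outside f-strings too, for example with str.format or %-formatting:
>>> '{:#.0f}'.format(n)
'39.'
>>> '%#.0f' % n
'39.'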
A:
print(int(n))
or you can try:
print(f"{int(n)}.")
|
How to print whole number without zeros after decimal point?
|
I'm trying to print a whole number (such as 39 for example) in the following format: 39.
It must not be a str type object like '39.' for example, but a number
e. g. n = 39.0 should be printed like 39.
n = 39.0
#magic stuff with output
39.
I tried using :.nf methods (:.0f apparently -- didn't work), print(float(39.)) or just print(39.)
In the first case, it looks like 39, in the second and third 39.0
I also tried float(str(39) + '.') and obviously it didn't work
Sorry, if it's a stupid question, I've been trying to solve it for several hours already, still can't find any information.
|
[
"From Format Specification Mini-Language (emphasis mine):\n\nThe '#' option causes the βalternate formβ to be used for the conversion. The alternate form is defined differently for different types. This option is only valid for integer, float and complex types. For integers, when binary, octal, or hexadecimal output is used, this option adds the respective prefix '0b', '0o', '0x', or '0X' to the output value. For float and complex the alternate form causes the result of the conversion to always contain a decimal-point character, even if no digits follow it. Normally, a decimal-point character appears in the result of these conversions only if a digit follows it. In addition, for 'g' and 'G' conversions, trailing zeros are not removed from the result.\n\n>>> n=39.0\n>>> print(f'{n:#.0f}')\n39.\n\n",
"print(int(n))\nor u can try:\nn = n // 1\nprint(fβ{n}.β)\n"
] |
[
8,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074425460_python.txt
|
Q:
Exclude holidays between two selected dates in python odoo
How can I calculate the total hours between two dates? Here I have to select the start date and the end date, and every employee works 8 hours per day. I calculate the total hours between these two dates. For example, if I select date from: 11/21/2022 and date to: 11/22/2022, the total for these two dates is 16 hours. The days need to be counted without holidays; how can I do that? I want to exclude holidays from the total days. Please help me.
@api.depends("start_date", "date_deadline")
def _compute_hours(self):
if self.start_date and self.date_deadline:
t1 = datetime.strptime(str(self.start_date), '%Y-%m-%d')
print(t1)
t2 = datetime.strptime(str(self.date_deadline), '%Y-%m-%d')
print('=================================T2')
print(t2)
t3 = t2 - t1
# count = sum(1 for day in t3 if day.weekday() < 5)
# print(count)
print('=================================T3')
print(t3)
print('=================================')
seconds = t3.total_seconds() / 3
diff_in_hours = seconds / 3600
print('Difference between two datetimes in hours:')
print(diff_in_hours)
self.total_hours = diff_in_hours
I am trying to exclude holidays from total days
A:
You can simply use the weekday function to find the weekday for each day, then compare it against the weekend days. Note that weekday() returns 5 for Saturday and 6 for Sunday, and the check should happen before the date is incremented:
In [1]: from datetime import date, timedelta 
 ...: 
 ...: start_date = date(2019, 1, 1) 
 ...: end_date = date(2020, 1, 1) 
 ...: delta = timedelta(days=1) 
 ...: count = 0 
 ...: while start_date <= end_date: 
 ...:     print(start_date.strftime("%Y-%m-%d")) 
 ...:     if start_date.weekday() in [5, 6]:  # 5 is Saturday, 6 is Sunday
 ...:         print('holiday') 
 ...:         count += 1 
 ...:     start_date += delta 
 ...: print("total holidays", count) 
Now you have holidays count, so you can subtract it.
i.e
total holidays 104
total_hours - (count * 8) # count is number, 8 is working hours
A:
You can use the company default working hours to compute the number of work hours between two datetimes. The method is taking into account the global leaves.
You can find an example in hr_leave module:
hours = self.env.company.resource_calendar_id.get_work_hours_count(date_from, date_to)
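A minimal sketch of wiring that into the compute method from the question (field names follow the question; if start_date/date_deadline are Date rather than Datetime fields, you would first convert them to datetimes):
@api.depends("start_date", "date_deadline")
def _compute_hours(self):
    for rec in self:
        if rec.start_date and rec.date_deadline:
            calendar = rec.env.company.resource_calendar_id
            # Counts only working hours and skips the global leaves
            rec.total_hours = calendar.get_work_hours_count(
                rec.start_date, rec.date_deadline)
        else:
            rec.total_hours = 0.0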
|
Exclude holidays between two selected dates in python odoo
|
How can I calculate the total hours between two dates? Here I have to select the start date and the end date, and every employee works 8 hours per day. I calculate the total hours between these two dates. For example, if I select date from: 11/21/2022 and date to: 11/22/2022, the total for these two dates is 16 hours. The days need to be counted without holidays; how can I do that? I want to exclude holidays from the total days. Please help me.
@api.depends("start_date", "date_deadline")
def _compute_hours(self):
if self.start_date and self.date_deadline:
t1 = datetime.strptime(str(self.start_date), '%Y-%m-%d')
print(t1)
t2 = datetime.strptime(str(self.date_deadline), '%Y-%m-%d')
print('=================================T2')
print(t2)
t3 = t2 - t1
# count = sum(1 for day in t3 if day.weekday() < 5)
# print(count)
print('=================================T3')
print(t3)
print('=================================')
seconds = t3.total_seconds() / 3
diff_in_hours = seconds / 3600
print('Difference between two datetimes in hours:')
print(diff_in_hours)
self.total_hours = diff_in_hours
I am trying to exclude holidays from total days
|
[
"You can simply use weekday function to find the weekday for that day. Then compare it with holidays.\nIn [1]: from datetime import date, timedelta \n ...: \n ...: start_date = date(2019, 1, 1) \n ...: end_date = date(2020, 1, 1) \n ...: delta = timedelta(days=1) \n ...: count = 0 \n ...: while start_date <= end_date: \n ...: print(start_date.strftime(\"%Y-%m-%d\")) \n ...: start_date += delta \n ...: if start_date.weekday() in [5,6]: # 5 is friday, 6 is sat\n ...: print('holiday') \n ...: count += 1 \n ...: print(\"total holidays\", count) \n\nNow you have holidays count, so you can subtract it.\ni.e\ntotal holidays 104\ntotal_hours - (count * 8) # count is number, 8 is working hours\n\n",
"You can use the company default working hours to compute the number of work hours between two datetimes. The method is taking into account the global leaves.\nYou can find an example in hr_leave module:\n hours = self.env.company.resource_calendar_id.get_work_hours_count(date_from, date_to)\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"odoo",
"python"
] |
stackoverflow_0074545035_odoo_python.txt
|
Q:
Why is the loop not calculating every lowercase letter from a string?
I am trying to extract every lowercase letter from a mixed uppercase and lowercase string and form a new string of only lowercase letters. For example, I have a string named st="ABcASFatBD" and I expect an output of low= "cat", but I am getting only "c" as the output. Below is my code.
class Solution(object):
def find_crowd(self, st):
lo = ""
for i in range(len(st)):
if st[i].islower():
lo += st[i]
return lo
else:
pass
if __name__ == "__main__":
p = Solution()
s = "ABcASFatBD"
print(p.find_crowd(s))
A:
The problem is that your function returns the first time it finds a lowercase value, because return lo sits inside the loop; it should be outside of the for i in range(...): block. But here's a simpler version of your code:
class Solution(object):
def find_crowd(self, st):
lo = ""
lo = ''.join([char for char in st if char.islower() == True])
return lo
if __name__ == "__main__":
p = Solution()
s = "ABcASFatBD"
print(p.find_crowd(s))
Hope this helps
A:
The problem with your code is that you call the return statement inside your if block, which ends the function before the loop has passed through the rest of your string. Please change your code according to the suggestion below.
class Solution(object):
def find_crowd(self, st):
lo = ""
for i in range(len(st)):
if st[i].islower():
lo += st[i]
return lo
if __name__ == "__main__":
p = Solution()
s = "ABcASFatBD"
print(p.find_crowd(s))
You can also convert your string into a list of characters before iterating over it and getting individual elements. You can convert a string into a list using [*s]. Just see the output of the print below to understand. Here is the simplest way to do that:
class Solution(object):
def find_crowd(self, st):
lo = ""
for i in [*st]:
if i.islower():
lo += i
return lo
if __name__ == "__main__":
p = Solution()
s = "ABcASFatBD"
print([*s])
print(p.find_crowd(s))
A:
Your function returns as soon as the first lowercase character is found!
Some solutions are:-
class Solution(object):
def find_crowd(self, st):
lo=""
for i in range(len(st)):
if st[i].islower():
lo += st[i]
return lo
if __name__ == "__main__":
p = Solution()
s = "ABcASFatBD"
print(p.find_crowd(s))
Using ord() function [Ascii Values]:-
class Solution(object):
def find_crowd(self, st):
lo=""
for i in st:
if ord(i)>=97 and ord(i)<=122:
lo += i
return lo
if __name__ == "__main__":
p = Solution()
s = "ABcASFatBDZzaA"
print(p.find_crowd(s))
One liner [Pythonic Way]:-
class Solution(object):
def find_crowd(self, st):
return "".join([lower for lower in st if lower.islower() == True])
if __name__ == "__main__":
p = Solution()
s = "ABcASFatBDZzaA"
print(p.find_crowd(s))
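As one more variation, str.islower can be passed directly to filter, which drops the explicit loop or comprehension entirely (a small sketch):
s = "ABcASFatBD"
print("".join(filter(str.islower, s)))  # cat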
|
Why is the loop not calculating every lowercase letter from a string?
|
I am trying to extract every lowercase letter from a mixed uppercase and lowercase string and form a new string of only lowercase letters. For example, I have a string named st="ABcASFatBD" and I expect an output of low= "cat", but I am getting only "c" as the output. Below is my code.
class Solution(object):
def find_crowd(self, st):
lo = ""
for i in range(len(st)):
if st[i].islower():
lo += st[i]
return lo
else:
pass
if __name__ == "__main__":
p = Solution()
s = "ABcASFatBD"
print(p.find_crowd(s))
|
[
"The problem is that you have your script ending on the first time it finds a lowercase value, whereas your return lo should be outside of the for i in range(...):. But heres a simpler version of your code:\nclass Solution(object):\n\n def find_crowd(self, st):\n lo = \"\"\n lo = ''.join([char for char in st if char.islower() == True])\n return lo\n\n\nif __name__ == \"__main__\":\n p = Solution()\n s = \"ABcASFatBD\"\n print(p.find_crowd(s))\n\nHope this helps\n",
"The problem with your code is that you call return statement in your if statement which ends the loop without passing through the end of your string. Please change your code according to the below suggestion.\nclass Solution(object):\n def find_crowd(self, st):\n lo = \"\"\n for i in range(len(st)):\n if st[i].islower():\n lo += st[i]\n return lo\n\n\nif __name__ == \"__main__\":\n p = Solution()\n s = \"ABcASFatBD\"\n print(p.find_crowd(s))\n\nYou can also convert your string into a list of characters before iterating on it and getting an individual element. You can convert a string into a list using [*s]. Just see the output of the print below to understand. Here is the simplest way to do that:\nclass Solution(object):\n def find_crowd(self, st):\n lo = \"\"\n for i in [*st]:\n if i.islower():\n lo += i\n return lo\n\n\nif __name__ == \"__main__\":\n p = Solution()\n s = \"ABcASFatBD\"\n print([*s])\n print(p.find_crowd(s))\n\n",
"Your function is returning when first lowercase character is obtained.!\nSome solutions are:-\nclass Solution(object):\n\n def find_crowd(self, st):\n lo=\"\"\n for i in range(len(st)):\n if st[i].islower():\n lo += st[i]\n return lo\n\n\nif __name__ == \"__main__\":\n p = Solution()\n s = \"ABcASFatBD\"\n print(p.find_crowd(s))\n\nUsing ord() function [Ascii Values]:-\nclass Solution(object):\n def find_crowd(self, st):\n lo=\"\"\n for i in st:\n if ord(i)>=97 and ord(i)<=122:\n lo += i\n return lo\n\n\nif __name__ == \"__main__\":\n p = Solution()\n s = \"ABcASFatBDZzaA\"\n print(p.find_crowd(s))\n\nOne liner [Pythonic Way]:-\nclass Solution(object):\n def find_crowd(self, st):\n return \"\".join([lower for lower in st if lower.islower() == True])\n\n\nif __name__ == \"__main__\":\n p = Solution()\n s = \"ABcASFatBDZzaA\"\n print(p.find_crowd(s))\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"loops",
"python",
"string"
] |
stackoverflow_0074557249_loops_python_string.txt
|
Q:
Welcome messages for different userNames
I want Python code to print greeting messages for users when they log in: if the userName is admin then it should print "Hello admin you're welcome, would you want to see a status update?", and if the userName is anything other than admin then a different welcome message is printed.
I tried the code below but I am not getting it right:
userNames = ["jack2", "admin", "lucy21", "angeUt", "lacky53"]
userName = "admin"
if userName == "admin":
# ** print this below if admin is the userName **
print("Hello " + userName + " you are welcome, would you like to see a status update?")
else:
print("other message")
# ** this should be printed when the userName is for example jack2. **
Please note I started Python barely a week ago.
I just didn't get the code right because I am just starting out with programming.
A:
The first prompt says to loop through the list of user names, and print a special message if the name is "admin":
def main():
userNames = ["jack2", "admin", "lucy21", "angeUt", "lacky53"]
for user in userNames:
if user == "admin":
print("Hello admin would you like to see a status report")
else:
print("Hello {}".format(user))
if __name__ == '__main__':
main()
A:
You can easily check if the username is in the list of names with user in userNames.
Then you would just have to check if it's the admin, for the more specific message.
userNames = ["jack2", "admin", "lucy21", "angeUt", "lacky53"]
user = "admin"
if user in userNames:
if user == "admin":
print ("Hello " + user + " you are welcome, would you like to see a status update?")
else:
print ("Hello " + user + " thanks for logging in again.")
|
Welcome messages for different userNames
|
I want Python code to print greeting messages for users when they log in: if the userName is admin then it should print "Hello admin you're welcome, would you want to see a status update?", and if the userName is anything other than admin then a different welcome message is printed.
I tried the code below but I am not getting it right:
userNames = ["jack2", "admin", "lucy21", "angeUt", "lacky53"]
userName = "admin"
if userName == "admin":
# ** print this below if admin is the userName **
print("Hello " + userName + " you are welcome, would you like to see a status update?")
else:
print("other message")
# ** this should be printed when the userName is for example jack2. **
Please note I started Python barely a week ago.
I just didn't get the code right because I am just starting out with programming.
|
[
"The first prompt says to loop through the list of user names, and print a special message if the name is \"admin\":\ndef main():\n userNames = [\"jack2\", \"admin\", \"lucy21\", \"angeUt\", \"lacky53\"]\n for user in userNames:\n if user == \"admin\":\n print(\"Hello admin would you like to see a status report\")\n else:\n print(\"Hello {}\".format(user))\n\nif __name__ == '__main__':\n main()\n\n",
"You can easily check if the username is in the list of names with user in userNames.\nThen you would just have to check if it's the admin, for the more specific message.\nuserNames = [\"jack2\", \"admin\", \"lucy21\", \"angeUt\", \"lacky53\"]\n\nuser = \"admin\"\n\nif user in userNames:\n if user == \"admin\":\n print (\"Hello \" + user + \" you are welcome, would you like to see a status update?\")\n else:\n print (\"Hello \" + user + \" thanks for logging in again.\")\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074504267_python.txt
|
Q:
Python - Intersection of multiple lists taken two at a time
In python, I am able to get intersection of multiple lists:
arr = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
result = set.intersection(*map(set, arr))
Output:
result = {3}
Now, I want the result as intersection of all 3 nested lists taken 2 at a time:
result = {2, 3, 4}
as [2, 3] is common between 1st and 2nd lists, [3, 4] is common between 2nd and 3rd lists and [3] is common between 1st and 3rd lists.
Is there a built in function for this?
A:
You can take a union of all the pairwise intersections as follows:
import itertools as it
arr = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
res = set.union(*(set(i).intersection(set(j)) for i,j in it.combinations(arr,2)))
# output {2, 3, 4}
*edit as per @DanielHao's comment
A:
Try itertools.combinations
from itertools import combinations
arr = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
result = [set.intersection(*map(set, i)) for i in combinations(arr,2)]
# print(list(combinations(arr,2)) to get the combinations
# [{2, 3}, {3}, {3, 4}]
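The union of all pairwise intersections is exactly the set of elements that occur in at least two of the lists, so the same result can also be computed without itertools by counting memberships (a sketch):
from collections import Counter

arr = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
# Count in how many sublists each element appears (set() avoids double-counting within one sublist)
counts = Counter(x for sub in arr for x in set(sub))
result = {x for x, n in counts.items() if n >= 2}
# {2, 3, 4}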
|
Python - Intersection of multiple lists taken two at a time
|
In python, I am able to get intersection of multiple lists:
arr = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
result = set.intersection(*map(set, arr))
Output:
result = {3}
Now, I want the result as intersection of all 3 nested lists taken 2 at a time:
result = {2, 3, 4}
as [2, 3] is common between 1st and 2nd lists, [3, 4] is common between 2nd and 3rd lists and [3] is common between 1st and 3rd lists.
Is there a built in function for this?
|
[
"You can take a union of the all pairs of intersections as follows;\nimport itertools as it\narr = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]\nres = set.union(*(set(i).intersection(set(j)) for i,j in it.combinations(arr,2)))\n# output {2, 3, 4}\n\n*edit as per @DanielHao's comment\n",
"Try itertools.combinations\nfrom itertools import combinations\narr = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]\nresult = [set.intersection(*map(set, i)) for i in combinations(arr,2)] \n# print(list(combinations(arr,2)) to get the combinations\n# [{2, 3}, {3}, {3, 4}]\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"intersection",
"list",
"python",
"set"
] |
stackoverflow_0074557859_intersection_list_python_set.txt
|
Q:
Pyspark Avoid Pivot Transformation To Dataframe - Pivot Alternative
I have a dataframe to which I am applying a pivot transformation and I want to know if there is a way to have the same end result and avoid the pivot transformation.
The dataframe looks like this:
|gender| pro|week| share|forecast|
+------+------------+----+-------------+--------+
| Male| A| 40| 0.2| 195.0|
|Female| A| 40| 0.01| 38.0|
| Male| B| 40| 0.15| 733.0|
|Female| B| 41| 0.15| 579.0|
|Female| C| 41| 0.01| 38.0|
The expected output is the following:
|gender| pro|week| share_1| share_10| share_15| share_20|
+------+---------+----+-----------+------------+------------+------------+
| Male| A| 40| 0.0| 0.0| 0.0| 195.0|
|Female| A| 40| 38.0| 0.0| 0.0| 0.0|
|Female| B| 41| 0.0| 0.0| 579.0| 0.0|
|Female| C| 41| 38.0| 0.0| 0.0| 0.0|
| Male| B| 40| 191.0| 205.0| 733.0| 245.0|
At the moment I am implementing this:
df.groupBy(['gender','pro','week']).pivot("share").agg(first('forecast')).withColumnRenamed('0.01', 'share_1').withColumnRenamed('0.1', 'share_10').withColumnRenamed('0.15', 'share_15').withColumnRenamed('0.2', 'share_20')
Is there a way to have the same result without applying a pivot transformation?
A:
Performance is poor because you do not provide values for the share column.
cf. doc pivot(pivot_col, values=None)
Not providing values is more concise but less efficient, because Spark needs to first compute the list of distinct values internally.
I can assure you that the current official implementation of pivot will always be better than anything you'll try by yourself. Just add your values and it will be good.
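A sketch of the same pivot with the values supplied up front (the share values are taken from the question; match the column's actual type, e.g. floats if share is a DoubleType):
from pyspark.sql.functions import first

shares = [0.01, 0.1, 0.15, 0.2]
# Columns come out named after the values, e.g. '0.01', ready for the renames
df.groupBy('gender', 'pro', 'week') \
    .pivot('share', shares) \
    .agg(first('forecast'))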
|
Pyspark Avoid Pivot Transformation To Dataframe - Pivot Alternative
|
I have a dataframe to which I am applying a pivot transformation and I want to know if there is a way to have the same end result and avoid the pivot transformation.
The dataframe looks like this:
|gender| pro|week| share|forecast|
+------+------------+----+-------------+--------+
| Male| A| 40| 0.2| 195.0|
|Female| A| 40| 0.01| 38.0|
| Male| B| 40| 0.15| 733.0|
|Female| B| 41| 0.15| 579.0|
|Female| C| 41| 0.01| 38.0|
The expected output is the following:
|gender| pro|week| share_1| share_10| share_15| share_20|
+------+---------+----+-----------+------------+------------+------------+
| Male| A| 40| 0.0| 0.0| 0.0| 195.0|
|Female| A| 40| 38.0| 0.0| 0.0| 0.0|
|Female| B| 41| 0.0| 0.0| 579.0| 0.0|
|Female| C| 41| 38.0| 0.0| 0.0| 0.0|
| Male| B| 40| 191.0| 205.0| 733.0| 245.0|
At the moment I am implementing this:
df.groupBy(['gender','pro','week']).pivot("share").agg(first('forecast')).withColumnRenamed('0.01', 'share_1').withColumnRenamed('0.1', 'share_10').withColumnRenamed('0.15', 'share_15').withColumnRenamed('0.2', 'share_20')
Is there a way to have the same result without applying a pivot transformation?
|
[
"performances are poor because you do not provide values for the share column.\ncf. doc pivot(pivot_col, values=None)\nNot providing values is more concise but less efficient, because Spark needs to first compute the list of distinct values internally.\nI can insure you that the current official implementation of pivot will always be better than anything you'll try by yourself. Just add your values and it will be good.\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"loops",
"pivot",
"pyspark",
"python"
] |
stackoverflow_0074521544_dataframe_loops_pivot_pyspark_python.txt
|
Q:
How to map values from nested dict to Pydantic Model?
I am trying to map a value from a nested dict/json to my Pydantic model. For me, this works well when my json/dict has a flat structure. However, I am struggling to map values from a nested structure to my Pydantic Model.
Lets assume I have a json/dict in the following format:
d = {
"p_id": 1,
"billing": {
"first_name": "test"
}
}
In addition, I have a Pydantic model with two attributes:
class Order(BaseModel):
p_id: int
pre_name: str
How can I map the value from the key first_name to my Pydantic attribute pre_name?
Is there an easy way instead of using a root_validator to parse the given structure to my flat pydantic Model?
A:
You can customize __init__ of your model class:
from pydantic import BaseModel
d = {
"p_id": 1,
"billing": {
"first_name": "test"
}
}
class Order(BaseModel):
p_id: int
pre_name: str
def __init__(self, **kwargs):
kwargs["pre_name"] = kwargs["billing"]["first_name"]
super().__init__(**kwargs)
print(Order.parse_obj(d)) # p_id=1 pre_name='test'
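For comparison, the root_validator route the question mentions would look roughly like this in pydantic v1 (a sketch; not necessarily nicer than overriding __init__):
from pydantic import BaseModel, root_validator


class Order(BaseModel):
    p_id: int
    pre_name: str

    @root_validator(pre=True)
    def flatten_billing(cls, values):
        # Runs on the raw input dict before field validation
        billing = values.get("billing") or {}
        values.setdefault("pre_name", billing.get("first_name"))
        return values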
A:
There's a package PyMapMe for mapping models; it supports nested models as well as helper functions and context, for example:
from typing import Any
from pydantic import BaseModel, Field
from pymapme.models.mapping import MappingModel
class Person(BaseModel):
name: str
surname: str
class Profile(BaseModel):
nickname: str
person: Person
class User(MappingModel):
nickname: str = Field(source='nickname')
first_name: str = Field(source='person.name')
surname: str = Field(source='person.surname')
full_name: str = Field(source_func='_get_full_name')
@staticmethod
def _get_full_name(model: Profile, default: Any):
return model.person.name + ' ' + model.person.surname
profile = Profile(nickname='baobab', person=Person(name='John', surname='Smith'))
user = User.build_from_model(profile)
print(user.dict()) # {'nickname': 'baobab', 'first_name': 'John', 'surname': 'Smith', 'full_name': 'John Smith'}
Or for your example, it would look like:
d = {
"p_id": 1,
"billing": {
"first_name": "test"
}
}
class Billing(BaseModel):
first_name: str
class Data(BaseModel):
p_id: int
billing: Billing
class Order(MappingModel):
p_id: int
pre_name: str = Field(source='billing.first_name')
order = Order.build_from_model(Data(**d))
print(order.dict())
Note: I'm the author, pull requests and any suggestions are welcome!
A:
You can nest pydantic models by making each lower level another pydantic model. See as follows:
class MtnPayer(BaseModel):
partyIdType: str
partyId: str
class MtnPayment(BaseModel):
financialTransactionId: str
externalId: str
amount: str
currency: str
payer: MtnPayer
payerMessage: str
payeeNote: str
status: str
reason: str
See the payer item in the second model
|
How to map values from nested dict to Pydantic Model?
|
I am trying to map a value from a nested dict/json to my Pydantic model. For me, this works well when my json/dict has a flat structure. However, I am struggling to map values from a nested structure to my Pydantic Model.
Lets assume I have a json/dict in the following format:
d = {
"p_id": 1,
"billing": {
"first_name": "test"
}
}
In addition, I have a Pydantic model with two attributes:
class Order(BaseModel):
p_id: int
pre_name: str
How can I map the value from the key first_name to my Pydantic attribute pre_name?
Is there an easy way instead of using a root_validator to parse the given structure to my flat pydantic Model?
|
[
"You can customize __init__ of your model class:\nfrom pydantic import BaseModel\n\nd = {\n \"p_id\": 1,\n \"billing\": {\n \"first_name\": \"test\"\n }\n}\n\n\nclass Order(BaseModel):\n p_id: int\n pre_name: str\n\n def __init__(self, **kwargs):\n kwargs[\"pre_name\"] = kwargs[\"billing\"][\"first_name\"]\n super().__init__(**kwargs)\n\n\nprint(Order.parse_obj(d)) # p_id=1 pre_name='test'\n\n",
"There's a package PyMapMe for mapping models, it supports nested models as well as helping functions and context, for example:\nfrom typing import Any\n\nfrom pydantic import BaseModel, Field\nfrom pymapme.models.mapping import MappingModel\n\n\nclass Person(BaseModel):\n name: str\n surname: str\n\n\nclass Profile(BaseModel):\n nickname: str\n person: Person\n\n\nclass User(MappingModel):\n nickname: str = Field(source='nickname')\n first_name: str = Field(source='person.name')\n surname: str = Field(source='person.surname')\n full_name: str = Field(source_func='_get_full_name')\n\n @staticmethod\n def _get_full_name(model: Profile, default: Any):\n return model.person.name + ' ' + model.person.surname\n\n\nprofile = Profile(nickname='baobab', person=Person(name='John', surname='Smith'))\nuser = User.build_from_model(profile)\nprint(user.dict()) # {'nickname': 'baobab', 'first_name': 'John', 'surname': 'Smith', 'full_name': 'John Smith'}\n\nOr for your example, it would look like:\nd = {\n \"p_id\": 1,\n \"billing\": {\n \"first_name\": \"test\"\n }\n}\n\n\nclass Billing(BaseModel):\n first_name: str\n\n\nclass Data(BaseModel):\n p_id: int\n billing: Billing\n\n\nclass Order(MappingModel):\n p_id: int\n pre_name: str = Field(source='billing.first_name')\n\n\norder = Order.build_from_model(Data(**d))\nprint(order.dict())\n\nNote: I'm the author, pull requests and any suggestions are welcome!\n",
"You can nest pydantic models by making your lower levels another pydantic model. See as follows:\nclass MtnPayer(BaseModel):\n partyIdType: str\n partyId: str\n\nclass MtnPayment(BaseModel):\n financialTransactionId: str\n externalId: str\n amount: str\n currency: str\n payer: MtnPayer\n payerMessage: str\n payeeNote: str\n status: str\n reason: str\n\nSee the payer item in the second model\n"
] |
[
11,
0,
0
] |
[] |
[] |
[
"pydantic",
"python"
] |
stackoverflow_0066570894_pydantic_python.txt
|
Q:
How do I get these outputs in 1 single list or dictionary
from bs4 import BeautifulSoup
import requests
with open("htmlviewer.html") as fp:
soup = BeautifulSoup(fp, "html.parser")
gp = soup.find_all("a")
for link in gp:
bs = link.get('href')
I am using this code to extract links from the source code, and here is my output:
None
https://support.google.com/websearch/answer/181196?hl=en-IN
None
https://www.google.com/webhp?hl=en&ictx=2&sa=X&ved=0ahUKEwj88YTzkL_7AhX9TGwGHZQpBVEQPQgJ
https://chromedriver.chromium.org/
/search?rlz=1C1CHBD_enIN1032IN1032&sxsrf=ALiCzsZzV82nGh7PsFzltlGMqVaKe-JR2Q:1669028827453&q=What+is+a+Chrome+WebDriver%3F&sa=X&ved=2ahUKEwj88YTzkL_7AhX9TGwGHZQpBVEQzmd6BAgUEAUhttps://www.selenium.dev/documentation/webdriver/getting_started/install_drivers/
https://splinter.readthedocs.io/en/latest/drivers/chrome.html
https://subscription.packtpub.com/book/web-development/9781784392512/1/ch01lvl1sec15/setting-up-chromedriver-for-google-chrome
I want all the links in 1 single list or dictionary
if I do this
bs = {link.get('href')}
I am getting every single link in a new set each time. Can anyone help? I am new at coding.
Also, how do I select links starting with https and ignore /search? I know these are very stupid questions, but I am a week into learning python.
A:
First create an empty list outside the for loop, e.g. links = [] and then inside your for loop do links.append(link.get("href"))
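A sketch of the full loop, which also skips None hrefs and keeps only the https links as asked (names follow the question's code):
links = []
for link in gp:
    href = link.get("href")
    # Drop anchors without an href and relative paths like /search
    if href and href.startswith("https"):
        links.append(href)
print(links)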
|
How do I get these outputs in 1 single list or dictionary
|
from bs4 import BeautifulSoup
import requests
with open("htmlviewer.html") as fp:
soup = BeautifulSoup(fp, "html.parser")
gp = soup.find_all("a")
for link in gp:
bs = link.get('href')
I am using this code to extract links from the source code, and here is my output:
None
https://support.google.com/websearch/answer/181196?hl=en-IN
None
https://www.google.com/webhp?hl=en&ictx=2&sa=X&ved=0ahUKEwj88YTzkL_7AhX9TGwGHZQpBVEQPQgJ
https://chromedriver.chromium.org/
/search?rlz=1C1CHBD_enIN1032IN1032&sxsrf=ALiCzsZzV82nGh7PsFzltlGMqVaKe-JR2Q:1669028827453&q=What+is+a+Chrome+WebDriver%3F&sa=X&ved=2ahUKEwj88YTzkL_7AhX9TGwGHZQpBVEQzmd6BAgUEAUhttps://www.selenium.dev/documentation/webdriver/getting_started/install_drivers/
https://splinter.readthedocs.io/en/latest/drivers/chrome.html
https://subscription.packtpub.com/book/web-development/9781784392512/1/ch01lvl1sec15/setting-up-chromedriver-for-google-chrome
I want all the links in 1 single list or dictionary
if I do this
bs = {link.get('href')}
I am getting every single link in a new set each time. Can anyone help? I am new at coding.
Also, how do I select links starting with https and ignore /search? I know these are very stupid questions, but I am a week into learning python.
|
[
"First create an empty list outside the for loop, e.g. links = [] and then inside your for loop do links.append(link.get(\"href\"))\n"
] |
[
0
] |
[] |
[] |
[
"list",
"python",
"web_scraping"
] |
stackoverflow_0074558002_list_python_web_scraping.txt
|
Q:
Converting Flask form data to JSON only gets first value
I want to take input from an HTML form and give the output in JSON format. When multiple values are selected they are not converted into JSON arrays, only the first value is used.
@app.route('/form')
def show_form():
return render_template('form.html')
@app.route("/result", methods=['POST'])
def show_result():
result = request.form
return render_template('result.html', result=result)
form.html:
<form method=POST>
<input name=server>
<select name=owners multiple>
<option value="thor">thor</option>
<option value="loki">loki</option>
<option value="flash">flash</option>
<option value="batman">batman</option>
</select>
<input type=submit>
</form>
result.html:
{{ result|tojson }}
When multiple values for owner are selected, "thor" and "flash", the output shows only one value:
{"server": "app-srv", "owners": "thor"}
I expect owners to be a list:
{"server": "app-srv", "owners": ["thor", "flash"]}
How do I display the form as JSON without losing list values?
A:
request.form is a MultiDict. Iterating over a multidict only returns the first value for each key. To get a dictionary with lists of values, use to_dict(flat=False).
result = request.form.to_dict(flat=False)
All values will be lists, even if there's only one item, for consistency. If you want to flatten single-value items, you need to process the data manually. Use iterlists with a dict comprehension.
result = {
key: value[0] if len(value) == 1 else value
for key, value in request.form.iterlists()
}
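If you only need the multiple values for one known field, request.form.getlist() is the targeted alternative:
owners = request.form.getlist("owners")  # e.g. ["thor", "flash"]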
|
Converting Flask form data to JSON only gets first value
|
I want to take input from an HTML form and give the output in JSON format. When multiple values are selected they are not converted into JSON arrays, only the first value is used.
@app.route('/form')
def show_form():
return render_template('form.html')
@app.route("/result", methods=['POST'])
def show_result():
result = request.form
return render_template('result.html', result=result)
form.html:
<form method=POST>
<input name=server>
<select name=owners multiple>
<option value="thor">thor</option>
<option value="loki">loki</option>
<option value="flash">flash</option>
<option value="batman">batman</option>
</select>
<input type=submit>
</form>
result.html:
{{ result|tojson }}
When multiple values for owner are selected, "thor" and "flash", the output shows only one value:
{"server": "app-srv", "owners": "thor"}
I expect owners to be a list:
{"server": "app-srv", "owners": ["thor", "flash"]}
How do I display the form as JSON without losing list values?
|
[
"request.form is a MultiDict. Iterating over a multidict only returns the first value for each key. To get a dictionary with lists of values, use to_dict(flat=False).\nresult = request.form.to_dict(flat=False)\n\nAll values will be lists, even if there's only one item, for consistency. If you want to flatten single-value items, you need to process the data manually. Use iterlists with a dict comprehension.\nresult = {\n key: value[0] if len(value) == 1 else value\n for key, value in request.form.iterlists()\n}\n\n"
] |
[
21
] |
[
"Difference in results when using the \"flat\" parameter:\nresult = request.form.to_dict(flat=True)\nResult: {'a': '6', 'b': '7', 'c': '8'}\nresult = request.form.to_dict(flat=False)\nResult: {'a': ['6'], 'b': ['7'], 'c': ['8']}\n"
] |
[
-1
] |
[
"flask",
"jinja2",
"json",
"python"
] |
stackoverflow_0045590988_flask_jinja2_json_python.txt
|
Q:
How to post Telegraph article?
I want to post a Telegra.ph article using Python and Telegraph API. I tried modules telegraph and python-telegraphapi, but I cannot do it. I try to use example codes of the modules:
from telegraph import Telegraph
telegraph = Telegraph()
telegraph.create_account(short_name='1337')
response = telegraph.create_page(
'Hey',
html_content='<p>Hello, world!</p>'
)
print('http://telegra.ph/{}'.format(response['path']))
and here's what happens:
File "AutoContent.py", line 9, in <module>
html_content='<p>Hello, world!</p>'
File "C:\Program Files\Python36\lib\site-packages\telegraph\api.py", line 168, in create_page
'return_content': return_content
File "C:\Program Files\Python36\lib\site-packages\telegraph\api.py", line 40, in method
raise TelegraphException(response.get('error'))
telegraph.exceptions.TelegraphException: PAGE_SAVE_FAILED
Another code:
from telegraphapi import Telegraph
telegraph = Telegraph()
telegraph.createAccount("PythonTelegraphAPI")
page = telegraph.createPage("Hello world!", html_content="<b>Welcome, TelegraphAPI!</b>")
print('http://telegra.ph/{}'.format(page['path']))
And what happens:
File "AutoContent.py", line 6, in <module>
page = telegraph.createPage("Hello world!", html_content="<b>Welcome, TelegraphAPI!</b>")
File "C:\Program Files\Python36\lib\site-packages\telegraphapi\main.py", line 139, in createPage
"return_content": return_content
File "C:\Program Files\Python36\lib\site-packages\telegraphapi\main.py", line 32, in make_method
post_request.json()['error'])
telegraphapi.exceptions.TelegraphAPIException: Error while executing createPage: PAGE_SAVE_FAILED
Please, help me! How can I post a Telegraph article using Python?
A:
I think you should replace short_name on yours when you create_account:
telegraph.create_account(short_name='<your_name>')
A:
There are 2 different solutions for using the API.
import json
import requests
MAIN_URL = 'https://api.telegra.ph/'
class apiuz():
def __init__(self):
self.http = requests.Session()
def callMethod(self, n_method=None, a_method=None):
xitoy2= MAIN_URL + n_method.__name__+'?'
for x,y in a_method:
if x!='self' and y!=None: xitoy2+=x+'='+str(y)+'&'
response = self.http.get(xitoy2[:-1])
xitoy2 = eval(response.text.replace('\/','/').replace('true','True').replace('false','False'))
return xitoy2
#Methods created by @apiuz
def createAccount(self, short_name=None, author_name=None, author_url=None):
return self.callMethod(n_method=self.createAccount, a_method=locals().items())
def editAccountInfo(self, access_token=None, short_name=None, author_name=None, author_url=None):
return self.callMethod(n_method=self.editAccountInfo, a_method=locals().items())
def getAccountInfo(self, access_token=None, field=None):
return self.callMethod(n_method=self.getAccountInfo, a_method=locals().items())
def revokeAccessToken(self, access_token=None):
return self.callMethod(n_method=self.revokeAccessToken, a_method=locals().items())
def createPage(self, access_token=None, title=None, author_name=None, author_url=None,
content=None):
return self.callMethod(n_method=self.createPage, a_method=locals().items())
def editPage(self, access_token=None, path=None, title=None, content=None,
author_name=None, author_url=None):
return self.callMethod(n_method=self.editPage, a_method=locals().items())
def getPage(self, path=None):
return self.callMethod(n_method=self.getPage, a_method=locals().items())
def getPageList(self, access_token=None, offset=0, limit=50):
return self.callMethod(n_method=self.getPageList, a_method=locals().items())
def getViews(self, path=None, year=None, month=None, day=None, hour=None):
        return self.callMethod(n_method=self.getViews, a_method=locals().items())
#or
import requests
params = {
'access_token': "<ACCES_TOKEN>",
'path': '/Sample-Test-02-17-4',
'title': 'My Title',
'content':[ {"tag":"p","children":["A link to google",{"tag":"a","attrs":{"href":"http://google.com/","target":"_blank"},"children":["http://google.com"]}]} ],
'author_name': 'My Name',
'author_url': None,
'return_content': 'true'
}
url = 'https://api.telegra.ph/editPage'
r = requests.post(url, json=params)
r.raise_for_status()
response = r.json()
print(response)
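A createPage call through plain requests follows the same shape (a sketch; replace the token placeholder, and content must be a list of Node objects just like in the editPage example above):
import requests

params = {
    'access_token': '<ACCESS_TOKEN>',
    'title': 'My Title',
    'author_name': 'My Name',
    'content': [{'tag': 'p', 'children': ['Hello, world!']}],
    'return_content': 'true',
}

r = requests.post('https://api.telegra.ph/createPage', json=params)
r.raise_for_status()
print(r.json())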
A:
I don't know WTF is going wrong.
But running the code under interactive mode (via the command line) will cause a PAGE_SAVE_FAILED error. Instead, running the code under script mode will make a successful page save (WTF).
IDK if you are still struggling, but I've made it work (it cost me a whole afternoon). It might not be suitable for you, but I think you should give it a try.
Also, you should keep the access_token, or not use the create_account method. Use the @Telegraph bot and create your account, then log in as <Your account> on this device. It will automatically open your browser and set the access_token in your browser; copy that access_token (you can find it by clicking the little lock icon before the URL in Chrome) and use Telegraph(access_token = '<Your access_token>') instead.
|
How to post Telegraph article?
|
I want to post a Telegra.ph article using Python and Telegraph API. I tried modules telegraph and python-telegraphapi, but I cannot do it. I try to use example codes of the modules:
from telegraph import Telegraph
telegraph = Telegraph()
telegraph.create_account(short_name='1337')
response = telegraph.create_page(
'Hey',
html_content='<p>Hello, world!</p>'
)
print('http://telegra.ph/{}'.format(response['path']))
and here's what happens:
File "AutoContent.py", line 9, in <module>
html_content='<p>Hello, world!</p>'
File "C:\Program Files\Python36\lib\site-packages\telegraph\api.py", line 168, in create_page
'return_content': return_content
File "C:\Program Files\Python36\lib\site-packages\telegraph\api.py", line 40, in method
raise TelegraphException(response.get('error'))
telegraph.exceptions.TelegraphException: PAGE_SAVE_FAILED
Another code:
from telegraphapi import Telegraph
telegraph = Telegraph()
telegraph.createAccount("PythonTelegraphAPI")
page = telegraph.createPage("Hello world!", html_content="<b>Welcome, TelegraphAPI!</b>")
print('http://telegra.ph/{}'.format(page['path']))
And what happens:
File "AutoContent.py", line 6, in <module>
page = telegraph.createPage("Hello world!", html_content="<b>Welcome, TelegraphAPI!</b>")
File "C:\Program Files\Python36\lib\site-packages\telegraphapi\main.py", line 139, in createPage
"return_content": return_content
File "C:\Program Files\Python36\lib\site-packages\telegraphapi\main.py", line 32, in make_method
post_request.json()['error'])
telegraphapi.exceptions.TelegraphAPIException: Error while executing createPage: PAGE_SAVE_FAILED
Please, help me! How can I post a Telegraph article using Python?
|
[
"I think you should replace short_name on yours when you create_account:\ntelegraph.create_account(short_name='<your_name>')\n",
"There are 2 different solutions for using the API.\nimport json\nMAIN_URL = 'https://api.telegra.ph/'\n\nclass apiuz():\n def __init__(self):\n self.http = requests.Session()\n\n def callMethod(self, n_method=None, a_method=None):\n xitoy2= MAIN_URL + n_method.__name__+'?'\n for x,y in a_method:\n if x!='self' and y!=None: xitoy2+=x+'='+str(y)+'&'\n response = self.http.get(xitoy2[:-1])\n xitoy2 = eval(response.text.replace('\\/','/').replace('true','True').replace('false','False'))\n return xitoy2 \n\n #Methods created by @apiuz\n def createAccount(self, short_name=None, author_name=None, author_url=None):\n return self.callMethod(n_method=self.createAccount, a_method=locals().items())\n\n def editAccountInfo(self, access_token=None, short_name=None, author_name=None, author_url=None):\n return self.callMethod(n_method=self.editAccountInfo, a_method=locals().items())\n\n def getAccountInfo(self, access_token=None, field=None):\n return self.callMethod(n_method=self.getAccountInfo, a_method=locals().items())\n\n def revokeAccessToken(self, access_token=None):\n return self.callMethod(n_method=self.revokeAccessToken, a_method=locals().items())\n\n def createPage(self, access_token=None, title=None, author_name=None, author_url=None,\n content=None):\n return self.callMethod(n_method=self.createPage, a_method=locals().items())\n\n def editPage(self, access_token=None, path=None, title=None, content=None,\n author_name=None, author_url=None):\n return self.callMethod(n_method=self.editPage, a_method=locals().items())\n\n def getPage(self, path=None):\n return self.callMethod(n_method=self.getPage, a_method=locals().items())\n\n def getPageList(self, access_token=None, offset=0, limit=50):\n return self.callMethod(n_method=self.getPageList, a_method=locals().items())\n\n def getViews(self, path=None, year=None, month=None, day=None, hour=None):\n return self.callMethod(n_method=self.getPageList, a_method=locals().items())\n\n#or\n\nimport requests\n\nparams = {\n 'access_token': \"<ACCES_TOKEN>\",\n 'path': '/Sample-Test-02-17-4',\n 'title': 'My Title',\n 'content':[ {\"tag\":\"p\",\"children\":[\"A link to google\",{\"tag\":\"a\",\"attrs\":{\"href\":\"http://google.com/\",\"target\":\"_blank\"},\"children\":[\"http://google.com\"]}]} ],\n 'author_name': 'My Name',\n 'author_url': None,\n 'return_content': 'true'\n}\n\nurl = 'https://api.telegra.ph/editPage'\n\nr = requests.post(url, json=params)\nr.raise_for_status()\nresponse = r.json()\nprint response```\n\n",
"I don't know WTF is going wrong.\nBut running the code under interactive mode (via the command line) will cause a PAGE_SAVE_FAILED error. Instead, running the code under script mode will make a successful page save (WTF).\nIDK if you are still struggling, but I've made it (cost me a whole afternoon). It might be not suitable for you, but I think you should have a try.\nAlso, you should keep the access_toke, or not use the create_account method. Use the @Telegraph bot and create your account, then log in as <Your account> on this device. It will automatically open your browser and set the access_token to your browser, copy that access_token (you can find it by clicking the little lock icon before the URL in chrome) and use Telegraph(access_token = '<Your access_token>') instead.\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"module",
"python"
] |
stackoverflow_0048823781_module_python.txt
|
Q:
Identify if records exist in another dataframe, within the first dataframe
I have two csv files, OrderOne (approx 105k records) & OrderTwo (approx 115k records)
I want to add a column in OrderTwo which states "TRUE" if that record is found in OrderOne, and "FALSE" if not.
The new column should be appended and the file output.
There is no shared key, so I'm creating one. It will be the concatenation of columns within the orders, which are in different formats from different suppliers. For simplicity in this example, it will be 'Forename' + 'Surname'.
I am reading the two data tables in, one of which I only need a few columns from. I'm converting names to upper case & stripping out white space to ensure they match correctly.
I've read the outputs from these files and they look correct. So far, so good.
import pandas as pd
orderoneData = pd.read_csv ('orderone.csv', usecols=['Customer Reference','Forename', 'Surname'], index_col=False)
orderoneData.set_index('Customer Reference', inplace=True)
orderoneData["FNSN"] = orderoneData['Forename'].str.strip() + orderoneData["Surname"].str.strip()
orderoneData["FNSN"] = orderoneData["FNSN"].str.upper()
ordertwoData = pd.read_csv ('ordertwo.csv')
ordertwoData.set_index('Supplier Reference', inplace=True)
ordertwoData["FNSN"] = ordertwoData['Forename'].str.strip() + ordertwoData["Surname"].str.strip()
ordertwoData["FNSN"] = ordertwoData["FNSN"].str.upper()
Next I'm merging; I'm using OrderTwo as the left (because that's the file I want the new column added to). I intend to change the values of the indicator to Boolean ('both' = True, otherwise False) but I haven't got that far yet.
d = (
ordertwoData.merge(orderoneData['FNSN'],
on=['FNSN'],
how='left',
indicator=True,
)
)
d.reset_index(drop=True, inplace=True)
At this point, I have far too many records (approx 179k; I'm expecting the same as OrderTwo, which is 115k). My understanding was that a left join should have the same number of records as the left table, which in my case is ordertwoData
#I thought I might have used the wrong merge criteria and it was creating duplicates, so I thought I would just remove them
d1 = d.drop_duplicates()
print(d1)
d1.to_csv("d.csv")
Dropping duplicates leaves me with too few records, so I'm confused how I get the right result.
Any help much appreciated!
A:
As @Clegane identified, the issue here was not the code but the input data containing duplicate records. By including the original reference in the merge then dropping duplicates on OrderTwo['Supplier Reference'] I got the expected answer. Thanks!
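For reference, a minimal sketch of that working approach (assuming the dataframes built in the question; column names like 'Supplier Reference' are taken from the question): de-duplicate the lookup side before merging, turn the indicator into a Boolean, then de-duplicate on the original reference.
import pandas as pd

# Assumes orderoneData / ordertwoData from the question are already loaded.
d = ordertwoData.reset_index().merge(
    orderoneData[['FNSN']].drop_duplicates(),  # de-dup lookup side to avoid row blow-up
    on='FNSN',
    how='left',
    indicator=True,
)
d['Found'] = d['_merge'].eq('both')  # 'both' -> True, 'left_only' -> False
d = d.drop(columns='_merge').drop_duplicates(subset=['Supplier Reference'])
d.to_csv('d.csv', index=False)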
|
Identify if records exist in another dataframe, within the first dataframe
|
I have two csv files, OrderOne (approx 105k records) & OrderTwo (approx 115k records)
I want to add a column in OrderTwo which states "TRUE" if that record is found in OrderOne, and "FALSE" if not.
The new column should be appended and the file output.
There is no shared key, so I'm creating one. It will be the concatenation of columns within the orders, which are in different formats from different suppliers. For simplicity in this example, it will be 'Forename' + 'Surname'.
I am reading the two data tables in, one of which I only need a few columns from. I'm converting names to upper case & stripping out white space to ensure they match correctly.
I've read the outputs from these files and they look correct. So far, so good.
import pandas as pd
orderoneData = pd.read_csv ('orderone.csv', usecols=['Customer Reference','Forename', 'Surname'], index_col=False)
orderoneData.set_index('Customer Reference', inplace=True)
orderoneData["FNSN"] = orderoneData['Forename'].str.strip() + orderoneData["Surname"].str.strip()
orderoneData["FNSN"] = orderoneData["FNSN"].str.upper()
ordertwoData = pd.read_csv ('ordertwo.csv')
ordertwoData.set_index('Supplier Reference', inplace=True)
ordertwoData["FNSN"] = ordertwoData['Forename'].str.strip() + ordertwoData["Surname"].str.strip()
ordertwoData["FNSN"] = ordertwoData["FNSN"].str.upper()
Next I'm merging; I'm using OrderTwo as the left (because that's the file I want the new column added to). I intend to change the values of the indicator to Boolean ('both' = True, otherwise False) but I haven't got that far yet.
d = (
ordertwoData.merge(orderoneData['FNSN'],
on=['FNSN'],
how='left',
indicator=True,
)
)
d.reset_index(drop=True, inplace=True)
At this point, I have far too many records (approx 179k; I'm expecting the same as OrderTwo, which is 115k). My understanding was that a left join should have the same number of records as the left table, which in my case is ordertwoData
#I thought I might have used the wrong merge criteria and it was creating duplicates, so I thought I would just remove them
d1 = d.drop_duplicates()
print(d1)
d1.to_csv("d.csv")
Dropping duplicates leaves me with too few records, so I'm confused how I get the right result.
Any help much appreciated!
|
[
"As @Clegane identified, the issue here was not the code but the input data containing duplicate records. By including the original reference in the merge then dropping duplicates on OrderTwo['Supplier Reference'] I got the expected answer. Thanks!\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"merge",
"python"
] |
stackoverflow_0074550989_dataframe_merge_python.txt
|
Q:
Single column fetch returning in list in python postgresql
Database:
id  trade  token
1   abc    5523
2   fdfd   5145
3   sdfd   2899
Code:
def db_fetchquery(sql):
conn = psycopg2.connect(database="trade", user='postgres', password='jps', host='127.0.0.1', port= '5432')
cursor = conn.cursor()
conn.autocommit = True
cursor.execute(sql)
row = cursor.rowcount
if row >= 1:
data = cursor.fetchall()
conn.close()
return data
conn.close()
return False
print(db_fetchquery("SELECT token FROM script"))
Result:
[(5523,),(5145,),(2899,)]
But I need results as:
[5523,5145,2899]
I also tried print(db_fetchquery("SELECT zerodha FROM script")[0]) but this gave result as:- [(5523,)]
Also, why is there ',' / list inside list when I am fetching only one column?
A:
Not sure if you are able to do that without further processing but I would do it like this:
data = [x[0] for x in data]
which convert the list of tuples to a 1D list
A:
To convert [(5523,),(5145,),(2899,)] to [5523, 5145, 2899] you can use lambda or list comprehension
using lambda
res = [(5523,),(5145,),(2899,)]
res = list(map(lambda x: x[0], res))
print(res)
#output: [5523, 5145, 2899]
using list comprehension
res = [(5523,),(5145,),(2899,)]
res = [i[0] for i in res]
print(res)
#output: [5523, 5145, 2899]
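To answer the last part of the question: psycopg2 (following the Python DB-API) returns every row as a tuple, even when you select a single column, which is why each value carries a trailing comma. A small sketch of flattening inside the function itself, reusing the connection settings from the question:
import psycopg2

def db_fetchquery_flat(sql):
    # Same connection settings as in the question (assumed).
    conn = psycopg2.connect(database="trade", user='postgres', password='jps', host='127.0.0.1', port='5432')
    cursor = conn.cursor()
    cursor.execute(sql)
    rows = cursor.fetchall()          # list of 1-tuples, e.g. [(5523,), (5145,), (2899,)]
    conn.close()
    return [row[0] for row in rows]   # flatten to [5523, 5145, 2899]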
|
Single column fetch returning in list in python postgresql
|
Database:
id  trade  token
1   abc    5523
2   fdfd   5145
3   sdfd   2899
Code:
def db_fetchquery(sql):
conn = psycopg2.connect(database="trade", user='postgres', password='jps', host='127.0.0.1', port= '5432')
cursor = conn.cursor()
conn.autocommit = True
cursor.execute(sql)
row = cursor.rowcount
if row >= 1:
data = cursor.fetchall()
conn.close()
return data
conn.close()
return False
print(db_fetchquery("SELECT token FROM script"))
Result:
[(5523,),(5145,),(2899,)]
But I need results as:
[5523,5145,2899]
I also tried print(db_fetchquery("SELECT zerodha FROM script")[0]) but this gave result as:- [(5523,)]
Also, why is there ',' / list inside list when I am fetching only one column?
|
[
"Not sure if you are able to do that without further processing but I would do it like this:\ndata = [x[0] for x in data]\n\nwhich convert the list of tuples to a 1D list\n",
"To convert [(5523,),(5145,),(2899,)] to [5523, 5145, 2899] you can use lambda or list comprehension\nusing lambda\nres = [(5523,),(5145,),(2899,)]\nres = list(map(lambda x: x[0], res)) \nprint(res) \n#output: [5523, 5145, 2899]\n\nusing list comprehension\nres = [(5523,),(5145,),(2899,)]\nres = [i[0] for i in res]\nprint(res) \n#output: [5523, 5145, 2899]\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"list",
"postgresql",
"python",
"python_3.x",
"sql"
] |
stackoverflow_0068842475_list_postgresql_python_python_3.x_sql.txt
|
Q:
Alternative methods to cartopy functions (manipulating shapely linestrings - Geodetic)
Long story short I can't get cartopy to install in my environment so I'm looking for alternative ways of doing things it might be used for.
I've recently been following this tutorial which uses cartopy to alter the path of shapely linestrings to take into account the curvature of the earth:
"Cartopy can be used to manipulate the way that lines are plotted. The transform=ccrs.Geodetic() method transforms the LineStrings to account for the earths curvature"
assuming I can just google the actual value that is the curvature of the earth, are there any ways I could manually manipulate the linestrings to achieve roughly the same effect?
A:
It's probably feasible to implement the great circle algorithms yourself, but there are also other options. If you manage the install pyproj for example, you can use the example below, it samples a given amount of points between two locations on earth.
Note that although I still use Cartopy to show the coastlines (for reference), the actual plotting of the line (great circle) is fully independent of Cartopy and can be used with a plain Matplotlib axes as well.
And you can always read the coastlines or other borders/annotation etc for your map without Cartopy as well, as long as you pay attention to the projection. Matplotlib has all the primitives to do this (lines, PathCollections etc), it's just that Cartopy makes it a lot more convenient.
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from pyproj import Geod
# from
lat1 = 55.
lon1 = -65
# to
lat2 = 30
lon2 = 80
n_samples = 1000
g = Geod(ellps='WGS84')
coords = g.npts(
lon1, lat1, lon2, lat2, n_samples, initial_idx=0, terminus_idx=0,
)
lons, lats = zip(*coords)
fig, ax = plt.subplots(
figsize=(10,5), dpi=86, facecolor="w",
subplot_kw=dict(projection=ccrs.PlateCarree(), xlim=(-180,180), ylim=(-90, 90)),
)
ax.axis("off")
ax.coastlines(lw=.3)
ax.plot(lons, lats, "r-") # <- no cartopy, just x/y points!
The amount of points you sample should probably depend on the distance between both points, which can also be calculated using the g.inv(...) function. And the amount is a tradeoff between accuracy and performance. For most applications you probably want to keep it as low as possible, just above where you start to visibly see the effect.
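As a rough sketch of that last point (the ~20 km spacing below is an arbitrary choice, not a recommendation):
from pyproj import Geod

g = Geod(ellps='WGS84')
lon1, lat1, lon2, lat2 = -65, 55., 80, 30      # same endpoints as above
_, _, dist_m = g.inv(lon1, lat1, lon2, lat2)   # forward az, back az, distance in metres
n_samples = max(2, int(dist_m / 20_000))       # e.g. one point every ~20 km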
|
Alternative methods to cartopy functions (manipulating shapely linestrings - Geodetic)
|
Long story short I can't get cartopy to install in my environment so I'm looking for alternative ways of doing things it might be used for.
I've recently been following this tutorial which uses cartopy to alter the path of shapely linestrings to take into account the curvature of the earth:
"Cartopy can be used to manipulate the way that lines are plotted. The transform=ccrs.Geodetic() method transforms the LineStrings to account for the earths curvature"
assuming I can just google the actual value that is the curvature of the earth, are there any ways I could manually manipulate the linestrings to achieve roughly the same effect?
|
[
"It's probably feasible to implement the great circle algorithms yourself, but there are also other options. If you manage the install pyproj for example, you can use the example below, it samples a given amount of points between two locations on earth.\nNote that although I still use Cartopy to show the coastlines (for reference), the actual plotting of the line (great circle) is fully independent of Cartopy and can be used with a plain Matplotlib axes as well.\nAnd you can always read the coastlines or other borders/annotation etc for your map without Cartopy as well, as long as you pay attention to the projection. Matplotlib has all the primatives to do this (lines, PathCollections etc), it's just that Cartopy makes it a lot more convenient.\nimport matplotlib.pyplot as plt\nimport cartopy.crs as ccrs\nfrom pyproj import Geod\n\n# from\nlat1 = 55.\nlon1 = -65\n\n# to\nlat2 = 30\nlon2 = 80\n\nn_samples = 1000\n\ng = Geod(ellps='WGS84')\ncoords = g.npts(\n lon1, lat1, lon2, lat2, n_samples, initial_idx=0, terminus_idx=0,\n)\nlons, lats = zip(*coords)\n\nfig, ax = plt.subplots(\n figsize=(10,5), dpi=86, facecolor=\"w\", \n subplot_kw=dict(projection=ccrs.PlateCarree(), xlim=(-180,180), ylim=(-90, 90)),\n)\nax.axis(\"off\")\nax.coastlines(lw=.3)\n\nax.plot(lons, lats, \"r-\") # <- no cartopy, just x/y points! \n\n\nThe amount of points you sample should probably depend on the distance between both points, which can also be calculated using the g.inv(...) function. And the amount is a tradeoff between accuracy and performance. For most applications you probably want to keep it as low as possible, just above where you start to visibly see the effect.\n"
] |
[
1
] |
[] |
[] |
[
"cartopy",
"gis",
"python",
"shapely"
] |
stackoverflow_0074554773_cartopy_gis_python_shapely.txt
|
Q:
Odoo t-field image is appearing empty to public
I have a simple controller which shows people's comments on the website along with their pictures. Everything works fine except that the image does not appear when the user logs out.
here is my controller
@http.route('/page/homepage', type='http', auth='public', website=True)
def comment_list(self):
comments = request.env['erp.comment'].sudo().search([], limit=10)
values = {
'user': comments,
}
return request.website.render('website.homepage', values)
and here is the xml content
<div class="ocomment-avatar">
<span t-field="p.image" t-field-options="{"widget": "image", "class": "img-rounded"}"/>
</div>
A:
Use img tag.
like
<span>
<img t-att-src="'p.image'" t-att-class="'img-rounded'" t-att-widget="'image'" />
</span>
Hope it will help you.
A:
I found the problem: it was due to a security rule. I added an open access rule to the module, and it worked!
A:
<img t-if="line.image_upload"
t-att-src="'data:image/png;base64,%s' % to_text(line.image_upload)"
style="max-height: 300px;"/>
line.image_upload is the binary field
A:
use t-att-src instead of span t-field tag
then it worked
A:
You can use the method "image_data_uri"
<img t-att-src="image_data_uri(image)" />
They had the same problem :
https://www.odoo.com/forum/help-1/how-to-display-a-image-in-qweb-from-ir-attachment-table-146979
|
Odoo t-field image is appearing empty to public
|
I have a simple controller which shows people's comments on the website along with their pictures. Everything works fine except that the image does not appear when the user logs out.
here is my controller
@http.route('/page/homepage', type='http', auth='public', website=True)
def comment_list(self):
comments = request.env['erp.comment'].sudo().search([], limit=10)
values = {
'user': comments,
}
return request.website.render('website.homepage', values)
and here is the xml content
<div class="ocomment-avatar">
<span t-field="p.image" t-field-options="{"widget": "image", "class": "img-rounded"}"/>
</div>
|
[
"Use img tag.\nlike\n<span>\n <img t-att-src=\"'p.image'\" t-att-class=\"'img-rounded'\" t-att-widget=\"'image'\" />\n</span>\n\nHope it will help you.\n",
"I found the problem, it was due to security reason, i add an open access rule of the module. and it worked!\n",
"<img t-if=\"line.image_upload\"\n t-att-src=\"'data:image/png;base64,%s' % to_text(line.image_upload)\"\n style=\"max-height: 300px;\"/>\n\nline.image_upload is the binary field\n",
"use t-att-src instead of span t-field tag\nthen it worked\n",
"You can use the method \"image_data_uri\"\n<img t-att-src=\"image_data_uri(image)\" />\n\nThey had the same problem :\nhttps://www.odoo.com/forum/help-1/how-to-display-a-image-in-qweb-from-ir-attachment-table-146979\n"
] |
[
0,
0,
0,
0,
0
] |
[] |
[] |
[
"odoo_8",
"odoo_9",
"openerp",
"python"
] |
stackoverflow_0038067886_odoo_8_odoo_9_openerp_python.txt
|
Q:
How to get an individual row in Big Query?
I want to get an individual row from the QueryJob in BQ. My query: select count(*) from ... returns a single row & I want to read the count value which is its first column. So if I can get the first row then I can do row[0] for the first column. I can iterate: row in queryJob but since I require only the first row this seems unnecessary.
Below is what I've tried:
row = self.client.query(count_query)
count = row.result()[0]
This gives an error:
'QueryJob' object is not subscriptable
How can I get individual rows from queryJob by the row index?
A:
Just do:
row = self.client.query(count_query)
result = row.result().total_rows
This will give the count from the query
A:
you can use to_dataframe():
result = self.client.query(count_query).to_dataframe()
#if you want to result as a integer:
result = self.client.query(count_query).to_dataframe()['first_column_name'].iat[0]
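If you specifically want the row-and-column access described in the question, the result() iterator can be consumed directly — a small sketch (assuming the same self.client as in the question; BigQuery Row objects support positional indexing):
job = self.client.query(count_query)
first_row = next(iter(job.result()))  # first (and only) row of the result
count = first_row[0]                  # first column of that row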
|
How to get an individual row in Big Query?
|
I want to get an individual row from the QueryJob in BQ. My query: select count(*) from ... returns a single row & I want to read the count value which is its first column. So if I can get the first row then I can do row[0] for the first column. I can iterate: row in queryJob but since I require only the first row this seems unnecessary.
Below is what I've tried:
row = self.client.query(count_query)
count = row.result()[0]
This gives an error:
'QueryJob' object is not subscriptable
How can I get individual rows from queryJob by the row index?
|
[
"Just do:\nrow = self.client.query(count_query)\nresult = row.result().total_rows\n\nThis will give the count from the query\n",
"you can use to_dataframe():\nresult = self.client.query(count_query).to_dataframe()\n\n#if you want to result as a integer:\nresult = self.client.query(count_query).to_dataframe()['first_column_name'].iat[0]\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"google_bigquery",
"pandas",
"python",
"sql"
] |
stackoverflow_0074557942_google_bigquery_pandas_python_sql.txt
|
Q:
Can't get table data from grid formatted website
I am trying to extract data from https://www.lipidmaps.org/databases/lmsd/LMSL01010001. I usually use beautifulsoup or pandas to extract table data. But the tables on the website don't seem to have been made with the table class. For example, the Calculated Physicochemical Properties table has been made with "flex-grow flex-shrink p-3 px-5".
How can I extract the data from the tables (specifically Calculated Physicochemical Properties table and SMILES value)?
I tried the following code but I get almost the whole website's text:
'soup.find("div")'.
I usually use pandas.read_table(link)
A:
Here is one way of getting that information, and displaying it into a dataframe format:
import requests
from bs4 import BeautifulSoup as bs
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_colwidth', None)
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'
}
big_list = []
r = requests.get('https://www.lipidmaps.org/databases/lmsd/LMSL01010001', headers=headers)
soup = bs(r.text, 'html.parser')
smiles = soup.select_one('div:-soup-contains("SMILES") > span:-soup-contains("(Click to copy)")').find_next('div').text.strip()
heavy_atoms = soup.select_one('strong:-soup-contains("Heavy Atoms")').find_next_sibling(string=True).strip()
rings = soup.select_one('strong:-soup-contains("Rings")').find_next_sibling(string=True).strip()
big_list.append((smiles, heavy_atoms, rings))
df = pd.DataFrame(big_list, columns=['SMILES', 'Heavy Atoms', 'Rings'])
print(df)
Result in terminal:
SMILES Heavy Atoms Rings
0 O(P(O)(=O)OC[C@@H]1[C@@H](O)[C@@H](O)[C@H](N2C(=O)NC(=O)C=C2)O1)P(O[C@H]1O[C@@H]([C@H]([C@@H]([C@H]1N)OC(C[C@@H](CCCCCCCCCCC)O)=O)O)CO)(=O)O 52 3
You can get the other datapoints as well, using the logic above. Also, make sure your packages are up to date.
BeautifulSoup documentation can be found here
|
Can't get table data from grid formatted website
|
I am trying to extract data from https://www.lipidmaps.org/databases/lmsd/LMSL01010001. I usually use beautifulsoup or pandas to extract table data. But the tables on the website don't seem to have been made with the table class. For example, the Calculated Physicochemical Properties table has been made with "flex-grow flex-shrink p-3 px-5".
How can I extract the data from the tables (specifically Calculated Physicochemical Properties table and SMILES value)?
I tried the following code but I get almost the whole website's text:
'soup.find("div")'.
I usually use pandas.read_table(link)
|
[
"Here is one way of getting that information, and displaying it into a dataframe format:\nimport requests\nfrom bs4 import BeautifulSoup as bs\nimport pandas as pd\n\npd.set_option('display.max_columns', None)\npd.set_option('display.max_colwidth', None)\n\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'\n}\n\nbig_list = []\nr = requests.get('https://www.lipidmaps.org/databases/lmsd/LMSL01010001', headers=headers)\nsoup = bs(r.text, 'html.parser')\nsmiles = soup.select_one('div:-soup-contains(\"SMILES\") > span:-soup-contains(\"(Click to copy)\")').find_next('div').text.strip()\nheavy_atoms = soup.select_one('strong:-soup-contains(\"Heavy Atoms\")').find_next_sibling(string=True).strip()\nrings = soup.select_one('strong:-soup-contains(\"Rings\")').find_next_sibling(string=True).strip()\nbig_list.append((smiles, heavy_atoms, rings))\ndf = pd.DataFrame(big_list, columns=['SMILES', 'Heavy Atoms', 'Rings'])\nprint(df)\n\nResult in terminal:\n SMILES Heavy Atoms Rings\n0 O(P(O)(=O)OC[C@@H]1[C@@H](O)[C@@H](O)[C@H](N2C(=O)NC(=O)C=C2)O1)P(O[C@H]1O[C@@H]([C@H]([C@@H]([C@H]1N)OC(C[C@@H](CCCCCCCCCCC)O)=O)O)CO)(=O)O 52 3\n\nYou can get the other datapoints as well, using the logic above. Also, make sure your packages are up to date.\nBeautifulSoup documentation can be found here\n"
] |
[
0
] |
[] |
[] |
[
"html",
"python",
"web_scraping"
] |
stackoverflow_0074558011_html_python_web_scraping.txt
|
Q:
Add a gaussian noise to a Tensorflow Dataset
I have a CSVDataset which has around 6 million rows. For the purposes of this question I am making a TensorSliceDataset as following:-
import tensorflow as tf
import numpy as np
datasetz = tf.data.Dataset.from_tensor_slices((np.random.randn(10, 5), np.random.randn(10,1)))
datasetz = datasetz.map(lambda x, y: (x, x))
datasetz
# <MapDataset element_spec=(TensorSpec(shape=(5,), dtype=tf.float64, name=None), TensorSpec(shape=(5,), dtype=tf.float64, name=None))>
I am trying to make a denoising autoencoder. For this, I need to add some noise to my dataset. If dataset were a numpy.ndarray, I could've added the noise the following way:-
corruption_level = 0.3
datasetz = datasetz + (np.random.randn(10, 5) * corruption_level)
But I don't know how to do it with a CSVDataset object.
A:
This adds random noise to each row:
datasetz = tf.data.Dataset.from_tensor_slices((np.random.randn(10, 5), np.random.randn(10,1)))
datasetz = datasetz.map(lambda x, y: (x+corruption_level*tf.random.uniform(shape=(5,), dtype=tf.float64), y))
datasetz
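Note that tf.random.uniform draws uniform noise; since the question asks for Gaussian noise, a variant using tf.random.normal could look like the sketch below (and, as an assumption about the intent, it keeps the clean x as the target, which is what a denoising autoencoder usually wants):
import tensorflow as tf
import numpy as np

corruption_level = 0.3
datasetz = tf.data.Dataset.from_tensor_slices((np.random.randn(10, 5), np.random.randn(10, 1)))
# (noisy input, clean target) pairs for a denoising autoencoder
datasetz = datasetz.map(
    lambda x, y: (x + tf.random.normal(shape=(5,), stddev=corruption_level, dtype=tf.float64), x))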
|
Add a gaussian noise to a Tensorflow Dataset
|
I have a CSVDataset which has around 6 million rows. For the purposes of this question I am making a TensorSliceDataset as following:-
import tensorflow as tf
import numpy as np
datasetz = tf.data.Dataset.from_tensor_slices((np.random.randn(10, 5), np.random.randn(10,1)))
datasetz = datasetz.map(lambda x, y: (x, x))
datasetz
# <MapDataset element_spec=(TensorSpec(shape=(5,), dtype=tf.float64, name=None), TensorSpec(shape=(5,), dtype=tf.float64, name=None))>
I am trying to make a denoising autoencoder. For this, I need to add some noise to my dataset. If dataset were a numpy.ndarray, I could've added the noise the following way:-
corruption_level = 0.3
datasetz = datasetz + (np.random.randn(10, 5) * corruption_level)
But I don't know how to do it with a CSVDataset object.
|
[
"This adds each row with random noise:\ndatasetz = tf.data.Dataset.from_tensor_slices((np.random.randn(10, 5), np.random.randn(10,1)))\ndatasetz = datasetz.map(lambda x, y: (x+corruption_level*tf.random.uniform(shape=(5,), dtype=tf.float64), y))\ndatasetz\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"python_3.x",
"tensorflow",
"tensorflow2.0"
] |
stackoverflow_0074558138_python_python_3.x_tensorflow_tensorflow2.0.txt
|
Q:
Translating from Python to C++
I'm struggling to try and convert this from python into c++, please if anyone could help it would be greatly appreciated. Assume that the timern and Day variables are already given so no need for the time.ctime(thing)
if Day!=4:
timeschedule=["08:30","09:20","10:10","11:00","11:30","12:20","13:10","14:00","14:45"]
else:
timeschedule=["08:30","09:20","09:50","10:40","11:30"]
timern=time.ctime()[11:16]
for i in timeschedule:
if timern<i:
Finishes=i
Period=timeschedule.index(Finishes)+1
break
else:
continue
timern=timedelta(hours=int(timern[0:2]), minutes=int(timern[3:5]))
Finishes=timedelta(hours=int(Finishes[0:2]), minutes=int(Finishes[3:5]))
TimeLeft= Finishes-timern
seconds = TimeLeft.total_seconds()
hours = seconds // 3600
minutes = (seconds % 3600) // 60
seconds = seconds % 60
DayL=Days[Day]
Output=str(minutes)[0:2]+" minutes left till "+DayL[Period]
Here is my attempt at the conversion:
if (getCurrentDOWAsString=="Saturday" || getCurrentDOWAsString=="Sunday") {
break;
}
else if (getCurrentDOWAsString=="Friday") {
static const char *TimeSchedule[5] = {"08:30","09:20","09:50","10:40","11:30"};
}
else {
static const char *TimeSchedule[9] = {"08:30","09:20","10:10","11:00","11:30","12:20","13:10","14:00","14:45"};
}
int Cn=0;
for (std::list<TimeSchedule>::iterator it = data.begin(); it != data.end(); ++it){
std::cout << it->name;
int Cn=Cn+1;
if (CTime < it) {
char Finishes=it;
int Period=Cn;
break;
}
int FinH = Finishes.substr(0,2);
int FinM = Finishes.substr(3,2);
if(FinM.minutes > minutes.minutes) {
--hours.hours;
minutes.inutes += 60;
}
difference->minutes = minutes.minutes-FinM.minutes;
difference->hours = hours.hours-FinH.hours;
char TL = char(difference->hours)+":"+char(difference->minutes)
I always end up getting an error here for (std::list::iterator it = data.begin(); it != data.end(); ++it) specifically the ++ part and I can't figure out why. This part of the code is taken from a short tutorial on the internet but I don't fully yet grasp the idea of it that's where I'm struggling.
A:
Only focusing on the problem that you're asking about, a possible solution could be (assuming by reading the code snippet provided that we're inside another kind of loop):
std::vector<std::string> data = {};
if (getCurrentDOWAsString=="Saturday" || getCurrentDOWAsString=="Sunday")
break;
else if (getCurrentDOWAsString=="Saturday")
data = {"08:30","09:20","09:50","10:40","11:30"};
else
data = {"08:30","09:20","10:10","11:00","11:30","12:20","13:10","14:00","14:45"};
for (auto& hour : data)
cout << "Hour: " << hour << std::endl;
In summary:
Use std::vector when you require a container that doesn't have a fixed size
Use std::string almost always when you're working with strings. Try not to use C-style types in modern C++
By using those collections, you enable yourself to work with ranged for loops, and get rid of the explicit usage of iterators to iterate over your collection in the way shown in your snippet
|
Translating from Python to C++
|
I'm struggling to try and convert this from python into c++, please if anyone could help it would be greatly appreciated. Assume that the timern and Day variables are already given so no need for the time.ctime(thing)
if Day!=4:
timeschedule=["08:30","09:20","10:10","11:00","11:30","12:20","13:10","14:00","14:45"]
else:
timeschedule=["08:30","09:20","09:50","10:40","11:30"]
timern=time.ctime()[11:16]
for i in timeschedule:
if timern<i:
Finishes=i
Period=timeschedule.index(Finishes)+1
break
else:
continue
timern=timedelta(hours=int(timern[0:2]), minutes=int(timern[3:5]))
Finishes=timedelta(hours=int(Finishes[0:2]), minutes=int(Finishes[3:5]))
TimeLeft= Finishes-timern
seconds = TimeLeft.total_seconds()
hours = seconds // 3600
minutes = (seconds % 3600) // 60
seconds = seconds % 60
DayL=Days[Day]
Output=str(minutes)[0:2]+" minutes left till "+DayL[Period]
Here is my attempt at the conversion:
if (getCurrentDOWAsString=="Saturday" || getCurrentDOWAsString=="Sunday") {
break;
}
else if (getCurrentDOWAsString=="Friday") {
static const char *TimeSchedule[5] = {"08:30","09:20","09:50","10:40","11:30"};
}
else {
static const char *TimeSchedule[9] = {"08:30","09:20","10:10","11:00","11:30","12:20","13:10","14:00","14:45"};
}
int Cn=0;
for (std::list<TimeSchedule>::iterator it = data.begin(); it != data.end(); ++it){
std::cout << it->name;
int Cn=Cn+1;
if (CTime < it) {
char Finishes=it;
int Period=Cn;
break;
}
int FinH = Finishes.substr(0,2);
int FinM = Finishes.substr(3,2);
if(FinM.minutes > minutes.minutes) {
--hours.hours;
minutes.inutes += 60;
}
difference->minutes = minutes.minutes-FinM.minutes;
difference->hours = hours.hours-FinH.hours;
char TL = char(difference->hours)+":"+char(difference->minutes)
I always end up getting an error here for (std::list::iterator it = data.begin(); it != data.end(); ++it) specifically the ++ part and I can't figure out why. This part of the code is taken from a short tutorial on the internet but I don't fully yet grasp the idea of it that's where I'm struggling.
|
[
"Only focusing on the problem that you're asking about, a possible solution could be (assuming by reading the code snippet provided that we're inside another kind of loop):\nstd::vector<std::string> data = {};\n \nif (getCurrentDOWAsString==\"Saturday\" || getCurrentDOWAsString==\"Sunday\")\n break;\nelse if (getCurrentDOWAsString==\"Saturday\")\n data = {\"08:30\",\"09:20\",\"09:50\",\"10:40\",\"11:30\"};\nelse\n data = {\"08:30\",\"09:20\",\"10:10\",\"11:00\",\"11:30\",\"12:20\",\"13:10\",\"14:00\",\"14:45\"};\n \nfor (auto& hour : data)\n cout << \"Hour: \" << hour << std::endl;\n\nIn summary:\n\nUse std::vector when you require a container that doesn't have a fixed size\nUse std::string almost always when you're working with strings. Try to not use c-style types in modern C``\nBy using those collections, you enabled yourself to work with ranged for loops, and get rid out of the explicit usage of iterators to iterate over your collection in the way that you provide in your snippet\n\n"
] |
[
0
] |
[] |
[] |
[
"c++",
"python",
"python_3.x",
"time",
"translate"
] |
stackoverflow_0074555818_c++_python_python_3.x_time_translate.txt
|
Q:
"10.9.8.5", port 5433 failed: Connection timed out (0x0000274C/10060) Is the server running on that host and accepting TCP/IP connections?
I am trying to tunnel to my database using Python, but it crashes with a warning:
"10.9.8.5", port 5433 failed: Connection timed out (0x0000274C/10060)
Is the server running on that host and accepting TCP/IP connections?
My python settings:
with SSHTunnelForwarder(
('10.132.230.2', 22),
ssh_username="<usrnm>",
ssh_password="<psswrd>",
remote_bind_address=('10.9.8.5', 5433)) as server:
server.start()
print ("server connected")
params = {
'database': "<dbnm>",
'user': "postgres",
'password': "<dbpsswrd>",
'host': '10.9.8.5',
'port': 5433
}
conn = psycopg2.connect(**params)
curs = conn.cursor()
Everything below happened in 10.9.8.5
I tried to edit postgres configs:
I have changed postgresql.conf
I have changed pg_hba.conf (I added the last row)
I restarted postgres
but it didn't help
OK, after searching some more, I came across the fact that there may be a problem with the firewall
I allowed port 5433
then I restarted the server but I still get this message
A:
You're telling your database client to connect directly to 10.9.8.5:5433. That's not how tunneling works.
The SSHTunnelForwarder opens a port on your local machine, which it then forwards to the given remote_bind_address through the intermediate ssh server. It doesn't let you magically access the remote server under its original IP address. So you need to use localhost or 127.0.0.1 when connecting to the database server, and use the right port number.
By default, SSHTunnelForwarder chooses a random available port on the local host; the chosen port number is revealed afterwards through the local_bind_port property. However, by default it opens this port to other hosts as well (0.0.0.0). This is not needed, so it's better to be explicit and bind only to localhost a.k.a. 127.0.0.1; you can do this with the local_bind_address argument.
Putting all of that together:
with SSHTunnelForwarder(
('10.132.230.2', 22),
ssh_username="<usrnm>",
ssh_password="<psswrd>",
remote_bind_address=('10.9.8.5', 5433),
local_bind_address=('127.0.0.1',)) as server: # Open port on localhost
server.start()
print ("server connected")
params = {
'database': "<dbnm>",
'user': "postgres",
'password': "<dbpsswrd>",
'host': '127.0.0.1', # Connect to localhost...
'port': server.local_bind_port, # ... on the chosen port
}
conn = psycopg2.connect(**params)
curs = conn.cursor()
|
"10.9.8.5", port 5433 failed: Connection timed out (0x0000274C/10060) Is the server running on that host and accepting TCP/IP connections?
|
I am trying to tunnel to my database using Python, but it crashes with a warning:
"10.9.8.5", port 5433 failed: Connection timed out (0x0000274C/10060)
Is the server running on that host and accepting TCP/IP connections?
My python settings:
with SSHTunnelForwarder(
('10.132.230.2', 22),
ssh_username="<usrnm>",
ssh_password="<psswrd>",
remote_bind_address=('10.9.8.5', 5433)) as server:
server.start()
print ("server connected")
params = {
'database': "<dbnm>",
'user': "postgres",
'password': "<dbpsswrd>",
'host': '10.9.8.5',
'port': 5433
}
conn = psycopg2.connect(**params)
curs = conn.cursor()
Everything below happened in 10.9.8.5
I tried to edit postgres configs:
i have changed postgresql.conf
i have changed pg_hba.conf(i added the last row)
i restarted postgres
but it didn't help
ok, after searching for more, I came across the fact that there may be a problem in the firewall
i allowed port 5433
then i restarted server but i still get this message
|
[
"You're telling your database client to connect directly to 10.9.8.5:5433. That's not how tunneling works.\nThe SSHTunnelForwarder opens a port on your local machine, which it then forwards to the given remote_bind_address through the intermediate ssh server. It doesn't let you magically access the remote server under its original IP address. So you need to use localhost or 127.0.0.1 when connecting to the database server, and use the right port number.\nBy default, SSHTunnelForwarder chooses a random available port on the local host; the chosen port number is revealed afterwards through the local_bind_port property. However, by default it opens this port to other hosts as well (0.0.0.0). This is not needed, so it's better to be explicit and bind only to localhost a.k.a. 127.0.0.1; you can do this with the local_bind_address argument.\nPutting all of that together:\nwith SSHTunnelForwarder(\n ('10.132.230.2', 22),\n ssh_username=\"<usrnm>\",\n ssh_password=\"<psswrd>\", \n remote_bind_address=('10.9.8.5', 5433),\n local_bind_address=('127.0.0.1',)) as server: # Open port on localhost\n \n server.start()\n print (\"server connected\")\n params = {\n 'database': \"<dbnm>\",\n 'user': \"postgres\",\n 'password': \"<dbpsswrd>\",\n 'host': '127.0.0.1', # Connect to localhost...\n 'port': server.local_bind_port, # ... on the chosen port\n }\n conn = psycopg2.connect(**params)\n curs = conn.cursor()\n\n"
] |
[
0
] |
[] |
[] |
[
"postgresql",
"python",
"ubuntu"
] |
stackoverflow_0074557998_postgresql_python_ubuntu.txt
|
Q:
How to apply Video augmentation with keras preprocessing layers uniformly for all frames in the video?
I'm trying to apply data augmentation to a video dataset wherein each video receives its own augmentation, applied identically to all of its frames. For example, all frames in video 1 are flipped horizontally and rotated by 10°. All frames in video 2, on the other hand, are not flipped and are rotated by -5°. I passed a seed to the preprocessing layers; however, the frames of video 1 are each augmented differently. This is what my approach looks like:
def data_augment(frames,seed):
x = tf.keras.layers.CenterCrop(height=1000,width=1200) (frames)
x = Resizing(width=128,height=128) (x)
x = Rescaling(1./255) (x)
x = RandomContrast((0.2,0.2),seed=seed) (x)
x = RandomTranslation(height_factor=0.15,width_factor=0.2,fill_mode="constant",fill_value=0.0,seed=seed) (x)
x = RandomFlip("horizontal",seed=seed) (x)
x = RandomRotation(factor=0.01,fill_mode="constant",seed=seed) (x)
return x
A:
Video augmentation: for videos, fold the time axis into the channel axis and treat it as an image augmentation problem, then reshape the end result back; this way every frame of a video receives the same augmentation.
#input dimension:
BATCH, TIME, WIDTH, HEIGHT, _ = tf.shape(videos)
Step 1: change the input shape from (batch, time, width, height, 3) to (batch, width, height, time*3)
#move time to last
videos = tf.transpose(videos, [0, 2, 3, 4, 1])
#combine channels and time
out_shape = (BATCH, WIDTH, HEIGHT, TIME*3)
videos = tf.reshape(videos, out_shape)
Step 2: apply augmentation
augmented_data = data_augment(videos,...)
Step 3: reshape back to original
BATCH, WIDTH, HEIGHT, channels = tf.shape(augmented_data)
augmented_data = tf.reshape(augmented_data, (BATCH, WIDTH, HEIGHT, 3, channels // 3))
augmented_data = tf.transpose(augmented_data, [0, 4, 1, 2, 3])
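Putting the three steps together, a self-contained sketch (the dimension handling follows the assumptions above; training=True forces the random layer to apply outside of model.fit):
import tensorflow as tf

def augment_video_batch(videos, aug_layer):
    b, t, w, h, c = videos.shape
    x = tf.transpose(videos, [0, 2, 3, 4, 1])   # (b, w, h, c, t)
    x = tf.reshape(x, (b, w, h, c * t))         # fold time into channels
    x = aug_layer(x, training=True)             # one random draw shared by all frames
    x = tf.reshape(x, (b, w, h, c, t))
    return tf.transpose(x, [0, 4, 1, 2, 3])     # back to (b, t, w, h, c)

videos = tf.random.uniform((2, 8, 64, 64, 3))
flipped = augment_video_batch(videos, tf.keras.layers.RandomFlip("horizontal"))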
|
How to apply Video augmentation with keras preprocessing layers uniformly for all frames in the video?
|
I'm trying to apply data augmentation to a video dataset wherein each video receives its own augmentation, applied identically to all of its frames. For example, all frames in video 1 are flipped horizontally and rotated by 10°. All frames in video 2, on the other hand, are not flipped and are rotated by -5°. I passed a seed to the preprocessing layers; however, the frames of video 1 are each augmented differently. This is what my approach looks like:
def data_augment(frames,seed):
x = tf.keras.layers.CenterCrop(height=1000,width=1200) (frames)
x = Resizing(width=128,height=128) (x)
x = Rescaling(1./255) (x)
x = RandomContrast((0.2,0.2),seed=seed) (x)
x = RandomTranslation(height_factor=0.15,width_factor=0.2,fill_mode="constant",fill_value=0.0,seed=seed) (x)
x = RandomFlip("horizontal",seed=seed) (x)
x = RandomRotation(factor=0.01,fill_mode="constant",seed=seed) (x)
return x
|
[
"Video Augmentation:: For videos, combine time and channel axis and treat it as an image augmentation problem. And reshape the end result to get videos augmented same for all frames.\n#input dimension:\nBATCH, TIME,WIDTH, HEIGHT,_= tf.shape(videos)\n\nStep1: change input shape-(batch, time, width, height, 3) to (batch, width, height, time*3)\n#move time to last\nvideos = tf.transpose(videos, [0, 2, 3, 4, 1])\n\n#combine channels and time\nout_shape = (BATCH, WIDTH, HEIGHT, TIME*3)\n\nvideos = tf.reshape(videos, out_shape) \n\nStep2: apply augmentation\naugmented_data = data_augment(videos,...)\n\nStep 3: reshape back to original\nBATCH, WIDTH, HEIGHT,channels= tf.shape(augmented_data)\n\naugmented_data = tf.reshape(augmented_data, (BATCH, HEIGHT, WIDTH, 3, channels//3))\naugmented_data = tf.transpose(augmented_data, [0, 4, 1, 2, 3])\n\n"
] |
[
1
] |
[] |
[] |
[
"data_augmentation",
"keras",
"python",
"tensorflow"
] |
stackoverflow_0074508852_data_augmentation_keras_python_tensorflow.txt
|
Q:
How to make debug window expressions always stick as watches in PyCharm?
I recently updated PyCharm to 2022.2.4 (Professional Edition), and now the debug window looks like so
Before updating, The debug window had a "+" icon in which I could add a watch expression.
Now, I can type the expression in the top line, but a watch is not added and I have to go all the way to the right and click the + after I typed the expression.
A:
While writing the question, I found the answer. Thought I'd save someone the hassle by posting anyway:
In the expression line, notice the text
"Evaluate expression (Enter) or add a watch (Ctrl + Shift + Enter)
So just (Ctrl + Shift + Enter) would create a watch.
|
How to make debug window expressions always stick as watches in PyCharm?
|
I recently updated PyCharm to 2022.2.4 (Professional Edition), and now the debug window looks like so
Before updating, The debug window had a "+" icon in which I could add a watch expression.
Now, I can type the expression in the top line, but a watch is not added and I have to go all the way to the right and click the + after I typed the expression.
|
[
"While writing the question, I found the answer. Thought I'd save someone the hassle by posting anyway:\nIn the expression line, notice the text\n\n\"Evaluate expression (Enter) or add a watch (Ctrl + Shift + Enter)\n\nSo just (Ctrl + Shift + Enter) would create a watch.\n\n"
] |
[
0
] |
[] |
[] |
[
"debugging",
"pycahrm",
"python",
"watch"
] |
stackoverflow_0074558534_debugging_pycahrm_python_watch.txt
|
Q:
Optimizing python loop & function
I'm looking for some optimization on the below code.
I've measured the time of various parts of it, which prompted me to few optimization areas:
scores.append(...) seems to be taking a lot of time, compared to just running the function for the result. Any way to improve the efficiency of collecting the results? At the end I want to calculate the average for each run (MC). 184s/272s - 70% seems like a lot?
for i in range(0,100000): - since I'm basically running the same function x-times, maybe I can try to increase the efficiency there somehow? Also, maybe some parallelization is an option?
match_simulation function, I probably cannot vectorize/parallelize - because each loop is dependent on the results of previous ones. But I've found a couple of areas that improved the performance, e.g. by moving the stats calculation before the big loop (it's the same), which improved a lot. Anything else anyone sees there?
resolve_action function basically adds values to the score dictionary based on randoms and stats. It takes the current score as a parameter and utilizes a lot of score[xxx] += 1 - I'm wondering if that is optimal or could be improved?
If anyone sees anything else, I'd be glad for a suggestions! Basically I have the same "function", dummy-written in excel file, where I can refresh the randoms to get the results and it is faster, which means there must be a lot wrong with the python implementation atm ;)
A bit of stats on execution times. Using https://github.com/amerghaida/jackedCodeTimerPY as timer:
label min max mean total run count
----------------- ------ ------------ ------------- ---------- -----------
Total 272.84 272.84 272.84 272.84 1
Loop 0 0.25997 0.000271895 271.895 1000000
match_simulation 0 0.25797 8.817e-05 88.17 1000000
Action loop 0 0.25797 6.96475e-05 69.6475 1000000
resolveAction 0 0.0290003 2.77846e-06 41.6769 15000000
pullBack 0 0.00300026 6.83808e-07 10.2571 15000000
Shuffling 0 0.00199914 9.55838e-06 9.55838 1000000
Initial asignment 0 0.00299644 4.98002e-06 4.98002 1000000
scoreCalc 0 0.0149989 8.25524e-07 0.825524 1000000
Thanks!
Code:
def match_simulation(teamHStat, teamAStat, stats, JTimer):
JTimer.start('match_simulation')
JTimer.start('Initial asignment')
act_stats = stats
score = {"H_wins" : 0, "H_draws" : 0, "H_looses" : 0, 'H_goals' : 0, 'A_goals' : 0
, "H_rolled" : 0, "A_rolled" : 0, "H_not_rolled" : 0, "A_not_rolled" : 0
, "H_l" : 0, "H_r" : 0, "H_c" : 0, "H_sp" : 0
, "H_lg" : 0, "H_rg" : 0, "H_cg" : 0, "H_spg" : 0
, "H_ca_l" : 0, "H_ca_r" : 0, "H_ca_c": 0, "H_ca_sp": 0
, "H_ca_lg" : 0, "H_ca_rg" : 0, "H_ca_cg": 0, "H_ca_spg": 0
, "A_l" : 0, "A_r" : 0, "A_c" : 0, "A_sp" : 0
, "A_lg" : 0, "A_rg" : 0, "A_cg" : 0, "A_spg" : 0
, "A_ca_l" : 0, "A_ca_r" : 0, "A_ca_c": 0, "A_ca_sp": 0
, "A_ca_lg" : 0, "A_ca_rg" : 0, "A_ca_cg": 0, "A_ca_spg": 0
, "H_common" : 0, "A_common" : 0, "H_own" : 0, "A_own" : 0
, 'H_miss' : 0, "A_miss" : 0, 'H_ca_miss' : 0, "A_ca_miss" : 0
, "H_PDIM": 0, "A_PDIM" : 0, "H_PNF" : 0, "A_PNF" : 0, "H_PNF_miss" : 0, "A_PNF_miss" : 0
, "H_ca_not_rolled" : 0, "A_ca_not_rolled" : 0, "H_ca_rolled" : 0, "A_ca_rolled" : 0}
action_types = ['H', 'H', 'H', 'H', 'H', 'A', 'A', 'A', 'A', 'A', 'Common', 'Common', 'Common', 'Common', 'Common']
JTimer.stop('Initial asignment')
JTimer.start('Shuffling')
shuffle(action_types)
JTimer.stop('Shuffling')
JTimer.start('Action loop')
for action_type in action_types:
prev_score = score
JTimer.start('resolveAction')
score = resolveAction(act_stats, action_type, score)
JTimer.stop('resolveAction')
JTimer.start('pullBack')
if(score['H_goals'] > prev_score['H_goals'] or score['A_goals'] > prev_score['A_goals']):
act_stats = stats_calculation(
pullBack(score['H_goals'], score['A_goals'], teamHStat)
, pullBack(score['A_goals'], score['H_goals'], teamAStat))
JTimer.stop('pullBack')
JTimer.stop('Action loop')
JTimer.start('scoreCalc')
if(score['H_goals'] > score['A_goals']):
score['H_wins'] += 1
elif score['H_goals'] == score['A_goals']:
score['H_draws'] += 1
else:
score['H_looses'] += 1
JTimer.stop('scoreCalc')
JTimer.stop('match_simulation')
return score
JTimer = JackedTiming()
JTimer.start('Total')
scores = []
stats = stats_calculation(teamHstats.iloc[0], teamAstats.iloc[0])
for i in range(0,100000):
JTimer.start('Loop')
scores.append(match_simulation(teamHstats.iloc[0], teamAstats.iloc[0], stats, JTimer))
JTimer.stop('Loop')
JTimer.stop('Total')
print(JTimer.report())
A:
While testing, I've noticed that
scores.append(match_simulation(teamHstats.iloc[0], teamAstats.iloc[0], stats, JTimer))
is very inefficient due to teamHstats.iloc[0]. So instead of doing that, I've changed to teamHstat = teamHstats.iloc[0] and match_simulation(teamHstat, ...) which improved speed significantly.
Interesting to see.
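In other words, the fix is to hoist the .iloc[0] lookups out of the hot loop so the pandas row objects are built once rather than 100,000 times — roughly:
teamHstat = teamHstats.iloc[0]   # built once, outside the loop
teamAstat = teamAstats.iloc[0]
stats = stats_calculation(teamHstat, teamAstat)

scores = []
for _ in range(100_000):
    scores.append(match_simulation(teamHstat, teamAstat, stats, JTimer))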
|
Optimizing python loop & function
|
I'm looking for some optimization on the below code.
I've measured the time of various parts of it, which prompted me to few optimization areas:
scores.append(...) seems to be taking a lot of time, compared to just running the function for the result. Any way to improve the efficiency of collecting the results? At the end I want to calculate the average for each run (MC). 184s/272s - 70% seems like a lot?
for i in range(0,100000): - since I'm basically running the same function x-times, maybe I can try to increase the efficiency there somehow? Also, maybe some parallelization is an option?
match_simulation function, I probably cannot vectorize/parallelize - because each loop is dependent on the results of previous ones. But I've found a couple of areas that improved the performance, e.g. by moving the stats calculation before the big loop (it's the same), which improved a lot. Anything else anyone sees there?
resolve_action function basically adds values to the score dictionary based on randoms and stats. It takes the current score as a parameter and utilizes a lot of score[xxx] += 1 - I'm wondering if that is optimal or could be improved?
If anyone sees anything else, I'd be glad for a suggestions! Basically I have the same "function", dummy-written in excel file, where I can refresh the randoms to get the results and it is faster, which means there must be a lot wrong with the python implementation atm ;)
A bit of stats on execution times. Using https://github.com/amerghaida/jackedCodeTimerPY as timer:
label min max mean total run count
----------------- ------ ------------ ------------- ---------- -----------
Total 272.84 272.84 272.84 272.84 1
Loop 0 0.25997 0.000271895 271.895 1000000
match_simulation 0 0.25797 8.817e-05 88.17 1000000
Action loop 0 0.25797 6.96475e-05 69.6475 1000000
resolveAction 0 0.0290003 2.77846e-06 41.6769 15000000
pullBack 0 0.00300026 6.83808e-07 10.2571 15000000
Shuffling 0 0.00199914 9.55838e-06 9.55838 1000000
Initial asignment 0 0.00299644 4.98002e-06 4.98002 1000000
scoreCalc 0 0.0149989 8.25524e-07 0.825524 1000000
Thanks!
Code:
def match_simulation(teamHStat, teamAStat, stats, JTimer):
JTimer.start('match_simulation')
JTimer.start('Initial asignment')
act_stats = stats
score = {"H_wins" : 0, "H_draws" : 0, "H_looses" : 0, 'H_goals' : 0, 'A_goals' : 0
, "H_rolled" : 0, "A_rolled" : 0, "H_not_rolled" : 0, "A_not_rolled" : 0
, "H_l" : 0, "H_r" : 0, "H_c" : 0, "H_sp" : 0
, "H_lg" : 0, "H_rg" : 0, "H_cg" : 0, "H_spg" : 0
, "H_ca_l" : 0, "H_ca_r" : 0, "H_ca_c": 0, "H_ca_sp": 0
, "H_ca_lg" : 0, "H_ca_rg" : 0, "H_ca_cg": 0, "H_ca_spg": 0
, "A_l" : 0, "A_r" : 0, "A_c" : 0, "A_sp" : 0
, "A_lg" : 0, "A_rg" : 0, "A_cg" : 0, "A_spg" : 0
, "A_ca_l" : 0, "A_ca_r" : 0, "A_ca_c": 0, "A_ca_sp": 0
, "A_ca_lg" : 0, "A_ca_rg" : 0, "A_ca_cg": 0, "A_ca_spg": 0
, "H_common" : 0, "A_common" : 0, "H_own" : 0, "A_own" : 0
, 'H_miss' : 0, "A_miss" : 0, 'H_ca_miss' : 0, "A_ca_miss" : 0
, "H_PDIM": 0, "A_PDIM" : 0, "H_PNF" : 0, "A_PNF" : 0, "H_PNF_miss" : 0, "A_PNF_miss" : 0
, "H_ca_not_rolled" : 0, "A_ca_not_rolled" : 0, "H_ca_rolled" : 0, "A_ca_rolled" : 0}
action_types = ['H', 'H', 'H', 'H', 'H', 'A', 'A', 'A', 'A', 'A', 'Common', 'Common', 'Common', 'Common', 'Common']
JTimer.stop('Initial asignment')
JTimer.start('Shuffling')
shuffle(action_types)
JTimer.stop('Shuffling')
JTimer.start('Action loop')
for action_type in action_types:
prev_score = score
JTimer.start('resolveAction')
score = resolveAction(act_stats, action_type, score)
JTimer.stop('resolveAction')
JTimer.start('pullBack')
if(score['H_goals'] > prev_score['H_goals'] or score['A_goals'] > prev_score['A_goals']):
act_stats = stats_calculation(
pullBack(score['H_goals'], score['A_goals'], teamHStat)
, pullBack(score['A_goals'], score['H_goals'], teamAStat))
JTimer.stop('pullBack')
JTimer.stop('Action loop')
JTimer.start('scoreCalc')
if(score['H_goals'] > score['A_goals']):
score['H_wins'] += 1
elif score['H_goals'] == score['A_goals']:
score['H_draws'] += 1
else:
score['H_looses'] += 1
JTimer.stop('scoreCalc')
JTimer.stop('match_simulation')
return score
JTimer = JackedTiming()
JTimer.start('Total')
scores = []
stats = stats_calculation(teamHstats.iloc[0], teamAstats.iloc[0])
for i in range(0,100000):
JTimer.start('Loop')
scores.append(match_simulation(teamHstats.iloc[0], teamAstats.iloc[0], stats, JTimer))
JTimer.stop('Loop')
JTimer.stop('Total')
print(JTimer.report())
|
[
"While testing, I've noticed that\nscores.append(match_simulation(teamHstats.iloc[0], teamAstats.iloc[0], stats, JTimer))\n\nis very inefficient due to teamHstats.iloc[0]. So instead of doing that, I've changed to teamHstat = teamHstats.iloc[0] and match_simulation(teamHstat, ...) which improved speed significantly.\nInteresting to see.\n"
] |
[
0
] |
[] |
[] |
[
"dictionary",
"list",
"loops",
"optimization",
"python"
] |
stackoverflow_0074552971_dictionary_list_loops_optimization_python.txt
|
Q:
NotImplementedError: 'split_respect_sentence_boundary=True' is only compatible with split_by='word'
I have the following lines of code
from haystack.document_stores import InMemoryDocumentStore, SQLDocumentStore
from haystack.nodes import TextConverter, PDFToTextConverter,PreProcessor
from haystack.utils import clean_wiki_text, convert_files_to_docs, fetch_archive_from_http, print_answers
doc_dir = "C:\\Users\\abcd\\Downloads\\PDF Files\\"
docs = convert_files_to_docs(dir_path=doc_dir, clean_func=None, split_paragraphs=True)
preprocessor = PreProcessor(
clean_empty_lines=True,
clean_whitespace=True,
clean_header_footer=True,
split_by="passage",
split_length=2)
doc = preprocessor.process(docs)
When I try to run it, I get the following error message
NotImplementedError Traceback (most recent call last)
c:\Users\abcd\Downloads\solr9.ipynb Cell 27 in <cell line: 23>()
16 print(type(docs))
17 preprocessor = PreProcessor(
18 clean_empty_lines=True,
19 clean_whitespace=True,
20 clean_header_footer=True,
21 split_by="passage",
22 split_length=2)
---> 23 doc = preprocessor.process(docs)
File ~\AppData\Roaming\Python\Python39\site-packages\haystack\nodes\preprocessor\preprocessor.py:167, in PreProcessor.process(self, documents, clean_whitespace, clean_header_footer, clean_empty_lines, remove_substrings, split_by, split_length, split_overlap, split_respect_sentence_boundary, id_hash_keys)
165 ret = self._process_single(document=documents, id_hash_keys=id_hash_keys, **kwargs) # type: ignore
166 elif isinstance(documents, list):
--> 167 ret = self._process_batch(documents=list(documents), id_hash_keys=id_hash_keys, **kwargs)
168 else:
169 raise Exception("documents provided to PreProcessor.prepreprocess() is not of type list nor Document")
File ~\AppData\Roaming\Python\Python39\site-packages\haystack\nodes\preprocessor\preprocessor.py:225, in PreProcessor._process_batch(self, documents, id_hash_keys, **kwargs)
222 def _process_batch(
223 self, documents: List[Union[dict, Document]], id_hash_keys: Optional[List[str]] = None, **kwargs
224 ) -> List[Document]:
--> 225 nested_docs = [
226 self._process_single(d, id_hash_keys=id_hash_keys, **kwargs)
...
--> 324 raise NotImplementedError("'split_respect_sentence_boundary=True' is only compatible with split_by='word'.")
326 if type(document.content) is not str:
327 logger.error("Document content is not of type str. Nothing to split.")
NotImplementedError: 'split_respect_sentence_boundary=True' is only compatible with split_by='word'.
I don't even have split_respect_sentence_boundary=True as an argument, and I don't have split_by='word' either; rather, I have it set as split_by="passage".
I get the same error if I try changing it to split_by="sentence".
Do let me know if i am missing out anything here.
Tried using split_by="sentence" but getting same error.
A:
As you can see in the PreProcessor API docs, the default value for split_respect_sentence_boundary is True.
In order to make your code work, you should specify split_respect_sentence_boundary=False:
preprocessor = PreProcessor(
clean_empty_lines=True,
clean_whitespace=True,
clean_header_footer=True,
split_by="passage",
split_length=2,
split_respect_sentence_boundary=False)
I agree that this behavior is not intuitive.
Currently, this node is undergoing a major refactoring.
|
NotImplementedError: 'split_respect_sentence_boundary=True' is only compatible with split_by='word'
|
I have the following lines of code
from haystack.document_stores import InMemoryDocumentStore, SQLDocumentStore
from haystack.nodes import TextConverter, PDFToTextConverter,PreProcessor
from haystack.utils import clean_wiki_text, convert_files_to_docs, fetch_archive_from_http, print_answers
doc_dir = "C:\\Users\\abcd\\Downloads\\PDF Files\\"
docs = convert_files_to_docs(dir_path=doc_dir, clean_func=None, split_paragraphs=True)
preprocessor = PreProcessor(
clean_empty_lines=True,
clean_whitespace=True,
clean_header_footer=True,
split_by="passage",
split_length=2)
doc = preprocessor.process(docs)
When I try to run it, I get the following error message
NotImplementedError Traceback (most recent call last)
c:\Users\abcd\Downloads\solr9.ipynb Cell 27 in <cell line: 23>()
16 print(type(docs))
17 preprocessor = PreProcessor(
18 clean_empty_lines=True,
19 clean_whitespace=True,
20 clean_header_footer=True,
21 split_by="passage",
22 split_length=2)
---> 23 doc = preprocessor.process(docs)
File ~\AppData\Roaming\Python\Python39\site-packages\haystack\nodes\preprocessor\preprocessor.py:167, in PreProcessor.process(self, documents, clean_whitespace, clean_header_footer, clean_empty_lines, remove_substrings, split_by, split_length, split_overlap, split_respect_sentence_boundary, id_hash_keys)
165 ret = self._process_single(document=documents, id_hash_keys=id_hash_keys, **kwargs) # type: ignore
166 elif isinstance(documents, list):
--> 167 ret = self._process_batch(documents=list(documents), id_hash_keys=id_hash_keys, **kwargs)
168 else:
169 raise Exception("documents provided to PreProcessor.prepreprocess() is not of type list nor Document")
File ~\AppData\Roaming\Python\Python39\site-packages\haystack\nodes\preprocessor\preprocessor.py:225, in PreProcessor._process_batch(self, documents, id_hash_keys, **kwargs)
222 def _process_batch(
223 self, documents: List[Union[dict, Document]], id_hash_keys: Optional[List[str]] = None, **kwargs
224 ) -> List[Document]:
--> 225 nested_docs = [
226 self._process_single(d, id_hash_keys=id_hash_keys, **kwargs)
...
--> 324 raise NotImplementedError("'split_respect_sentence_boundary=True' is only compatible with split_by='word'.")
326 if type(document.content) is not str:
327 logger.error("Document content is not of type str. Nothing to split.")
NotImplementedError: 'split_respect_sentence_boundary=True' is only compatible with split_by='word'.
I don't even have split_respect_sentence_boundary=True among my arguments, and I don't have split_by='word' either; rather, I have it set as split_by="passage".
I get the same error if I change it to split_by="sentence".
Do let me know if I am missing anything here.
I tried split_by="sentence" but got the same error.
|
[
"As you can see in the PreProcessor API docs, the default value for split_respect_sentence_boundary is True.\nIn order to make your code work, you should specify split_respect_sentence_boundary=False:\npreprocessor = PreProcessor(\n clean_empty_lines=True,\n clean_whitespace=True,\n clean_header_footer=True,\n split_by=\"passage\",\n split_length=2,\n split_respect_sentence_boundary=False)\n\nI agree that this behavior is not intuitive.\nCurrently, this node is undergoing a major refactoring.\n"
] |
[
0
] |
[] |
[] |
[
"haystack",
"preprocessor",
"python"
] |
stackoverflow_0074557335_haystack_preprocessor_python.txt
|
Q:
How to use map function to save a list of dataframes to the desired path using python
I have code written using a for loop to save the dataframes present in a list (Date is the name of the list) to the specified path:
for Dates in Date:
if Dates.empty:
pass
else:
PATH = f'C:/Users/Desktop/' + Dates.iloc[0]['Col1'] + '/' + Dates.iloc[0]['Col2'] + '/'
if not os.path.exists(PATH):
os.makedirs(PATH)
Day = Dates.iloc[0]["DAY"]
Dates = Dates.drop(['Col1', 'Col2'], axis=1)
Dates.to_csv(os.path.join(PATH,f'{Day}.csv'),index=False)
However, this code takes a long time to execute. Can anyone please help me modify the above code using the map function to reduce the runtime?
A:
Under the assumption that your for loop works correctly, you could use a map function as follows:
import pandas as pd
def save_dataframe(Dates: pd.DataFrame):
if not Dates.empty:
PATH = f'C:/Users/Desktop/' + Dates.iloc[0]['Col1'] + '/' + Dates.iloc[0]['Col2'] + '/'
if not os.path.exists(PATH):
os.makedirs(PATH)
Day = Dates.iloc[0]["DAY"]
Dates = Dates.drop(['Col1', 'Col2'], axis=1)
Dates.to_csv(os.path.join(PATH,f'{Day}.csv'),index=False)
my_map = map(save_dataframe, Date) # generation of your map (lazy; nothing runs yet)
list(my_map) # execution of the map
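Note that map by itself is unlikely to be faster than the original for loop: the time is spent on disk I/O, not loop overhead. If the dataframes are independent, a thread pool can overlap the writes. A minimal sketch reusing save_dataframe from above (max_workers=4 is an illustrative assumption):
from concurrent.futures import ThreadPoolExecutor

# Each worker writes one dataframe; list() drains the iterator so all writes finish.
# Note: concurrent os.makedirs calls can race; using os.makedirs(PATH, exist_ok=True)
# inside save_dataframe avoids that.
with ThreadPoolExecutor(max_workers=4) as executor:
    list(executor.map(save_dataframe, Date))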
|
How to use map function to save a list of dataframes to the desired path using python
|
I have code written using a for loop to save the dataframes present in a list (Date is the name of the list) to the specified path:
for Dates in Date:
if Dates.empty:
pass
else:
PATH = f'C:/Users/Desktop/' + Dates.iloc[0]['Col1'] + '/' + Dates.iloc[0]['Col2'] + '/'
if not os.path.exists(PATH):
os.makedirs(PATH)
Day = Dates.iloc[0]["DAY"]
Dates = Dates.drop(['Col1', 'Col2'], axis=1)
Dates.to_csv(os.path.join(PATH,f'{Day}.csv'),index=False)
However, this code takes a long time to execute. Can anyone please help me modify the above code using the map function to reduce the runtime?
|
[
"Under the assumption that your for loop works correctly you could use a map function as follow:\nimport pandas as pd\n\ndef save_dataframe(Dates: pd.DataFrame):\n if not Dates.empty:\n PATH = f'C:/Users/Desktop/' + Dates.iloc[0]['Col1'] + '/' + Dates.iloc[0]['Col2'] + '/'\n if not os.path.exists(PATH):\n os.makedirs(PATH)\n Day = Dates.iloc[0][\"DAY\"]\n Dates = Dates.drop(['Col1', 'Col2'], axis=1)\n Dates.to_csv(os.path.join(PATH,f'{Day}.csv'),index=False)\n\nmy_map = map(save_dataframe, Date) # generation of your map\nlist(my_map) # excutions of the map\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"for_loop",
"pandas",
"python"
] |
stackoverflow_0074557880_dataframe_for_loop_pandas_python.txt
|